(Bloomberg) -- The European Union could clinch a tentative agreement on some of the world’s first regulations on artificial intelligence this week — unless debates over how to regulate advanced AI models drive it off the rails.
Representatives from the EU’s three institutions — the European Commission, the European Parliament and the Council, which represents the 27 member countries — will meet on Wednesday to hammer out the EU’s AI Act, first proposed in 2021. Politicians want a final agreement on the legislation by the end of the year and have raced to reach a compromise in time for the final meeting on Dec. 6.
While officials are mostly optimistic that there could be a deal, there are still key outstanding debates over how much to regulate “general purpose” AI models — like OpenAI Inc.’s GPT-4, which have a wide range of possible uses — and how far to restrict government use of live facial scanning technology.
With European parliamentary elections coming up next year, time is running out for the act to become law unless negotiators strike a deal soon.
The group may come to an early agreement to go ahead with the legislation in this meeting, leaving additional technical details to be hashed out in the coming months.
If officials don’t reach a deal, more meetings may be scheduled this month or in the new year. But the group has limited time: once the elections intervene, talks on the act would have to restart under a new parliament and commission.
Some countries, including France and Germany, are concerned that overregulation will impede innovation and hurt European startups such as Mistral AI and Aleph Alpha. Others fear the systemic risks of leaving the technology uncontrolled.
The latest compromise proposals are a far cry from the ideas lawmakers discussed earlier in the year. The commission’s initial version focused on regulating AI’s uses rather than specific technologies.
The EU’s current plan would require all general purpose AI developers to keep information on how models are trained, summarize the copyrighted material used and label AI-generated content. Systems that pose “systemic risks” would have to work with the commission through an industry code of conduct. They would also have to monitor and report any incidents from the models.
After the sudden popularity of ChatGPT, negotiators have been racing to come up with rules that both encourage the use of the powerful technology and mitigate potential risks. Still, the technology is moving at an unprecedented pace and some policymakers have acknowledged they aren’t sure where to draw the lines.
©2023 Bloomberg L.P.