EU AI Act: final release, response, and outlook

Oscar Kavanagh
3 min read · Aug 12, 2024


Image: Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Hidden Labour of Internet Browsing / CC-BY 4.0

The highly anticipated EU AI Act reached its final draft this January, when the text was leaked to the public, revealing definitions and requirements that differ substantially from the original proposal of April 2021, and it has already drawn a series of concerns. The central tenet of the AI Act is matching regulatory measures to the proportionate risk of a given AI system. The European Commission's "AI Office," tasked with enforcing and supervising the policies, was officially launched on February 21st with a mandate to engage as closely as possible with the scientific community. However, worrying gaps remain where researchers need to take action:

(1) Self-policing risk assessment by AI developers. The deployment of under-tested AI systems is often neglected by the community until a defect becomes observable at a much larger adoption scale. Here the AI Office may fail to identify unacceptable risk in AI systems because developers do not disclose internal design or information. According to the Center for Security and Emerging Technology, unacceptable-risk systems are characterized as "those that have a significant potential for manipulation either through subconscious messaging and stimuli, or by exploiting vulnerabilities like socioeconomic status, disability, or age. AI systems for social scoring, a term that describes the evaluation and treatment of people based on their social behavior, are also banned."

(2) Classification of risk by the AI Office. The four risk levels defined in the Act are minimal risk, high risk, unacceptable risk, and specific transparency risk. Better enforcement will require definitions for specific generative models and machine-learning use cases, and many researchers are concerned that the current definitions are too vague even as a starting point. Stuart Russell commented on the draft: "They sent me a draft, and I sent them back 20 pages of comments. Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems."

(3) Enforcement within EU geography. A severe limitation on enforcement is unrestricted development outside the European Union. The transfer of models is an irreversible trickle, as seen in the host of foundation models from the United States that already serve as the basis for development abroad. While open source has been a key cultural feature of the AI community, the exclusion of open-source models from regulation (notably the secrecy of the underlying software) leaves vulnerabilities that must be policed retroactively.

(4) Pace. A lack of rules has left a vacuum for high-tech companies to expand AI development for existing products and to invest in new programs, expansion that US big tech may continue at a similar scale in Europe despite the AI Act. Whether or not the Act's design and deployment regulations can match AI's speed of improvement (often said to have outpaced Moore's law), its provisions remain the most comprehensive legislation in the world for general-purpose models. Its impact remains to be seen, but it will likely serve as the best available answer to an exploding experiment.

(5) Competition. The AI Act targets providers and deployers of AI systems, with providers held to the strictest requirements. Substantial financial penalties and limitations have some policymakers concerned about stifling innovation. While the law exempts models used purely for research, small AI companies translating that research into products in Europe face difficulty growing under these laws. "To adapt as a small company is really hard," says Robert Kaczmarczyk, a physician at the Technical University of Munich in Germany and co-founder of LAION (Large-scale Artificial Intelligence Open Network).

At the end of February, Mistral AI drew deep scrutiny from members of the European Parliament after unveiling its newest LLM, Mistral Large, alongside an investment partnership with Microsoft Azure. Mistral had lobbied for looser AI regulation in the run-up to this partnership, and as Europe's foremost LLM software company, its pivot signals a rejection of comprehensive oversight in favor of unrestrained growth objectives.

Further reading

Originally published in February 2024 in the University of Cambridge newsletter "The Good Robot Podcast."


Written by Oscar Kavanagh

Hello, I'm Oscar. I cover topics central to AI alignment with human values. I also conduct AI/ML ethics research at Carnegie Mellon and Cambridge.
