What are “Foundation Models” and why are they derailing the EU’s negotiations on the AI Act?
Introduction
The ongoing negotiations over the EU’s AI Act appear to have reached a stalemate. According to media coverage of the confidential discussions, the member states cannot agree on whether the Act should cover “foundation models” at all.
The term “foundation model” was coined by a large research group at Stanford University in a 2021 paper, where it was defined as “any model that is trained on broad data and can be adapted to a wide range of downstream tasks”. Most of the highly impactful AI models we use and read about today fit this definition, for example OpenAI’s GPT-4, Spotify’s AI DJ, Stability AI’s Stable Diffusion, and Meta’s Llama 2.
Given that foundation models are designed for generality, have many downstream applications (including some they were not specifically built for), and have a significant impact on society overall, one would think that regulating them would be a top priority for EU lawmakers. However, the EU’s three biggest economies, Germany, France, and Italy, opposed any regulation whatsoever of foundation models in the Act during a meeting on November 10.
The final scheduled round of talks will take place tomorrow. If the member states fail to reach a compromise on the matter, the whole AI Act could be in jeopardy. Let’s take a brief look at the main arguments from each side of the negotiating table.
Regulating Big Tech
The constantly evolving state of AI makes it difficult to regulate. To do so effectively, the EU has to be ahead of its time. Like Shakespeare.
The problem is that only very few people deeply understand both a) how AI technology works and b) its potential impacts on society. I believe this technical and social complexity has enabled the successful lobbying efforts of big AI companies. Right now, it looks like even the EU’s well-oiled legislative machinery is failing to measure up to the challenge of regulating Big Tech’s core business.
Two of Europe’s largest AI companies, the German Aleph Alpha and the French Mistral AI, have allegedly managed to change their respective governments' stances on foundation models. Both companies are known to have close government ties, and Germany and France invest heavily in national AI research (see the Time article).
Mistral AI is the French counterpart to OpenAI, or perhaps more fittingly to Meta AI, since Mistral, like Meta AI, released its latest model as open source. Mistral’s CEO, Arthur Mensch, told Time:
“We have publicly said that regulating foundational models did not make sense and that any regulation should target applications, not infrastructure. This would be the only enforceable regulation and, in Europe, the only way to prevent US regulatory capture. We are happy to see that the regulators are now realising it.”
To understand the big AI companies’ position on foundation models even better, here is a quote from an open letter to the EU from June, signed by European tech companies, VCs, and industry giants, including Mistral AI:
“Under the version recently adopted by the European Parliament, foundation models, regardless of their use cases, would be heavily regulated, and companies developing and implementing such systems would face disproportionate compliance costs and disproportionate liability risks. Such regulation could lead to highly innovative companies moving their activities abroad, investors withdrawing their capital from the development of European Foundation Models and European AI in general. The result would be a critical productivity gap between the two sides of the Atlantic (..)
In a context where we know very little about the real risks, the business model, or the applications of generative AI, European law should confine itself to stating broad principles in a risk-based approach. The implementation of these principles should be entrusted to a dedicated regulatory body composed of experts at EU level and should be carried out in an agile process capable of continuously adapting them to the rapid pace of technological development and the unfolding concrete risks emerging. Such a procedure should be developed in dialogue with the economy.”
Regulating Foundation Models
In spite of the coalition between Germany, France, and Italy, and some protests from the European tech community, not regulating foundation models in the AI Act is not an option. Staff from the European Parliament demonstratively walked out of a meeting with government representatives of the EU Council last month after hearing the three countries' resistance to regulating foundation models (Politico).
As Billy Perrigo from Time puts it, exempting foundation models from regulation would be akin to passing social media regulation that doesn’t apply to Facebook or TikTok.
Max Tegmark, President of the Future of Life Institute, is even more blunt in his comment to TechCrunch:
“This last-second attempt by Big Tech to exempt the future of AI would make the EU AI Act the laughing-stock of the world, not worth the paper it’s printed on. After years of hard work, the EU has the opportunity to lead a world waking up to the need to regulate these increasingly powerful and dangerous systems. Lawmakers must stand firm and protect thousands of European companies from the lobbying attempts of Mistral and US tech giants.”
A long list of international AI leaders and experts have signed a Letter of Concern to the German government on the need for foundation model regulation in the AI Act. In another open letter, addressed to the European legislators, signatories spanning researchers, SMEs, consumers, and think tanks call for a "tiered approach" to regulating foundation models, meaning that different requirements are imposed depending on a model’s capacity (e.g., size, impact, number of applications or users).
To frame it in a very condensed form: Big AI companies (and Germany, France, and Italy) are calling for self-regulation, whereas other stakeholders are calling for a two-tiered approach. Under the two-tiered approach, the companies behind the largest or most impactful foundation models have to meet stricter requirements than smaller foundation model developers. The logic is that big AI companies bear an extended responsibility for their larger models and, unlike smaller AI companies, can carry the financial burden of additional compliance work. Big Tech companies, on the other hand, would like to “kick the can down the road” so that companies using their foundation models for specific, downstream applications have to carry the regulatory burden.
To be continued…
Reads of the Week
Elon Musk's Dealbook Meltdown - Dave Karpf, 1 December, 2023 (The Future, Now, and Then)
Unauthorized “David Attenborough” AI clone narrates developer’s life, goes viral - Benj Edwards, 16 November, 2023 (Ars Technica).