For those of you who are all “just the facts”
The EU AI Act is still being negotiated and, while most people don’t expect major changes at this point, any of this could change.
Also, IANAL. I don’t even play one on TV.
- Safe-harbour provisions for service provider liability, including the rules on hosting, are unaffected by the EU’s AI Act.
- Developers (not deployers) of foundation models need to register their models, with documentation, prior to making them available on the market or as a service.
- Foundation models need to come with documentation about their training data set and pass a number of to-be-implemented standardised benchmarks that examine the suitability of the data they use in terms of biases and other factors.
- The developers of a foundation model are responsible for compliance, not the deployers.
- Providers of Generative AI systems are required to document and publish detailed summaries of the copyright-protected training data they used, as a part of the registration process.
- The Act is clearly designed to benefit AI research through increased transparency and documentation.
- It bans a bunch of things that shouldn’t have been allowed in the first place.
- If you take a foundation model, fine-tune it for a specialised purpose, and deploy it as a part of your software, it won’t count as a foundation model, and you’ll probably be fine, as long as the original provider of the foundation model was compliant.
- If you’re using a foundation model over an API to add a specialised feature to your software, then you’ll probably be fine, as long as the original developer was compliant.
The AI Act covers a lot. It covers the use of AI for biometric identification, high-risk systems whose intended purpose involves people’s health and safety (or life and liberty), foundation models, generative AI, and your run-of-the-mill AI/ML software. It’s also painfully aware that these are early days and that regulators need to be flexible.
The focus of this essay is just foundation models and generative AI, and even with that narrow focus it’s already much too long.
The AI industry is having a temper tantrum
If you’ve been paying attention to tech social media over the past few days, you’ll have seen the outcry about the EU’s proposed AI Act.
The act isn’t final. It’s still subject to negotiation between various parts of the EU infrastructure, and how it gets implemented can also change its effect in substantial ways.
That isn’t preventing the US tech industry from panicking. In a blog post that was later popularised by a noted tech commentator, AI enthusiasts claimed that the EU is doing several very bad, double-plus-ungood things and that, with them, we Europeans are dooming ourselves to something or other:
- ~~They’re banning open source AI models!~~
- ~~It’ll be illegal to host AI models or code!~~
- ~~They’re banning AI models accessed via an API!~~
- ~~They’re banning fine-tuning of foundation models!~~
I’ve struck out the statements in the list above because, unfortunately for those who like a good panic, none of them seem to be true. With the act and the recent actions by GDPR regulators, the EU has joined AI ethicists such as Emily M. Bender, Timnit Gebru, and others on the tech industry’s Enemies of AI list.
The crimes of the ethicists, according to tech:
- A refusal to believe in an unfounded expectation of endless exponential growth.
- An insistence that models be evaluated based on genuine, not imagined, functionality.
- The clearly irrational belief that AI development should be transparent, sustainable, and avoid harming the societies we live in.
The EU’s crimes:
- A hatred of innovation and the future.
- An insistence on legislating themselves into the stone age.
- A completely irrational disbelief in the wonders provided so generously by the glorious, kind, and all-around awesome people in the tech industry.
Or, something.
It’s hard to keep track of industry and investor consensus now that bubble mania has set in, especially since quite a few of them are so helpfully using ChatGPT to generate fact-free incoherence for them.
(Imagine a meme of a greying scruffy dog turning its head to one side and going “roo?”. That’s me trying to parse some of the social media posts coming from AI fans. Most of it’s just “what?”)
I’m going to ignore the tantrums and instead have a look, for myself, at what the current proposal for the Act says. For this I’m using the consolidated PDF document of the amended act as published by the European Parliament as a reference.
Scope and service provider liability
Right at the outset, in Article 2: Scope, the act makes it clear that it doesn’t intend to override existing safe-harbour laws for service providers:
5. This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section IV of Directive 2000/31/EC of the European Parliament and of the Council [as to be replaced by the corresponding provisions of the Digital Services Act].
5b. This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.
5c. This Regulation shall not preclude Member States or the Union from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or to encourage or allow the application of collective agreements which are more favourable to workers.
5d. This Regulation shall not apply to research, testing and development activities regarding an AI system prior to this system being placed on the market or put into service, provided that these activities are conducted respecting fundamental rights and the applicable Union law. The testing in real world conditions shall not be covered by this exemption. The Commission may adopt delegated acts in accordance with Article 73 to specify this exemption to prevent its existing and potential abuse. The AI Office shall provide guidance on the governance of research and development pursuant to Article 56, also aiming at coordinating its application by the national supervisory authorities.
5d. This Regulation shall not apply to AI components provided under free and open-source licences except to the extent they are placed on the market or put into service by a provider as part of a high-risk AI system or of an AI system that falls under Title II or IV. This exemption shall not apply to foundation models as defined in Art 3.
The first and most important part here is clause 5.
“Chapter II, Section IV of Directive 2000/31/EC” is the EU’s version of Section 230 that governs the “liability of intermediary service providers”. It covers hosting, “mere conduit” providers, and caching, and it forbids member states from imposing a general monitoring obligation on service providers. The AI Act specifically says that it does not affect the liability of intermediary service providers.
This means that, yes, GitHub and other code repositories are still allowed to host AI model code. Hosting providers don’t have any additional liability under the AI Act, only the providers of the models themselves and those who deploy them.
Existing rules about hosting still apply. Same as it’s been for the past twenty-three years.
The two clauses numbered 5d (yes, the amended text has two of them) are probably the source of some of the tech industry’s confusion and anger. I’m guessing they interpret (or ChatGPT interpreted for them) the “this exemption shall not apply to foundation models” as applying to all the clauses from 5 to 5d, so they assume that none of those exceptions apply to foundation models, which would mean that the safe-harbour provision is indeed overridden.
That interpretation makes no sense, because it would mean that clauses 5b and 5c get dropped as well.
5c in particular preserves member states’ right to introduce further laws to protect workers from employers abusing AI software.
I can guarantee you that the Act isn’t intended to prevent the EU from making further legislation on foundation models.
The EU is also quite fond of its consumer protection laws and wouldn’t give foundation models a pass on those.
This means that interpreting “shall not apply to foundation models” as applying to all the exceptions is almost certainly nonsense.
There’s