The European Commission has defined what will count as a general-purpose AI model under the AI Act, and confirmed that providers that don’t fall in line with its guidelines for adhering to the Act will face deeper scrutiny.
At the same time, “downstream” providers who adapt models developed by others have been given clarity on whether they will fall under the AI Act.
The guidelines follow the Code of Practice released by the EU last week, which set out transparency and copyright requirements for all general-purpose model providers, and safety and security requirements for the most advanced models.
The guidelines spell out who will fall “in and out of scope of the AI Act’s obligations for providers of general-purpose AI models.” That’s significant, as the Act’s rules on general-purpose AI models kick in from August 2.
A Commission spokesperson said the guidelines would cover models trained using more than 10²³ FLOP of compute that can generate language, text-to-image, or text-to-video output. Epoch.AI has identified at least 201 models trained with that much compute, and another 126 that may cross the threshold.
“So if the model uses more than this amount of training compute, and if it can operate in these modalities, then it would be indicatively considered general purpose AI,” the spokesperson said.
This was a fairly straightforward criterion, the spokesperson added, and it should be easy for companies to assess whether their models are covered.
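As a rough illustration of how a provider might self-check against that indicative criterion, the sketch below pairs the guidelines’ 10²³ FLOP threshold and modality list with the 6 × parameters × tokens estimate of training compute. That estimate is a common heuristic from the scaling-laws literature, not a formula taken from the guidelines themselves, and the function names and example figures here are hypothetical.

```python
# Illustrative only: a rough self-check against the guidelines' indicative
# criterion. The 1e23 FLOP threshold and the covered modalities are from the
# guidelines as reported above; the 6 * params * tokens compute estimate is a
# standard heuristic from the scaling-laws literature, not from the guidelines.

GPAI_FLOP_THRESHOLD = 1e23
GPAI_MODALITIES = {"language", "text-to-image", "text-to-video"}

def estimate_training_flop(params: float, training_tokens: float) -> float:
    """Approximate training compute as 6 * N * D (forward + backward passes)."""
    return 6 * params * training_tokens

def indicatively_gpai(params: float, training_tokens: float,
                      modalities: set[str]) -> bool:
    """True if the model indicatively meets both criteria described above."""
    exceeds_compute = estimate_training_flop(params, training_tokens) > GPAI_FLOP_THRESHOLD
    covered_modality = bool(modalities & GPAI_MODALITIES)
    return exceeds_compute and covered_modality

# A 7B-parameter model trained on 2 trillion tokens:
# 6 * 7e9 * 2e12 = 8.4e22 FLOP, just under the threshold.
print(indicatively_gpai(7e9, 2e12, {"language"}))    # False

# A 70B-parameter model trained on 15 trillion tokens:
# 6 * 70e9 * 15e12 = 6.3e24 FLOP, well over the threshold.
print(indicatively_gpai(70e9, 15e12, {"language"}))  # True
```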
When it comes to downstream modifiers, the guidelines say they will only come under the general-purpose AI model provisions if the compute used for their modification exceeds one third of the compute used to train the original model.
“Many businesses in the European Union actually do not develop their own general purpose AI models, but rather modify models that already exist,” the spokesperson said. “And then if they only do a small modification, they should not be unduly burdened with all of the obligations under the AI Act.”
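The one-third rule lends itself to the same kind of back-of-the-envelope check. In the sketch below, only the threshold itself comes from the guidelines; the function name and the example figures are made up for illustration.

```python
# Illustrative only: the one-third threshold is from the guidelines; the
# example figures are hypothetical.

def modification_triggers_gpai_obligations(modification_flop: float,
                                           original_training_flop: float) -> bool:
    """True if the compute used to modify the model exceeds one third of the
    compute used to train the original model."""
    return modification_flop > original_training_flop / 3

# A fine-tune using 1e22 FLOP of a model originally trained with 5e23 FLOP:
# 1e22 is well below 5e23 / 3 (about 1.7e23), so the modifier stays out of scope.
print(modification_triggers_gpai_obligations(1e22, 5e23))  # False
```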
Likewise, the guidelines detail some exemptions for open source models, “to ensure that the burden on true open source models is kept to the minimum.”
The spokesperson added that the guidelines “outline the implications of voluntary adherence to the code of practice.”
Providers are not obliged to sign up to the code of practice, but doing so will mean “increased trust from the Commission,” the spokesperson said.
“It is much less straightforward if the provider chooses to comply by alternative means, and in that case, the Commission would probably need to inquire more and need the provider to give certain arguments for why their chosen means of compliance is adequate.”
That will become more important from August 2026, when the Commission will move from helping providers ensure they are compliant to taking enforcement action against those that are not.
“The Commission will enforce compliance with all the obligations for all providers placing general purpose AI models on the European market, including with the full set of tools, including fines,” the spokesperson said.
Providers with models already on the market have until August 2027 to get in line.
Big tech companies, which account for most of the large model providers, had called for the AI Act’s provisions to be postponed. Among their arguments was that much uncertainty around the Act still remained to be resolved.
However, the Commission has pushed ahead. After all, as law firm Crowell pointed out, “Ultimately, only the interpretation of the Court of Justice of the European Union is binding.”