By Jordi Calvet-Bademunt 

The European Union (EU) adopted the Artificial Intelligence (AI) Act in June 2024. Hailed as “the world’s first comprehensive AI law,” the AI Act includes a set of obligations for high-impact general-purpose AI models, which will start applying next year. Models are presumed to have a “high impact” when the cumulative computation used to train them exceeds a set threshold (10^25 floating-point operations). According to an August 2024 analysis, eight models from companies including OpenAI, Google, Meta, and Mistral are likely to be designated as high impact under the AI Act.

The AI Act requires providers of high-impact general-purpose AI models to “assess and mitigate possible systemic risks.” Europe’s Digital Services Act (DSA) – a law imposing similar requirements on very large online platforms and search engines – has shown that such obligations can unduly restrict freedom of expression when inadequately applied.

Although the AI Act is already in place, freedom of expression advocates still have an opportunity to protect this fundamental right through the General-Purpose AI Code of Practice, which is currently being drafted. The Code is intended to guide general-purpose AI providers in implementing the AI Act’s provisions on systemic risk and other requirements until harmonized standards are approved in a few years.


Jordi Calvet-Bademunt is a Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. His research focuses on free speech in the digital space.