By Jordi Calvet-Bademunt and Joan Barata

This piece is part of a series that marks the first 100 days since the full implementation of Europe’s Digital Services Act. You can read more items in the series here.

The adoption of the Digital Services Act (DSA) represented a major development within the context of the EU and beyond. Building on the precedent of the eCommerce Directive, the DSA introduces new legal responsibilities for online platforms and new rights for users. It covers significant areas of intermediary and platform regulation, such as liability, transparency, appeal and redress mechanisms, regulatory bodies, systemic risk assessment and mitigation, data protection, and online advertising. With most of its provisions now being enforced, the big test of this regulation’s appropriateness and overall success has only just begun. This testing period will particularly affect the so-called very large online platforms (VLOPs) and very large online search engines (VLOSEs), which are subject to the most specific and demanding obligations.

The implementation of the DSA has been accompanied by the almost simultaneous discussion and adoption of other pieces of legislation, such as the European Media Freedom Act (EMFA) and the regulation on transparency and targeting of political advertising (PolAd), which will also affect certain aspects of platforms’ activities, particularly content moderation policies.

In this context, a new piece of legislation is of particular importance. On May 21, the Council gave the final green light to the adoption of the AI Act. It constitutes another landmark moment: the first-ever legal framework for AI at a global level, built on a risk-based approach. It sets out requirements and obligations for developers and deployers regarding specific uses of AI.

Legislators discussed the DSA and AI Act in parallel, but, in principle, they cover separate areas of technology regulation. The AI Act generally governs AI technology, whereas the DSA regulates intermediary services, including online platforms. While the main discussions around the DSA took place within a context where AI was more nascent or had yet to trigger substantial societal and political debates, the AI Act was at the top of the legislative agenda in the EU in parallel with the more recent emergence of popular and controversial applications such as ChatGPT.

Though the DSA and AI Act were enacted separately, platform regulation and the use of AI systems are becoming increasingly intertwined, as the preamble of the AI Act acknowledges. However, determining the legal regime applicable to matters at the intersection of AI and platform regulation may, in some cases, require efforts to find consistency between two different and, in many ways, “parallel” pieces of legislation.

Jordi Calvet-Bademunt is a Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. His research focuses on free speech in the digital space.

Joan Barata is a Senior Legal Fellow for The Future of Free Speech. He works on freedom of expression, media regulation, and intermediary liability issues.