By Jordi Calvet-Bademunt 

Last Friday, the European Commission shared with X its preliminary view that the platform is in breach of the Digital Services Act (DSA), Europe’s online safety rulebook. Following the announcement, some media outlets were quick to say that the Commission had charged “Elon Musk’s X for letting disinfo run wild.” In a conspiratorial tone, Elon Musk accused the Commission of offering X an “illegal secret deal”: if X quietly censored speech without telling anyone, the Commission would not fine the company.

Both these takes are one-sided. The preliminary findings are narrower than the initial investigation and do not explicitly deal with “information manipulation,” which is still under investigation. And the “illegal secret deal” is neither illegal nor particularly secret.

To promote transparency and public discourse, my organization, The Future of Free Speech, is tracking the enforcement of the DSA. Using the Commission’s official press releases, the following discussion seeks to provide a factual overview of the DSA enforcement proceeding against X.

The Origins Of The Findings

Last October, following Hamas’ attack on Israel, the European Commission sent X a request for information under the DSA regarding “the alleged spreading of illegal content and disinformation, in particular the spreading of terrorist and violent content and hate speech.”

Thierry Breton, the EU Commissioner in charge of enforcing the bloc’s digital rules, sent an accompanying letter that sparked significant concerns. He also sent similar letters to Meta, TikTok, and YouTube. Twenty-eight civil society organizations, including mine, wrote to Commissioner Breton to express their worry about, among other issues, the seeming conflation of illegal content with disinformation, which is generally protected by freedom of expression.

Eventually, in December 2023, the European Commission opened formal proceedings against X. This enforcement stage empowers the Commission to adopt interim measures and non-compliance decisions. According to the press release, the formal proceedings focused on five key issues:

  • Systemic Risks. Very large online platforms (VLOPs), like X, must assess and mitigate any systemic risks stemming from their services (articles 34 and 35 of the DSA). Systemic risk is a nebulous concept in the DSA that requires balancing multiple, potentially conflicting objectives, such as protecting civic discourse, public security, and free speech. We have publicly shared our concerns regarding these provisions on numerous occasions. The press releases suggest the systemic risk obligations were the key basis for investigating the dissemination of illegal content and “information manipulation” on X.
  • Notice-And-Action Mechanism. Online platforms must swiftly notify users of content moderation decisions and provide information on redress possibilities (article 16).
  • Deceptive Interface. Online platforms must not design, organize, or operate their online interfaces in a way that deceives or manipulates their users, or in a way that otherwise materially distorts or impairs their ability to make free and informed decisions (article 25). In particular, the European Commission was worried about the blue checkmarks (more on this below).
  • Ad Repository. VLOPs must compile and make publicly available, through a searchable tool, a repository of the advertisements presented on their platform, keeping each one until one year after it was last displayed (article 39).
  • Researchers’ Access To Data. VLOPs must provide researchers with adequate access to platform data (article 40).

Before sharing the preliminary findings, the Commission sent X two further requests for information: one concerning its decision to reduce the resources devoted to content moderation, and another concerning its risk assessment and mitigation measures for generative AI.


Jordi Calvet-Bademunt is a Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. His research focuses on free speech in the digital space.