By Jordi Calvet-Bademunt

“When we started this work, we were curious. Now, we have real concerns.” The CEO of the Competition & Markets Authority (CMA), the British antitrust regulator, was referring to the competition risks her team has identified in the foundation models industry. Foundation models are a type of generative AI. Popular models include OpenAI’s GPT-4 and Google’s Gemini.

Generative AI is becoming more widespread, and there is a real risk that it will come under the control of a handful of companies, just as other digital sectors have. The Federal Trade Commission (FTC) in the United States and the European Commission in the European Union are also analyzing competition risks in generative AI. Naturally, antitrust regulators are concerned about the economic implications of such concentration.

But limited competition can have adverse effects well beyond the economy. A concentrated and homogeneous generative AI industry can also be pernicious to freedom of expression and access to information. It would mean that just a few AI providers could decisively influence the kind of information millions of users create and access. It would be a problematic outcome if, for instance, Google’s Gemini steered Gmail users to draft messages that favored specific information and viewpoints while limiting or refusing assistance with other perspectives. Or if Microsoft’s Copilot did the same in Word, or Meta’s AI shaped what messages users wrote on its platforms.

While the future of generative AI is still unclear, and policymakers still have time to spur a competitive marketplace, we should prepare for the possibility that a few major players will dominate it. In such a consolidated marketplace, it is paramount that these dominant companies develop approaches and policies that align with human rights and, in particular, commit to freedom of expression and access to information.