Techopedia: 50% of U.S. States Enact Deepfake Laws To Protect 2024 Elections

By Ray Fernandez 19 U.S. states have enacted laws regulating the use of generative AI in election communications, and seven more are considering bills. With no federal mandate, more than half of American states have enacted or are exploring laws designed to criminalize the creation and distribution of deepfakes in election-related content. […] Deepfake laws […]

MSNBC: California’s Solution to Fight AI Disinformation Is Worse Than the Problem

By Jacob Mchangama Democracy, California Gov. Gavin Newsom warns, is on the brink. The culprit? A wave of “disinformation powered by generative AI,” poised to “pollute our information ecosystems like never before.” With the 2024 election looming, Newsom and California Democrats argue that artificial intelligence-generated content threatens to warp public perception. In response, the Golden State has swiftly enacted two bold […]

Techopedia: Can AI Rules Damage Freedom of Speech?

By Tim Keary The risk of AI systems being used to enable censorship is often overlooked. On the vendor side, so much focus is placed on safety and content moderation that the output of these tools frequently displays ideological biases, sometimes outright blocking legitimate outputs. […] How AI Vendors’ Content Moderation […]

Tech Policy Press: The Digital Services Act Meets the AI Act

By Jordi Calvet-Bademunt and Joan Barata This piece is part of a series that marks the first 100 days since the full implementation of Europe’s Digital Services Act. You can read more items in the series here. The adoption of the Digital Services Act (DSA) represented a major development within the context of the EU and beyond. Based […]

Tech Policy Press: Generative AI Developers Should Commit to Free Speech and Access to Information

By Jordi Calvet-Bademunt “When we started this work, we were curious. Now, we have real concerns.” The CEO of the Competition & Markets Authority (CMA), the British antitrust regulator, was referring to the competition risks her team has identified in the foundation models industry. Foundation models are a type of generative AI. Popular models include OpenAI’s GPT-4 and Google’s […]

The Conversation: AI Chatbots Refuse to Produce ‘Controversial’ Output – Why That’s a Free Speech Problem

By Jacob Mchangama and Jordi Calvet-Bademunt Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts […]

NYU First Amendment Watch: Jacob Mchangama on the First Amendment Implications of Generative AI

By Susanna Granieri The rise of generative artificial intelligence has led to questions about its First Amendment implications — like its use by journalists or its application in defamation law — but it remains unclear how the nation’s courts will consider its potential impacts on the marketplace of ideas. The technology itself does not have rights, […]

Politico Pro: The Content Wars: Analyzing Chatbot Content Policies

In the Friday, March 1, 2024 POLITICO Pro Morning Tech Newsletter (Subscriber Only), Mallory Culhane and John Hendel cite The Future of Free Speech‘s latest report, “Freedom of Expression in Generative AI – A Snapshot of Content Policies”: THE CONTENT WARS: ANALYZING CHATBOT CONTENT POLICIES — Some of the most popular chatbots’ policies on hate speech and […]

TIME: The Future of Censorship Is AI-Generated

By Jacob Mchangama and Jules White The brave new world of Generative AI has become the latest battleground for U.S. culture wars. Google issued an apology after anti-woke X-users, including Elon Musk, shared examples of Google’s chatbot Gemini refusing to generate images of white people—including historical figures—even when specifically prompted to do so. Gemini’s insistence on prioritizing diversity […]

UnHerd: Beware the WEF’s new misinformation panic

AI-powered lies and manipulation constitute the gravest threat to humanity. At least this is the dystopian scenario espoused by the collective wisdom of 1,500 experts surveyed in the World Economic Forum’s 2024 Global Risks Report last week. Unfortunately, such outbreaks of “elite panic” are a recurring phenomenon. Whenever the public sphere is expanded through new communications technology, […]