
Democracy, California Gov. Gavin Newsom warns, is on the brink. The culprit? A wave of “disinformation powered by generative AI,” poised to “pollute our information ecosystems like never before.” With the 2024 election looming, Newsom and California Democrats argue that artificial intelligence-generated content threatens to warp public perception. In response, the Golden State has swiftly enacted two bold new laws designed to stem the tide of “deceptive” content spreading across the internet.
These laws not only likely violate the First Amendment, which protects even false political speech, but they are also rooted in exaggerated fears of AI disinformation.
An obviously deepfaked video of Vice President Kamala Harris, widely shared by Elon Musk, prompted Newsom’s push to regulate online discourse. But, of course, these laws will also ban the many parody AI videos of Donald Trump.
To be sure, disinformation, deepfakes and propaganda can spread and have real-life effects. But as researchers have pointed out — mostly to deaf ears — the extent and impact of disinformation are, thus far, typically much smaller than the alarmist scenarios assume. And a recent study by MIT researchers found that humans can frequently discern deepfakes using both audio and visual cues. That’s why widely shared deepfakes of Harris or Trump failed to convince anyone they were real.
[ . . . ]
Jacob Mchangama is the Founder and Executive Director of The Future of Free Speech. He is also a research professor at Vanderbilt University and a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE).