
By Andre Hiroki
[ . . . ]Panelists focused on whether AI-generated outputs should be considered speech under existing legal frameworks, and what that means for Section 230 liability. Miers argued that a blanket rule denying Section 230 protection to AI systems could have consequences beyond chatbots, potentially affecting long-standing online practices such as ranking, sorting and editing third-party content.
Other panelists emphasized how liability uncertainty could reshape the market. Matt Reeder, head of legal at Bluesky, pointed to decentralized platforms as examples of how liability protections can empower users. Describing Bluesky’s structure, Reeder said it is “much more like a farmers market,” where users own their identity and content and can move freely if they disagree with platform decisions.
Ashkhen Kazaryan, senior legal fellow at The Future of Free Speech, highlighted how courts have gradually narrowed the scope of Section 230 through case law, focusing on how the statute applies to modern content algorithms and moderation. Panelists discussed Anderson v. TikTok, a case that has yet to reach the Supreme Court but raises questions about whether platforms can be held liable for harms linked to algorithmic amplification of third-party content.
“Algorithms, and the Supreme Court said this, are just tools that platforms are using to solidify their editorial discretion,” Kazaryan said.
