
Why You Should Care About Algorithms, Free Speech, and Section 230
“I built this algo brick by brick.” You might have seen that sentence in the comments while scrolling TikTok’s “For You” page or Instagram Reels. It is usually a humorous acknowledgment that the commenter is seeing something in their feed that they did not search for but were “fed” by the algorithm based on their previous engagement with content they find interesting or relevant. While it may read as just a meta-joke among users, the comment reflects our collective, intuitive understanding of how the modern Internet functions: users and algorithms work together to shape the overall experience. Through every click, scroll, and pause, users train the system that determines what they see next. The result is a feed that feels curated not by chance, but by design.
At the core of that design is algorithmic curation, which refers to the use of computational systems to filter, rank, recommend, deprioritize, and remove content from the millions of posts uploaded each day.
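To make the idea concrete, here is a minimal sketch of what such curation can look like in code. It is illustrative only, not any platform’s actual system: the Post fields, the interest weights, and the downranking factor are all invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topics: set[str]
    violates_policy: bool = False  # flagged for outright removal
    borderline: bool = False       # eligible, but deprioritized

def rank_feed(posts: list[Post], user_interests: dict[str, float],
              limit: int = 10) -> list[Post]:
    """Filter out policy-violating posts, score the rest against the user's
    inferred interests, downrank borderline content, and keep the top `limit`."""
    eligible = [p for p in posts if not p.violates_policy]             # remove
    def score(p: Post) -> float:
        relevance = sum(user_interests.get(t, 0.0) for t in p.topics)  # rank
        return relevance * (0.2 if p.borderline else 1.0)              # deprioritize
    return sorted(eligible, key=score, reverse=True)[:limit]

# Interest weights would be inferred from clicks, watch time, pauses, etc.
interests = {"cooking": 0.9, "politics": 0.2}
feed = rank_feed(
    [
        Post("a", {"cooking"}),
        Post("b", {"politics"}, borderline=True),
        Post("c", {"spam"}, violates_policy=True),
    ],
    interests,
)
print([p.post_id for p in feed])  # ['a', 'b']: 'c' removed, 'b' downranked
```

Even this toy version shows editorial choices embedded in the design: which signals count as engagement, how heavily borderline content is deprioritized, and what is excluded outright.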
A growing chorus of voices argues that the legal foundations that built the Internet as we know it, including the rules governing the algorithms we interact with daily, need to change. They contend that platforms moderate either too much or too little content, and that the algorithms performing this function should be subject to government scrutiny. These arguments often stem from a misunderstanding of how our current legal framework has shaped the modern Internet and of what would happen if we abandoned it.
Without these algorithms, and without the legal discretion to make content moderation decisions, platforms would be unable to manage the scale and complexity of user activity online. And if they cannot, users would be left with a fragmented version of the online ecosystem. With algorithms, platforms make a series of editorial decisions, shaped by their policies, incentives, and legal obligations, about what content to elevate, downrank, or exclude altogether.
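Those policy-shaped decisions can be pictured as a simple decision table. The sketch below assumes a classifier that labels content and a written policy that maps labels to editorial actions; the labels, actions, and mapping are all hypothetical.

```python
from enum import Enum

class Action(Enum):
    ELEVATE = "elevate"     # boost in ranking
    ALLOW = "allow"         # leave untouched
    DOWNRANK = "downrank"   # keep, but show less often
    EXCLUDE = "exclude"     # remove from the feed entirely

# Hypothetical policy table mapping classifier labels to editorial actions.
# Real platforms weigh far richer signals; every entry here is invented.
POLICY = {
    "illegal": Action.EXCLUDE,
    "borderline": Action.DOWNRANK,
    "high_quality": Action.ELEVATE,
}

def editorial_action(label: str) -> Action:
    """Return the action the platform's written policy assigns to a label."""
    return POLICY.get(label, Action.ALLOW)

print(editorial_action("borderline"))  # Action.DOWNRANK
print(editorial_action("news"))        # Action.ALLOW (no rule applies)
```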
This paper argues that algorithmic curation is not just a technical necessity. It is a form of editorial discretion protected by the First Amendment and essential to the functioning of the digital ecosystem. Section 230, in turn, provides the statutory safety net that makes such moderation feasible without constant legal peril.
First, the First Amendment safeguards a platform’s right to exercise editorial judgment, including decisions made through algorithmic systems about what to highlight, downrank, or remove. Second, Section 230 of the Communications Decency Act shields platforms from liability for third-party content, allowing them to moderate and curate that content.
Efforts to restrict these protections through must-carry legislation, government action, or litigation risk undermining the Internet’s foundational legal framework. If platforms lose the ability to moderate and curate without liability, they face an untenable choice: allow all content, including material they deem harmful to users, or censor excessively to avoid liability. Either approach diminishes free expression and leads to governmental control over editorial decisions. For us, the users, and for society at large, it means fewer spaces for dialogue, less access to information relevant to us, and greater exposure to content we don’t like. Algorithmic curation helps users navigate billions of pieces of content to find community, interact, and engage with the digital world in a meaningful way.
Ashkhen Kazaryan is a Senior Legal Fellow at The Future of Free Speech, where she leads initiatives to protect free expression and shape policies that uphold the First Amendment in the digital age.