
By Ashkhen Kazaryan
Below is an excerpt from our public comment. Click here to read the full submission.
Comments on Rule 15 CSR 60-19 Proposed by the Attorney General of Missouri
The Future of Free Speech is an independent, nonpartisan think tank located at Vanderbilt University. We work to reaffirm freedom of expression as the bedrock of free and thriving societies through actionable research, empowering tools, and principled advocacy. The Future of Free Speech seeks to create a world where everyone’s right to freedom of expression is protected by law and reinforced by a culture that tolerates diverse viewpoints.
We submit this comment in response to the Missouri Attorney General’s proposed Rule 15 CSR 60-19.
While the rule is couched in the language of consumer protection, at its core it is a sweeping attempt to dictate how platforms design, organize, and deliver speech online. This raises serious constitutional and practical concerns. The rule mandates that platforms allow users to select their own third-party content moderation tools and that platforms provide algorithmic transparency.[1] Each of these mandates impermissibly compels or restricts protected editorial discretion in violation of the First Amendment, invites viewpoint-based regulation subject to strict scrutiny, exceeds the permissible scope of state authority under the Dormant Commerce Clause by regulating extraterritorially, and is preempted by federal law.
I. The Rule Violates the First Amendment
The First Amendment protects not only the right to speak but also the right not to speak and the right to curate content. The Supreme Court has never held that editorial discretion must be evenly or flawlessly applied to qualify for constitutional protection. To the contrary, the Court has consistently rejected government efforts to interfere with editorial judgment, holding that private speakers cannot be compelled to disseminate or organize speech according to government preferences.[2]
Missouri’s rule infringes on that protected discretion by requiring covered platforms to allow users to choose from a range of independent, third-party content moderators.[3] This mandate compels private platforms to integrate and operationalize moderation systems with which they may not agree. Those systems may also conflict with a platform’s safety standards, editorial priorities, and trust and safety architecture. The fact that a platform retains the right to maintain its own default moderator does not cure the constitutional defect; the platform must still facilitate and implement third-party systems it would not otherwise endorse.
This amounts to compelled hosting of editorial processes with which the platform may fundamentally disagree. Much as a newspaper cannot be forced to publish external inserts or opposing editorials, a digital platform cannot be forced to support alternative content moderation systems simply because the government deems them more ideologically balanced. As the Supreme Court reaffirmed in Moody v. NetChoice, “however imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced and then coercing speakers to provide more of some views or less of others.”[4]
That is precisely what Missouri’s rule does. By forcing platforms to accommodate third-party moderation tools, the state attempts to override platforms’ expressive autonomy and redesign their content governance structures. This is not a neutral consumer protection measure. It is a constitutionally suspect effort to reshape the flow of online expression by requiring platforms to share editorial authority with outside actors they did not choose.
It is especially important to note that the Missouri Attorney General’s press release explicitly cites Moody as the legal foundation for this regulation, claiming that the Supreme Court “recognized the authority of state governments to enforce competition laws in the interest of free expression.”[5] But that reading ignores the very passage in Moody from which that language is drawn:
“And the government can take varied measures, like enforcing competition laws, to protect that access. Cf., e.g., Turner I, 512 U. S., at 647 (protecting local broadcasting); Hurley, 515 U. S., at 577 (discussing Turner I ). But in case after case, the Court has barred the government from forcing a private speaker to present views it wished to spurn in order to rejigger the expressive realm. The regulations in Tornillo, PG&E, and Hurley all were thought to promote greater diversity of expression. See supra, at 14–16. They also were thought to counteract advantages some private parties possessed in controlling “enviable vehicle[s]” for speech. Hurley, 515 U. S., at 577. Indeed, the Tornillo Court devoted six pages of its opinion to recounting a critique of the then current media environment—in particular, the disproportionate “influen[ce]” of a few speakers—similar to one heard today (except about different entities). 418 U. S., at 249; see id., at 248–254; supra, at 14–15. It made no difference.”[6]
Moreover, the Missouri Attorney General has repeatedly framed the rule as a response to perceived censorship by social media platforms, accusing them of silencing disfavored voices and manipulating online discourse.[7] That stated purpose fatally undermines the rule’s constitutionality. Viewpoint-based regulatory motives trigger strict scrutiny, and efforts to correct alleged political imbalance on private platforms are not a legitimate government interest. The First Amendment does not authorize the state to engineer parity in the marketplace of ideas. It prohibits the government from penalizing speakers for organizing or presenting content in ways with which it disagrees.
Let’s think about this practically. Some critics have argued that Twitter users did not affirmatively consent to the platform’s most recent changes, such as the decision to amplify Elon Musk’s personal posts across users’ timelines, even among those who do not follow him. In February 2023, internal reports revealed that Twitter engineers were instructed to tweak the algorithm to boost Musk’s tweets after he expressed dissatisfaction with his engagement metrics.[8] For many users, this resulted in a sudden and persistent exposure to content they had not sought out, leading to accusations of platform manipulation or bias. However unpopular, this decision remains squarely within the scope of protected editorial discretion. Much like a newspaper publisher can prioritize its owner’s editorials, a platform may elevate the content of its leadership or favored voices. Discomfort with these choices may inform consumer trust or platform loyalty, but it does not transform a lawful exercise of expressive judgment into actionable deception under consumer protection law.
The Supreme Court has recognized that online platforms are central to modern expressive life, describing the Internet as “the most important place[]…for the exchange of views.”[9] If this rule’s legal and operational burdens render continued service in Missouri impractical, there is a high chance that platforms will choose to exit the state. That foreseeable consequence would suppress not only the platforms’ editorial discretion but also restrict Missouri residents’ access to major online forums for speech, information, and association.
Similar dynamics have played out in other jurisdictions, where platforms have withdrawn services from entire markets in response to laws imposing mandatory content carriage or burdensome design mandates. In Canada, Meta stopped allowing users to post links to news articles in response to the Online News Act, which required platforms to pay publishers when users linked to their content.[10] In the European Union, companies have delayed or scaled back the deployment of certain services in response to far-reaching obligations under the Digital Services Act and the AI Act, citing uncertainty, cost, and legal exposure.[11]
These examples demonstrate that when governments impose rigid mandates on platform architecture or content governance, the likely result is not greater user empowerment but reduced access to widely used digital spaces. A rule that predictably invites such exit while claiming to protect speech does not advance free speech rights; it diminishes them for both platforms and the users who rely on them.
[1] Mo. Code Regs. tit. 15, §§ 60-19.020–.040.
[2] See Miami Herald Publ’g Co. v. Tornillo, 418 U.S. 241 (1974); Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Bos., 515 U.S. 557, 574 (1995); Moody v. NetChoice, LLC, 603 U.S. 707 (2024).
[3] Mo. Code Regs. tit. 15, § 60-19.030.
[4] Moody v. NetChoice, LLC, 603 U.S. 707, 732-33 (2024).
[5] Press Release, Off. of the Mo. Att’y Gen., Attorney General Bailey Files Groundbreaking Rule to End Big Tech’s Censorship Monopoly and Protect Online Free Speech (May 6, 2025), https://ago.mo.gov/attorney-general-bailey-files-groundbreaking-rule-to-end-big-techs-censorship-monopoly-and-protect-online-free-speech/.
[6] Moody, 603 U.S. at 732-33.
[7] Press Release, Off. of the Mo. Att’y Gen., Attorney General Bailey Files Groundbreaking Rule to End Big Tech’s Censorship Monopoly and Protect Online Free Speech (May 6, 2025), https://ago.mo.gov/attorney-general-bailey-files-groundbreaking-rule-to-end-big-techs-censorship-monopoly-and-protect-online-free-speech/.
[8] Casey Newton, Elon Musk Created a Special System for Showing You All His Tweets First, Platformer (Feb. 14, 2023), https://www.platformer.news/yes-elon-musk-created-a-special-system/.
[9] Packingham v. North Carolina, 582 U.S. 98, 107 (2017).
[10] Meta, Changes to News Availability on Our Platforms in Canada (June 1, 2023), https://about.fb.com/news/2023/06/changes-to-news-availability-on-our-platforms-in-canada/.
[11] Pieter Haeck & Giovanni Coi, Startups Side With Draghi: EU Red Tape Hampers Growth, Politico (Nov. 21, 2024), https://www.politico.eu/article/eu-tech-scene-denounces-ai-data-rules-bad-for-growth-meta-google-survey-gdpr/; Pieter Haeck, EU’s Waffle on Artificial Intelligence Law Creates Huge Headache, Politico (June 16, 2025), https://www.politico.eu/article/how-the-eu-ai-rules-turned-into-massive-headache/.
Ashkhen Kazaryan is a Senior Legal Fellow at The Future of Free Speech, where she leads initiatives to protect free expression and shape policies that uphold the First Amendment in the digital age.