By Jacob Sullum

[ . . . ]

The FTC’s authority under Section 5 “does not, and constitutionally cannot, extend to penalizing social media platforms for how they choose to moderate user content,” Ashkhen Kazaryan, a senior legal fellow at the Future of Free Speech, argues in a comment that the organization submitted on Tuesday. “Platforms’ content moderation policies, even if controversial or unevenly enforced, do not fall within the scope of deception or unfairness as defined by longstanding FTC precedent or constitutional doctrine. Content moderation practices, whether they involve the removal of misinformation, the enforcement of hate speech policies, or the decision to abstain from moderating content users don’t want to see, do not constitute the type of economic or tangible harm the unfairness standard was designed to address. While such policies may be the subject of vigorous public debate, they do not justify FTC intervention.”

[ . . . ]

“Holding platforms liable under Section 5 for content moderation policies would necessarily intrude upon their editorial judgment,” Kazaryan notes. “The First Amendment not only protects the right to speak but also the right not to speak and to curate content. The Supreme Court has never held that editorial discretion must be evenly or flawlessly applied to qualify for constitutional protection.”

The FTC also suggests that content moderation practices “affect competition, may have resulted from a lack of competition, or may have been the product of anti-competitive conduct.” But Kazaryan notes that platforms compete based on different approaches to moderation. “The existence of platforms such as Rumble, Mastodon, Substack, Truth Social, and Bluesky,” she writes, “demonstrates that users have choices in moderation environments.”

Those environments also evolve over time based on business judgments or because of changes in ownership. “Under its previous leadership, Twitter developed strict rules against misinformation and hate speech,” Kazaryan notes. “Following Elon Musk’s acquisition, the platform reassessed those policies and relaxed many of them, allowing for broader latitude in political and ideological speech. Some saw this as irresponsible. Others viewed it as a welcome rebalancing in favor of free expression. Both views are valid. But neither justifies government intervention. The fact that a private entity revised its speech rules to reflect the views of new ownership is not a violation of law; it is a demonstration of First Amendment rights in action.”

Kazaryan also cites changes in moderation policies at Meta, which this year switched "from a top-down enforcement model to a new community fact-checking system that lets users add context to viral posts through crowd-sourced notes" on Facebook and Instagram. And she notes that YouTube has revised its "moderation policies on election and health information in light of shifting scientific consensus and public debate."

None of those changes “are inherently deceptive, unfair, or anticompetitive,” Kazaryan writes. “A platform’s decision to use a top-down moderation system or a community notes model is a design choice and an editorial judgment that the Supreme Court recognizes as protected by the First Amendment.”

Kazaryan also questions the premise that social media are systematically biased against right-of-center views. “Conservative accounts, influencers, and news sources have reached massive audiences across all major social media platforms,” she notes. “Data from the last several years shows how right-leaning voices have successfully promoted their perspectives online.”

Kazaryan backs up that assessment with several pieces of evidence. In the final quarter of 2019, for example, Breitbart's Facebook page "racked up more likes, comments, and shares" than The New York Times, The Washington Post, The Wall Street Journal, and USA Today combined. Kazaryan adds that President Donald Trump's "own social media presence remains unmatched; his accounts across platforms like X (formerly Twitter), Facebook, and Truth Social collectively boast nearly 170 million followers, significantly outpacing his political rivals."

A 2020 Media Matters study, Kazaryan notes, “found that right-leaning pages garnered more total interactions than both left-leaning and non-aligned pages.” A 2021 study published in the Proceedings of the National Academy of Sciences “revealed that Twitter’s algorithmic amplification favored right-leaning news sources over left-leaning ones in six out of seven countries studied, including the United States.” A 2024 Pew Research Center study of “news influencers” on Facebook, Instagram, TikTok, X, and YouTube found they were “more likely to identify with the political right than the left.”
