FTC’s New Approach to Content Moderation


Introduction

The Federal Trade Commission’s (FTC) renewed investigation into technology platforms’ censorship highlights the growing debate over the balance among content moderation, free expression, and regulatory oversight in the digital space. Regulating online speech is nothing new to the legal and business communities in the United States. In 2022, Elon Musk’s acquisition of Twitter drew attention to the platform’s new content moderation initiatives and their implications for freedom of expression on social media. These broader concerns were also reflected in the late-2024 debate surrounding the federal ban on TikTok, a social media platform owned by Chinese parent company ByteDance Ltd. The Supreme Court ultimately upheld the constitutionality of the ban, citing national security concerns.


These developments signal a broader shift in the government’s approach to regulating digital platforms, with an increased focus on ensuring that policies do not infringe on users’ rights to free expression. While past regulatory efforts, such as the TikTok ban, have centered on national security concerns, more recent initiatives suggest a growing emphasis on the fairness and transparency of content moderation practices themselves. This shift became evident in February 2025, when the FTC launched a public inquiry to determine whether technology platforms are unfairly restricting user access based on personal expression. The FTC aims to target confusing and unclear internal procedures that may unfairly limit users’ ability to interact with a platform and to express their ideas freely.


FTC’s New Approach 

The FTC launched a public inquiry to “understand how technology platforms deny or degrade users’ access to services based on the content of their speech or affiliations, and how this conduct may have violated the law.” In its February 20th press release, the FTC claimed that “censorship by technology platforms is not just un-American, it is potentially illegal.” 

Users are almost always required to consent to a platform’s terms of service, meaning they must willingly relinquish the right to engage in certain forms of speech in order to use the platform. These restrictions are neither unconstitutional nor uncommon. The Supreme Court signaled as much in Moody v. NetChoice, which addressed Texas and Florida laws limiting social media platforms’ ability to moderate content; the Court explained that content moderation is itself protected speech activity. Moreover, Section 230 of the Communications Decency Act shields online platforms from liability stemming from user-generated content.


However, in its recent Request for Information Regarding Technology Platform Censorship, the FTC claims that platforms may violate their own terms of use by employing “opaque or unpredictable internal procedures to restrict users’ access to services” without notice and without sufficient appeal processes following account restrictions. The FTC accuses large tech companies of demonetizing and “shadow-banning” accounts, which may unlawfully restrict users’ speech. The investigation aims to learn how platforms demonetize and shadow-ban accounts, as well as what processes are available to users seeking recourse. If the FTC finds that a company has violated its own terms of service, the company could face consequences ranging from private user lawsuits to FTC-imposed fines or court-ordered injunctions. Serious consequences are unlikely, however, for two reasons.


First, terms of service are generally construed broadly, and platforms have wide discretion over what constitutes appropriate speech. The FTC makes a slightly different assertion: that platforms suppress accounts because of users’ affiliations or speech. However, it is hard to imagine that platforms are actively engaging in this type of “censorship” rather than simply deleting posts and suspending accounts that violate their terms of service. These platforms are largely governed by algorithms that promote posts based on the engagement generated by similar posts. It is more likely that the “suppressed” content is disfavored by the algorithm because of users’ dislike of, or lack of interest in, that content.


Second, the Request for Information reads more like a political signal than a genuine inquiry. Calling censorship “un-American” when platforms have long been permitted to moderate content to guard against misinformation and hate speech harkens back to the Trump Administration’s rhetoric and past grievances about de-platforming. While the relationship between Silicon Valley and the Trump Administration shifted during the last election cycle, the investigation signals that this administration’s FTC will take a firmer stance against online censorship.


Business Implications

The investigation’s early impact on business remains largely unknown. However, some major players are already reacting. In January, Meta overhauled its content moderation policy, scrapping most of its in-house moderation capabilities in favor of community-based moderation. Mark Zuckerberg defended the move, saying fact-checkers are “too politically biased and have destroyed more trust than they’ve created.” Meta plans to work closely with the Trump Administration to create a more community-centric moderation structure. The change comes as part of a larger shift among major tech companies to realign their policies with the new administration. The realignment has frustrated many liberal users and has paved the way for newer platforms like Bluesky and Substack to capture more of the market.


Conclusion

While the FTC’s investigation into technology platforms’ content moderation practices raises important legal and policy questions, its practical impact remains uncertain. Platforms continue to wield broad discretion over their terms of service, and the opaque nature of algorithm-driven moderation makes it challenging to establish clear instances of deliberate suppression of speech. Furthermore, the inquiry’s political undertones suggest that it may serve more as a policy statement than a precursor to substantive regulatory change. Without concrete evidence of unfair or deceptive practices that violate existing laws, the FTC’s ability to impose meaningful enforcement remains limited. Unless this inquiry leads to legislative reforms or significant legal precedents, its long-term impact may be more symbolic than structural, reinforcing ongoing debates rather than reshaping the regulatory landscape.


*The views expressed in this article do not represent the views of Santa Clara University.
