By: Jamelia Watson
Published: May 29, 2025
In December 2016, one month after the presidential election, Meta (then Facebook) launched its fact-checking program to identify and address viral misinformation across its social media platforms and to flag hoaxes that have no basis in fact.1 Eight years later, in 2025, amid an era of political disinformation and health falsehoods, Meta announced that it was terminating its revered fact-checking program, with its founder acknowledging that "more harmful content will appear on platforms now."2
Section 230 of the Communications Decency Act protects internet service providers from being held liable for user-generated content.3 Specifically, it protects social media companies, like Meta, from lawsuits based on user content and allows companies to moderate content on their sites.4 It also prevents users and providers from being treated as the publisher of another person's information.5 Platforms have traditionally used § 230 to justify content moderation efforts, arguing that removing misinformation is part of their responsibility to ensure a safe online space.6 The question, however, is whether the reverse should apply: should social media platforms bear legal responsibility for failures in content moderation that allow misinformation to spread?
§ 230 & Platform Liability
§ 230 can be understood in two key ways as it relates to platform liability and content moderation. First, under § 230(c)(1), social media platforms cannot be sued for content that users post.7 This means these platforms are not legally treated as the publisher or speaker of someone else's content, protecting them from liability and potential defamation claims.8 In Zeran v. America Online, Inc., a federal appeals court held that the provision bars "lawsuits seeking to hold a service provider liable for its exercise of a publisher's traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content."9 Second, under § 230(c)(2), platforms have protection when moderating content.10 Under § 230(c)(2)(A), social media companies and users cannot be sued for removing or restricting content they believe is harmful, such as obscene, violent, or harassing material, so long as they act in good faith.11 This provision has been invoked to justify fact-checking features as "good faith" efforts to remove misleading health claims and political disinformation that have led to tangible harm, including public health risks and interference with democratic processes.12
Today, we face a new challenge: the spread of misinformation fueled by algorithms. This year, Netflix released Apple Cider Vinegar, a series revisiting the story of Belle Gibson, an Australian social media influencer and "wellness" blogger.13 During a 60 Minutes interview, she was confronted with irrefutable evidence that she had fabricated her terminal cancer diagnosis, falsely promoted diet-based healing, and deceived both her social media followers and business sponsors, scamming people and corporations out of hundreds of thousands of dollars through Meta platforms.14 Arguably, § 230 is one of the ways Meta was shielded from liability for Gibson's cancer fraud, because her posts were classified as third-party content. This example, coupled with the growth of other social media platforms like TikTok and a new wave of health influencers who promote eating disorders and vaccine misinformation, challenges § 230 and its shielding of platforms from liability.
Meta’s Fact-Checking System & Its Termination
In December 2016, Meta launched its fact-checking program in response to claims that it had failed to stop foreign actors from leveraging its platforms to spread disinformation and create political discord among Americans.15 As a result, Meta faced mounting pressure from governments and regulatory bodies to take more responsibility for the content shared on its sites.16 The spread of hoaxes, misleading health claims, and political disinformation had led to tangible harm, including increased public health risks during a global pandemic.17 Meta's fact-checking initiative was a legal response to these concerns, aimed at reducing the spread of false information and mitigating potential legal liabilities for the company itself.18 Yet in the wake of another contentious election, and alongside growing pressure from other platforms such as X, formerly known as Twitter, Meta reversed its stance, claiming that "[fact-checking tools] have resulted in too much content being censored that shouldn't have been."19 Commenting further on censorship and free speech, Meta's founder acknowledged a "tradeoff" in the new policy, noting that "more harmful content will appear on the platform as a result of the content moderation changes."20 The legal implications, however, remain: without fact-checking, misinformation may proliferate unchecked, and Meta may continue to invoke § 230 to shield itself from direct liability.
The Future of Platform Regulation
The current policy debate hinges on whether § 230 continues to facilitate free speech while avoiding excessive litigation, or whether reforming or amending the Act could better combat fraud, election misinformation, and health crises.21 There are currently legislative proposals in Congress, such as the SAFE TECH Act, which would limit the federal liability protection that applies to a user or social media company for claims related to content provided by third parties.22 The risk of such regulation could deter platforms from moderating content at all, which could lead to further societal harm.
By voluntarily stepping away from content moderation and removing fact-checking, these platforms blur the line between neutral intermediaries and active publishers. If platforms stop moderating misinformation, especially harmful public health falsehoods or election fabrications, critics may argue that they are no longer acting as neutral intermediaries but rather as facilitators of misinformation, potentially exposing them to liability. A potential solution lies not in repealing § 230 but in refining its scope to reflect the evolving role of social media platforms amid shifting political climates. While § 230 was designed to protect free speech, its broad scope has allowed platforms to skirt responsibility for the spread of harmful misinformation. Reform efforts, such as the SAFE TECH Act, suggest a middle ground. Without meaningful reform, platforms may continue to operate in a legal gray area, avoiding accountability even as misinformation continues to spread.
1 Understanding Meta’s Fact-Checking Program, META (Oct. 20, 2023), https://www.facebook.com/government-nonprofits/blog/misinformation-resources.
2 Claire Duffy, Meta is getting rid of fact checkers. Zuckerberg acknowledged more harmful content will appear on the platforms now, CNN (Jan. 7, 2025, 1:06 PM), https://www.cnn.com/2025/01/07/tech/meta-censorship-moderation/index.html.
3 47 U.S.C. § 230.
4 Becca Branum, Zuckerman v. Meta and the Puzzle of § 230(c)(2), CTR. FOR DEMOCRACY & TECH. (Sept. 10, 2024), https://cdt.org/insights/zuckerman-v-meta-and-the-puzzle-of-section-230c2/.
5 Id.
6 Id.
7 Section 230 and the Communications Decency Act: Legal Issues and Policy Implications, CONG. RSCH. SERV., https://crsreports.congress.gov/product/pdf/IF/IF12584 (last updated Mar. 11, 2021).
8 Id.
9 See Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997).
10 See Section 230 and the Communications Decency Act: Legal Issues and Policy Implications, CONG. RSCH. SERV., https://crsreports.congress.gov/product/pdf/IF/IF12584 (last updated Mar. 11, 2021).
11 See id.
12 See id.
13 Lauren Blue, Review: Netflix’s ‘Apple Cider Vinegar’ Is the Wellness Wake-Up Call We All Need, THE EVERYGIRL (Feb. 11, 2025), https://theeverygirl.com/apple-cider-vinegar-netflix-review/.
14 Id.
15 Duffy, supra note 2.
16 See id.
17 See Amy Roeder, Meta’s fact-checking changes raise concerns about spread of science misinformation, HARV. T.H. CHAN SCH. OF PUB. HEALTH (Jan. 10, 2025), https://hsph.harvard.edu/news/metas-fact-checking-changes-raise-concerns-about-spread-of-science-misinformation/ (noting that many social media companies labeled scientifically incorrect content to limit users’ access to misinformation, and added signals to help users find accurate information).
18 See Facebook’s long and halting fight against misinformation, AP (Oct. 18, 2020, 5:23 AM), https://apnews.com/article/virus-outbreak-donald-trump-conspiracy-theories-misinformation-elections-36cca2a5fb8911132b0d8f0b6a3e5a31 (citing a timeline that depicts that in 2018, Meta’s founder testified before Congress and apologized for the company’s missteps, as well as fake news, hate speech, a lack of data privacy and foreign interference in the 2016 elections).
19 See Duffy, supra note 2.
20 Id.
21 S. 560, 118th Cong. (2023), https://www.congress.gov/bill/118th-congress/senate-bill/560/text.
22 Id.