By: Catherine Beal
Published on: October 28, 2024
Over the past several years, artificial intelligence (AI) has advanced at breakneck speed, revolutionizing numerous aspects of modern life and expanding technological possibilities.[1] This rapid progress has brought immense convenience, but it has also exposed massive gaps in our awareness of AI-driven deception and our ability to detect it.[2] Despite the undeniable benefits of AI, it is crucial to confront how AI-generated content has become virtually indistinguishable from legitimate information, and to consider how Congress can provide the minimum standards necessary to curtail misinformation.
AI-generated content has infiltrated social media, especially in the context of the 2024 elections. Such content includes computer-generated text, photos, videos, and audio that are increasingly indistinguishable from human-created material. It can mirror legitimate news sources and impersonate real individuals (e.g., through deepfakes). More recently, bad actors have commandeered AI to deceive the public through fake accounts that spread false information.[3] These technological developments have made it easier than ever to create and spread misinformation on a mass scale.
In the context of United States elections, AI contributes to misinformation through fake photos and videos conveying bogus endorsements and false information, and through AI-generated social media accounts that create echo chambers around particular narratives.[4] These consequences were acutely realized during the 2020 and 2024 U.S. election cycles, in which third-party groups led, and continue to lead, misinformation campaigns designed to sway public opinion and undermine democratic processes.[5]
In response to growing criticism, Facebook (Meta)[6] and X (formerly Twitter)[7] have attempted to self-regulate AI-generated content by implementing policies to moderate content likely to be AI-generated, supporting user-led fact-checking initiatives, and even modifying their algorithms to curb the spread of false information.[8] These efforts, while a step in the right direction, have been insufficient to identify, tag, and, when needed, remove fake content at the rate at which it is produced.
In the past year, the White House and Congress have taken action to address AI-created misinformation. In October 2023, the White House issued an Executive Order on Artificial Intelligence, balancing the need for comprehensive AI safety and security standards with the need to maintain user and consumer privacy.[9] In the 118th Congress, several bills aimed at combating AI-based misinformation in the election process have been introduced, none of which has made it to the President’s desk for signature.[10]
As these bills show, crafting effective legislation to combat AI-driven misinformation is complex and requires balancing several competing interests. The benefit of protecting the public must be weighed against technological feasibility: any software designed to track AI-generated content would itself likely require another form of AI to do the monitoring.[11] Mandatory minimum standards, such as uniform disclosure requirements for AI-generated content, must also be coordinated with efforts to increase digital literacy and strengthen public education about the pitfalls and limitations of AI. Ultimately, legislative success depends on protecting the public and preserving fundamental freedoms (e.g., free speech) while establishing minimum standards durable enough to anticipate how AI will evolve.
A significant question courts will have to address is whether AI-produced content is entitled to First Amendment free speech protection. In 2010, the Supreme Court in Citizens United v. Federal Election Commission upheld free speech protections for independent political expenditures by corporations.[12] Yale Law School Knight Professor Jack M. Balkin has explained that AI programs would likely not rise to the level of personhood the Court has extended to corporations or associations, because AI-generated information does not come from a “group of human beings who work together,” unlike in Citizens United.[13]
As lawmakers, technology companies, courts, and the public confront the future of AI and the misinformation it injected into the 2020 and 2024 election cycles, the urgency of addressing AI-generated misinformation on social media cannot be overstated. Comprehensive regulation, coupled with continued innovation in detection technology and media literacy, can help create a more informed public and a healthier digital ecosystem.
[1] See Artificial Intelligence’s Use and Rapid Growth Highlight Its Possibilities and Perils, U.S. Gov’t Accountability Off. (Sept. 6, 2023), https://www.gao.gov/blog/artificial-intelligences-use-and-rapid-growth-highlight-its-possibilities-and-perils.
[2] See Understanding the Different Types of Artificial Intelligence, IBM (Oct. 12, 2023), https://www.ibm.com/think/topics/artificial-intelligence-types.
[3] See Pranshu Verma, The Rise of AI Fake News is Creating a ‘Misinformation Superspreader’, Wash. Post (Dec. 17, 2023, 6:00 AM), https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/.
[4] See Dan Merica & Ali Swenson, Trump’s Post of Fake Taylor Swift Endorsement is His Latest Embrace of AI-Generated Images, Associated Press (Aug. 20, 2024, 4:48 PM), https://apnews.com/article/trump-taylor-swift-fake-endorsement-ai-fec99c412d960932839e3eab8d49fd5f.
[5] See Zachary Cohen, Sean Lyngaas & Evan Perez, Exclusive: US Intelligence Spotted Chinese, Iranian Deepfakes in 2020 Aimed at Influencing US Voters, CNN (May 15, 2024, 10:46 AM), https://www.cnn.com/2024/05/15/politics/us-intelligence-china-iran-deepfakes-2020-election/index.html; see also Paige Gross, AI Will Play a Role in Election Misinformation. Experts Are Trying to Fight Back, Wash. State Standard (Aug. 19, 2024, 4:01 AM), https://washingtonstatestandard.com/2024/08/19/ai-will-play-a-role-in-election-misinformation-experts-are-trying-to-fight-back/.
[6] See Driven by Our Belief That AI Should Benefit Everyone, Meta, https://ai.meta.com/responsible-ai/.
[7] See Synthetic and Manipulated Media Policy, X (Apr. 2023), https://help.x.com/en/rules-and-policies/manipulated-media.
[8] See Tony Romm & Isaac Stanley-Becker, Facebook, Twitter Disable Sprawling Inauthentic Operation That Used AI to Make Fake Faces, Wash. Post (Dec. 20, 2019, 7:13 PM), https://www.washingtonpost.com/technology/2019/12/20/facebook-twitter-disable-sprawling-inauthentic-operation-that-used-ai-make-fake-faces/.
[9] Exec. Order No. 14,110, 88 Fed. Reg. 75191 (Nov. 1, 2023).
[10] See, e.g., Securing Elections From AI Deception Act, H.R. 8858, 118th Cong. (2024); Fraudulent Artificial Intelligence Regulations (FAIR) Elections Act of 2024, S. 4714, 118th Cong. (2024); Preparing Election Administrators for AI Act, S. 3897, 118th Cong. (2024); AI Transparency in Elections Act of 2024, S. 3875, 118th Cong. (2024); Protect Elections from Deceptive AI Act, S. 2770, 118th Cong. (2023).
[11] See U.S. Gov’t Accountability Off., GAO-24-107292, Science & Tech Spotlight: Combating Deepfakes (2024).
[12] Citizens United v. Fed. Election Comm’n, 558 U.S. 310, 347 (2010).
[13] See AI and the First Amendment: A Q&A with Jack Balkin, Yale L. Sch. (Jan. 29, 2024), https://law.yale.edu/yls-today/news/ai-and-first-amendment-qa-jack-balkin.