By: Anne Waldron
Published: February 20, 2026
The rise of deepfakes has become a constant threat in Americans’ everyday lives.[1] Artificial intelligence developed initially to aid humans has been weaponized by hatred, exploitation, and lust.[2] The concept of fabricated images is not new. In 1888, 30-year-old Le Grange Brown was accused of showing and selling photographs of “undraped women” in local saloons.[3] Brown had taken portraits of hundreds of young high-society women during social events and then cut and pasted their faces onto the bodies of nude models.[4] While the crude methods of the 19th century required scissors and glue, today anyone with basic coding knowledge can program apps that turn everyday people into unwilling pornographic subjects. Deepfakes burst into public consciousness around 2017, with 96% of the videos produced falling into the category of non-consensual pornography or image-based abuse, overwhelmingly targeting women.[5] Sexual exploitation through doctored images is not new, but deepfake technology blurs the line between reality and fabrication in unprecedented ways. So how is it that, more than seven years after deepfakes first emerged, there is finally legislation protecting everyday people? How can we protect ourselves from something the law cannot prevent, and how do we keep ourselves safe from the growing world of AI pornography?
Deepfakes raise serious legal concerns under intellectual property law, particularly the “right of publicity,” which refers to the right to control and profit from one’s own image and likeness.[6] The right of publicity was the center of White v. Samsung, in which Vanna White successfully sued Samsung for using a robot dressed in her likeness to promote its products without her consent. Other deepfake videos of celebrities have circulated widely on social media platforms, such as faux-pornographic videos of Taylor Swift that went viral on Twitter in 2024.[7] While celebrities worry about these AI videos preventing them from securing brand deals or major motion picture roles, they are able to rely on the right of publicity to protect themselves. Those who are not in the limelight, on the other hand, are disproportionately harmed because they lack any comparable legal protection.
With the rise of AI and the software’s ease of use, anyone is capable of creating fake videos, including middle school and high school teens.[8] AI deepfake porn is one of the leading causes of bullying.[9] Francesca Mani was in 10th grade when she discovered a group of boys had used AI software to fabricate photos of her and other female classmates naked.[10] These photos circulated around the school and had a severe impact on the girls’ safety and well-being, especially their mental health.[11] The victims of these waves of online harassment and bullying are, a majority of the time, young teenage girls.[12] These photos, created with “nudify” AI apps, can follow victims through their adolescence and into their adult lives, potentially harming college admissions, job opportunities, or basic background checks.[13] These photos and videos can cause harm in numerous ways, many of which are unseen.
Francesca is one of the many students who have been on the receiving end of AI’s negative aspects, but the threat of deepfakes targets every age group, with a strong hold on young teens.[14] These deepfake images and videos look so realistic that it is hard to convince people otherwise.[15] Explicit photos can populate a simple Google search, easily accessible to the public. There is a common misconception that because deepfakes exist only in the virtual sphere, they cannot cause harm in the real world. In reality, the effects of sexualized deepfakes bleed into the physical world: these photos, and the culture of sexualization they promote, can have a number of consequences, many of which can be permanent.[16] The problems do not end there. Adults are now able to use AI to generate nude photos of kids and blackmail them into paying money so that the AI-generated videos or pictures are not released.[17]
For years, there was no clear federal legal remedy for victims of deepfake abuse. That changed on May 19, 2025, when the “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act” (“TAKE IT DOWN Act” or the “Act”) was signed into law.[18] The law prohibits publishing intimate visual depictions or deepfakes of minors or non-consenting adults. It also requires “covered platforms” to establish a notice-and-takedown process through which victims can report violations of the law.[19] Once notified, platforms must remove the content promptly.[20] The Act categorizes images into two groups: authentic intimate images (non-AI, real images shared without consent) and deepfakes (AI-generated manipulated content).[21]
If you or a loved one becomes the victim of a deepfake under this Act, the reporting process is straightforward:
- Submit a notice containing an electronic signature of the individual depicted.
- Provide a good faith statement that the depiction was non-consensual.
- Include sufficient information for the platform to locate the content and to contact the individual filing the notice.[22]
Once notified, platforms have 48 hours to investigate and remove the material.[23] They must also make reasonable efforts to remove duplicates or reposts.[24] Failure to comply constitutes a violation of the Federal Trade Commission Act (“FTCA”).[25]
By May 19, 2026, covered platforms must fully comply with the obligations set forth in the TAKE IT DOWN Act.[26] Covered platforms are defined in the legislation as public websites, online services, and mobile applications that host user-generated content.[27] Notably, section 4 of the Act raises possible concerns for victims because its list of excluded platforms includes email services and providers of broadband internet access services.[28] Online services, applications, and websites are also excluded if their content is preselected by the provider of the website rather than being user generated.[29] Because the technology needed to create these deepfakes is readily available, tweens and others may find ways around the law. This opens the door for perpetrators to release AI porn or revenge porn under the guise of a website whose content was created with no input from its viewers. Another cause for concern is the ability of AI-generated images to spread through email without regulation; however, the battle over privacy rights thwarts the government’s ability to regulate digital inboxes.
While the TAKE IT DOWN Act cannot prevent deepfakes from being created, it marks an important step toward protecting victims, particularly women and minors, whose lives could be devastated if such videos remained online. Not only does it protect the victims, but it is the first step towards regulating the capabilities of AI. As it stands, adequate federal ethics codes, rules, or regulation on AI do not exist, nor do sufficient protections of individuals from the harms of fake AI videos. This Act lays out the foundation for how the law can evolve to confront the darker sides of artificial intelligence while balancing innovation with accountability.
[1] U.S. Dep’t of Homeland Sec., Increasing Threats of Deepfake Identities 16 (2024), https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf.
[2] Id. at 9-10.
[3] Jessica Lake, Disembodied Data and Corporeal Violation: Our Gendered Privacy Law Priorities and Preoccupation, 42 UNSW L. J. 119, 134 (2019).
[4] Id. at 134–35.
[5] Kristine Baekgaard, Georgetown Institute for Women, Peace and Security, Technology-Facilitated Gender-Based Violence: An Emerging Issue in Women, Peace and Security (2024).
[6] U.S. Dep’t of Homeland Sec., supra note 1, at 5.
[7] Bill Chappell, Deepfakes Exploiting Taylor Swift Images Exemplify a Scourge With Little Oversight, Nat’l Pub. Radio (Jan. 26, 2024), https://www.npr.org/2024/01/26/1227091070/deepfakes-taylor-swift-images-regulation.
[8] Farrah Walton, Teens using AI to Create Explicit Deepfakes of Classmates Prompts New Bill, CBS Austin (Aug. 16, 2024), https://cbsaustin.com/news/local/teens-are-using-ai-to-create-explicit-deepfakes-of-classmates-often-sparking-anxiety.
[9] Id.
[10] Kalie Walker, AI ‘Deepfakes’: A Disturbing Trend in School Cyberbullying, NeaToday (Apr. 10, 2025), https://www.nea.org/nea-today/all-news-articles/ai-deepfakes-disturbing-trend-school-cyberbullying.
[11] Id.
[12] Id.
[13] Id.
[14] Natasha Singer, Teen Girls Confront an Epidemic of Deepfake Nudes in Schools, N.Y. Times (Apr. 8, 2024), https://www.nytimes.com/2024/04/08/technology/deepfake-ai-nudes-westfield-high-school.html.
[15] U.S. Dep’t of Homeland Sec., supra note 1, at 6.
[16] Id.
[17] Id.
[18] TAKE IT DOWN Act, S. 146, 119th Cong. (2025), https://www.congress.gov/bill/119th-congress/senate-bill/146/text.
[19] Id.
[20] See id. (requiring platform owners to establish a process through which viewers can report AI deepfakes and obligating the owners to remove the reported videos or photos).
[21] Id.
[22] See id. (explaining the steps viewers or victims must take to remove deepfakes of themselves from certain platforms).
[23] Id.
[24] See id. (affirming the need for platforms to make strides in removing deepfakes from their pages).
[25] Id.
[26] Id.
[27] Id.
[28] Id.
[29] Id.