
Safety as Spectacle: Beyond the Take It Down Act’s Weaponized Illusion of Protection


Introduction 

AI-generated media has intensified the digital rights crisis of nonconsensual intimate imagery (NCII), especially through deepfakes that blur truth and fiction. Victims suffer reputational, emotional, and financial harm, while platforms struggle to respond effectively. With the rise of AI-driven tools, hyper-realistic digital forgeries now enable a new form of abuse—fabricating explicit content of anyone, from celebrities like Taylor Swift to politicians and even minors, regardless of whether they’ve ever shared intimate images. A single stolen selfie can be manipulated and spread across platforms and encrypted networks beyond the survivor’s control. Even successful takedown efforts rarely erase every copy, leaving victims uncertain who has viewed or shared the content—and why.

The harm is profound and enduring. Survivors face harassment, reputational damage, and repeated exposure to exploitative content. Victims are forced to relive their trauma through constant privacy violations and the fear that the abuse may never truly end.

Fourteen-year-old Elliston Berry learned this firsthand after a classmate used a "nudify" tool to turn one of her Instagram photos into explicit content that quickly circulated among peers. She said her "innocence was erased pixel by pixel." Her story shows how easily AI can weaponize everyday moments, and how powerless victims feel once the content spreads. Alongside 15-year-old Francesca Mani, another deepfake victim, Elliston became a vocal advocate for federal protections, leading to the enactment of the TAKE IT DOWN Act.

Across the country, victims of AI-generated abuse have been left to navigate the fallout largely on their own—facing disbelief, delayed action, or outright dismissal. In CBC’s Her Face Was Deepfaked Onto Porn, a woman discovered her likeness inserted into pornographic videos. When police failed to act, she launched her own investigation to track down those responsible. In WIRED’s profile of Breeze Liu, deepfake videos of her circulated for years before tech platforms responded—highlighting how survivors must often fight relentlessly just to be acknowledged. These cases paint a stark picture of institutional failure in the face of a fast-moving threat.

Addressing this crisis requires survivor-centered laws, trauma-informed reporting, and enforceable tech safeguards. Enacted in May 2025, the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (TAKE IT DOWN Act) is the first federal law to tackle this issue. 


The Shortcomings of the TAKE IT DOWN Act 

Although the TAKE IT DOWN Act recognizes the urgency of NCII, it falls short of offering the protections survivors need. First, experts including Eric Goldman, Jess Miers, and Adi Robertson caution that the Act's broad and vague provisions could convert its protective aim into a mechanism that chills protected speech and restricts lawful expression. Second, that same vagueness grants regulators and advocacy groups wide interpretive power, allowing them to stretch the Act's scope to suppress lawful adult and LGBTQ+ expression and to advance ideological campaigns such as Project 2025. Third, its procedural flaws—including the unrealistic 48-hour removal mandate, the lack of counter-notice or verification mechanisms, expanded FTC authority, and its reach into encrypted platforms—undermine the very privacy and security the law claims to protect. Together, these flaws show that a law intended to protect survivors instead promotes automated censorship, invites politicized misuse, and erodes fundamental privacy and speech rights.


Censorship Disguised as Protection

The law establishes two enforcement mechanisms against NCII. Section 2 criminalizes publishing or distributing intimate images—authentic or AI-generated—shared without consent, where the subject had a “reasonable expectation of privacy.” For minors, it extends liability to include mere nudity intended to humiliate or gratify. Section 3, the Act’s centerpiece, compels platforms to remove reported imagery within 48 hours and make “reasonable efforts” to delete re-uploads, with violations enforced by the FTC as “unfair or deceptive practices.” But unlike the DMCA, TAKE IT DOWN omits sworn attestations, counter-notice rights, and penalties for false reports—inviting over-removal and abuse. This analysis centers on Section 3, where the drive for rapid takedowns threatens to replace procedural fairness with automated censorship.
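To see what the "reasonable efforts" duty implies in practice, the sketch below shows one common way a platform might flag near-duplicate re-uploads of removed imagery: perceptual hashing. This is a minimal illustration only, assuming the third-party Pillow and ImageHash libraries; the Act does not prescribe any technique, and the file names, threshold, and helper names here are hypothetical.

```python
# Minimal sketch: flagging near-duplicate re-uploads of removed imagery with a
# perceptual hash. Illustrative assumptions only; the Act mandates no particular
# technique, and the threshold and paths below are hypothetical.
from PIL import Image          # pip install Pillow
import imagehash               # pip install ImageHash

removed_hashes = []            # hashes of images taken down after verified reports

def register_removed(path: str) -> None:
    """Record a perceptual hash of content removed after a report."""
    removed_hashes.append(imagehash.phash(Image.open(path)))

def looks_like_reupload(path: str, max_distance: int = 8) -> bool:
    """Return True if a new upload is within a small Hamming distance of any
    previously removed image. Unlike a cryptographic hash, a perceptual hash
    tolerates re-encoding, resizing, and mild cropping."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in removed_hashes)

# Hypothetical usage:
#   register_removed("reported_image.jpg")
#   if looks_like_reupload("new_upload.jpg"):
#       ...route to human review rather than deleting automatically...
```

Even this simple pipeline illustrates the policy tension: tighten the distance threshold and re-uploads slip through; loosen it and lawful look-alike images are swept into automated removal.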


At first glance, the TAKE IT DOWN Act promises faster, more consistent protection for survivors of digital sexual abuse. Yet beneath its urgent intent, the law’s vague language and procedural gaps risk producing new harms, undermining constitutional safeguards and the very survivor protections it claims to ensure. As Miers warns, the Act may function as a Project 2025 vehicle to criminalize pornography and marginalize LGBTQ+ communities, expanding censorship under the guise of protection.


Overbreadth, Vagueness, and Risk of Abuse

Key statutory terms such as “intimate,” “depiction,” and “synthetic” are left undefined, granting platforms and regulators broad discretion over what qualifies for removal. As Miers warns, this vagueness opens the door for overreach, potentially sweeping in lawful works such as satire, protest art, or documentary reporting that include nudity or intimacy without being exploitative. The law’s design, as legal analysts note, favors quick compliance over careful evaluation, effectively pressuring services to take down material first and verify later. Smaller platforms, lacking the legal or technical capacity to challenge complaints, are particularly vulnerable to over-censorship.

Further amplifying that risk, the Act shields platforms from liability for any “good-faith” takedown—even if the content doesn’t meet the statutory definition of an intimate depiction—creating a powerful incentive to remove anything remotely questionable. Nowhere do the criminal or civil provisions explicitly protect satire, parody, or artistic expression involving sexual imagery, leaving such speech at constant risk of suppression.

These structural flaws invite political abuse. As Robertson warns, the Act was enacted under an administration openly hostile to dissenting speech. Miers reminds us that during his 2025 State of the Union address, President Trump made his intentions clear, stating, "And I'm going to use that bill for myself too, if you don't mind—because nobody gets treated worse than I do online." The remark was widely interpreted as a reference to a viral AI-generated video depicting him kissing Elon Musk's feet, a prime example of political satire that could fall within the Act's ambiguous scope.


By compelling platforms to remove all identical copies of flagged material, the Act encourages broad preemptive deletions, mirroring the fallout of FOSTA-SESTA, which erased lawful adult content—disproportionately harming sex workers and marginalized creators. The result is predictable: over-removal from small services and continued immunity for the largest platforms.


Beyond overreach, the law's enforcement model opens the door to moral and political misuse. Miers warns that its vague standards align with the Project 2025 agenda to criminalize pornography and restrict sexual expression under "family values." Once the state can relabel consensual adult or LGBTQ+ content as exploitative, it may erase lawful material under the guise of protection. This would hit LGBTQ+ creators, sex educators, and body-positive communities hardest—groups already targeted by biased moderation. By collapsing the line between consent and obscenity, the TAKE IT DOWN Act risks enabling a new form of digital control in which survivor protection becomes the justification for censoring entire communities.


Procedural Deficiencies

Finally, the TAKE IT DOWN Act’s 48-hour takedown rule—the centerpiece of its “survivor-centered” design—lacks procedural safeguards found in established frameworks like the DMCA. There are no sworn attestations, penalties for false reports, or counter-notice options for wrongfully removed content. As the CCRI notes, this omission leaves the process highly vulnerable to abuse. Miers and Goldman warn that the law’s structure effectively “coerces platforms into automated censorship,” rewarding rapid compliance over accuracy and fairness. Smaller services, without the legal teams or infrastructure of major platforms, are most likely to over-remove to avoid liability—replicating what Goldman calls “the DMCA’s worst incentives.”


The Act’s definition of “covered platforms” is equally uneven. It applies broadly to user-generated content services but excludes curated or pre-selected ones, leaving major adult-content and media hosts outside its reach. This inconsistency weakens enforcement and fairness, carving out exemptions for some of the most problematic sites while burdening smaller platforms with compliance risk.


The TAKE IT DOWN Act also empowers the Federal Trade Commission (FTC) to treat violations as "unfair or deceptive acts or practices," vastly expanding its reach beyond commercial regulation. The CCRI describes this as "an alarming expansion of FTC authority," particularly amid rising concerns about the agency's politicization. As Robertson observes, even a well-drafted law is dangerous in bad faith: the Act could become "ammunition" for an administration intent on punishing political opponents while shielding favored platforms—ironically those most likely to host the harms it claims to fight.


Additionally, the TAKE IT DOWN Act’s failure to exempt encrypted platforms creates one of its most serious flaws. By extending takedown duties to private messaging and storage services, it demands the impossible—content moderation in spaces providers cannot access. As Miers notes, this structure pressures companies to weaken encryption to show reasonable efforts, effectively transforming secure systems into surveillance tools. Without clear standards, platforms may overcorrect—deploying invasive scanning or dismantling encryption altogether. As Robertson warns, such measures would erode cybersecurity and expose survivors’ private communications, turning a law meant to protect privacy into one that destroys it.
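To make the encryption problem concrete, here is a minimal sketch of why a provider cannot moderate end-to-end encrypted content it never sees in plaintext. It uses the Python cryptography package's Fernet recipe purely for illustration; real messengers use different protocols (such as Signal's), and the variable names here are illustrative.

```python
# Minimal sketch of the end-to-end encryption dilemma: the provider relays only
# ciphertext and cannot inspect it for NCII. Fernet is used purely for
# illustration; real messaging apps use different protocols (e.g., Signal's).
from cryptography.fernet import Fernet   # pip install cryptography

# The key lives only on the sender's and recipient's devices, never the server.
device_key = Fernet.generate_key()
sender = Fernet(device_key)

plaintext = b"private image bytes ..."     # never leaves the device unencrypted
ciphertext = sender.encrypt(plaintext)     # this opaque blob is all the platform sees

# Without device_key the platform cannot tell whether the payload is an intimate
# image, a meme, or a grocery list, so a takedown duty can only be met by
# scanning on the device or weakening the encryption itself.
print(ciphertext[:32], b"...")

# Only the recipient's device, which holds the key, can recover the content.
recipient = Fernet(device_key)
assert recipient.decrypt(ciphertext) == plaintext
```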


Overall, these critiques expose a critical flaw: a law intended to protect survivors can inadvertently cause further harm—prioritizing speed over fairness, visibility over safety, and enforcement without empathy. Without stronger, comprehensive action, victims remain exposed while perpetrators exploit the gap between innovation and accountability.


Alternative Solutions to NCII

Existing laws already provide avenues for redress. Forty-eight states criminalize nonconsensual intimate imagery, and federal civil remedies have been available since 2022. Traditional causes of action—defamation, harassment, extortion, and false light invasion of privacy—remain effective tools to hold offenders accountable. Rather than expanding federal power through sweeping new mandates, Congress could strengthen these existing protections and invest in enforcement to deliver real rather than symbolic protection.


True solutions must put survivors at the center, combining accountability with education and prevention. At the same time, platforms need to adopt forensic governance systems grounded in authentication and transparency, such as those advocated by Dr. Hany Farid of Get Real Labs, to ensure verification is proactive, clear, and evidence-based.


Education and Digital Literacy

A 2025 survey by the Center for Democracy & Technology (CDT) found that 15% of students knew of AI-generated explicit images of classmates, with girls disproportionately targeted. Despite the prevalence of both real and synthetic NCII in K–12 schools, school responses to online safety, harassment, and AI misuse remain inconsistent. A 2024 Education Week survey revealed that 71% of teachers had received no professional development on AI. Researcher Riana Pfefferkorn explains that students often lack the maturity or long-term reasoning to grasp the consequences of their actions, yet the impact on victims is always severe. She recalls a case in which a student kept folders of AI-generated images targeting multiple classmates.


As such, legal action alone is not enough. Real progress demands early education that builds digital literacy, ethical awareness, and respect for online consent. Amina Fazlullah, head of tech policy advocacy at Common Sense Media, emphasizes that laws addressing deepfakes and NCII must be matched by classroom instruction that teaches students to detect manipulation, resist exploitation, and understand the broader impact, especially as young people increasingly use powerful AI tools without clear guidance. 


Integrating AI ethics, digital literacy, and consent education into schools—paired with enforceable legal protections—builds a multilayered defense that reduces harm, fosters resilience, and safeguards vulnerable students. Effective school training should cover:

  • What deepfakes are: AI-manipulated photos, videos, or audio showing someone saying or doing things they never did.

  • How they’re created: Techniques like face-swapping, voice cloning, and lip-syncing.

  • Risks and abuses: Including impersonation, blackmail, cyberbullying, non-consensual intimate images (NCII), and identity theft.

  • Detection skills: Spotting anomalies such as unnatural blinking, inconsistent shadows, lip-sync errors, verifying sources, and using reverse image or audio searches.

  • Ethical and practical discussions: Scenario-based questions that promote critical thinking, consent awareness, and responsible technology use.


Broader K–12 guidance recommends:

  • Staff training: Equipping educators to respond effectively to synthetic media harassment and NCII.

  • Policy updates: Revising codes of conduct and incident-response plans to explicitly address AI-driven abuse.

  • Student education: Lessons linking digital literacy with empathy, consent, and ethical technology use, recognizing students may engage with AI tools unaware of potential risks.


By combining a clear understanding of deepfake technology, detection and critical thinking skills, and ethical frameworks, schools can better prepare students and staff to recognize, respond to, and prevent AI-enabled harms.


From Detection to Verification: Rebuilding Trust in Deepfake Governance

Even the strongest laws and public education efforts can't succeed without reliable technical tools. Today's deepfake detection systems fall short—they rely on AI to flag suspicious content with confidence scores but provide no solid, verifiable proof. These tools are often inaccurate and easily fooled by basic changes like cropping or compression. Worse, they operate like "black boxes," offering little transparency into how results are produced. That opacity makes them difficult to trust, especially in court, where evidence must be consistent, explainable, and repeatable. Experts caution against relying on unverifiable systems for authentication or reporting.
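The fragility is easy to demonstrate. The sketch below is a self-contained toy using Pillow and NumPy on a synthetic gradient image: a single routine JPEG re-encode leaves the picture looking unchanged to a viewer while altering its bytes and low-level pixel statistics, exactly the signals many detectors and exact-match filters depend on. The image and numbers here are illustrative, not measurements from any real detector.

```python
# Toy demonstration: one routine JPEG re-encode changes an image's bytes and
# low-level statistics while leaving it visually the same. Detectors keyed to
# such low-level residues (and exact-match filters) see a "different" image.
# Self-contained; the synthetic gradient stands in for any photo.
import io
import hashlib
import numpy as np
from PIL import Image            # pip install Pillow numpy

gx, gy = np.meshgrid(np.arange(256, dtype=np.uint8), np.arange(256, dtype=np.uint8))
original = Image.fromarray(np.dstack([gx, gy, np.full_like(gx, 128)]))

def jpeg_roundtrip(img: Image.Image, quality: int) -> Image.Image:
    """Re-encode through JPEG at the given quality, as chat apps routinely do."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

recompressed = jpeg_roundtrip(original, quality=85)

pixel_change = np.abs(
    np.asarray(original, dtype=np.int16) - np.asarray(recompressed, dtype=np.int16)
).mean()
same_bytes = (hashlib.sha256(original.tobytes()).hexdigest()
              == hashlib.sha256(recompressed.tobytes()).hexdigest())

print(f"mean per-pixel change after one JPEG pass: {pixel_change:.2f}")  # small but nonzero
print(f"byte-for-byte identical to the original:  {same_bytes}")         # False
```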


To move beyond these limits, researchers advocate a shift from detection to forensic verification—an approach grounded in transparency and scientific rigor. Get Real Labs, founded by digital forensics expert Dr. Hany Farid, embodies this shift. Instead of simply trying to spot fakes with AI, Farid's team studies how generative models work, identifying the tiny forensic traces left behind when media is synthetically created. By combining these findings with machine learning, their system can verify authenticity based on concrete evidence rather than probability scores.
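Farid's production pipeline is proprietary, but published forensics research gives a flavor of the kind of measurable trace involved: several studies have reported that generative models leave characteristic artifacts in an image's frequency spectrum. The sketch below computes a radially averaged power spectrum with NumPy and Pillow as a toy example of such an evidence-based measurement; it is not Get Real Labs' method, and any comparison baseline or threshold would be an assumption.

```python
# Toy illustration of a forensic-style measurement: the radially averaged power
# spectrum of an image. Published research on GAN-generated imagery has reported
# characteristic high-frequency artifacts visible in such spectra. This is NOT
# Get Real Labs' pipeline, only the flavor of measurable evidence it builds on.
import numpy as np
from PIL import Image            # pip install Pillow numpy

def radial_power_spectrum(path: str, size: int = 256) -> np.ndarray:
    """Return the radially averaged log power spectrum of a grayscale image."""
    img = Image.open(path).convert("L").resize((size, size))
    spectrum = np.fft.fftshift(np.fft.fft2(np.asarray(img, dtype=float)))
    power = np.log1p(np.abs(spectrum) ** 2)

    # Average power over rings of equal distance from the spectrum's center.
    yy, xx = np.indices(power.shape)
    radius = np.hypot(xx - size / 2, yy - size / 2).astype(int)
    sums = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.maximum(np.bincount(radius.ravel()), 1)
    return sums / counts

# Hypothetical usage: compare the high-frequency tail of a questioned image's
# profile against profiles from trusted camera captures; systematic spikes or
# excess energy there are one documented cue of synthetic origin.
# profile = radial_power_spectrum("questioned_image.png")
```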


This method builds a bridge between technical accuracy and legal reliability. As WIRED notes, forensic verification can enable continuous identity protection and produce results that hold up in court. It moves the deepfake conversation from one of trust to one of proof—making media authenticity something we can test, explain, and verify.


Conclusion

The TAKE IT DOWN Act is safety as spectacle, promising protection for victims while serving as cover for advancing the administration's agenda. Lasting protection demands a framework grounded in law, education, and forensics: law that defends both survivors and free expression, education that builds consent and digital literacy, and forensic systems that make authenticity verifiable. Sustainable governance of synthetic media depends on this balance—linking accountability, awareness, and truth to ensure that privacy and justice endure in the digital age.


*The views expressed in this article do not represent the views of Santa Clara University.
