Problematic Pornography: Deep Fakes and the Dark Side of AI


Credit: cottonbro studio | Pexels


If you’ve been on social media lately, you’ve probably seen videos of the current and former presidents playing video games together while discussing topics like their favorite strains of marijuana. The videos aren’t a clever PR scheme cooked up by the White House in an attempt to repair relations between Biden and Trump, nor are they an attempt to make Biden appear more relatable to younger constituents: they’re deep fakes.


Deep fakes are a far more sophisticated form of photo manipulation in which artificial intelligence (“AI”) analyzes photos, videos, and audio of an individual in order to create a convincing but falsified replication of that person in a new context. In the popular videos of Biden and Trump playing video games, for example, deep fake AI has meticulously analyzed the way the two presidents speak, including their facial expressions, phonetics, and unique mannerisms. The result is a video that looks shockingly similar to the presidents’ genuine interviews and statements. Their voices are nearly identical, and their mouths move almost perfectly in sync with their words. The catch, of course, is that Biden and Trump have never sat down to play video games together, and they certainly haven’t used, at least publicly, some of the vulgar language you’ll hear in the videos. Even so, you wouldn’t be blamed for taking a brief moment to consider whether the deep fakes might be real. That realism is the appeal: deep fakes let their makers place politicians and celebrities in scenarios so absurd that viewers immediately understand them as comedy. However, the more advanced deep fake technology becomes, the harder its videos are to distinguish from reality. This opens the door for abuse by creators motivated by something more than a quick laugh.


Recently, more harmful deep fakes have garnered mainstream attention. At the start of the Russia-Ukraine conflict, a video surfaced on Ukrainian news sites appearing to show President Zelensky calling for surrender. President Zelensky is shown standing solemnly at a podium, urging Ukrainian soldiers to lay down their arms and surrender to the Russian army. The Ukrainian government quickly deemed the video a fake, pointing out that the President’s accent was inaccurate and his head appeared pixelated. The deep fake had been uploaded by Russian hackers in an attempt to sow uncertainty among Ukrainian civilians and military members. Luckily, the Ukrainian government had already warned its populace about Russia’s use of deep fake propaganda. Still, a single deep fake carried the potential for massive damage at a time when the morale of a nation hung in the balance.


An entirely different category of harmful deep fakes has been the latest focus of U.S. media: deep fake pornography. Imagine you find yourself in one of the darker corners of Twitter and stumble across pornographic content, which, on its own, is not unusual. Social media platforms are home to countless “Not Safe For Work” (“NSFW”) accounts. What is unusual, however, is who you see in the video: it’s you. You don’t remember filming adult content or consenting to having your body put on display for anyone who wishes to see it, because you never did. The body you see isn’t yours; it was likely taken from a real pornographic video shot by a real adult entertainer. The face, however, is yours, and it has been seamlessly attached to another person’s body. Your facial expressions match the original expressions of the adult entertainer, and your words match theirs too, though the voice sounds like your own. To you, the video is obviously fake. To somebody else, it might look very real. Unfortunately, there is very little you can do to stop the deep fake from spreading. This nightmare is a reality for countless women today, and the law has lagged in providing them with a remedy.


Background


Earlier this year, deep fake pornography made headlines when viewers watching a popular Twitch streamer noticed that some of the streamer’s open browser tabs led to a pornographic deep fake site featuring the faces of other female Twitch streamers. A Twitch user watching the stream screenshotted the site’s address and posted it in a widely shared Reddit thread, leading to further exposure. The streamer, Atrioc, apologized for viewing the content and spreading it across the internet, claiming he paid for it out of “morbid curiosity.” For the female Twitch streamers targeted, QTCinderella, Sweet Anita, Maya Higa, and Pokimane, no amount of morbid curiosity could justify the violation of their privacy and reputations. These streamers have millions of followers on Twitch and go to great lengths to craft and protect their images and brands. Sweet Anita, for example, has kept her real name off the internet and has chat moderators ban any user who sexualizes her in chat. Despite her efforts to remain in control of her own narrative, deep fake technology threatens her image. She stated, “I am being forced and sold [into sex work] by someone I don’t know.” QTCinderella similarly took to Twitter to voice her outrage, saying, “Stop spreading it. Stop advertising it … Being seen ‘naked’ against your will should NOT BE A PART OF THIS JOB.”


The female streamers are not alone. Since deep fakes began appearing online in 2015, their primary use has been non-consensual pornography, and that trend is not slowing down. The number of pornographic deep fake videos found online has nearly doubled every year since 2018. A 2019 report found that over 96% of deep fakes on the internet were non-consensual pornography, and that 99% of those deep fakes featured women. Deep fake creators advertise openly online and allow anybody to solicit pornographic deep fake content of anyone’s likeness. For example, one Discord user offered to make deep fakes of “personal girls,” anybody with fewer than two million Instagram followers, for only $65. The process takes only five minutes.


The industry’s nonchalance has damaged the reputations and lives of the technology’s targets. One psychotherapist noted that “seeing images of yourself – or images that are falsified to look like you, in acts that you might find reprehensible, scary or that would be only for your personal life can be very destabilizing – even traumatizing. There is no ability to give consent.” Many of the women targeted by deep fake pornography speak about how they have been humiliated and dehumanized. For Sweet Anita, the leaked deep fakes stripped away personal choice and privacy. While she has no problem with sex work, she stated that she personally would never participate in that world. For her, the financial gain is outweighed by the stigma of sex work: “[i]t will cause people to disrespect and dehumanize me, it will affect people’s ability to listen to my opinions, [and to] take me seriously.” Despite her refusal to participate in sex work, the creators of her deep faked pornography seem to have made that decision for her. Deep fake pornography can also distort how victims see their own bodies. QTCinderella said that the amount of body dysmorphia she has experienced after seeing her deep fake photos has ruined her: “When you see a pornstar’s body so perfectly grafted onto where yours should be, it’s the most obvious game of comparisons that you could ever have in your life.”


Legal Landscape


Today, legal recourse for the unwanted spread of deep fake pornography appears minimal. QTCinderella spoke to lawyers about potential legal action against the deep fake content site. The lawyers’ answer was disheartening: “[W]e don’t have a case; there’s no way to sue the guy.” While the majority of U.S. states have passed laws banning revenge porn, deep fake pornography falls into a gray area of the law. Revenge porn and deep fake porn may both be forms of image-based sexual abuse, but deep fakes cannot be prosecuted under a revenge porn theory. In many states, revenge porn statutes require that the image or video actually reveal the victim’s own nude body. Deep fake pornography does not satisfy this element: a nude body is shown, but it is the nude body of a porn star, not of the deep fake victim. The distinction may seem like a technicality, but it allows deep fake pornography to fall through the cracks.


Because deep fakes are a new and rapidly evolving technology, regulation of deep fake pornography has lagged. Only four states, California, Virginia, Georgia, and New York, have implemented laws dealing with pornographic deep fakes. California gives deep fake victims a cause of action against anyone who “creates and intentionally discloses sexually explicit material” and knows that the depicted individual did not consent to the content’s creation or disclosure. Cal. Civ. Code § 1708.86. Although slow state legislation is better than none, the federal government has yet to acknowledge the deep fake porn problem. Other countries, like the United Kingdom, have already criminalized the spread of pornographic deep fakes. Meanwhile, American victims wonder when their privacy rights will be protected. Relief may be a long way off. Below, we explore the constitutional roadblocks and potential solutions to the deep fake pornography problem.


First Amendment Challenges


Congress has been relatively silent on the deep fake pornography issue. Only one bill concerning deep fakes has been passed into law: the National Defense Authorization Act for Fiscal Year 2020 (“NDAA”). The NDAA does very little to regulate deep fakes; it merely requires that reports on the “potential national security impacts of machine-manipulated media” be submitted to the congressional intelligence committees. Other legislative proposals have been introduced but shot down for varying reasons. Primary among them is the concern that a sweeping ban on deep fake pornography would violate the First Amendment, which in part prevents Congress from interfering with an individual’s right to free speech.


The right to free speech covers many forms of expression, including video games, written works, and online posts. Crucially, the government generally may not regulate speech based on its content, which is why laws attempting to regulate pornography or other NSFW forms of expression are routinely struck down under the First Amendment. Regulations of speech must typically be content-neutral in order to pass the Court’s scrutiny. Attempts to regulate speech simply because it contains pornographic content are, by definition, content-based restrictions; such restrictions trigger strict scrutiny and are struck down the vast majority of the time. Any attempt to regulate deep fake pornography, whether by outlawing deep fakes entirely or by restricting only pornographic deep fakes, would therefore likely fail as an unlawful intrusion into the right of free speech and expression. That said, there are categories of speech not protected under the First Amendment, like obscene speech and child pornography. If deep fake pornography were to fall into either of those categories, it would be unprotected and subject to regulation.


Would it be obscene to have your likeness stitched onto a porn star’s body so that anybody with internet access could watch “you” have sex? Courts would likely say no. Obscene speech refers to pornography that appeals to the prurient interest, is patently offensive under contemporary community standards, and lacks serious literary, artistic, political, or scientific value. That definition, however, fails to pin down what actually qualifies as “obscene.” Judging by Supreme Court precedent, pornography must take a serious step too far to qualify. Think of content that rises to the level of child pornography; there is a reason obscenity and child porn are grouped together as exceptions to the American tenet of free speech. The problem with obscene speech is that it is difficult to define narrowly. The U.S. Supreme Court has taken an “I know it when I see it” approach, yet that test is hardly appropriate when it comes to restricting protected speech. For these reasons, it is especially difficult to decide whether deep fake pornography is obscene.


Courts use the three-part test from Miller v. California, 413 U.S. 15 (1973), to determine whether speech is obscene. The first prong, which asks whether the speech appeals to the prurient interest, could be satisfied. A prurient interest is a shameful or morbid interest in nudity or sex. The creation of deep fake porn centered on illicit depictions of nonconsenting women could plausibly qualify as appealing to a prurient interest under Miller.


The second prong, which asks whether the speech is patently offensive, could also likely be satisfied. This element focuses not on whether the work depicts an offensive act, but on whether the act is depicted in an offensive way. The average person might well find that using machine learning to “collect hundreds of images of a woman for the purpose of placing her face in a pornography without her consent” is patently offensive.


Third, to qualify as obscene, the speech must lack serious literary, artistic, political, or scientific value. Arguments could be made either way here, but many would struggle to find any value in falsified, nonconsensual, illicit pornography. It is the standard by which each element is judged, however, that makes it difficult to classify deep fake pornography as obscenity.


Each element is judged according to the values of the average person in a given community. In the age of the internet, however, there is no readily identifiable community or average person. With no single standard to apply to the internet community, whether deep fake pornography counts as “obscene” will vary based on who you ask. Many have accordingly called for obscenity to be judged by a national standard rather than community by community, since applying local standards to the internet “would provide the most puritan of communities with a heckler’s veto affecting the rest of the Nation.” A national standard, though, has shortcomings of its own. These issues make an already vague standard even harder to apply, and it appears unlikely that obscenity is the route toward regulating deep fake porn.


Section 230 Challenges


The difficulty in regulating deep fake porn doesn’t stop with the First Amendment. Section 230 of the Communications Decency Act, 47 U.S.C. § 230, also shields internet service providers from liability for the spread of deep fake pornography. Even if a court were to accept the argument that deep fake porn is “obscene” and therefore unprotected by the First Amendment, the holding would do very little to combat the problem. Third-party posters might be held liable for posting the NSFW deep fakes, but the distribution of those deep fakes would be largely unaffected.


Section 230 prevents internet service providers from being held liable for content posted by third-party users. It also gives providers the freedom to moderate their platforms, removing or declining to remove content, without fear of liability for removing one thing and not another. In the case of pornographic deep fakes, service providers may remove the content but are under no legal obligation to do so. Twitter and Pornhub are among the websites that have committed to removing pornographic deep fakes meant to cause harm, but many other providers are either too large to police the content or don’t care enough to act. Even when a site pledges to keep harmful deep fake content off its pages, the technology has become so convincing and widespread that complete removal is nearly impossible.


More discouraging is the issue of individual liability. A deep fake victim could, in theory, sue the individual who posted the content, but a user posting deep fake pornography is unlikely to do so under their real name, knowing the harm it can cause. Victims are left unable to sue the website hosting the content and unable to identify the individual at the root of the harm.


Potential Solutions


There are a few potential avenues for deep fake pornography victims, though the options are slim and largely unhelpful. One option is a claim for copyright infringement if the photo used in the deep fake was copyrighted, though the copyright typically belongs to the photographer rather than the person depicted. A defamation claim may also succeed, because the victim is depicted in a video that they are not actually in. Defamation claims have four common elements:

  1. A false statement purported to be true;

  2. Communication of that statement to a third party;

  3. Fault amounting to at least negligence on the part of the content’s poster; and

  4. Harm to the subject of the defamatory statement.

The elements of this claim could potentially be proven, but courts have shown that they are exceedingly wary of any restraint on free expression. Further, for public figures to recover in a defamation suit, the Supreme Court requires a showing of “actual malice,” meaning the statement was made with knowledge of its falsity or with reckless disregard for the truth. For private individuals, states may define their own standards of liability for publishers of defamatory content, making it hard to predict how courts would approach deep fake pornography. And again, none of this matters unless you can identify the individual posting the deep fake pornography.


Another tort option is a claim for intentional infliction of emotional distress (“IIED”). As the Twitch streamers’ experiences show, deep fake pornography clearly has the potential to cause such distress. A complainant must prove four elements:

  1. The defendant acted;

  2. Their conduct was extreme and outrageous;

  3. The defendant’s conduct purposely or recklessly caused the plaintiff emotional distress; and

  4. The conduct did in fact cause such emotional distress.

This seems like a viable course of action: pornographic videos falsely depicting a person engaging in sexual acts likely fall outside our standards of decency and could well be considered “extreme and outrageous.” However, IIED claims face the same obstacle as defamation claims: a plausible claim does not matter unless the individual posting the deep fake pornography can be identified.


A final but unlikely option for deep fake porn victims is a privacy claim. Privacy claims allow individuals to recover for the publication of private, “non-newsworthy” information that would highly offend a reasonable person. Unfortunately, defendants can refute this type of claim by arguing that the data used to create the deep fake pornography was publicly available. That makes it difficult for victims such as celebrities, streamers, and politicians to bring a viable claim, because they are constantly in the public eye and countless images of them circulate on the internet.


A Third Exception to Free Speech


Though we are perhaps being optimistic, we believe there is a readily available solution to the complex pornographic deep fake problem. We mentioned earlier that there are two exceptions to freedom of speech: obscenity and child pornography. These exceptions are not written directly into the First Amendment; they were created by U.S. Supreme Court precedent. It is accordingly within the Court’s authority to expand those exceptions, or to create a new one, when justice so requires. We believe deep fake pornography presents one of those instances.


Deep fake porn poses a unique problem because its subject matter is largely protected by the First Amendment and its distribution is protected by Section 230. Despite those apparent protections, it is deeply detrimental to its victims and has little redeeming value. Accordingly, it should be treated the way the Court treats child pornography. The child pornography exception is uniquely strict in that it prohibits both distribution and possession; the obscenity exception prohibits only distribution. The difference between these exceptions lies in the nature of the content itself. In creating the exception for child pornography, the Court focused on the physical and psychological damage done to children, who are incapable of consent. The Court went beyond merely prohibiting distribution because it recognized that doing so would not cure the disease: as long as possession of child pornography is allowed, there will be a market for it, and its viewers will find a way to obtain it. To prevent the harm caused by child pornography, you must cut the proverbial head off the snake. We believe the same logic applies to deep fake pornography.


The nature of deep fake pornography is not dissimilar to that of child pornography. Neither children nor the targets of deep fake porn are capable of consenting to its creation and distribution. Just as in the Court’s analysis of child pornography, deep fake porn subjects its victims to exploitation and harm. It tarnishes carefully crafted public images and subjects those unfortunate enough to be targeted to harassment and the stigma of sex work. Without restricting both distribution and possession, deep fake porn would be nearly impossible to contain; merely restricting distribution is not enough, especially in the age of internet anonymity. Possession of deep fake pornography must also be restricted in order to deter those who would create or spread it. As AI technology continues to advance, so must the Court’s interpretation of free speech and its protection of those harmed by that technology. To do otherwise risks ignoring and invalidating the experiences of a rapidly growing number of women victimized by deep fake pornography.


*The views expressed in this article do not represent the views of Santa Clara University.
