
Deepfakes in the U.S.

Credit: Budiey | flickr

Misinformation, the practice of convincing someone that something untrue is true, has become dangerous, constant, and unregulated across media outlets, online social platforms, and TV. Recently, artificial intelligence (AI) has enabled a new form of misinformation: “deepfakes” that are incredibly difficult to identify.

Conceptually, deepfake technology is simple to understand. AI allows someone to create a video of anyone they want; a person’s likeness, voice, and even mannerisms can be crafted into a near-perfect copy. These deepfakes can impersonate anyone, including your favorite celebrity, in order to advertise a product they never actually endorsed. One recent example is videos of female celebrities who have found themselves depicted in explicit circumstances.

Deepfake technology has a social impact, but its impact also bleeds into the corporate, advertising, and political spheres. Moreover, deepfakes can significantly damage a person’s business, political, or personal reputation. So, how will industries that depend on public perception adapt to new technology that could allow someone to construct a full music video or alter someone’s entire persona with a single click?

Deepfakes have changed the advertising industry and may change the entertainment industry as we know it.

Deepfakes raise a series of issues, some of which are beyond the scope of this article. Generally, deepfake technology has advanced further and faster than the law can adapt. It is now common to see deepfakes used in advertisements and to spread misinformation.

In advertising, deepfakes make it easier for smaller businesses to appropriate a celebrity’s fame to market their products. Appropriating a celebrity’s likeness grants a business access to an enormous fanbase and media presence for no more than the cost of constructing the deepfake. No permission appears to be required from the celebrity whose face and presence are being used for someone else’s gain. In essence, using celebrity faces in advertisements promises significant revenue. Even where media platforms have deepfake policies, it is unclear whether they are being enforced, or even whether they can be enforced, because nothing can ever truly be deleted from the internet. For example, a deepfake of Joe Rogan promoting a supplement was taken down, but not before instigating a misinformed conversation in the comments. Although the false video had been removed, it remained accessible and gave the company temporary access to Joe Rogan’s massive fanbase. The damage was done.

The movie industry is especially vulnerable to deepfake advertisements. Recently, Bruce Willis’s likeness was used in an advertisement the actor had no part in. The advertising industry may have to create an avenue to legally obligate advertisers to pay celebrities for their likenesses, now that advanced AI can simply repurpose existing video and audio of celebrities without their physical presence. In this new AI realm, actors’ contracts may come to include clauses requiring actors to sign away any legal claims over the use of their image in deepfake advertisements. It is reasonable to assume that deepfakes’ success in advertising will create massive regulatory challenges in Hollywood. Deepfakes can already carry successful music videos and advertisements, but could they replace actors in movies?

Deepfakes are the newest and easiest way to spread misinformation, ruin reputations, and instigate fraudulent conduct.

Understandably, the deepfake industry has been incredibly efficient and effective at spreading misinformation. Beyond the political implications, deepfakes have the unique power to ruin carefully built reputations, threaten national security, and scam people and companies. In 2020, the U.S. Securities and Exchange Commission charged an individual who spread false rumors on online platforms to capitalize on their effect on company stocks. With deepfakes’ help, any bad actor could put false rumors in the mouth of a company’s CEO to manipulate the market. Deepfakes could also depict company executives making morally reprehensible statements, making it difficult for them to keep their jobs or find employment after termination. Again, for the low price of making a deepfake, an entire reputation could be ruined.

Deepfakes are also the perfect tool for fraud, especially now that voices can be manipulated. In Hong Kong, for example, a bank manager received a call that mimicked the voice of a company director he had previously spoken with, and ultimately authorized a transfer of $35 million. Paired with the ability to spoof caller ID, deepfakes’ unique ability to mimic voices almost identically can convince victims they are sending money to the intended recipient.

In short, the deepfake industry can make millions off of the different trades that utilize one’s identity and reputation.

Deepfakes have raised alarming legal concerns and prompted responses that attempt to address their impact.

Growing technology and the use of AI have made it easier than ever to instantly produce and distribute deepfake content, and leaving the public to judge online content’s authenticity for themselves invites skepticism.

To combat this issue, Section 5709 of the U.S. National Defense Authorization Act requires the Department of Homeland Security to monitor and report instances where deepfakes threaten national security or target U.S. elections, such as deepfakes that misrepresent a political candidate’s stance on an issue or portray a candidate committing misdeeds. To incentivize efforts to stop deepfakes, Section 5724 grants funds to stimulate the research, development, and commercialization of deepfake detection technologies.

States are still working out their approaches to the dangers deepfakes pose. Texas was the first state in the nation to prohibit the creation and distribution of deepfake videos, making it a misdemeanor. The new law, intended to protect political candidates and officials, imposes liability “if a person creates or causes the deepfake video to be published or distributed within 30 days of an election.” In addition, the Texas law requires intent “to injure a candidate or influence the result of an election.” California, for its part, passed AB 730 and AB 602. AB 730 mirrors the Texas law by prohibiting the use of deepfakes to influence political campaigns. AB 602 provides a right of action against anyone who creates or intentionally distributes sexually explicit material of another person without their consent, or that was falsely made. AB 602 remains in effect today, but AB 730 will sunset on January 1, 2023. AB 730 has been scrutinized for potential First Amendment violations, with opponents arguing that because the act prohibits altering content in political campaigns, candidates themselves would be barred from using modified videos.

Creative solutions for monitoring deepfake misuse have emerged within the tech and publishing industries.

The publishing and technology worlds have allied to create Project Origin, which aims to develop technical prototypes that detect misrepresented and manipulated electronic content. Project Origin’s collaborators have established a framework in which the prototype focuses on: (1) digitally signed links that verifiably trace media content back to its publisher, and (2) validation checks certifying that the material remained unaltered during distribution. The ability to trace content back to its publisher may deter individuals from creating deepfakes, but this new tracking system also poses a threat to privacy rights.
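The two-part framework described above can be illustrated with a minimal sketch. This is not Project Origin’s actual implementation (which, like C2PA-style systems, uses public-key signatures in embedded manifests); the publisher name, key, and function names below are hypothetical, and an HMAC over a content hash stands in for a real digital signature.

```python
import hashlib
import hmac

# Hypothetical shared secret standing in for a publisher's signing key.
PUBLISHER_KEY = b"publisher-secret-key"

def sign_media(media_bytes: bytes, publisher_id: str) -> dict:
    """Produce a manifest binding a content hash to its publisher (part 1)."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    tag = hmac.new(PUBLISHER_KEY, f"{publisher_id}:{digest}".encode(),
                   hashlib.sha256).hexdigest()
    return {"publisher": publisher_id, "sha256": digest, "signature": tag}

def verify_media(media_bytes: bytes, manifest: dict) -> bool:
    """Check traceability to the publisher and content integrity (part 2)."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest != manifest["sha256"]:
        return False  # content was altered during distribution
    expected = hmac.new(PUBLISHER_KEY,
                        f"{manifest['publisher']}:{digest}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

video = b"original newscast footage"
manifest = sign_media(video, "trusted-news-outlet")
print(verify_media(video, manifest))                  # True: untouched media verifies
print(verify_media(b"deepfaked footage", manifest))   # False: altered media fails
```

A distribution platform holding the manifest can thus reject any copy whose bytes no longer match what the publisher signed, which is the deterrent effect the framework relies on.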

In 2018, the Pentagon's Defense Advanced Research Projects Agency (DARPA) awarded a non-profit research group three contracts to study and research deepfakes. So far, researchers have identified speaker and location-based scene inconsistencies with 75% accuracy. Tech companies and non-profit organizations continue their research to tackle widespread deepfake misuse.


Fraud and manipulation are classic common law issues, yet the rise of deceptive deepfake technology poses new regulatory hurdles. Deepfakes’ success and almost flawless mimicry is a corporate, political, societal, and legislative concern, and their existence will challenge the law to evolve. Until we figure out how deepfakes can be adequately regulated and safely integrated into society, we will continue to see these tools impact reputations, advertisements, and political campaigns.

*The views expressed in this article do not represent the views of Santa Clara University.

