Alright, Alright, Alright: Owning Your Identity in the Age of AI Deepfakes
- Lucie Riley & Cristina Rindfleisch

“That which withers in the age of mechanical reproduction is the aura of the work of art.”
— Walter Benjamin
Advances in generative artificial intelligence (“AI”) have made it possible to replicate a person’s voice, likeness, and mannerisms with unprecedented realism. These technologies allow users to create convincing audio and video impersonations, known as “deepfakes,” with minimal effort. While early deepfakes were often crude and easily detectable, recent developments have dramatically improved their quality. Scholars have warned that the rapid progression of this technology poses a serious risk to privacy, reputation, and the integrity of public discourse.
Against this backdrop, Matthew McConaughey recently secured eight trademark registrations for his persona, voice, and likeness in order to prevent unauthorized use of his image by AI. His registrations include a seven-second porch scene and his iconic “Alright, Alright, Alright” catchphrase from Dazed and Confused. McConaughey’s proactive strategy reflects a rapidly growing concern among performers and public figures that generative AI will facilitate the widespread replication and exploitation of individuals’ identities without their consent.
Generative AI is accelerating the commodification of identity by transforming voices, likenesses, and personas into replicable digital assets that can be created and distributed at scale. McConaughey’s strategy demonstrates that public figures are turning to existing intellectual-property doctrines to guard against AI impersonations, even as the limitations of those frameworks become increasingly apparent. Legal protections such as the right of publicity, trademark, and copyright were developed long before technology made widespread, near-perfect imitation both inexpensive and widely accessible. The rapid proliferation of AI-generated deepfakes illustrates how easily identity can now be replicated or exploited: cybersecurity firm DeepStrike estimated that online deepfakes skyrocketed from roughly 500,000 in 2023 to about 8 million in 2025, with annual growth nearing 1,500%. As a result, existing legal doctrine struggles to address the scale, speed, and realism of AI impersonation, revealing a growing need for clearer and more comprehensive legal protection against the unauthorized digital replication of identity.
Existing legal frameworks, including common law, state law, and federal intellectual property law, are ill-equipped to address AI-generated identity replication at scale. To remedy these deficiencies, Congress should adopt a uniform federal framework, such as the NO FAKES Act, that recognizes identity as a protectable interest and provides consistent, nationwide protection against unauthorized digital replicas.
Rise of AI Deepfakes and Cloning
Most deepfakes are produced using generative adversarial networks (GANs). A GAN consists of two competing neural networks: a generator, which creates synthetic content, and a discriminator, which evaluates that content against real examples. Through repeated rounds of this competition, the generator’s output becomes progressively harder to distinguish from authentic material.
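The adversarial dynamic described above can be sketched in a few lines of code. The toy example below is purely illustrative, not a deepfake system: a one-parameter “generator” learns to mimic a simple one-dimensional data distribution while a logistic-regression “discriminator” tries to separate real samples from fakes. All variable names and parameter choices here are the authors’ illustrative assumptions.

```python
import numpy as np

# Toy GAN in one dimension (all names and parameters are illustrative).
# "Real" data: samples from a Gaussian the generator never sees directly.
rng = np.random.default_rng(0)
real_mu, real_sigma = 3.0, 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b maps random noise to fake samples.
a, b = 0.1, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.0, 0.0
lr = 0.05

for step in range(2000):
    z = rng.standard_normal(64)
    x_fake = a * z + b
    x_real = rng.normal(real_mu, real_sigma, 64)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * (np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator update: adjust a and b so the discriminator scores fakes as real.
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * np.mean((d_fake - 1) * w * z)
    b -= lr * np.mean((d_fake - 1) * w)

# After training, the generator's mean output (b) has drifted toward real_mu:
# the generator learned the real distribution only from the discriminator's feedback.
print(f"generator mean: {b:.2f}, real mean: {real_mu}")
```

Real deepfake systems replace these two linear models with deep networks operating on images or audio waveforms, but the alternating training loop, generator fooling discriminator, discriminator catching generator, is the same mechanism that makes detection a moving target.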
When deepfakes were first introduced, they were unconvincing and their falsity was easy to detect. However, recent studies investigating human judgment of deepfake audio have found that humans are poorly equipped to detect AI voice clones. In fact, detection tools currently lag behind the creation and improvement of deepfake technology. The rise of deepfake usage, together with the improvement of the underlying technology, carries the potential for a range of harms, including misinformation, privacy violations, security threats, and harassment.
There are several existing avenues for protection such as right of publicity, trademark (which McConaughey turned to), and copyright. These legal frameworks are not a perfect match for the current AI landscape, but they do offer some recourse.

Existing Legal Frameworks
Right of Publicity
Public figures often turn to the right of publicity for recourse against commercial misuse of their identity and likeness. Courts have expanded the right of publicity to also include distinctive voices. For example, in Midler v. Ford Motor Company, the Ninth Circuit addressed a commercial that imitated singer Bette Midler’s voice, holding that a voice is as distinctive and personal as a face and that the deliberate misappropriation of Midler’s voice therefore constituted a tort.

AI voice cloning creates rampant opportunities for the kind of for-profit “imitation” of distinctive voices at issue in Midler. A more recent example is “Heart on My Sleeve,” a song imitating the voices of Drake and The Weeknd, which quickly grew in popularity only to be removed from major platforms once it emerged that the vocals were AI-generated. AI has become exponentially better at imitation since that 2023 incident.

Courts have also found that a public figure’s name or likeness does not necessarily need to be used in order to violate the right of publicity, so long as their identity is sufficiently tied to the appropriating use. In Carson v. Here's Johnny Portable Toilets, Inc., the Sixth Circuit found that the use of Johnny Carson’s phrase, “Here’s Johnny,” appropriated Carson’s identity even though neither his full name nor his likeness was used. The court found that the association between the phrase and Carson was being commercially exploited, and that this constituted an invasion of his right of publicity. This is similar to how McConaughey’s “Alright, Alright, Alright” is associated with his identity without use of his name or likeness.
While the right of publicity offers some protection, its state-by-state variation and focus on commercial use render it insufficient to address the non-commercial harms posed by AI-generated identity replication.
Trademark
Another legal avenue for protecting identity, one that McConaughey has strategically pursued, is trademark law. Under the Lanham Act, individuals may bring claims against parties whose use of names, symbols, or other identifying characteristics creates a likelihood of consumer confusion regarding sponsorship or endorsement.
Celebrity plaintiffs have previously used trademark law to prevent unauthorized commercial associations with their identities. In Waits v. Frito-Lay Inc., a landmark Ninth Circuit case from 1992, the court held that unauthorized imitation of Tom Waits’s distinctive voice in advertising constituted false endorsement under the Lanham Act. The decision demonstrates that trademark law can extend beyond traditional marks to protect against misleading associations with identity, but only where the use creates a likelihood of consumer confusion in a commercial context.
McConaughey’s trademark registrations represent a strategic use of this existing doctrine. Registering elements of his persona, like catchphrases and recognizable imagery, strengthens his standing to bring future claims of unauthorized use in federal court, and may deter AI-generated videos even where they are not aimed at a commercial market. McConaughey’s approach demonstrates a dynamic use of trademark law in response to the evolving generative AI landscape: it reframes aspects of his identity and work as commercial source identifiers subject to trademark protection.

Nevertheless, this strategy highlights the limitations of trademark law. Trademark law is not designed to protect identity itself; its primary purpose is to prevent consumer confusion in commercial markets. As a result, where AI replicates a person’s identity outside the commercial context, the use falls outside the scope of protection that existing trademark law offers.
Copyright
The current landscape of copyright law offers even more limited protection against AI-generated identity replication. Under U.S. copyright law, protection extends only to original works of authorship fixed in a tangible medium of expression. A person’s voice, likeness, or persona alone generally does not qualify for copyright protection.
The United States Copyright Office has acknowledged these limitations in its recent report on artificial intelligence and digital replicas. The report notes that current copyright law does not adequately address situations where artificial intelligence systems generate unauthorized replicas of individuals’ voices or likenesses. Because copyright law focuses on creative works rather than identity itself, it is fundamentally ill-suited to address the unauthorized digital replication of identity.
Emerging Legislation and Proposed Solution

Current protections against AI-generated deepfakes consist of a fragmented patchwork of state laws. While many states have enacted legislation targeting harms such as nonconsensual deepfake pornography and misinformation, these efforts remain inconsistent in scope and enforcement. For example, Tennessee’s ELVIS Act expands the right of publicity to cover AI-generated voice simulations and imposes liability on those who enable such impersonations, while California’s proposed Digital Dignity Act focuses on platform accountability and enhanced remedies. Despite these developments, state-by-state regulation is inherently limited. Variations across jurisdictions create gaps in protection and uncertainty, underscoring a need for a uniform federal framework.

Proposed legislation such as the NO FAKES Act (“Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2025”), introduced in the House in April 2025 by Rep. Maria Salazar, moves in this direction. The NO FAKES Act would establish a federal cause of action extending liability to distributors and importers of a product that is primarily designed to produce digital replicas of a specifically identified individual without the authorization of the individual, the right holder, or the law.
However, any federal approach must navigate significant constitutional constraints. Under United States v. Alvarez, falsity alone does not remove speech from First Amendment protection; rather, regulation generally requires a showing of cognizable harm. This creates a central tension: laws broad enough to address the scale of AI-generated impersonation risk infringing on protected speech, while narrower laws may fail to meaningfully address the harm. Although proposals like the NO FAKES Act attempt to balance these concerns with the inclusion of safeguards to prevent chilling of constitutionally protected speech, like parody and satire, the challenge of reconciling effective regulation with First Amendment protections remains unresolved.

A December 2025 executive order by the Trump Administration signals a shift toward a national policy framework for AI. Currently, in the absence of federal legislation, states are left to regulate independently, despite the likelihood that such laws may ultimately be preempted or constrained by future federal action. This dynamic further highlights the need for a coherent national framework capable of addressing both the technological scale of AI and the constitutional limits on its regulation.
Conclusion
Nearly a century ago, Walter Benjamin warned that technological reproduction would erode the “aura” of original works by severing them from their unique presence in time and space. Today, generative artificial intelligence presents a parallel challenge, not to works of art, but to human identity itself. By enabling the rapid, vast replication of voices, likenesses, and personas, generative AI threatens the very distinctiveness of identity, reducing it to a replicable, commodified asset.
Public figures like Matthew McConaughey have turned to trademark law in an effort to protect elements of their identity, while courts have continued to rely on the right of publicity in the context of unauthorized commercial exploitation. As anxiety grows over the potential large-scale harms posed by generative AI, federal and state governments have begun to develop targeted legislation in response. But this patchwork of new state laws and existing legal doctrine is fragmented at best and does not provide a cohesive answer to the scale and speed at which generative AI is evolving.
A federal framework, such as the NO FAKES Act, would provide a unified approach, setting clear standards and more effective remedies against unauthorized replication of identity. The challenge of a federal unified framework lies in balancing competing constitutional and policy interests.
As generative AI continues to evolve, the rapid increase in AI deepfakes demands that the law confront a fundamental question: whether identity should be recognized as a cognizable form of intellectual property deserving of protection.
*The views expressed in this article do not represent the views of Santa Clara University.