What Creators Can Do Now to Protect Against Deepfakes
Imagine a creator waking up to find fans tagging them in a video. They click the link, and there it is: an AI-generated double wearing their face and hawking a cryptocurrency scam, in their voice, with their cadence. Somewhere else, a podcast is narrated by a clone of that same voice. They get tagged in responses to the podcast, and the listeners clearly have no idea it is not real.
This is not a hypothetical. This is Tuesday.
A creator’s brand is their identity. Because of that, AI voice and likeness cloning is an existential threat to their business. The question clients ask constantly is what the law can actually do about it. The honest answer: not enough, yet. Here is where things stand.
The Cruise-Pitt Deepfake: A Watershed Moment
In February 2026, ByteDance, the company behind TikTok, launched Seedance 2.0, an AI video model. Within hours, a filmmaker typed a two-line prompt and generated a hyper-realistic rooftop fight scene between Tom Cruise and Brad Pitt. The clip hit a million views on X within days.
Two lines of text. No actors. No crew. No consent.
Industry groups and unions responded swiftly. SAG-AFTRA publicly condemned the use of AI-generated replicas without consent, and organizations affiliated with the Human Artistry Campaign criticized the technology as a threat to creators’ livelihoods. Deadpool co-writer Rhett Reese declared: “I hate to say it. It’s likely over for us.”
Despite public outcry and informal takedown efforts, versions of the clip continued to circulate across platforms. This is the world creators are operating in.
Deepfakes in the Courtroom: Lehrman & Sage v. Lovo
In July 2025, a Southern District of New York federal judge spent 60 pages explaining just how limited existing law is against AI identity theft. Voice actors Paul Lehrman and Linnea Sage were hired through Fiverr by employees of Lovo, Inc., an AI text-to-speech company, who promised their recordings would be used only for internal research. Lehrman was paid $1,200. Sage was paid $400.
Instead, Lovo cloned their voices and sold them commercially under the names “Kyle Snow” and “Sally Coleman.” Lehrman discovered this when he heard his own voice narrating an MIT podcast he had never recorded.
The plaintiffs filed 16 claims. Most failed:
- Federal trademark (Lanham Act): dismissed. A voice is protectable as a trademark only when it serves as a source identifier. Here, Lehrman and Sage’s voices were the product itself, sold under the fake “Snow” and “Coleman” names, not an indicator of the product’s origin.
- Copyright: mostly dismissed. Under 17 U.S.C. §114(b), copyright in a sound recording does not extend to AI outputs that mimic but do not directly reproduce the original.
- State law: the only foothold. Claims under New York’s Civil Rights Law Sections 50 and 51 (unauthorized commercial use of voice or likeness) survived, along with breach of contract and consumer protection claims.
The lesson: a creator defrauded into providing recordings, whose voice is then cloned and sold commercially, will likely lose the strongest federal claims. State law can be a backstop, but, as discussed below, it is inconsistent.
Laws Protecting Creators Against Deepfakes and Why They Fall Short
Limited Federal Protection
The TAKE IT DOWN Act, signed in May 2025, is the first major federal deepfake law. It prohibits publishing non-consensual intimate imagery, including AI-generated deepfakes intended to cause harm, and requires platforms to remove reported content within 48 hours.
This act is certainly meaningful for victims of intimate imagery abuse. For creators protecting their commercial identity, however, it does not cover brand impersonation, fake endorsements, or AI voice cloning for competing services. The NO FAKES Act, reintroduced in April 2025, would prohibit creating or distributing AI replicas of a person’s voice or likeness without consent, but it remains pending in the United States House of Representatives as of the date of this article.
The Right of Publicity State Law Patchwork
Courts increasingly point creators toward state right-of-publicity laws when federal claims fail. The problem is that these protections amount to a patchwork quilt, and in much of the country, that quilt has serious holes.
There is no federal right of publicity. Some states, including California, Illinois, Tennessee, Texas, and Indiana, have robust statutes establishing publicity rights, with defined remedies. Others, including Michigan, New Jersey, and New Hampshire, rely entirely on common law misappropriation claims, which are narrower, harder to prosecute, and less predictable.
Even in states with strong statutes, the right generally applies only to commercial uses of AI deepfakes. A deepfake that does not explicitly sell something may fall outside the statute’s reach, and First Amendment defenses for satire and commentary are frequently available to defendants.
Tennessee’s ELVIS Act is the current gold standard.
Effective July 1, 2024, the Ensuring Likeness, Voice, and Image Security Act explicitly names AI voice cloning as a prohibited use, closing a gap that older statutes, written before the technology existed, simply leave open. Combined with Tennessee’s indefinite post-mortem protection, the ELVIS Act gives artists and their estates lasting, modern tools to fight back. Johnny Cash’s estate put it to immediate use, suing Coca-Cola under the ELVIS Act in December 2025. For recording artists and performers especially, Tennessee’s framework is the most purpose-built protection against AI deepfakes currently on the books anywhere in the country.
The State Law Post-Mortem Gap
The problem with AI deepfakes is perhaps most acute for estates. Many states still treat the right of publicity as a privacy right that dies with the person. For those that extend it posthumously, the duration varies wildly: California provides 70 years, Indiana 100, Tennessee indefinitely (while rights are exploited), Virginia only 20.
In many states, this posthumous protection is limited. Notably, New York’s post-mortem right of publicity applies only to certain categories of deceased performers, leaving many estates without meaningful protection. Minnesota, Prince’s home state, has the same gap, as his estate learned after his death in 2016.
In many states, an AI company can generate a deepfake of a deceased artist using their voice and likeness, and the estate has no right-of-publicity claim to assert. For any creator building lasting commercial value, relying on right-of-publicity law to protect their legacy is dangerous without a jurisdiction-by-jurisdiction analysis.
How SAG-AFTRA Protections Help and What They Don’t Cover
The 118-day SAG-AFTRA strike in 2023 secured historic AI protections for union performers. The resulting agreement, ratified in December 2023, requires informed consent and compensation for any use of a performer’s digital replica, distinguishes between replicas created during a performer’s employment and those created independently, and prohibits using replicas beyond their originally authorized scope without new consent and new pay. The 2024 Sound Recording Code extended analogous protections to recording artists.
These are genuine wins. But they bind only studio signatories and protect union members only on covered productions. They do nothing for independent creators, and they have no mechanism to stop foreign companies like ByteDance from generating AI replicas of performers from a two-line prompt. The union agreements set the floor for ethical AI use in the industry, but they do not solve the enforcement problem.
Matthew McConaughey’s Playbook: Trademarking Himself
Facing this legal landscape, McConaughey secured eight federal trademarks through his company J.K. Livin Brands, Inc., covering sound marks, motion marks, and his signature phrases. The centerpiece is a sound mark for “Alright, Alright, Alright” so precisely described that the USPTO registration specifies the relative pitch of each syllable. It protects not just words but a specific performance. Applications were filed in December 2023; the USPTO approved them in December 2025.
The strategic logic: when someone uses an AI McConaughey clone, they face not only state right-of-publicity claims but federal trademark infringement claims. That opens federal court, creates stronger cease-and-desist leverage, and provides a nationally uniform legal claim that does not vary by state.
Notably, McConaughey’s legal team has connected the strategy directly to the Lovo case. The actors there had no registered trademarks, costing them the legal presumptions that McConaughey’s registrations now carry. That gap may have changed the outcome.
That said, this approach is not without its limitations. Trademark law still requires use in commerce and a likelihood of consumer confusion. Non-commercial deepfakes may evade liability. And most importantly, this strategy has yet to be tested in court against AI defendants.
What Creators Should Be Building Now To Protect Against Deepfakes
Until the law catches up, creators must build defensively. Creators should consider taking any or all of the following steps to insulate themselves against this threat:
- Right-of-publicity audit by jurisdiction. Understand which states offer meaningful protection and which offer virtually none. Entity formation, registration strategy, and contract structure may all be informed by this analysis.
- Trademark registration of distinctive voice and performance markers. Even one well-chosen sound or phrase mark creates a secondary federal enforcement mechanism and the geographic uniformity that state law cannot provide.
- AI guardrails in every contract. Prohibit AI training, voice cloning, unauthorized secondary use, and sublicensing for AI purposes. The Lovo case survived primarily on contract law.
- Controlled AI licensing. McConaughey’s authorized ElevenLabs partnership demonstrates how establishing authorized uses makes unauthorized uses more clearly identifiable and harder to defend.
- Post-mortem rights in estate planning. Tennessee now has the strongest and most future-proof post-mortem protections for voice and AI cloning, while California’s 70-year post-mortem statute is the most mature and widely litigated right of publicity in the nation. Creators with reasons to domicile in these states or actively use marks there gain protections that New York and Minnesota estates cannot access.
- Monitor and preserve evidence. When misuse is discovered, preservation before content disappears is often the difference between a viable claim and one that cannot be proven.
In summary
Federal law has profound gaps. State right-of-publicity law is a patchwork. The SAG-AFTRA agreements protect union members but not the broader creator economy. And the TAKE IT DOWN Act was not built for commercial identity theft.
McConaughey’s trademark strategy is arguably the most creative response yet, layering federal and state protections, creating uniform national coverage, and opening courthouse doors that state-only strategies cannot. It is not a complete solution. No solution currently is. But it is exactly the kind of layered, forward-thinking architecture that creators need to be building now, while advocating loudly for the federal right-of-publicity law and comprehensive AI legislation that should already exist.
AI is coming for creators’ voices, faces, brands, and legacies, and the legal infrastructure to protect them must be in place before it arrives. Building layered protection through trademarks, contracts, right-of-publicity planning, and strategic jurisdictional choices can help creators and studios prepare for a rapidly changing landscape. Working with professionals experienced in both AI law and the creative economy can ensure those protections evolve alongside the technology.
