A few years ago, contracts with artificial intelligence clauses felt novel. They appeared here and there: in experimental projects, in creative work where a guarantee of “No AI” was desired, or in deals built around AI tools.
Now, AI-related clauses are showing up routinely, even in agreements where AI is not the focus of the work at all. Publishing agreements, creator collaborations, vendor contracts, marketing engagements, services agreements and more increasingly include language addressing AI use, ownership or disclosure by default.
From Edge Case to Standard Clause
In many agreements today, AI language appears the same way confidentiality or IP clauses do: as part of the contractual boilerplate. In practice, this means contracts include AI clauses preemptively, as a default, rather than in response to a known issue or for a specific reason. AI clauses may carry over from prior deals or internal templates, even when the project itself is conventional and doesn’t involve any AI tools.
For creators and studios, the presence of these clauses reflects a collective attempt to account for AI tools that are increasingly becoming part of workflows, even when those tools aren’t the centerpiece of the work.
The Common Categories of AI Clauses
While the wording varies, most AI provisions fall into a handful of familiar buckets. All or some combination of these clauses (or even others beyond what’s listed here) may appear in an agreement. Recognizing these clauses makes them easier to evaluate.
- Use of AI Tools During Production: Some clauses address whether AI tools may be used at all, used only with permission, or otherwise place limits on how they can be used during creation. These clauses often appear even when AI tools are only incidental to the creative process.
- Ownership of AI-Assisted Outputs: Many agreements attempt to clarify who owns work that is created with the assistance of AI tools or require representations regarding its use or non-use, even if the final output is primarily human-driven.
- Representations About Originality or Training Data: Some clauses ask parties to represent that AI tools used were trained on lawful data or that outputs do not infringe third-party rights. These assertions may be difficult to verify in practice, and contracting parties should pay attention to how courts decide on key cases in the space.
- Disclosure Obligations: A growing number of agreements require disclosure of AI use, sometimes broadly and sometimes only upon request.
When drafting or negotiating, parties should pay special attention to the kind of artificial intelligence being referenced. Contracts may draw a distinction between generative artificial intelligence and other forms of it; it is up to the parties to decide what “artificial intelligence” means in the context of the agreement.
The Real Issue
While some AI clauses are intended to guarantee that no AI will be used in carrying out the contract, many are also meant to future-proof agreements in a rapidly changing environment and define parameters around AI use.
The key question to ask is: does the contract language accurately match the project? This, of course, goes for any clause in the agreement, but especially so when courts and legislatures haven’t yet drawn the guardrails around a new technology.
Final Thoughts
The growing presence of AI clauses is part of a broader shift toward normalization of these terms in contracts. Even though the law is known for lagging behind technology, contracts must adjust as tools evolve. Understanding why these clauses appear, what they typically cover and how they intersect with real-world workflows makes it easier to spot when a provision deserves a closer look and when it’s simply an accurate part of a deal.