AI Best Practices

As video game development continues to evolve, more game developers are turning to generative artificial intelligence (AI) tools to create engaging and immersive gaming experiences. From generating unique game content to optimizing game performance, AI tools offer a plethora of possibilities in video game development. However, using AI tools comes with its own set of legal considerations. Here are some best practices for using AI tools in video game development:

Tip 1: Mind The Prompt

Prompts are the user’s input to an AI tool and can be used to generate everything from dialogue to quests to in-game scenarios. The quality of that output will depend, to a large degree, on how effectively the user tells the AI what the user wants. However, effective and responsible use of prompts depends on more than descriptive quality alone.

In particular, video game developers may be tempted to shortcut that descriptive process by including third parties’ intellectual property in prompts to generate very specific output. For example: “describe a creature like a Goron in Tears of the Kingdom, but made of marshmallow instead of rock.” While the legal landscape surrounding AI tools is largely unsettled, using a third party’s creative work to generate AI output may misappropriate that party’s intellectual property rights.

Given the intellectual property infringement risks that already surround AI output, detailed further below, developers should take care to mitigate that risk however they can. That risk mitigation starts with prompts.

Tip 2: Protect Sensitive Data

Prompts are only one component of the AI output process. Before an AI tool can respond to a prompt, it must first “learn” by analyzing vast swaths of data. Many AI tools obtain this data through “scraping,” in which bots collect large amounts of data from across the internet to feed into the tool. That information is then stored in the AI’s library. Additionally, while AI tool providers typically agree not to share user-provided data with third parties, they do make clear that prompts will be used to train the AI further.

Multiple AI tools, including ChatGPT, have already been subject to data breaches, exposing personal and proprietary information to unauthorized parties. Unfortunately, the scraping process often results in a large library of sensitive and proprietary information that may then be exposed through these leaks.

This tip has both an active and a passive component. From an active standpoint, avoid including sensitive information in prompts. From a more passive standpoint, this new exposure point is yet another reason to take precautions against sensitive or proprietary information being freely accessible elsewhere online, where it can be swept up by AI tools’ scraping processes.

Tip 3: Thoroughly Vet the Output

Unfortunately, responsible use of AI does not end with monitoring the information AI tools can access. As noted above, AI tools are only as good as the data they are trained on, and flaws in that data carry through into the output. Broadly, there are three primary risks in using unvetted AI output:

  1. Even if the prompt does not contain racist, sexist, or otherwise prejudicial or offensive content, the data set may, resulting in offensive output. The output may also contain subtler gendered language or offensive caricatures, which are harder to police.
  2. AI tools cannot easily differentiate between correct and incorrect information and may confidently state misinformation. In some situations, they have even been shown to invent sources when asked for substantiation. Generally, AI tools are most useful as general launchpads rather than as sources of subject matter expertise.
  3. The AI’s output may include third parties’ protected intellectual property or sensitive personal information. Use, or in some circumstances even possession, of this information may expose the recipient to intellectual property infringement or other liability.

This tip is likely the most important, but also the most difficult, to abide by. Developing proper vetting procedures for output received from AI tools should be considered a requirement for any company or individual intending to make regular use of AI tools, especially where that output may be incorporated into a game.

It should also be noted: most AI tools expressly disclaim liability for output that infringes intellectual property or other personal rights. While this may not necessarily mean they cannot be held liable, it does suggest that recovering for any harm they cause may be difficult.

Tip 4: Protect Against Irresponsible Partners

Despite the risks, AI tools are rapidly seeing widespread adoption across industries. It’s worth considering how any commercial partners or employees might be leveraging these tools, and whether existing agreements with those parties include requirements that they do so responsibly. These considerations may show up in a wide variety of standard legal documents, from non-disclosure agreements to representations and warranties in developer agreements.

Connor Richards

Connor is an attorney at Odin Law and Media, building a practice focused on the video game, entertainment, and esports industries. Prior to joining Odin, Connor worked at Ernst & Young, assisting multinational corporations with a variety of tax matters. Connor is also active in the Esports Bar Association and appears as a guest on the occasional industry podcast. He can be reached at connor at odin law dot com.
