AI Tools in Legal Work: Good Help Still Needs a Lawyer

When I studied Spanish translation in college, Google Translate was in its infancy and provided only the roughest of translations. My tools were primarily dictionaries and thesauruses, internet searches, example texts, and my own knowledge of Spanish and English. Within the last year, however, I took a Spanish short story, ran it through a machine translator, and saved myself hours of work. The result was largely excellent, but it still contained some glaring, consequential errors. I still had to review the translation, correct those errors, and make judgment calls about the language that no algorithm could make for me. Someone without that knowledge might have published the English version of the short story, glaring errors and all.

The same dynamic applies to large language models and other artificial intelligence tools (“AI Tools”) used for legal work. These tools have improved significantly, and will continue to do so, but serious issues remain with the work they produce. Without a lawyer acting as the “translator” to catch errors and refine drafts, clients risk ending up with documents that do not work as intended and that expose them and their businesses to greater legal liability.

This post covers (1) what we’ve seen AI Tools get wrong in practice, (2) the real cost of skipping the lawyer, and (3) the privilege risks most clients haven’t considered.

What We’ve Seen

We have seen clients run documents through AI Tools and ask us to review the resulting changes and suggestions. Across those reviews, a clear pattern has emerged. AI Tools tend to:

  1. Repeat language already in our draft, adding redundancy without value;
  2. Introduce imprecise or vague language that creates openings for disputes down the road;
  3. Add provisions addressed in other documents (for example, adding privacy provisions to terms of service) because the AI Tools lack full context;
  4. Bind the client to obligations they don’t need to take on, such as benefits they could provide voluntarily without contractually committing themselves; and
  5. Occasionally, add a sentence or two that is useful.

Another, perhaps subtler, problem arises in legal research. I have tested AI Tools built specifically for legal research, asking the same underlying legal question with both positive and negative framings. The result: the AI Tools often produce answers that sound like they point in opposite directions, even when they are substantively consistent. A client who asks only one question, phrased one way, may come away convinced that what they’re about to do is either a wonderful or a terrible idea, without ever knowing the nuance exists. That’s not a minor quirk; it is a structural feature of how these tools work, and it is another reason experienced legal judgment cannot be replaced by a prompt.

AI Tools also do not warn users when they are operating outside their competence. They generate plausible-sounding text even when they lack the information to produce accurate output, a behavior known as “hallucinating.” In a legal document, a hallucinated clause or a subtly wrong legal standard is not an academic problem. It can mean a contract that fails to protect a user or that affirmatively obligates the user to something they never intended. These hallucinations are not a bug but an inherent feature of how AI models work. A language model does not “know” facts the way people often assume; it generates the most likely response based on patterns in its training data and context, which can make it very convincingly wrong. For that reason, AI should never be treated as an authority, and workflows that incorporate it should be built around verification rather than trust.

The stakes of ignoring this are real. Companies that have bypassed legal counsel, run strategic decisions through AI Tools, and then acted on those outputs have ended up paying tens of millions of dollars when the AI Tools’ recommendations failed in practice. Clients don’t know what they don’t know, and neither does the machine.

The Real Cost of Skipping the Lawyer

The appeal of AI Tools is understandable: they seem to offer legal work product at a fraction of the cost of hiring an attorney. But that math often runs in reverse. When AI-generated documents have errors—vague terms, missing protections, misaligned obligations—a lawyer still has to fix them, whether before the documents go out the door or later in a dispute. And the old adage about an ounce of prevention being worth a pound of cure holds true in law as well. It is far less costly to fix errors before a document is executed than to litigate a dispute over them down the road.

Sometimes the costs are more subtle. For example, reviewing and tuning up a document generated by AI Tools may take a lawyer more time than starting from a tried-and-true template they already know. The right model is not AI instead of a lawyer, but a lawyer overseeing the AI Tools and providing the judgment, context, and accountability that the tools cannot.

Privilege Risks AI Users May Not Have Considered

Attorney-client privilege and the work product doctrine are among the most important protections available in litigation. Both can be compromised by careless use of AI Tools.

The core principle of privilege is confidentiality. Communications between attorney and client are protected precisely because they are kept within that relationship. When clients introduce a third-party AI platform into that communication—by uploading legal strategy documents, sharing privileged communications, or running confidential correspondence through an LLM—courts are beginning to hold that such uses of AI Tools constitute a waiver of the attorney-client privilege.

Specific practices that create risk of privilege waiver include:

  • Using a meeting transcription service or AI note-taking bot in attorney-client discussions, which introduces a third-party service into a protected communication;
  • Running privileged communications or confidential legal strategy through a consumer LLM platform; and
  • Using AI tools to conduct or summarize legal research in ways that incorporate facts or strategy from pending matters, without attention to how that data is retained and used by the platform.

The right question to ask before using any AI Tool in connection with legal matters is whether doing so could be characterized as voluntarily disclosing privileged information to a third party. When in doubt, ask an attorney before using the tool, not after. Ultimately, the privilege belongs to the client, and it is the client’s to waive, but clients should know beforehand whether they’re waiving it unintentionally.

Where This Leaves Us

AI Tools for legal work are here to stay, and they will keep improving. But like that machine translation of a Spanish short story, the output is a draft, not a finished product. Those drafts still need someone who knows both languages (the client’s needs and the legal language to implement them) to catch what the machine missed, make the calls the algorithm cannot, and take responsibility for the result. Good legal help still needs a good lawyer.

Ryan Fairchild
