AI on Trial

Coding for Counsel: Treating Every AI Prompt Like Computer Code

Precision, Ethics, and the Surprising Parallels Between Lawyering and Programming in the Age of Generative AI

Patrick Barone
Aug 01, 2025

Many lawyers make the critical mistake of treating GenAI prompting as simple conversation, overlooking the fact that each prompt functions much like a line of computer code: small variations can yield dramatically different results.

Rethinking prompting as a form of programming, where precision, structure, and intentionality are paramount, transforms AI from a risky black box into a powerful, reliable tool for legal practice.

Picture this:

It’s late on a Thursday. You’ve just finished prepping for a big hearing, and to save time, you ask your new GenAI assistant to “summarize these cases and draft a client update.”

Minutes later, you get a beautifully formatted memo. But by Monday morning, you’re fielding panicked emails: the GenAI missed a key holding, misapplied precedent, and, worse, overlooked a confidential document that should never have left your draft folder. The blame? “AI drift,” and a prompt as vague as a bad motion in limine.

If you’re a lawyer in 2025, you are already living this reality.

As a reader of this Substack, you know that Generative AI isn’t a parlor trick. You also know it’s not a summer associate to be left unsupervised. Through careful use, you’ve learned that GenAI is a tool of enormous power, and equally enormous risk. And like any such tool, its results are only as precise as your instructions. But you may not yet fully appreciate the finesse and foresight required for effective prompting.

Here’s the truth:

Prompting isn’t just a technical flourish. It’s the new legal writing skill. Effective prompting is the difference between good advocacy and meaningless ambiguity. It’s also the difference between ethical compliance and professional jeopardy.

In a world where courts, clients, and opposing counsel all use GenAI, your ability to control what your tools produce is both a strategic necessity and an ethical imperative.

You wouldn’t stand up in court and improvise your argument (would you?). You also wouldn’t send a client memo built on your best guess or half-remembered law. So why would you trust your reputation, and your client’s outcome, to a prompt written in haste?


The Hidden Ethics of Every Prompt

Let’s step back from the technology hype for a moment. Ask yourself: What is the real risk if you get this wrong? The answer is: everything that matters.

The American Bar Association’s Model Rule 1.1, mirrored in Michigan and most jurisdictions, now tells us that competent representation requires, and I quote,

“keep[ing] abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”

Relevant technology includes AI, and the “benefits and risks” part isn’t just about buying new software that fails to perform. It’s about understanding how the AI technologies you use can help or harm your client.

Here’s what the Michigan State Bar said in its 2024 report:

“AI literacy is rapidly becoming a threshold competence for Michigan lawyers, on par with legal research, digital security, and confidentiality protocols… Lawyers must develop a working understanding of how to prompt and supervise generative AI systems, both to harness their benefits and to avoid errors or ethical lapses.”

In other words: prompt engineering is now part of your ethical duty, no different from understanding e-discovery, client confidentiality, or even the rules of evidence.

As the Harvard C

© 2025 Patrick T. Barone, PLLC