Stop Writing Better Prompts
Lawyers Will Get More From AI by Designing Better Workflows
Lawyers have begun experimenting with AI the way most people do: write a prompt, paste in a document, and ask the system for an answer. In many cases, the results appear surprisingly competent.
Large language models can summarize transcripts, outline motions, and draft arguments in seconds, and the technology improves with every new model release. But legal work has never been a single-step process, and the more lawyers rely on one-shot prompts, the more they encounter inconsistent results, missing analysis, and answers that sound confident but fail under closer scrutiny.
That tension reflects a deeper problem in how lawyers are approaching AI. Much of the current conversation treats prompting as if it were a question-and-answer exchange, and that framing encourages users to pile information into the model and hope the output improves.
What many lawyers are discovering is that the value of AI does not come from writing a better prompt. It comes from designing a better workflow.
The Limits of the One-Shot Prompt
A single prompt often asks a model to perform several kinds of reasoning at once.
Consider a common example. A lawyer pastes a motion into an AI system and asks for a summary of the key arguments. The request appears simple, but the underlying task is not. To produce a reliable answer, the system would need to identify the governing legal rule, separate factual claims from legal conclusions, weigh which arguments matter most, and organize the analysis into a coherent explanation.
Each represents a distinct form of reasoning. But a single prompt compresses them into one step. The model must analyze the record, evaluate the argument, and draft the explanation simultaneously.
That compression creates instability. Important arguments get overlooked, minor points receive too much attention, and sometimes the system fills gaps with language that sounds plausible but does not reflect the actual record.
Lawyers recognize this pattern because it mirrors a familiar problem in practice. When complex reasoning is forced into a single step, mistakes multiply.
The issue is not that the model lacks capability. The issue is that the workflow is poorly designed.
Why AI Systems Work Better in Stages
Developers working with modern language models increasingly avoid the single prompt approach. Instead, they break complex work into a sequence of smaller tasks.
The system may begin by identifying relevant facts. A second step extracts governing legal standards. A third step organizes those elements into an outline. Only after those stages does the model attempt to generate a written explanation.
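The staged sequence just described can be sketched in a few lines of code. This is a minimal illustration, not any particular product's API: `call_model` is a hypothetical placeholder for whatever LLM interface a developer actually uses, stubbed here so the shape of the pipeline is visible without a live model.

```python
# Minimal sketch of a staged workflow. `call_model` is a hypothetical
# placeholder for a real LLM API; it is stubbed so the pipeline's
# structure can be seen (and run) without a live model.

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would send `prompt` to a model.
    return f"[model output for: {prompt.splitlines()[0]}]"

def staged_analysis(document: str) -> dict:
    """Run the analysis in ordered stages, keeping every intermediate result."""
    stages = {}
    # Stage 1: facts only -- no argument, no legal conclusions yet.
    stages["facts"] = call_model("List the relevant facts in:\n" + document)
    # Stage 2: governing legal standards, as a separate narrow task.
    stages["standards"] = call_model("Identify the governing legal standards in:\n" + document)
    # Stage 3: an outline built only from the earlier outputs.
    stages["outline"] = call_model(
        "Outline the analysis using these facts and standards:\n"
        + stages["facts"] + "\n" + stages["standards"]
    )
    # Stage 4: only now generate the written explanation.
    stages["draft"] = call_model("Draft an explanation from this outline:\n" + stages["outline"])
    return stages
```

Because every intermediate result is kept, a reviewer can inspect the extracted facts or the outline before the draft is ever generated.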
Each step narrows the scope of the reasoning.
But the real benefit is not just accuracy. The structure forces the analysis to develop in the same order that human reasoning typically unfolds.
Lawyers do not begin drafting arguments the moment they open a file. They gather facts, identify the governing law, and test the structure of an argument before committing anything to paper.
AI systems perform more reliably when they are used in a similar way.
What This Looks Like in Legal Practice
The difference becomes clearer when applied to everyday legal tasks.
Take deposition review. A lawyer might upload a transcript and ask the system for a summary. The result often captures general themes, but critical testimony may disappear inside a broad overview.
But a staged workflow produces a different result.
The system can begin by extracting a timeline of events from the testimony. A second step identifies admissions or inconsistencies. A third step groups testimony according to the elements of the claims or defenses in the case. Only then does the model generate a narrative summary.
By the time the final explanation appears, the testimony has already been organized.
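The grouping stage, in particular, does not have to be opaque. As a toy illustration, simple keyword matching stands in below for what would, in practice, be another narrowly scoped model call; the claim elements and excerpts are invented for the example.

```python
# Toy illustration of the grouping stage: keyword matching stands in
# for what would, in practice, be another narrowly scoped model call.

def group_by_elements(excerpts: list[str], elements: dict[str, list[str]]) -> dict[str, list[str]]:
    """Bucket testimony excerpts under the claim elements they bear on."""
    grouped = {element: [] for element in elements}
    for excerpt in excerpts:
        text = excerpt.lower()
        for element, keywords in elements.items():
            if any(keyword in text for keyword in keywords):
                grouped[element].append(excerpt)
    return grouped

# Hypothetical testimony and elements, for illustration only.
excerpts = [
    "I never saw the contract before signing.",
    "We relied on the seller's statement about the roof.",
]
elements = {
    "misrepresentation": ["statement", "told us"],
    "reliance": ["relied", "depended"],
}
grouped = group_by_elements(excerpts, elements)
```

The point of the structure, not the matching logic, is what carries over: once testimony is bucketed by element, the narrative summary is generated from organized material rather than from the raw transcript.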
Motion drafting shows the same pattern.
A lawyer might ask an AI system to write a motion to suppress. The model may produce a draft quickly, but the reasoning often feels thin. Key facts may be buried, and the argument structure may not match the governing legal standard.
But if the task is staged, the process changes. The model can first extract relevant facts from police reports and transcripts. A second step identifies the governing suppression standards. A third step generates an outline based on those rules. Only after those stages does the system draft individual sections of the motion.

The result is rarely perfect.
But the reasoning becomes easier to evaluate because the structure is visible.
Why Hallucinations Often Begin With Workflow Design
Much of the discussion around AI errors focuses on hallucinations: situations where a model produces information that does not exist in the underlying material.
These errors are often attributed to the model itself.
But the structure of the task frequently plays a larger role.
Lawyers often provide large volumes of material and ask the system to produce analysis immediately. A prompt may include a lengthy brief, several judicial opinions, and an email chain, followed by a request for legal argument.
From the model’s perspective, the task is unusually broad. It must locate relevant facts, interpret multiple authorities, evaluate competing arguments, and draft a conclusion in a single step.
But when reasoning is forced to occur all at once, the system begins filling gaps.
That is where fabricated citations and distorted summaries appear.
Breaking the work into stages reduces that pressure. If the system is first asked to extract quotations from cases, the output can be verified before any argument is drafted. If the next step identifies the governing rule, the structure of the analysis becomes clearer before the model attempts to apply it.
Each stage constrains the reasoning.
Errors still occur, but the system is less likely to invent information because the scope of each task is limited.
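That first extraction stage is also the easiest to check mechanically. A short verification pass, sketched below (this is an illustration, not any particular tool), can confirm that every quotation the model produced actually appears verbatim in the source before any argument is drafted; the opinion text and quotes are invented for the example.

```python
def verify_quotations(quotes: list[str], source_text: str) -> list[str]:
    """Return the quotations that do NOT appear verbatim in the source.

    An empty result means every extracted quote was found. Anything
    returned should be treated as a possible fabrication and checked
    by hand before the workflow moves on to drafting.
    """
    return [q for q in quotes if q.strip() not in source_text]

# Hypothetical source and extracted quotes, for illustration only.
opinion = "The court held that the stop was unlawful because it lacked reasonable suspicion."
quotes = [
    "the stop was unlawful",
    "the officer acted in good faith",  # not present in the source
]
suspect = verify_quotations(quotes, opinion)
```

A real check would need to tolerate minor formatting differences, but even this crude version catches the most dangerous failure: a quotation that exists nowhere in the record.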
In many cases, hallucinations are not random failures.
They are the predictable result of asking a complex system to perform too many forms of reasoning at once.
The Practical Shift for Lawyers
The practical lesson is straightforward.
Lawyers should stop thinking about AI as a tool that answers questions and begin treating it as a system that participates in a process.
Rather than asking the model to produce a finished document, break the work into stages. Extract the facts. Identify the governing legal rule. Outline the argument. Draft sections individually. Review the output before assembling the final document.
Each step limits the scope of the reasoning.
But it also creates visibility. Intermediate outputs allow the lawyer to evaluate the analysis before relying on the final product.
That structure mirrors how legal work is already performed inside most law offices. Junior lawyers and clerks gather facts, research governing law, and organize arguments before the final draft is written.
AI systems perform better when they are used in a similar way.
Treating AI as a workflow tool does not eliminate mistakes. It does not remove the lawyer’s responsibility for the final work product.
But it makes the technology more predictable and easier to supervise.
And in legal practice, predictability often matters more than speed.
Looking Ahead
The next generation of AI systems is already moving toward longer, more autonomous tasks. Instead of responding to a single prompt, these systems analyze documents, organize information, and generate draft work products over extended periods of time.
But that development raises new questions for lawyers.
If an AI system performs hours of analysis before a lawyer reviews the result, the structure of that workflow becomes part of the lawyer’s professional responsibility. Supervision, confidentiality, and verification all become central concerns.
Those questions will become more important as the technology evolves.
But for now, the lesson is simpler.
The reliability of AI in legal practice depends less on the prompt and more on the process that surrounds it.
About the Author
Patrick T. Barone is a criminal defense attorney who writes about the intersection of law, technology, and artificial intelligence in modern legal practice. His work focuses on the practical realities of using AI tools in litigation, including motion practice, discovery analysis, and the professional responsibilities that arise when lawyers rely on machine-assisted reasoning.
Through his practice and writing, Barone explores how emerging technologies can improve legal workflows without compromising the judgment and oversight that effective advocacy requires.
To learn more about Barone’s AI-supercharged criminal defense law practice, visit the firm’s website.