More Context, Worse Analysis
Why lawyers need to rethink how they use AI prompts
Lawyers are trained as subject matter experts, and we are rewarded for gathering facts. We build cases by accumulating detail, adding documents, adding authorities, adding context. In practice, more facts can change outcomes.
So we carry that instinct into generative AI. We supply more documents, more data, more details, and expect the analysis to improve.
But unlike in the courtroom, with generative AI that approach often produces the opposite result. When too much unstructured material is placed into a single prompt, the model loses focus. Important facts are diluted, relationships blur, and the analysis can sound complete while missing what actually matters.
The practical solution is to use context deliberately by selecting the facts that matter, grouping them by issue, and introducing them in stages so the system can reason through them in order, rather than attempting to process everything at once.
Why One Prompt Fails
A common workflow looks like this. A lawyer uploads a lengthy motion, several cases, and a set of emails, then asks the system to identify key facts, generate cross-examination questions, or evaluate the strength of an argument. The request appears efficient because everything is in one place.
But we are asking the system to do too much at once.
When too much material is introduced at once, the system does not reliably separate what matters from what does not. Important facts get buried. Supporting details begin to compete with controlling ones. The connection between law and fact becomes less clear.
At that point, adding more context does not improve the analysis. It makes it harder for the system to determine what matters.
Because so much information is present, the result often feels comprehensive. In practice it is less reliable, forcing additional rounds of prompting to recover clarity and defeating the original purpose of efficiency.
Lawyers therefore need to change how they approach the first prompt. Instead of a deluge of information followed by multiple rounds of editing and refinement, the first prompt should be deliberate: select only the facts that matter, organize them by issue, and introduce them in stages so the system can reason through them sequentially.
Where This Breaks in Practice
In motion practice, a lawyer might provide a full brief along with several cases and ask the system to strengthen the argument. The output may introduce new points, but it may also misinterpret the governing standard or fail to align the argument with controlling authority.
In discovery analysis, a lawyer might upload a large document set and ask for key themes. The system may identify patterns, but it may also overlook critical documents that do not fit the dominant narrative.
In both situations, the problem is not an absence of information but an absence of structure.
How to Structure the First Prompt
The solution begins with the first prompt.
The goal is not to transfer the file. The goal is to define the problem in a form the system can reason through.
This requires three steps.
First, select the facts that matter. Identify the facts that drive the legal issue, not everything in the record. If the issue is reasonable suspicion, limit the facts to what bears on that standard. If the issue is intent, select accordingly.
Second, organize those facts by issue. Do not present them as a narrative block. Group them so the system can see how they relate to the legal question.
Third, stage the analysis. Do not ask for everything at once. Ask for one form of reasoning at a time. One prompt can identify the governing standard. The next can apply that standard to the selected facts. A later prompt can test or refine the analysis.
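To make the three steps concrete, here is a minimal sketch of the staged workflow in Python. It assumes the OpenAI Python SDK and an API key; the model name, the ask() helper, and the example facts are illustrative placeholders, not a prescribed implementation.

    # A minimal sketch of staged prompting. Assumes the OpenAI Python SDK
    # (pip install openai) and an API key in the OPENAI_API_KEY environment
    # variable. The model name and example facts are illustrative.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        """Send one self-contained prompt and return the model's reply."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative; use the model your firm has vetted
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Step 1: select only the facts that bear on the issue, grouped by
    # issue rather than pasted in as a narrative block.
    facts_by_issue = {
        "reasonable suspicion": [
            "Officer observed the vehicle weaving within its lane for two miles.",
            "The stop occurred at 1:50 a.m. near a bar district.",
        ],
    }

    # Step 2 (first stage): one prompt identifies the governing standard.
    standard = ask(
        "State the governing legal standard for reasonable suspicion in a "
        "traffic stop, with controlling authority."
    )
    # The lawyer verifies the standard against the published cases here,
    # before it is allowed to shape the next stage.

    # Step 3 (second stage): a separate prompt applies the verified standard
    # to the selected facts, one issue at a time.
    facts = "\n".join(facts_by_issue["reasonable suspicion"])
    print(ask(
        f"Apply this standard:\n{standard}\n\n"
        f"To these facts:\n{facts}\n\n"
        "Explain whether the standard is met and why."
    ))

The code itself matters less than the discipline it enforces: each prompt carries one task and only the facts that task requires.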
A prompt that includes only the facts relevant to a legal standard will outperform one that includes the entire file, even though the file contains every one of those facts.
This staged approach reduces ambiguity and makes the reasoning visible.
When the system is asked to perform one step at a time, the output can be evaluated before moving forward. Errors are easier to identify. Assumptions are easier to challenge. The result is more reliable because the process is controlled.
Why Structure Improves Oversight
For lawyers, the question is not only efficiency. It is trust.
AI-assisted work product has to be defensible. Citations must be accurate. Legal standards must be correctly stated. The application of law to fact must track the actual record. In effect, the output is on trial, and the lawyer is the jury.
Unstructured prompts make such scrutiny difficult, yet the ethical use of AI demands it. When large amounts of material are processed at once, it is harder to see what the system relied on, how it interpreted authority, and whether it introduced error.
Structured prompting changes that.
When the work is staged, each step can be validated. A prompt that extracts controlling authority can be checked against the published case. A prompt that states the legal standard can be verified before it is applied. A prompt that applies the standard to selected facts can be reviewed against the record.
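For those who script their workflows, the staging can even be made mechanical. The sketch below, under the same assumptions as the earlier one, holds each stage's output for explicit approval before the next prompt may use it; the approval gate is a stand-in for the lawyer's own verification against the record.

    # A sketch of a human-review gate between prompting stages. Same
    # assumptions as above: OpenAI Python SDK, API key in the environment,
    # illustrative model name. No stage's output reaches the next prompt
    # until a human has approved it.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        """Send one self-contained prompt and return the model's reply."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def reviewed(label: str, output: str) -> str:
        """Display a stage's output and require explicit approval to proceed."""
        print(f"\n--- {label} ---\n{output}\n")
        if input("Verified against the record? (y/n) ").strip().lower() != "y":
            raise SystemExit(f"Stopped: {label} failed review. Revise and rerun.")
        return output

    facts = "Officer observed the vehicle weaving within its lane for two miles."

    # Each stage is validated before its output can compound into the next.
    standard = reviewed(
        "Governing standard",
        ask("State the governing standard for reasonable suspicion, with citations."),
    )
    application = reviewed(
        "Application to selected facts",
        ask(f"Apply this standard:\n{standard}\n\nTo these facts:\n{facts}\n\n"
            "Explain whether the standard is met."),
    )

Nothing about the gate is sophisticated, and that is the point: the structure forces the verification that an all-at-once prompt makes nearly impossible.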
This makes it easier to detect and correct hallucinations, imagined citations, misstatements of law, and faulty applications before they compound, and before the lawyer adopts the output by signing the filing.
Practical Implications for Lawyers
Artificial intelligence can improve efficiency in legal work, but only if it is used in a responsible way that aligns with how legal reasoning actually occurs.
Providing more information does not guarantee better results. In many cases, it introduces ambiguity that weakens the analysis.
The more reliable approach is to treat prompting as part of legal reasoning itself. The first prompt defines the problem. Subsequent prompts test and refine the analysis in stages.
That approach requires slightly more effort at the outset.
But it produces work that is easier to understand, evaluate, and defend.
In practice, that is what matters.
About the Author
Patrick T. Barone is a criminal defense attorney who writes about the intersection of law, technology, and artificial intelligence in modern legal practice. His recent published work focuses on the practical realities of integrating AI systems into litigation workflows, including motion practice, discovery analysis, and the professional responsibilities that arise when lawyers rely on machine-assisted reasoning.
Visit the Barone Defense Firm website to learn more about Barone’s AI-supercharged criminal defense law practice.
Endnotes
Anthropic (long context / structured prompting)
Anthropic, Prompting for Long Context (2024), https://www.anthropic.com/news/prompting-long-context.
OpenAI (multi-step workflows / staged reasoning)
OpenAI, Building Agents and Multi-Step Workflows (2025), https://openai.github.io/openai-agents-python/multi_agent/.
OWASP (prompt design + injection risk)
OWASP Found., Top 10 Risks for Large Language Model Applications (LLM01: Prompt Injection) (2023), https://owasp.org/www-project-top-10-for-large-language-model-applications/.
NIST AI Risk Management (trustworthy AI framing)
Nat’l Inst. of Standards & Tech., Artificial Intelligence Risk Management Framework (AI RMF 1.0) (2023), https://www.nist.gov/itl/ai-risk-management-framework.
For Further Reading
Stop Writing Better Prompts: Lawyers Will Get More From AI by Designing Better Workflows — why legal AI requires structured workflows
Prompt Injection: A New Security Risk for Law Firms Using AI — how adversarial inputs can influence AI systems