Part II – Framing, Refining, and Rethinking: Precision Control Over AI Output
Using System, Auto-Engineering, and Reflective Prompts to Elevate Advocacy
In the first part of this series, we explored how to build strategy before drafting, using techniques like Step-Back Prompting to surface legal principles before pushing toward conclusions.
In this second installment, we invite you to rethink how you use AI, not just as a tool for content generation, but as a structured partner in legal reasoning. This means shifting focus from when to prompt to how you control the way the model thinks, speaks, and revises.
To do that, we turn to a new set of advanced tools: System Prompting, Automatic Prompt Engineering, and Self-Reflective Prompting. Notice again that this style of prompting represents a trustworthy and ethical use of AI in a criminal defense practice.
Each of these techniques moves us away from improvisational back-and-forth and toward intentional, structured output that mirrors the rigor of litigation writing and legal reasoning.
Whether you're drafting a mitigation report, fine-tuning a cross-examination question, or elevating the emotional resonance of a sentencing memo, these prompts allow you to direct not just the content but the character of AI-generated output.
System Prompting: Framing the Frame
System prompting allows you to exert high-level control over how a model responds by setting constraints at the outset, before the user ever inputs a natural-language prompt. Unlike persona prompting, which focuses on simulating a character or role (e.g., "act like a jury consultant"), system prompting defines the model’s functional behavior, tone, and output format.
It is used most effectively to specify not just who the model is, but what kind of output it should return and how that output should be structured. This is especially useful when the legal task demands consistency, structure, or compliance with a particular document format.
In practical terms, system prompting helps enforce boundaries around the model’s behavior. Instead of an opaque, general prompt such as “prepare a summary or a draft of x,” you’re programming the AI model to behave like a controlled subprocess within your workflow, returning only what you’ve carefully predefined.
Criminal Defense Example:
You're drafting a sentencing mitigation memo for a felony case and want to ensure the AI generates a document that follows your exact structural requirements.
Use this system-level instruction:
“You are a paralegal assisting a criminal defense attorney in preparing a felony sentencing memorandum. Your output will be used as a first draft for inclusion in a formal legal filing. Follow the structure and tone typically used in legal memoranda. Use the following section headings and format each section as concise, factual bullet points: Overview, Client Background, Mitigating Factors, Supporting Evidence (including documents, expert opinions, or testimony), Client Quote (if available). Avoid narrative transitions, filler language, or persuasive summaries. Each bullet point should state a distinct fact, detail, or observation that can later be expanded into full prose.”
This level of specificity defines both the model’s role and its deliverables. Your prompt ensures the output is not only relevant, but properly formatted, immediately usable, and easy to integrate into legal documents or workflows.
System prompting closes the loop between strategy and structure.
Once mastered, it allows defense lawyers to move beyond improvisational prompting toward precise, repeatable workflows, ones that produce usable, formatted output ready for inclusion in legal documents.
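For readers who work with AI through an API or an automation tool rather than a chat window, the pattern above maps onto the “system” role used by chat-style model APIs: the structural constraints are set once, and only the case-specific facts change from matter to matter. Below is a minimal sketch of that idea; the function name, variable names, and memo text are illustrative, not prescribed by any particular vendor, and the resulting message list would be sent with whatever AI client your office uses.

```python
# Illustrative sketch: package a fixed system prompt plus case-specific
# facts into a chat-style message list. The system message carries the
# structural constraints; the user message carries only the case facts.

MEMO_SYSTEM_PROMPT = (
    "You are a paralegal assisting a criminal defense attorney in preparing "
    "a felony sentencing memorandum. Use these section headings: Overview, "
    "Client Background, Mitigating Factors, Supporting Evidence, Client Quote. "
    "Avoid narrative transitions, filler language, or persuasive summaries."
)

def build_memo_request(case_facts: str) -> list[dict]:
    """Return a chat-API message list with the system constraints first."""
    return [
        {"role": "system", "content": MEMO_SYSTEM_PROMPT},
        {"role": "user", "content": case_facts},
    ]

messages = build_memo_request(
    "Client: J.D., 18 months sober, steadily employed, mentors in AA."
)
```

Because the system prompt is defined once in code, every memo drafted through this workflow inherits the same structure, which is exactly the repeatability the technique promises.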
With this foundation in place, we now turn to a technique that lets the model generate its own best prompt. Rather than crafting every instruction from scratch, what if the AI could help you refine your own question before you even ask it?
Automatic Prompt Engineering: Prompts That Prompt Themselves
Automatic Prompt Engineering (APE) is the process of having the model generate multiple versions of a user-supplied prompt, allowing the user to compare, refine, and select the most effective one. It serves as a meta-level prompting tool: instead of focusing solely on what you want the model to do, you start by asking the model to help you frame the question itself.
This is especially useful when you are unsure how best to phrase a prompt to elicit the tone, structure, or argumentative depth you’re looking for. Unlike persona or role prompts, APE operates at the instruction layer, offering you diverse formulations of your original query that vary in complexity, precision, and rhetorical effect.
Think of it as prompt-level voir dire.
You’re testing language instead of jurors, selecting not just for relevance but also for clarity and strategic fit.
Criminal Defense Example:
You're preparing a cross-examination in a motion to suppress hearing, and you suspect that the officer routinely recycles boilerplate phrases, like “based on my training and experience,” without grounding them in specific facts.
Rather than drafting your question outright, begin with:
“Generate 7 different ways to ask a cross-examination question that highlights the repetitive use of the phrase ‘based on my training and experience’ in affidavits. Make the tone firm but respectful.”
From those variations, you might select or adapt the strongest formulation:
“Detective, are you aware that in your last ten affidavits for search warrants, you used the phrase ‘based on my training and experience’ over 50 times without citing a single individualized fact?”
By using automatic prompt engineering, you’re exploring the rhetorical terrain before committing to a single formulation, much like testing the emotional resonance of themes in voir dire. This method allows for tailored questioning that underscores a pattern of overreach while preserving your credibility with the court.
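For those who script this workflow, the two moving parts of APE are (1) wrapping your draft task in a meta-prompt that asks for several variations, and (2) splitting the model’s numbered response back into individual candidates you can compare side by side. The sketch below assumes the model returns a numbered list, as the example prompt requests; the function names and defaults are illustrative.

```python
import re

def ape_meta_prompt(task: str, n: int = 7, tone: str = "firm but respectful") -> str:
    """Wrap a draft task in a meta-prompt requesting n numbered variations."""
    return (
        f"Generate {n} different ways to ask: {task} "
        f"Make the tone {tone}. Number each variation 1 through {n}."
    )

def parse_variations(response: str) -> list[str]:
    """Split a numbered-list response into individual candidate prompts."""
    items = re.split(r"^\s*\d+[.)]\s*", response, flags=re.MULTILINE)
    return [item.strip() for item in items if item.strip()]
```

For example, `ape_meta_prompt("a cross-examination question about the phrase 'based on my training and experience'")` produces the meta-prompt, and once the model replies, `parse_variations` hands you a clean list of candidates to weigh, adapt, or discard.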
Self-Reflective Prompting: Asking the AI to Audit Itself
Once you’ve worked with the AI to produce what feels like a final draft, whether it’s a motion to suppress or dismiss, a client letter, or a sentencing narrative, it’s tempting to stop there. But one final prompt can be the difference between adequate and excellent. That’s where self-reflective prompting comes in.
Self-reflective prompting is a technique in which you ask the model to critique its own prior output. Rather than treating the model’s response as final, you turn it back onto itself:
“Was this clear?”
“What’s missing?”
“What would make this stronger?”
This approach transforms the AI from a generator into a reviewer, one capable of identifying structural gaps, logical flaws, or rhetorical opportunities in its own writing.
Criminal Defense Example:
Let’s say you’re drafting a sentencing memo that emphasizes your client’s rehabilitation. You begin with:
“Write a paragraph explaining how my client’s sustained sobriety, employment, and mentorship in AA support a downward departure at sentencing.”
The model responds:
“The defendant has demonstrated a strong commitment to rehabilitation by maintaining sobriety for over 18 months, securing stable employment, and serving as a mentor in Alcoholics Anonymous. These actions reflect personal growth and community reintegration, both of which support a downward departure from the sentencing guidelines.”
Solid, but flat.
Now try:
“Critique the above paragraph. Identify any weaknesses in tone, structure, or persuasiveness. Suggest how it could be improved.”
The model might return:
“The paragraph is factually sound but lacks emotional depth. It frames the defendant’s progress in abstract terms without offering vivid details or humanizing language. Consider anchoring the client’s efforts in a specific story, for example, mentoring a struggling newcomer at AA. Adding such texture could strengthen the persuasive appeal.”
With this insight, you revise the prompt:
“Rewrite the paragraph, incorporating a specific example of the client mentoring a newcomer at AA and highlighting the emotional transformation involved.”
Now the output moves beyond formulaic argument; it becomes something closer to advocacy, something you can then fully refine to reflect your own voice.
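The draft, critique, and revise sequence above is a loop you can run by hand in any chat window, but it can also be scripted. The key detail is that each step feeds the full conversation back to the model, so the critique is aimed at the model’s own prior output rather than at a blank page. In the sketch below, `ask` is a placeholder for whatever chat client your office uses (it takes a message list and returns the model’s text); the function and key names are illustrative.

```python
# Sketch of the draft -> self-critique -> revise loop. Each call to `ask`
# receives the growing conversation, so the model critiques and then
# rewrites its own earlier answer.

def reflective_revision(ask, draft_prompt: str, critique_prompt: str,
                        revise_prompt: str) -> dict:
    """Run draft, self-critique, and revision as one continuous conversation."""
    history = [{"role": "user", "content": draft_prompt}]
    draft = ask(history)
    history += [{"role": "assistant", "content": draft},
                {"role": "user", "content": critique_prompt}]
    critique = ask(history)
    history += [{"role": "assistant", "content": critique},
                {"role": "user", "content": revise_prompt}]
    revision = ask(history)
    return {"draft": draft, "critique": critique, "revision": revision}
```

Keeping all three outputs, rather than only the final revision, lets you see what the critique flagged and judge for yourself whether the rewrite actually addressed it.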
Adding a Persona to the Reflective Process
You can strengthen the effectiveness of self-reflective prompting by assigning a persona to the AI reviewer. Rather than asking the model to critique its own output in a vacuum, prompt it to act as a seasoned legal professional or editorial authority.
For instance, you might instruct:
“You are the senior editor of the Yale Law Journal. Critique the following paragraph for logical coherence, persuasive impact, and conformance with high-level legal writing standards.”
or:
“You are the managing partner of a top litigation firm preparing this argument for a federal appellate brief. Review the draft for strategic tone, structural clarity, and overall persuasive effectiveness.”
This added layer of role-based framing sharpens the model’s evaluative focus, prompting it to adopt a more discerning and professional lens.
It’s not just reviewing; it’s reviewing as someone whose reputation depends on it.
Why It Works
Legal writing is full of blind spots: overused language, unexamined assumptions, and emotionally inert phrasing. By asking the model to evaluate itself, you create a feedback loop that helps expose those blind spots and close the gap between what you intended and what you actually conveyed.
Self-reflective prompting is especially powerful when paired with other techniques in this article. After you use automatic prompt engineering to test different versions of a motion or argument, apply self-reflection to each one. See which survives scrutiny. Then revise accordingly.
Think of self-reflective prompting as internal quality control, an appellate lens for your trial-level writing.
Missed Part I?
Catch up here →
About the Author and the Barone Defense Firm
Learn more about Barone’s AI-powered Criminal Defense Practice at www.baronedefensefirm.com, where our mission is to provide cutting-edge, compassionate, and relentless defense by combining the best of human advocacy with the smartest technology available.