AI Without Compromise
How Trial Lawyers Can Harness Generative AI Without Exposing Client Confidentiality
Generative AI isn’t just another iteration of internet-era technology—it’s fundamentally different. While the internet gave lawyers fast access to information, GenAI gives third-party systems access to lawyers’ thought processes, client narratives, and strategic reasoning. This reverses the flow of control and dramatically heightens the risks to client confidentiality.
As NYU professor and AI expert Gary Marcus recently warned, OpenAI’s partnership with the U.S. military—alongside its closed-source data practices—creates the potential for what he calls “Orwellian surveillance” in professional domains. “The merger of a surveillance state and a black-box AI company is something we should all be concerned about,” he said in a 2024 Business Insider article.
For trial lawyers bound by rules of confidentiality, this isn’t a theoretical concern—it’s a daily, practical challenge. To use GenAI effectively, we must find ways to do so without handing sensitive information to third-party servers.
Confidentiality and Cloud-Based AI Tools
The Michigan Rules of Professional Conduct—and Model Rule 1.6—strictly prohibit the unauthorized disclosure of client information. This includes any fact related to the representation, regardless of whether it seems significant or publicly available.
Most mainstream GenAI tools, like ChatGPT, Claude, and Gemini, are cloud-based. Anything you enter is processed—and often stored—on external servers. Even anonymized facts can present risk when combined with identifiable patterns, language, or metadata.
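Some firms scrub obvious identifiers before any text leaves their network as a first line of defense. A minimal sketch in Python illustrates the idea; the patterns and format below are illustrative only, and as noted above, this kind of redaction reduces risk but does not eliminate it:

```python
import re

# Illustrative patterns only -- real redaction needs far broader coverage
# (names, addresses, dates, docket numbers, metadata) and human review.
PATTERNS = {
    "CASE_NO": re.compile(r"\b\d{2}-[A-Z]{2}-\d{4,6}\b"),  # e.g. 24-CR-001234
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Client (SSN 123-45-6789, case 24-CR-001234) emailed jdoe@example.com."
print(redact(prompt))
# Client (SSN [SSN], case [CASE_NO]) emailed [EMAIL].
```

Even with scrubbing in place, the fact pattern itself can remain identifying, which is why the safer course is keeping the model inside the firm's own infrastructure.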
The Florida Bar tackled this issue head-on in a January 2024 advisory opinion, making clear that lawyers must evaluate whether a GenAI tool presents a risk to client confidentiality. If so, they may need informed consent, or they should avoid the tool altogether. The opinion recommends private, firm-controlled AI systems as a preferred alternative.
🔗 Read the Florida Bar Ethics Opinion on AI (2024)
DeepSeek R1: AI With Local Control
DeepSeek R1 is one of the most promising solutions for lawyers who want to use GenAI without sacrificing control over their data. Unlike proprietary models, DeepSeek is open-source and designed for local deployment—meaning you can run the model entirely on firm-managed hardware with no external server involvement.
In May 2025, DeepSeek released a major update:
16K context window (up from 4K), enabling deeper legal analysis.
Enhanced retrieval-augmented generation (RAG) support, ideal for case law integration.
Optimized memory and GPU usage, making it feasible for smaller firms to run on mid-tier infrastructure.
With this update, DeepSeek R1 now rivals GPT-4 in reasoning benchmarks, but offers something the closed systems never can—confidentiality by design.
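Retrieval-augmented generation, mentioned above, pairs the model with a private document store: relevant passages are retrieved first, then prepended to the prompt so the model reasons over firm-held material rather than its training data alone. The retrieval step can be sketched in plain Python using naive keyword-overlap scoring; a production pipeline would use vector embeddings, and the case names and corpus here are purely illustrative:

```python
def score(query: str, passage: str) -> int:
    """Count query words that also appear in the passage (naive relevance)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages with the highest keyword overlap."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the model answers from firm documents."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "People v. Smith held that a breath test requires a 15-minute observation period.",
    "The firm's standard retainer covers pretrial motions and one appeal.",
    "People v. Jones excluded blood evidence drawn without a warrant.",
]
prompt = build_prompt("What observation period does a breath test require?", corpus)
# The Smith passage scores highest and leads the context block.
```

Because both the document store and the model run on firm hardware, none of the case material in the prompt ever leaves your control.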
If your firm has the technical capacity (or a trusted IT partner), DeepSeek’s newest release makes private GenAI more accessible than ever.
The Barone Defense Firm continues to advise on secure AI integration, particularly for lawyers handling sensitive criminal cases. Contact us to schedule a free 20-minute AI consultation.
The Tradeoffs: Transparency, Hallucination, and Technical Lift
Of course, there are tradeoffs. Running your own model requires real infrastructure—GPUs, storage, monitoring—and the ability to manage risks like hallucination and bias. Open-source tools also demand vigilance: DeepSeek’s earlier releases drew criticism over a lack of transparency, including questions about chip sourcing in regulatory gray areas. And it isn’t cheap: expect to pay upwards of $30,000 for the hardware alone.
Still, these risks are manageable—and unlike cloud-based tools, they’re your risks to control.
Ethical Mastery, Not Just Technical Proficiency
Competence in technology isn’t optional. Trial lawyers must maintain technological proficiency to meet their professional obligations. That includes understanding the risks of using GenAI, selecting appropriate tools, and explaining these decisions to clients when needed.
DeepSeek R1’s open-source architecture and local deployment model offer a viable path forward: powerful legal assistance tools that don’t force lawyers to choose between innovation and ethical compliance.
Final Thoughts
For trial lawyers, the question isn’t whether to use generative AI—it’s how to use it responsibly. In a profession where confidentiality is foundational, adopting AI tools that you don’t control can be perilous.
DeepSeek R1 and other private-deployable models shift the balance back to the lawyer. They allow for creativity, efficiency, and strategic insight—without ever putting your client’s trust at risk.
For a broader look at how generative AI is transforming advocacy, see Patrick Barone’s NACDL Champion article:
🔗 The Future of Advocacy: The Trial Lawyer’s Guide to Large Language Model Generative AI