
[EDRM Editor’s Note: The opinions and positions are those of John Tredennick.]
The senior partner sat in his office late on a Thursday evening, staring at a motion filed by opposing counsel. The opposing team had identified contradictions across seventeen depositions—patterns his 40-person review team had missed despite six weeks of work and $280,000 in costs. The motion cited testimony from witnesses his team had reviewed. But no human reviewer, no matter how skilled, could systematically track testimony patterns across dozens of witnesses and thousands of pages.
He later learned that the opposing counsel’s team had found those contradictions in minutes using an AI-powered analysis platform. Not because they were smarter or worked harder, but because they had access to tools that could do what human reviewers simply cannot—process and cross-reference information at scale.
This partner wasn’t resistant to change out of stubbornness. He had legitimate concerns about AI—questions about accuracy, ethics, and professional responsibility that every thoughtful lawyer should ask. But in that moment, he realized something important: the world he had mastered was changing, and the skills that had made him successful weren’t enough anymore.
This scene, or variations of it, is playing out across the legal profession. And it raises a question we all need to answer: How do we move forward responsibly?
What We’re Wrestling With
The hesitation many lawyers feel about AI comes from a legitimate place, and we should acknowledge that clearly.
There was something genuinely valuable about the traditional approach. You could assemble a document review team, train them on coding protocols, conduct quality control rounds, and know exactly how the work got done. You could bring in contract attorneys you’d worked with before, brief them on case strategy, and trust that experienced reviewers were applying professional judgment. You could walk into the review room and see the work happening.
Human reviewers brought legal training and contextual judgment to ambiguous decisions. Quality control meant senior associates spot-checking decisions and providing feedback. The accountability was clear—when someone made a bad privilege call, you knew who made it and could address it. These weren’t arbitrary preferences. They were genuine virtues of a system developed over decades.
But that system also had costs we accepted because we had no alternative. Manual review of millions of documents took months and cost hundreds of thousands of dollars. Important patterns went unnoticed because no individual reviewer saw enough documents to connect scattered evidence. Strategic decisions got made on incomplete information because thorough investigation would take too long. Clients paid $75-100 per hour for contract attorney review that often achieved only 60-70% accuracy on complex determinations.
We built an entire professional culture around these constraints, and for good reason—they were the best we could do with the tools available. The question now is whether we can do better, and if so, what our professional obligations require.
Why Standing Still Isn’t an Option
The legal profession’s caution about new technology is well-documented and, frankly, often well-founded. Lawyers resisted email, claiming face-to-face meetings and phone calls were superior for client relationships—and they had a point about relationship-building. They resisted electronic filing, preferring paper submissions they could hold—and paper does have certain reliability advantages. They resisted document review platforms, insisting that manual review was more reliable—and early platforms did have significant limitations.
In each case, however, the profession eventually adopted the new approach—not because lawyers suddenly became technology enthusiasts, but because the alternatives became professionally untenable. You couldn’t effectively serve clients while refusing to use email. You couldn’t compete with firms that had embraced discovery technology while insisting on purely manual review.
Generative AI is following a similar pattern, but the pace is faster and the capabilities are more significant. Several factors make this moment different.
The Technology Works Today
This isn’t speculation about what AI might do someday. The capabilities exist now and are being deployed in real cases. Firms are using AI to conduct investigations in 48 hours that previously took ten weeks. They’re analyzing 500,000 documents with greater accuracy than manual review teams. They’re synthesizing testimony across dozens of depositions and identifying contradictions human reviewers miss.
Ethical Guidance Now Exists
Many lawyers have reasonably cited ethical uncertainty as justification for proceeding carefully. That caution was appropriate—rushing into uncharted territory without guidance would be irresponsible. But the guidance has now arrived.
The ABA issued Formal Opinion 512 in July 2024, providing comprehensive guidance on using AI in legal practice. State bars have issued similar opinions. The ethical framework is now clear:
- Lawyers can use AI tools with appropriate understanding and oversight
- Verification of AI outputs is required, not optional
- Client confidentiality must be protected through proper vendor selection and data handling
- Competence includes understanding the technology you use
- Professional judgment remains with lawyers, not machines
This framework actually reinforces what experienced lawyers have always known: technology serves professional judgment, not the other way around. The ethical question has shifted from “should we use AI?” to “how do we use AI responsibly?” And increasingly, a related question: what are our obligations when available tools could improve quality and reduce costs for clients?
Competitive Pressure Is Real
Major law firms have AI initiatives. Corporate legal departments are demanding efficiency. Government agencies are adopting AI-powered investigation capabilities. The firms and lawyers developing AI competencies now are building advantages that compound over time.
Consider a practical example. A regulatory agency sends a Civil Investigative Demand requiring response in ninety days. One approach: spend ten weeks and $80,000 on preliminary document review before making strategic decisions. Another approach: spend 48 hours and $2,000 using AI-powered analysis to understand the full evidentiary picture, then dedicate the remaining time to strategy and response preparation.
The question isn’t about embracing technology for its own sake. It’s about which approach better serves the client’s interests.
The Profession Has Already Moved
The Mata v. Avianca case made headlines when lawyers were sanctioned for submitting fake AI-generated case citations. The incident was a cautionary tale about careless AI use. But the more significant lesson was often missed: lawyers are already using AI widely enough that multiple instances of this error occurred within months. The profession has moved. The question isn’t whether AI will be adopted—it’s whether adoption will be thoughtful and competent or rushed and poorly executed.
The question isn’t whether AI will be adopted—it’s whether adoption will be thoughtful and competent or rushed and poorly executed.
John Tredennick, CEO and Founder, Merlin Search Technologies.
What Thoughtful Adoption Looks Like
Moving forward doesn’t mean abandoning professional judgment or blindly trusting AI outputs. It means intelligently deploying AI capabilities while maintaining appropriate verification and human oversight. This is where experienced lawyers have the most to contribute—their judgment about when and how to use these tools is exactly what the profession needs.
Investigation and Early Case Assessment
A Friday afternoon call: the board wants to know by Monday whether executives discussed pricing with competitors. Traditional investigation would require weeks of keyword search refinement and manual review before providing preliminary answers.
With AI-powered investigation, counsel uploads the document collection, poses questions in natural language, and receives comprehensive analysis within 48 hours. The system identifies communications showing pricing discussions, flags items requiring follow-up analysis, constructs timelines showing when communications occurred, and identifies the individuals involved and their level of participation.
This isn’t theoretical. It’s how investigations work today when you use the right tools. The cost drops from $40,000-$80,000 to under $2,000. The timeline shrinks from ten weeks to two days. The comprehensiveness improves because AI systematically reviews the entire population rather than sampling based on search results.
And critically: the senior lawyer’s judgment about what the findings mean, how to advise the board, and what steps to take next remains indispensable. AI accelerates fact-finding; it doesn’t replace legal analysis.
Document Review and Production
Traditional document review scales linearly—double the documents, double the cost and time. A million-document production costs $1-1.5 million and takes three months with a team of contract attorneys.
AI-powered review analyzes documents at $0.10-0.25 per document and completes in three weeks. For that million-document matter, costs drop to $100,000-$250,000. More importantly, quality often improves. AI applies uniform criteria across all documents. It doesn’t suffer from attention lapses or fatigue. It identifies patterns across the entire population rather than whatever documents individual reviewers happen to see.
The senior partner’s role shifts from managing reviewer teams to designing review protocols, establishing verification procedures, and making strategic decisions about how AI findings inform case strategy. This is arguably a better use of partner-level expertise than coordinating contract attorney schedules and conducting quality control rounds.
Testimony Analysis
Traditional testimony management involves assigning paralegals or junior associates to spend 6-8 hours per deposition creating summaries. For cases with dozens of witnesses, the timeline extends to weeks and costs can reach $80,000-$150,000 in billable time.
AI processes 102 depositions totaling 18,000 pages in 131 minutes, producing structured summaries with tables of contents, topic organizations, and hyperlinks to source testimony. The cost drops to under $500 for AI processing.
The comprehensiveness improves as well. AI extracts every temporal reference, identifies all mentions of specific individuals, tracks every exhibit discussion, and creates multiple analytical views of the same testimony. Human summarizers make selections about what to include; AI can be comprehensive because processing time isn’t the constraint.
Trial Preparation
Preparing opening and closing arguments traditionally requires teams of lawyers spending weeks reviewing transcripts, organizing exhibits, and drafting argument sections.
AI analyzes the entire trial record and generates sophisticated first drafts in hours. For the BP Oil Spill trial, AI produced comprehensive closing arguments from multiple strategic perspectives—TransOcean’s defense, Halliburton’s position, government plaintiffs’ case, and BP’s response—each running 40-60 pages with specific citations to trial testimony and exhibits.
These AI-generated drafts aren’t the final work product. They’re comprehensive first drafts that senior lawyers refine, personalize, and adapt to their strategic judgment. But starting from a thoroughly researched, well-organized draft instead of blank pages transforms the economics and timeline of trial preparation.
A Practical Path Forward
Forward movement requires deliberate steps, not magical transformation. Here’s what responsible adoption looks like.
Education First
You cannot supervise AI-assisted work competently without understanding the technology. Read comprehensive resources like “Generative AI for Smart Discovery Professionals.” Attend CLE programs focused on practical AI applications. Learn what Large Language Models actually do, where they fail, and what verification protocols are necessary.
This education isn’t optional professional development. It’s foundational competence. You wouldn’t manage complex litigation without understanding the Rules of Civil Procedure. You shouldn’t deploy AI without understanding how it works and what safeguards it requires.
Start with Internal Matters
Before using AI on client work, experiment on internal matters where mistakes have lower consequences. Analyze your firm’s own documents. Test investigation workflows on completed cases where you know the right answers. Develop comfort with the technology and understanding of its capabilities and limitations.
This experimentation period lets you fail safely, learn from errors, and develop institutional knowledge about what works before client interests are at stake.
Develop Verification Protocols
AI outputs require verification. The level of verification should be proportional to stakes and demonstrated reliability. Initially, verify intensively—perhaps 100% of privilege determinations until accuracy is established. As reliability is proven, reduce to sampling protocols—5-15% for high-stakes applications, lighter for lower-risk uses.
Document your verification processes. When methodology is challenged, you want evidence that you implemented systematic quality control, not ad hoc checking when you remembered to do it.
Build Institutional Knowledge
Effective AI use develops over time through organizational learning. When someone discovers a prompt that produces excellent results, make it available to the team. When errors occur, investigate root causes and adjust processes. When new capabilities emerge, evaluate them systematically rather than chasing every shiny object.
Organizations that treat AI adoption as continuous improvement rather than one-time implementation will compound advantages over those that view it as a technology purchase.
Organizations that treat AI adoption as continuous improvement rather than one-time implementation will compound advantages over those that view it as a technology purchase.
John Tredennick, CEO and Founder, Merlin Search Technologies.
Share and Iterate
The most effective learning happens through exchange. Share what works across your organization. Discuss failures honestly to prevent repetition. Iterate on processes based on experience rather than assuming initial approaches are optimal.
This cultural shift—from individual expertise to collective learning, from stable processes to continuous refinement—may be more important than any specific technology.
The Reality We Face
That partner from the opening scene learned something valuable from his experience. He’s now leading his firm’s AI practice, delivering superior results to clients while mentoring younger lawyers on thoughtful AI integration. The skills that made him a great trial lawyer—strategic thinking, attention to detail, skepticism of easy answers—turned out to be exactly what responsible AI adoption requires.
He’ll tell you it wasn’t an easy transition. There were moments of frustration, failed experiments, and steep learning curves. But he’ll also tell you that the alternative—watching competitors outpace his team while explaining to clients why his work cost more and took longer—wasn’t really an alternative at all.
This isn’t about optimism or pessimism. It’s not about embracing change for its own sake or fetishizing new technology. It’s about recognizing that the capabilities exist today, the competitive pressure is real now, and the ethical frameworks are established. Forward isn’t just a direction we might choose. For practical purposes, it’s the only direction available.
Forward isn’t just a direction we might choose. For practical purposes, it’s the only direction available.
John Tredennick, CEO and Founder, Merlin Search Technologies.
The profession has changed. The tools exist. The guidance is clear. Clients expect efficiency. Courts assume competence with available technology.
The question is whether we participate in shaping how forward movement happens or get pulled along by it. Whether we develop expertise while there’s time to do it thoughtfully or scramble to catch up when clients demand it. Whether we’re among the lawyers demonstrating what responsible AI use looks like or among those explaining why we waited too long.
The comfortable world we know is evolving. The familiar methods we trusted are being supplemented—and in some cases surpassed—by approaches we’re still learning. The certainty we felt about how legal work should be done is being refined by evidence that it can be done differently.
Forward is our best direction. The question is whether we move forward with intention, understanding, and professional judgment—or whether we move forward only when we’ve exhausted every alternative and lost ground we’ll struggle to recover.
The time for that choice is now. Not when clients demand it. Not when competitors have captured market position. Not when courts expect AI competency as baseline professional behavior. Now, while there’s still time to lead rather than follow, to shape rather than react, to build expertise rather than patch together urgent responses.
Forward is our best direction. Better for us, better for our clients.
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

