
[EDRM Editor’s Note: This article was first published here on November 10, 2025, and EDRM is grateful to Trusted Partner Exterro for permission to republish. EDRM is happy to amplify our Trusted Partners’ news and events.]
The legal world can’t stop talking about artificial intelligence—and for good reason. AI promises to transform how we discover, analyze, and understand information. But before we sprint into this new era, there’s an essential truth we can’t ignore: you have to walk before you run.
That walk begins with technical competence—the foundational fluency in the systems, data, and digital tools that underpin every modern legal task. A decade after the ABA added a duty of technology competence, too many professionals still struggle with the basics of discovery platforms, collaboration systems, and data security. And as firms race to adopt AI, those gaps are becoming more visible—and more consequential.
AI isn’t a shortcut around this learning curve. In fact, it magnifies it. That’s why ACEDS’ webinar on this year’s AI and Risk-themed eDiscovery Day is titled “Walk Before You Run: Why Technical Competence Must Precede AI Literacy.”
The Forgotten Foundation of Technical Competence
For years, technology competence has been treated as a check-the-box exercise—something to satisfy ethical guidelines or continuing education requirements. But in today’s environment, it’s more than a professional obligation; it’s the foundation of responsible AI adoption.
At its core, technical competence means understanding how the tools you rely on actually work:
- How your discovery platform indexes and searches data.
- How privilege filters and redaction logs function.
- How to evaluate the security of shared workspaces or cloud environments.
- How metadata travels through each step of review and production.
These aren’t abstract IT issues—they’re central to the practice of law in the digital age. When a lawyer doesn’t understand how a platform tags or preserves data, or when a paralegal misconfigures permissions in a shared workspace, the risks extend far beyond inconvenience. They reach into ethics, confidentiality, and defensibility.
Now, as AI enters the picture, the consequences of shaky technical footing multiply. AI tools don’t fix broken workflows; they amplify them. If your processes are inconsistent, your data unstructured, or your teams unsure of their tools, AI will only make the chaos faster and more visible.
That’s why walking before running matters. The pathway to AI readiness isn’t paved with hype—it’s built on understanding.
From Technical Competence to AI Confidence
Here’s the good news: technical competence isn’t about coding or mastering new software overnight. It’s about curiosity, precision, and practice. The lawyers who ask how their tools make decisions—the ones who take time to understand data fields, system logic, and search parameters—are the same lawyers best positioned to evaluate AI’s capabilities and limits.
Think of competence as the soil that allows AI literacy to grow. When professionals know the fundamentals of data management, validation, and defensible process design, they can engage with AI confidently rather than fearfully. They can ask the right questions:
- What data trained this model?
- Can its recommendations be explained and verified?
- Does it preserve chain of custody and maintain privilege integrity?
Those are not technicalities—they’re the hallmarks of professional judgment in the AI era.
True AI literacy doesn’t come from mastering prompts or dashboards. It comes from understanding the mechanics of digital information: how it’s created, preserved, analyzed, and protected. That’s the kind of fluency that builds not just competence, but confidence—and confidence is what clients notice.
In fact, firms that prioritize foundational training in e-discovery and data governance often find that AI adoption becomes smoother, not slower. Their people already understand the importance of validation, auditability, and process discipline—the very qualities that make AI defensible and effective.
Building a Responsible Path Forward
So how do legal teams build that foundation? It starts by reframing technical competence not as an IT problem, but as a professional mindset. Every lawyer today works with digital evidence, communicates through digital systems, and makes decisions shaped by digital data. Understanding those mechanics isn’t optional—it’s essential to delivering competent representation.
A few simple but powerful habits make a difference:
- Prioritize ongoing technical training—not just one-time certifications, but regular refreshers on discovery tools, data formats, and collaboration security.
- Document workflows and assumptions. If you can’t explain how your team preserved, reviewed, or produced data, it doesn’t matter how advanced your tools are.
- Foster collaboration between legal and technical professionals. The best AI-readiness programs combine lawyers’ judgment with technologists’ insight.
- Encourage responsible curiosity. Ask how systems make recommendations or classify data; don’t just trust the output.
This is where the profession must evolve. As AI becomes more embedded in discovery and case strategy, the real differentiator won’t be who adopts AI first—it will be who adopts it responsibly.
Technical competence is the bridge to that future. It ensures that AI is not a black box, but a transparent, explainable partner in decision-making. It allows legal teams to combine human judgment with machine precision in a way that’s defensible, ethical, and efficient.
Or, put more simply: when you understand how your technology works, you’re better prepared to trust—and challenge—the intelligence built on top of it.
The ACEDS Code & Counsel Working Group is exploring this intersection in the upcoming webinar, “Walk Before You Run: Why Technical Competence Must Precede AI Literacy” on eDiscovery Day 2025. The discussion will bring together experts who are helping firms and in-house teams build defensible, ethical frameworks for AI adoption—starting with the basics.
Because in the end, readiness isn’t about racing to adopt the newest tool. It’s about mastering the ones that make you a more capable, confident, and credible professional today.
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

