EDRM Blog

Legal AI – Don’t Be Scared!

Image: EdiscoveryCat with laser eyes and two outstretched hands, surrounded by legal scales, a courthouse, a gavel, a magnifying glass, and fingerprints.

Deciphering legal AI and debunking myths

Artificial intelligence (AI) has the potential to fundamentally change the practice of law, from automating repetitive tasks, to dramatically accelerating time to insight, to amplifying the decisions of legal practitioners across vast data sets. Despite these potential benefits and the seemingly endless hype surrounding legal AI, adoption of this new tech remains low across the legal ecosystem.

Many misconceptions and fears are holding legal practitioners back from embracing the tech-enabled future. But they don’t have to. Not only do AI’s benefits outweigh the risks, but in many cases, the bigger risk is maintaining the status quo. For legal practitioners in particular, embracing AI is an opportunity to drive efficiency, stand out as tech leaders within their legal programs, and advance digital innovation.

Diagram: Machine Learning at the center, with Supervised Learning, Unsupervised Learning, Reinforcement Learning, and Clustering as surrounding nodes.

What IS legal AI? 

There is actually quite a bit of confusion surrounding legal AI and frankly, AI in general. The original definition proffered in 1956 by the man who coined the term AI, John McCarthy, is: “the science and engineering of making intelligent machines.” A more elaborate definition characterizes AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”

Understanding the machine learning at play in legal AI 

In the context of the practice of law and ediscovery in particular, the deployment of AI has generally fallen into the category of machine learning. As the name implies, machine learning is a subset of AI characterized by the use of algorithms and statistical methods to enable machines to improve or learn through experience. Broadly speaking, most legal AI use cases today rely on algorithms trained with human input to identify similar or dissimilar categories of documents to reduce the amount of time it takes to surface key concepts or evidence.

Evolution of legal AI 

Understanding the capabilities and limitations of your specific legal AI is important in framing expectations and workflows to maximize its effectiveness. 

Flavors of legal AI 

  • Supervised machine learning: Algorithms learn from human input and labeling within the dataset at the outset (training in TAR 1.0)
  • Unsupervised learning: Algorithms make inferences about a data set without human input (clustering, concept categorization, and social network analysis)
  • Reinforcement learning: Algorithms continue to learn from human input in an ongoing way, getting “smarter” over time (TAR 2.0, DISCO AI) 

The first type of AI to hit the legal stage was a version of supervised machine learning for ediscovery that was trademarked with the infamous name “Predictive Coding™.” Now commonly categorized as technology assisted review (TAR) 1.0, this version of machine learning had experts code a seed set of data to “train” an algorithm over several “rounds” until the suggestions made by the algorithm met a certain level of precision (a statistical threshold of accuracy) and recall (a statistical threshold of completeness). The trained algorithm could then predict relevance across the entire data set. However, TAR 1.0 has some significant weaknesses in terms of accuracy and the ability to add new data to the model. The preeminent case Da Silva Moore is worth a read to see how far we’ve come since TAR 1.0.

Diagram: Supervised learning workflow, left to right: raw data, sample with human feedback, algorithm, output of the tested algorithm, manual verification, production.
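To make the workflow above concrete, here is a minimal sketch of TAR 1.0-style supervised learning using scikit-learn. The documents, reviewer labels, and control set are hypothetical, and this is not how any particular predictive coding engine is actually built; it simply shows the pattern the paragraph describes: experts code a seed set, the model trains on it, and precision and recall are checked against a human-reviewed control set before the model is trusted to predict across the full population.

```python
# Minimal, illustrative sketch of TAR 1.0-style supervised learning.
# NOT any vendor's actual workflow; documents and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Hypothetical seed set coded by expert reviewers (1 = responsive).
seed_docs = [
    "Email discussing the merger timeline and due diligence",
    "Quarterly cafeteria menu and parking announcements",
    "Draft purchase agreement with indemnification terms",
    "Company picnic RSVP reminder",
]
seed_labels = [1, 0, 1, 0]

# Train once on the seed set (the discrete training "rounds" of TAR 1.0).
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_train, seed_labels)

# Check precision (accuracy of positive calls) and recall (completeness)
# against a hypothetical human-reviewed control set before letting the
# model predict relevance across the full data set.
control_docs = [
    "Revised merger agreement circulated to the deal team",
    "Holiday party planning thread",
]
control_labels = [1, 0]
preds = model.predict(vectorizer.transform(control_docs))
print("precision:", precision_score(control_labels, preds))
print("recall:", recall_score(control_labels, preds))
```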

The next addition to the legal AI landscape was in the form of analytics that did not require human input (unsupervised machine learning algorithms). In ediscovery generally, unsupervised learning algorithms are used for concept clustering, near-duplicate detection, and concept search. These models can be used independently or concurrently with other machine learning models. 

Diagram: Unsupervised learning (high reliance on the algorithm to process raw data; large cost for review).
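For a sense of what “no human input” looks like in practice, here is a minimal concept-clustering sketch using scikit-learn’s TF-IDF vectorizer and k-means. The documents are hypothetical and the approach is deliberately simplified compared with commercial clustering engines; the point is that the algorithm groups documents purely by textual similarity, with no reviewer labels at all.

```python
# Minimal sketch of unsupervised concept clustering; documents are
# hypothetical and this is not any vendor's actual clustering engine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "Merger negotiations and purchase price adjustments",
    "Purchase agreement draft with escrow provisions",
    "Office holiday party catering options",
    "Catering invoice for the holiday party",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Each document is assigned a cluster with no human input; reviewers can
# then batch and prioritize by concept rather than reading in random order.
for doc, cluster in zip(docs, kmeans.labels_):
    print(cluster, doc)
```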

The most recent addition to the ediscovery lexicon, reinforcement learning, is often described as TAR 2.0. In this machine learning model, rather than having a discrete window of time to train the model, human input continually refines and informs the algorithm. Because it keeps learning and improving its predictions about likely responsive material, this model allows the review team to dramatically accelerate review and surfaces likely relevant material in a fraction of the time required by linear review or TAR 1.0 models.

Diagram: Reinforcement learning, in which the algorithm is continually trained by human input and can be automated once it reaches maximum accuracy.
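Below is a minimal sketch of the continuous active learning loop that TAR 2.0-style tools are generally described as using, again with scikit-learn. The documents, the “ground truth” labels, and the reviewer_decision() helper are hypothetical stand-ins for a real review platform, and this is not DISCO AI’s actual algorithm; it only illustrates how each newly coded document retrains the model and reorders the remaining review queue.

```python
# Minimal sketch of a TAR 2.0-style continuous active learning loop.
# Documents, labels, and reviewer_decision() are hypothetical stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "Merger pricing model and synergy analysis",
    "Board deck on the proposed acquisition",
    "Cafeteria menu for next week",
    "IT notice about password resets",
    "Diligence questions on the target contracts",
    "Parking garage maintenance schedule",
]
true_labels = [1, 1, 0, 0, 1, 0]  # hypothetical reviewer calls

def reviewer_decision(i):
    """Stand-in for a human reviewer coding document i (1 = responsive)."""
    return true_labels[i]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# Bootstrap with two coded documents (one responsive, one not).
coded = {0: reviewer_decision(0), 2: reviewer_decision(2)}

while len(coded) < len(docs):
    idx = list(coded)
    # Retrain on everything coded so far: the model keeps "getting smarter".
    model = LogisticRegression().fit(X[idx], [coded[i] for i in idx])
    remaining = [i for i in range(len(docs)) if i not in coded]
    scores = model.predict_proba(X[remaining])[:, 1]
    # The unreviewed document the model currently scores as most likely
    # responsive goes to the front of the queue; a reviewer codes it and
    # the model retrains on the next pass through the loop.
    next_doc = remaining[int(np.argmax(scores))]
    coded[next_doc] = reviewer_decision(next_doc)
    print(f"reviewed doc {next_doc} (score {scores.max():.2f})")
```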

Each of the above models has discrete use cases as a standalone solution and/or as part of an integrated solution that incorporates multiple machine learning models. Savvy legal practitioners can benefit from the speed to insight and reduction of cost associated with leveraging some or all of these solutions. 

Legal AI is less scary than you might think! 

Now that we understand what the heck legal AI is, let’s dig into why many practitioners have yet to embrace it. What myths and misconceptions are holding you back?

Legal AI seems like risky business

Many legal professionals mistakenly believe that just because AI is new to the practice of law, it must inherently be risky and unproven. The reality is that AI has been widely deployed in industries like retail and finance for decades, with compelling industry-wide benefits. In fact, data volumes are so large and complex today that attempting to use your dad’s or grandma’s approach to discovery is the bigger risk.

Image: Mr. Bean, eyebrow raised, saying, “It’s a pretty risky business.”

Is using legal AI ethical? 

Applications of AI do raise ethical concerns, but not in the way you may imagine. The ABA Model Rules of Professional Conduct demand that practitioners possess the same competence, supervision, and diligence with AI as with other technology and practice-specific knowledge. While legal practitioners don’t need to be statisticians or computer scientists, they do need to have a general understanding of the tech they use, the benefits it provides, and know when and how to ask questions and ask for help. In the not-too-distant future, refusing to adopt AI may be a violation of ethics!

Don’t trust the black box

Most legal practitioners are not fluent in lambda calculus and coding languages, and may be intimidated by technology they don’t quite understand or by algorithms that produce outcomes in a far different way than the human mind. Thankfully, AI applications today are more about augmenting and better equipping people to make informed decisions as opposed to making a decision on their behalf. Like Google or Netflix, the AI-enabled tools legal practitioners use today do not take the human out of the equation. Rather, they give humans better material to work with and make better decisions from. 

Do judges get it? 

While the thought of explaining machine learning or analytics to a judge may make you break out in a cold sweat, the clear trend of the bench is in support of AI adoption. Da Silva Moore opened the door to using technology-assisted review (TAR), and cases like Entrata v. Yardi and In re Broiler Chicken have continued to reinforce and expand the application of AI in the courts.

But I have always done it this way

Much as we no longer rely on the typewriter and the telegraph since the advent of the telephone and the computer, practitioners cannot refuse to embrace better technology just because they have always used another, less tech-enabled approach. Practitioners are faced with more data volume and complexity than ever before. By 2025, it’s estimated that 463 exabytes of data will be created each day globally – that’s the equivalent of 212,765,957 DVDs per day! The old way of doing things is simply not enough to uncover key evidence when facing thousands of GBs of data.

Cartoon: A robot in a suit at a desk, surrounded by office workers. A woman says, “You’ll find we’re a little ahead of the curve here at Smith, Jones and KRX-421.”

I don’t want to be replaced by robots

The future looks less like the Terminator, with mindless robots ruling the world, and more like Iron Man, with people-centric technology helping humanity do more. In the legal context, that means savvy professionals using new technology to become better practitioners. So fear not, it will not be robo-lawyers who take over in the future — rather it will be tech-savvy legal practitioners who can leverage AI and other tools to get evidence faster and make more informed decisions. 

All or nothing

When it comes to AI, many believe the only option is to go all-in or not adopt the tech at all. The truth is that there is a spectrum of AI-powered analytics with more or less human involvement that can be deployed to surface key information, QC work, or better organize and prioritize data. You do not need to go the route of relying completely on an algorithm to make coding decisions — using TAR to prioritize documents can still yield dramatic results without moving away from having eyes on every document. How AI is deployed depends on your risk profile and the objectives of your review.

Aren’t humans the gold standard?

As a human, it is easy to assume that having human eyes review each document will yield the best results, but studies have consistently shown this is simply not the case. In fact, humans can miss anywhere from 20-75% of all responsive documents, while AI-powered solutions coupled with savvy legal practitioners have proven substantially more accurate in a fraction of the time and at a fraction of the expense.

Image: A very scared cat with wide eyes and open jaws: “Oh no! Not math!”

Oh no, it’s math

Lambda calculus and statistics are enough to send many legal practitioners running for the hills, but thankfully newer iterations of legal AI (like DISCO AI) no longer require mathematical expertise to gain valuable insights. Rather, like the Netflix queue or Amazon suggestions, the legal AI algorithm can simply run in the background, gaining insights as you progress through a data set, without your ever having to resort to complicated mathematical equations. New legal AI really is iPhone-easy (speaking of something else full of AI that is so easy my 70-year-old dad and 6-year-old niece can both use it).

At the end of the day, much like in our personal lives, AI is here and not likely to go anywhere anytime soon. Thankfully, as with personal applications of AI, it is getting easier to use, more user-friendly, and seamless in deployment. New iterations of AI in law are designed to help — not hinder — the savvy legal practitioner. 


Cat Casey

Catherine “Cat” Casey is the Chief Innovation Officer for DISCO, the leading cloud-based AI-powered legal technology company, where she spearheads development and strategy for its advanced legal technology solutions. She is a frequent keynote speaker and an outspoken advocate of legal professionals embracing technology to deliver better legal outcomes. Casey has over a decade and a half of experience assisting clients with complex ediscovery and forensic needs arising from litigation, expansive regulation, and complex contractual relationships. Before joining DISCO, Casey was the director of Global Practice Support for Gibson Dunn, based out of their New York office, where she led a global team of experienced practitioners in electronic discovery, data privacy, and information governance. Prior to that, Casey was a leader in the Forensic Technology Practice at PwC, and before that she built out the antitrust forensic technology practice and served as the national subject matter expert on ediscovery for KPMG. Casey has an A.L.B. from Harvard University and attended Pepperdine School of Law.

