Like the legends of Bigfoot or the Chupacabra, myths that limit the acceptance and use of machine learning/AI persist, striking distrust into the hearts of legal professionals longing to venture into a land of efficiency in eDiscovery. EDRM is excited to announce an upcoming social media series aimed at tackling these myths. Our goal is to start a discussion in the eDiscovery community and confront these myths head-on with help from a friendly but fearless Demythigator. So kick back, grab a soda (or another beverage of your choice), and join us on LinkedIn for our conversation with the Demythigator about the Myth of the Month.
Myth #1: ML/AI is a Black Box, and That’s Necessarily a Problem
Dear Demythigator: One of the most dominant myths that limits the use of machine learning/AI in the legal industry is that the technology is a “black box,” and that other approaches are preferable because they are more transparent.
Demythigator’s response: Many technologies are incomprehensible to their users, but this does not mean they should not be used. For legal professionals, the way a keyword search index can search and return results across millions of documents is inexplicable, yet unquestioned. ML/AI is a more recent technological advancement, so its adoption naturally raises questions. If ML/AI is providing accurate, reliable, and consistent results, however, we should not fear it solely because its mechanics are difficult to understand. We routinely trust our human brains to make decisions, yet the brain is far more of a black box than a trained algorithm. Unlike an algorithm, a person may not always make the same decision on the same facts, and cannot be forced to ignore improper factors such as bias or hearsay. Nevertheless, we rely on the human brain in many aspects of litigation, including decisions about an individual’s guilt or freedom made by a judge or jury, and even the relevance of documents in a collection. So how do we determine whether the black boxes of our human brains or our ML/AI tools are making good decisions? The same way: we evaluate the results.
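To make "we evaluate the results" concrete, here is a minimal illustrative sketch (not any vendor's actual validation protocol): whether relevance calls come from a human reviewer or an ML model, both can be scored the same way, by comparing their decisions on a sample against ground-truth relevance labels. The document IDs and the `evaluate` helper below are hypothetical.

```python
def evaluate(predicted, actual):
    """Compute precision and recall for a set of relevance decisions.

    predicted: set of document IDs marked relevant by the reviewer or model
    actual:    set of document IDs that are truly relevant (validation sample)
    """
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical validation sample: 4 of the sampled documents are truly relevant
actual = {"doc1", "doc2", "doc3", "doc4"}
model_calls = {"doc1", "doc2", "doc3", "doc7"}  # calls made by the "black box"

precision, recall = evaluate(model_calls, actual)
# Here the black box found 3 of the 4 relevant documents (recall 0.75),
# and 3 of its 4 calls were correct (precision 0.75).
```

The point is that this scorecard works identically for any decision-maker, transparent or opaque; the internals never enter the calculation.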
Myth: Black boxes are necessarily bad and should be avoided
Demythigator’s response: While many lawyers may not take the time to understand how an algorithm or linear classifier works, that does not make the technology dangerous. The real issue is validating that the technology does what it purports to do (we will explore validation further in a later myth). Fully understanding a technology we believe to be transparent does not guarantee its effectiveness. Transparency can be useful, but it does not ensure a good outcome. Flipping a coin is a completely transparent process, yet it will not lead to a high degree of success if you use it to decide which documents to produce.
Agree or disagree? Share your thoughts in the comments! Have thoughts you prefer not to share on social media for this myth or ideas or suggestions for future myths to tackle? Send those to email@example.com, and visit the EDRM blog.