Use of artificial intelligence (“AI”) tools in eDiscovery creates new opportunities for attorneys. By extracting, analyzing, and applying information from large data sets, AI tools can provide new insights, systematize processes, speed time to resolution, and reduce costs. A notable example is technology-assisted review (“TAR”), a process that uses machine learning to prioritize or classify relevant material in document reviews. Legal practitioners may reduce costs, time, and mistakes by applying TAR in litigation, antitrust reviews, investigations, and other matters. However, as legal teams’ uses of these technologies evolve, ethical issues may arise, particularly around reusing the results of machine learning in future matters for different clients.
Attorneys who authorize the use of machine learning on their client data can better protect themselves and their clients by first learning whether, and how, their client’s information may inform algorithms beyond the initial matter.
Khrys McKinney, Project Trustee and Principal, K L McKinney
The white paper, “Professional Responsibility Considerations in AI for eDiscovery: Competence, Confidentiality, Privacy and Ownership,” is published by the AI Ethics and Bias subgroup of EDRM’s Analytics and Machine Learning project, led by Project Trustees Khrys McKinney, Principal, K L McKinney, and Dave Lewis, Chief Scientific Officer, Redgrave Data.
AI programs like ChatGPT say the darndest things. So do machine learning systems that attorneys might train on client data, and it behooves them to be aware of the risks to confidentiality, privacy, and intellectual property. We hope this white paper will provide helpful guidance.
Dave Lewis, Project Trustee and Chief Scientific Officer, Redgrave Data
Download the entire paper below: