Artificial Intelligence, or “AI,” refers to the capability of machines to mimic aspects of human intelligence, such as problem solving, reasoning, discovering meaning, generalizing, predicting, or learning from past experience. AI includes both unsupervised and supervised machine learning, but it also encompasses a number of other processes, such as natural language processing (“NLP”). In the context of this paper, AI describes an automated process that uses statistical, rule-based, or other algorithmic means to classify, categorize, summarize, make predictions about, or provide information regarding data.
Alan Turing generally is credited with originating the concept of AI when he speculated in 1950 that “thinking machines” could reason at the level of human beings. Turing proposed an “imitation game,” which others have called a “Turing test,” as a means of deciding whether a computer was intelligent. In its simplest terms, the argument was that a computer capable of holding a conversation with a human observer would demonstrate intelligence if the observer could not discern whether he or she was conversing with another person or with a computer. A computer that could imitate a human in conversation would be said to pass the Turing test and therefore to be intelligent.
A few years later, John McCarthy introduced the term “artificial intelligence” in a proposal for a 1956 workshop on building machines to emulate human intellectual capacity. This workshop was intended to investigate “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”[1]
Although McCarthy and the other participants in the 1956 workshop had high hopes for accomplishing with computers the full range of human intellectual capacity, computer scientists have instead succeeded in developing computer systems that each address a narrow range of tasks, such as playing chess, Go, or Jeopardy!, identifying cancer in images of skin lesions, and driving vehicles. Artificial general intelligence, in which a computer can perform all the tasks a human can, remains a future goal, and there is some debate as to whether it will ever be achieved.
Some methods of machine learning require human supervision and some do not. Unsupervised machine learning refers to AI systems, such as concept-clustering tools, that can operate independently of, or prior to, human intervention. These methods organize and classify documents or data by various features, which can include subject matter, without human training. Supervised machine learning refers to AI systems that are trained on human decision making, such as technology-assisted review (“TAR”) or predictive-coding programs, which work by having humans classify some documents as “relevant” or “not relevant” to a particular subject matter and then having computer software learn to make those distinctions through automated analysis of the human decisions. There are other technologies used in the law that may not use machine learning at all but are sometimes referred to as “artificial intelligence” because they mimic human cognitive processes.
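To make the distinction concrete, the following is a minimal Python sketch of both approaches, assuming the scikit-learn library and a small invented document set; it is illustrative only and does not depict any particular TAR or clustering product.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Invented mini-collection standing in for a document population.
documents = [
    "quarterly revenue forecast and sales pipeline summary",
    "merger negotiation terms and escrow conditions",
    "holiday party catering menu and RSVP list",
    "draft acquisition agreement and due diligence memo",
    "office printer maintenance schedule",
    "board minutes discussing the proposed merger",
]

# Turn text into numeric feature vectors (TF-IDF weighting).
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(documents)

# Unsupervised learning: the algorithm groups similar documents
# into clusters with no human-supplied labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print("cluster assignments:", list(clusters))

# Supervised learning (TAR-style): a human reviewer labels examples
# as relevant (1) or not relevant (0) to a hypothetical merger issue...
labels = [0, 1, 0, 1, 0, 1]
model = LogisticRegression().fit(vectors, labels)

# ...and the trained model predicts relevance for an unseen document.
new_doc = vectorizer.transform(["email thread about merger closing conditions"])
print("predicted label:", model.predict(new_doc)[0])
```

In practice, TAR workflows iterate this training loop, with the model surfacing additional documents for human review, but the core pattern of human-supplied labels plus automated generalization is the same.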
Rules-based systems also can play an important role in eDiscovery. The main advantage of rules-based systems is their transparency or explainability: humans can often see how the computer makes its decisions. In supervised machine learning, humans provide the criteria for rendering a classification and examples of the different categories, so the computer can learn to distinguish among them. In rules-based systems, rather than the computer discerning the rules, humans provide the rules explicitly and transparently. Rules-based systems are more often used in legal contexts outside of eDiscovery, but they can have some use, for example, in anonymization and redaction. Rules-based systems are used less often for categorization in eDiscovery because they require the expertise of both subject-matter experts and rules-construction experts, including linguists and statisticians. As a result, they tend to be more resource intensive and time consuming to develop than machine-learning approaches.
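As a simple illustration of a rules-based approach to redaction, the following Python sketch applies explicit, human-written patterns. The two regular expressions here (for U.S. Social Security numbers and email addresses) are invented for illustration rather than a complete redaction rule set; the point is that the rules themselves are visible and auditable.

```python
import re

# Human-authored rules: each named pattern is explicit, so anyone can
# inspect exactly what the system will redact and why.
RULES = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Replace every match of every rule with a labeled placeholder."""
    for label, pattern in RULES.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

sample = "Contact jane.doe@example.com; SSN on file: 123-45-6789."
print(redact(sample))
# Prints: Contact [REDACTED EMAIL]; SSN on file: [REDACTED SSN].
```

Unlike a trained classifier, nothing here is learned from examples; the tradeoff described above is that experts must anticipate and encode every pattern by hand.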
[1] McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence. http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf