
[EDRM Editor’s Note: The opinions and positions are those of Michael Berman.]
John Tredennick and William Webber published “Generative AI for Smart Discovery Professionals” (Merlin Search Technologies, Inc. 4th ed. 2025), available at no cost from Generative AI For Smart Discovery Professionals – Merlin Search Technologies (hereafter “Tredennick”).
I had previously read John C. Tredennick, et al., TAR for Smart People (Catalyst 2019), and was impressed. The recent Generative AI book did not disappoint.
Mr. Tredennick tells the reader what he is going to explain, explains it, and then reminds the reader what has been explained. The writing is clear and concise. For example: “LLM’s are ‘brains in jars’ without memory, whose only interface with the world is a temporary ‘whiteboard’ onto which users write their instructions.”
Important concepts, such as “pre-training” and “training cutoff,” are covered. One particularly interesting topic is “context windows” and why their size matters. Mr. Tredennick explains that the “context window” is the “whiteboard” for the “brain in a jar.” The book also covers the confidence level of AI systems, which is a “numerical measure of how certain an AI system is about its prediction or classification.”
The topic of “hallucinations” is explored. More importantly, the book describes lower-risk scenarios and mitigation strategies, and emphasizes the need for verification. One example is that summarization or extraction presents a lesser risk than generation. Another is a discussion of Retrieval Augmented Generation, or “RAG.”
A significant discussion is that of “verification strategies,” including sampling, risk-based verification, cross-checking, and others. Of course, the “human-in-the-loop” imperative is emphasized.
Ethical issues are covered. In addition to the ABA Formal Opinion, there is a discussion of “transparency and client communication,” as well as supervisory and billing considerations.
Mr. Tredennick also gives valuable step-by-step examples of sequential queries, privilege analysis, analyzing contradictory testimony, timeline construction, and evaluating opposing arguments. For example, he wrote that AI can “systematically compare testimony” and find “what different witnesses said about the same events.” Further, he suggests that “timeline systems” perform “gap analysis” and “identify not just what’s present but what’s conspicuously absent.”
Part III covers “practical applications in discovery and investigations.” In addition to responsiveness and privilege reviews, it discusses use of AI to find passages meeting redaction criteria, as well as transcript and medical record analysis. Mr. Tredennick points to a real-world scenario in which 102 transcripts, totaling 18,000 pages, were summarized in 131 minutes, providing complete summaries, structured outlines, topic tables, and page-line hyperlinks to the original text.
The book discusses zero-shot, one-shot, and few-shot prompts. These are prompt-engineering approaches that differ in whether, and how, examples are provided to the AI. Tredennick provides the “anatomy of an effective prompt.”
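As a concrete illustration of the distinction (my own sketch, not an example from the book): a zero-shot prompt gives the AI only an instruction, while a few-shot prompt prepends labeled examples before the item to be analyzed. The classification task, labels, and email excerpts below are hypothetical.

```python
# Hypothetical discovery task: the instruction given to the AI in every prompt.
TASK = ("Classify the document excerpt as RESPONSIVE or NOT RESPONSIVE "
        "to a request for communications about the Acme merger.")

# Labeled examples ("shots") that a one- or few-shot prompt prepends.
EXAMPLES = [
    ("Email: 'Attached is the draft Acme merger agreement.'", "RESPONSIVE"),
    ("Email: 'Reminder: the office closes early on Friday.'", "NOT RESPONSIVE"),
]

def build_prompt(excerpt: str, shots: int = 0) -> str:
    """Assemble a prompt with 0 (zero-shot) to len(EXAMPLES) worked examples."""
    parts = [TASK]
    for text, label in EXAMPLES[:shots]:
        parts.append(f"{text}\nAnswer: {label}")
    parts.append(f"{excerpt}\nAnswer:")
    return "\n\n".join(parts)

zero_shot = build_prompt("Email: 'Call me about Acme pricing.'")           # instruction only
few_shot = build_prompt("Email: 'Call me about Acme pricing.'", shots=2)   # two examples first
```

The only difference between the two prompts is the worked examples; the instruction and the document under review are identical, which is why the choice among zero-, one-, and few-shot styles is a prompt-engineering decision rather than a change to the underlying task.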
One last (for this blog, but not the book) important point is that Mr. Tredennick states: “Repository quality determines output quality.” He notes that AI “can only work with the materials it is given.” If the repository is incomplete, the results will be less than optimal.
Assisted by GAI and LLM Technologies per EDRM’s GAI and LLM Policy.

