Designing Generative AI for Legal Professionals: Key Principles and Best Practices

By Ralph Losey
Image by Ralph Losey using WordPress’s Stable Diffusion.

[Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work.]


Generative AI is transforming the landscape of legal technology, offering unprecedented opportunities to automate tasks and streamline complex workflows. Yet, designing AI tools that truly meet the needs of legal professionals requires more than just technical expertise; it demands a deep understanding of the everyday challenges and workflows lawyers face. From automating document review to drafting briefs, these tools have the potential to save time and boost productivity—but only if they are designed with real-world legal practice in mind. A set of six design principles, identified in a May 2024 study by IBM researchers, provides a practical roadmap for creating AI applications tailored to the unique demands of the legal profession. This article explores these principles, offering actionable steps for developers and legal professionals alike.

Image by Ralph Losey using WordPress’s Stable Diffusion.

In the last year, a wave of generative AI tools has emerged, ranging from free Custom GPTs on platforms like OpenAI’s ChatGPT to premium legal tech applications costing tens of thousands of dollars annually. While the technology behind these tools is impressive, developing effective applications requires a deep understanding of legal workflows and needs. Generative AI is fundamentally different from traditional software and requires a distinct approach to design.

A May 2024 study, Design Principles for Generative AI Applications, by IBM researchers lays out six practical principles for designing effective generative AI tools. This article examines how these principles can be applied specifically to the legal tech sector, offering a guide for those looking to build or select tools that are both innovative and practical.

Software Design
Image by Ralph Losey using WordPress’s Stable Diffusion.

Outline of the Scientific Article and Authors

Design Principles for Generative AI Applications (CHI ’24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Article No. 378, pp. 1-22, May 11, 2024) (hereinafter the “Study”) was authored by Justin D. Weisz, Jessica He, Michael Muller, Gabriela Hoefer, Rachel Miles, and Werner Geyer, all members of the IBM Research team, which numbers over 3,000. The Study describes the extensive, and very impressive, peer review process the authors went through to arrive at the six key principles of generative software design.

The 22-page Study is a high-quality, first-of-its-kind research project. The IBM-sponsored Study has 196 footnotes and is sometimes quite technical and dense. Still, it is a must-read for all serious developers of generative AI-based software and is recommended reading for any law firm before making major purchases. The success of AI projects depends upon the selection of well-designed software; poorly designed applications with an impressive list of features are a recipe for frustration and failure.

My article does not go into the many details of how the design guidelines were derived but focuses instead on the end result. Still, readers might benefit from a quick review of the Study’s Abstract:

Generative AI applications present unique design challenges. As generative AI technologies are increasingly being incorporated into mainstream applications, there is an urgent need for guidance on how to design user experiences that foster effective and safe use. We present six principles for the design of generative AI applications that address unique characteristics of generative AI User Experience (UX) and offer new interpretations and extensions of known issues in the design of AI applications. Each principle is coupled with a set of design strategies for implementing that principle via UX capabilities or through the design process. The principles and strategies were developed through an iterative process involving literature review, feedback from design practitioners, validation against real-world generative AI applications, and incorporation into the design process of two generative AI applications. We anticipate the principles to usefully inform the design of generative AI applications by driving actionable design recommendations.

Design Principles for Generative AI Applications, Weisz et al. (May 11, 2024).

The six principles are:

  1. Design Responsibly.
  2. Design for Mental Models.
  3. Design for Appropriate Trust and Reliance.
  4. Design for Generative Variability.
  5. Design for Co-Creation.
  6. Design for Imperfection.

The first three principles offer new interpretations of known issues with AI systems through the lens of generative AI. The next three design principles identify unique characteristics of generative AI systems. Study, Figure 1. All six principles support two important user goals: a) optimizing generated text to meet task-specific criteria; and b) exploring different possibilities within a specific domain.

This article will discuss how each principle can apply to the legal profession.

Image by Ralph Losey using WordPress’s Stable Diffusion.

Background on Generative AI and Its Potential in the Law

Generative AI is distinguished by its ability to create new content rather than merely analyzing existing data. This capability stems from reliance on large-scale foundation models trained on incredibly large datasets to perform diverse tasks with human-like fidelity (meaning that, like humans, it can sometimes make mistakes). In legal practice, generative AI can streamline several key tasks when implemented thoughtfully, including:

  • Legal Research: Automating the process of searching for relevant case law, statutes, and regulations.
  • Document Drafting: Generating contracts, briefs, and other legal documents based on specified parameters.
  • Due Diligence: Analyzing large volumes of documents to identify potential risks and liabilities.
  • Contract Review: Identifying and flagging potential issues in contracts.
  • Legal Writing: Generating clear and concise legal writing.
  • Brainstorming: Suggesting new ideas based on simulated experts talking to each other. See e.g., Panel of AI Experts for Lawyers.
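
To make the brainstorming pattern concrete, here is a minimal sketch of two simulated experts debating an issue. It assumes the official OpenAI Python client with an API key in the environment; the model name and panel personas are illustrative, not a prescription:

```python
# Minimal sketch of a two-persona "panel of experts" brainstorm.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(persona: str, discussion: str) -> str:
    """Ask one simulated expert to respond to the discussion so far."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": discussion},
        ],
    )
    return resp.choices[0].message.content

litigator = "You are a veteran trial lawyer. Critique ideas for practical risk."
scholar = "You are a legal scholar. Suggest creative doctrinal angles."

discussion = "Issue: can our client compel arbitration of this dispute?"
for _ in range(2):  # two rounds of simulated debate
    discussion += "\n\nScholar: " + ask(scholar, discussion)
    discussion += "\n\nLitigator: " + ask(litigator, discussion)

print(discussion)
```
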
AIs Talking to Each Other
Image by Ralph Losey using WordPress’s Stable Diffusion.

Integrating AI in any field requires a thoughtful approach, and the legal profession, with its emphasis on ethics and accuracy, demands even greater diligence. AI should augment legal work without compromising the profession’s core values.

We present six principles for the design of generative AI applications that address unique characteristics of generative AI User Experience (UX) and offer new interpretations and extensions of known issues in the design of AI applications. Each principle is coupled with a set of design strategies for implementing that principle via UX capabilities or through the design process.

Design Principles for Generative AI Applications, Weisz et al. (May 11, 2024).

The Study outlines six practical design principles that offer a roadmap for developing generative AI tools tailored to legal practice. Here’s how each principle can be implemented to ensure that AI applications meet the unique demands of the legal field:

1. Design Responsibly

  • Human-Centered Approach: Developers should start with user research, such as interviews with lawyers about their daily challenges, supported by design thinking and participatory design methodologies. Observing how legal professionals actually perform their tasks is an essential first step. Incorporating a feedback loop into AI tools then allows legal professionals to flag inaccuracies directly, ensuring continuous improvement of the tool’s outputs.

    For example, research into actual lawyer practice can provide valuable insights into how generative AI can be best integrated into their daily routines. It’s not about replacing lawyers but about empowering them with tools that enhance their capabilities and decision-making processes.

  • Addressing Value Tensions: The development of legal tech involves various stakeholders, including legal professionals, developers, product managers, and decision-makers like CIOs and CEOs. Stakeholders often have differing values and priorities. For instance, legal professionals prioritize accuracy and reliability, while developers may focus on efficiency and innovation. These differing values can lead to value tensions that need to be identified and addressed proactively.

    The Study suggests using the Value Sensitive Design (VSD) framework, which provides a structured approach to identifying stakeholders, understanding their values, and navigating the tensions that may arise.

  • Managing Emergent Behaviors: A unique characteristic of generative AI is its potential to exhibit emergent behaviors. These are capabilities extending beyond the specific tasks a model was trained for. While emergent behaviors can be beneficial, leading to unexpected insights or efficiencies, they can also pose risks, such as generating biased or offensive content. Designers must consider whether to expose or limit these behaviors, weighing potential benefits against possible harm. This might involve a combination of technical constraints and user interface design strategies to guide AI output and prevent undesirable results.

    For example, if a generative AI tool designed to summarize legal documents starts generating legal arguments, designers might need to adjust the model’s parameters or provide users with clear instructions on how to use the tool responsibly.
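
    One possible way to implement such limits, sketched below, is to constrain the task in the system prompt and run a lightweight post-check on the output. The marker phrases and messages are hypothetical, not drawn from the Study:

```python
# Sketch of one way to rein in a summarizer that drifts into advocacy:
# constrain the task in the system prompt, then post-check the output.
# Marker phrases and wording are illustrative, not production-grade.

# This constraint would be sent as the system message on every request.
SYSTEM_CONSTRAINT = (
    "You summarize legal documents. Do not draft arguments, give legal "
    "advice, or predict outcomes. If asked to, decline and summarize only."
)

ARGUMENT_MARKERS = ("we argue", "the court should", "plaintiff is entitled")

def post_check(summary: str) -> str:
    """Withhold output that reads like argument rather than summary."""
    lowered = summary.lower()
    if any(marker in lowered for marker in ARGUMENT_MARKERS):
        return "[Withheld: output went beyond summarization; please retry.]"
    return summary

print(post_check("We argue the court should grant summary judgment."))
```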

  • Testing for User Harms: Generative AI models, particularly those trained on extensive text datasets, are susceptible to producing biased, offensive, or potentially harmful outputs. Rigorous testing and ongoing monitoring are essential to minimize these risks. Designers and developers should benchmark models against established datasets to identify hate speech and bias. Additionally, providing users with clear mechanisms to report problematic outputs can help identify and address issues that may not be caught during testing.
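
    As one illustration of such testing, the sketch below screens a batch of candidate outputs with OpenAI’s moderation endpoint before release; the sample strings are illustrative, and real benchmarking would use established datasets:

```python
# Sketch: screen candidate model outputs with OpenAI's moderation
# endpoint as one layer of pre-release harm testing. Sample outputs
# are illustrative; real testing would use established benchmarks.
from openai import OpenAI

client = OpenAI()

candidate_outputs = [
    "Summary of the deposition transcript...",
    "Draft indemnification clause...",
]

for text in candidate_outputs:
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    if result.flagged:
        print("FLAGGED for human review:", text[:60])
```
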
Looking Out for Unexpected Harm
Image by Ralph Losey using WordPress’s Stable Diffusion.

2. Design for Mental Models

  • Orienting Users to Generative Variability: Legal professionals are accustomed to deterministic systems in which the same input consistently produces identical outputs. Generative AI, however, introduces variability, generating different outputs from the same input. Designers must address this shift by helping users comprehend and leverage this inherent variability. This may involve presenting multiple output options, enabling users to explore different possibilities, or providing clear explanations of factors influencing output variation.

    For example, a contract-drafting tool might present two or three alternative versions of the same requested clause, making the variability visible and useful rather than surprising.
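
    A minimal sketch of how a tool might request several such alternatives in a single call, assuming the official OpenAI Python client; the model name and prompt are illustrative:

```python
# Sketch: surface generative variability by requesting several candidate
# clauses for one prompt. Assumes the official OpenAI Python client;
# the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Draft a mutual confidentiality clause for a vendor MSA.",
    }],
    n=3,              # three alternatives from one prompt
    temperature=0.9,  # higher temperature widens the variation
)

for i, choice in enumerate(resp.choices, 1):
    print(f"--- Option {i} ---\n{choice.message.content}\n")
```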

  • Teaching Effective Use: Legal professionals must adapt their skills and workflows to effectively incorporate generative AI into their practices. This includes understanding how to construct effective prompts, recognizing the limitations of the technology, and critically evaluating the generated outputs.

    Designers play a crucial role in facilitating this learning by offering comprehensive tutorials, real-world examples, and clear explanations of AI capabilities and constraints. For example, a contract-drafting tool could offer templates and examples of successful prompts, guiding users on how to specify desired contract clauses and provisions accurately.

  • Understanding Users’ Mental Models: Understanding how legal professionals conceptualize these tools and their capabilities is crucial for designing intuitive and effective legal tech applications.

    User research methods like interviews and observations are essential for understanding users’ mental models. Asking users to describe how they believe a particular application works can reveal valuable information about their understanding and expectations. This understanding enables designers to align user interfaces and interactions with users’ existing mental models, making adopting new tools smoother and more intuitive.

    For example, if users perceive a legal research tool as a supplement to traditional databases, designers can highlight the complementary nature of AI-powered research, emphasizing its ability to uncover connections and insights that might be missed through conventional methods.

  • Tailoring AI to Users: A significant advantage of generative AI is its ability to adapt to individual users. By leveraging techniques like prompt engineering, designers can tailor the AI’s responses based on user preferences, background, and specific needs. This may include adjusting language complexity and style, providing tailored recommendations, or adapting the user interface for individual workflows. For instance, a legal writing tool might learn from a user’s style and preferences, generating suggestions and text that aligns with their voice and tone.
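
    A simple sketch of this kind of tailoring, assembling a per-user system prompt from stored preferences; the profile fields and defaults are hypothetical:

```python
# Sketch: build a per-user system prompt from stored preferences.
# The preference fields and defaults are hypothetical.
from dataclasses import dataclass

@dataclass
class WriterProfile:
    tone: str = "formal"
    jurisdiction: str = "Florida"
    citation_style: str = "Bluebook"

def system_prompt(profile: WriterProfile) -> str:
    return (
        f"You assist a lawyer practicing in {profile.jurisdiction}. "
        f"Write in a {profile.tone} tone and cite authorities in "
        f"{profile.citation_style} format."
    )

print(system_prompt(WriterProfile(tone="plain-English")))
```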

Most Lawyers Enjoy Tailoring Their AI to Fit Their Practice and Personalities
Images by Ralph Losey using WordPress’s Stable Diffusion.

3. Design for Appropriate Trust & Reliance

  • Calibrating Trust Through Transparency: Legal professionals must understand when to trust generative AI outputs and when to exercise caution. Transparency is key to establishing this trust. In practice, this can be achieved by adding a ‘source traceability’ feature to AI tools, allowing lawyers to view the origins of information used in AI-generated summaries. This transparency helps lawyers decide when to rely on the AI’s outputs and when to conduct additional research.

    This may also include displaying confidence levels for outputs, flagging areas for further review, or providing disclaimers about AI’s inherent imperfections. For example, a contract review tool might flag clauses with low confidence scores, encouraging users to examine those sections more closely.
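
    A minimal sketch of confidence-based flagging follows; how the scores are derived (token log-probabilities, a separate scoring model, or something else) is left open, and the numbers are invented for illustration:

```python
# Sketch: flag low-confidence clauses for closer human review. How the
# scores are derived is left open; the values here are invented.
REVIEW_THRESHOLD = 0.75

clauses = [
    ("Governing law: State of Delaware.", 0.95),
    ("Consequential damages are waived except for gross negligence.", 0.58),
]

for text, confidence in clauses:
    marker = "REVIEW" if confidence < REVIEW_THRESHOLD else "ok"
    print(f"[{marker} {confidence:.2f}] {text}")
```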

  • Providing Justifications for Outputs: To enhance transparency, designers should give users insight into the reasoning behind AI outputs. This could involve revealing the AI’s ‘chain of thought,’ showing the source materials used to generate the output, or displaying the model’s confidence levels. Understanding how AI reaches a result allows users to better assess its validity and make informed decisions.

    For instance, a legal research tool might display snippets from source documents that support specific AI-generated legal arguments, allowing users to verify the accuracy and relevance of the information. This makes it easy for legal professionals to trust but verify. That is the fundamental mantra for the legal use of AI in these early days, because the technology can still make errors and sometimes even produce sycophantic hallucinations.
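
    One way to structure that kind of traceability, sketched below, is to carry supporting snippets alongside each generated proposition; the data structures and example data are illustrative, not any vendor’s actual API:

```python
# Sketch: carry supporting snippets alongside each generated proposition
# so users can "trust but verify." The structures and example data are
# illustrative, not any vendor's actual API.
from dataclasses import dataclass, field

@dataclass
class Citation:
    source: str   # e.g., a statute, case name, or document ID
    snippet: str  # the passage that supports the proposition

@dataclass
class TracedAnswer:
    text: str
    citations: list[Citation] = field(default_factory=list)

answer = TracedAnswer(
    text="The limitation period for written contracts is five years.",
    citations=[
        Citation(
            source="Fla. Stat. § 95.11(2)(b)",
            snippet="A legal or equitable action on a contract ... founded "
                    "on a written instrument.",
        )
    ],
)

for c in answer.citations:
    print(f"Supported by {c.source}: {c.snippet}")
```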

  • Encouraging Critical Evaluation with Friction: Overreliance on AI may lead to complacency and missed opportunities for critical thinking, both of which are essential in legal practice. Designers can incorporate cognitive forcing functions into the user interface to encourage users to slow down, carefully review outputs, and engage in critical evaluation.

    This may include requiring users to manually confirm or edit AI-generated suggestions, presenting alternatives alongside AI recommendations, or highlighting potential inconsistencies or risks for user review. For example, a contract-drafting tool might flag commonly disputed clauses or those requiring special attention, encouraging users to review these sections thoroughly.
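
    A toy sketch of such a cognitive forcing function: nothing enters the draft until the user deliberately accepts, edits, or rejects each suggestion. The interface here is a bare command line, purely for illustration:

```python
# Toy sketch of a cognitive forcing function: nothing enters the draft
# until the user deliberately accepts, edits, or rejects it.
def review_clause(suggestion: str) -> str | None:
    print(f"\nAI suggestion:\n{suggestion}")
    choice = input("accept (a) / edit (e) / reject (r)? ").strip().lower()
    if choice == "a":
        return suggestion
    if choice == "e":
        return input("Enter your revised clause: ")
    return None  # rejected: nothing is added automatically

draft: list[str] = []
clause = review_clause("Venue lies exclusively in Orange County, Florida.")
if clause is not None:
    draft.append(clause)
```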

  • Clarifying the AI’s Role: AI systems can serve various roles, from simple tools to collaborative partners or advisors. Put another way, is the tool designed for a centaur-type hybrid mode or a more complex cyborg mode? See e.g., From Centaurs To Cyborgs: Our evolving relationship with generative AI (4/24/24).

    Clearly defining the AI’s intended role in legal tech applications shapes user expectations and promotes appropriate trust. For example, an AI positioned as a “research assistant” might be expected to provide comprehensive information, while a “contract-drafting tool” might be primarily expected to generate initial drafts for further review and editing. By accurately representing the AI’s capabilities and limitations within a defined role, designers can mitigate the risk of users over-relying on the technology or misinterpreting its outputs.
Supervised Assistants You Trust
Image by Ralph Losey using WordPress’s Stable Diffusion.

4. Design for Generative Variability

  • Accommodating Generative Variability: As discussed under Principle 2, generative AI departs from the deterministic systems legal professionals are used to, producing different outputs even from identical inputs. Here the design task goes beyond orientation: the interface itself must accommodate, and take advantage of, that inherent variability.

    This could involve presenting multiple output options, allowing users to explore different possibilities, or providing clear explanations of the factors that influence output variation. For instance, a legal research tool powered by generative AI could offer different summaries of a case, each focusing on a specific aspect, allowing users to gain a more comprehensive understanding of the legal precedent.

  • Facilitating Effective Use: The skills discussed under Principle 2, such as crafting effective prompts, recognizing the technology’s limitations, and critically evaluating outputs, matter even more when each run can produce something new. Designers can support this by making it easy to regenerate, compare, and combine outputs rather than settling on the first result, and by providing tutorials and examples of successful prompts.

  • Highlighting Differences and Variations: Visual cues can help users quickly understand how multiple outputs differ from each other. This could involve highlighting changes between drafts, color-coding outputs based on confidence levels, or using visual representations to display the distribution of outputs.
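
    Visual diffing needs no special tooling to prototype; the sketch below uses Python’s standard-library difflib to show only the words that changed between two candidate drafts:

```python
# Sketch: show users exactly which words changed between two candidate
# drafts, using only Python's standard-library difflib.
import difflib

draft_a = "The parties shall mediate before filing suit.".split()
draft_b = "The parties must mediate in good faith before filing suit.".split()

for token in difflib.ndiff(draft_a, draft_b):
    if token.startswith(("+ ", "- ")):  # only additions and deletions
        print(token)
```
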
Image by Ralph Losey using WordPress’s Stable Diffusion.

5. Design for Co-Creation

  • Supporting Co-Editing and Refinement: Legal professionals frequently need to adapt and refine AI-generated content to meet specific requirements, legal precedents, or client needs. To implement this, developers should focus on co-editing features that let lawyers refine AI-generated text directly within the interface, such as tools for editing clauses in AI-drafted contracts. This approach ensures that AI outputs are not treated as final but are instead starting points that lawyers can shape to fit specific needs.

    This could also involve providing tools for manipulating charts and images or adjusting parameters to fine-tune outputs. A contract-drafting tool could enable users to revise specific clauses with versions that are either more aggressive or cooperative than standard or to incorporate additional provisions based on client instructions.
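
    A minimal sketch of clause-level regeneration with a tone parameter, assuming the official OpenAI Python client; the instruction wording and model name are illustrative:

```python
# Sketch: regenerate a single clause in a chosen tone while leaving the
# rest of the draft untouched. Assumes the official OpenAI Python
# client; the instruction wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()

def revise_clause(clause: str, tone: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    f"Rewrite the contract clause in a {tone} posture. "
                    "Preserve its legal effect; change only emphasis."
                ),
            },
            {"role": "user", "content": clause},
        ],
    )
    return resp.choices[0].message.content

original = "Either party may terminate on thirty days' written notice."
print(revise_clause(original, tone="more protective of the vendor"))
```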

  • Guiding Effective Prompt Crafting: The quality and relevance of outputs generated by AI models are heavily dependent on the prompts provided. Designers play a crucial role in helping users craft effective prompts by offering clear guidance, templates, and examples.

    This may include interactive tools that guide users in defining their needs, specifying output characteristics, and refining prompts to achieve optimal results. For instance, a legal research tool might include a structured prompt builder, helping users define research questions, specify relevant jurisdictions, and refine search parameters for more targeted results.
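
    A sketch of what such a structured prompt builder might look like; the field names and defaults are hypothetical:

```python
# Sketch of a structured prompt builder for legal research.
# Field names and defaults are hypothetical.
from dataclasses import dataclass

@dataclass
class ResearchPrompt:
    question: str
    jurisdiction: str
    date_range: str = "the last 10 years"
    exclude: str = ""

    def render(self) -> str:
        prompt = (
            f"Research question: {self.question}\n"
            f"Jurisdiction: {self.jurisdiction}\n"
            f"Limit authorities to {self.date_range}."
        )
        if self.exclude:
            prompt += f"\nExclude: {self.exclude}"
        return prompt

print(ResearchPrompt(
    question="Is a browsewrap arbitration clause enforceable?",
    jurisdiction="Eleventh Circuit",
).render())
```
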
Image by Ralph Losey using WordPress’s Stable Diffusion.

6. Design for Imperfection

  • Communicating Uncertainty Transparently: Designers must be transparent about potential imperfections in AI-generated outputs. This involves clearly communicating the technology’s limitations, displaying confidence levels, and highlighting potential error areas.

    Designers can use disclaimers and visual cues to alert users to uncertainties, encouraging critical evaluation of the results. For example, a legal research tool might use color coding to indicate confidence levels of different sources, helping users prioritize reliable information.

  • Integrating Domain-Specific Evaluation Tools: Legal professionals require ways to assess AI-generated output quality and reliability using domain-specific metrics. Designers can integrate domain-specific evaluation tools directly into legal tech applications.

    This may include features like automatic citation checks, factual accuracy verification against reliable sources, or evaluating the persuasiveness of legal arguments using predefined criteria. Providing these tools empowers users to validate AI-generated content and make informed decisions in their legal work.

    Domain-specific tools could drill down even further into sub-specialties of the law. For instance, one version for ERISA litigation and another for personal injury, or one version for civil litigation and another for criminal.
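
    As a small illustration of automated citation checking, the sketch below extracts common federal-reporter citation patterns from generated text so each can be verified against a primary source; the regular expression is deliberately narrow and illustrative, not a complete citation parser:

```python
# Sketch: extract common federal-reporter citations from generated text
# so each can be verified against a primary source. The pattern is
# deliberately narrow and illustrative, not a complete citation parser.
import re

CITE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

text = (
    "Under Twombly, 550 U.S. 544, a complaint must state a plausible "
    "claim; see also 123 F.3d 456 (placeholder citation)."
)

for cite in CITE.findall(text):
    print("Verify against a primary source:", cite)
```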

  • Offering Options for Output Improvement: Instead of presenting AI-generated outputs as final, designers should provide users with opportunities for refinement and improvement. This may include editing tools, enabling users to regenerate outputs with different parameters, or suggesting alternatives based on user feedback. Enabling users to iteratively refine AI-generated content fosters a collaborative approach to legal work, positioning AI as a starting point for human expertise and judgment.

  • Collecting Feedback for Continuous Improvement: User feedback is a critical element in adapting AI tools to real-world legal practice. Simple built-in mechanisms, such as a button to flag unclear or inaccurate results, options to suggest improvements, or ratings of feature usefulness, let developers fine-tune the tool over time. This continuous feedback loop helps retrain models, adjust parameters, refine prompts, and improve the overall user experience, ensuring that legal tech applications evolve to meet the dynamic needs of legal professionals.

    However, such user feedback features are sorely lacking in most legal software today. Far too often, users are left with limited options: complaining to project managers, voicing concerns to sales representatives, or ultimately canceling their subscription. Even direct conversations with company leaders, like CEOs or head software designers, yield little unless the vendor actually acts on the concerns raised. This creates frustration and limits the potential for meaningful product improvement.

    Legal tech companies must do more than just provide feedback channels; they must actively listen and take action. Integrating mechanisms like in-app feedback buttons, instant AI responses with timely human follow-up, automated surveys, and regular user forums can ensure that feedback doesn’t just disappear into a void. More importantly, companies should demonstrate a commitment to implementing user suggestions and keeping users informed of changes. Continuous improvement must be more than a slogan; it should be a practice embedded into every stage of development. Without this, legal professionals will inevitably turn elsewhere in search of tools that better align with their needs.
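
    A minimal sketch of such a mechanism, appending flagged outputs to a simple log that the development team can triage; the record fields and file location are illustrative:

```python
# Sketch: append flagged outputs to a JSON Lines log the development
# team can triage. Field names and file location are illustrative.
import datetime
import json
import pathlib

LOG = pathlib.Path("feedback.jsonl")

def flag_output(output_id: str, reason: str, comment: str = "") -> None:
    record = {
        "output_id": output_id,
        "reason": reason,  # e.g., "inaccurate", "unclear", "biased"
        "comment": comment,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

flag_output("summary-0142", "inaccurate", "Misstates the holding in count II.")
```
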
Tech Actively Listening to Lawyer
Image by Ralph Losey using WordPress’s Stable Diffusion.

Integrating generative AI into legal practice is not a simple transition; it requires strategic planning, targeted training, and a deep understanding of both technology and legal processes. Success will depend on close collaboration between software developers, legal professionals, and AI experts, ensuring that AI tools are tailored to the complex needs of the legal field. A key element of this collaboration is creating robust feedback mechanisms that allow legal professionals to directly shape the evolution of AI tools. By actively listening to user input and iterating on design, legal tech companies can ensure that AI applications remain relevant and effective.

With a clear roadmap that includes user training, open feedback channels, and a commitment to continuous improvement, generative AI can transform legal practice, driving progress while preserving the profession’s core values. Legal professionals and developers should begin by identifying key areas where AI can add value and prioritize building feedback mechanisms that facilitate ongoing refinement. This approach will ensure that AI integration is not only successful but also sustainable, ultimately creating tools that truly serve the legal profession’s needs.

Software Design for the Future
Image by Ralph Losey using WordPress’s Stable Diffusion.

Listen to the Echoes of AI podcast episode about this article on the EDRM Global Podcast Network.

Ralph Losey Copyright 2024 – All Rights Reserved


Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

Author

  • Ralph Losey

    Ralph Losey is a writer and practicing attorney specializing in Artificial Intelligence services. Ralph also serves as a certified AAA Arbitrator and is the CEO of Losey AI, LLC, which provides non-legal services, primarily educational services pertaining to AI and the creation of custom GPTs. Ralph has long been a leader among the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world and written over two million words on AI, e-discovery, and tech-law subjects, including seven books. Ralph has been involved with computers, software, legal hacking, and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: E-Discovery and Information Management Law, Information Technology Law, Commercial Litigation, and Employment Law - Management. For his full resume and list of publications, see his e-Discovery Team blog. Ralph has been married to Molly Friedman Losey, a mental health counselor in Winter Park, since 1973 and is the proud father of two children.
