On a recent EDRM webinar, Generative AI: Investigating the Rise of Intelligent Counsel, 60% of participants told us they had not yet tried GenAI. To provide a path for those who want to get started with GenAI, we created this How-To document for legal tech professionals and the attorneys they serve.
Barebones instructions to access ChatGPT:
- Open a web browser. Go to https://openai.com or https://chat.openai.com/auth/login.
- Create an account. Log in.
- Your input, search, or question is called a “prompt.” A prompt is unstructured text (words) describing what you would like back from ChatGPT. Look to the bottom of the OpenAI site for the prompt window.
- The output, results or answer is called a “Response.”
- Type any question regarding a nonconfidential topic into the prompt window, hit Enter, pause a second, and see what happens.
- The response will scroll onto the screen as if a person were typing.
- Refine or augment your question.
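The same prompt-and-response loop can also be driven programmatically through OpenAI's API rather than the web interface. The sketch below is a minimal illustration, not a full client: the helper function name is our own, and actually sending the request assumes the `openai` Python package and an API key.

```python
def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Package a plain-text prompt in the shape the chat endpoint expects."""
    return {
        "model": model,
        # Each conversation turn is a dict with a "role" and "content";
        # a single user turn is the simplest possible prompt.
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_chat_request("Explain the EDRM model in two sentences.")

# Actually sending the request requires the openai package and an API key:
#   import openai, os
#   openai.api_key = os.environ["OPENAI_API_KEY"]
#   response = openai.ChatCompletion.create(**request)
#   print(response["choices"][0]["message"]["content"])
```

The “prompt” you type into the web interface corresponds to the user message here; the “Response” is what comes back in the reply.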
No instructions for the legal tech community would be complete without a capabilities and limitations statement.
- Text Generation: ChatGPT can create articles, stories, scripts, marketing copy, and more. It’s a powerful tool for content creation across various domains, but be careful with legal briefs.
- Code Assistance: If you’re a developer, ChatGPT can help you write code snippets, debug code, and explain programming concepts, including technologies such as Python, JSON, and WordPress.
- Language Translation: ChatGPT is proficient in translating text between languages, making it a valuable asset for communication across borders.
- Answering Questions: You can ask ChatGPT questions on a wide range of topics and receive informative answers (see configuration to generate more factual rather than whimsical text).
- Spell and grammar check: You may find ChatGPT’s results superior to Word’s.
- Configuration: You can set ChatGPT’s creativity or hallucination level (temperature) and other parameters.
- Lack of Understanding: While it may appear humanlike, ChatGPT does not possess true understanding or consciousness. It generates text based on patterns it has learned from its training data, and it lacks genuine comprehension of concepts, emotions, or context.
- Inaccuracies and Errors: The model may produce inaccurate or incorrect information, especially when presented with complex or nuanced topics. Users should independently verify any critical or factual information generated by ChatGPT, especially by ChatGPT-3.5, which was trained on data from before 2022 and is limited to knowledge from that timeframe.
- Sensitivity to Input: ChatGPT can be highly sensitive to input phrasing and instructions. Slight changes in prompts may lead to significantly different responses, making it challenging to consistently produce the desired output.
- Generating Plausible but Untruthful Information: ChatGPT might generate text that sounds plausible but is not factually accurate. It can produce fictional or speculative content that may be misleading if taken as truth, especially since its default is to be highly creative (high temperature) and its personality is super confident.
- Biased or Inappropriate Responses: ChatGPT may inadvertently produce biased or inappropriate content, reflecting biases present in its training data. OpenAI has made efforts to reduce biases, but it’s important to review and moderate the model’s output to ensure ethical and accurate responses, especially since some of the training data came from the Dark Web and platforms with a trash-talking culture.
- Verbosity and Repetition: ChatGPT can sometimes be excessively verbose and repeat certain phrases. Users may need to manually edit or refine the generated text to improve clarity and conciseness.
- Lack of Source Citing: ChatGPT doesn’t provide citations or sources for the information it generates. Users should independently verify and cite information obtained from ChatGPT before using it in academic or professional contexts, even for items like court cases. Some recent legal research platforms that use GenAI via API send the output through a quality-control filter and return citations to cases.
- Engaging in Harmful or Unintended Activities: Without proper guidance and instruction, ChatGPT can generate content that is inappropriate, offensive, or harmful. It’s essential to carefully review and moderate the model’s outputs.
- Difficulty with Complex Reasoning: ChatGPT may struggle with complex reasoning, long narratives, or multi-step problem-solving tasks. It’s more adept at generating short and coherent text.
- Dependency on Prompts: ChatGPT’s responses are heavily dependent on the quality and clarity of the prompts provided. Ambiguous or unclear prompts may lead to suboptimal or irrelevant responses.
- Large-Scale Content Generation: Generating extremely large volumes of content in a short period using ChatGPT may result in diminishing quality or coherence as the response length increases.
Setting Parameters for ChatGPT: Temperature and Other Configurable Elements
When interacting with ChatGPT, you can adjust various parameters that influence the model’s behavior and the quality of its output, and there are several other ways to configure and fine-tune your interactions to achieve specific outcomes. Here are some configuration options and techniques:
- Temperature: The “temperature” parameter affects the randomness and creativity of the model’s responses. A higher temperature value (e.g., 0.8) makes the output more diverse and unpredictable, while a lower value (e.g., 0.2) makes it more focused and deterministic. To set the temperature, include the “temperature” parameter when creating a prompt. For example: “Write an explanation of the EDRM model, first with the temperature set at 0.8 and then with the temperature set at 0.1.” Compare the outputs.
- Max Tokens: The “max tokens” parameter controls the maximum length of the generated output. It’s useful for limiting the response to a specific length, especially when you want concise answers or snippets. Setting a value of, say, 50, will ensure that the output contains at most 50 tokens (a token is a word fragment; on average about four characters of English text).
- System-Level Instructions: You can provide explicit instructions to guide ChatGPT’s behavior. For instance, you can start your prompt with phrases like “Translate the following to French” or “Write a poem about…” These instructions help the model understand the context and task you want it to perform.
- Context Management: If you’re engaging in a conversation or multi-turn dialogue, it’s important to maintain context. You can include previous messages or prompts in the conversation history to ensure that ChatGPT understands the ongoing discussion.
- Prefixes and Introductions: Sometimes, providing a context-setting introduction before your prompt can improve the coherence of the generated response. For instance, “In the following paragraph, explain the concept of…” This helps set the stage for the desired output.
- Examples and Demonstrations: You can show ChatGPT examples of the type of response you’re looking for. Provide a sample answer and ask the model to follow a similar structure or style.
- Prompt Engineering: Crafting clear, specific, and detailed prompts can significantly influence the quality of the output. Experiment with different prompts and variations to find the most effective way to convey your intent.
- Filtering and Post-Processing: After receiving a response, you can apply additional filtering or post-processing to refine the output. This might involve removing certain phrases or words that you want to exclude.
- Custom Tokens and Special Formatting: You can use special tokens, such as <|endoftext|>, to mark the end of your prompt or indicate specific instructions. This can help improve the structure and organization of the generated text.
- Task-Specific Instructions: Depending on your use case, you can include task-specific instructions to guide ChatGPT. For example, if you’re requesting code assistance, you can specify the programming language and the problem statement clearly.
- Language and Tone: You can influence the language, tone, and style of the output by including instructions like “Write in a formal tone,” “Use simple language,” or “Write in the style of Dorothy Parker.”
- Rephrasing and Iteration: If the initial output isn’t quite what you’re looking for, you can iterate by rephrasing your prompt or refining your instructions until you achieve the desired result.
- Roles and Personas: Influence the output with instructions like “as if written by a litigation partner” or “written to be persuasive to an audience of non-college-graduate union members.”
- Experimentation: Don’t be afraid to experiment with different combinations of parameters, instructions, and prompts. AI models like ChatGPT can be highly sensitive to slight changes in input.
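In the ChatGPT web interface, these settings are expressed in plain language inside the prompt; when calling OpenAI’s API directly, temperature and max tokens are passed as explicit request parameters. A minimal sketch follows (the helper function name is our own invention; `temperature` and `max_tokens` are the parameter names the chat-completions endpoint accepts):

```python
def build_configured_request(prompt: str, temperature: float = 0.2,
                             max_tokens: int = 50) -> dict:
    """Attach sampling parameters to a chat request.

    temperature: lower values (e.g., 0.2) give focused, deterministic text;
                 higher values (e.g., 0.8) give more diverse, creative text.
    max_tokens:  a hard cap on the length of the generated response.
    """
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

# Build a focused and a creative version of the same request; each dict
# could then be sent with openai.ChatCompletion.create(**focused), etc.
focused = build_configured_request("Explain the EDRM model.", temperature=0.1)
creative = build_configured_request("Explain the EDRM model.", temperature=0.8)
```

Running both versions and comparing the outputs is the API equivalent of the temperature experiment suggested above.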
With these instructions, you should be well on your way to exploring ChatGPT. There is no shortage of GenAI tools to explore, including Microsoft Copilot, Google Bard, and others.
Additional Resources (** indicates an EDRM Trusted Partner)
Ralph Losey and the Dude: The Basics of Chat GPT AI for eDiscovery Lawyers and Technicians
John Tredennick and William Webber: ChatGPT 101- How Do I Use It?**
Legal Specific Use Cases:
John Tredennick and William Webber ChatGPT: Should Ediscovery Lawyers be Nervous?**
John Tredennick and William Webber ChatGPT: Five Ways to Use ChatGPT in Investigations and Ediscovery**
Michael Okerlund, Dr. Milena Higgins and Joe Longtin: Carpe Data! Stop “Ctrl+Fing” Transcripts and Use Testimony Intelligence**
Jeffrey Wolff, Bobby Malhotra and Mary Mack: Generative AI in Legal: Investigating the Rise of Intelligent Counsel**
Dr. Maura R. Grossman, Dr. Johannes Scholtes and John Tredennick, Esq.: Flash Webinar- Navigating the Risks of Generative AI: An Expert Panel Discussion
Doug Austin, Dr. Maura R. Grossman, Tom O’Connor and Mary Mack: Ethics and Explainability of AI Algorithms in Legal Use Cases**
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.
EDRM Copyright 2023 – Creative Commons Attribution 4.0 International. Attribute to https://EDRM.net.