The Dual Impact of Large Language Models on Human Creativity: Implications for Legal Tech Professionals


[EDRM’s Editor’s Note: This article was first published here on October 31, 2024, and EDRM is grateful to Rob Robinson, editor and managing director of Trusted Partner ComplexDiscovery, for permission to republish.]


ComplexDiscovery’s Editor’s Note: This article draws insights from the report “Human Creativity in the Age of LLMs: Randomized Experiments on Divergent and Convergent Thinking” by Harsh Kumar et al., published as a preprint in September 2024. The study provides empirical evidence on the dual impact of large language models on human creativity. It is essential reading for professionals considering the integration of AI in their workflows, highlighting the importance of balancing efficiency gains with the preservation of independent cognitive abilities.


In the age of rapid technological advancement, large language models (LLMs) like ChatGPT are reshaping the way professionals in various fields approach their work. But as these tools become commonplace in creative processes, questions arise about their long-term effects on independent human cognition. A recent study titled “Human Creativity in the Age of LLMs: Randomized Experiments on Divergent and Convergent Thinking” by Harsh Kumar and colleagues from the University of Toronto provides essential insights into how LLMs influence human creativity, both immediately and after the tools are set aside.

Study Overview and Purpose

The study aimed to explore how different forms of LLM assistance affect human creativity, focusing on divergent thinking (generating varied and unique ideas) and convergent thinking (refining and narrowing down to effective solutions). This topic is especially relevant as professionals increasingly rely on AI to assist in complex cognitive tasks. The paper’s authors sought to understand not just the short-term productivity boosts provided by LLMs but also their potential implications for independent, unassisted performance.

Experimental Framework

The research involved two separate, pre-registered experiments with 1,100 participants. The first experiment tested divergent thinking using the well-known Alternate Uses Test (AUT), where participants brainstormed creative uses for common objects. The second experiment evaluated convergent thinking through the Remote Associates Test (RAT), which asks participants to identify a single word connecting three seemingly unrelated words (for example, “ice” links “cream,” “skate,” and “water”).

Participants were split into three groups: those who received no LLM assistance, those given a standard LLM-generated list of answers, and those guided by an LLM using structured prompts without revealing direct solutions (mimicking a coaching approach). After initial rounds with or without assistance, all participants completed a final round without any AI support to assess their independent creative output.

Key Findings

  1. Immediate vs. Residual Effects: The study confirmed that LLMs can indeed boost performance during assisted tasks. Participants who received AI assistance generated more ideas or solved problems faster during the exposure phase. However, these gains did not always translate into improved performance in subsequent unassisted rounds. Notably, participants who worked with LLM-generated strategies showed diminished originality and diversity in their independent work.
  2. Divergent Thinking Observations: In tasks focused on divergent thinking, participants displayed an initial hesitance to adopt AI-suggested ideas. When tested without assistance, those who had prior LLM exposure often generated fewer original or varied ideas compared to the control group. This suggests a potential “homogenization” effect, where repeated LLM use leads to a narrowing of thought.
  3. Convergent Thinking Results: The effects on convergent thinking were different. While participants with direct LLM assistance found immediate success in identifying solutions, those who received strategic, coach-like guidance fared worse in subsequent independent rounds. This indicates that while structured assistance might help solve problems quickly, it could impede users’ ability to internalize problem-solving strategies effectively.
  4. Diversity and Creativity: Across both experiments, the research highlighted that over-reliance on AI can result in reduced creative diversity. When participants used LLM-generated strategies, their independent outputs were more similar to each other, raising concerns about long-term impacts on collective innovation.

Implications for Legal Tech Professionals

These findings carry significant implications for professionals in the legal technology sector, where creativity and precision are both vital. Legal work often involves divergent thinking (e.g., brainstorming case strategies) and convergent thinking (e.g., distilling key arguments or synthesizing complex information). The potential for LLMs to offer immediate efficiency gains is attractive but must be balanced with an awareness of potential cognitive dependencies.

Enhancing Efficiency Without Sacrificing Independence: Legal tech professionals must ensure that reliance on LLMs doesn’t erode the ability to think creatively and independently. This means incorporating AI in a way that supports, but does not replace, human cognitive processes. For instance, using LLMs to structure initial drafts or offer diverse case law examples can streamline workflows. However, practitioners should critically engage with and adapt these AI suggestions to preserve unique insights and case-specific innovation.

Mitigating Homogenization: The study’s findings on homogenization should prompt legal teams to develop practices that maintain diversity in thought and argumentation. Diverse perspectives are essential for crafting compelling legal strategies and anticipating counterarguments. Overuse of AI tools that funnel thinking into a narrow band could stifle the innovative approaches needed in complex legal challenges.

Designing AI Tools as Co-Creative Partners: The research underscores the need for AI tools that act as “coaches”—augmenting rather than substituting for human effort. Legal tech developers should prioritize designing systems that provide support while fostering users’ problem-solving skills. This can be achieved through iterative interactions in which LLMs encourage exploration without presenting complete answers, allowing lawyers to build their own reasoning capabilities.

Final Thoughts

The integration of LLMs into the professional workflow is not without its trade-offs. As Kumar’s study shows, while these tools can offer short-term creative enhancements, their influence may extend into how users engage with tasks once the AI is turned off. For legal tech professionals, the challenge lies in leveraging the efficiency of AI while nurturing the independent thinking that underpins the practice of law. Balancing these aspects is crucial for sustaining innovation and maintaining a competitive edge in the evolving landscape of legal services.

Read the original release here.


About ComplexDiscovery OÜ

ComplexDiscovery OÜ is a highly recognized digital publication providing insights into cybersecurity, information governance, and eDiscovery. Based in Estonia, ComplexDiscovery OÜ delivers nuanced analyses of global trends, technology advancements, and the legal technology sector, connecting intricate issues with the broader narrative of international business and current events. Learn more at ComplexDiscovery.com.


*Reported on with permission per Creative Commons (CC BY 4.0).

Source: ComplexDiscovery OÜ


Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

Author

  • Rob Robinson

Rob Robinson is a technology marketer who has held senior leadership positions with multiple top-tier data and legal technology providers. He writes frequently on technology and marketing topics and publishes regularly on ComplexDiscovery.com, of which he is the Managing Director.
