[Editor’s Note: This article was first published July 19, 2023, by Law360 Canada; the author is Cristin Schmitz.]
Legal experts in professional ethics and computer science are questioning the recent trailblazing moves of two Canadian trial courts that issued novel practice directions requiring lawyers and litigants to disclose to the bench any use of “artificial intelligence” (AI) in their legal research and court submissions.
Law360 Canada asked University of Ottawa law professor Amy Salyzyn, board chair of the Canadian Association for Legal Ethics, and University of Waterloo computer science professor Maura Grossman, a Buffalo, N.Y., attorney who teaches “Legal Values and Artificial Intelligence” at Osgoode Hall Law School, to reflect on Canadian courts’ initial efforts to oversee lawyers’ and litigants’ reliance on AI in legal proceedings, following the Nov. 30, 2022, public debut of OpenAI’s chatbot, “Chat Generative Pre-trained Transformer” (ChatGPT).
The moves were spurred by a high-profile American case last month, in which a federal judge fined two New York attorneys for filing, without verifying, a ChatGPT-generated legal brief that cited non-existent precedents, as well as by a general concern about fake law corrupting the Canadian justice system. In response, the chief justices of the Manitoba Court of King’s Bench and the Yukon Supreme Court separately issued groundbreaking practice directions in June that require parties and counsel to disclose to the trial courts their use of, or reliance on, AI in court proceedings or filings, and to explain to the bench for what purpose, or how, such AI was used.
Read the entire article here.