Practice directions push for transparency when AI is used in legal matters
ChatGPT inaccuracies make headlines as artificial intelligence technology is shown to fabricate information and expose lawyers to the risk of breaching privilege
With user-friendly artificial intelligence (AI) programs such as ChatGPT gaining popularity, two Canadian courts have taken proactive measures to ensure transparency when AI tools are used in the practice of law.
In June 2023, the Court of King’s Bench of Manitoba and the Supreme Court of Yukon issued practice directions, both highlighting concerns regarding the reliability and accuracy of research generated by AI tools. In response to these concerns, both courts now require that when AI technology is used in the preparation of legal submissions, the party disclose what tools were used and for what purpose.
Courts in Alberta, Quebec and Nova Scotia followed suit in October 2023, each publishing a notice to the profession urging caution when using AI in submissions to the court. In December 2023, the Federal Court of Canada issued a notice requiring a declaration at the top of any court document that contains AI-generated content, and announced interim guidelines under which Federal Court judges will not use AI in their decisions.
It is reasonable to expect that more directions of this nature will follow in other jurisdictions.
ChatGPT launched at the end of November 2022, and in less than a year the consequences of misusing AI in law practice had already reached the news. Two New York lawyers were sanctioned in June 2023 after it was discovered they had submitted a legal brief containing fake case citations generated by ChatGPT.
According to the Findings of Fact in that case, one of the lawyers admitted he had attempted to verify some of the information generated by ChatGPT by double-checking the case citations with an online search. When his search returned no results, he assumed the cases were simply difficult to find – claiming he was “operating under the false assumption and disbelief that this website could produce completely fabricated cases.”
The sanctions for the New York lawyers included a US$5,000 fine and an order to notify the very real judges who were quoted in the decisions fabricated by the AI.
This likely will not be the last AI-related legal scandal in the news as the world watches these chatbot tools rise in mainstream use. While the future impact of AI on the profession is uncertain, several important observations have emerged in the current discourse.
It can lie – and then lie about lying
Some users have noted that the issue with ChatGPT is not simply that it can make mistakes – it is the confidence with which the tool serves up false information.
In one experiment testing the AI’s simple counting skills, ChatGPT got the answer wrong. When the user questioned the result, the chatbot proceeded to give three different answers – each time apologizing and assuring the user that the new answer was correct.
These confident fabrications are referred to as AI hallucinations. According to ChatGPT’s creator company OpenAI, the program gathers its information from three primary sources: “(1) information that is publicly available on the internet, (2) information that we license from third parties, and (3) information that our users or our human trainers provide.”
The AI then combs through this data, pinpointing statistical patterns, and the answers it generates are combinations of words it deems statistically relevant. It does not distinguish between what is true and what is false.
It is a tool, you are still the lawyer
The world took notice this year when an updated version of ChatGPT (GPT-4) scored in the 90th percentile on a simulated bar exam. Just months before, the previous version of ChatGPT had flunked a good portion of the same exam.
It is a significant achievement with profound implications for the legal profession, but we are not about to replace lawyers with a legion of robots (though that does sound like a compelling plot for the next TV legal drama). While AI can be a useful tool for legal research, it cannot replace the education, experience, and insight of a skilled lawyer. Further, a lawyer is ultimately responsible for the tools they use. They cannot blindly rely on them; they must exercise good judgement and oversight.
A responsible lawyer would not run a complicated Google search and blindly take the first result the search engine spits out. Tools like ChatGPT are another way of sifting through large volumes of data – without automatic safeguards against misinformation creeping into the results – so it is essential to weigh their responses with a critical eye.
It might share input with other ChatGPT users
Employees at Samsung learned the hard way that ChatGPT logs every conversation and may use those inputs to inform its future responses to other users. The Samsung employees caused a data leak by uploading sensitive information into the AI tool; in one reported example, a person uploaded the transcript of an entire meeting and asked ChatGPT to create the meeting minutes.
Given that anything input into ChatGPT could be absorbed into the AI’s knowledge bank and later regurgitated in another user’s conversation, it is not a stretch to see how easily a lawyer could accidentally breach client privilege while conducting research with ChatGPT.
The onus is on lawyers to understand the tools they choose to use, and to use them responsibly. As seen in the case of the two New York lawyers, failing to understand how technology works is not a good excuse when it comes to such critical mistakes in legal representation.
ALIA does not provide legal advice. ALIAdvisory newsletters, ALIAlert warnings, ALIAction notices and the content on ALIA’s website, notices, blogs, correspondence and any other communications are provided for general information purposes only and do not constitute legal or other professional advice or an opinion of any kind. This information is not a replacement for specific legal advice and does not create a solicitor-client relationship.
ALIA may provide links to third-party websites. Links are provided for convenience only; ALIA does not vet or endorse the information contained in linked websites or guarantee its accuracy, timeliness or fitness for a particular purpose.