The use of generative Artificial Intelligence (AI) platforms has surged among lawyers in recent years. The potential of AI to aid legal research, draft legal documents, provide instant access to information, and deliver significant efficiencies is appealing to legal practitioners. However, there are risks and limitations that need to be kept front of mind.
One of the most notable generative AI developments has been the emergence of large language model (LLM) chatbots. An LLM performs the task of predicting the next word in a sequence of words: these tools are trained on vast amounts of internet data so that they can accurately predict what word comes next in a sentence.
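As a concrete, deliberately simplified illustration of that mechanism, the short Python sketch below builds a toy word-frequency model and then predicts the most statistically likely next word. This is a minimal sketch only: real LLMs use neural networks trained on billions of examples, and the miniature ‘training text’ and function names here are invented purely for demonstration.

```python
# Toy illustration of next-word prediction (not a real LLM):
# count which word follows each word in some training text,
# then predict the most frequently observed follower.
from collections import Counter, defaultdict

# Invented miniature corpus, for demonstration only.
training_text = (
    "the court held the lawyer accountable "
    "and the court fined the lawyer"
).split()

# For each word, count the words observed immediately after it.
followers = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in training."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> 'court' (ties broken by first occurrence)
```

A real LLM performs the same kind of statistical guesswork at vastly greater scale, which is also why it can produce fluent text that is confidently wrong.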
This article examines the limitations of, and risk-management strategies for, legal practitioners’ use of generative AI, illustrated by reference to ChatGPT.
Limitations
Information accuracy
A significant concern with AI tools is their capacity to generate false responses, known as ‘hallucinations’. When ‘hallucinating’, a tool can fabricate content or cite non-existent cases, so any output must be carefully scrutinised and independently verified.
The case of Roberto Mata v Avianca exemplifies the importance of independently verifying AI outputs. In this case, a lawyer from a prominent firm relied on ChatGPT for research purposes and filed a document referring to several non-existent cases. The lawyer, who admitted to never having used ChatGPT for legal research before, said that he ‘was unaware of the possibility that its content could be false’. The court held the lawyer accountable, and he was fined for submitting misleading and erroneous information to the court.
In another case, the mayor of a Victorian council initiated legal action against OpenAI, the developer of ChatGPT, on the basis that the tool wrongly stated he had been imprisoned for bribery when, in reality, he had been a whistleblower in the alleged bribery scandal and was never charged with a crime himself. Although the content has since been removed, it is easy to see how such inaccuracies could carry significant legal consequences.
ChatGPT’s homepage now specifically states that ‘ChatGPT may produce inaccurate information about people, places, or facts.’
ChatGPT’s knowledge is limited by the currency of its initial training data, which extends only to September 2021. Consequently, its responses may not reflect current legislation or recent case law, creating further legal risks for lawyers relying solely on ChatGPT’s outputs.
Confidentiality
AI tools can facilitate breaches of confidentiality: data supplied by a user may find its way into the public domain, whether through subsequent queries by other users or through vulnerabilities inherent in rapidly developed software. In March 2023, OpenAI confirmed ChatGPT’s first data breach, which exposed a range of information including the questions other users had asked the tool and personal data such as email addresses.
Firms using generative AI tools may also risk violating non-disclosure agreements or client agreements that expressly prohibit the use of AI technologies.
Privacy
As with confidential information, lawyers should be careful not to input personal data into AI tools.
The use and disclosure of personal information is subject to varying laws in different jurisdictions, including Australia's Privacy Act 1988. Lawyers using AI tools must be mindful of their obligations concerning the handling of personal data.
Practitioners should be aware that AI products can collect information about the user. As at August 2023, ChatGPT’s privacy policy states that it collects IP addresses and browser information, as well as data on users’ interactions with the site. Critically, it also states that it may share users’ personal information with unspecified third parties, without informing them, to meet its business objectives.
Copyright and intellectual property
AI tools are trained on a wealth of data that is likely to include copyrighted material. At present, ChatGPT does not provide source references or explain how its output is generated, which poses a significant risk for practitioners relying on that output. Users may also unknowingly violate copyright laws when reproducing its responses.
Risk management
Due to the limitations above, practitioners contemplating the use of generative AI technologies such as ChatGPT should consider implementing the following safeguards.
Independent verification
AI tools are not wholly reliable in providing legal information. All outputs should be scrutinised and treated as a supplement to the research process rather than as a definitive source. Always verify critical legal information against reputable and up-to-date legal sources before using it in legal work.
Data protection
Avoid inputting client data or other sensitive or confidential information into AI tools. Limit input to non-sensitive, publicly available information for research purposes. Opt out of data retention and use other measures offered by AI tools to minimise the risk of sensitive information being stored or shared.
ChatGPT provides useful information on opt-out options.
Copyright compliance
Keep abreast of copyright laws. Avoid adopting AI responses verbatim; use them only as a starting point before applying your own legal skill and judgement. Always be mindful that the use of AI-generated content could inadvertently expose you to copyright breaches.
Privacy compliance
Adhere to relevant privacy laws and regulations and ensure that data collection, storage, and usage practices align with legal requirements for your jurisdiction.
Supervise staff using AI tools
If you supervise staff who you know use AI tools, take care to ensure the accuracy and validity of the work they generate. Some practitioners may lack the experience to appreciate the deficiencies of AI outputs, necessitating extra oversight and guidance.
Training and awareness
Develop internal usage policies, and ensure staff are educated on the proper and safe use of AI tools as well as the risks presented by AI.
A survey conducted by a social network for professionals revealed that 68% of surveyed employees who were using AI tools at work were not telling their managers, making clear policies and management oversight critical.
Client engagement
Consider whether client engagement documents need to be amended to make clear that the client should not enter any correspondence, documents or other written materials provided by your firm into ChatGPT.

Also consider whether it is a ‘red flag’ when a potential new client has already ‘researched’ their matter using ChatGPT and has drafted documents with it.
Conclusion
ChatGPT, and generative AI technologies designed specifically for lawyers, are developing rapidly. Practitioners owe professional and ethical duties to their clients, including the duty to ensure that client confidentiality and privacy are protected, and they must ensure that any AI-generated content is reviewed, considered, scrutinised and adapted before use. While this article was based on research from a range of reliable sources as at August 2023, given the rapid development of the technology, case law and legislation, practitioners should stay abreast of current developments.
Further reading
In June 2023, the Federal Government commenced consultation on the Safe and Responsible AI in Australia Discussion Paper. The consultation seeks views on how the Australian Government can ‘mitigate any potential risks of AI and support safe and responsible AI practices’.
In July 2023, the NSW Bar Association issued a report on Issues Arising from the Use of AI Language Models (including ChatGPT) in Legal Practice, which practitioners may find helpful.