AI tools offer lawyers both benefits and challenges. The challenges are best managed through clear risk management principles, robust policies, and an awareness of the risks inherent in AI usage.
AI tools offer substantial advantages in tasks such as research, document review, and drafting; however, they also introduce unique risks. One area of specific concern is the tendency for AI to give incorrect information with apparent confidence (to be confidently incorrect). Around the world, including in Australia, lawyers using AI-assisted research tools have been asked to explain why they have provided the court with non-existent cases. Invariably the lawyers' response has been that they did not understand how the technology works and, consequently, did not understand how to use it safely.
The Supreme Court of Victoria's Guidelines for litigants: Responsible use of Artificial Intelligence in Litigation state that:
Parties and practitioners who are using AI tools in the course of litigation should ensure they have an understanding of the manner in which those tools work, as well as their limitations. (Supreme Court of Victoria, Guidelines for litigants: Responsible use of Artificial Intelligence in Litigation (2024))
This requires that users of AI technologies understand both how the technology achieves its results and how to incorporate it into a work process that mitigates the risks inherent in the tool.
How does generative AI get it wrong?
Understanding how the technology works helps in knowing how to use it safely and therefore in mitigating the risks associated with it. Generative AI produces text using machine learning models trained on vast datasets of human language, such as articles, books, and web content. These models learn to predict and generate words based on patterns and context in the data, often creating coherent and contextually relevant responses. However, because generative AI relies on statistical patterns rather than understanding, it can produce “hallucinations”, or fabricated and incorrect information, without signalling uncertainty.
Hallucinations can arise when the model encounters gaps in its training data, ambiguous prompts, or requests for information that is esoteric or does not exist. In these situations, the AI will apply its best prediction, which in turn can generate plausible-sounding but inaccurate content. Because the AI does not truly “know” or “verify” facts, the output can appear highly credible while being wrong; this tendency to invent details poses risks in situations that require accuracy and reliability, such as legal advice.
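To make the point concrete, the deliberately simplified Python sketch below mimics how a language model chooses each next word from frequency patterns alone. Real models operate over tokens with learned probabilities across enormous vocabularies, and the toy vocabulary, word lists, and sample output here are invented purely for illustration, but the underlying behaviour is the same: the system produces whatever is statistically plausible, with no step that checks the result against reality.

```python
import random

# A toy "language model": for each preceding word, a list of likely next words
# learned purely from frequency patterns in some (hypothetical) training text.
# There is no knowledge base and no verification step.
toy_model = {
    "the":       ["court", "plaintiff", "contract"],
    "court":     ["held", "found", "ordered"],
    "held":      ["in", "that"],
    "in":        ["Smith", "Jones", "Brown"],
    "Smith":     ["v"],
    "Jones":     ["v"],
    "Brown":     ["v"],
    "v":         ["Jones", "Brown", "Smith"],
    "that":      ["the"],
    "found":     ["that"],
    "ordered":   ["that"],
    "contract":  ["was"],
    "was":       ["void"],
    "plaintiff": ["argued"],
    "argued":    ["that"],
}

def generate(prompt_word: str, length: int = 8) -> str:
    """Generate text by repeatedly picking a statistically plausible next word.
    Nothing in this process checks whether the result refers to anything real."""
    words = [prompt_word]
    for _ in range(length):
        candidates = toy_model.get(words[-1])
        if not candidates:   # a gap in the "training data"; a real model would still produce something
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# Example output: "the court held in Smith v Brown ..." It reads fluently and
# authoritatively, but the "case" is an artefact of word statistics, not a
# record of any real decision.
```

A completion such as “the court held in Smith v Brown” looks like a citation, yet it is simply the most statistically convenient sequence of words, which is precisely how fictitious cases end up in court documents.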
Whether the AI technology is free or for fee (such as part of a research or practice management system), lawyers should be aware of the possibility that it can produce results that are confidently wrong. To this end, results should be treated with caution and always verified by the lawyer to ensure that the information is correct. Failure to correct erroneous information could lead to significant liabilities such as personal costs orders, complaints, referrals by the court to the Victorian Legal Services Board + Commissioner, or claims against the practitioner.
Unmanaged AI risks and shadow IT
If understanding how the technology works is the first part of an AI risk mitigation strategy, then using the technology safely is the second. And the antithesis of safe AI practice is the use of secretive or unauthorised AI tools. Shadow IT, the use of unauthorised technology solutions, presents significant risks in legal practice. With the increased availability of high-powered AI-based tools, lawyers and staff may turn to unsanctioned AI applications for tasks like legal research, drafting, or document review. However, using unapproved AI tools can expose law firms to a range of risks, including compromised data security and a lack of explainability. (In artificial intelligence, explainability, also known as interpretability, is the ability to explain the internal workings and outcomes of an AI model in a way that humans can understand.)
To mitigate these risks, law firms should adopt a proactive approach to identifying the most appropriate AI tool for a given task, reducing the risk of unsafe tools being used. Open communication channels that encourage employees to disclose technology use, combined with a clear policy defining acceptable AI tools, can help manage and reduce the risks associated with shadow IT in legal practice.
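One practical way to operationalise an approved-tools policy is a simple allowlist check applied to the tools staff disclose or that appear in a software inventory. The Python sketch below is illustrative only; the tool names, task categories and escalation wording are hypothetical placeholders, not real products or mandated procedures.

```python
# Hypothetical allowlist of approved AI tools and the tasks each is approved for.
APPROVED_AI_TOOLS = {
    "ResearchAssistPro": {"legal research"},
    "DraftReviewAI": {"document review", "first-draft generation"},
}

def check_tool_use(tool: str, task: str) -> str:
    """Classify a reported tool/task combination against the firm's AI Usage Policy."""
    if tool not in APPROVED_AI_TOOLS:
        return f"'{tool}' is not an approved AI tool: treat as potential shadow IT and escalate."
    if task not in APPROVED_AI_TOOLS[tool]:
        return f"'{tool}' is approved, but not for '{task}': check the policy before proceeding."
    return f"'{tool}' may be used for '{task}' under the AI Usage Policy."

print(check_tool_use("ResearchAssistPro", "legal research"))
print(check_tool_use("FreeChatBot", "drafting client advice"))
```

The value of the exercise is less in the code than in the conversation it forces: the firm must decide, in advance, which tools are approved and for which tasks.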
AI Usage Policy as a Risk Management Control
Applying risk management principles to AI implementation and usage is essential for ensuring these technologies operate within safe and ethical guidelines. An effective AI Usage Policy is a primary control that sets these clear boundaries. The purpose of the policy is to govern the use of AI tools and to state what is and is not acceptable use, including which tools can be used and for what purposes.
It begins with defining which AI applications are permissible, specifying the types of tasks where AI can be used, such as document analysis or legal research, and outlining prohibited uses, especially those that may compromise client confidentiality or data integrity (in artificial intelligence, data integrity refers to the quality and accuracy of the data that the system uses or was trained on). Data security protocols are crucial in an AI Usage Policy. The policy should require robust measures for protecting client data, including encryption, access restrictions, and compliance with privacy regulations. It should also mandate the vetting of third-party vendors to ensure that AI tools meet security and ethical standards.
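These elements are easier to audit when recorded in a structured form. The sketch below shows one hypothetical way of capturing permitted tasks, prohibited uses and vendor security vetting for each approved tool; every name and field is an invented example, not a recommendation of any particular product or checklist.

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    """One entry in a (hypothetical) AI Usage Policy register."""
    name: str
    permitted_tasks: list[str]
    prohibited_uses: list[str]
    security_controls: dict[str, bool]  # e.g. encryption, access restrictions, privacy compliance

    def vetted(self) -> bool:
        # A tool is only cleared for use if every required security control is in place.
        return all(self.security_controls.values())

research_tool = ApprovedTool(
    name="ResearchAssistPro",  # hypothetical product name
    permitted_tasks=["legal research", "document analysis"],
    prohibited_uses=["uploading identifiable client data", "issuing advice without lawyer review"],
    security_controls={
        "encryption_in_transit_and_at_rest": True,
        "role_based_access_restrictions": True,
        "privacy_law_compliance_confirmed": True,
    },
)

print(f"{research_tool.name} cleared for use: {research_tool.vetted()}")
```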
Transparency and explainability are additional key components. The policy should require that any AI tools in use have a level of transparency that allows users to understand how decisions are made, particularly for high-stakes applications like predictive analysis (predictive data analysis is a type of AI tool that uses patterns and statistics, among other things, to forecast future events; such tools can be used in litigation). It should be noted that creating the policy will require some reflection on which AI tools are beneficial and the risks associated with their use.
Finally, the policy should encourage continuous employee training on AI risks and proper usage. This training should foster a culture of responsible AI use, where staff are aware of the potential ethical and legal implications of using AI in their practice.
Controls do not eliminate the need for judgement
Importantly, lawyers should keep front of mind that the use of generative AI, even with the implementation of AI risk management controls, does not relieve them of the burden of exercising good judgement (Supreme Court of Victoria, Guidelines for litigants: Responsible use of Artificial Intelligence in Litigation (2024). Rule 8 states: Generative AI does not relieve the responsible legal practitioner of the need to exercise judgment and professional skill in reviewing the final product to be provided to the Court). As always, the product of an AI tool should be treated as a stage in the workflow of legal practice, with the lawyer’s oversight and reasoning always taking pre-eminence.
Tips
- Treat AI with caution. Always check the AI output and use sound judgement. Remember that the lawyer is ultimately responsible for the final result.
- Implement an AI Usage Policy. Establish clear guidelines on acceptable AI applications and data security protocols, and instigate regular audits to ensure compliance and ethical standards. Identify any unauthorised AI tools in use to reduce the risks of shadow IT.
- Train staff on AI risks and compliance. Enhance awareness and ensure that employees understand how to use approved AI tools responsibly.