How Generative AI Can Enhance Legal Research Responsibly
Legal professionals are now moving beyond the initial hype surrounding generative artificial intelligence (AI) technology and taking a clear-eyed look at specific potential use cases for AI tools in the practice of law.
The LexisNexis International Legal Generative AI Study found that roughly two-thirds (65%) of lawyers see the greatest potential for generative AI tools in assisting them with legal research. This topped drafting documents (56%), document analysis (44%) and email writing (35%) among the leading potential use cases.
Legal technologists and industry observers agree that generative AI will significantly enhance legal research capabilities, provided that these breakthrough tools are developed and used responsibly.
Why is legal research a primary use case for generative AI?
“Generative AI can sift through vast amounts of legal precedents, statutes and case law to identify relevant information quickly,” reports Rooney Law Insights. “It can provide concise summaries, highlight critical arguments, and even predict potential outcomes based on historical data, enabling attorneys to expedite their research process.”
Of course, as we blogged about recently, LexisNexis has been leading the way in the development of legal AI tools for years, working to provide lawyers with legal research products that leverage the power of AI technology to support key legal tasks. We’re now pioneering the use of generative AI for legal research, with a focus on how these tools can enable legal professionals to achieve better outcomes and responsibly support the practice of law.
In essence, generative AI represents an exciting new era in legal research because this technology does more than search databases and surface relevant content; generative AI uses Large Language Models (LLMs) to create new content in the form of images, text, audio and more. Specific applications that we envision for the future include conversational search, insightful summarization, and intelligent legal drafting capabilities — all supported by state-of-the-art encryption and privacy technology to keep sensitive data secure and ensure reliability.
But the key to the success of generative AI in legal research is to embrace and deploy tools that have been developed specifically for use in the legal context. These tools must be built with caution, deliberation and sensitivity to the unique ethical requirements of legal practice.
The Lexis Practical Guidance team recently published an infographic checklist that provides an overview of five key considerations attorneys should keep in mind when using generative AI in their legal practices. These cautions include the following:
- Protect confidential information — Do not enter any information that is protected by the attorney-client privilege or that contains your client’s confidential, sensitive or proprietary information. Such a disclosure could constitute a breach of your duty of confidentiality.
- Watch for hallucinations — Open-web generative AI apps, such as ChatGPT, are known to sometimes provide misinformation or entirely made-up responses (known as “hallucinations”). Independently confirm the veracity of information that any LLM provides.
- Trust, but verify — A generative AI tool is trained on data that is selected by the developer, so there is a risk of inaccurate answers to your queries if that data set contains errors. Carefully review and verify all research outputs with a reliable legal research database to ensure their accuracy.
- Ensure client needs are met — Generative AI models are not reading and interpreting cases or secondary sources to provide an informed response to your request. Be sure to supplement all results with your own research and analysis to meet your client’s needs.
- Understand plagiarism risks — An LLM’s sources are not readily apparent to end users. When drafting documents based on legal research aided by generative AI, proceed cautiously to ensure you are not plagiarizing an existing source and exposing your firm to liability.
These concerns are real and important, but legal professionals need to understand them and let those cautionary notes guide their selection of generative AI tools and how they use them. Ignoring this technological revolution is not a strategy for success in the future practice of law.
“While we must address the legitimate concerns regarding the risks of generative AI adoption, it’s equally critical to recognize the transformative value this technology can offer if we are able to effectively manage the risks in an ethical, responsible and compliant manner,” write Natalie Pierce and Stephani Goutos, attorneys at Gunderson Dettmer, in the Columbia Law School Blue Sky Blog.
We have developed Lexis+ AI, a generative AI platform, to support legal professionals in the ethical and responsible adoption of this exciting new technology. We believe this innovative product will allow us to meet users wherever they are in their legal research process. Lexis+ AI is built on the largest repository of accurate and exclusive legal content, providing lawyers with trusted, comprehensive results that are backed by verifiable and citable authority.