Blink twice, and you might miss the rapid changes happening in legal technology. As larger companies launch AI-driven tools and legal ethics AI guidelines begin to emerge, what will come next? There’s no telling. The practice of law, and even how we perceive the nature of legal work, is undergoing a significant change. Rapid generative AI advancement has placed the legal industry at a crossroads, and regardless of the path chosen, things may never be quite the same.
In November, the legal industry saw two major generative AI developments, showcasing the swift evolution of this technology: Thomson Reuters announced the integration of generative AI into a number of its platforms, while LexisNexis announced the further roll-out of generative AI across its suite of products.
Thomson Reuters’ recent integration of generative AI into its platforms was significantly enabled by the acquisition of Casetext and its generative AI legal assistant, CoCounsel, for $650 million in August. The acquisition supported Thomson Reuters’ introduction of AI-powered enhancements, including AI-assisted research for Westlaw Precision customers, a new generative legal AI assistant interface for its suite of products, and CoCounsel Core, a legal assistant that complements Westlaw Precision and provides lawyers with eight core skills: AI-Assisted Research on Westlaw Precision, Prepare for a Deposition, Draft Correspondence, Search a Database, Review Documents, Summarize a Document, Extract Contract Data, and Contract Policy Compliance.
Meanwhile, LexisNexis also stepped up, launching Lexis+ AI in the U.S., which includes conversational search, intelligent drafting, and document summarization. Also released last month was its new generative AI service, Lexis Snapshot, which sends users alerts containing summaries of legal documents across the LexisNexis portfolio. Additionally, Lexis+ AI capability is now available in the Lexis Create document drafting tool integrated into Microsoft Word.
In parallel to these announcements, the State Bar of California’s Committee on Professional Responsibility and Conduct (COPRAC) released practical guidance for the use of generative AI in the practice of law on November 16. Importantly, at the outset, COPRAC explained that “the existing Rules of Professional Conduct are robust, and the standards of conduct cover the landscape of issues presented by generative AI in its current forms. However, COPRAC recognizes that generative AI is a rapidly evolving technology that presents novel issues that might necessitate new regulation and rules in the future.”
One notable recommendation concerns billing for AI-generated work. COPRAC advises that lawyers may charge for the time they spend using AI to develop inputs and edit outputs, but should not bill hourly for the time saved by AI use. Also important was the admonition that all AI-created output should be carefully reviewed and corrected for accuracy before submission to a court.
Some of the key guidance provided covers the following issues:
- Duty of confidentiality: Lawyers must ensure that no confidential information is input into generative AI systems absent sufficient data protection.
- Duty to supervise: Supervisory lawyers should establish clear policies on the permissible uses of generative AI and ensure compliance with professional obligations.
- Client communication obligations: Lawyers should consider disclosing their intention to use generative AI to clients and also be aware of client directives that would conflict with its use.
- Technology competence: Lawyers must understand how generative AI works and what its limitations are, and should carefully review AI outputs for accuracy and bias.
- Charging for work produced by generative AI: Lawyers may charge for the time spent creating, refining, and reviewing generative AI outputs but must not charge for the time saved by using generative AI. Fees and costs associated with generative AI should be clearly explained in fee agreements.
- Candor to the tribunal and prohibition on discrimination: Lawyers must review all generative AI outputs for accuracy and correct any errors before submission to courts.
As we witness the rapid deployment of generative AI tools by leading legaltech companies, the simultaneous arrival of legal ethics guidance is both timely and necessary. The novelty of generative AI technology and the unprecedented pace of technological advancement have placed legal professionals in an uncomfortable position: risk obsolescence or adopt untested generative AI tools with no clear guidance on how to do so ethically.
The latest AI releases and California’s ethics guidance provide a clear path forward for lawyers seeking to embrace change and innovate in their practices. Even so, make sure to hold on to your hats. All signs point to continued exponential rates of advancement and unpredictable times ahead. The more you can adapt and roll with the changes, the better off you — and your practice — will be in the weeks and months to come.
Nicole Black is a Rochester, New York attorney and Director of Business and Community Relations at MyCase, web-based law practice management software. She’s been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, co-authors Social Media for Lawyers: the Next Frontier, and co-authors Criminal Law in New York. She’s easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter at @nikiblack and she can be reached at firstname.lastname@example.org.