There’s so much going on with AI right now that it’s hard to know which way to turn. Now that the kerfuffle with OpenAI’s CEO Sam Altman appears to have simmered down, what’s next? Is that a question that we should ask ChatGPT? That’s what I did.
Here’s its verbatim response:
I don’t have real-time information, and my training only includes data up to January 2022, so I can’t provide updates on specific events or developments that may have occurred after that date. As of my last update, I don’t have information on future events, including the future activities or decisions of OpenAI and its board of directors.
For the latest and most accurate information, I recommend checking recent news articles, official announcements from OpenAI, or their official website. Keep in mind that the future actions of organizations can be influenced by various factors, and predicting specific outcomes can be challenging.
All righty, then. Note that ChatGPT’s training data currently extends only to January 2022. Thus, you are out of luck finding the most recent cases on your topic. Arrrggghhhh! Does that mean that you must Shepardize? (I know, an archaic, dinosaur term.)
It’s hard enough for a lawyer’s ego when he loses a case to another lawyer or law firm. But what about when a law firm loses a case to an entity not even authorized to practice law, that is, a robot law firm? Ouch! That’s the fate that befell an Illinois law firm that sued DoNotPay alleging “false association and false advertising under the federal Lanham Act and Illinois state law.” Since even a robot law firm knows that “only a fool has himself for a lawyer,” DoNotPay hired real live counsel, and its human lawyers succeeded in getting the case dismissed.
Have you ever heard the term “social loafing”? It doesn’t refer to laziness on social media, but to another kind of laziness: the willingness to let AI do the heavy lifting while we kick back. The issue is whether AI makes us more efficient and our lives easier, or whether it makes us slackers; for lawyers, as for other professionals, slacking is not a good thing.
We’ve already seen lawyers sanctioned for reliance (to their detriment) on AI for brief writing. What’s next? When there’s a human and a robot working in tandem, guess who does the work? According to a Wall Street Journal article:
A study published in October in the journal Frontiers in Robotics and AI has an answer: A person who works alongside a robot is less likely to focus on details than when he or she works alone. Anyone who has worked in a team knows that one or two people usually carry the load while the others sit back and watch; researchers call this ‘social loafing.’ It turns out that people treat robots the same way.
Remember how the Rules of Professional Conduct require lawyers to take responsibility for any screw-ups and not blame snafus on staff? What’s the likelihood that lawyers will start blaming the robots? And how do you think judges will react? If it’s anything like that now-notorious case where ChatGPT created cases out of whole cloth, it will not go well. What’s the moral here? Don’t rely on AI for finished and correct work product. How about this for a slogan? “Don’t rely on AI.” Catchy, and it rhymes.
State bars are starting to think about the various ramifications of AI for legal practice. Florida is warning its lawyers that they must tell clients when they use and bill for AI and that the lawyers are responsible for reviewing AI work product. But that’s not all. Client confidentiality is a huge concern, along with oversight, advertising, legal fees, and costs.
The Florida Bar’s Board Review Committee on Professional Ethics has issued for comment a proposed advisory opinion on all the ways in which AI impacts the practice of law and the ethical obligations that arise from its use. This is just the first, but certainly not the last, of various state bars weighing in on the issues.
And not to be outdone, California has published a report on the rewards and risks of generative AI in the public sector. What’s the difference between plain old vanilla AI and generative AI? Here’s an explanation from the report for the dinosaurs among us:
GenAI builds on advances in conventional AI and uses very large quantities of data to output unique written, audio, and/or visual content in response to freeform text requests from its users and programmers. GenAI tools have the capacity to produce entirely new content instead of simply regurgitating inputted data. Unlike conventional AI systems designed for specific tasks, GenAI models are designed to be flexible and multifunctional. GenAI products are already available as standalone applications such as ChatGPT, Dall-E, and Bard.
Look at the helpful list of citations/references at the report’s end!
And last, but certainly not least, the Fifth Circuit is considering a rule that would require lawyers to certify their use of AI. Rule 11 sanctions lurking? The court would be the first in the nation to require such certification, but I can’t believe that other circuits will be far behind. Remember: you sign the pleading; you certify its accuracy and authenticity. I don’t think it’s gotten to the state of “use AI, go to jail,” but that might not be as far-fetched as it seems, so keep your Monopoly “Get Out of Jail Free” cards nearby.
Jill Switzer has been an active member of the State Bar of California for over 40 years. She remembers practicing law in a kinder, gentler time. She’s had a diverse legal career, including stints as a deputy district attorney, a solo practice, and several senior in-house gigs. She now mediates full-time, which gives her the opportunity to see dinosaurs, millennials, and those in-between interact — it’s not always civil. You can reach her by email at email@example.com.