Ed. note: This is the latest in the article series, Cybersecurity: Tips From the Trenches, by our friends at Sensei Enterprises, a boutique provider of IT, cybersecurity, and digital forensics services.
The Story of AI Gone Wrong Goes Viral
On May 27, The New York Times published a story about a lawyer who relied on ChatGPT to write a brief. What a story. Even on a holiday weekend, this tale went viral.
We spend considerable time in our artificial intelligence webinars warning lawyers to validate everything that ChatGPT produces. This story will now become a permanent and prominent part of the CLE we dubbed “The Rise of AI in the Legal Profession: Lawyers Brace for Impact.”
Ironically, ChatGPT suggested that title.
The Facts of the Case
Roberto Mata sued the airline Avianca, claiming he was injured when a metal serving cart struck his knee on a flight to Kennedy International Airport in 2019.
When Avianca requested that a Manhattan federal judge toss out the case, Mr. Mata’s lawyers strenuously objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines, with its learned discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”
That all sounds plausible, right?
Mysteriously, the airline’s lawyers and the judge were unable to find the referenced decisions or the quotations cited and summarized in the brief.
We know you can guess what happened. ChatGPT made it all up.
The Unfortunate Lawyer Who Is Now a Poster Child for Relying on ChatGPT
Steven A. Schwartz is a lawyer with the firm Levidow, Levidow & Oberman. On May 25, Schwartz threw himself on the mercy of the court, explaining in an affidavit that he had used the artificial intelligence program to do his legal research — “a source that has revealed itself to be unreliable.”
Mr. Schwartz told Judge P. Kevin Castel that he had no intent to deceive the court or the airline. He said that he had never used ChatGPT before, and “therefore was unaware of the possibility that its content could be false.” So much for the ethical duty of competence with technology.
Schwartz told Judge Castel that he had asked ChatGPT to verify that the cases were real, and it replied that they were.
How is it possible that he knew he needed verification but didn’t understand that verification could not come from ChatGPT itself?
Schwartz said he “greatly regrets” relying on ChatGPT “and will never do so in the future without absolute verification of its authenticity.”
Judge Castel said in an order that he had been presented with “an unprecedented circumstance,” a legal submission full of “bogus judicial decisions, with bogus quotes and bogus internal citations.” The judge set a hearing for June 8 to discuss potential sanctions.
Many lawyers will be carefully watching the outcome of that hearing.
The Nuts and Bolts of a Case Gone Wrong
After Mr. Mata sued Avianca, the airline filed papers requesting dismissal of the case because the statute of limitations had expired.
Mr. Mata’s lawyers argued in a brief filed in March that the lawsuit should proceed, citing references to, and quotes from, the many court decisions that we now know are false.
Unsurprisingly, Avianca’s lawyers wrote to Judge Castel, saying they couldn’t locate the cases that were cited in the brief.
With respect to Varghese v. China Southern Airlines, they said they had “not been able to locate this case by caption or citation, nor any case bearing any resemblance to it.”
They highlighted a lengthy quote from the purported Varghese decision contained in the brief. “The undersigned has not been able to locate this quotation, nor anything like it in any case,” Avianca’s lawyers wrote.
The lawyers also pointed out that the brief cited a case called Zicherman v. Korean Air Lines Co. Ltd., an opinion purportedly handed down by the U.S. Court of Appeals for the 11th Circuit in 2008. They could not find that case either.
Judge Castel ordered Mr. Mata’s attorneys to provide copies of the opinions referred to in the brief. The lawyers submitted a compendium of eight; in most cases, they listed the courts and judges who issued them, the docket numbers and dates.
The copy of the alleged Varghese decision was six pages long and Mr. Mata’s attorneys said it was written by a member of a three-judge panel of the 11th Circuit. But Avianca’s lawyers told the judge that they could not locate that opinion, or the others, on court dockets or legal databases.
Bart Banino, a lawyer for Avianca, said that his firm, Condon & Forsyth, specialized in aviation law and that its lawyers could tell the cases in the brief were not real, and suspected that a chatbot may have been involved. As it turned out, they were right.
Mr. Schwartz did not respond to a message seeking comment, nor did Peter LoDuca, another lawyer at the firm, whose name appeared on the brief.
Mr. LoDuca said in an affidavit that he did not conduct any of the research in question, and that he had “no reason to doubt the sincerity” of Mr. Schwartz’s work or the authenticity of the opinions.
The Judge and Mr. Schwartz Investigate
Judge Castel appears to have investigated on his own when he called for the June 8 hearing. He wrote that the clerk of the 11th Circuit had confirmed that the docket number printed on the purported Varghese opinion was connected to a different case.
Calling the opinion “bogus,” Judge Castel noted that it contained internal citations and quotes that were also nonexistent. He further indicated that five of the other decisions submitted by Mr. Mata’s lawyers seemed to be fake.
At this point, Mr. Mata’s lawyers offered affidavits containing their version of what had happened.
Mr. Schwartz wrote that he had originally filed Mr. Mata’s lawsuit in state court, but after the airline had it transferred to Manhattan’s federal court, where Mr. Schwartz is not admitted to practice, his colleague Mr. LoDuca became the attorney of record. Mr. Schwartz said he had continued to do the legal research without Mr. LoDuca’s involvement.
Mr. Schwartz said that he had used ChatGPT “to supplement” his own work and that, “in consultation” with it, found and cited the half-dozen nonexistent cases. He said ChatGPT had assured him that Varghese was a real case. He submitted a copy of the exchange with ChatGPT to the court.
He asked for a source and ChatGPT gave him a legal citation.
He asked if other cases the chatbot had provided were fake.
ChatGPT replied, “No, the other cases I provided are real and can be found in reputable legal databases.”
However, those cases could not be found. ChatGPT simply made things up.
Both lawyers have been ordered to explain at the June 8 hearing why they should not be disciplined.
Other attorneys, including author Nelson, have had ChatGPT cite nonexistent cases, articles, and books, and supply invalid hyperlinks.
As one leading law firm has advised its attorneys sternly, when using AI, “You must validate everything coming out of the system. You have to check everything.”
This case will certainly be a poster child for that advice.
Sharon D. Nelson (firstname.lastname@example.org) is a practicing attorney and the president of Sensei Enterprises, Inc. She is a past president of the Virginia State Bar, the Fairfax Bar Association, and the Fairfax Law Foundation. She is a co-author of 18 books published by the ABA.
John W. Simek (email@example.com) is vice president of Sensei Enterprises, Inc. He is a Certified Information Systems Security Professional (CISSP), Certified Ethical Hacker (CEH), and a nationally known expert in the area of digital forensics. He and Sharon provide legal technology, cybersecurity, and digital forensics services from their Fairfax, Virginia firm.
Michael C. Maschke (firstname.lastname@example.org) is the CEO/Director of Cybersecurity and Digital Forensics of Sensei Enterprises, Inc. He is an EnCase Certified Examiner, a Certified Computer Examiner (CCE #744), a Certified Ethical Hacker, and an AccessData Certified Examiner. He is also a Certified Information Systems Security Professional.