Three ChatGPT cases in May see US lawyers in hot water for error-filled briefs 

Lawyers in three separate cases this month have been hauled before the US courts accused of submitting error-filled briefs, after they relied on ChatGPT to help draft filings and didn't check the output.

The most recent involves law firm Butler Snow in Alabama, which has faced blistering criticism from US District Judge Anna Manasco after it filed non-existent citations in two motions. The four attorneys who signed the motions have been ordered to show good cause why they shouldn't be sanctioned under Federal Rule of Civil Procedure 11, and appeared yesterday (21 May) at a show cause hearing.

The firm, which has been hired to defend Alabama’s prison system, said that “what happened here is unacceptable,” adding, “tempted by the convenience of artificial intelligence, counsel improperly used generative AI to supplement two motions and did not verify the citations that AI provided.”

Lawyer William Cranford prepared a draft ‘motion for leave’ and sent it to partner Matthew Reeves, who revised it using ChatGPT, adding false authorities. Attorneys William Lunsford and Daniel Chism appear on the motion for leave as co-counsel but were not involved in its preparation. A separate ‘motion to compel’ followed the same pattern: Cranford prepared a draft and Reeves added hallucinated citations.

The firm, which awaits the judge’s decision on the consequences, has said it will hold extensive new training on the appropriate use of AI.

A second case within the last 10 days saw a federal judge in Tampa strongly criticise the defence team representing former Deadspin video editor Timothy Burke, after they filed a motion – riddled with legal inaccuracies – to dismiss eight criminal hacking charges.

Judge Kathryn Mizelle has ordered the filing to be struck from the record, citing what she called “unprofessional misrepresentations of legal citations” by attorneys Michael Maddux and Mark Rasch. 

The two lawyers said that Rasch filed the motion to dismiss and that he assumes sole and exclusive responsibility. His research included Westlaw and Westlaw’s AI features, Google, Google Scholar, and the ‘deep research’ feature of ChatGPT v4.5. The final version wasn’t put through Westlaw Quick Check, and during the editing process certain verified portions of the memo were replaced with earlier drafts.

Rasch blamed a lack of time and said the prudent approach would have been to ask for additional time. “Counsel regrets not having taken this step and apologises sincerely for any inconvenience, confusion, and distrust caused to this Court, opposing counsel, and Mr. Burke,” Rasch said in a filing on 19 May. The judge issued a warning and said she expects checks to be conducted by humans going forward.

The third case saw Ellis George and K&L Gates fined $31,000 on 6 May after filing a brief in a case involving State Farm General Insurance Co that included non-existent authorities.

Judge Wilner of the Central District of California said that a lawyer from the Ellis George law firm had used AI to produce the outline of a brief and sent it to K&L Gates, where it was incorporated into the filing. No-one checked the research, and K&L Gates says that no-one there knew AI had been used to prepare the outline. The judge contacted K&L Gates about the errors and the brief was resubmitted – with AI-generated errors still remaining in the body of the text.

In his order of 6 May, the judge wrote:

“I conclude that the lawyers involved in filing the Original and Revised Briefs collectively acted in a manner that was tantamount to bad faith. Fink, 239 F.3d at 994. The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong. Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology – particularly without any attempt to verify the accuracy of that material. And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm’s way. Mr. Copeland candidly admitted that this is what happened, and is unreservedly remorseful about it. 

“Yet, the conduct of the lawyers at K&L Gates is also deeply troubling. They failed to check the validity of the research sent to them. As a result, the fake information found its way into the Original Brief that I read. 

“That’s bad. But, when I contacted them and let them know about my concerns regarding a portion of their research, the lawyers’ solution was to excise the phony material and submit the Revised Brief – still containing a half-dozen AI errors. Further, even though the lawyers were on notice of a significant problem with the legal research (as flagged by the brief’s recipient: the Special Master), there was no disclosure to me about the use of AI. Instead, the e-mail transmitting the new brief merely suggested an inadvertent production error, not improper reliance on technology. Translation: they had the information and the chance to fix this problem, but didn’t take it. Cohen, 724 F.Supp.3d at 259. 

“I therefore conclude that (a) the initial undisclosed use of AI, (b) the failure to cite-check the Original Brief, and (perhaps most egregiously), (c) the re-submission of the defective Revised Brief without adequate disclosure of the use of AI, taken together, demonstrate reckless conduct with the improper purpose of trying to influence my analysis of the disputed privilege issues. The Ellis George and K&L Gates firms had adequate opportunities – before and after their error had been brought to their attention – to stop this from happening. Their failure to do so justifies measured sanctions under these circumstances.”

The judge decided in that case to sanction the firms, not the individuals. 

Other recent cases that fell just outside of May include a Canadian judge reprimanding lawyer Jisuh Lee for including hallucinated cases in a brief in the Ontario Superior Court.

Despite public awareness of the risks of relying on unchecked GenAI citations, we appear to be seeing a rash of new cases. What is interesting is the variety of scenarios in which these mistakes arise; lawyers will also have to consider their position, and their processes, around incorporating submissions from other law firms.