Opinion: The New York fake ChatGPT citations case is a gift to law firms globally 

When news first broke that a lawyer had filed a brief including fake case citations made up by ChatGPT, I suspect that most people in the IT, knowledge and wider legal communities fairly quickly reached three conclusions. 1) This was always going to happen. 2) Thank goodness it wasn’t us. 3) If there was ever a good way to bring to life all our warnings about the risks of using ChatGPT in the wrong way, this is it. (I suspect that some of you had a bit of a laugh too, although very quietly and with an element of ‘there but for the grace of God go I.’)

To recap some fairly well-trodden facts: Steven Schwartz of New York law firm Levidow, Levidow & Oberman filed a court brief while representing Roberto Mata, who is suing Colombian airline Avianca over an injury sustained on a flight to New York City.

Schwartz used ChatGPT to supplement his legal research, citing in his court filing the likes of a six-page judgment in Varghese v China Southern Airlines Co Ltd and other cases, complete with courts, judges, docket numbers and dates.

The issue? You’ve obviously guessed it: the cases were not real. Avianca’s lawyers wrote to the judge in the case, Judge Castel, saying that they were unable to find the cases in court dockets or legal databases. In the case of Varghese in particular, they noted that they couldn’t find anything bearing any resemblance to it.

In response to an order from Judge Castel, Schwartz filed an affidavit with the court identifying six citations and opinions from ChatGPT that were “unreliable”, saying that he was “unaware of the possibility that its content could be false.”

At a court hearing yesterday (8 June), Schwartz was grilled for two hours by Judge Castel, with the New York Post reporting him as saying: “I just never could imagine that ChatGPT would fabricate cases. My assumption was I was using a search engine that was using sources I don’t have access to.” He admitted that he should have done further due diligence, saying: “In hindsight, God I wish I did that, but I didn’t do that.”  

There is a lot to unpack in his comments. Firstly, a huge education piece is still required around what generative AI is, does, and doesn’t do. Secondly, many people will naturally assume that the technology is right. And lastly, people are prone to assuming that all digital tools work in the same way.

Judge Castel said yesterday that he would rule on whether to issue sanctions at a later date. The mitigation put before him included the arguments that lawyers are bad with technology, and that Schwartz, who has never been sanctioned before, is a “standup lawyer.”

Whichever way it goes, the reputational damage is done. Standup lawyer or no, it is incredible that Schwartz would rely on ‘caselaw’ that, when read out in court, was said to lack any credibility.

For knowledge and IT teams in law firms, however, this case is manna from heaven. Firms are restricting or governing the use of ChatGPT (or any other gen AI chatbot, for that matter) to varying degrees, and sending out a plethora of educational and instructive pop-up messages and emails. But the business leaders I’ve spoken to acknowledge that a) people want to use it and b) there’s not an awful lot they can do to stop them. The big question is how to ensure people use generative AI tools such as ChatGPT sensibly and within acceptable parameters.

Seeing Mr Schwartz publicly, embarrassingly, and possibly even career-threateningly raked over the coals may be the first of some big answers.

One thought on “Opinion: The New York fake ChatGPT citations case is a gift to law firms globally”

  1. There is an old adage: “you get what you pay for.” Considering how much Lexis/Westlaw subscriptions cost, it was pretty naive, to say the least, for this attorney to think that ChatGPT was accessing sources he didn’t have access to.
