US Lawyer Caught Using ChatGPT After Citing Fake Cases in Court

A New York lawyer has found himself in legal hot water after referencing non-existent legal cases generated by the AI chatbot ChatGPT.

Steven A. Schwartz, part of a legal team representing a man suing the airline Avianca after a metal serving cart struck his knee during a flight, submitted a brief that cited several previous court cases to argue that the lawsuit should be allowed to proceed.

But a judge said the court was faced with an “unprecedented circumstance” after the airline's lawyers said they could not find several of the cases that were referenced in the brief.

"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," the judge wrote in an order demanding the man's legal team explain itself.

Mr Schwartz, who has been an attorney for more than 30 years, admitted that he had used OpenAI’s ChatGPT to conduct research for the brief, but said he had not known that the chatbot had conjured up the cases he cited.

In a written statement, he said he “greatly regrets” relying on the AI chatbot, clarifying that other members of his team had not been part of the research and were not aware of how it was being carried out. 

Mr Schwartz said he had never used the chatbot for legal research prior to this case and had been “unaware that its content could be false.”

He vowed never again to use ChatGPT to “supplement” his legal research “without absolute verification of its authenticity”.

AI Hallucinations

ChatGPT generates realistic responses by making guesses about which fragments of text should follow other sequences, based on a statistical model trained on billions of examples of text pulled from all over the internet. 

In Mr Schwartz’s case, however, the chatbot appears to have grasped the framework of a written legal argument, but populated it with names and facts drawn from an array of existing cases.
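For readers who want a feel for how that kind of next-word guessing works, the sketch below builds a crude bigram model from a few made-up sentences and uses it to extend a prompt. It is purely illustrative, nothing like the scale or architecture of the model behind ChatGPT, but it shows why output generated this way can read fluently while having no tie to what is actually true.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram "next-word" model, vastly simpler than the
# system behind ChatGPT, but built on the same basic idea of predicting the
# next fragment of text from statistics over previously seen text.
# The training sentences below are invented for the example.
corpus = (
    "the court held that the airline was liable "
    "the court held that the claim was time barred "
    "the airline argued that the claim was preempted"
).split()

# Count how often each word follows each other word in the corpus.
follow_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[prev_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Extend `start` by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = follow_counts.get(words[-1])
        if not candidates:
            break  # nothing ever followed this word in the training text
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the court held that the claim was preempted" -- fluent-sounding,
# but nothing in the process checks whether the statement is true.
```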

Screenshots attached to the filing show one of the conversations between Mr Schwartz and ChatGPT in which the lawyer questioned the authenticity of the cases.

“Is Varghese a real case?” he typed, referring to Varghese v. China Southern Airlines Co. Ltd., according to a copy of the exchange submitted to the judge.

“Yes,” the chatbot replied, providing a citation and assuring him that “it is a real case.” Still, Mr Schwartz dug deeper.

“Are the other cases you provided fake?” he wrote, according to the filing. ChatGPT again insisted that the cases were real and could be found in legal reference databases.

“I apologise for the confusion earlier,” the chatbot responded. “Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis.”

As it turns out, however, all of the cases ChatGPT provided were fictitious, despite its assurances. In fact, as the Times found in its own testing, the chatbot struggles with most legal matters.

For instance, ChatGPT says it’s illegal to shout “fire” in a crowded theatre. But that has not been considered good law since Brandenburg v. Ohio, the landmark 1969 Supreme Court case.

Asked what the federal deficit was in 1980, ChatGPT firmly declares that it was $74.97 billion, saying it got its data from the Treasury Department. But that figure is more than $1 billion off from the real answer: $73.8 billion.

The chatbot’s invented figure does not appear in any news reports, so it is difficult to know where it got the information.

Misinformation by AI

ChatGPT has taken the internet by storm for its ability to answer questions in natural, human-like language with apparent accuracy.

But the chatbot has also raised concerns among experts over its potential to spread misinformation, since some of the information it provides is outdated or entirely inaccurate.

Dr Geoffrey Hinton, widely regarded as the godfather of artificial intelligence, recently left his position at Google to warn the world about how AI development could affect society.


The neural net pioneer warned that the dangers of AI chatbots are “quite scary”, and that humans could become reliant on the technology despite its flaws and inaccuracies.

The chatbot itself seems to agree. Ask ChatGPT whether it is accurate, and it repeatedly apologises when called out on what it labels "errors," "mistakes" or "any confusion."

“As an AI language model, I strive to provide accurate and reliable information, but I can make mistakes. I appreciate you bringing this to my attention and giving me the opportunity to correct my errors.” 
