A lawyer suing Colombian airline Avianca has submitted a brief full of previous cases that were made up by ChatGPT.
According to The New York Times, after opposing counsel pointed out the fabricated cases, US District Judge Kevin Castel found that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” and scheduled a hearing to discuss sanctions for the plaintiff’s lawyers.
In his affidavit, lawyer Steven A. Schwartz admits to using OpenAI’s chatbot for his research. To verify the cases, he did the only natural thing: he asked the chatbot whether it was lying.
When Schwartz pressed ChatGPT for a source, it apologised for the earlier confusion and insisted the case was real, claiming it could be found on Westlaw and LexisNexis. Satisfied, he asked about the other cases, and ChatGPT assured him they were all genuine.
The opposing counsel described in agonising detail how the submission from the lawyers of Levidow, Levidow & Oberman amounted to a brief full of falsehoods presented to the court.
Schwartz said he was “unaware of the possibility that its content could be false.” He now “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”
This demonstrates the risks of relying on chatbots for research without double (or triple) checking their claims against other sources. Microsoft’s Bing debut became notorious for outright lying, gaslighting, and emotional manipulation. And in its very first demo, Google’s AI chatbot Bard made up a fact about the James Webb Space Telescope.