The lawyer was on the receiving end of what’s called hallucination, where AI models such as ChatGPT fabricate information and present it as fact. (Bloomberg)

Lawyer Apologizes for Using ChatGPT to Generate Fake Legal Cases

An American attorney was left embarrassed after he used ChatGPT to draft a court filing, only for the AI software to generate fabricated cases and rulings.

New York-based attorney Steven Schwartz apologized to a judge this week for submitting a court filing full of fabricated cases and citations generated by OpenAI’s chatbot.

“I simply had no idea that ChatGPT was capable of fabricating entire cases or legal opinions, especially in a way that appeared genuine,” Schwartz wrote in a court filing.

The blunder occurred in a civil case in federal court in Manhattan involving a Colombian man suing the airline Avianca.

Roberto Mata claims he was injured when a metal serving plate hit his leg during an August 2019 flight from El Salvador to New York.

After the airline’s lawyers asked the court to dismiss the case, Schwartz filed a response claiming to cite more than half a dozen decisions to support why the lawsuit should continue.

They included Petersen v. Iran Air, Varghese v. China Southern Airlines and Shaboon v. Egyptair. The Varghese case even contained dated internal quotations and citations.

There was one big problem, though: Neither Avianca’s lawyers nor Judge P. Kevin Castel could find the cases.

Schwartz was forced to admit that ChatGPT had made it all up.

“The court is presented with an unprecedented situation,” Judge Castel wrote last month.

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” he added.

The judge ordered Schwartz and a fellow attorney from his firm to appear before him to face possible sanctions.

– ‘Ridiculed’ –

On Tuesday, ahead of the hearing, Schwartz offered the court a “profound apology” for his “deeply regrettable mistake.”

He said his college-aged children had introduced him to ChatGPT, and that it was the first time he had ever used it in his professional work.

“At the time I was doing legal research on this case, I believed ChatGPT was a reliable search engine. I now know that was wrong,” he wrote.

Schwartz added that “it was never my intention to mislead the court.”

ChatGPT has become a global sensation since it launched late last year for its ability to generate human-like content, including essays, poems and conversations, from simple prompts.

That has sparked a surge in generative AI content, and lawmakers have been scrambling to figure out how to regulate such bots.

An OpenAI spokesperson did not immediately respond to a request for comment on Schwartz’s snafu.

The episode was first reported by The New York Times.

Schwartz said he and his firm, Levidow, Levidow & Oberman, have been “publicly ridiculed” by the media.

“This has been deeply embarrassing on both a personal and professional level as these articles will be available for years to come,” he wrote.

Schwartz added, “This case has been an eye-opening experience for me, and I can assure the court that I will never make a mistake like this again.”
