Donald Trump's former lawyer Michael Cohen included phony cases generated by AI in a brief arguing for his release from post-prison supervision. (Bloomberg)

Michael Cohen says his court brief mistakenly cited fake cases generated by AI.

According to court documents released on Friday, Michael Cohen, Donald Trump's former lawyer, unintentionally included fabricated cases produced by artificial intelligence in a recent brief arguing for his release from post-prison supervision.

Cohen, who was disbarred in 2019 after pleading guilty to charges including lying to Congress, said in a statement that he used Google’s artificial intelligence tool Bard to find cases and then sent them to his lawyer. The filing, submitted in federal court in Manhattan, supported his request for an early end to requirements that he check in with a probation officer and obtain permission to travel outside the United States.

David Schwartz, the attorney who filed the motion, said he did not review the cases himself because he mistakenly believed that Danya Perry, another attorney representing Cohen, had already done so. In a letter to the court, Perry argued that “Mr. Schwartz’s error in filing the motion, which contains improper references, should not be held against Mr. Cohen” and asked that the judge release him from supervision.

Polite finger-pointing after fabricated case law.

Both attorneys pointed to their client as the source of the fabricated precedents, citing his own admission that he had obtained the cases from Bard and had been unable to check them against standard legal research sources.

For his part, Cohen said, “It didn’t occur to me at the time — and it still surprises me — that Mr. Schwartz would include the cases wholesale without even confirming their existence.” He said he thought of Bard as a “supercharged search engine” rather than a service that would generate legitimate-looking but bogus cases.

The case is “a simple story of a client making a well-intentioned but ill-informed proposal,” trusting his attorney to vet the cases before relying on them in the brief, Perry said, arguing that Cohen is blameless. As for Schwartz, she said he is guilty only of an “embarrassing” mistake.

The Bane of Lawyers

Schwartz is not the first lawyer to have to explain AI-related mistakes in federal court in Manhattan. In June, two lawyers were fined $5,000 after a judge found they had relied on bogus cases created by OpenAI Inc.’s ChatGPT and then made misleading statements after the court brought the problem to their attention.

The use of artificial intelligence in legal research has prompted judges across the country to issue orders governing the practice. A federal appeals court in New Orleans is considering a rule that would require lawyers to certify either that “a generative AI program was not used” to draft legal filings or that any AI-generated work was reviewed and approved by a human lawyer.
