US Lawyers Fined $5,000 for Submitting Fake Case Citations Generated by ChatGPT
Despite the rule drilled into every student from their first school essay, always check your sources, New York attorney Steven Schwartz relied on ChatGPT to find and review precedents for a case in which a man was suing the Colombian airline Avianca over injuries sustained on a flight to New York City. That decision has earned Schwartz, his colleague Peter LoDuca, and their law firm Levidow, Levidow & Oberman a $5,000 fine from a judge. ChatGPT had supplied six cases as precedent, including “Martinez v. Delta Airlines” and “Miller v. United Airlines,” all of which turned out to be either inaccurate or nonexistent.
In his decision to fine Schwartz and his colleagues, Judge P. Kevin Castel explained: “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.” In other words: lawyers may use ChatGPT in their work, but they must at least verify its claims. By failing to do so, the lawyers had “abandoned their responsibilities”, including by continuing to stand by the fake opinions after the court called their existence into question.
Examples of inaccuracies from ChatGPT and other AI chatbots abound. Take the National Eating Disorders Association’s chatbot, which offered weight-loss tips to people recovering from eating disorders, or ChatGPT itself, which falsely accused a law professor of sexual harassment, citing a nonexistent Washington Post article.