Two US lawyers who relied on a legal brief written by the artificial intelligence (AI) tool ChatGPT have been slapped with fines.
According to Judge P. Kevin Castel, attorneys Peter LoDuca and Steven Schwartz had “abandoned their responsibilities” by submitting an AI-generated brief in their client’s lawsuit against Avianca airline in March. Furthermore, they “then continued to stand by the fake opinions after judicial orders called their existence into question.”
The two lawyers, as well as their law firm Levidow, Levidow & Oberman, have been ordered to pay $5,000 each in fines. Separately, the judge granted a motion by the airline to dismiss the suit, which the attorneys had filed on behalf of Roberto Mata, who alleged that he had suffered trauma to his knee after being struck by a metal service tray during a flight to New York City from El Salvador.
Judge Castel pointed out that LoDuca and Schwartz had exhibited “bad faith” by persistently repeating false statements about the brief after attorneys for the Colombian airline had first flagged the legal citations as ostensibly being from fictitious court cases.
“In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance... But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings,” Judge Castel wrote in his order.
Bogus Cases
Schwartz had earlier acknowledged in an affidavit that he consulted ChatGPT when carrying out legal research on the case. Schwartz claimed that ChatGPT had assured him of the reliability of the citations in the brief, and when he asked the AI tool whether the cases in question were fake, it assured him that they “can be found in reputable legal databases such as LexisNexis and Westlaw.”
However, after looking into the plaintiff’s filing, Castel concluded in early May that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” At the time, the judge called it “an unprecedented circumstance.”
While some have been praising the language model for its professional applications, such as for developing code, others have criticized its potential for abuse amid allegations that students are using the model to write essays.