New York attorney Jae Lee is reportedly the latest lawyer caught using ChatGPT after citing a nonexistent case in a legal filing. Lee's reliance on the chatbot came to light when she failed to provide a copy of the fabricated decision at the court's request.
Lee was reportedly appealing a district court's dismissal of her client's medical malpractice lawsuit. The judges of the 2nd Circuit stated that she had cited two decisions in the appeal, one of which, Bourguignon v. Coordinated Behavioral Health Services, did not exist.
When the court could not locate the case in November, it directed Lee to produce a copy of the decision in question. She responded that she was unable to do so.
Lee told the court that she had had trouble finding a pertinent case before turning to ChatGPT, which recommended the fictitious Bourguignon decision. The judges concluded that she had neither read the ruling she cited nor taken any other steps to verify its accuracy.
Sanctions Over ChatGPT-Fabricated Cases
According to reports, Lee cited the nonexistent state court ruling in an appeal seeking to revive her client's lawsuit, which alleged that a Queens doctor had performed an improper abortion. Although she acknowledged including a ChatGPT-recommended case, she insisted there had been no malice or bias against the other side or the legal system.
The US Court of Appeals for the 2nd Circuit found that the conduct of attorney Jae Lee of JSL Law Offices in Uniondale, New York, fell well below what is expected of counsel. The court referred Lee to a grievance panel, which will decide on potential sanctions such as fines and suspension.
The attorney for the defendant doctor did not immediately respond to a request for comment on the 2nd Circuit's order, which also upheld the dismissal of the original case.
ChatGPT Lawyers
The order is the latest instance of a lawyer mistakenly filing a bogus case citation produced by an AI tool. Generative AI systems have been found to "hallucinate," meaning they can generate writing that is false yet convincing.
In June of last year, US District Judge P. Kevin Castel fined two Manhattan attorneys, Steven Schwartz and Peter LoDuca, $5,000 for submitting a court brief created by ChatGPT that contained quotes from cases that never existed.
According to court documents made public last month, Michael Cohen, Donald Trump's former attorney, inadvertently supplied fictitious cases produced by Google's Bard AI that ended up in a brief seeking an early end to his own post-prison supervision.
Forbes had reported on that case last June: the lawyer for a man suing Colombia-based Avianca Airlines in a routine personal injury suit used ChatGPT to prepare a filing, and the chatbot delivered fabricated cases that the attorney presented to the court, prompting the judge to consider sanctions.