(New York) The lawsuit began like so many others: a man named Roberto Mata sued the airline Avianca, claiming he was injured when a metal service cart struck his knee on a flight to New York.

When Avianca asked the judge to dismiss the case, Mr. Mata’s lawyers vehemently objected, filing a 10-page brief that cited six relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its scholarly discussion of federal law and the automatic tolling of the statute of limitations.

There was just one problem: no one, not Avianca’s lawyers and not even the judge, could find the decisions or the excerpts quoted and summarized in the brief.

The author of the brief was Mr. Steven Schwartz, of the firm Levidow, Levidow & Oberman.

Mr. Schwartz, who has practiced law in New York for 30 years, told Judge P. Kevin Castel that he had not sought to mislead the court or the airline. He had never used ChatGPT before, he said sheepishly, and was therefore “unaware of the possibility that its content might be fake.”

He had even asked ChatGPT to confirm that the cases were real.

Yes, the machine had answered.

Mr. Schwartz expressed his “deep regret” for relying on ChatGPT.

Judge Castel denounced an “unprecedented circumstance”: a legal filing riddled with “bogus judgments, bogus references and bogus quotes.” He ordered a hearing for June 8 to discuss possible sanctions.

The spread of artificial intelligence across the internet has conjured dystopian visions of computers replacing not only human interaction but also human labor. The fear is acute among knowledge workers, many of whom worry that their work will prove less valuable to the clients who pay for their billable hours.

The case of Roberto Mata v. Avianca Inc. shows that the knowledge professions may still have a little time left before they are supplanted by robots.

In the now famous brief he filed in March, Mr. Schwartz argued that the lawsuit should proceed, bolstering his argument with the references and quotations cited above.

Soon after, Avianca’s attorneys wrote to Judge Castel saying they had found none of the cases cited in the brief.

The judge ordered Mr. Mata’s lawyers to provide copies of the decisions supporting their case. They filed a compendium of eight; most listed the court, the judge, the docket number and the date.

The copy of the supposed Varghese decision, for example, runs six pages and says it was written by a judge of the 11th Circuit. But Avianca’s lawyers told the judge that they could not find that decision, or the others, either in court records or in legal databases.

Bart Banino, a lawyer for Avianca, practices at Condon & Forsyth, a firm that specializes in aviation law.

Mr. Schwartz did not respond to a message seeking an interview, nor did his colleague Mr. Peter LoDuca, whose name also appeared on the brief.

In an affidavit filed this week, Mr. LoDuca said that he had not conducted any of the research in question and that he had “no reason to doubt the sincerity” of Mr. Schwartz’s work or the authenticity of the opinions.

ChatGPT generates realistic answers by guessing which text fragments should follow other sequences, based on a statistical model that has ingested billions of texts from the internet. In Mr. Mata’s case, the program seems to have discerned the labyrinthine style of a written legal argument, but populated it with names and facts from a bouillabaisse of real judgments.
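For readers curious about the mechanism, here is a deliberately tiny sketch of that principle in Python. It is not how ChatGPT actually works (the real model is a vast neural network over subword tokens, and its details are not public); the function names and the sample sentence are invented for illustration. The toy model counts which word follows which in a training text, then generates a continuation by sampling from those counts:

    from collections import defaultdict
    import random

    def train_bigrams(text):
        # Count how often each word follows each other word.
        counts = defaultdict(lambda: defaultdict(int))
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def generate(counts, start, length=10):
        # Repeatedly guess the next word in proportion to how often
        # it followed the current word in the training text.
        word, output = start, [start]
        for _ in range(length):
            followers = counts.get(word)
            if not followers:
                break
            choices, weights = zip(*followers.items())
            word = random.choices(choices, weights=weights)[0]
            output.append(word)
        return " ".join(output)

    # Invented sample: the output sounds vaguely legal but asserts nothing true.
    model = train_bigrams("the court held that the court may dismiss the case")
    print(generate(model, "the"))

Scaled up by many orders of magnitude, the same guess-the-next-fragment logic produces fluent prose and, as in Mr. Mata’s case, fluent fabrications.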

As for the bogus Varghese decision, the judge noted that it contained fabricated quotes attributed to nonexistent sources. The five other decisions submitted by Mr. Mata’s lawyers also appeared to be fake, he added.

On Thursday, Mr. Mata’s lawyers offered explanations in an affidavit.

Mr. Schwartz wrote that he had originally filed Mr. Mata’s lawsuit in state court. When the airline had the case transferred to federal court in Manhattan, where Mr. Schwartz is not admitted to practice, his colleague Mr. LoDuca became the attorney of record. Mr. Schwartz did the legal research; Mr. LoDuca played no role in it.

According to Mr. Schwartz, he consulted ChatGPT “to supplement” his own work and, “in consultation” with the chatbot, found and cited the six decisions that proved to be nonexistent. He added that he had asked ChatGPT to verify them.

“Is Varghese a real case?” he typed, according to a copy of the exchange submitted to the judge.

“Yes,” ChatGPT replied, offering a citation and confirming that it was a “real case.”

Mr. Schwartz pressed further: “What is your source?”

“I apologize for the confusion earlier,” ChatGPT replied, offering a legal citation.

“Are the other cases you provided fake?” Mr. Schwartz asked.

“No, the other cases I have provided are real and can be found in reputable legal databases.”

But, alas, no one could find them.