
ChatGPT is a chatbot developed by OpenAI. It is designed to be a conversational AI, and it has been praised for its ability to generate human-quality text. Microsoft and OpenAI have a long-term partnership that began in 2019, when Microsoft invested $1 billion in OpenAI and became its exclusive cloud provider. This partnership has allowed Microsoft to integrate OpenAI's technology into its products and services, such as its Bing search engine. Google Bard is a conversational AI chatbot developed by Google AI. These three examples (of many) are based upon what is known as Generative AI.

Generative AI is a type of artificial intelligence that can create new content, such as images, text, and music. It does this by learning statistical patterns from existing data and then using that knowledge to generate new outputs. Unfortunately, the technology is prone to what the industry refers to as “hallucinations.” An AI hallucination is a response that is not grounded in real-world data: output that sounds plausible and authoritative but is partly or wholly fabricated. Researchers and developers are actively working to mitigate and minimize hallucinations in AI systems, as they undermine the reliability and trustworthiness of the technology.
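To make that failure mode concrete, here is a minimal sketch of how an application might query a generative text model. This is a hypothetical illustration only: it assumes the openai Python package and an API key, and the model name and prompt are placeholders, not details from the cases discussed below. The essential point is that the reply is text sampled from learned patterns, not a lookup in a verified database, which is why fabricated quotes and citations can read as authoritative.

    # Minimal sketch, assuming the "openai" Python package (v1+) and an
    # OPENAI_API_KEY environment variable; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "user", "content": "Cite cases on airline liability for onboard injuries."}
        ],
    )

    # The reply is generated text, not a database record; every quote or
    # citation it contains must be verified against an authoritative source.
    print(response.choices[0].message.content)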

Generative AI can be used to write movie scripts, compose music, or draft documents. This is not without controversy and criticism. For example, the use of AI by college students to write essays has been a contentious topic in recent years. Some people believe that AI is a valuable tool that can help students improve their writing skills, while others believe that it is a form of academic dishonesty and plagiarism. And now the controversial use of Generative AI has worked its way into courtrooms.

Steven Schwartz, who has worked for the Manhattan law firm Levidow, Levidow & Oberman for three decades, apologized repeatedly during his emotional reading of a formal statement before Senior U.S. District Judge P. Kevin Castel, who is overseeing potential sanctions.

Schwartz’s court filings included fake case citations generated by ChatGPT. According to the report by Courthouse News, he apologized for being duped by the artificial intelligence tool. “It just never occurred to me that it would be making up cases,” Schwartz testified, explaining that at the time he could not believe that ChatGPT was capable of generating totally fabricated responses to his research inquiries.

“I deeply regret my actions,” Schwartz said in court. “I have suffered both professionally and personally due to the widespread publicity. I am both embarrassed and humiliated and extremely remorseful. To say that this has been a humbling experience would be an understatement.”

The lawyer’s attorneys, Ronald Minkoff and Tyler Maulsby of Frankfurt Kurnit Klein & Selz, each argued that Schwartz made a careless mistake, conceding that he should have noticed the red flags along the way, but that he should not be accused of acting in bad faith. “There has to be actual knowledge that Mr. Schwartz knew he was providing bad cases … or that ChatGPT would be providing bad cases,” Maulsby said.

U.S. District Judge Castel did not immediately rule on punishment. See Mata v. Avianca, Inc., No. 22-cv-1461 (Doc. 31) (S.D.N.Y. May 4, 2023) (issuing rule to show cause where “[a] submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to nonexistent cases.”).

In the wake of publicity about Schwartz’s case, a Texas judge issued an order banning the use of generative artificial intelligence to write court filings without additional fact-checking conducted by an actual person.

According to the order issued by Judge Brantley Starr, who sits on the U.S. District Court for the Northern District of Texas: “These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth).”

The new requirement comes after a lawyer representing a man suing an airline used ChatGPT to prepare a legal brief that turned out to be laden with errors, including made-up court cases.

“We’re at least putting lawyers on notice, who might not otherwise be on notice, that they can’t just trust those databases,” Starr told Reuters. “They’ve got to actually verify it themselves through a traditional database.”

As another example, Magistrate Judge Gabriel A. Fuentes in Illinois implemented a similar standing order that requires parties using generative AI tools in document preparation to disclose such usage in their filings. The disclosure must include specific details about the AI tool employed and the manner in which it was used. The judge further stated that reliance on an AI tool may not constitute reasonable inquiry under Federal Rule of Civil Procedure 11.