The "AI" boom has severely affected the corporate and business spheres, with the opportunity, to say the least, to use such AIs for better information searching, text rephrasing, and other use cases in which employees are using these tools in the working environment. However, there is always a downside to such usage, as an example is provided below, which was blindly created by an AI company, Anthropic.
The case Universal Music Group et al. v. Anthropic involves a fictitious citation in a filing submitted by lawyers for Anthropic, the company behind the Claude AI. Olivia Chen, an Anthropic data scientist, used Claude to generate the citation as part of Anthropic's defense, and the result referenced a research paper in a form that does not exist. Opposing counsel for Universal Music Group, ABKCO, and Concord alleged that the cited academic article could not be found and had been fabricated by Claude.
In response, Anthropic's legal counsel stated that Claude had been used to format legal citations in the document, and that this step introduced the error. The citation included a link to the correct research paper, but the title and authors given in the citation did not match those of the actual article.
This is not the first time AI-fabricated information has surfaced in court proceedings; consider the U.S. case Mata v. Avianca, Inc.
In 2023, attorneys Steven Schwartz and Peter LoDuca of the law firm Levidow, Levidow & Oberman represented a client in a personal injury lawsuit against Avianca Airlines. Schwartz used ChatGPT to draft a legal brief, which included six fabricated case citations. After review, opposing counsel and the judge found that the cases did not exist. When challenged, the lawyers went back to ChatGPT to confirm that the cases were real, and it falsely assured them they were. The judge rejected the claim that the citations were genuine and fined the lawyers.
It is well known that chatbots such as ChatGPT and Claude often generate false information, known as "hallucinations": confident-sounding statements about things that do not exist. Because the output is tailored to the user's prompt, a chatbot asked to produce information and citations for a court filing will frequently oblige with plausible-looking but fabricated material, such as legal precedents or clauses that cannot be found anywhere in actual law.
Therefore, it is crucial to emphasize that AI-generated information should always be verified before use, to prevent failures like those experienced by Anthropic's legal counsel and data scientist.
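One lightweight safeguard is to cross-check every cited case against a public case-law database before filing. The minimal Python sketch below queries CourtListener's public search API; the exact endpoint, query parameters, and response fields shown here are assumptions based on its documentation and should be verified before relying on them. "Varghese v. China Southern Airlines" is included in the demo because it was one of the six citations ChatGPT fabricated in Mata v. Avianca.

import requests

def citation_appears_in_search(case_name: str) -> bool:
    """Return True if the case-law search reports at least one hit for the name."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": case_name, "type": "o"},  # "o" = court opinions (assumed parameter)
        timeout=10,
    )
    resp.raise_for_status()
    # "count" is assumed to be the total-results field in the JSON response.
    return resp.json().get("count", 0) > 0

if __name__ == "__main__":
    # A real case, followed by a citation fabricated by ChatGPT in Mata v. Avianca.
    for cite in ["Mata v. Avianca, Inc.", "Varghese v. China Southern Airlines"]:
        status = "found" if citation_appears_in_search(cite) else "NOT FOUND, verify manually"
        print(f"{cite}: {status}")

A check like this is only a first filter: a hit proves a case with that name exists somewhere, not that it says what the brief claims, so every surviving citation still needs to be read by a human.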
#AI #Law #ChatGPT #ArtificialIntelligence #Hallucination
Copyright © 2025 Kristaps U - All Rights Reserved.