As artificial intelligence continues to take hold across disciplines, its use in legal drafting has raised critical questions about accountability and reliability. In the recent case of Al-Hamim v. Star Hearthstone, LLC, 2024 COA 128 (Colo. App. 2024), the court addressed this novel issue and highlighted the risk of relying on Generative Artificial Intelligence (GAI) tools for court filings. Al-Hamim's appeal brief contained fabricated legal citations, or "hallucinations," generated by a GAI tool. While the court ultimately decided not to impose sanctions, it issued a clear warning that future submissions containing fictitious citations could lead to severe consequences, signaling a turning point in how courts may address AI-related errors moving forward.

Background of the Case

The rapid advancement of Artificial Intelligence (AI), particularly Generative Artificial Intelligence (GAI), has transformed daily life on many fronts, including the creation of written content. GAI tools can generate text that closely resembles human writing; however, these tools are not yet specifically designed to draft legal documents or conduct legal research. This limitation can lead to significant issues, as users unfamiliar with GAI's shortcomings may unknowingly produce documents that include fabricated legal citations, referred to as "hallucinations." Snell v. United Specialty Ins. Co., 102 F.4th 1208, 1230 (11th Cir. 2024). A hallucination occurs when a GAI tool generates false or partially inaccurate information in response to a query. Id.

In this case, the pro se plaintiff Al-Hamim's claims stemmed from allegations of breach of the warranty of habitability and of the implied covenant of quiet enjoyment against his landlords, Star Hearthstone, LLC and IRT Living (collectively, the "Landlords"). The district court dismissed his claims after finding that he had violated Colorado law by submitting a filing that contained fabricated citations. Al-Hamim appealed and, rather unwisely, relied again on a GAI tool to prepare his opening brief. The brief contained a mixture of legitimate legal citations and fictitious ones created by the GAI tool.

The case marks the first time a Colorado appellate court has addressed the issue of fabricated citations in court filings generated with the assistance of GAI. The appellate court affirmed the lower court's dismissal of Al-Hamim's claims and issued a warning to all parties, self-represented litigants and attorneys alike, that future filings containing citations to non-existent judicial opinions and fabricated quotations generated through the use of AI could result in sanctions.

The Repercussions of Hallucinations

After the court was unable to locate several cases cited in Al-Hamim's brief, it ordered him to provide complete and unedited copies of the cited cases or to explain whether the citations had been generated by a GAI tool. The court also required him to show cause why he should not be sanctioned for citing those fictitious cases. In his response, Al-Hamim admitted that he had relied on GAI to prepare his opening brief, acknowledged that the citations were hallucinations, and conceded that he had failed to review the brief before filing it. He made no attempt to explain why he should be spared sanctions.

The court outlined several harms resulting from the submission of fictitious legal authorities: the opposing party wastes time and money exposing the inaccuracies, the court's attention is diverted from other matters, and the reputations of the courts, judges, and parties involved are damaged. Furthermore, reliance on fabricated citations could undermine judicial authority by encouraging litigants to question the validity of authentic rulings.

The court emphasized that individuals using GAI tools for legal drafting must thoroughly review the results to ensure that no fictitious citations are included. Further, self-represented litigants, like Al-Hamim, are held to the same procedural standards as licensed attorneys and must bear the consequences of procedural errors. Colorado Appellate Rule 28(a)(7)(B) requires that an appellant's opening brief include a clear, concise discussion supported by valid legal citations. Al-Hamim's brief, which contained GAI-generated hallucinations, violated this rule. The court noted that although no legal-specific AI tools were implicated in this case, users of general-purpose GAI tools must remain vigilant about the potential for errors, biases, and false information in their output.

Appropriate Sanctions When a Self-Represented Litigant Submits a Court Filing Containing Hallucinations

Colorado appellate courts have authority under C.A.R. 38(a) and 39.1 to impose sanctions, including dismissal of an appeal or an award of attorney's fees, for violations of the Colorado Appellate Rules. Until this case, however, no Colorado appellate court had addressed the appropriate consequences for a self-represented litigant who submits a filing containing GAI-generated hallucinations. Courts in other jurisdictions have confronted similar situations, often issuing warnings rather than immediate sanctions.

In Anonymous v. New York City Department of Education, No. 24-cv-04232, 2024 WL 3460049, at *7 (S.D.N.Y. July 18, 2024) (unpublished opinion), a self-represented litigant submitted filings containing fictitious citations generated by GAI. The court acknowledged the seriousness of submitting false legal authority but declined to impose sanctions, citing the litigant's self-represented status. Similarly, in Transamerica Life Insurance Company v. Williams, No. CV-24-00379, 2024 WL 4108005, at *2 n.3 (D. Ariz. Sept. 6, 2024) (unpublished order), a federal court in Arizona warned a self-represented litigant who had used a GAI tool, ChatGPT, to draft filings containing non-existent legal citations, but reserved sanctions for future violations.

In the case at hand, the court found that Al-Hamim's submission of a brief containing hallucinations violated C.A.R. 28(a)(7)(B). It determined, however, that his conduct was less egregious than the misconduct in the cases cited above: Al-Hamim admitted to using GAI and accepted responsibility for the errors in his brief. The court also noted that the Landlords' legal team had neither alerted the court to the hallucinations nor requested attorney's fees.

Given these mitigating factors and the lack of Colorado precedent, the court decided not to impose sanctions, deeming such measures disproportionate. It also declined, however, to grant Al-Hamim an opportunity to refile his brief. The court warned self-represented litigants and attorneys alike that future filings containing GAI-generated hallucinations could result in sanctions, including monetary penalties or dismissal of the appeal. The judgment of the lower court was affirmed.

Conclusion

This case marks the first time a Colorado appellate court has addressed the submission of GAI-generated hallucinations in legal filings. Although the court declined to impose sanctions against Al-Hamim, it issued a firm warning to litigants and attorneys alike that future violations will not be treated with the same leniency.

The court emphasized that users of GAI tools must rigorously review their documents to ensure that the citations are accurate and that the filing complies with procedural rules. Whether represented by counsel or appearing pro se, parties are held to the same standards and must accept the consequences of procedural missteps. With this judgment, Colorado joins other jurisdictions in cautioning against the unrestrained use of GAI in legal filings, reinforcing the principle that technological convenience must never outweigh the pursuit of accuracy and integrity in the judicial process. Fundamentally, this should be a reminder for all litigants (self-represented or otherwise) that checking your work is an essential part of litigation.
