The Prosecutor, The Algorithm, and The Phantom Case Law: Inside Del Norte County’s Legal Reckoning

A deep dive into the Del Norte County scandal where a District Attorney admitted to filing a brief containing AI hallucinations. This analysis explores the systemic failures of 'shadow AI' in criminal courts, the blurry line between algorithmic fabrication and human negligence, and the looming crisis of trust in prosecutorial integrity.
Written by Mike Johnson

In the quiet, fog-laden jurisdiction of Del Norte County, California, a legal drama has unfolded that serves as a stark warning for the entire American judicial system. It is a story that moves beyond the now-familiar anecdotes of lawyers misusing ChatGPT and strikes at the heart of prosecutorial integrity. While the legal profession has spent the last year chuckling uneasily at civil litigators caught inventing case law, a recent admission by District Attorney Katherine Micks has shifted the conversation to a far more somber venue: criminal court, where the liberty of defendants hangs in the balance. As reported by the ABA Journal, Micks recently conceded that her office filed a prosecution brief containing AI-generated hallucinations, a revelation that has sent shockwaves through the state’s defense bar and prompted a reevaluation of technological safeguards in public offices.

The incident centers on a motion to suppress evidence in a drug trafficking case, a routine procedure that turned extraordinary when the defense attorney, Keith Morris, began checking the prosecution’s citations. Morris, attempting to do his due diligence, found himself chasing ghosts; the case law cited by the District Attorney’s office simply did not exist. This was not merely a matter of incorrect formatting or misinterpretation of precedent. The citations were complete fabrications—digital mirages conjured by a large language model and pasted onto state letterhead. The gravity of this error cannot be overstated. In criminal proceedings, the state wields the immense power of incarceration, and the integrity of the arguments presented by the prosecution is the bedrock upon which judicial trust rests.

A Rural Courtroom Becomes Ground Zero for the Legal Profession’s Struggle With Generative Technology and the Definition of Competence

However, the narrative took a complex turn when the District Attorney offered her explanation to the court. As detailed by the ABA Journal, Micks admitted that one of the briefs in question was indeed the product of generative AI usage, specifically citing “hallucinations” where the software invented legal authority. Yet, she vehemently denied that AI was the culprit in a second, separate brief that also contained erroneous citations. For this second document, Micks attributed the failures to human error—specifically, the reliance on an outdated “brief bank” and the sloppy recycling of old legal arguments. This distinction is critical for industry insiders analyzing the situation. It highlights a dual failure mode: the seductive, dangerous ease of AI adoption without verification, and the persistent, analog problem of administrative negligence. The incident suggests that while AI is a new vector for error, it is exacerbating pre-existing vulnerabilities in legal workflows where volume often trumps precision.

The defense bar’s reaction was immediate and scathing. Morris, representing the defendant, argued that such failures effectively deprived his client of due process, forcing the defense to spend valuable resources debunking nonexistent laws. The ABA Journal notes that this incident echoes the infamy of the Mata v. Avianca case in New York, where two lawyers were sanctioned for similar conduct. However, the Del Norte case carries heavier weight because the error originated with the government. When a private attorney fails, a client loses money or a case; when a prosecutor fails in this manner, the legitimacy of the state’s punitive machinery is called into question. The presiding judge, Superior Court Judge Darren McElfresh, expressed deep skepticism, noting that the court relies on the accuracy of counsel to function. The erosion of this reliance threatens to grind the wheels of justice to a halt, as judges may now feel compelled to independently verify every citation submitted by the DA’s office.

Distinguishing Between Algorithmic Fabrication and Traditional Administrative Negligence in Public Office Workflows

The technical underpinnings of this failure point to a fundamental misunderstanding among legal practitioners of what Large Language Models (LLMs) actually are. These systems are not knowledge databases; they are predictive engines designed to generate plausible-sounding text. When a prosecutor asks an LLM for case law regarding suppression motions, the model does not search Westlaw or LexisNexis. Instead, it predicts what a legal citation looks like based on statistical probability. It generates a volume number, a reporter abbreviation, and a page number that feel mathematically correct to the model, even if they correspond to nothing in reality. This phenomenon, known as hallucination, is not an occasional glitch but an inherent byproduct of current generative AI architecture. For a District Attorney to introduce this volatility into the criminal justice system suggests a profound gap in technological literacy at the leadership level.
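The verification gap described above can be made concrete with a minimal sketch. Everything here is an illustrative assumption, not a real tool: the regex matches a generic "volume Reporter page" cite (e.g., "42 Cal.App.5th 123"), and the `VERIFIED_CITES` set stands in for a lookup against an actual citator such as Westlaw, LexisNexis, or a court's official reporter.

```python
import re

# Illustrative pattern for a "volume Reporter page" citation,
# e.g. "42 Cal.App.5th 123". A hallucinated cite will match this
# pattern just as cleanly as a real one -- format checks alone
# cannot distinguish them.
CITE_RE = re.compile(r"\b\d+\s[A-Z][\w.]*\s\d+\b")

# Hypothetical allowlist: cites already confirmed against an
# authoritative source. In practice this lookup would be a query
# to a legal research database, not a hardcoded set.
VERIFIED_CITES = {"42 Cal.App.5th 123"}

def audit_citations(brief_text: str) -> list[tuple[str, bool]]:
    """Return each citation found, flagged True only if it appears in
    a verified source. A cite can carry a plausible volume, reporter,
    and page and still point at nothing -- which is exactly how LLM
    hallucinations survive a cursory read."""
    return [(cite, cite in VERIFIED_CITES)
            for cite in CITE_RE.findall(brief_text)]
```

Run over a brief citing "42 Cal.App.5th 123" and "88 Cal.App.5th 456", the audit passes the first and flags the second for manual lookup. The design point is that verification must be a lookup against ground truth; no amount of pattern-matching on the citation itself can reveal a fabrication.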

Furthermore, the “human error” defense regarding the second brief—the “copy-paste” excuse—reveals a systemic issue in how district attorney offices manage their institutional knowledge. If outdated, erroneous briefs are being recycled without review, it suggests that the introduction of AI is merely pouring gasoline on a fire of existing inefficiency. The ABA Journal reporting highlights that Micks characterized the non-AI errors as a failure to update templates. This admission paints a picture of an office struggling with the basics of legal writing, making the introduction of a “shortcut” tool like ChatGPT almost inevitable. It serves as a case study for legal tech consultants and firm managers: AI tools cannot fix a broken workflow; they will only automate the production of errors at a scale previously unimaginable.

The Erosion of Institutional Trust and the High Stakes of Prosecutorial Accuracy in the Age of Automation

The ramifications of the Del Norte incident are likely to extend far beyond the borders of California. Legal ethics experts are closely watching how the State Bar of California responds. Under Rule 1.1 regarding the duty of competence, a lawyer must apply the diligence and learning necessary to perform legal services. The question now facing disciplinary boards is whether the unverified use of generative AI constitutes a violation of this primary duty. In the wake of the Mata sanctions, federal judges across the country began issuing standing orders requiring attorneys to disclose the use of AI. The Del Norte incident will likely accelerate the adoption of such orders in state criminal courts, creating a patchwork of regulations that lawyers must navigate. It forces a new standard of practice: “Trust, but Verify” is no longer sufficient; the new standard is “Verify, then Verify again.”

Moreover, this case exposes the asymmetry of resources in the criminal justice system. Defense attorneys, often overworked and underpaid, are now tasked with the additional burden of auditing the prosecution’s use of technology. If Keith Morris had not been diligent in checking the citations, the phantom case law might have been accepted by the court, potentially establishing a disastrous precedent. This shifts the labor of verification onto the defense, creating an ethical hazard where the state can produce voluminous, AI-generated filings at negligible marginal cost, while the defense must expend significant billable hours or public defender resources to deconstruct them. It is a form of asymmetric warfare via algorithm, whether intentional or, as in this case, the result of staggering negligence.

Navigating the Future of AI Integration Without Compromising Constitutional Due Process and Professional Ethics

The industry must also grapple with the psychological aspect of “automation bias”—the human tendency to trust output generated by a computer. In high-volume legal environments like a DA’s office, the temptation to accept the AI’s output as “probably right” is immense. Micks’ admission that AI was used for one brief but not the other creates a murky operational picture. It suggests an environment where AI usage is ad-hoc, unregulated, and shadow-IT driven, rather than a sanctioned, integrated part of the legal stack. This “shadow AI” usage is perhaps the greatest threat to legal firms and government offices today. Without clear policies, firewalled tools, and enterprise-grade legal AI (which is grounded in actual case law databases), individual attorneys will continue to turn to consumer-grade tools like ChatGPT to cut corners, with disastrous results.

Ultimately, the Del Norte County incident serves as a definitive case study for the entire legal sector. It proves that the risks of generative AI are not theoretical, nor are they confined to high-stakes civil litigation in metropolitan hubs. They have permeated the granular level of rural criminal justice. The ABA Journal coverage underscores a reality that every managing partner and district attorney must now face: the barrier to entry for creating convincing but fraudulent legal documents has dropped to zero. As the profession moves forward, the focus must shift from merely condemning these errors to building robust frameworks—both technological and educational—that prevent them. The integrity of the record, and the freedom of the accused, demands nothing less than a total overhaul of how digital tools are vetted and deployed in the courtroom.
