Analysis

Why Mata v. Avianca Keeps Happening: AI Citations in Legal Briefs

Nearly three years after Judge Castel sanctioned the lawyers who filed a brief full of cases that did not exist, courts are still seeing the same failure. AI citations in legal briefs keep breaking the same way. Here is how to stop it.

Samuel Anderson
CEO & Founder, Aewita · April 22, 2026 · 7 min read
[Header image: stack of bound federal reporters on a courtroom bench, a yellow legal pad and pen beside them, under a single overhead light]

I reread Judge Castel’s sanctions opinion every few months. Not because the facts have changed — they have not — but because the lesson gets forgotten. Mata v. Avianca, Inc., docketed as 1:22-cv-01461 in the Southern District of New York, was a routine personal-injury action against an airline. It became the defining AI-in-law cautionary tale because plaintiff’s counsel submitted a brief citing judicial opinions that had never been written.

On June 22, 2023, Judge P. Kevin Castel issued the sanctions opinion. He fined the attorneys $5,000 and found they had acted in subjective bad faith. They had used a general-purpose AI chat tool without understanding its failure modes. The tool invented cases, gave them federal reporter citations, attributed them to real circuits, and even produced plausible-sounding internal quotations. When opposing counsel and the court could not find any of the cases in Westlaw or on PACER, the attorneys went back to the same tool and asked it whether the cases existed. The tool reassured them they did. They filed again.

That last part is the part that still stops me cold. Not the initial mistake. The confirmation step that should have caught it, and didn’t, because the person checking was the same AI that had invented the problem.

Why it keeps happening after Mata v. Avianca

Mata was not an isolated incident. Since 2023, federal and state judges have issued repeated sanctions and show-cause orders for AI-generated fake citations in briefs filed by solo practitioners, by regional firms, and in at least one case by attorneys at a well-known national firm. The fact pattern is almost always the same: a lawyer asked a general-purpose AI chat tool for case law, the tool invented it, the lawyer did not verify it against a primary source, and the brief got filed.

The root cause is structural, and it is not that AI is inherently dishonest. It is that a general-purpose AI chat tool is not grounded in a legal corpus. It predicts the next plausible token based on how judicial opinions tend to sound. The citation it generates is shaped like a real citation because real citations are what it was trained on. The sentence it produces is shaped like a real holding because real holdings are what it was trained on. The problem is that “shaped like” is not the same thing as “is.”

“Check it later” is not a workflow. It is a hope that a human will catch what the tool already failed to catch.

The reason Mata keeps happening is that most legal AI workflows still have a “check it later” step that lives outside the tool. The tool produces a citation. The lawyer is supposed to verify it. Under time pressure, some lawyers do not. Some do, but superficially — they confirm a reporter volume exists without confirming the specific case appears on the cited page. The failure mode is human, but it is a failure mode the tool makes easy.

What citation verification actually means

Verification, in the sense that prevents a Mata, means two things happen inside the system every single time an answer is produced. First, the system retrieves a specific primary source — a real opinion, with a real docket, from a real court. Second, the system independently checks that the citation it is about to produce resolves to that source on the cited page. If the citation cannot be resolved, the system does not render it. It flags it.
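To make those two steps concrete, here is a minimal sketch in Python of the resolve-or-flag logic. It is an illustration under my own simplifying assumptions, not Aewita's implementation: the Citation and CheckedCitation types, the resolve_or_flag function, and the dictionary-style index are placeholders, and a real verifier would also confirm that any quoted language actually appears at the pinpoint page.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Citation:
        reporter: str   # e.g. "F.3d"
        volume: int
        page: int

    @dataclass
    class CheckedCitation:
        citation: Citation
        opinion_id: str | None   # identifier of the matched opinion, if any
        verified: bool

    def resolve_or_flag(citation: Citation, index: dict[Citation, str]) -> CheckedCitation:
        """Resolve a citation against a primary-source index, or flag it."""
        opinion_id = index.get(citation)
        if opinion_id is None:
            # Cannot be resolved to a real opinion: flag it, never render it.
            return CheckedCitation(citation, None, verified=False)
        return CheckedCitation(citation, opinion_id, verified=True)

    # Hypothetical index with one entry; the second lookup is an invented citation.
    index = {Citation("F. Supp. 3d", 100, 200): "opinion-0001"}
    print(resolve_or_flag(Citation("F. Supp. 3d", 100, 200), index).verified)  # True
    print(resolve_or_flag(Citation("F.3d", 999, 1), index).verified)           # False, flagged

The design point is the return type: a citation is either verified against a specific opinion or explicitly unverified. There is no third state where it quietly renders anyway.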

That is what I mean when I say every citation is independently verified against the source before it reaches you. It is not a marketing phrase. It is a design commitment. The AI that produces the answer is not the AI that verifies the citation. The verifier is a separate process that resolves every reference against our case and statute index. A patent-pending citation verification pipeline is part of why a lawyer using Aewita does not have to be the first line of defense against fabricated citations. The platform is the first line. The lawyer is still the second line, and the lawyer still signs the brief. That is professional responsibility. But the first line matters.

Our research product page walks through the full retrieval and verification flow. The short version: we index every published U.S. court opinion from 1700 to today, every federal statute, and every state statute for all 50 states and the District of Columbia. 792 document types. 22 practice areas. When a citation in a generated answer does not resolve to a document in that index, it does not appear in the answer. It appears as a flag.

What courts are doing after Mata

Federal judges did not wait for the ABA to issue guidance. Judge Brantley Starr of the Northern District of Texas made the earliest and most widely covered move: in late May 2023, weeks before Judge Castel’s Mata sanctions opinion, he required attorneys appearing before him to certify that either no portion of a filing was drafted by generative AI, or that any AI-drafted portion had been checked for accuracy by a human using a reliable source. Other judges followed with variations: some requiring affirmative disclosure of AI use, some requiring a human accuracy certification, some banning AI drafting outright for specific filings.

The practical effect is that if you file a brief in federal court today, you may be operating under a standing order that requires a specific AI-use certification. You are certainly operating under Rule 11, which requires that legal contentions in a signed filing be warranted by existing law or a non-frivolous argument for extending it. A cited case that does not exist is not a warranted legal contention.

If you want to read Judge Castel’s sanctions order directly, the case docket is searchable on CourtListener under Mata v. Avianca, Inc., 1:22-cv-01461 (S.D.N.Y.). It is worth the forty-five minutes.

Five questions to ask before pasting an AI-generated case name into a brief

If you are using any AI tool — a general-purpose chat tool, a legal AI platform, a browser plugin, anything — to help research or draft a court filing, here are the five questions I would ask before a single case name from that tool lands in a document you file.

  1. Is this tool grounded in retrieved primary sources, or is it generating from training data alone? If the tool cannot show you the actual opinion it is citing, the tool is not doing the work you think it is.
  2. Is every citation independently verified against that primary source before it is displayed? “Displayed in green text” is not verification. Resolution against the actual cited page is verification.
  3. What happens when the tool cannot resolve a citation? Does the answer suppress the unverifiable claim, flag it explicitly, or render it anyway with a visual indicator most users will not see?
  4. What is the published hallucination rate? If the vendor cannot give you a measured number with a confidence interval, you are flying blind. Aewita’s number is under 0.3% at 95% confidence across 800 consecutive queries in internal testing; most competitors have no comparable figure.
  5. Who else sees the prompt? This is a Rule 1.6 question, not a Rule 11 question, but both matter. If the answer involves a third-party AI provider, your citation-verification problem and your confidentiality problem are the same problem wearing two hats. See our security page.

Those five questions are the bar. If your current tool can answer all five with specifics, fine. If it cannot, you are one rushed filing away from your own Mata opinion.

How Aewita approaches citation verification

I will keep this section short, because the product pages are where the detail lives. Aewita’s research surface is grounded in retrieved primary sources. When you ask a research question, the system retrieves the relevant opinions and statutes from our index, generates an answer strictly anchored in what it retrieved, and independently verifies every citation in the final answer against the cited primary source. Unverifiable references are flagged, not rendered as clean blue links. We built the AI, we host the AI, and the verifier runs on our infrastructure alongside the retrieval and inference steps.
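As a rough illustration of that flow, continuing the earlier sketch rather than describing our actual code, the orchestration looks something like this. Here retrieve_sources and generate_grounded_answer are assumed placeholders for the retrieval and grounded-generation steps.

    def answer_with_verified_citations(question, index, retrieve_sources, generate_grounded_answer):
        """Sketch of retrieve -> generate -> verify, with flagging of unresolved citations.

        retrieve_sources(question, index) is assumed to return relevant opinions
        and statutes from the primary-source index; generate_grounded_answer is
        assumed to return an object with .text and .citations drawn only from
        those retrieved documents.
        """
        sources = retrieve_sources(question, index)
        draft = generate_grounded_answer(question, sources)
        verified, flagged = [], []
        for cite in draft.citations:
            checked = resolve_or_flag(cite, index)   # the resolve-or-flag step sketched above
            (verified if checked.verified else flagged).append(cite)
        # Flagged references are surfaced as warnings, never rendered as clean links.
        return draft.text, verified, flagged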

The measured outcome: in internal testing, Aewita observed zero hallucinated outputs across 800 consecutive queries — statistically, a rough upper bound under 0.3% at 95% confidence. Ask any other legal AI vendor for their number. If they will not produce one, you have your answer.

For how this compares to the other products in the market, our comparison page is the reference. For the architecture that makes the verification possible, the research product page is the place to start.

The professional-responsibility point Mata was really making

Judge Castel’s opinion is often read as a warning about AI. It is really a warning about verification. A lawyer who pastes an unverified citation into a brief has failed the same professional duty whether the unverified citation came from an AI, an associate, a junior paralegal, a paid-database search, or a half-remembered case name. The verification step is not outsourceable. It lives with the signing lawyer.

What changed with generative AI is the scale and confidence with which unverified citations can be produced. A paralegal who is unsure will leave a note. A general-purpose AI chat tool that is unsure will write the citation in the same tone it writes every other citation. The tool cannot tell you what it does not know. That is what a verifier is for, and that is why “check it later” is the failure pattern that keeps producing Mata-style sanctions.

The way to stop Mata from happening again in your firm is not to ban AI. It is to insist that the AI your firm uses does the verification step inside the tool, every time, by design. Ask your vendor how. If the answer is a deflection, change vendors.

See how Aewita verifies every citation.

Every citation independently verified against the source. Unverifiable references flagged, not rendered. $99 per attorney per month.