~6 min read
Avianca and after — what every state has done since
In June 2023, U.S. District Judge P. Kevin Castel of the Southern District of New York sanctioned two attorneys for citing six fabricated cases in a brief filed in Mata v. Avianca, Inc. The cases were generated by ChatGPT; the attorneys hadn't verified them. The judge imposed a $5,000 penalty and required the attorneys to send letters to each judge falsely identified as the author of one of the made-up opinions.
Two years later, the pattern is well-known. What's less widely tracked is how the bar associations, state supreme courts, and federal districts have actually responded — and how patchworked that response is.
The federal pattern
Several federal courts now require certifications. In the Northern District of Texas, Judge Brantley Starr's standing order requires filers to certify either that no generative AI was used in drafting, or, if it was, that a human verified every citation against traditional sources. A judge in the Eastern District of Pennsylvania has entered a similar order.
The pattern is consistent: courts don't ban AI; they require attestation that you verified the output. The certification language varies, but the spirit is the same — your signature on a brief now carries, by default, an "I checked the cases cited" warranty.
State variation
The State Bar of California's Standing Committee on Professional Responsibility and Conduct issued guidance in late 2023 noting that competent representation under Rule of Professional Conduct 1.1 requires lawyers to verify AI-generated research before relying on it. The New York State Bar Association's Task Force on Artificial Intelligence has issued similar guidance, and Florida and Texas have followed.
State trial courts have acted individually, too. Counties in Pennsylvania, New Jersey, and California have issued local rules or standing orders requiring disclosure of AI use. New orders appear fast enough that any blanket survey is out of date within months, but the direction is consistent: verify, don't trust.
What it actually takes to verify
Westlaw and Lexis will tell you whether a citation is well-formed. They will not always tell you whether the case exists — at least not without deliberate cross-referencing across reporters. A Bluebook citation checker reduces formatting errors but doesn't query a case database.
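The gap between "well-formed" and "exists" is easy to see in code. A toy sketch (the regex covers only a few federal reporters; real checkers such as the open-source eyecite library handle hundreds of reporter variants):

```python
import re

# Toy "volume reporter page" pattern covering a few federal reporters.
# This only illustrates the format-vs-existence gap -- it is not a real
# citation parser.
CITE_RE = re.compile(r"\d{1,4}\s+(?:F\.(?:2d|3d|4th)?|U\.S\.|S\.\s?Ct\.)\s+\d{1,5}")

def is_well_formed(cite: str) -> bool:
    """True if the string parses as a citation.
    Says nothing about whether any such case actually exists."""
    return bool(CITE_RE.fullmatch(cite.strip()))

# The reporter cite attributed to the fabricated Varghese opinion in the
# Avianca brief parses perfectly -- well-formed is not the same as real:
print(is_well_formed("925 F.3d 1339"))    # True
print(is_well_formed("ChatGPT said so"))  # False
```

Every fabricated cite in the Avianca brief would sail through a check like this, which is exactly why format validation alone isn't verification.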
The free, public CourtListener database covers ~18 million U.S. cases — federal courts, most state appellate courts, and a growing slice of state-trial-court records. Querying CourtListener for every citation in a brief is the cheap, pre-filing baseline check. That's what Citation Sentinel does, automatically, in Word.
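A minimal version of that baseline check can be scripted against CourtListener's public REST API. The endpoint path and response shape below are assumptions based on the API's citation-lookup documentation at the time of writing — verify them against the current docs before relying on this sketch:

```python
import json
import urllib.parse
import urllib.request

# Assumed CourtListener citation-lookup endpoint (check current API docs).
LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def lookup_citations(brief_text: str, api_token: str = "") -> list:
    """POST the full brief text; CourtListener extracts and resolves
    every citation it finds, returning one result object per cite."""
    data = urllib.parse.urlencode({"text": brief_text}).encode()
    req = urllib.request.Request(LOOKUP_URL, data=data)
    if api_token:
        req.add_header("Authorization", f"Token {api_token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def bucket(results: list) -> dict:
    """Split lookup results into confirmed vs. unconfirmed citations.
    A per-citation status of 200 with matched clusters means the cite
    resolved to at least one indexed case."""
    out = {"confirmed": [], "unconfirmed": []}
    for r in results:
        ok = r.get("status") == 200 and r.get("clusters")
        out["confirmed" if ok else "unconfirmed"].append(r.get("citation", ""))
    return out
```

Because bucket() is a pure function, you can dry-run the triage logic on a canned response without touching the network.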
The honest limitation
Even CourtListener's index isn't exhaustive. State trial-court rulings, administrative agency decisions, and recent unpublished opinions can legitimately be missing. A citation flagged "Unindexed" might still be a real case; the lookup just couldn't confirm it. That's why Citation Sentinel's "Unindexed" state never claims invalidity — it says "we can't confirm this; verify externally."
The cases that come back "Likely invalid" — the ones where CourtListener can't even match the reporter format — are the ones that scream "fabricated." The Avianca filings would have shown red on every cite.
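The three states described above reduce to a simple decision rule. A sketch under stated assumptions — the regex, function name, and state labels are illustrative, since Citation Sentinel's actual parser is not public:

```python
import re

# Illustrative reporter-format pattern: "volume Reporter page".
# A production parser would be far stricter and reporter-aware.
FORMAT_RE = re.compile(r"\d{1,4}\s+[A-Z][A-Za-z0-9.\s]*?\s\d{1,5}")

def classify(cite: str, found_in_index: bool) -> str:
    """Three-state verdict mirroring the taxonomy above:
    Verified       -- matched to an indexed case
    Unindexed      -- plausible format, but the lookup couldn't confirm it
    Likely invalid -- the reporter format itself doesn't parse
    """
    if found_in_index:
        return "Verified"
    if FORMAT_RE.fullmatch(cite.strip()):
        return "Unindexed"
    return "Likely invalid"
```

Note the asymmetry: a failed lookup only ever downgrades a cite to "Unindexed"; the "Likely invalid" verdict requires the citation itself to be malformed.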