“If ChatGPT told you to jump off a cliff, would you do that too?!?” screams my internal scolding voice. Because despite an ever-increasing number of cautionary tales about lawyers ill-advisedly letting G(PT)sus take the wheel, another pair of law firms has bumbled its way into sanctions.
This time, it’s Ellis George and K&L Gates starring in the reboot of ChatGPT Presents: Fantasy Legal Filings, and in a nod to Hollywood’s lack of creativity, once again the plot hinges on lawyers submitting fake cases to a federal court. It’s not a sequel anyone asked for, but this is how we ended up with J.J. Abrams making Star Wars movies.
The filing in Lacey v. State Farm involved only 27 citations, but the Special Master in the case, former C.D. California Magistrate Judge Michael R. Wilner, determined that NINE of them were wrong in some way. No one went to law school because they were good at math, but that’s one-third of the total citations.
In the ruling issued May 6, Wilner said that after consulting the online legal research service Westlaw, he discovered that “nine of the 27 legal citations” were “incorrect in some way,” “[at] least two of the authorities cited do not exist at all” and “several quotations attributed to the cited judicial opinions were phony and did not accurately represent those materials.”
Hey, if the Department of Justice can make up fake quotes, why can’t the private sector get in on the fun?
Seriously, though, there’s no good reason for this to be happening at this point. The Avianca case was almost two years ago, and while a lot of the coverage at the time tried to blame generative AI for hallucinating cases, a second wave of coverage stressed that this isn’t a technology problem, but a lawyer laziness problem. Trump fixer Michael Cohen’s lawyer submitted a brief citing fake cases a year and a half ago. But in every instance, the error — as the IT people say — is between the keyboard and the chair. All it would have taken was a lawyer doing the job they should be doing anyway and citechecking the cases. “Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material,” wrote the Special Master.
Maybe K&L Gates could’ve spent more time editing and less time shadow purging all references to diversity off their website.
“Directly put, Plaintiff’s use of AI affirmatively misled me,” he wrote. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them—only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order. Strong deterrence is needed to make sure that attorneys don’t succumb to this easy shortcut.”
Hopefully it did not “almost” lead to a scarier outcome. I’d like to think the Special Master planned to check the cases before issuing an order regardless.
Wilner sanctioned the firms for the defense’s share of his 30-day fee and an additional $5,000 to reflect a share of the defense’s costs preparing their response brief — Wilner did not think sticking them with the full amount was necessary for deterrence.
But… isn’t it, though? We’ve been writing about lawyers doing this for two years now! When the judge hit the Avianca lawyers for $5K, the technology was new and the mistake was novel. No one can make that claim today. And the Avianca lawyers were a small shop working on their own — this is two firms, including one in the Am Law 50. How many hands did this filing pass through? None of them checked the cites? How does that happen?
It’s not about AI hallucinating. Hallucinating is what it does. Blaming the AI for this is like blaming a vending machine for not giving you a steakhouse dinner. But what makes AI a powerful tool is that it can deliver that clutch fix of Mountain Dew and Twinkies that you need to fuel a long night of actually editing briefs.
This sanction might be enough to deter these firms from doing it again, though the public humiliation probably did that already. Sanctions aren’t needed for specific deterrence; they’re needed to put the fear back into practitioners that they can’t farm out their professional responsibilities to AI.
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
The post Law Firms Use Artificial Intelligence To Earn Very Real $31K Sanction! appeared first on Above the Law.