AI Hallucinations – Jakes v Youngblood et al: “Even more outrageously, when accused of a serious ethical violation…attorney…chose to double down.”
Catching up with some of the AI hallucination cases from last week, Jakes v Youngblood in the US District Court for the Western District of Pennsylvania is quite remarkable:
“…Whoever or whatever drafted the briefs signed and filed by [Attorney], it is clear that he, at the very best, acted with culpable neglect of his professional obligations. The alternative is that he acted in a conscious effort to deceive and mislead the Court. At this point, in light of [Attorney]'s continuing offenses in his reply brief, the Court is inclined to believe the latter.”
The "latter" scenario mentioned here, deliberate deception, is unprecedented among the AI hallucination cases involving lawyers that I've encountered. Let's hope this is not the case because, if it is, we may see some of the most severe sanctions yet. The show cause hearing is listed for 24 July 2025.
Other key takeaways:
Continued reliance on hallucinated material is severe ethical misconduct.
Lawyers cannot delegate their duties of accuracy and candour by blaming AI or third parties.
Judges seem more alert to AI involvement even without direct evidence.
The distinction between culpable negligence and intentional deceit is highly significant to the severity of sanctions.
Withdrawal does not avoid accountability.
My full analysis can be read here.