A combination of a bad lawyer (probably using an LLM) and a bad judge (who couldn't care less and could have been fooled by a human-made invented [or misunderstood] case) let that happen. I nonetheless propose we don't ban lawyers and judges yet (though it would make justice much quicker than it is now). The example in the OP is like an article on a plane crash. It happens, and we rightfully get reports on such occurrences, but it doesn't mean that it's representative of usual plane travel. Same with this kind of goof: I expect US judges to actually check the precedents the lawyer is quoting. Errors predate LLMs, and the human lawyer could be honestly mistaken about case law.
If that were true, the issue cited in the OP, and in other similar cases, would never have arisen.
I also think the article would get fewer views if it were titled differently. Because, if we read the article linked by the OP, the story is:
- Husband files for divorce and apparently couldn't (or wasn't really trying to) contact Wife,
- Wife files to reopen the divorce case, citing precedent that Husband should have tried to contact her, and noting that Husband's filing contained bogus cases,
- Husband doesn't address this claim, and Husband's attorney still relies on two invented cases and two irrelevant cases,
- Husband's attorney provides 11 new cases, either also irrelevant or invented,
- There is a suspicion that the court order was actually written by the Husband's lawyer.
There is absolutely no evidence whatsoever that the bogus cases were invented by AI and not by a human. I'd say it's easier to ask an AI to do it, and that's probably what happened, but the exact same thing could occur if Husband's attorney, who kept providing bogus cases even when told the previous ones were bogus, was simply inventing them himself.
The crux of the problem isn't the AI, any more than it is the typewriter Husband's attorney used to file the claim. The problem lies with Husband's attorney, and with the judge who didn't check the cases -- and who didn't use AI at all.
What you mean is that, professionally, you should find the reference anyway.
Yes, and I wouldn't call someone who doesn't do this a professional. Also, I am pretty sure the AI models offered by legal publishers like LexisNexis will contain specific countermeasures to detect hallucinations before they reach the user.