Judge decides case based on AI-hallucinated case law



A lot, but it's not a problem. After all, we're spending a lot of power and water on an infrastructure that lets us have our silly discussions about AI, law, and pretending to be elves, discussions of absolutely no importance, just for fun. Spending some more to actually lessen the burden of work on humanity, and possibly let more people afford access to the legal system, nurturing a world where justice and fairness are within reach for all and not just the privileged few, seems a fair use of resources.
Except LLMs aren’t nurturing any such thing. And a really bad lawyer that can only puke up what it finds online with definitionally no understanding is worse than no lawyer.

This tech will never bring us closer to a better future, it will only help billionaires steal even more of the pie.
 

I’m not sure I’d agree with that, personally. My time practicing criminal law was (mercifully) brief- 99% of my practice has been civil.

Just to clarify, I wasn't opposing civil law to criminal law, but countries whose judicial tradition is based on civil law to countries whose judicial tradition is based on common law, like the UK or the US, where the role of case law is indeed paramount: Civil law (legal system) - Wikipedia
 

Except LLMs aren’t nurturing any such thing. And a really bad lawyer that can only puke up what it finds online with definitionally no understanding is worse than no lawyer.
You're assuming that only really bad lawyers would use this tool (and be unable to understand its limitations), rather than every lawyer, including the good ones, who can understand those limitations and use it to increase their productivity. There is no reason to assume good professionals would shun tools that let them take on more cases in the same amount of time. Most of them replaced their paper codes with online versions, and many use Dragonspeech to dictate instead of typing on antique typewriters; why wouldn't they adopt newer tools once those become better than the existing ones, which doesn't necessarily mean foolproof? It's quite easy to notice hallucinations in your own field of expertise.
 




I think he's saying that the tool is equivalent to a really bad lawyer.

Then sure. I don't think anyone without training should use it to replace a lawyer altogether, any more than one should replace a doctor with an LLM when trying to get a medical diagnosis. But medical AIs (not general-purpose LLMs) are helping doctors right now, and I think the same can happen with law professionals. I'd welcome it eagerly.
 

It's better at making stuff up convincingly than any human.

On this I disagree. Whenever I've tried to "test" an LLM with specialized knowledge in my field of expertise, I very quickly noticed the limitations. Asking for an explicit link to an existing webpage supporting the claim is often enough to debunk the most egregious hallucinations. Also, professionally, you'll generally need the reference anyway, so it's something one should do every single time when interacting with an LLM.

Meanwhile, I am pretty sure I could be fooled by a human expert, especially one I'd more naturally trust (a colleague, for example).
 

Asking for explicit link to an existing webpage supporting the claim is often enough to debunk a lot of the most egregious hallucinations.
Indeed, LLMs are set up to try and avoid citing sources, to make it harder to find evidence of plagiarism. It's a useful trick for someone who does their job diligently and honestly.
Also, professionally, you'll need the reference anyway.
If that were true, the issue cited in the OP, and in other similar cases, would never have arisen.

What you mean is: professionally, you should find the reference anyway.
 

