Judge decides case based on AI-hallucinated case law



It seems to me that the people most qualified to do quality control would be the people who are actually qualified to do the work the AI is purporting to replace!

"We've fired you as a teacher, but we would like you to go through these lesson plans and make sure they will be effective at teaching children."
Totally agree. Unfortunately, though, if customers/clients see a product or service that costs substantially less than yours because a competitor incorporated AI into their workflow, you can see the pressure many firms are under to figure this out. Also, if you rely on outside investors or shareholders for capital, you can bet they are asking these questions.
 


It seems like most of the issues people are encountering with the technology are with misuse, not with the technology as such. If you're the first lawyer to get a fake case from an LLM, it's not your fault. If you're the 50th, it's on your lack of due diligence.

There's an interesting case from a day ago where a patient claims an LLM diagnosed them successfully, with a physician later confirming the result. I know some companies claim a higher success rate than physicians.

It reminds me of the trajectory of Wikipedia. When I was in middle school, every teacher we had emphasized that it was untrustworthy and a bad source. By grad school, the faculty were recommending it as a source for various topics. I think that was possible in part because the attitude of skepticism towards it is now widespread.
 


It seems like most of the issues people are encountering with the technology are with misuse, not with the technology as such. If you're the first lawyer to get a fake case from an LLM, it's not your fault. If you're the 50th, it's on your lack of due diligence.

There's an interesting case from a day ago where a patient claims an LLM diagnosed them successfully, with a physician later confirming the result. I know some companies claim a higher success rate than physicians.

It reminds me of the trajectory of Wikipedia. When I was in middle school, every teacher we had emphasized that it was untrustworthy and a bad source. By grad school, the faculty were recommending it as a source for various topics. I think that was possible in part because the attitude of skepticism towards it is now widespread.
Wikipedia is unreliable largely because of its crowdsourced nature and there are cases of bad actors actively crapping in articles. It's still just a starting point, as any LLM should be, rather than a true source.
 

Sure, but you were commenting on the use by laymen, weren't you? At least that's how I read your comments about not being able to afford a lawyer. That's what I was responding to.

Not necessarily. I was comparing the situation where you can't afford a lawyer because he needs to work, say, 100 hours on your case with the opportunity to go to an AI-using lawyer who only needs to spend 75 hours: 50 hours on the same tasks as the other lawyer, because those can't yet be automated, plus 25 hours proofreading and correcting the AI instead of spending 50 hours doing everything by hand. Then you can afford the price of 75 hours. It's not ideal, since other people might still be unable to afford even the 75-hour lawyer, but it would be an improvement to get a lot more people's rights defended, even if the faster lawyer makes some amount of mistakes.

I edited my post to make it clearer.

It's a tool, and it will be more useful or effective in the hands of a trained user, at least initially. In the case of LLMs, that's because a trained user will be able to identify hallucinations where a non-trained user would fall for them.

At some point, it's possible to imagine that we'll get an e-lawyer that can do the whole job for the layman, so that lawyers are unnecessary, but that would need a much lower error rate (whether it's 10%, 1% or 0.1% will probably depend on the stakes) before social acceptance improves. It's the same with planes: if they crashed on one flight in three, nobody would take them; if they crash only... sometimes, people will fly. I googled an Airbus document that stated the yearly fatal accident rate per million flights was around 25 in the 1950s and about 0.12 now. So visibly, people accepted a risk of dying roughly 200 times higher than today's while flying, and yet flying got popular.
 

Wikipedia is unreliable largely because of its crowdsourced nature and there are cases of bad actors actively crapping in articles. It's still just a starting point, as any LLM should be, rather than a true source.
My point wasn't that Wikipedia is 100% reliable. But it is clearly reliable enough to be useful. As are LLMs. The problem is less with the tool than with poor use of the tool.
 

Wikipedia is unreliable largely because of its crowdsourced nature and there are cases of bad actors actively crapping in articles. It's still just a starting point, as any LLM should be, rather than a true source.
To be fair, Wikipedia is a much better source than many others because of how rigorous its standards are. The sun's not hot unless you can cite primary sources. It's got its problems, of course it does, but it's far more reliable than any LLM.
 

To be fair, Wikipedia is a much better source than many others because of how rigorous its standards are. The sun's not hot unless you can cite primary sources. It's got its problems, of course it does, but it's far more reliable than any LLM.
I'd go so far as to say that conceiving of LLMs as a "source" is using them improperly. You're not supposed to ask a question and read the answer as if it were a published document.

Find the part that is interesting. Ask it to expand on that. Ask for documentation with links to the primary sources. Ask if it really applies to this circumstance. Take advantage of the interactivity.
 
