Judge decides case based on AI-hallucinated case law




Here's a case in which lawyers submitted hallucinated case law and it was handled properly, though I think the sanctions are on the light side.

Quoting the article:
Damien Charlotin tracks court cases from across the world where generative AI produced hallucinated content and where a court or tribunal specifically levied warnings or other punishments. There are 206 cases identified as of Thursday — and that's only since the spring, he told NPR. There were very few cases before April, he said, but for months since there have been cases "popping up every day."

Charlotin's database doesn't cover every single case where there is a hallucination. But he said, "I suspect there are many, many, many more, but just a lot of courts and parties prefer not to address it because it's very embarrassing for everyone involved."

A couple hundred cases “since the spring”, and an educated guess that most don’t get reported.

Wrist slaps for AIBS need to turn into suspensions, IMHO, ASAP.
 

A couple hundred cases “since the spring”, and an educated guess that most don’t get reported.

Wrist slaps for AIBS need to turn into suspensions, IMHO, ASAP.

This bit really concerns me:

Charlotin's database doesn't cover every single case where there is a hallucination. But he said, "I suspect there are many, many, many more, but just a lot of courts and parties prefer not to address it because it's very embarrassing for everyone involved."

Are we looking at the possibility that AI slop threatens to drown even the legal system just because it's too "embarrassing" to deal with? I'm envisioning bad actors exploiting an already creaky system in very detrimental ways.
 

A couple hundred cases “since the spring”, and an educated guess that most don’t get reported.

Wrist slaps for AIBS need to turn into suspensions, IMHO, ASAP.
I agree. It amounts to legal malpractice and should be subject to a review by the appropriate Bar Association, at the least, in addition to court sanctions.
 


"I had no discipline and didn't even look at it, then I started to focus and worked it every day.

Thanks ChatGPT!"

Spoilers, it wasn't the chatbot that did it.

:ROFLMAO:

Allan also said the journey helped her rediscover other money sources she hadn't thought about in a while.

"My husband was actually like, 'Oh, didn't we have a brokerage account?'" Allan recalled.

"There's $10,200 sitting in this account that is available. Like I could literally cry right now," she said in a TikTok video.

Although Allan's method worked for her, some financial experts like Noelle Carter, president and CEO of Parachute Credit Counseling, warn ChatGPT and AI should be treated as a tool and not a solution.

"AI can be a powerful assistant to come up with ideas, but, you know, certainly not a substitute for human expertise or critical thinking," Carter said.

Other experts also encouraged people to only spend within their means so as to avoid debt completely.

"Human Expertise."
"Critical Thinking."

...spend within their means to avoid debt completely.

 

This bit really concerns me:
Are we looking at the possibility that AI slop threatens to drown even the legal system just because it's too "embarrassing" to deal with? I'm envisioning bad actors exploiting an already creaky system in very detrimental ways.
I don’t think so.

I suspect that what’s happening in those cases is that the judge- having caught the attorneys using bogus AI cases- gives a verbal reprimand (and possibly a contempt citation) and allows them to rectify their pleadings without referring them to the bar association for further action.

It still slows proceedings down, though.
 

Condescension implies a patronizing superiority. I'm not claiming to be superior, nor that they are inferior.

I’m saying that the average layperson doesn’t have the training to evaluate medical treatment claims with accuracy, and part of that lack is not having the necessary vocabulary. For example, the concept of comorbidity isn’t that difficult: it’s just the existence, in a particular patient, of multiple afflictions capable of harming or killing them.
I've been reflecting on this for a few days and want to add something.

My wife used to work with a patient population where diabetes was common. They'd see young kids with risk factors for developing it soon, and would advise the parents on how to help the kids avoid it. In many cases, the parents would get super prickly and defensive--"I'm diabetic, my friends are diabetic, are you saying there is something wrong with it? I don't think there is anything bad about my kid becoming diabetic".

Because it was so common, and perhaps because they were insecure about it, it became a kind of identitarian marker for them. So it took a lot of care to communicate solutions appropriately. Obviously the medical professionals trying to stop the kids from getting diabetes weren't saying they were better than people who had it. But it was perceived that way.

---

The expertise angle works the same way. I know that you, Danny, are not saying that you (or medical professionals) are superior to other people in any way. But, there are a lot of people who aren't well educated, who didn't finish high school or attend college. And this can become an identitarian marker. And the same kind of defensiveness manifests. It can become kind of a game to look at highly educated people and make fun of the stupid things they do and their misconceptions (and what they waste their money studying, to connect to the science funding).

When you approach this kind of community and say "you don't have the training to evaluate medical claims with accuracy", it doesn't matter if you're right, and it doesn't matter that you have the best intentions and really want to help them. It is going to come across, to many people, as if you are saying "I think you're stupid because you didn't go to college". If you repeat this over and over while making decisions that affect their lives, it's going to breed distrust and resentment and conspiratorial thinking.

---

If LLMs repeat this same kind of phrasing, I suspect you will see conspiracy theories regarding the creators, attempts to 'fix' the LLMs (we have seen some already) and 'alternative' LLMs (likewise).
 

I don’t think so.

I suspect that what’s happening in those cases is that the judge- having caught the attorneys using bogus AI cases- gives a verbal reprimand (and possibly a contempt citation) and allows them to rectify their pleadings without referring them to the bar association for further action.

It still slows proceedings down, though.
Which could be particularly bad in the hands of a vexatious litigant, or someone whose whole purpose is to fatigue the court. I've mentioned before that I was a witness in a case where the accused went through something like eight lawyers over the course of a few years.
 

When you approach this kind of community and say "you don't have the training to evaluate medical claims with accuracy", it doesn't matter if you're right, and it doesn't matter that you have the best intentions and really want to help them. It is going to come across, to many people, as if you are saying "I think you're stupid because you didn't go to college". If you repeat this over and over while making decisions that affect their lives, it's going to breed distrust and resentment and conspiratorial thinking.
(Emphasis mine.)

You don’t do that. You do NOT insult the populace when drafting laws.

You simply legislate that LLMs cannot dispense medical or legal information beyond recommendations to seek professional guidance. Companies or individuals releasing AIs that do so get held liable- preferably under a strict liability standard.

We do this all the time: seat belt laws; fireworks ordinances; requiring licenses or specialized training to engage in certain activities or professions; restricting access to certain materials or products, etc. Nowhere does any of said legislation include judgmental language as to the mental faculties of the general populace and/or those covered by those laws, even in the legislative history notes that explain why the law is being considered.

Does this prevent pushback? Absolutely not. Almost all of the examples I used generated some level of resistance. Some called seatbelt laws an example of “the nanny state”. Licensing requirements get opposed. People ignore fireworks bans.

But you NEVER draft a law, however well-intentioned, with an explanation that too many idiots exist out there. Not even the safety warnings get judgy; they just set forth the rules.
 
