Judge decides case based on AI-hallucinated case law

So, here's an issue we have to deal with.

All technologies are expressed through specific implementations. What we refer to when we speak about a technology is actually a generalization, based on however many specific implementations we are familiar with and on how deeply we understand those implementations.

Speaking about the generalization as if we can disregard the context is asking for trouble, confusion and misunderstanding, because those details actually matter.

Then maybe the use of "AI" is inappropriate when the target is (I suppose) ChatGPT?
 



Just had a horrible tangential thought:

We know that some AIs are incorporating the errors of others, because those errors ended up in their training data. GIGO.

If law-specific AIs are being trained in a similar fashion, what’s to stop them from incorporating erroneous, invalidated, distinguished or overturned decisions in their training processes? Is someone going to weed that stuff out beforehand?
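Just to make concrete what "weeding that stuff out beforehand" would even look like, here's a minimal sketch of a pre-training filter. Everything in it is an assumption for illustration (the file names, the one-JSON-record-per-case format, the hand-maintained OVERTURNED set); it is not how any real legal-AI vendor actually works.

```python
import json

# Hypothetical set of citations known to be overturned, repealed or invalidated.
# In reality this would have to come from a continuously maintained citator service.
OVERTURNED = {
    "123 F.3d 456",
    "789 P.2d 1011",
}

def keep_for_training(record: dict) -> bool:
    """Drop a case if it is itself a known-bad decision, or relies on one."""
    if record.get("citation") in OVERTURNED:
        return False
    return not (set(record.get("citations", [])) & OVERTURNED)

def filter_corpus(in_path: str, out_path: str) -> None:
    # Assumes one JSON object per line:
    # {"citation": "...", "citations": ["..."], "text": "..."}
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if keep_for_training(json.loads(line)):
                dst.write(line)

# filter_corpus("cases.jsonl", "cases.filtered.jsonl")
```

The obvious catch is that this assumes somebody is curating that OVERTURNED set in the first place, which is exactly the question.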

How will they handle bad cases that nonetheless need to be included for contextual purposes?

How will they handle cases involving AI hallucinations?

As I mentioned, I have tried a few AI things for giggles when doing research. Where it's gone completely bonkers is when it bases answers on repealed or overturned statutes.

Like I've said, I've never seen AI get something completely right. It either screws up some important nuance (but gets the gist) or, and this happens a lot of the time ... gets the 100% wrong answer. Not great, Bob.

That said, these were just general AI tools I was playing with. There is a Westlaw one that helps with searching, which I sometimes use; it's decent for a first-pass search, but it isn't as good as a well-constructed Boolean inquiry if I need something specific.
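(For anyone curious what I mean by "well-constructed Boolean inquiry" versus just asking a question in natural language, here's a toy sketch. The document snippets and search terms are entirely made up, and this is plain substring matching, not Westlaw's actual terms-and-connectors syntax.)

```python
# Toy corpus standing in for case headnotes; entirely made up for illustration.
DOCS = {
    "doc1": "maximum term of employment for adjunct faculty is six years",
    "doc2": "adjunct faculty parking regulations, repealed in 2003",
    "doc3": "six years statute of limitations applies to contract claims",
}

def boolean_search(docs: dict, must: list[str], exclude: list[str]) -> list[str]:
    """Return ids of documents containing every 'must' term and none of the 'exclude' terms."""
    return [
        doc_id
        for doc_id, text in docs.items()
        if all(term in text for term in must)
        and not any(term in text for term in exclude)
    ]

print(boolean_search(DOCS, must=["adjunct", "six years"], exclude=["repealed"]))
# ['doc1']
```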
 


Like I've said, I've never seen AI get something completely right. It either screws up some important nuance (but gets the gist) or, and this happens a lot of the time ... gets the 100% wrong answer. Not great, Bob.

And, speaking not of AI in general but of ChatGPT specifically, it does it with assurance. Real-life example from no further back than two days ago:

Me: can you remind me of the maximum employment time for an adjunct professor (I know it's six years in my jurisdiction, I wanted the source)
ChatGPT: Of course, it's nine years (snip a long series of unasked-for details)
Me: can you websearch the source for me?
ChatGPT: Of course. And the exact and precise duration is six years, as evidenced by <link>.

It found the right text, but it never bothered to mention that the previous answer was wrong, even though it contradicted itself.
 

At what point of harm does a disclaimer at the bottom no longer serve? At what point is it no longer reasonable to just wave hands vaguely at it and say, "Well, they were misusing the tool!"? Do you have a line in mind for that? Do you have a calculus of how much personal or societal harm is acceptable?
I suppose everyone does. For me, gen AI seems to be at about the same level as Internet access. It seems clearly on the OK side of that line.
Because, as seen above, we now see AI spouting anti-Semitic rhetoric. That's harmful. Should we take that as an acceptable level of harm? We should be okay with saying, "Well, folks shouldn't listen to Grok," and just letting it slide?
Yes. There is stuff to be said about free speech but I think most people know what I'll say, so I'll just confirm I'll bite that bullet.

(No, I am not asserting ChatGPT is a person).
 

Yes. There is stuff to be said about free speech but I think most people know what I'll say, so I'll just confirm I'll bite that bullet.

(No, I am not asserting ChatGPT is a person).

From my point of view, living where anti-Semitic hate speech on a website is punished by up to a year in jail and a €45,000 fine, the idea that "something" specific should be done about hate speech generated by an AI accessed through a website sounds very strange in the first place. Especially when framed as "should we really accept that?"
 

There’s a reason Umbran started this thread.

We know that some of the cases in the original pleadings were AI hallucinations. This thread is just talking about one particular case involving the law, but it's not even the only one mentioned on this board. I've personally brought up other cases in other threads. There are a few other MAJOR cases that have popped up in the law that I personally know of, and Legal Eagle has done at least 3 videos on the subject.

And there have been OTHER threads & comments posted on ENWorld about AI hallucinations in other fields of work.
The existence of hallucinations is not surprising at this point ... nor is the fact that people are uncritically trusting it. We saw the same with adoption of the internet. I don't see why it rises to a level where something must be done about the technology, in this case.
 

I fully trust that you are recounting what you heard accurately. I've also seen many people do something similar with the infamous McDonald's hot coffee case, but that one was a lot more nuanced than the average Joe realizes. That McDonald's way overheated the coffee, beyond McDonald's own recommendations, because that specific store believed people wanted their coffee still hot when they got to work.

There is also a media lens effect. The media tend to report headlines, people read headlines, and even if the article gets the details right, people remember the headline better.

I can quote two court decisions in France where a person who moved from the city to a village sued his neighbours because they had a rooster that made noise at sunrise. In one case he won; in the other he got a frivolous-lawsuit penalty.

In both cases, the reporting was more about the stupidity of city-dwelling people who can't adapt to life in the countryside than about the specifics (in one case it was indeed someone who, well, discovered the concept of roosters; in the other, he had bought a house with a small farm nearby, which was later replaced by a huge open-air chicken-raising operation). But what public opinion remembered was that Parisians sue their neighbours because roosters crow, or church bells ring...
 

The existence of hallucinations is not surprising at this point ... nor is the fact that people are uncritically trusting it. We saw the same with adoption of the internet. I don't see why it rises to a level where something must be done about the technology, in this case.

Yeah. We're seeing reports of hallucinations. No one, as far as I know, denies that they exist (though there is no proof they were actually involved in the specific case mentioned in this thread), and there is no reason to suppose they don't happen when asking about law. But the question that must be answered to determine whether this tool is useful is: how often do hallucinations occur, and do they hamper any productivity increase for a specific task?

That they get reported is logical, the same way plane crashes are over-reported compared to planes landing successfully. There is also a risk that an undetermined number of hallucinations are happening and not being noticed.

Evaluation of the tool involves:
  • Assessing the frequency of hallucinations (if it is wrong X% of the time, it's not useful, with X varying across use cases),
  • Assessing how easy it is to weed out hallucinations (if the AI is asked to provide evidence for each claim, that should lower them substantially; if they are obvious, it's easy to identify them; if the workflow involves running another AI to detect them and that works 100% of the time, then hallucinations are not a problem... if they happen often in a way that makes them difficult to detect, it's not a useful tool; see the rough numbers sketched after this list),
  • Assessing the consequences for the use case (I asked an AI for my sorcerer with glowing green eyes in the AI image thread, and it gave the glowing green eyes to the dragon instead... that doesn't bother me too much since I like the end result anyway... while a hallucination from an AI doing the self-driving for my car would be more problematic).
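To put rough numbers on the first two bullets, here's a quick back-of-the-envelope sketch. All the rates in it are invented placeholders for illustration, not measurements of any real model, and the "verification pass" is just an assumed second check (another AI or a human), not any particular product.

```python
# Back-of-the-envelope sketch of the evaluation described in the list above.
# All numbers are invented placeholders, not measurements of any real model.

def residual_error_rate(hallucination_rate: float, detection_rate: float) -> float:
    """Hallucinations that survive a verification pass (a second AI, or a human check)."""
    return hallucination_rate * (1.0 - detection_rate)

def worth_using(hallucination_rate: float, detection_rate: float,
                acceptable_rate: float) -> bool:
    """The tool is 'useful' for a task only if the surviving error rate is tolerable."""
    return residual_error_rate(hallucination_rate, detection_rate) <= acceptable_rate

# Image generation: frequent glitches, but almost any glitch is acceptable.
print(worth_using(hallucination_rate=0.30, detection_rate=0.0, acceptable_rate=0.50))    # True
# Legal citations: rarer errors, but the tolerance is far lower.
print(worth_using(hallucination_rate=0.05, detection_rate=0.90, acceptable_rate=0.001))  # False
```

The whole argument really hangs on that detection_rate term: a cheap, reliable check can rescue a fairly high raw hallucination rate, while without one even a low rate is fatal for something like a court filing.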
Reporting on the existence of hallucinations doesn't say anything about AI in general, especially when people use it in violation of its licence (it might matter for the people involved, and for the specific publisher of the AI involved, but not for AI in general). We'll see if Husband's lawyer sues ChatGPT for providing bogus legal precedents... possibly invoking a few more bogus precedents?
 

