Judge decides case based on AI-hallucinated case law

No.

That's not how sane people react. No-one who has a solid grasp on reality hears someone call their friend, who they know is a nutcase, however nice they might be, a nutcase, and then goes "OMG U CRITICISED MY FRIEND I MUST BECOME A NUTCASE 2". They might well go "Well Ruin Explorer is a total scumbag who doesn't get how kind and loving Nutcase Ted is, eff that guy, not listening to him", which is fine. But that's very, very different to "I TOO MUST DRINK THE KOOLAID!".

Only people who absolutely wanted to embrace the nutjar do that. It's the same with naughty words like racism and homophobia. People say "Oh well if you call someone who is racist and homophobic, racist and homophobic, they and their friends will become Nazis!" and it's like, no, not unless they were already well on the way there, and just looking for any excuse.
That's not at all what I'm saying. But I see this isn't going to be fruitful.
 



That's not at all what I'm saying.
Isn't it?

You said:
And when people who are somewhere between your views and the "dangerously stupid" views hear this condescension--especially if it is directed towards people they care about, and especially if it mischaracterizes those people in some way--then it will cause them to lose trust in science.
It kind of seems like precisely what you're saying - i.e. that if you call someone who is, frankly, immune to rationality, immune to rationality, then people who are in-between him and a more sane viewpoint will automatically move towards being irrational because they somehow associate this critique with "science" in general? I'm not sure what nuance I'm missing here.

And nothing I've seen - and I've known some insane hippies, note (weirdly enough I seem to get along with them pretty well IRL) - suggests that's true. My experience is in fact to the contrary: people with somewhat hippy-dippy magical-thinking views, when they see a friend go full nutcase and hear people around them say "Wow that guy is a total nutter", distance themselves from said nutter and tend to reconsider some of their hippy-dippy magical-thinking views, or suddenly contextualise them to be about feelings, not how they actually believed things worked.
 

That's not really meaningfully condescending though. It's not even truly unhelpful - it still helps others by illustrating and labelling a dangerously stupid person as a dangerously stupid person, because sane people think twice before joining the guy people called dangerously stupid. It just doesn't help the dangerously stupid person not be dangerously stupid.

But that's the core problem with people who reject rationality, reason and science. They're intentionally or unintentionally cutting themselves off from being reached by others, and you can't just magically fix that. Generally people who recover from that particular kind of stupidity do so because something awful happens directly to them or a loved one, and they can't square it with the irrational stance/belief they had, so they have to reconfigure themselves mentally, which includes recognising that they were a fool. Some people are too far gone or too narcissistic to manage that.


Yup. There's a huge element of desperation and a smaller element of essentially being paid off here in the way politicians around the world are treating AI and LLMs with insane enthusiasm. They're desperate for it to magically reverse stuff they've caused by decades of policies, so are extremely keen to believe absolutely insane gibberish bollocks about it, and to protect it at all costs. Says a lot about the reasoning skills and ability to think long-term of many politicians, honestly (and it doesn't say anything good!). Re: being paid off, a lot of AI companies are promising to spend huge amounts and "create jobs", but in fact most of the spend is going abroad to buy hardware, and most of the jobs are extremely low-paid security guard roles, and not even many of them!
This reminds me of the fact that people of a specific mindset were so incensed that Wikipedia seemed to contradict their deeply held beliefs, on so many subjects, that they felt the need to create their own version that explicitly supported them, right or wrong.
 

But that’s just it. You’re not providing evidence it’s obviously defective.
...literally the first post in this thread.
@Dannyalcatraz and @Snarf Zagyg

You guys are gonna love this one...

So, some lawyers use generative AI to prepare submissions to the court. And, as you should know, AI sometimes makes stuff up, but presents it as factual.

Well, now a judge apparently decided a case on the basis of cases that did not exist...

The only human error involved was believing the AI.

Also you're ignoring the part where AI told people to eat glue, told people to eat poisonous mushrooms, went full Nazi, etc.

Can you show any other product with this level of sheer wrongness? Or are you seriously claiming there's no evidence of AI hallucinations?
 

Can you show any other product with this level of sheer wrongness? Or are you seriously claiming there's no evidence of AI hallucinations?
I think the discussion is two ships passing in the night. Everyone knows there are hallucinations. Everyone knows the hallucinations are bad if you take them seriously.

The question is whether "believe what the AI tells you, no questions asked" is a reasonable use case.
 

I think the discussion is two ships passing in the night. Everyone knows there are hallucinations. Everyone knows the hallucinations are bad if you take them seriously.

The question is whether "believe what the AI tells you, no questions asked" is a reasonable use case.
The hallucinations are the defect.

The question is why such an obviously defective product hasn't been pulled.

If a normal mushroom guide told people to eat poisonous mushrooms it would be yanked from shelves and the author could be sued. And yet when AI does the exact same thing you're claiming it's the fault of humans for trusting it.
 

The question is whether "believe what the AI tells you, no questions asked" is a reasonable use case.
Given we know for sure some proportion of the population is going to do that, yeah I think it absolutely is. No Western nation has strong enough mental health and learning difficulty support - or just plain idiot support - that we can stop such people, who are like 10% of the population or more, from accessing this stuff. We're also not preventing children from accessing it, which is insane on the part of our society, frankly. This stuff is more likely to be directly harmful to young minds than most media sex and violence, I'd suggest. We can "blame the parents", sure, but we still have laws about that stuff for a reason, and this stuff should come with a bigass mandatory health warning.

Especially as it's being given away for free, and promoted by our own politicians and wealthy individuals. But because the politicians are desperate to reap a perceived benefit from it, they won't slap a health warning on it, or age-gate it, or in any way inconvenience it, which further exacerbates the problem.
 

The hallucinations are the defect.

The question is why such an obviously defective product hasn't been pulled.

If a normal mushroom guide told people to eat poisonous mushrooms it would be yanked from shelves and the author could be sued. And yet when AI does the exact same thing you're claiming it's the fault of humans for trusting it.

The internet tells me to do dumb crap all the time. Is it defective? Should we just pull it as well?
 

