Judge decides case based on AI-hallucinated case law

While it's a vivid cautionary example, don't the same concerns and cautions here apply broadly to internet search tools like Google?

And the end result of a search is websites, all of which present things in a way that people are inclined to trust... especially websites with an agenda. But people have learnt that "I've seen it on the Internet" is a warning flag, ever since the quote "Abraham Lincoln said never to trust anything on the Internet" appeared on the Internet...

Are LLMs more worrisome because they involve less work sorting through results, and thus fewer obvious points for the user to read critically and assess the sources?

They are new. And outside of academia, a lot of people might not bother to check sources and trust their search engine like gospel. That's reasonable for questions where the overwhelming majority of sources will be right (if you ask the oxygen ratio in the air, there is a low chance you'll get a wrong answer), which reinforces that initial belief. People need some time to adjust to any new technology. They adjusted to state-controlled information (by listening to two different national radio stations), they adjusted to privately-controlled information (by assigning trust values to different media outlets), they adjusted to Wikipedia, and they'll need time to adjust to AI and learn that they need to check the sources for anything important in order to use the tool correctly.
 

A medical AI trained by the national public health service would be far less susceptible to being manipulated by corporate interest. Same with a law database.
You HOPE.

Thing is, just like laws written by lobbyists, pharmaceutical companies and other institutions DO try to influence what info gets disseminated to the general public. Back when my Dad opened his private medical practice in the early 1980s, pharma reps would ply him with all kinds of stuff, ranging from baseball caps to knickknacks & plushies to free meals to tickets to sporting events. (GOOD tickets.)

A lot of that got shut down in the 2000s (food deliveries continue) because the gov’t got worried about undue influence on doctors’ prescription tendencies.

I’m not going to say my Dad is immune to such things, but I know for a fact that he absolutely didn’t prescribe at least some of the meds that were presented with such deal sweeteners.
 

You HOPE.

Thing is, just like laws written by lobbyists, pharmaceutical companies and other institutions DO try to influence what info gets disseminated to the general public. Back when my Dad opened his private medical practice in the early 1980s, pharma reps would ply him with all kinds of stuff, ranging from baseball caps to knickknacks & plushies to free meals to tickets to sporting events. (GOOD tickets.)

I imagine that and similar cases happened in France around the same time period. But individual doctors can be more easily "swayed" toward prescribing pill A rather than pill B because it's more difficult to provide oversight on individual medical appointments. A public service AI system would be monitored by lots of civil servants, so it would be harder to bribe anyone into putting loopholes in it. I'd be more wary of genuine errors.
 


You don’t need to bribe it. The bias is introduced by the data it is trained on. And then, since it’s a black box, it’s impossible to detect.
Yes.

This is a very major reason why LLMs are being pushed so hard by various billionaires and other people who want to control society. Instead of having to carefully buy off people, buy influence, run campaigns and so on (which we have seen even the richest man on the planet directly fail at), you can, in theory, just go into the back end of the LLM and get it to output views more favourable to your POV.

This isn't theory or conspiracy theory, note, we can see this having happened with Grok literally in the last few days, when it declared itself "MechaHitler" and so on.

If most people start getting most of their information from LLMs, especially if it's a small number of LLMs all essentially based out of the US and all owned by billionaires with relatively similar ideas (i.e. leaning hard technocratic, anti-democratic), then it'll be incredibly easy to control what people believe is true, as most people don't exactly work hard to establish facts.

It helps that cameras exist and are everywhere, but part of the reason AI video (which is thankfully still trash) is being pushed so hard is an attempt to be able to undermine them and spew FUD about even direct footage of events.
 

That’s definitely a possibility. If medical students are mostly judged on coursework rather than exams (as horrible and stressful as exams are), that’s definitely an option. And if, after graduating, you rely on machine learning for all your diagnoses, your skills will rust, even assuming the models are reliable.
Even relying on LLMs for part of your work dulls your acumen. It’s exceptionally dangerous.
In that context, I see limitations like "no legal or medical advice" as overreach.
How? How would a government “take advantage” of a law banning the use of AI in medical and legal casework?
I hope that Grok's anti-Semitic meltdown has folks thinking a moment about how generative AI actually operates, and the implications for its use.

Musk is a ham-handed dullard, so when he sought to adjust Grok, he did so poorly, without subtlety, and the thing went kind of berserk. But in the process he made it blatantly obvious that these AIs cannot be assumed to be neutral arbiters of information. A more crafty creator could disguise the bias better.

Now, imagine the use of generative AI in law, when the tech-mogul behind it has a political agenda. Imagine the use of generative AI in healthcare, when the creator has taken a large investment from a pharmaceutical company.

For the end-user, a generative AI is a black box, its sources and biases not analyzable by the user. Its operation can only be trusted as far as you can trust its makers to have your best interests in mind.
Yep. And if a technology being safe relies on humans behaving differently than humans actually behave... that tech is going to do harm.
 

You don’t need to bribe it. The bias is introduced by the data it is trained on. And then, since it’s a black box, it’s impossible to detect.

I meant, bribe the people responsible for curating the training data to introduce biased content favoring the interests of corporations over the interests of the health service. Especially since the training data could be subject to a later audit.
 

You don’t need to bribe it. The bias is introduced by the data it is trained on. And then, since it’s a black box, it’s impossible to detect.
This doesn't follow. Grok being a black box did not stop us from detecting its bias. If the output is biased, we can observe it. If it's not observable, then it's not an issue.

(But of course it will be biased because literally everything is biased).

This isn't theory or conspiracy theory, note, we can see this having happened with Grok literally in the last few days, when it declared itself "MechaHitler" and so on.

If most people start getting most of their information from LLMs, especially if it's a small number of LLMs all essentially based out of the US and all owned by billionaires with relatively similar ideas (i.e. leaning hard technocratic, anti-democratic), then it'll be incredibly easy to control what people believe is true, as most people don't exactly work hard to establish facts.
And this is too cute. It relies on the assumption that people just kind of blindly follow what the LLMs tell them. But the recent experience with Grok shows people don't. They just point and laugh, the same dynamic that has occurred with print and news media for years.

How? How would a government “take advantage” of a law banning the use of AI in medical and legal casework?
Do you need me to spell it out? Imagine some creative interpretation of what counts as 'legitimate medical advice'. Or 'legitimate legal advice'.
 

This doesn't follow. Grok being a black box did not stop us from detecting its bias. If the output is biased, we can observe it. If it's not observable, then it's not an issue.

True. Also, for a professional AI, the test phase should certainly try to determine whether the results follow the goal. Having an AI that, for example, conveniently forgets to point out generic pills over brand-name pills would be easily detected before it is sent to professionals (though they'd probably go for something less blatant).

(But of course it will be biased because literally everything is biased).

Sure. The goal being that it reflects the bias we find virtuous over the others.

And this is too cute. It relies on the assumption that people just kind of blindly follow what the LLMs tell them. But the recent case shows people don't. They just point and laugh, the same dynamic that has occurred with print and news media for years.

You're now tempting me into suggesting that purposefully bad AI models should be leaked to lawyers so they can be identified and disbarred after they all make their cases by quoting the Wile E. Coyote vs Roadrunner case...
 

And this is too cute. It relies on the assumption that people just kind of blindly follow what the LLMs tell them. But the recent case shows people don't. They just point and laugh, the same dynamic that has occurred with print and news media for years.
I'm confused. Isn't this thread about a judge making a legal decision based on the work of a lawyer who used LLMs? So not everyone is pointing and laughing.
 
