Judge decides case based on AI-hallucinated case law

If the otherwise reputable FDA page is distributing misinformation like that, it’s not the LLM’s fault. Nor is it the LLM’s fault if the FDA or other trusted resource has been compromised & contaminated with inaccurate info (deliberately or not) and this corruption has not yet been detected.
I'm imagining a case where it has been detected and we all know it's wrong, but the legal environment mandates that LLMs direct users to this false information.

I’m being realistic, not condescending.
It can be both. If someone refuses advice about treatments that will help them because they don't trust you, it is satisfying to say "that person is a fool and shouldn't make their own decisions". But it doesn't help them, because it exacerbates their lack of trust.

One recurring topic in our household is how bad people are at evaluating info their doctors give them. Even pre-COVID, we discussed studies illustrating how common it was for patients to come in demanding treatments or pharmaceuticals completely unrelated to their symptoms or even final diagnoses. (That’s part of the history behind why antibiotics got overprescribed.)

---

So forgive my dim view of laymen evaluating much beyond the most basic medical information.
I agree with all of the above. I share your dim view.

When public healthcare officials in the USA repeatedly revised their assessments of the dangers of COVID and prevention methods as they gathered new information, a sizable percentage of my fellow countrymen thought they were being deliberately lied to.
I won't speak specifically about COVID. But in general, when things are presented authoritatively and turn out to be wrong, that undermines trust. We still see people citing the 1975 global cooling article as evidence that it is all bunk--and that wasn't even a very authoritative portrayal. This is especially true when you are asking people to make major lifestyle changes as a result of your authoritative portrayal.

As a result of that mistrust of doctors & health department officials publicly updating everything ASAP, there are now strong movements in certain states to ban ALL mandatory vaccinations, including those with safety records going back decades, like the polio & MMR vaccines. We’re seeing more and larger outbreaks of entirely preventable diseases here.🫤
And I agree this is a massive problem. But I disagree on the solution. I think circling the wagons and restricting information to experts only is going to make the trust situation worse, not better. It takes decades to build trust and not very much time at all for it to evaporate. Pointing to a degree or a license is something that only works in a high-trust environment. And that no longer exists.

I don't like to speak about myself, but in case it helps prove my bona fides: there have been major consequences in my field. Support for scientific funding has evaporated. Projects that have been decades in development are not going to happen. A number of friends and colleagues have left the country for better options. Trust matters.
 


Those jurisdictions are vanishingly small in number. Have you looked at the warning labels on any products recently? Remember that every one of those warnings is there because someone did it.
You have this a bit backwards. Those jurisdictions and "reasonable person" standards are common, but warning labels largely get instituted because of exceptional cases, or exceptional laws in important markets (like California's Prop 65).
 

We don't forbid oil, knives, kitchen rolls, cars, guns, clubs... What we do is forbid killing, irrespective of how people do it.
When one of those tools is defective, it gets recalled so it stops harming/killing people. So we do forbid those things from being put out at a low quality. And low quality is what AI is right now.
 

Those jurisdictions are vanishingly small in number. Have you looked at the warning labels on any products recently? Remember that every one of those warnings is there because someone did it.

Mitigating someone's responsibility doesn't prevent warnings from being given. It makes sense to warn people not to drink a bucketful of water in one go, without necessarily suing the public water system provider for supplying unlimited water to houses without any warning.

Edit: see @Mannahnin's answer for a better phrasing.
 

If a pharmaceutical company released a medication for testing on the public that had major side effects occurring as often as AI hallucinates, how long do you think it would remain available for public use? And do you think the company would survive all of those lawsuits?
The pharmaceutical company makes far different types of claims about the drug than the AI company is making about the AI.

I'm sure there has been testing, but a program that hallucinates as often as AIs do, and that turns racist as often as they do, perhaps shouldn't be out for public use yet.
My guess is that AIs built/trained specifically for a limited domain don't typically do such things.

There's also the question of inferring user intent. Sometimes users want the AI to make something up. Sometimes they want only actual facts. Inferring the right intent more than 99.999% of the time might be difficult for a general-purpose AI.
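To put a figure like 99.999% in perspective, here is a minimal, purely hypothetical sketch (the IntentGuess type, the labels, and the threshold are all invented for illustration, not taken from any real system) of the "abstain when unsure" pattern a general-purpose assistant would need:

```python
# Purely illustrative sketch (all names and numbers invented): acting on an
# inferred intent only when confidence is high, and asking otherwise.

from dataclasses import dataclass


@dataclass
class IntentGuess:
    label: str          # e.g. "wants_real_sources" or "wants_fiction"
    confidence: float   # the model's own probability estimate, 0.0-1.0


def route(guess: IntentGuess, threshold: float = 0.999) -> str:
    """Act on the inferred intent only above a confidence threshold;
    otherwise fall back to asking the user to clarify."""
    if guess.confidence >= threshold:
        return f"proceed as: {guess.label}"
    return "ask the user: real sources, or made-up ones?"


# Scale is what makes 99.999% so demanding: at a million queries a day,
# even 99.9% accuracy still means roughly a thousand wrong guesses daily.
print(route(IntentGuess("wants_real_sources", 0.97)))  # -> asks to clarify
```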
 

You have this a bit backwards. Those jurisdictions and "reasonable person" standards are common, but warning labels largely get instituted because of exceptional cases, or exceptional laws in important markets (like California's Prop 65).
Those "exceptional cases" are what I mentioned; someone doing the stupid.
 

When one of those tools is defective, it gets recalled so it stops harming/killing people. So we do forbid those things from being put out at a low quality. And low quality is what AI is right now.

A knife that is perfectly fit to cut a huge piece of meat is perfectly fit to kill people. It is not being recalled. It is not regulated by a permit to cook. And while defective products are recalled, they must present a risk as part of their intended use to be deemed defective.

A drill into the skull will probably kill people (honestly, I don't want to google to check that...), and we don't generally recall drills. We do recall a drill that shatters when drilling a hole in a brick wall and potentially injures people. It is recalled because it is dangerous as part of its intended use. The intended use of ChatGPT (I wouldn't use the word "AI" in general for this discussion) is to chat with people about general subjects and provide an interactive experience. Using it as a replacement for a legal or medical professional is like using a drill on a colleague's anal orifice for fun (a real legal case, unfortunately): there is no fault on the drill maker's part, and we still enjoy drills in my jurisdiction (the civil liability was determined to be the employer's). And unlike drills, ChatGPT comes with a warning label against this specific, unintended use that might lead to problems.

A dedicated medical AI that gave bad advice would be defective and would certainly be recalled (or corrected over the network).
 

A knife that is perfectly fit to cut a huge piece of meat is perfectly fit to kill people. It is not being recalled. It is not regulated by a permit to cook. And while defective products are recalled, they must present a risk as part of their intended use to be deemed defective.

AI isn't working right. AI is faulty at the moment. For God's sake, forget law for a moment and realize that it recently suggested that someone put glue on a pizza to keep the cheese from sliding around.
 


A knife that is perfectly fit to cut a huge piece of meat is perfectly fit to kill people. It is not being recalled. It is not regulated by a permit to cook. And while defective products are recalled, they must present a risk as part of their intended use to be deemed defective.

A drill into the skull will probably kill people (honestly, I don't want to google to check that...), and we don't generally recall drills. We do recall a drill that shatters when drilling a hole in a brick wall and potentially injures people. It is recalled because it is dangerous as part of its intended use. The intended use of ChatGPT (I wouldn't use the word "AI" in general for this discussion) is to chat with people about general subjects and provide an interactive experience. Using it as a replacement for a legal or medical professional is like using a drill on a colleague's anal orifice for fun (a real legal case, unfortunately): there is no fault on the drill maker's part, and we still enjoy drills in my jurisdiction (the civil liability was determined to be the employer's). And unlike drills, ChatGPT comes with a warning label against this specific, unintended use that might lead to problems.

A dedicated medical AI that gave bad advice would be defective and would certainly be recalled (or corrected over the network).

Not to get into the weeds here, but I will note that these issues are complicated and nuanced and just talking about hypotheticals is likely to create further confusion.

It's not just the intended use. It's also both the actual use and the reasonably foreseeable use. So liability can attach if a manufacturer, for example, makes something that is totally legal in an intended use, knowing (or with it being reasonably foreseeable) that there will be a misuse. Does Sackler ring any bells?

Next, we need to stop conflating specific-purpose AIs used by professionals with general-purpose AIs. In America, an AI used for most medical applications would have to be vetted through a regulatory framework (as a medical device). Because even though they are being used by trained medical professionals, there are high standards for tools in that profession.

In the end, it really doesn't matter, does it? There's too much money invested already and too much money to be made. The AI avalanche has begun; it is too late for the pebbles to vote. Or, at least, that's the process we are seeing play out. And I am not a luddite- far from it. But given what we've seen over the past decade, I do not have a great amount of optimism that concentrating more power into corporations and trusting that they will have our best interests at heart will end well.

Maybe I'm wrong.

But if there is one thing, and only one thing, I wish people would take from this it is this- in a nuanced and complicated area, the use of a simplifying hypothetical is likely to be more misleading than helpful.
 
