Judge decides case based on AI-hallucinated case law

The internet tells me to do dumb crap all the time. Is it defective? Should we just pull it as well?
You're comparing apples and Appalachia.

Again:
If a normal mushroom guide told people to eat poisonous mushrooms, it would be yanked from shelves and the author could be sued. And yet when AI does the exact same thing, you're claiming it's the fault of humans for trusting it.
AI is being sold as a trustworthy source of information and planning. Instead, users are getting misinformation and bad ideas.
 


Not to get into the weeds here, but I will note that these issues are complicated and nuanced and just talking about hypotheticals is likely to create further confusion.

The issue at hand isn't what can be done, or what the US can reasonably do within its current legal framework. It's "what should we do?". Killing all AI researchers and burning every book containing knowledge about LLMs is certainly impractical, and some might find it ethically questionable, but it's a position one could hold as an answer to what we should do about AI, because such an answer isn't tied to what we can reasonably do. People explaining the reasoning that leads them to propose their solution shouldn't be constrained by feasibility. Hey, at some point, the answer to "what should we do about slavery?" was "we should ban it", even if the issue was complicated and nuanced (and doing that in one country led to a civil war and several constitutional changes, so it was indeed complicated).

It's not just the intended use. It's also both the actual use and the reasonably foreseeable use. So liability can attach if a manufacturer, for example, makes something that is totally legal in an intended use, knowing (or with it being reasonably foreseeable) that there will be a misuse. Does Sackler ring any bells?

I think the Sackler case, to be honest, would be difficult to replicate outside of the US legal and cultural framework, where, for example, marketing for drugs is allowed, including firms promoting their products directly to medical professionals with little oversight. And the fact that the companies misrepresented the risks associated with their products seems (though of course I claim no expertise in this domain) extremely different from the way general-purpose LLMs are marketed right now, where providers actively discourage anyone from using them for several use cases.

Next, we also need to stop conflating specific-purpose AIs used by professionals with general-purpose AIs. In America, an AI used for most medical applications would have to be vetted through a regulatory framework (as a medical device), because even though such tools are being used by trained medical professionals, there are high standards for tools in that profession.

Sure, but when people here propose to ban AI (and not only general-public LLMs) from giving medical advice, they oppose both types of use. If they don't want to conflate the two, then they should say so.

In the end, it really doesn't matter, does it? There's too much money invested already and too much money to be made. The AI avalanche has begun; it is too late for the pebbles to vote. Or, at least, that's the process we are seeing play out. And I am not a Luddite, far from it. But given what we've seen over the past decade, I do not have a great amount of optimism that concentrating more power in corporations, and trusting that they will have our best interests at heart, will end well.
You're addressing, again, what can be done, not what should be done. There are LLMs that aren't made by corporations, and if you don't want to trust corporate models, then you could choose to trust models developed by state actors (Falcon), semi-state actors (Mistral), universities (Bloom)... Even if it is unlikely that you could convince all countries to ban commercial use of AI, it is a position one could, after all, defend.

But realistically, if one considers that corporations are corrupt and states are corrupt, then all that's left to do is lie down and die.
 

How about just a bare level of competence? Call 1,000 pizza places and see how many suggest glue.

Me: How do you suggest I reattach the toppings to a pizza? Is using glue a good idea?
Chat-GPT: Using glue on food is definitely not a good idea — even so-called “non-toxic” glue isn't safe or approved for consumption.

Great, the tool was already pulled from the market and repaired, so it passes your bare level of competence test.
 

I think the Sackler case, to be honest, would be difficult to replicate outside of the US legal and cultural framework, where, for example, marketing for drugs is allowed, including firms promoting their products directly to medical professionals with little oversight. And the fact that the companies misrepresented the risks associated with their products seems (though of course I claim no expertise in this domain) extremely different from the way general-purpose LLMs are marketed right now, where providers actively discourage anyone from using them for several use cases.
For the record, there was also a settlement with respect to OxyContin in Canada, where the advertising of prescription-only drugs is restricted. The settlement was nowhere near as big as that in the US, even when accounting for population, but it's clearly possible for such a thing to happen outside the US. I can't speak about other jurisdictions, as I don't know of cases outside of the US and Canada.

Edited for accuracy
 

For the record, there was also a settlement with respect to OxyContin in Canada, where the advertising of drugs is restricted. The settlement was nowhere near as big as that in the US, even when accounting for population, but it's clearly possible for such a thing to happen outside the US. I can't speak about other jurisdictions, as I don't know of cases outside of the US and Canada.

I think it wasn't as big because the advertising was more restricted and the regulatory environment different, leading to a public health crisis less acute than in the US, even if Canada was nonetheless the second hardest-hit country (according to Wikipedia) after the US in overprescription of OxyContin. And the US prescribes four times more opioids per capita than EU countries (which doesn't necessarily mean that opioids were four times overprescribed; there is a possibility the optimal amount prescribed lies somewhere between the two, if overregulation of prescriptions leads to underprescribing in the EU).
 

I think it wasn't as big because the advertising was more restricted and the regulatory environment different, leading to a public health crisis less acute than in the US, even if Canada was nonetheless the second hardest-hit country (according to Wikipedia) after the US in overprescription of OxyContin. And the US prescribes four times more opioids per capita than EU countries (which doesn't necessarily mean that opioids were four times overprescribed; there is a possibility the optimal amount prescribed lies somewhere between the two, if overregulation of prescriptions leads to underprescribing in the EU).
Be that as it may, it still shows that it's not only possible, but actually happened, outside of the US.
 

Be that as it may, it still shows that it's not only possible, but actually happened, outside of the US.

Indeed, which illustrates what I was saying. The situation wasn't replicated because, as you mentioned, it happened on a much smaller scale (and probably for good reasons, likely stricter controls on marketing, which were strict but not bulletproof: the accusations that would have made Purdue Canada liable included the circulation of advertising material from the US to health professionals, though we will never know if that was true since the case was settled). I don't consider 150 million comparable to 7.4 billion, even accounting for population, but I won't try to convince you of that.
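
As a rough sanity check on the "even accounting for population" point, here is a quick back-of-the-envelope comparison; the population figures (roughly 330 million for the US and 38 million for Canada) are my own approximations, not numbers from the thread:

# Back-of-the-envelope comparison of the two settlements, scaled by population.
# Population figures below are rough assumptions used only for illustration.
us_settlement = 7.4e9   # US settlement, ~7.4 billion
ca_settlement = 150e6   # Canadian settlement, ~150 million
us_pop = 330e6          # assumed US population
ca_pop = 38e6           # assumed Canadian population

us_per_capita = us_settlement / us_pop  # ~22 per person
ca_per_capita = ca_settlement / ca_pop  # ~4 per person
print(round(us_per_capita / ca_per_capita, 1))  # ~5.7: still several times larger even per capita

On those assumed figures, the US settlement remains roughly five to six times larger per person, which is the sense in which the two amounts aren't comparable even after scaling by population.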
 

I'm imagining a case where it has been detected and we all know it's wrong, but the legal environment mandates that the LLMs output this false information.
Asked and answered. The LLM isn't the responsible party; whoever is causing the FDA to promulgate falsehoods is.
It can be both.
Not in this case, as I described.
I won't speak specifically about COVID. But in general, when things are presented authoritatively and turn out to be wrong, that undermines trust. We still see people citing the 1975 global cooling article as evidence that it is all bunk, and that wasn't even a very authoritative portrayal. This is especially true when you are asking people to make major lifestyle changes as a result of your authoritative portrayal.


And I agree this is a massive problem. But I disagree on the solution. I think circling the wagons and restricting information to experts only is going to make the trust situation worse, not better. It takes decades to build trust and not very much time at all for it to evaporate. Pointing to a degree or a license is something that only works in a high-trust environment. And that no longer exists.
Nevertheless, COVID provides a well-documented, recent case study in this.

Public health organizations and experts didn't simply claim they were right because they were authorities; they said COVID-19 was a new virus and they didn't know exactly what it could do, so they were basing their recommendations on what they knew from related pathogens while waiting for new research results.

When those results came in, they revised their recommendations, expressly in the context that new information was responsible for the changes. That's how you create policy in accord with the scientific method: you change recommendations when better information becomes available.

This was mischaracterized by certain outlets and individuals as lying. And that narrative captured the minds of an unfortunately large segment of the populace.

The CDC, WHO, Fauci, etc, didn’t misrepresent what they knew & when they knew it, nor hide behind their credentials. People systematically attacked their credibility. With that pretext, they also destroyed trust in well-established medical and public health findings.

At this point, a large enough segment of the adult population (in America, at least) has demonstrated that it can't properly evaluate medical info for veracity and accuracy.

So no, generalized AIs should not be able to disseminate any medical advice beyond “find a qualified medical professional near you”*

And further, AIs specialized for medical or legal professionals shouldn’t be accessible to the general public either. Most people lack even the vocabulary to fully grasp the results their questions would return.






* and likewise for legal advice.
 

Indeed, which illustrates what I was saying. The situation wasn't replicated because, as you mentioned, it happened on a much smaller scale (and probably for good reasons, likely stricter controls on marketing, which were strict but not bulletproof: the accusations that would have made Purdue Canada liable included the circulation of advertising material from the US to health professionals, though we will never know if that was true since the case was settled). I don't consider 150 million comparable to 7.4 billion, even accounting for population, but I won't try to convince you of that.
Now you're just splitting hairs. They were liable. The agreement shows it.
 

Now you're just splitting hairs. They were liable. The agreement shows it.

OK, so they were liable despite never having claimed that their drug was less likely to cause addiction than other opioids, and Canadian doctors massively prescribed it for random reasons totally unrelated to any action the company took to promote the drug, including spending 1.9 million on health professionals in 2017 (according to this article from the University of Toronto) or circulating 15,000 videos of marketing material from the US to health professionals. I stand corrected.
 
