Judge decides case based on AI-hallucinated case law



Wasn't sure which AI thread this should go into

Judges Don’t Know What AI’s Book Piracy Means​

Can AI companies keep stealing books to train their models?


Well ... I don't think that the headline is accurate or correct, and it does a great disservice to the article.

I have a few minor quibbles with the article from a purely legal standpoint, but overall I think it does a pretty good job in a short amount of space outlining some of the issues in a way that might make them a little easier for most people to understand.


For example, I don't think the article actually summarized the rulings very well. And while it did a really good job of pointing out some of the issues (see, e.g., the Lemley controversy), it didn't expand on why that's important.

Briefly, the issue with Lemley quitting the legal defense team, and the aside about studies, is this: maybe Lemley quit because of the post, but he also quit at the same time that he co-authored a study showing that the AI would regurgitate copyrighted works in part and/or in whole. Which ... yeah. And the article notes that studies on this exact problem are most likely to be undertaken and understood by the companies making the AI products, which means those companies are disincentivized to produce or publish them, and so bringing these suits will put plaintiffs at a disadvantage.

But overall? For a short article? 4/5 stars. Headline? 0/5 stars.
 

No. That was absolutely correct. I have repeatedly (and repeatedly) stated that this is a complicated and nuanced issue.

This is true. Or at least we're agreeing on this.

I was not the one making a universal statement of how product liability law works- you were.

No, I wasn't. I was saying initially that consensus would be impossible to find because there are a lot of nuanced situations between countries (and within countries in some cases, as you correctly added). Limitations on what a technological tool should or should not do will depend on the legal framework applicable to the market it is sold in, probably resulting in different outcomes.

I was providing you an example of why that is incorrect.

Well, if you had the impression I was making a general statement, maybe I wasn't clear. I was just saying that using a country-specific legal framework to dictate what a product should or should not do won't give a useful result in general.

Moreover, you will have to excuse me if I find your statements regarding other jurisdictions somewhat curious; it has been a while since I have looked into it, but I recall learning that Germany (like most countries) does have a legal regime regarding reasonably foreseeable use.

It also has provisions that take into account the user's responsibility, with nuanced situations, so a general statement that the AI operator is liable for giving bad advice after warning the user not to use it in a specific context was too broad to be applicable. A user who is told not to use an AI for X, and who does X nonetheless, might see their ability to claim damages from the manufacturer lessened or suppressed in some situations.

Moreover, this is an EU issue, which was in the news recently because at the end of last year the EU's new product liability directive came into force (replacing the one that had been in force for three or four decades). It maintained the criterion for assessing a product's defectiveness that includes the reasonably foreseeable use of the product, not just the intended use.

The product liability directive doesn't change the criteria for establishing liability ("In light of the imposition on economic operators of liability irrespective of fault, and with a view to achieving a fair apportionment of risk, a person that claims compensation for damage caused by a defective product should bear the burden of proving the damage, the defectiveness of a product and the causal link between the two, in accordance with the standard of proof applicable under national law.").
What was found was that establishing the link between the AI-produced result and the defectiveness of the product was difficult with AI, and sometimes impossible (in certain Member States), especially in light of a few rulings where algorithmically produced results, even if harmful, were deemed not to fall within the realm of liability. Hence the effort to draft a harmonization directive for the future.

Again, I do not think that this conversation is productive. I have repeatedly stated that I do not think you are conversant in the details of what I am discussing, and that's fine. You are still entitled to your opinions on the matter, and they are as valid as mine. I do ask that you stop telling me that I am wrong about something I do happen to understand. Good?

Where did I say that you were wrong about US liability? When you claimed that my rebuttal to your point about how liability works was to criticize it as being ONE EXAMPLE ONLY, I did say that was incorrect (my position wasn't to dismiss it as a one-off example; I was still saying that using location-specific constraints to determine what an AI should or should not be able to do wasn't going to be fruitful). That implied no fault on your part -- a misunderstanding can lie with both sides of a conversation.

I told you that you made a generalization, which apparently I misread from your post, since you clarify that you didn't intend to make a general statement. I apologize for misunderstanding your post, which came among a lot of generalizations in other posts in the thread. I had no intent to say you're wrong about how things would work in the US or even in common law countries.

Again, I do not think that this conversation is productive. I have repeatedly stated that I do not think you are conversant in the details of what I am discussing

Indeed, I don't think conversing with someone who repeatedly insults my professional skill is productive either, so it's certainly best we don't engage with each other.
 

Here's a fun one, just to show it's not just a problem in the United States...


Quite a doozy. It's like the immovable object and the unstoppable force: what happens when a pro se litigant who admits to using AI with hallucinated sources faces off against an attorney who admits nothing but is sketch as all get out (and yeah, probably used AI but just changed the case cites to cases that exist but don't actually mean what the AI said)?
 


This is my general viewpoint, because facts don’t care about feelings.
But regulations are not statements of fact. They weigh many competing concerns, like whether they violate any rights and how to balance economic vs public health goals.

Mary Mallon was insulted by, and thus refused to abide by, doctors' orders and those of NY public health services. As a one-woman disease vector, she earned the nickname "Typhoid Mary" and a one-way trip into permanent quarantine.
Nor is my point "if anyone is insulted you can't make a law." My point is that how people respond to a regulation is one of many things you must weigh.

No we don’t. The justification offered is that the regulation will “improve outcomes in cases of _________”, or “reduce instances of _______ by N%”, not “we have to protect the uneducated citizens from this danger”.
That is the exact opposite of what has taken place in this thread. The argument made was explicitly "we have to protect uneducated citizens from the danger".

When the US mandated using seatbelts in most passenger vehicles, people complained. But nowhere in the record or minutes of the legislation will you find discussion about how average people don’t understand the risks. Certainly, there were studies that supported the rule, but they were not written in terms of the Average Joe’s perceptions.

The same goes for our rules on tobacco and alcohol sales. We have age limits for purchases and warning labels on the products (and ours are tamer than in some countries). But the core warning was framed “The Surgeon General has determined that _______ is harmful to your health.”, not that people are too stupid to understand.
If you think I am talking about the minutes of legislation or the text of surgeon general warnings you have not understood me. I'm talking about the topics raised when discussing legislation. We are having such a discussion right now. And the arguments being offered are about the ability of people to evaluate medical claims accurately.
 

If you think I am talking about the minutes of legislation or the text of surgeon general warnings you have not understood me. I'm talking about the topics raised when discussing legislation. We are having such a discussion right now. And the arguments being offered are about the ability of people to evaluate medical claims accurately.

I think the seatbelt analogy is quite good, though. Why do we mandate that people use seatbelts? Because they are demonstrated to reduce the number of deaths and improve the quality of life of survivors, yet people didn't wear them based on communication alone, so indeed the law was established to protect people against their own judgement (but not promoted as such). Representatives can sometimes make better-informed choices than the less informed majority, or a minority, would make.

Here, the argument would be that AI should be prevented from giving medical advice because people would use it to avoid going to a doctor, even if they are told that they shouldn't rely on a general purpose LLM for health advice. I think the two lines of reasoning are quite analogous.

I don't agree that this is necessary but I can see the analogy.

I don't agree it would be necessary because:
a) we have had ample opportunity to block people from bad medical advice, yet we generally don't, including in countries with heavy regulation of speech, unless the speaker is trying to pass their words off as professional medical advice. So the risk, while real, wasn't deemed large enough to mandate a regulatory reaction (i.e., I can access the WebMD website, while I can't access some porn websites or Nazi-propaganda websites, so obviously our legislators thought it wasn't necessary to block medical discussion that isn't dressed up as medical authority -- if at some point thousands of people start dying because they skipped the doctor, thinking a general AI opinion was enough, and missed a serious disease, then this position might change).
b) preventing the AI from discussing medical topics might be too broad (outside of medical diagnosis, the topic can be relevant to idle conversation, which is what LLM chatbots are promoted as doing)
c) in some cases, people will tell things to a chatbot (think of the woman who had questions about conception she didn't dare to ask her friends), and they might benefit from an answer alarming them: "I am not a doctor, I am not qualified to give health advice, and you should really double-check everything I tell you, but the symptoms you typed are those of a heart attack, call 911 immediately". That might help, whereas the standard disclaimer of "I am not allowed to speak about health, go see a doctor" wouldn't prompt the same reaction.
d) where I live, the cost of a medical consultation is zero, so there are very few barriers to seeing a doctor (basically just the time spent waiting for your turn).
e) going to see a doctor is still necessary to get drugs, so if the AI says "it's only a common cold" (and you secretly have cancer), you still need to get an appointment to get the treatment, and a human doctor will assess your condition.

So the bad outcome would only happen when people disregard the warning, the AI makes an error and downplays the risk, and the patient has a dangerous disease which they chose to "suffer through" instead of getting a mostly free cure. I think we're under the threshold for public intervention -- though I can see the cost of doctor + drugs being an incentive for uninsured people to listen to ChatGPT instead of going to a professional.
 

I think the seatbelt analogy is quite good, though. Why do we mandate that people use seatbelts? Because they are demonstrated to reduce the number of deaths and improve the quality of life of survivors, yet people didn't wear them based on communication alone, so indeed the law was established to protect people against their own judgement (but not promoted as such). Representatives can sometimes make better-informed choices than the less informed majority, or a minority, would make.
Yeah I think the analogy gets the basics right; it is similar in kind, just different in the specifics.

Back when I was trying to justify COVID policies in my community, I used seat belts as an example of why safety regulations could be useful. It helped in some cases, but many people responded by saying seat belt laws were exactly the kind of government overreach they found onerous.
 

Well, laws protecting people against themselves are rarely popular with every single person (if they were, maybe a law wouldn't be necessary). That's why some think that a representative government is better than a direct democracy: it's easier to inform a small circle of lawmakers than the whole population. I don't think people reject the seatbelt based on all available information; they often tend to downplay the risks of driving without it (or they think they are good drivers, which isn't the point, since the seatbelt protects you against all the other bad drivers), and so on.

The goal of the government is to find the best balance between individual liberty, healthcare concerns, cost... and popularity (in democracies).
 

There’s a reason why diagnostic programs are still significantly less accurate than living MDs.
That's not quite as clear as you might think. There have been a few studies that show comparable or better accuracy from LLMs. Lots of caveats, especially since the papers have mostly used synthetic data, but here are some interesting bits of info:
  • Patient records are growing massively. One health system reports 20% of their patients have medical records longer than Moby Dick. An LLM can read that and identify relevant details in 1 minute. A doctor cannot do so. So, if you arrive at an ED, and a doctor who does not know you has to review your records, it may be highly valuable to ask the LLM to look for correlations between your records and your symptoms.
  • Open Evidence -- many, many doctors feel this significantly helps them in their diagnosis. I don't have access so I cannot test it (I'm a doctor of statistics, working in the medical field), but the weight of opinion seems to be that it really does significantly help a doctor make a diagnosis.
  • In the US and EU, we have about 800 or 900 doctors for every million people. Other places in the world have 4. In many places in the world, an LLM that is only 95% as good as a doctor is a fantastic advance over no doctor. And in the US, recent legislation will make millions unable to see a doctor, so we're effectively moving into the same space -- a choice between an LLM diagnosis or no diagnosis.
People are just starting to investigate this space now. But my personal POV is that if you do not have medical access, LLMs are significantly better than just web searching. If you do have medical access, LLMs assist doctors by (i) summarizing large volumes of data for specific goals in short times (ii) noticing issues outside a doctor's core specialty (iii) ensuring that standard stuff has been done.

LLMs seem good at summarizing and detecting relevant information. They are also pretty good at presenting information in a coherent report. They will absolutely make mistakes, just like in this thread's title. But weigh that against the number of times a human lawyer makes a mistake. We haven't seen decent studies yet, but I am not convinced that the rates are very different.
 
