Judge decides case based on AI-hallucinated case law

I saw an example recently of Google's AI Summary assistant saying that a steamer or iron could be used to take the wrinkles off of men's private parts.

And you can easily see how this happens. Text-based generative AI is mostly a word-association machine. Steamers and irons are associated with wrinkles. You find wrinkles in fabric and...
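
Purely as a toy illustration of that word-association point, here is a minimal sketch of a bigram "model" that picks the next word from raw co-occurrence counts. The corpus, the query, and everything else below are made up for illustration; real generative models are vastly more sophisticated, but the failure mode is similar in spirit:

```python
# Toy bigram model: pick the next word purely by how often it followed
# the previous word in the (made-up) training text.
from collections import Counter, defaultdict

corpus = (
    "a steamer removes wrinkles from fabric . "
    "an iron removes wrinkles from a shirt . "
    "aging skin develops wrinkles ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# Fabric-care phrasing dominates the counts, so "wrinkles" pulls the
# model toward steamers and irons no matter whose wrinkles they are.
print(most_likely_next("wrinkles"))  # -> "from"
```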
 

If that’s a true and complete summary, then this is a prime example of why people don’t trust the legal system.

It's as true a summary as I can give without somehow finding and posting the decision. I was associated with a couple of motorcycle magazines at the time and got the info as it came in.
 

Just had a horrible tangential thought:

We know that some AIs are incorporating the errors of others because those errors were included in their training data. GIGO.

If law-specific AIs are being trained in a similar fashion, what’s to stop them from incorporating erroneous, invalidated, distinguished or overturned decisions in their training processes? Is someone going to weed that stuff out beforehand?

How will they handle bad cases that nonetheless need to be included for contextual purposes?

How will they handle cases involving AI hallucinations?
 

Just had a horrible tangential thought:

We know that some AIs are incorporating the errors of others because those errors were included in their training data. GIGO.

If law-specific AIs are being trained in a similar fashion, what’s to stop them from incorporating erroneous, invalidated, distinguished or overturned decisions in their training processes? Is someone going to weed that stuff out beforehand?

LexisNexis and Dalloz are two reputable publishers of legal text databases. I don't know their internal operations, but I'd guess they have trained (or will train, respectively) an AI helper to search those databases, which are already trusted and hand-reviewed by experts. And the databases already include decisions that were overturned, but those are identified as such, along with the whole chain of decisions (initial ruling, appeal, revisions...)
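
To make that concrete, here is a minimal sketch of how a curated database like that could keep overturned decisions without letting them masquerade as good law. The record layout, the status flags, the case names, and the search function are all hypothetical (I have no knowledge of LexisNexis's or Dalloz's actual systems):

```python
# Hypothetical curated case database where every decision carries a
# validity status and its procedural history, so retrieval can surface
# the status instead of silently treating an overturned ruling as good law.
from dataclasses import dataclass, field

@dataclass
class Decision:
    citation: str
    summary: str
    status: str  # hypothetical flags: "good law", "overturned", "distinguished"
    history: list[str] = field(default_factory=list)  # initial ruling, appeal, revisions...
    url: str = ""

DATABASE = [
    Decision(
        citation="Doe v. Roe (1998)",  # placeholder case, not a real citation
        summary="Liability for defective product warnings ...",
        status="overturned",
        history=["trial ruling 1998", "reversed on appeal 2001"],
        url="https://example.invalid/doe-v-roe",
    ),
    Decision(
        citation="Smith v. Jones (2010)",  # placeholder case, not a real citation
        summary="Duty of care owed by service providers ...",
        status="good law",
        history=["trial ruling 2010"],
        url="https://example.invalid/smith-v-jones",
    ),
]

def search(query: str) -> list[Decision]:
    """Naive keyword match, standing in for the real search engine."""
    return [d for d in DATABASE if query.lower() in d.summary.lower()]

# The helper cites only what it retrieved, flags anything that is no
# longer good law, and always links the source so the user can verify
# the decision actually says what the helper claims it says.
for d in search("liability"):
    warning = "" if d.status == "good law" else f" [CAUTION: {d.status}]"
    print(f"{d.citation}{warning} -> {d.url}")
```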

How will they handle cases involving AI hallucinations?

Since the helper is said to provide the source alongside the analysis, I guess they expect (and contractually mandate) the user to click the link to the decision and ensure it actually says what the AI thinks it says.
 

It's as true a summary as I can give without somehow finding and posting the decision. I was associated with a couple of motorcycle magazines at the time and got the info as it came in.


I fully trust that you are recounting what you heard accurately. I've also seen many people do something similar with the infamous McDonald's hot coffee case, but that one was a lot more nuanced than the average Joe realizes. That particular McDonald's overheated the coffee well beyond McDonald's own recommendations because that specific store believed people wanted their coffee still hot when they got to work.
 

I can't explain why you took issue - you'd be the best person to identify why you did, wouldn't you?

The point was that:

1. We notice that sometimes, AI can give bad legal advice.
2. A person said what I understood as "AI should be prevented from giving legal advice because bad legal advice, unlike bad hairstyle advice, can be catastrophic".
3. I said "We do ban a lot of speech, so why not ban giving bad legal advice altogether? That way the goal of protecting people from bad advice would be met, in the case of AI and in the case of the many, many other sources of bad legal advice."

Note that I have no problem with making the company operating a service that offers legal advice liable for the bad advice it gives. Especially if it sells it.

headshake

Look, the problem with people just throwing around terms and metaphors is that there is an existing superstructure of ... laws and policies and concepts ... that means these metaphors and analogies don't work. At all. And ignoring the fact that different countries operate differently doesn't resolve the issue- it exacerbates it.

Very (VERY) briefly.

The United States has a system of dual sovereignty: states and the federal government. The licensing of attorneys is done by the states (except when it's not, see, e.g., the patent bar). In order to practice law, you must have a license. But within this country (US), there is also robust protection for freedom of speech. So the ability to opine, even incorrectly, about the law is protected (absent, again, some caveats). So every jurisdiction (state) will have different rules about what constitutes "the practice of law," rules that can go up to, but not exceed, the protections of the First Amendment. If someone practices law without a license, that's considered a violation- a UPL (unauthorized practice of law). (There are further distinctions about the authority of the bar and the law to regulate lawyers who engage in UPL, because that's a thing, and state laws vary.)

The exact contours of what constitutes advice and what constitutes a UPL can vary- for example, generally, a non-lawyer can sell a book that contains general legal information, including legal forms. But it is always the case that to hold yourself out as an attorney, even impliedly, would constitute a UPL. It is also unlawful in most jurisdictions for a corporate entity to perform legal services even if those services are performed by a person, because corporations are not people and only a natural person can perform legal services.

And so on. And these issues are distinct from those in the medical profession (in the United States) ... and those are distinct from defamation liability ... and those are distinct from the issues that would occur in general products liability ... which are distinct from, inter alia, the possible issues that might arise if these are considered a tool that is used in the medical field (which has to go through an FDA process) ... and so on.

In other words, there are a lot of very distinct issues, and we can't just say, "It's like someone giving advice and you act on it." Because it isn't- it's a product, made by a corporate entity (usually) for a mass market, and the different ways in which it is used will have different frames in which to analyse it.
 

I have no problem making the manufacturer liable through the regular, common means of liability. But the idea that was discussed was preventing AI from giving legal advice, which isn't making the manufacturer liable; it's banning the tool altogether.

So, here's an issue we have to deal with.

All technologies are expressed through specific implementations. What we refer to when we speak about a technology is actually a generalization, based off of however many specific implementations we are familiar with, and however deeply we understand those implementations.

Speaking about the generalization as if we can disregard the context is asking for trouble, confusion and misunderstanding, because those details actually matter.
 

Just had a horrible tangential thought:

We know that some AIs are incorporating the errors of others because those errors were included in their training data. GIGO.

If law-specific AIs are being trained in a similar fashion, what’s to stop them from incorporating erroneous, invalidated, distinguished or overturned decisions in their training processes? Is someone going to weed that stuff out beforehand?

How will they handle bad cases that nonetheless need to be included for contextual purposes?

How will they handle cases involving AI hallucinations?

This is a thought those of us in IT have already had, about a variety of subjects.
 

I fully trust that you are recounting what you heard accurately. I've also seen many people do something similar with the infamous McDonald's hot coffee case, but that one was a lot more nuanced than the average Joe realizes. That particular McDonald's overheated the coffee well beyond McDonald's own recommendations because that specific store believed people wanted their coffee still hot when they got to work.

Also, a point that laymen often miss is that McDonald's had already been put on notice that they were acting against regulations.

EDIT - I should also note that it wasn't even the first such lawsuit that they lost. It was simply the most publicized.
 

headshake

Look, the problem with people just throwing around terms and metaphors is that there is an existing superstructure of ... laws and policies and concepts ... that means these metaphors and analogies don't work. At all. And ignoring the fact that different countries operate differently doesn't resolve the issue- it exacerbates it.

I am not ignoring that fact. Never in this thread have we limited our reasoning to the context of the United States. I think that acknowledging that countries work differently is exactly why I said "we probably won't find a consensus".

Very (VERY) briefly.

The United States has a system of dual sovereignty: states and the federal government. The licensing of attorneys is done by the states (except when it's not, see, e.g., the patent bar). In order to practice law, you must have a license. But within this country (US), there is also robust protection for freedom of speech. So the ability to opine, even incorrectly, about the law is protected (absent, again, some caveats). So every jurisdiction (state) will have different rules about what constitutes "the practice of law," rules that can go up to, but not exceed, the protections of the First Amendment. If someone practices law without a license, that's considered a violation- a UPL (unauthorized practice of law).

I appreciate your explanation (really, no sarcasm). But the argument I responded to wasn't "commercial companies operating an AI product that gives bad legal advice should be prevented from doing so in the US". I wouldn't have reacted to that. It was "we (generally, worldwide) should ban AI (the technology in general, not specifically corporations) from providing legal advice (in general, again, not merely 'by pretending to be a lawyer, even impliedly')", since the ban was suggested as an answer to the idea that, with proper warnings and encouragement to consult other sources, AI could do some useful popularization of legal texts and decisions.
 
