Judge decides case based on AI-hallucinated case law

I'm not anti-mitigation by any means. But if you can't even get folks to stop eating cheeseburgers...at a certain point it becomes clear the chicken has left the coop, with respect to the environmental criticisms.

You say that as if changing the dietary choices of an entire culture should be easier than changing the energy use patterns of a handful of companies.

I don't agree with that assessment.
 


Well, to be optimistic, maybe humanity will find a way to stupid itself to extinction that is less damaging to other species, such as deporting the people who harvest the food and then starving to death.

Please keep the general politics commentary off these boards, per the rules. Thanks.
 

Now, your lawyer handles your life, liberty, and financial and legal well-being. If they are using an untested tool, shouldn't that give you pause?
I certainly wouldn't trust a lawyer who uses tools he doesn't understand, whose limitations he doesn't seem to know, and whose end results he doesn't review. For example, if he takes a filing form on his computer but doesn't know how to use a word processor correctly and doesn't replace "CLIENT'S NAME" with my name, so that my filing is denied, I'd be miffed. Yet I wouldn't blame Microsoft for selling Word out in the open; I'd blame the lawyer for not learning how to use a tool that's part of his job.

There is no such thing as a life with zero risk. However, in areas of particularly significant consequences, we typically do what we can to reduce and manage that risk.

Yes. That's why LexisNexis or Dalloz are selling dedicated AI tools for lawyers and judges: because ChatGPT, a general-purpose tool, isn't working well enough for this precise task.

We build purpose-made tools, and we test the heck out of them. We keep redesigning and refining until the risks come down to something like a manageable level, the remaining risks are known, and can be communicated and managed. And, the seller accepts some liability if the tool causes harm.

That's exactly what is happening. A general-purpose tool came out, unready to handle any case; then it evolved so it could handle some cases (casual conversation, making pretty elf pictures...), and legal database publishers are refining it for professional use.

That hasn't happened with AI tools. They throw ChatGPT out there, and folks use it for whatever they darned well feel like, and risk to third parties be darned, hey what?

We were talking about AI as a technology. If a judge uses Hammurabi's code of law to decide cases instead of the correct statutes, the problem lies with him not using the right book, not with Hammurabi's code or with book technology as a whole.

ChatGPT has a line at the bottom saying that the tool can make mistakes and that all of its output should be verified. The lawyer misused a tool and used it for something it is not fit for. That disqualifies the lawyer, not necessarily the specific tool (which can be fit for its intended use cases), and it has absolutely no bearing on the technology as a whole.
 

Thank you for a lead in for another direction to consider...

Does anyone here think that devices used in medical practice should be thrown out into the market with no testing as to their efficacy or safety? Like, if someone built a new heart-lung machine to use during heart transplants, you'd want that thoroughly tested before it got used on you, right?

Now, your lawyer handles your life, liberty, and financial and legal well-being. If they are using an untested tool, shouldn't that give you pause?

There is no such thing as a life with zero risk. However, in areas of particularly significant consequences, we typically do what we can to reduce and manage that risk. We build purpose-made tools, and we test the heck out of them. We keep redesigning and refining until the risks come down to something like a manageable level, the remaining risks are known, and can be communicated and managed. And, the seller accepts some liability if the tool causes harm.

That hasn't happened with AI tools. They throw ChatGPT out there, and folks use it for whatever they darned well feel like, and risk to third parties be darned, hey what?
I don't think this is a great analogy. We aren't relying on the AI the same way we rely on the medical device--we trust the practitioner to use it appropriately.

Instead--suppose a doctor buys a vital-sign tracking device from a third party and asks you to wear it. It will record your heart rate, blood pressure, etc. All of the data it gives is trash unless it is properly calibrated. The doctor screws up the calibration, trusts the data anyway, and then treats you improperly based on it.

In this case, the failure is on the doctor, not the tool. The tool would work fine if the doctor used it correctly. I don't think the manufacturer should be liable in this case.

AI hallucinations have been pretty well known since ChatGPT came out. If you're using it in your job and aren't checking for hallucinations...that's on you.
 


I don't think this is a great analogy. We aren't relying on the AI the same way we rely on the medical device--we trust the practitioner to use it appropriately.

We aren't?
Well, maybe you and I aren't.
But many are going to ChatGPT to get medical advice for themselves, their family members, and their pets.

AI hallucinations have been pretty well known since ChatGPT came out. If you're using it in your job and aren't checking for hallucinations...that's on you.

At what point of harm does a disclaimer at the bottom no longer serve? At what point is it no longer reasonable to just wave hands vaguely at it and say, "Well, they were misusing the tool!"? Do you have a line in mind for that? Do you have a calculus of how much personal or societal harm is acceptable?

Because, as seen above, we now see AI spouting anti-Semitic rhetoric. That's harmful. Should we take that as an acceptable level of harm? We should be okay with saying, "Well, folks shouldn't listen to Grok," and just letting it slide?
 

We aren't?
Well, maybe you and I aren't.
But many are going to ChatGPT to get medical advice for themselves, their family members, and their pets.

They are probably the same people who googled their symptoms, deduced that their light cough was a rare tropical disease they somehow caught in Finland, and clogged doctors' offices, or worse, who tried to treat their cancer by drinking carrot juice. We didn't ban the Internet, blame universities for designing it without failsafes, or blame governments for not requiring an Internet User's Licence.

There is a level of harm that makes something illegal (nuclear weapons? counterfeit money?). There are technologies that are mostly negative but are still allowed, as long as one passes some kind of test, because they're OK when used correctly (guns, cars, and prescription pills come to mind). And there are technologies with a few bad cases, like paracetamol (it can kill you), the Internet (you can become convinced that the Earth is flat), or social networks (you can join the jump-off-a-cliff challenge), and yet they are freely available.

Most technologies fall into the last situation. Camcorders are free to use, even if they expose you to the risk of having your sex tapes shared with your work colleagues. They don't even put a label on them saying that you probably shouldn't film your sexual organs and your head at the same time... ChatGPT is in the same situation: it can be detrimental when misused.

At what point of harm does a disclaimer at the bottom no longer serve? At what point is it no longer reasonable to just wave hands vaguely at it and say, "Well, they were misusing the tool!"? Do you have a line in mind for that? Do you have a calculus of how much personal or societal harm is acceptable?


I agree that there is a cutoff point. I don't think it's been reached for ChatGPT, since we don't do anything about many other technologies that can kill, severely harm, or bring detriment to people if misused. Oftentimes we don't even put labels on them (we don't for kitchen knives). We sell chainsaws, ovens, microwaves, fireworks... The bar is very high when it comes to restricting people's liberty to do something.

Will we all agree on where to draw the line? Probably not: I mentioned guns as a thing that is "mostly dangerous, yet regulated". You'll find people who say they should be unregulated, and people who say they should be forbidden. And we're talking about a tool specifically designed to kill people; that's its stated goal. So I don't think we'll find a consensus to ban a tool that is openly designed to chat with you, just because someone decided to ask it to do a job for which they had to pass very hard qualifications.


Because, as seen above, we now see AI spouting anti-Semitic rhetoric. That's harmful. Should we take that as an acceptable level of harm? We should be okay with saying, "Well, folks shouldn't listen to Grok," and just letting it slide?

That's another point where consensus won't be reached. In some countries, anti-Semitic rhetoric will be protected as free speech. In others, it'll land you in jail. There is a line to be drawn about allowing people to speak, and I don't think we'll ever reach a consensus. But what might happen is that an "acceptable speech filter" gets added to Grok, rather than banning AI technology. That's what's happening in China, where they apparently prevent models from speaking about Tiananmen, which they deem harmful.

Anti-Semitic books do exist. In some countries, Mein Kampf is forbidden to be printed. In others, it's available in regular libraries. In no country that I know of was printing press technology itself criticized, despite the extremely detrimental impact of this one specific use.

Edit: well, actually, the Church did try to ban the printing press because of the printing of Protestant books. So, admittedly, it's not exactly "in no country that I know of"; I meant "recently".
 


Thank you for a lead in for another direction to consider...

Does anyone here think that devices used in medical practice should be thrown out into the market with no testing as to their efficacy or safety? Like, if someone built a new heart-lung machine to use during heart transplants, you'd want that thoroughly tested before it got used on you, right?

Now, your lawyer handles your life, liberty, and financial and legal well-being. If they are using an untested tool, shouldn't that give you pause?

There is no such thing as a life with zero risk. However, in areas of particularly significant consequences, we typically do what we can to reduce and manage that risk. We build purpose-made tools, and we test the heck out of them. We keep redesigning and refining until the risks come down to something like a manageable level, the remaining risks are known, and can be communicated and managed. And, the seller accepts some liability if the tool causes harm.

That hasn't happened with AI tools. They throw ChatGPT out there, and folks use it for whatever they darned well feel like, and risk to third parties be darned, hey what?

Sure. I can agree that they should be tested. Do we all agree with that? Because I get the vibe that some want AI taken out back with a shotgun and put out of its misery.

The next question is how do you test them?
 

