Judge decides case based on AI-hallucinated case law

Something being true doesn't stop it from being condescending.
Condescension implies a patronizing sense of superiority. I'm not claiming to be superior, nor that they are inferior.

I’m saying that the average layperson doesn’t have the training to evaluate medical treatment claims accurately, and part of that lack is not having the necessary vocabulary. For example, the concept of comorbidity isn’t that difficult: it’s just the existence, in a particular patient, of multiple afflictions capable of harming or killing them.

But it was a major point of distrust with the COVID-era fatality reports. People claimed that the people tracking COVID fatalities were nefariously including people who died from other causes. Even when I tried to explain it in simple terms, some couldn’t accept that someone died because of multiple factors. For them, every person died because of a single cause.
 


I would say that if you eliminate the “because of new information” language, you’re going to INCREASE the odds of accusations of lying sticking.

Should the experts take more pains to avoid jargon? Absolutely: clarity is key to messaging. (That lesson was drummed into me by my Wills & Estates prof, Stanley Johanson.) But you still have to let the people know you’re not just making things up; that there are REASONS behind the changes in advice.

What I noticed during Covid, and it may not have happened everywhere, is that epidemiology experts were put directly in the limelight. The problem is that epidemiology experts aren't necessarily communication experts, so some amount of imperfect communication was unavoidable. At some point, they were also asked about things outside their field (for example, "OK, as an epidemiologist you recommend confining people at home, but what about the economic impact of the measure?"), and it's difficult not to answer a journalist in that situation (even if they knew they should have deferred to other people for that answer). And politicians were rather happy to have experts take responsibility for unpopular measures by saying, "Look, it's unavoidable, the experts say so."
 

I have encountered many extremely knowledgeable people who were bad communicators. Some were in fields that demanded some skill in communication.

Add to that being naive and/or unskilled at politics (in general, not just the governmental version of it) and you get people who are very prone to stepping on societal land mines.

So when you add, as you correctly point out, the willingness of politicians to shove others into those minefields…🤷🏾‍♂️

Bringing us back to the case at hand, we are running, not tarrying, down a similar path. We have faceless, expressionless LLMs giving answers in all kinds of fields as a live beta test. Some are “dressed” as advisors or as musicians. Others seem like virtual buddies. Each harbors hidden dangers, and it is plain from their haphazard way of responding that they’re not experts.

This beta test is an early encounter in a dangerous forest, and it doesn’t take an oracle to see that they need refinement before they’re suitable for use…if they ever will be.
 

This beta test is an early encounter in a dangerous forest, and it doesn’t take an oracle to see that they need refinement before they’re suitable for use…if they ever will be.

I think that's where the rift between our positions lies. You consider them to be unsuitable for use. I consider them to be suitable for use, in the way magic 8-balls are: they don't claim to be something one can rely on for any decision, but they can help sometimes. I'd even say that I get more use out of LLMs than out of magic 8-balls. Translating a sentence is often context-dependent, and LLMs, while prone to errors, tend to find the correct way to express things better than dictionaries do. (I am pretty sure I'd convey my point better here if an LLM were integrated into the board, so I could type in my native language and have it automatically translated into English.) But I know full well not to trust them for anything important.
 


You are okay with doctors and judges making decisions with a magic 8-ball!?
No. A doctor or lawyer using a general purpose LLM or a magic 8-ball should be punished the same way. It is not an appropriate tool for professional usage, yet I don't think we should ban magic 8-balls.

I am OK with a doctor or judge making decisions using a professional AI dedicated to their field, since such systems are closer to dedicated databases than to LLMs in how they work.
 

No. A doctor or lawyer using a general purpose LLM or a magic 8-ball should be punished the same way. It is not an appropriate tool for professional usage, yet I don't think we should ban magic 8-balls.
Maybe because we have never caught a judge making decisions based on one? (and how many judges haven't been caught?)

Everyone is innocent until it is randomly decided that they are guilty.

Who decides who gets punished? Judges. So you are using a magic 8-ball to select punishments for using a magic 8-ball.
 

Maybe because we have never caught a judge making decisions based on one?

It is very possible that most court rulings are decided by a magic 8-ball, because all judges are lazy and just roll a die to determine whether the person is guilty (applying appropriate modifiers to the roll, like "do I like the suspect's physical appearance?"). They probably got their diplomas after a teacher randomly threw the papers down a staircase and graded each one based on the step it landed on.
 

It is very possible that most court rulings are decided by a magic 8-ball, because all judges are lazy and just roll a die to determine whether the person is guilty. They probably got their diplomas after a teacher randomly threw the papers down a staircase and graded each one based on the step it landed on.
You say that like you think it isn't true! Although I suspect more bribery than randomness went into those diplomas.

It kind of explains why you think this isn't a bad thing, though: you believe humans are inherently honest. "All is for the best in the best of all possible worlds."

It's not a logical position, even if almost everyone was honest*. You still have to guard against the potential that a person might be a bad actor.


*I know for a fact that I'm not always entirely honest**, so I can be 100% certain that this is not true.

**I cheated playing Monopoly against my grandchildren for a start.
 

You say that like you think it isn't true! Although I suspect more bribery than randomness went into those diplomas.

EDIT: removed since obviously most people were confused by the wording.

It's not a logical position, even if almost everyone was honest*. You still have to guard against the potential that a person might be a bad actor.

If I were convinced that everyone was honest, I wouldn't call for sanctions (which imply checking for problems: in order to punish those responsible, you need to detect them).

Here, I say that most people are honest (if anything, out of self-interest: seeing a doctor barred from practice for rolling an 8-ball to prescribe drugs, accepting an iPhone from a drug company, or handing out unnecessary sick leave would make other doctors wary of doing the same), and that we need failsafes to detect the ones who aren't and punish them appropriately. Failsafes like an appeal procedure, or having several judges decide together whenever important matters are at stake, go far to prevent problems.
 
