Judge decides case based on AI-hallucinated case law

Rigor is Paramount: Your primary goal is to produce a complete and rigorously justified solution. Every step in your solution must be logically sound and clearly explained. A correct final answer derived from flawed or incomplete reasoning is considered a failure.
This, surprisingly enough, can be a problem. Consider the self-driving vehicle. Now, consider the moral dilemma presented in the I, Robot movie: the robot chooses to save the person it has a high chance of saving, rather than the person it has only a slim chance of saving. The trouble is, the person it saves is an adult, and the person it allows to die is a child. As a consequence, the rescued adult is burdened with guilt. The point is, sometimes the emotional decision is the correct one, not the logical one.

Now, getting back to medicine: logic is only as good as the information provided, and the person supplying the information is the patient, who is neither a medical professional nor a machine. Thus, the provided information is not objective. Most obviously, some people exaggerate pain levels, whilst others play them down. A good diagnostician needs to be able to gauge the character of the patient in order to correctly interpret the information they provide.
 


This, surprisingly enough, can be a problem. Consider the self-driving vehicle. Now, consider the moral dilemma presented in the I, Robot movie: the robot chooses to save the person it has a high chance of saving, rather than the person it has only a slim chance of saving. The trouble is, the person it saves is an adult, and the person it allows to die is a child. As a consequence, the rescued adult is burdened with guilt. The point is, sometimes the emotional decision is the correct one, not the logical one.

I am not sure all people would prefer to have died for the slim chance of saving someone they don't know. Their child? Sure. A child they know? Maybe. A completely random child? Not so sure. There have been extensive studies of this type of scenario, leading to the conclusion that people wouldn't want to buy a self-driving vehicle that would abide by the rules they say they prefer (in your scenario, a majority would say they prioritize the child when asked, but wouldn't want to be in the seat of a car that would jump off a cliff to avoid a child). I guess you're on the optimistic side of the spectrum.

I thought that was interesting for this audience, because it shows how different these prompts are compared to how many people are using LLMs. Their initial prompt is below (apologies, the format has some issues due to my copying from the PDF; see page 5).

Like most tools, the way to use it depends on the result you want. To have a chat buddy, you don't input the same instructions as you would to solve a math problem, get a sample of computer code, play a gamebook, or get a reference to something. Whenever I ask an LLM for information and ask it to back its answer with links to websites corroborating it, I get nearly no hallucinations (of course, I check the references).
 


Like most tools, the way to use it depends on the result you want. To have a chat buddy, you don't input the same instructions as you would to solve a math problem, get a sample of computer code, play a gamebook, or get a reference to something. Whenever I ask an LLM for information and ask it to back its answer with links to websites corroborating it, I get nearly no hallucinations (of course, I check the references).
Something as simple as adding "provide references," and then confirming the references yourself, would avoid the majority of the issues cited in this thread.
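A minimal sketch of that workflow, assuming nothing about any particular LLM API (the prompt wording and helper names here are illustrative, not from any library): build the prompt so it demands corroborating links, then mechanically pull the URLs out of the answer so you can check each one yourself.

```python
import re


def build_prompt(question: str) -> str:
    # Append an explicit instruction to cite checkable sources.
    return (
        f"{question}\n\n"
        "Back each factual claim with a link to a website that corroborates it. "
        "If you cannot find a corroborating source, say so explicitly."
    )


def extract_references(answer: str) -> list[str]:
    # Pull the URLs out of the model's answer so each one can be
    # opened and verified by hand (or fed to a link checker).
    return re.findall(r"https?://[^\s)\]]+", answer)


# Hypothetical model answer, just to show the extraction step:
answer = "The figure is corroborated here (https://example.com/census)."
print(extract_references(answer))  # → ['https://example.com/census']
```

The key part is not the regex but the habit: the reference list gives you something falsifiable to check, which is exactly what a bare, unsourced answer lacks.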
 

I am not sure all people would prefer to have died for the slim chance of saving someone they don't know. Their child? Sure. A child they know? Maybe. A completely random child? Not so sure. There have been extensive studies of this type of scenario, leading to the conclusion that people wouldn't want to buy a self-driving vehicle that would abide by the rules they say they prefer (in your scenario, a majority would say they prioritize the child when asked, but wouldn't want to be in the seat of a car that would jump off a cliff to avoid a child). I guess you're on the optimistic side of the spectrum.
And yet, in the real, non-test-scenario world, people die all the time while trying to save an unrelated child. Both first responders and civilians.
 



This, surprisingly enough, can be a problem. Consider the self-driving vehicle. Now, consider the moral dilemma presented in the I, Robot movie: the robot chooses to save the person it has a high chance of saving, rather than the person it has only a slim chance of saving. The trouble is, the person it saves is an adult, and the person it allows to die is a child. As a consequence, the rescued adult is burdened with guilt. The point is, sometimes the emotional decision is the correct one, not the logical one.

Now, getting back to medicine: logic is only as good as the information provided, and the person supplying the information is the patient, who is neither a medical professional nor a machine. Thus, the provided information is not objective. Most obviously, some people exaggerate pain levels, whilst others play them down. A good diagnostician needs to be able to gauge the character of the patient in order to correctly interpret the information they provide.
If it were running on modern AI, the robot would have tried to save a sunken tire with a smiley face drawn on it, because it misidentified it as a person and calculated a 113% chance of saving them.
 

And yet, in the real, non-test-scenario world, people die all the time while trying to save an unrelated child. Both first responders and civilians.

In most stories of heroic rescues I've heard of, it was one person intervening while a crowd of people passively did nothing, possibly cheering (or worse, using their smartphones to film). But I gladly accept that most people are going to sacrifice their lives selflessly for others, and that I just got my views distorted by inaccurate and gloomy news reports.


Washington Post said:
Fousseynou Samba Cissé became a French national hero after he climbed onto the ledge of a building and saved six people, including two babies, trapped by smoke. [...] A neighbor recorded the act of bravery and posted it online [...] “There were a lot of people down there watching them, filming them, screaming for help,”

This guy got a medal for his heroic, rather than average, action.

That or @Mannahnin is right.
 

In most stories of heroic rescues I've heard of, it was one person intervening while a crowd of people passively did nothing, possibly cheering (or worse, using their smartphones to film). But I gladly accept that most people are going to sacrifice their lives selflessly for others, and that I just got my views distorted by inaccurate and gloomy news reports.
This reminded me of the bystander effect (Wikipedia: Bystander effect).
 
