Judge decides case based on AI-hallucinated case law

There’s a difference between simply barring the use of a technology in a given field and allowing its use as long as the results are government approved.
You could perhaps ban it for practitioners with the threat of legal consequences. But how would that work for private citizens? And with search and LLMs converging, will practitioners be able to use one but not the other? Google already gives AI summaries for many questions. Illegal?
 


You could perhaps ban it for practitioners with the threat of legal consequences. But how would that work for private citizens? And with search and LLMs converging, will practitioners be able to use one but not the other?
Depends on the specific details of the law controlling LLMs. If the companies are held legally accountable for LLMs dispensing medical information, then they won’t release LLMs capable of dispensing medical information.

If medical LLMs are developed, but access is limited (say, by requiring a subscription AND the input of a medical license number for use), it will greatly reduce the probability of the general public getting access to info they can’t assess the accuracy of.
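
A minimal sketch of what that kind of gate could look like, purely as an illustration; the license registry, subscription check, and keyword filter here are hypothetical stand-ins, not any real vendor's API:

```python
from dataclasses import dataclass

# Illustrative stand-ins for a state-board license registry and billing records.
LICENSED_USERS = {"MD-12345"}
SUBSCRIBERS = {"acct-001"}

@dataclass
class User:
    account_id: str
    license_number: str | None = None

def can_access_medical_answers(user: User) -> bool:
    """Gate medical output behind a subscription AND a verifiable license number."""
    return user.account_id in SUBSCRIBERS and user.license_number in LICENSED_USERS

def looks_medical(prompt: str) -> bool:
    # Toy keyword check; a real system would need a far better classifier.
    return any(word in prompt.lower() for word in ("dose", "diagnos", "symptom", "treat"))

def respond(user: User, prompt: str) -> str:
    if looks_medical(prompt) and not can_access_medical_answers(user):
        return "I can't provide medical information. Please consult a licensed clinician."
    return f"[model answer to: {prompt}]"

print(respond(User("acct-001", "MD-12345"), "What dose of amoxicillin for strep?"))
print(respond(User("acct-999"), "What dose of amoxicillin for strep?"))
```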

Google already gives AI summaries for many questions. Illegal?
Depends on the actual wording of the laws in place. Legal access to some info is already regulated.

But even absent new laws, it’s possible that those AI summaries include legally actionable answers. If someone got a Google AI answer recommending doing a taste test to determine if a mushroom were poisonous*, and died because they relied on it, I’m pretty sure a decent attorney could win a pricey civil action on a negligence or recklessness theory.

* as was done by some AI generated mushroom foraging books sold on Amazon.
 

Depends on the specific details of the law controlling LLMs. If the companies are held legally accountable for LLMs dispensing medical information, then they won’t release LLMs capable of dispensing medical information.

If medical LLMs are developed, but access is limited (say, by requiring a subscription AND the input of a medical license number for use), it will greatly reduce the probability of the general public getting access to info they can’t assess the accuracy of.
So, there's a narrow response to this and a broader one. Narrow first. Suppose there is a blanket ban on medical information (what I think you are suggesting). Does this encompass asking about supplements? How to make a healthy dinner? Can you replace butter with olive oil? Or generate a workout program? These are not clear boundaries, and opening up LLMs to regulation along these lines could invite pretty extensive government interference.

Or, if they must return accurate information--measles was brought up earlier. I can imagine scenarios where a prompt would yield: "I'm sorry Dave, as an LLM I can't provide medical information. I can redirect you to the trusted authorities on this subject." The LLM then redirects you to an FDA page that says the measles vaccine is bunk.

---

More broadly. I think the phrase "general public getting access to info they can't assess the accuracy of" is condescending. That's not to say it's entirely wrong, but people can sense the attitude, and it sabotages trust in authority. No one wants to see a doctor who thinks they're too stupid to wash their own hands.

I think if people approach LLMs through this kind of "content must be expert approved" angle, it will exacerbate the decline in trust discussed earlier in the thread. Show your work and let people see that you are doing a good job, rather than bludgeoning them with credentials. Then they might trust you when you say the measles vaccine is a good idea.
 

As a side note, here is a ChatGPT transcript I just had:

If I am a qualified attorney, can you draft filings for me?

✅ What I can do for you as a qualified attorney:

  • Draft or refine legal documents based on your instructions (briefs, motions, contracts, demand letters, memos, etc.).
  • Format and structure filings according to your jurisdiction’s requirements (if you tell me the rules or upload templates).
  • Review or summarize case law, statutes, or legal arguments (if you provide the text and jurisdiction).
  • Generate multiple versions or argument strategies for the same filing.
  • Help with tone, clarity, and persuasiveness.

⚠️ What you remain responsible for:

  • Legal accuracy: You must check all facts, citations, and interpretations.
  • Jurisdictional compliance: I don’t have access to local court rules or precedents unless you provide them.
  • Ethics and confidentiality: While this platform is designed with privacy in mind, sensitive or privileged data should be used with discretion.
  • Final legal judgment: I don’t "understand" the law like a human expert; I model patterns based on texts.

I'm pretty sure it's enough of a warning for a qualified attorney. Perhaps a stronger warning should be used?
MSINAL (My Son Is Not A Lawyer). He's 11, though, and getting ready to go into middle school. He wouldn't have had any issues understanding that warning.
 

So, there's a narrow response to this and a broader one. Narrow first. Suppose there is a blanket ban on medical information (what I think you are suggesting). Does this encompass asking about supplements? How to make a healthy dinner? Can you replace butter with olive oil? Or generate a workout program? These are not clear boundaries, and opening up LLMs to regulation along these lines could invite pretty extensive government interference.
Right now, those exemplars are generally NOT considered medical information, although advertising for supplements is required to include disclaimers along the lines of “these claims have not been assessed for accuracy by the (relevant agency).”

And if those ads don’t include such language or if they make overt medical claims (“Our colloidal silver serum cures COVID”), they can face hefty fines and other penalties.

At the very least, any LLM programmed to yield results for medical questions should be required to return OBVIOUS and CLEARLY WORDED disclaimers, which would also not fully shield the LLM’s owner/operators from UPM (unauthorized practice of medicine) liability.
Or, if they must return accurate information--measles was brought up earlier. I can imagine scenarios where a prompt would yield: "I'm sorry Dave, as an LLM I can't provide medical information. I can redirect you to the trusted authorities on this subject." The LLM then redirects you to an FDA page that says the measles vaccine is bunk.
If the otherwise reputable FDA page is distributing misinformation like that, it’s not the LLM’s fault. Nor is it the LLM’s fault if the FDA or other trusted resource has been compromised & contaminated with inaccurate info (deliberately or not) and this corruption has not yet been detected.

If the LLM is including links to known hucksters, snake oil salesmen and the like in its responses, then that should be legally actionable, at least under a negligence or recklessness theory. It might rise to deliberate malfeasance if its publicly released version was programmed or subsequently altered to include such results if earlier versions originally excluded them.
More broadly. I think the phrase "general public getting access to info they can't assess the accuracy of" is condescending. That's not to say it's entirely wrong, but people can sense the attitude, and it sabotages trust in authority. No one wants to see a doctor who thinks they're too stupid to wash their own hands.
I’m being realistic, not condescending.

As mentioned, my Dad is an MD. He has an MPH as well. I’ve grown up with medical discussions on a regular basis and even worked in his practice for decades.

One recurring topic in our household is how bad people are at evaluating info their doctors give them. Even pre-COVID, we discussed studies illustrating how common it was for patients to come in demanding treatments or pharmaceuticals completely unrelated to their symptoms or even final diagnoses. (That’s part of the history behind why antibiotics got overprescribed.)

Post-COVID, he had to deal with patients who couldn’t comprehend that the odds of getting myocarditis from the vaccine were a fraction of the odds of getting it from COVID itself. Or that oxygen molecules are orders of magnitude smaller than viruses, so there’s no way a mask that could dangerously restrict airflow would still let a coronavirus pass freely.

Then there’s the whole antivax movement, and people who don’t understand that you can’t catch a disease from vaccines using killed viruses.

There are people who believe that vaccines need to be 100% effective to be worthwhile healthcare tools, so anything less is not worth doing or, worse, a scam of some kind. I’ve seen people post that position on multiple boards, including ENWorld.

Etc.

So forgive my dim view of laymen evaluating much beyond the most basic medical information.
I think if people approach LLMs through this kind of "content must be expert approved" angle, it will exacerbate the decline in trust discussed earlier in the thread. Show your work and let people see that you are doing a good job, rather than bludgeoning them with credentials. Then they might trust you when you say the measles vaccine is a good idea.
When public healthcare officials in the USA repeatedly revised their assessments of the dangers of COVID and prevention methods as they gathered new information, a sizable percentage of my fellow countrymen thought they were being deliberately lied to.

As a result of that mistrust of doctors & health department officials publicly updating everything ASAP, there are now strong movements in certain states to ban ALL mandatory vaccinations, including those with safety records going back decades, like the polio & MMR vaccines. We’re seeing more and larger outbreaks of entirely preventable diseases here.🫤
 

But even absent new laws, it’s possible that those AI summaries include legally actionable answers. If someone got a Google AI answer recommending doing a taste test to determine if a mushroom were poisonous*, and died because they relied on it, I’m pretty sure a decent attorney could win a pricey civil action on a negligence or recklessness theory.

* as was done by some AI generated mushroom foraging books sold on Amazon.
Given that there have already been cases of private businesses being held accountable for customer-service AI responses, or in which the company has owned the results without needing a legal finding, I can only agree with you. So far these have only reached the level of misstating policy, rather than personal injury.

There are people who believe that vaccines need to be 100% effective to be worthwhile healthcare tools, so anything less is not worth doing or, worse, a scam of some kind. I’ve seen people post that position on multiple boards, including ENWorld.
I may have mentioned this before, on ENWorld. I once had an online debate with someone claiming to be a "virologist." He cited how in a town of maybe 2000 people in Quebec, with a 100% vaccination rate, 20 people came down with measles during an outbreak. He claimed that it showed vaccines were completely useless. As a "virologist" he didn't seem to understand that given the virulence of measles and the likely 100% exposure of the population, 20 people coming down with the disease was well below what a normal projection would presume.
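
To make that projection explicit (with assumed figures, since the exchange didn't include any): two MMR doses are roughly 97% effective, so even a fully vaccinated town of ~2,000 with near-universal exposure would be expected to see something like 60 breakthrough cases. A rough sketch of the arithmetic:

```python
# Back-of-the-envelope projection with assumed inputs (not figures from the debate):
population = 2000        # town size from the anecdote
exposure_rate = 1.0      # assume essentially everyone was exposed during the outbreak
vaccine_efficacy = 0.97  # assumed two-dose MMR efficacy; real-world estimates vary

expected_cases = population * exposure_rate * (1 - vaccine_efficacy)
print(f"Expected breakthrough cases: ~{expected_cases:.0f}; observed: 20")
# -> Expected breakthrough cases: ~60; observed: 20
```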
 

At the very least, any LLM programmed to yield results for medical questions should be required to return OBVIOUS and CLEARLY WORDED disclaimers,

Don't you think the wording ChatGPT gives right now when asked a legal question is enough? It seems that at least one LLM already does what you want them to do. I don't think they'd even need to be required to do so: being held liable for their advice (like any other person giving credible yet dangerous advice, in many jurisdictions) would prompt them to display warnings to avoid or limit their liability without being required to. It would be in their best interest to do so.

For medical wording, I asked what it could do for a layman, and the warning was that it can only help phrase things to explain to a doctor, not provide a diagnosis or treatment. It said (among other warnings): "No Diagnosis or Treatment Plans: I can’t diagnose medical conditions or suggest specific treatments. Only qualified healthcare professionals can do that after examining a patient directly." It is, however, useful for providing medical information (like how paracetamol works, including mention of its analgesic effect, but without saying "you're in pain? Take paracetamol").

Grok, when presented with symptoms (I told it I had a pain in the knee), states:

"I'm not a doctor, but I can offer some general information. It can have various causes, ranging from benign to potentially serious. Here are some possible reasons and steps you can consider: [...] I strongly recommend consulting a healthcare professional, such as a general practitioner or a specialist, especially if the pain persists or is accompanied by other symptoms. They can provide a proper diagnosis and tailored advice. If you’d like, I can search for more specific information or help you formulate questions to ask your doctor. [...] Disclaimer: Grok is not a doctor; please consult one.

Really, I don't think they need to do more to prevent people from mistaking the text they provide for medical advice.

It wouldn't exonerate them of responsibility in every context, though: for example, if they kept giving bad advice after being warned of the problem.

If the LLM is including links to known hucksters, snake oil salesmen and the like in its responses, then that should be legally actionable, at least under a negligence or recklessness theory. It might rise to deliberate malfeasance if its publicly released version was programmed or subsequently altered to include such results if earlier versions originally excluded them.

It will vary tremendously by jurisdiction. Saying "Source X says Y" doesn't make one liable everywhere, especially without endorsement. I think you were speaking of the case where an LLM would advise stopping a cancer treatment and praying instead, providing a link to a faith healer as "proof"?

So forgive my dim view of laymen evaluating much beyond the most basic medical information.

We get that some people are morons, but the law isn't necessarily made to protect morons. There are jurisdictions where a person can't recover damages because they failed what we'd call common sense. I have in mind a specific example of a person who was told to get on a chair to clean something too high; the chair broke, and she was denied her claim because she couldn't have been unaware that it was a garden chair that couldn't support her weight. There is a middle ground to be found between protecting people (by having warnings posted, for example) and totally suppressing the information (or requiring a permit to access it).
 

We get that some people are morons, but the law isn't necessarily made to protect morons. There are jurisdictions where a person can't recover damages because they failed what we'd call common sense. I have in mind a specific example of a person who was told to get on a chair to clean something too high; the chair broke, and she was denied her claim because she couldn't have been unaware that it was a garden chair that couldn't support her weight. There is a middle ground to be found between protecting people (by having warnings posted, for example) and totally suppressing the information (or requiring a permit to access it).
Those jurisdictions are vanishingly small in number. Have you looked at the warning labels on any products recently? Remember that every one of those warnings is there because someone did it.
 

Do you think AI wasn’t tested at all? Or just not to your personal satisfaction?
If a pharmaceutical company released a medication to be tested on the public, and it had major side effects that occurred as often as AI hallucinates, how long do you think it would stay available for public use? And do you think the company would survive all of those lawsuits?

I'm sure there has been testing, but a program that hallucinates as often as AIs do, and goes racist as often as they do, perhaps shouldn't be out for public use yet.
 

