Judge decides case based on AI-hallucinated case law

With regard to the topic at hand, I think general-purpose LLMs accessible through a website should be able to discuss legal or medical topics, with appropriate warnings about accuracy. Not training them on this kind of material may only lead to more hallucinations, won't improve their accuracy on the topic over time, and won't allow a lot of useful explanation.
There are at least 50 afflictions humans suffer that can cause “flu-like symptoms”, and they vary greatly in rarity. Some have subvarieties with different severities.

Some are viral. Some are bacterial. Some are parasitic. Some are fungal. Some are immune diseases. Some are syndromes.

Some are endemic or seasonal. Some are restricted to certain areas…but could potentially escape those confines with the right vector.

Some are usually only a short-term or long-term inconvenience. Some can kill within 24 hours or less. But most are survivable within a certain treatment window.

It is unlikely that someone with one of the more exotic afflictions consulting ChatGPT or similar nonspecialized AI could actually distinguish between it and a nasty cold, simply because they don’t know the right questions to ask, or lack the specialized knowledge to fully understand an answer the program might give.

I know a fair amount about a bunch of these illnesses, but not enough that I’d trust an AI for a diagnosis.
 


Intuition is the most important skill for diagnosing these. And that is acquired through experience.
 

"We ought not to care about whether people are insulted when making public health regulations"?
This is my general viewpoint, because facts don’t care about feelings.

Mary Mallon was insulted by doctors’ orders and those of NY public health services, and thus refused to abide by them. As a one-woman disease vector, she earned the nickname “Typhoid Mary” and a one-way trip into permanent quarantine.
That is why I quoted your phrasing. I'm speaking about the justification for the law, not the text of the law itself. When we discuss whether we ought to have such a law, the justification offered is that people cannot evaluate medical claims accurately.
No, we don’t. The justification offered is that the regulation will “improve outcomes in cases of _________” or “reduce instances of _______ by N%”, not “we have to protect the uneducated citizens from this danger”.

When the US mandated using seatbelts in most passenger vehicles, people complained. But nowhere in the record or minutes of the legislation will you find discussion about how average people don’t understand the risks. Certainly, there were studies that supported the rule, but they were not written in terms of the Average Joe’s perceptions.

The same goes for our rules on tobacco and alcohol sales. We have age limits for purchases and warning labels on the products (and ours are tamer than in some countries). But the core warning was framed as “The Surgeon General has determined that _______ is harmful to your health,” not as a claim that people are too stupid to understand.
 


I know a fair amount about a bunch of these illnesses, but not enough that I’d trust an AI for a diagnosis.

Sure. Nobody is proposing, as far as I know, to trust a general-purpose LLM with a diagnosis. However, if I wanted to know about a few illnesses that cause flu-like symptoms (not because I want a diagnosis, but because I want to read about some exotic diseases out of boredom, or to outline an adventure in which someone's cough turns out to be deadly), I'd prefer the AI to give me the most accurate information it can, rather than either hallucinating that flu-like symptoms are an early warning sign of transforming into a Deep One or simply telling me to go see a doctor.

There’s a reason why diagnostic programs are still significantly less accurate than living MDs.

Dedicated diagnosis programs? Not so sure. There are independent studies that seem to show otherwise. I obviously can't assess the standing of the journal it is published in, but it's part of a series of reports showing, roughly, the same trend.


The paper was recently presented at the annual conference of the American College of Physicians (ACP) and published in the journal Annals of Internal Medicine under the title “Comparison of initial artificial intelligence (AI) and final physician recommendations in AI-assisted virtual urgent care visits.”

The compiled ratings led to compelling conclusions: AI recommendations were rated as optimal in 77% of cases, compared to only 67% of the physicians’ decisions; at the other end of the scale, AI recommendations were rated as potentially harmful in a smaller portion of cases than physicians’ decisions (2.8% versus 4.6%). In 68% of the cases, the AI and the physician received the same score; in 21% of cases, the algorithm scored higher than the physician; and in 11% of cases, the physician’s decision was considered better.
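For anyone curious how those three sets of figures fit together, here is a minimal sketch (toy ratings on an assumed 3-point harmful/adequate/optimal scale, invented for illustration and not the study's data) of how paired per-case ratings get tallied into exactly those kinds of percentages:

```python
# Illustrative only: toy paired quality ratings for the same visits,
# on an assumed scale of 1 = potentially harmful, 2 = adequate, 3 = optimal.
# These numbers are made up; they are not the study's data.
paired_ratings = [  # (ai_rating, physician_rating) per case
    (3, 3), (3, 2), (2, 3), (3, 1), (2, 2),
    (3, 3), (1, 2), (3, 3), (2, 2), (3, 2),
]

n = len(paired_ratings)
ai_optimal = sum(ai == 3 for ai, _ in paired_ratings) / n
md_optimal = sum(md == 3 for _, md in paired_ratings) / n
ai_harmful = sum(ai == 1 for ai, _ in paired_ratings) / n
md_harmful = sum(md == 1 for _, md in paired_ratings) / n
same_score = sum(ai == md for ai, md in paired_ratings) / n
ai_better = sum(ai > md for ai, md in paired_ratings) / n
md_better = sum(ai < md for ai, md in paired_ratings) / n

print(f"AI optimal: {ai_optimal:.0%} vs physician optimal: {md_optimal:.0%}")
print(f"AI potentially harmful: {ai_harmful:.0%} vs physician: {md_harmful:.0%}")
print(f"Same score: {same_score:.0%}, AI better: {ai_better:.0%}, physician better: {md_better:.0%}")
```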

At some point, a relevant exam for becoming a doctor might be to beat a dedicated AI at its job. In the meantime, results like these should drive us to pour more effort into perfecting AI.
 

So, while taking a shower, I thought about WebMD's Symptom Checker and all the jokes about it basically saying you're going to die, and thought I'd take a look at it:

This tool does not provide medical advice.
See additional information

Then further down I see:

This tool does not provide medical advice. It is intended for informational purposes only. This tool may leverage certain generative artificial intelligence tools to generate results, and is not a substitute for professional medical advice, diagnosis or treatment. Never ignore professional medical advice in seeking treatment because of something you have read on the WebMD Site.
 

This is incorrect.

No. That was absolutely correct. I have repeatedly (and repeatedly) stated that this is a complicated and nuanced issue. I have been the one who has raised the issue of how different jurisdictions matter- for example, not just different countries, but within different countries. I assume that, because of a lack of familiarity, you probably just passed over my statement regarding the issues involving regulating legal practice, which is almost impossible to summarize in America because we have fifty* separate sovereign systems (the states) along with a separate sovereign system (the federal government), and there is an interplay of how those different laws, rules, and regulations work. It's why something as simple as "What are the rules regulating practice in a federal court?" isn't amenable to a quick and easy answer.

*Again, simplification. Actually it's more than 50 due to other concerns.

I was not the one making a universal statement of how product liability law works- you were. I was providing you an example of why that is incorrect. Moreover, you will have to excuse me if I find your statements regarding other jurisdictions somewhat curious; it has been a while since I have looked into it, but I recall learning that Germany (like most countries) does have a legal regime regarding reasonably foreseeable use. Moreover, this is an EU issue- which was in the news recently because, at the end of last year, the EU's new product liability directive came into force (replacing the one that had been in force for three or four decades) and maintained a criterion for assessing a product's defectiveness that includes the reasonably foreseeable use of the product- not just the intended use.

I had assumed you would know that, given that the new EU directive was specifically drafted to supplement the old directive with respect to new technologies, including, but not limited to, AI (see, e.g., the new criterion about products that can learn or acquire features after entering service).

Again, I do not think that this conversation is productive. I have repeatedly stated that I do not think you are conversant in the details of what I am discussing, and that's fine. You are still entitled to your opinions on the matter, and they are as valid as mine. I do ask that you stop telling me that I am wrong about something I do happen to understand. Good?
 

And that's before we get into Louisiana and its love of all things French.
 



I may once have done a skit with friends in law school that had, inter alia, a bit built around the civil law (Louisiana/Continental Europe) / civil law (the law that regulates non-criminal rights and duties) distinction ... sort of like a "Who's on First."

Non, je ne regrette rien. ("No, I regret nothing.")


.....Torts? They're at my favorite bakery! Lex Loci Delicious!
 
