If you think I am talking about the minutiae of legislation or the text of surgeon general warnings, you have not understood me. I'm talking about the topics raised when discussing legislation. We are having such a discussion right now, and the arguments being offered are about the ability of people to evaluate medical claims accurately.
I think the seatbelt analogy is quite good, though. Why do we mandate that people wear seatbelts? Because seatbelts are demonstrated to reduce deaths and improve the quality of life of survivors, yet people didn't wear them based on communication alone, so the law was established to protect people against their own judgement (though it wasn't promoted as such). Representatives can sometimes make better-informed choices than the less informed majority, or a minority, would make.
Here, the argument would be that AI should be prevented from giving medical advice because people would use it to avoid going to a doctor, even if they are told they shouldn't rely on a general-purpose LLM for health advice. I think the two lines of reasoning are quite analogous.
I don't agree that this is necessary, but I can see the analogy.
I don't agree it would be necessary because:
a) we have had ample opportunity to block people from bad medical advice, yet we generally don't, even in countries with heavy speech regulation, unless the speaker is trying to pass their words off as professional medical advice. So the risk, while real, wasn't deemed large enough to warrant a regulatory reaction (i.e., I can access WebMD, while I can't access some porn websites or Nazi-propaganda websites, so our legislators evidently thought it wasn't necessary to block medical discussion that doesn't dress itself up as medical authority -- if at some point thousands of people start dying because they skipped the doctor, thinking a general AI's opinion was enough, and missed a serious disease, then this position might change).
b) preventing the AI from discussing medical topics might be too broad (outside of medical diagnosis, the topic can come up in idle conversation, which is exactly what LLM chatbots are promoted as doing)
c) in some cases, people will tell things to a chatbot (think of the woman who had questions about conception she didn't dare ask her friends), and they might benefit from an answer that alarms them: "I am not a doctor, I am not qualified to give health advice, and you should really double-check everything I tell you, but the symptoms you typed are those of a heart attack, call 911 immediately." That might help, whereas the standard refusal of "I am not allowed to speak about health, go see a doctor" wouldn't have the same effect.
d) where I live, the cost of a medical consultation is zero, so there is very little barrier to seeing a doctor (basically just the time spent waiting for your turn).
e) going to see a doctor is still necessary to get drugs, so if the AI says "it's only a common cold" (and you actually have cancer), you still need an appointment to get any treatment, and a human doctor will assess your condition at that point.
So the bad outcome only happens when people disregard the warning, the AI makes an error and downplays the risk, and the patient has a dangerous disease they chose to "suffer through" instead of getting a mostly free cure. I think we're under the threshold for public intervention -- though I can see how the cost of doctor + drugs could push uninsured people to listen to ChatGPT instead of going to a professional.