We aren't?
Well, maybe you and I aren't.
But many are going to ChatGPT to get medical advice for themselves, their family members, and their pets.
They are probably the same people who googled their symptoms, deduced that their light cough was a rare tropical disease they somehow caught in Finland, and clogged doctors' offices, or worse, who tried to treat their cancer by drinking carrot juice. We didn't ban the Internet, blame universities for designing it without failsafes, or blame governments for not requiring an Internet User's Licence.
There is a level of harm that makes something illegal (nuclear weapons? counterfeit money?). There are technologies that are mostly dangerous but are still allowed, as long as one passes some kind of test, because they're OK when used correctly (guns, cars, and prescription pills come to mind). And there are technologies with a few bad cases (paracetamol can kill you, the Internet can convince you the Earth is flat, social networks can get you into the jump-off-a-cliff challenge) that are nevertheless freely available.
Most technologies fall into the last category. Camcorders are free to use, even though they expose you to the risk of having your sex tapes shared with your work colleagues. They don't even come with a label saying you probably shouldn't film your sexual organs and your head at the same time... ChatGPT is in the same situation: it can be detrimental when misused.
At what point of harm does a disclaimer at the bottom no longer serve? At what point is it no longer reasonable to just wave hands vaguely at it and say, "Well, they were misusing the tool!"? Do you have a line in mind for that? Do you have a calculus of how much personal or societal harm is acceptable?
I agree that there is a cutoff point. I don't think it's been reached for ChatGPT, since we don't do anything about many other technologies that can kill, severely harm, or otherwise damage people when misused. Oftentimes, we don't even put labels on them (we don't have any for kitchen knives). We sell chainsaws, ovens, microwaves, fireworks... The bar is very high when it comes to restricting people's liberty to do something.
Will we all agree on where to draw the line? Probably not: I mentioned guns as something that is "mostly dangerous, yet regulated". You'll find people who say guns should be unregulated, and people who say they should be forbidden. And we're talking about a tool specifically designed to kill people. That's its stated goal. So I don't think we'll find a consensus to ban a tool that is openly designed to chat with you, which you then decide to ask to do a job that you would need to pass very hard qualifications to do yourself.
Because, as seen above, we now see AI spouting anti-Semitic rhetoric. That's harmful. Should we take that as an acceptable level of harm? Should we be okay with saying, "Well, folks shouldn't listen to Grok," and just let it slide?
That's another point where consensus won't be reached. In some countries, anti-Semitic rhetoric is protected as free speech. In others, it'll land you in jail. There is a line to be drawn about allowing people to speak, and I don't think we'll ever reach a consensus on it. But what might happen is that an "acceptable speech filter" gets added to Grok, rather than AI technology being banned. That's what's happening in China, where they apparently prevented models from speaking about Tiananmen, which they deem harmful.
Anti-Semitic books do exist. In some countries, Mein Kampf is forbidden from being printed. In others, it's available in regular libraries. In no country that I know of was printing press technology itself criticized, despite the extremely detrimental impact of this one specific use.
Edit: well, actually, the Church did try to ban the printing press over the printing of Protestant books. So, admittedly, it's not exactly "in no country that I know of"; I should have said "not recently".