We started from the situation where an AI gives bad legal advice, accompanied by a warning about its fallibility and a recommendation to consult a lawyer.
The argument was made that the AI should be configured to refuse to give any sort of legal information, since warnings aren't sufficient and bad legal advice can have catastrophic consequences.
I partly agree (maybe we should configure AI to refuse to address legal topics). But if the goal is to protect people from receiving bad legal advice and acting on it with catastrophic consequences, the solution isn't to modify AI; it's to address the problem as a whole. There is no reason to make an AI-specific law when a general "anti-bad-legal-advice law" would be much more effective.
My own view, however, is that people can deal with bad advice irrespective of its source, and that we might simply need to do nothing, though I can see how one could adopt the other point of view.