But because I happened to mention the Sacklers (who are just one of many examples), it immediately became, "Well, the opioid crisis is only ONE EXAMPLE!"
This is incorrect. You mentioned the Sackler case as an illustration of how you said liability works (and of how actual use, even if unintended, will lead to the AI company being liable despite disclaimers). Except that it is an illustration of how liability works in the US and possibly other common law countries (I noticed you've narrowed the scope of your explanation this time), so it isn't particularly useful to support an argument about "how liability works"; it remains an example of "how liability works in one particular system". Several of the key elements of the case you used as an illustration don't work the same way, or even exist, elsewhere (the perimeter of liability, the amounts awarded, the scope of the problem, the possibility of reaching a settlement, even the concept of a settlement...), so the example doesn't support a general statement about how liability works everywhere, if such a thing were even possible. To be clear, it's not the example I reject, it's the idea that liability works exactly the same everywhere, the way you say it works ("as a bedrock principle of the law", no less), which can't be shown by pointing to a single example of anything.
If you say "all countries use the dollar as a currency", you can't demonstrate it's true by showing, correctly, that New Zealand uses dollars. Especially when you're asserting this with authority to someone from the UK, who kind of knows what their currency is.
If things worked as you say, the EU lawmakers and their legal advisors would all be complete morons, having spent the last three years trying to draft a directive on AI liability (and ultimately failing to agree), based on the
explicitly stated premise that it is exceedingly difficult to make AI operators liable under existing Member States' laws. They must surely be mistaking a tort for a cake and have no grasp of what they're doing.
And even the liability aspect was a tangent to the question of whether AI should be able to give legal or medical advice to the general public -- for the operator to be liable for bad advice, the system must be able to give advice in the first place, or there would be nothing to complain about.
I will reiterate that as to the subject of general AIs drafting legal documents, I would state the following:
A. I think that in America, the corporations that knowingly allow this to happen should be subject to UPL penalties in each state.
B. I also think that any attorney who uses a product and submits it to a Court, signing their name to same, should be harshly disciplined, with no less than a 90 day suspension from the practice of law.
But that's me.
With the context added, it is a perfectly fine position to hold. On a board where people routinely say "doing X is illegal", or "the supreme court* has ruled against that...", or "the constitution has provisions against that, so one can't support this [or denounce this, depending on the topic]", I feel it is a big step forward when someone formulating an opinion on law specifies the country (or group of countries) they intend to be speaking about. At last!
Despite the clear warnings given to users, and given that the US draws a wide perimeter around the monopoly granted to lawyers, it may be entirely justified there for UPL penalties to apply to companies operating a general-purpose LLM that will provide a list of cases supporting a position. I have no reservation about your statement. But it is a different statement from "AI shouldn't be allowed to give legal advice" or "AI giving legal advice is breaking the law".
* Not to single out the US, but I have honestly never seen someone cite the Bundesverfassungsgericht to support an argument about what one can or cannot do.