Judge decides case based on AI-hallucinated case law

(Emphasis mine.)

You don’t do that. You do NOT insult the populace when drafting laws.

You simply legislate that LLMs cannot dispense medical or legal information beyond recommendations to seek professional guidance. Companies or individuals releasing AIs that do so get held liable- preferably under a strict liability standard.

We do this all the time: seat belt laws; fireworks ordinances; requiring licenses or specialized training to engage in certain activities or professions; restricting access to certain materials or products, etc. Nowhere does any of said legislation include judgmental language as to the mental faculties of the general populace and/or those covered by those laws, even in the legislative history notes that explain why the law is being considered.

Does this prevent pushback? Absolutely not. Almost all of the examples I used generated some level of resistance. Some called seatbelt laws an example of “the nanny state”. Licensing requirements get opposed. People ignore fireworks bans.

But you NEVER draft a law, however well-intentioned, with an explanation that too many idiots exist out there. Not even the safety warnings get judgy; they just set forth the rules.
Now you've gone and done it. You're going to have to explain Strict Liability and Absolute Liability :ROFLMAO:
 


(Emphasis mine.)

You don’t do that. You do NOT insult the populace when drafting laws.

You simply legislate that LLMs cannot dispense medical or legal information beyond recommendations to seek professional guidance. Companies or individuals releasing AIs that do so get held liable- preferably under a strict liability standard.

We do this all the time: seat belt laws; fireworks ordinances; requiring licenses or specialized training to engage in certain activities or professions; restricting access to certain materials or products, etc. Nowhere does any of said legislation include judgmental language as to the mental faculties of the general populace and/or those covered by those laws, even in the legislative history notes that explain why the law is being considered.

Does this prevent pushback? Absolutely not. Almost all of the examples I used generated some level of resistance. Some called seatbelt laws an example of “the nanny state”. Licensing requirements get opposed. People ignore fireworks bans.

But you NEVER draft a law, however well-intentioned, with an explanation that too many idiots exist out there. Not even the safety warnings get judgy; they just set forth the rules.
I'm not sure I understand what you're getting at here. I know that. I didn't think that was how the law was going to be phrased.
 


Now you've gone and done it. You're going to have to explain Strict Liability and Absolute Liability :ROFLMAO:

It doesn't matter.

Look, I pointed out that there is a longstanding body of law (in the common law) that doesn't just look at a product's intended use, but at its actual and reasonably foreseeable use. These are simple concepts for anyone who can tell the difference between a tort and, um, a cake.

But because I happened to mention the Sacklers (who are just one of many examples), it immediately became, "Well, the opioid crisis is only ONE EXAMPLE!" Because people don't want to understand the law, they want to argue that they are right.

It's not one example- it's a bedrock principle of the law. But this isn't really the forum for that- people would rather discuss fanciful hypotheticals because that's more likely to support what they already know to be true.

These are all complicated concepts, and the framing matters- there is a difference between, for example, a general-use AI released to the public and a specialized AI that has gone through FDA certification and is used for diagnostic procedures in the medical field. Because people mix and match the framing to make their points, not much is being accomplished other than people talking past each other.

I will reiterate that as to the subject of general AIs drafting legal documents, I would state the following:
A. I think that in America, the corporations that knowingly allow this to happen should be subject to UPL penalties in each state.
B. I also think that any attorney who uses such a product and submits its output to a Court, signing their name to same, should be harshly disciplined, with no less than a 90-day suspension from the practice of law.

But that's me.
 

It doesn't matter.

Look, I pointed out that there is a longstanding body of law (in the common law) that doesn't just look at a product's intended use, but at its actual and reasonably foreseeable use. These are simple concepts for anyone who can tell the difference between a tort and, um, a cake.

But because I happened to mention the Sacklers (who are just one of many examples), it immediately became, "Well, the opioid crisis is only ONE EXAMPLE!" Because people don't want to understand the law, they want to argue that they are right.

It's not one example- it's a bedrock principle of the law. But this isn't really the forum for that- people would rather discuss fanciful hypotheticals because that's more likely to support what they already know to be true.

These are all complicated concepts, and the framing matters- there is a difference between, for example, a general-use AI released to the public and a specialized AI that has gone through FDA certification and is used for diagnostic procedures in the medical field. Because people mix and match the framing to make their points, not much is being accomplished other than people talking past each other.

I will reiterate that as to the subject of general AIs drafting legal documents, I would state the following:
A. I think that in America, the corporations that knowingly allow this to happen should be subject to UPL penalties in each state.
B. I also think that any attorney who uses such a product and submits its output to a Court, signing their name to same, should be harshly disciplined, with no less than a 90-day suspension from the practice of law.

But that's me.
I was mostly joking about the hair-splitting and reframing that has been going on every time what seems to be a valid point is raised ;)
 

But because I happened to mention the Sacklers (who are just one of many examples), it immediately became, "Well, the opioid crisis is only ONE EXAMPLE!"

This is incorrect. You mentioned the Sacklers as an illustration of how liability works. Except that it's an illustration of how liability works in the US, and possibly in other common law countries, so it isn't particularly useful to support a general argument about "how liability works". It was still an example supporting "how liability works in a particular system". Several of the key elements of the case you used as an illustration don't work the same way, or even exist, elsewhere (the perimeter of liability, the amounts awarded, the scope of the problem, the possibility of reaching a settlement, even the concept of a settlement...), so it didn't support a general statement on how liability works, if such a thing were even possible. To be clear, it's not the example I reject, it's the idea that liability works exactly the same everywhere, in the way you're saying it works.

And even the liability aspect was a tangent to the question of whether AI should be able to give legal or medical advice to the general public -- for the operator to be liable for bad advice, the system must be able to give advice in the first place, or there would be nothing to complain about.

I will reiterate that as to the subject of general AIs drafting legal documents, I would state the following:
A. I think that in America, the corporations that knowingly allow this to happen should be subject to UPL penalties in each state.
B. I also think that any attorney who uses such a product and submits its output to a Court, signing their name to same, should be harshly disciplined, with no less than a 90-day suspension from the practice of law.

But that's me.

With the context added, it is a perfectly fine position to hold. On a board where people routinely say "doing X is illegal", or "the supreme court* has ruled against that...", or "the constitution has provisions against that, so one can't support this [or denounce this, depending on the topic]", I feel we made a big step forward when people formulating an opinion on the law specified the country (or group of countries) they intended to be speaking about. At last!

Despite the clear warnings given to users, and given that the US grants lawyers a broad monopoly, it may be totally justified there for UPL penalties to apply to companies operating a general-purpose LLM that agrees to provide a list of cases supporting a position. I don't have any reservations about your statement. It is, however, a different statement from "AI shouldn't be allowed to give legal advice" or "AI giving legal advice is breaking the law".



* not to single out the US, but I honestly never saw someone quoting the Bundesverfassungsgericht to support an argument on what one can or cannot do.
 