Judge decides case based on AI-hallucinated case law

(Emphasis mine.)

You don’t do that. You do NOT insult the populace when drafting laws.

You simply legislate that LLMs cannot dispense medical or legal information beyond recommendations to seek professional guidance. Companies or individuals releasing AIs that do so get held liable- preferably under a strict liability standard.

We do this all the time: seat belt laws; fireworks ordinances; requiring licenses or specialized training to engage in certain activities or professions; restricting access to certain materials or products, etc. Nowhere does any of said legislation include judgmental language as to the mental faculties of the general populace and/or those covered by those laws, even in the legislative history notes that explain why the law is being considered.

Does this prevent pushback? Absolutely not. Almost all of the examples I used generated some level of resistance. Some called seatbelt laws an example of “the nanny state”. Licensing requirements get opposed. People ignore fireworks bans.

But you NEVER draft a law, however well-intentioned, with an explanation that too many idiots exist out there. Not even the safety warnings get judgy; they just set forth the rules.
Now you've gone and done it. You're going to have to explain Strict Liability and Absolute Liability :ROFLMAO:
 


But you NEVER draft a law, however well-intentioned, with an explanation that too many idiots exist out there. Not even the safety warnings get judgy; they just set forth the rules.
I'm not sure I understand what you're getting at here. I know that. I didn't think that was how the law was going to be phrased.
 

Now you've gone and done it. You're going to have to explain Strict Liability and Absolute Liability :ROFLMAO:
(GIF: Running Man Abandon Thread, by MOODMAN)
 

Now you've gone and done it. You're going to have to explain Strict Liability and Absolute Liability :ROFLMAO:

It doesn't matter.

Look, I pointed out that there is a longstanding body of law (in the common law) that doesn't just look at a product's intended use, but at its actual and reasonably foreseeable use. These are all simple concepts for anyone who can tell the difference between a tort and, um, a cake.

But because I happened to mention the Sacklers (who are just one of many examples), it immediately became, "Well, the opioid crisis is only ONE EXAMPLE!" Because people don't want to understand the law, they want to argue that they are right.

It's not one example- it's a bedrock principle of the law. But this isn't really the forum for that- people would rather discuss fanciful hypotheticals because that's more likely to support what they already know to be true.

These are all complicated concepts, and the framing matters- there is a difference between, for example, a general-use AI released to the public, and a specialized AI that has gone through FDA certification and is used for diagnostic procedures in the medical field. Because people mix and match the framing to make their points, not much is being accomplished other than people talking past each other.

I will reiterate that as to the subject of general AIs drafting legal documents, I would state the following:
A. I think that in America, the corporations that knowingly allow this to happen should be subject to UPL (unauthorized practice of law) penalties in each state.
B. I also think that any attorney who uses such a product and submits its output to a Court, signing their name to same, should be harshly disciplined, with no less than a 90-day suspension from the practice of law.

But that's me.
 

It doesn't matter.
I was joking mostly about the hair-splitting and reframing that has been going on, every time what seems to be a valid point is raised ;)
 

But because I happened to mention the Sacklers (who are just one of many examples), it immediately became, "Well, the opioid crisis is only ONE EXAMPLE!"

This is incorrect. You mentioned the Sackler case as an illustration of how you said liability works (and how actual use, even if unintended, will lead to the AI company being liable despite disclaimers). Except that it's an illustration of how liability works in the US, and possibly other common law countries (as I noticed you've narrowed the scope of your explanation this time). So it isn't particularly useful to support an argument about "how liability works"; it was only an example of "how liability works in a particular system".

Several key elements of the case you used as an illustration don't work the same way, or even exist, elsewhere (the perimeter of liability, the amounts awarded, the scope of the problem, the possibility of a settlement, even the concept of a settlement...), so it can't support a general statement about how liability works everywhere, if such a statement were even possible.

To be clear, it's not the example I reject; it's the idea that liability works exactly the same everywhere, the way you say it works ("as a bedrock principle of the law", no less), which can't be shown by identifying a single example of anything. If you say "all countries use the dollar as a currency", you can't demonstrate it's true by showing, correctly, that New Zealand uses dollars. Especially when you say it with authority to someone from the UK, who kind of knows what their currency is.

If things worked as you say, the EU lawmakers and their legal advisors would all be complete morons, having spent the last three years trying to draft a directive on AI liability (and ultimately failing to agree) based on the explicitly stated premise that it is exceedingly difficult to make AI operators liable under existing Member States' laws. They must surely be mistaking a tort for a cake and have no grasp of what they're doing.

And even the liability aspect was a tangent to the question of whether AI should be able to give legal or medical advice to the general public -- for the operator to be liable for bad advice, the system must be able to give advice in the first place, or there would be nothing to complain about.

I will reiterate that as to the subject of general AIs drafting legal documents, I would state the following:
A. I think that in America, the corporations that knowingly allow this to happen should be subject to UPL (unauthorized practice of law) penalties in each state.
B. I also think that any attorney who uses such a product and submits its output to a Court, signing their name to same, should be harshly disciplined, with no less than a 90-day suspension from the practice of law.

But that's me.

With the context added, it is a perfectly fine position to hold. On a board where people routinely say "doing X is illegal", or "the supreme court* has ruled against that", or "the constitution has provisions against that, so one can't support this [or denounce this, depending on the topic]", I feel we made a big step forward when people formulating an opinion on law specify the country (or group of countries) they intend to speak about. At last!

Given that the US grants lawyers a broad monopoly perimeter, it may be totally justified there, despite the clear warnings given to users, for UPL penalties to apply to companies operating a general-purpose LLM that will provide a list of cases supporting a position. I don't have any reservation about your statement. It is, however, a different statement from "AI shouldn't be allowed to give legal advice" or "AI giving legal advice is breaking the law".



* not to single out the US, but I have honestly never seen anyone quote the Bundesverfassungsgericht to support an argument about what one can or cannot do.
 

I'm not sure I understand what you're getting at here. I know that. I didn't think that was how the law was going to be phrased.
OK, so you don’t mean what you said literally, but metaphorically. 👍🏽

Here’s the thing, though: nearly every safety law, public health measure, OSHA regulation, professional conduct standard, or licensing requirement gets some kind of pushback. Some frame it as an insult to their intellect and autonomy (where none was meant), or as government overreach. Some merely ignore the rules and do as they please.*

On occasion, the pushback wins, and regs get redrafted or withdrawn. Some get so much flak they never get passed.

But we DON’T make a practice of refusing to create rules to regulate behaviors simply because the regulations may be unpopular or may hurt feelings. That would render governments superfluous.



* despite never owning a true sports car, I have a long history of repeated speeding (except in school zones) and have only gotten 2 tickets, so I’m not claiming innocence here.
 

I have honestly never seen anyone quote the Bundesverfassungsgericht to support an argument about what one can or cannot do.
Given enough time, any discussion of legal frameworks on an international board WILL feature examples from different countries. I know I’ve seen citations of regulations from Canada, England, New Zealand, France and others right here on ENWorld.🤷🏾‍♂️
 

OK, so you don’t mean what you said literally, but metaphorically. 👍🏽
I meant it literally. That is why I quoted your phrasing. I'm speaking about the justification for the law, not the text of the law itself. When we discuss whether we ought to have such a law, the justification offered is that people cannot evaluate medical claims accurately.

But we DON’T make a practice of refusing to create rules to regulate behaviors simply because the regulations may be unpopular or may hurt feelings. That would render governments superfluous.
I'm confused. I think popularity is exactly why we make regulations in democratic societies: a step removed through representation, subject to lobbying and regulatory capture, outsourced to experts whom we collectively decide to trust, but ultimately based on and legitimated by popularity.

In many cases the US lacks regulations that are common elsewhere because they would be unpopular here. Gun control is the most obvious. Also restrictions on speech. These regulations hurt feelings in a profound way because many US citizens believe they violate natural rights.

I know you know all of this so I am probably misreading your point. I hope it clarifies why I am confused. Maybe you mean something like: "We ought not to care about whether people are insulted when making public health regulations"? Or "these regulations are not interfering with rights so any insult is minor and categorically different than with gun control"?
 

Given enough time, any discussion of legal frameworks on an international board WILL feature examples from different countries. I know I’ve seen citations of regulations from Canada, England, New Zealand, France and others right here on ENWorld.🤷🏾‍♂️

Sure, and that's great (and interesting!) when it shows that different countries sometimes find widely different solutions to common problems, or that some things aren't a problem at all.

With regard to the topic at hand, I think general-purpose LLMs accessible through a website should be able to discuss legal or medical topics, with the appropriate warnings on accuracy. Not training them on this kind of material may only lead to more hallucinations, won't improve their accuracy on the topic over time, and won't allow a lot of useful explanation -- if you want to know whether some country forbids chewing gum, it is most certainly for entertainment purposes and shouldn't be outside what an AI chatbot can discuss. It is technically feasible to have a filter on the answers actually displayed (for example, if you ask ChatGPT to draw a well-known political figure, it will draw the image, then display a text saying it's not going to do so). Such a filter might be adapted to specific audiences (by country, by age, or by signing an agreement showing you understand the limitations of the tool you're using...).
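To make the "filter on the answers actually displayed" idea concrete, here is a minimal sketch in Python of how a display-side filter with per-audience policies could work. Everything in it is a hypothetical illustration (filter_answer, the keyword patterns, and the audience names are made up for this post, not any vendor's actual API), and a real deployment would use a trained moderation classifier rather than keyword matching:

    import re

    # Hypothetical per-audience policy: which topics get replaced by a
    # referral notice instead of the generated answer.
    POLICIES = {
        "general": {"legal", "medical"},
        "signed_agreement": set(),  # accepted the tool's limitations: no filtering
    }

    # Naive keyword patterns standing in for a real moderation classifier.
    TOPIC_PATTERNS = {
        "legal": re.compile(r"\b(statute|case law|liability|sue)\b", re.I),
        "medical": re.compile(r"\b(diagnosis|dosage|prescription)\b", re.I),
    }

    REFERRAL = ("This looks like {topic} advice. Please consult a "
                "qualified professional before acting on it.")

    def filter_answer(answer, audience="general"):
        """Return the generated answer, or a referral notice if the
        audience's policy restricts the detected topic."""
        restricted = POLICIES.get(audience, POLICIES["general"])
        for topic, pattern in TOPIC_PATTERNS.items():
            if topic in restricted and pattern.search(answer):
                return REFERRAL.format(topic=topic)
        return answer  # nothing restricted detected: display as generated

    # A general-audience user gets the referral notice; a user who signed
    # the agreement sees the generated text unchanged.
    print(filter_answer("Under this statute, your liability is limited..."))
    print(filter_answer("Under this statute, your liability is limited...",
                        "signed_agreement"))

The only point of the sketch is that generation and display are separate steps, which is what makes per-country, per-age, or per-agreement policies possible on top of the same underlying model.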
 
