Judge decides case based on AI-hallucinated case law

Or the non-corporate Chinese entity that published, for free, the AI software that you run on your own computer; in the case of, say, DeepSeek.

Well, I am sure you know the law well enough to understand that in that case, if you use someone else's model on your own computer, then liability for defamation would attach only when you then publish the information that you get from it. You can think that through.

I think you're trying to make a point like, "Look, this stuff is inevitable, so why let pesky laws and stuff stop it, or the Chinese might beat us," but ... and I mean this in the best possible way ... so? We have laws that we abide by.

For that matter, I am sure you can puzzle out what would happen if a foreign entity operates unlawfully in another country. Because ... you know, it happens in other areas as well.
 


Maybe. Maybe we should be happy with a result and the accompanying warning to consult a professional and not rely on it for anything important.

But then we might forbid humans from doing so as well, because bad legal advice abounds all around. What we do for healthcare, though, isn't to forbid advice; it's to forbid impersonating a doctor.

Is it (AI) a TOOL, made by a corporation? Or is it a HUMAN?

You can't switch up metaphors for your convenience. Either a corporation is making a product for people to use, or not.
 

Well, I am sure you know the law well enough to understand that in that case, if you use someone else's model on your own computer, then liability for defamation would attach only when you then publish the information that you get from it. You can think that through.

Indeed. We're discussing AI technology, and I wanted to point out that it doesn't need to be accessed through a website run by a commercial provider. Putting restrictions on commercial providers doesn't affect AI in itself.

I think you're trying to make a point like

No. If that were my point, I would have said it, don't you think? I have no reason to root for the US in a US/Europe/China competition. If the US banned AI, I couldn't care less.
 

Is it (AI) a TOOL, made by a corporation? Or is it a HUMAN?

It is a tool. But if we make a law preventing bad advice from being given, on the basis that we need to protect people from bad advice, and AI is giving bad advice, a much better law would forbid the outcome (bad advice being given) rather than the tool. We don't forbid oil, knives, kitchen rolls, cars, guns, clubs... What we forbid is killing, irrespective of how people do it. The goal of the proposed law is to protect people from doing silly things based on bad legal advice, and that advice can certainly come from AI, as much as from websites or a random drunkard in a bar. It would be much more effective to ban giving bad legal advice, irrespective of the author (corporation or person), since we're aiming at the goal of protecting people from bad advice.
 

Indeed. We're discussing AI technology, and I wanted to point out that it doesn't need to be accessed through a website by a commercial provider. Putting restriction on commercial providers doesn't affect AI in itself.

And as I noted earlier, different applications of AI will have different frameworks. I have been consistently talking about possible tort liability - not a regulatory framework. Right? Notice that I've been mentioning things like defamation and failure to warn and defective product.

Those are things I can speak to. I have no idea if a regulatory framework would be preferable, and what it would look like. Although it's generally a given that corporations prefer a regulatory framework that immunizes or lessens the possibility of tort liability. Especially where, as here, they have a product with a broad base of users and ... let's say that the product is still a work in progress.

Pretty common story. Move fast. Break things. Get big enough that the government regulates you before the tort liability kicks in.
 


It is a tool. But if we make a law preventing bad advice from being given, on the basis that we need to protect people from bad advice, and AI is giving bad advice, a much better law would forbid the outcome (bad advice being given) rather than the tool. We don't forbid oil, knives, kitchen rolls, cars, guns, clubs... What we forbid is killing, irrespective of how people do it. The goal of the proposed law is to protect people from doing silly things based on bad legal advice, and that advice can certainly come from AI, as much as from websites or a random drunk I can meet in a bar. It would be much more effective to ban giving bad legal advice, irrespective of the author (corporation or person), since we're aiming at the goal of protecting people from bad advice.

...at this point, I don't think you understand the basic principles of what is being discussed.
 

It is a tool. But if we make a law preventing bad advice from being given, on the basis that we need to protect people from bad advice, and AI is giving bad advice, a much better law would forbid the outcome (bad advice being given) rather than the tool. We don't forbid oil, knives, kitchen rolls, cars, guns, clubs... What we forbid is killing, irrespective of how people do it.
We make laws that limit the way that tools can be produced all the time. Take a look at table saws, for example, and proposed legislation regarding SawStop style technology. When something is dangerous we will frequently either mandate how it must be used, or change how it must be made in order to mitigate risk.
 


...at this point, I don't think you understand the basic principles of what is being discussed.

We started from a situation where AI gives bad legal advice, accompanied by a warning about its fallibility and a recommendation to consult a lawyer.

The argument was made that it should be configured to refuse to give any sort of legal information, since warnings aren't sufficient, because giving bad legal advice can have catastrophic consequences.

I agree (maybe we should configure AI to refuse to address legal topics), and I say that if the goal is to protect people from getting legal advice and acting upon it with catastrophic consequences, the solution isn't to modify AI; the solution lies in dealing with the problem as a whole. There is no reason to make an AI-specific law when a general "anti-bad-legal-advice law" would be much more effective.

My view, however, is that people can deal with bad advice, irrespective of the source, and we might simply have to do nothing; but I can see how the other point of view could be adopted.
 
