Judge decides case based on AI-hallucinated case law

I suppose everyone does. For me gen AI seems about the same level as Internet access. It seems clearly on the ok side of that line.

Is there any particular metric or information you are using to come to that conclusion, or is it just a feeling you have that it is okay?

Yes. There is stuff to be said about free speech but I think most people know what I'll say, so I'll just confirm I'll bite that bullet.

So interesting point here - why would we consider the output of a generative AI "speech"?

As far as the mechanics of its operation are concerned, it does not present the ideas, thoughts, intent, or will of any particular person. Last I heard, the output of generative AI is so divorced from the speech of a person that it cannot be protected by copyright. I don't know of any cases where the output has been specifically stated to reflect the position of the parent company - probably quite the opposite, actually. Someone more familiar with the EULAs in question can correct me, but I expect the companies generally disavow liability for what these things put out.

If it isn't "speech" of any person or legal entity in particular, it should not be protected by the 1st Amendment. If it is to be protected, someone has to own it, and accept the liabilities and restrictions that exist on speech.
 



Is there any particular metric or information you are using to come to that conclusion, or is it just a feeling you have that it is okay?
Do you have a particular metric in mind? Not sure this is something we can quantify.

Generally I think it is ok for books to print things that are false, for internet sites to post things that are false, for people to tell each other things that are false. I don't see a categorical difference with a LLM returning information which is false.

So interesting point here - why would we consider the output of a generative AI "speech"?
I don't. I didn't bring up the free speech point because I think it deserves 1A protection.

There are extensive debates about the limits of free speech in cases where the 1A doesn't apply--what platforms should permit to be posted, for example. In that context I think it's very reasonable for a LLM operator to limit or control the output of their LLM.

In general I am suspicious of a regulatory regime that seeks to control what AI operators can disseminate. I don't have a categorical objection--we've mentioned the case of copyrighted work, but explicit content is also really important here.

But we've also discussed the Tiananmen case. And how particular AI operators can bias the resulting output to their own views (Grok). I think it is easy to imagine how a legal regime that heavily polices LLM output could be taken advantage of by those in power.

In that context, I see limitations like "no legal or medical advice" to be overreach.
 


Generally I think it is ok for books to print things that are false, for internet sites to post things that are false, for people to tell each other things that are false. I don't see a categorical difference with a LLM returning information which is false.

I guess it depends on where you live. I don't know of many places where printing, posting, or telling false things is in itself enough to be prosecuted, absent additional specific circumstances, even when done deliberately. One might be found civilly liable, but an outright ban is uncommon. I'd love to see examples to the contrary, but yes, generally just having a website that spews false things isn't punished. Otherwise we could fine people for saying the Earth is flat.

There are extensive debates about the limits of free speech in cases where the 1A doesn't apply--what platforms should permit to be posted, for example. In that context I think it's very reasonable for a LLM operator to limit or control the output of their LLM.

That's what many countries do. Their view of free speech isn't defined by the First Amendment of the US constitution, and they generally had a long debate on who can be prosecuted for press-related offenses, first in written media and then on the Internet. Several countries hold the editor of a website responsible for what is written on it, irrespective of the author, because it is the propagation of the illegal message that is sanctioned, not its creation (so it is of little interest to determine whether the output can be attributed to the person using the service or to the company selling it).

In that context, I see limitations like "no legal or medical advice" to be overreach.

Same here, but on the basis that it doesn't fall into one of the known categories of regulated speech. Long debates have already been held to determine exactly what is tolerable speech (even if unethical, wrong, or dangerous, or simply disliked) and what is banned, with no two countries drawing exactly the same lines (blasphemy, for example, being protected speech in some countries and deemed too offensive in others), so the matter is considered settled.
 

As far as the mechanics of its operation are concerned, it does not present the ideas, thoughts, intent, or will of any particular person. Last I heard, the output of generative AI is so divorced from speech of a person that it cannot be protected with copyright.

That depends. In the US (where the question doesn't seem to be entirely settled, but you probably know more than me) and in Japan, you can't, but in the EU or China the tendency is to consider that anything that results from a human creative process can be protected. So the answer to a basic question might not be protected, but more involved efforts can be (for example, if you write the outline of a story and ask the AI to refine it into a full-fledged novel, you can enjoy IP protection for the output).


I don't know of any cases where the output has been specifically stated to reflect the position of any parent company - probably quite the opposite, actually, though someone who is more familiar with the EULAs in question can correct me, but I expect the companies generally disavow liability for what the things put out.

They (ChatGPT, Claude) do tend to present the output as contractually owned by the user. Which wouldn't necessarily exempt the operator of the website from responsibility.
 

