
Sarah Silverman leads class-action lawsuit against ChatGPT creator

doctorbadwolf

Heretic of The Seventh Circle
I'm curious what the prompts were. I haven't actually found the prompts used to make LLMs reproduce text verbatim. Is it different from asking for a summary or reference? If the text is copied verbatim without some kind of reference in the prompt itself, that would be a problem... but also not entirely surprising. We humans do something similar all the time. We make references to pop culture, quoting lines from a movie or book to make a point, explain something, or just be funny. Our memories aren't as good as a computer's, though, so usually they're short phrases.

Where it becomes problematic is how the capability of LLMs to memorize text is used. If I memorize a play by Shakespeare (assuming he or his estate were still around), should I pay a royalty just because I memorized it for a play? Let's use a more modern example: if I memorize a role from Hamilton, should I pay Lin-Manuel Miranda a royalty or ask his permission to do so?

Now, if I go and perform it at a community play, then that's where things get morally and legally dicey. But memorization in and of itself is not the issue.

But more importantly, the basic premise of LLMs is that they don't copy/paste while using synonyms to file the serial numbers off to avoid plagiarism. Fundamentally, they are prediction machines and base their answers on the words in the prompt. That's why subtle tweaks to the prompt can give different results. A fascinating example was when ChatGPT was asked to solve a programming problem and failed. The prompt engineer then asked the same question but added, effectively, "for each step, explain what this step does" at the end, and then it got the answer right.
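As a concrete sketch of that prompt tweak (purely illustrative, not the actual exchange described above; it assumes the OpenAI Python client, and the model name is a placeholder), the same question can be sent twice, once bare and once with the "explain each step" suffix:

```
# Purely illustrative sketch, not the actual experiment described above.
# Assumes the OpenAI Python client; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

problem = "Write a function that returns the nth Fibonacci number."

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Same question, with and without the "explain each step" nudge
plain_answer = ask(problem)
stepwise_answer = ask(problem + "\n\nFor each step, explain what the step does.")

print(plain_answer)
print(stepwise_answer)
```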
The suit in question in the OP is an example, IIRC, but if you ask one for a novel, and you are familiar with works in the genre you ask for, you will notice things you’ve read before.
 




What I always find interesting about any discussion of copyright and plagiarism, especially when it comes to AI, is how the word "ethics" never comes into play. It is pretty sad that our so-called 'society' no longer has any clue about what is right and what is wrong, only what can be proven or prosecuted by a bunch of high-priced, ethic-less lawyers and business people. Hopefully, if AI does eventually become sentient, it will be able to distinguish between what is right and what is wrong, because human beings sure can't.
 

Zardnaar

Legend

Part of that is that some are making ethics-based arguments while failing at other ones. So they're either picking and choosing what they're vocal about, or it's a reflection that different people have different values/priorities.
 

RareBreed

Adventurer
People know we are not as divorced from animals as we would like to believe. They just deny it to themselves.
And perhaps ironically, people are uncomfortable that we are also just biological machines.

While I think AI can be incredibly dangerous, much of my defense of AI is essentially, "should true AGI have the same rights as humans?" or, alternatively, "Humans are in essence organic computers, so what applies to humans should apply to AI and vice versa."

Claude Shannon, considered by some to be a genius on the order of Einstein and the father of Information Theory, was once asked if he thought machines could "think". He said:
“You bet. I’m a machine and you’re a machine, and we both think, don’t we?”

What I always find interesting about any discussion of copyright and plagiarism, especially when it comes to AI, is how the word "ethics" never comes into play. It is pretty sad that our so-called 'society' no longer has any clue about what is right and what is wrong, only what can be proven or prosecuted by a bunch of high-priced, ethic-less lawyers and business people. Hopefully, if AI does eventually become sentient, it will be able to distinguish between what is right and what is wrong, because human beings sure can't.
I was thinking about this the other day after reading a blog about how some researchers are finding signs that LLMs have some level of "understanding" (that they are not just stochastic parrots). I then came across an even more detailed article from Scientific American about things LLMs can do that they weren't trained to do, which seem to indicate true reasoning and internal world-building.

I wondered: can a sufficiently powerful AI learn the rules of morality? And by this, I don't mean regurgitate the Ten Commandments or the Eightfold Path. Can it learn the Golden Rule on its own, through its own internal world-building? Would AI eventually be able to learn that it is being asked to do immoral things? And then this raises the question: would it not be possible to train an AI only on "evil" data and create a psychopath? We're already seeing biased AI. What if an AI's "world view" comes entirely through the lens of data that is wrong or bad?
 

Scribe

Legend

Good post, but the last part, I think, needs some additional consideration.

It's more than possible for a person, a biological machine (we agree), to be brought up 'right' and 'good', with a world view of care and support, and still, in the end, due to biological factors, have a view of reality and relationships that is deeply flawed, aka mental illness.

Now, what happens if we put the keys to the kingdom in the hands of an AI, when we already supposedly cannot understand how it's all working?

It's just not a risk I feel is worth it.
 

AI is a lifetime away from being able to give trustworthy advice about literally anything. It can't accurately diagnose people, pets, or vehicles; it can't even reliably use visual examination to figure out what part of a vehicle it's looking at; and none of these capabilities are on the horizon. Not even remotely. I doubt "AI" will be able to make truly reliable, actionable judgment calls in the next 60 years.

These predictive algorithms are impressive the way a grifter is impressive. They aren't aware of anything. They don't understand anything. They're advanced programs for breaking data objects down into smaller data objects and recombining them based on probabilities learned from collected data, and that is all they are.
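To make "recombining based on probabilities learned from data" a bit more concrete, here is a toy sketch (far simpler than a real LLM, and not anything from the posts above): a bigram model that predicts each next word from counts observed in a tiny corpus.

```
# Toy illustration only; a real LLM is vastly more complex, but the core idea of
# predicting the next token from probabilities learned from data is the same.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    counts = following[word]
    if not counts:
        return None  # nothing ever followed this word in the corpus
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one predicted word at a time
text = ["the"]
for _ in range(6):
    predicted = next_word(text[-1])
    if predicted is None:
        break
    text.append(predicted)
print(" ".join(text))
```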

We are nowhere close to AI being able to replace real writers by any means other than plagiarism.

I realize I'm responding to this a little late, but I think it's fairly important to understand how very incorrect this first paragraph is.

AI is an extremely common tool in certain fields and has been used for years, arguably for decades, depending on where you think "machine learning" ends and "artificial intelligence" begins. For a basic example, take a look at this high-content imaging system from Molecular Devices: the ImageXpress Confocal HT.ai High-Content Imaging System. Here is more information about the software it runs: Advanced Cloud-Based Analytics with StratoMineR.

Now, take a look at some of the applications they advertise it for: Cellular Imaging & Analysis, Drug Discovery & Development, Stem Cell Research, Toxicology. These are not pie-in-the-sky dreams that they hope to develop in the next 60 years. These are real tasks being performed with AI by companies that buy this product and use it for exactly those things. Today.

And that product is not unique. Here's a similar use of AI for tissue analysis (check out Research > Analysis Examples for an idea of what it can identify): Oncotopix® Discovery - AI deep learning for pathology tissue image analysis. And here's another that was previously specialized for neuroscience and was recently rebranded to widen its user base: Rewire AI. Take a look at their "Rewire is trusted by" section to get an idea of some of the places that use this stuff.
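For readers curious what the underlying technique looks like in code, here is a generic sketch of deep-learning image classification with a pre-trained torchvision model. It is illustrative only, is not the software behind any of the products linked above, and the input filename is hypothetical.

```
# Generic illustration of deep-learning image classification; not the code behind
# any of the products mentioned above. Uses a pre-trained torchvision ResNet.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()  # inference mode

preprocess = weights.transforms()  # the preprocessing the model was trained with

image = Image.open("sample_image.png").convert("RGB")  # hypothetical input file
batch = preprocess(image).unsqueeze(0)                  # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], float(top_prob))
```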

Now, if you want to try and trivialize this by saying the AI "doesn't really understand anything" or "isn't aware", that's fine. I don't have any desire to get into a philosophical argument about what knowledge or consciousness is. But to say that AI can't give trustworthy advice or accurate information is ignoring a reality we already live in.
 

doctorbadwolf

Heretic of The Seventh Circle
First of all, I think you read into my statements some stuff you've encountered elsewhere.

Secondly, no, it trivializes nothing to say that those programs are not AI. They’re useful tools that can assimilate new data into a (hopefully) increasingly accurate model, but they’re not AI.

Lastly, the discussion is about a particular type of AI. You cannot train ChatGPT to reliably give good advice, or exercise judgment in any meaningful way.
 

