The AI Red Scare is only harming artists and needs to stop.

Some info on how GenAI models handle input data:

Importantly, the input data for GenAI models is not stored as part of the model. AI image generation models do not keep copies of all the images they were trained on internally. What they do is take those inputs and use them to modify a set of weights which allow them to generate images in the future. These weights essentially indicate how likely it is for one part of an image to appear a certain way, given the other parts of the image and the text prompts associated with the image.
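As a loose analogy (in text rather than images, and vastly simpler than a real image model), here's a toy sketch of the "weights, not copies" idea. The `BigramModel` class is hypothetical and just counts which character tends to follow which; after training, the original strings are discarded, and generation works purely from the learned weights:

```python
import random
from collections import defaultdict

class BigramModel:
    """Toy character-level model: retains only co-occurrence weights,
    never the training strings themselves."""

    def __init__(self):
        # weights[a][b] = how often character b followed character a
        self.weights = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        for a, b in zip(text, text[1:]):
            self.weights[a][b] += 1  # update weights; the text itself is not kept

    def sample(self, start, length):
        out = [start]
        while len(out) < length and self.weights[out[-1]]:
            followers = self.weights[out[-1]]
            chars, counts = zip(*followers.items())
            out.append(random.choices(chars, weights=counts)[0])
        return "".join(out)

model = BigramModel()
model.train("abababab")
print(model.sample("a", 6))  # "ababab" -- regenerated from weights, not from a stored copy
```

Real image models are enormously more complex, of course, but the structural point is the same: what survives training is a table of statistical tendencies, not the inputs.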

So it would be a stretch to say that these weights are actual copies of the original images in the way that a copyright law might recognize. In some ways, this is true about human artists: They have learned and trained their brains to understand how art is put together and in their brains somewhere there is a whole set of knowledge about how images can be generated in plausible and pleasing ways.

So if a GenAI tool (or a human artist) simply keeps that knowledge internal and never uses it, it's hard to see a problem. The real concern is when it gets used.

If you asked me to draw a picture in the style of X, I might be able to produce something that maybe some people could recognize as in the style of X, but it's unlikely to worry an IP lawyer or an ethics committee, because I am a terrible artist. A professional artist could almost certainly produce something that both legally and ethically steals someone else's intellectual property. A GenAI tool similarly will be able to do the same sort of thing. Copying owned art is well covered by existing laws.

A more concerning issue is art which is not directly copied, but uses significant elements of style from specific artists. I'm not super-knowledgeable about art law, but my impression is that if an artist creates an image of something not previously drawn in the style of another artist, that is not covered by copyright. Basically, copyrighting style is not possible.

So why do people not worry about human artists doing this versus an AI doing this? One thought is that the cost is so much lower for an AI to do essentially what an artist does that it makes the problem no longer a negligible one, but a pervasive one: Previously, to get a painting of a medieval battle in the style of Van Gogh, it would take a minimum of several weeks and thousands of dollars. Now it takes 30 seconds and costs cents.

So my feeling is that the area of using GenAI to "be creative" (as opposed to summarization, retrieval, translation or the other more mundane uses of GenAI) will require new legislation. It's not so much that we don't want it to be possible to get images using the styles of artists, or to write novels in the style of certain writers, it's that we don't want it to become the dominant way of creating art; we don't want it to be so easy that human-created art becomes a niche market for neo-luddites. Essentially, we are looking for a form of protective legislation that ensures that human creativity is rewarded and not disadvantaged by the existence of GenAI.

Fundamentally, although GenAI tools can currently be used for copyright infringing purposes, I don't think that stopping that will make the world a better place for creators. I think we need new laws that protect artists from a sea of cheap AI art. I'm not generally a fan of protectionism, but in this case, I think I might be.
 



Are you sure you do?

That answers that question. No you do not know the difference.

Yea, relegating AI to a tool doesn't support your view. Pencils are tools; typewriters, brushes, and paint dyes are all tools. Tools get used all the time to create commercial art without the permission or compensation of the original artists.

Nope, that's not how image copyright works. You can't take and copy person A's photo, but you can take your own. Even if the two images are indistinguishable.

Yea, that argument is just so useless. They said the same thing about BetaMax. And "they" probably said the same thing about offset presses.

Why? So every job and career choice should be protected for the life of the person? Or just artists?

I suspect I know who you are talking about. He's been open about why he's leaving art. He's a great artist and I have and continue to buy stuff from him. But he's smart, and he's making a smart choice to finish his degree and go into a different career, one that will not be directly threatened by AI. He's a near perfect example of what everyone should do throughout their lifetime. Re-evaluate their life choices and the path they are going down.

Smart people do this all the time. Programmers learn new languages. Engineers learn new processes. Stores sell different products. This is normal, expected, and healthy. Allowing people to bury their heads in the sand and not learn, change, and GROW through their lives is not doing them any favors, and is actively harmful to society. Imagine if we still only had the cars or airplanes that we designed in the 1920's. Or if our medical doctors had to work with only the knowledge and tools of the 1600's? Or if artists could only use chalk and stones from 2000 BC?

Not all progress is immediately beneficial, and "we" need to direct and focus it. But adapting to change is critical.
Yes actually, all artist jobs should be protected. Whoever you are talking about is not who I am working with. I also didn't say that artists shouldn't use AI, but that we need regulations to save their jobs. If you don't value a world where artists have jobs, that's none of my business.
 

Swanosaurus

Adventurer
Yea, relegating AI to a tool doesn't support your view. Pencils are tools; typewriters, brushes, and paint dyes are all tools. Tools get used all the time to create commercial art without the permission or compensation of the original artists.

I don't quite get who the "original artist" would be in this context. If I'm plagiarising someone's work using a brush and then make money off it, that might very well be a copyright violation. If I'm using a typewriter to reproduce The Lord of the Rings and then sell copies, that's certainly a copyright violation. If I'm creating an original work myself, however, I'm the original artist.

But anyway, I don't feed other people's creative work into my brush or paint so that they will create something from it automatically. If I get inspired by other people's art, it means that I process other works - to put it a little pompously - as a resource to develop my personality as an artist. I use their work for personal edification. To me, that seems very different from using other people's creative works as a resource for machine-processing.

If I use a machine to print reproductions of paintings someone holds the copyright to, for commercial use, I should pay them. The reproduction may not be exactly the painting, but I make use of their creative work to make money. If I feed someone's painting into an AI to enable it to create new paintings, I'm basically doing the same thing - using the creative work of another person to enable a machine to create something that I can sell (or, more generally, to enable it so that I can sell the machine's services). I'm using the specifically creative content of other people's art as a resource to make money, and that warrants compensation.

Basically, to me it's more about exploitation of other people's work than it is about plagiarism.
 

tomBitonti

Adventurer
If you post a story on the internet and I read it along with thousands of other stories, then take all those stories into my mind and write my own story, that story will be bits and pieces of what you wrote. But I will not be stealing what you wrote.

Responding to just the above point.

A person reading an article, or looking at a painting, or listening to music, is not the same as software doing a similar thing:

* Software generally has the ability to retain much more detail than a person.

* I am not a lawyer, but I suspect: That a copyrighted work has been authorized for public viewing does not automatically authorize the software's "viewing" of the same work for training purposes. A use of a copyrighted work is subject to limitations according to whatever license was obtained. While there are fair use exceptions, I'm not aware of "software training" being one of those exceptions.

Uploading to YouTube, Facebook, etc., almost certainly assigns rights. One of them very probably allows software training. This may apply as well to text sent through popular software.

How a copyrighted work is used matters! My understanding is that copying a movie for time-shifting is allowed without extra authorization. Copying a movie to give (or sell) it to a friend is not allowed.

Thx!
TomB
 

Thomas Shey

Legend
Some info on how GenAI models handle input data:

...

Just wanted to note that if you want to construct a valid criticism of generative AI in the creative area, how Graham did it here is how you do it. It gets at the root of the potential issues in a way that doesn't try to make distinctions that are not clearly true, and doesn't require anyone to be able to look into the quasi-black boxes that are both human brains and neural networks in different ways.

The only note I'll make is that the line between plagiarism and influence is muddy at best, and gets muddier the farther you get from text. It's become clear that the distinction can be nearly impossible to draw in music, for example.
 

cbwjm

Seb-wejem
They said the same thing about "THE BLOCKCHAIN!" How'd that work out?
It's still here, but AI clearly has far greater impact on our daily lives than the blockchain, and much like the industrial revolution, it is going to help streamline workflows. It's already made some jobs easier, and that's just casual use by people using ChatGPT to help write a report or something. I think that ultimately it will be for the betterment of mankind, though as Sir Peter Gluckman said, the risks need to be weighed up against the benefits.
 

tomBitonti

Adventurer
Given that we haven't a clue how the human brain works, that you would confidently declare that amazes me. How the heck do you know what method the human brain uses? Go ahead and win a Nobel Prize and a lot of other acclaim by revealing such secrets of the mind.



That's not clear to me at all. When I was a younger, naive software engineer I always imagined that one day we'd get these Turing-grade AIs, I'd interact with them, and I'd be forced to conclude that they were intelligent because I couldn't distinguish them from a human. But that's not what has happened at all. Instead, it's been obvious from the start that the current generation of AI is as sentient as bricks, but the really strange thing is that the more I interact with them, the more I realize interactions with humans have the same flaws and patterns. The more I interact with AI, the less obviously sentient or intelligent humans seem, in the sense that I had assumed. It's not at all clear how or why humans produce speech, but it could be that underneath there is just some predictive text engine rendered in biological form. I've had to overturn all my preconceptions about how intelligence and language work. The sense/reference model is no longer big enough or complete enough to describe what is going on.

There are currently missing elements and algorithms that humans have that AI lack or which haven't been integrated together in interesting ways, sure, but that's coming fast.

I was watching Deep Blue live against Kasparov about 25 years ago, and in the final game Deep Blue began playing an unusual sequence while Kasparov had a pawn advanced to the seventh rank, and the commentators - experts in chess - were saying on the broadcast, "Well, this is typical of computer play. The AI is unable to reason about the impact of a promoted pawn on the board, or else it has foreseen Kasparov's win and is stalling. Computers will never be able to defeat humans in chess because they lack true imagination and true creativity. You need a human spirit to truly understand chess." (I'm not making this up. I may forget the exact words, but this is the sort of stuff they were saying.) And in the middle of this rant, Kasparov suddenly resigned. And the commentators were dumbfounded. "Why has Kasparov resigned?" And several seconds passed, and one of these experts said, "Because... it's mate in two?!?!" In two, mind you? In two moves! It wasn't just that it suddenly turned out that imagination and creativity and actually understanding chess were just algorithms and predictive ability, as I had fully expected that. What I really discovered then was that humans weren't very good at chess at all, because the chess world was watching this and it took all of them until the last moment to even see what the computer was doing. Maybe Kasparov had seen it earlier or not. But the chess world was by and large oblivious. I'd witnessed my first Turing-grade AI, and I realized that being indistinguishable from human was strictly domain dependent.

The exact text or the exact form of an image isn't being stored in the neural networks being generated by reading the text or looking at the images. We don't know exactly what it is that is being stored, but we do know for sure it isn't a copy or a compression or anything like that. So if an AI mind stores something it learns from reading a text or scanning an image, how is that fundamentally different than me with my meat brain storing something I learn from reading a text or scanning an image? And if you digitize my mental process so that it can be done faster, does it become a copyright violation just because you now find it more threatening? And if an AI-produced image wouldn't be a copyright violation if it was produced by a human mind, how does it become a copyright violation if it was produced by an artificial mind?

There is a fundamental axiomatic assumption by the zealots that this process is inherently theft but I think that assumption is unwarranted and not really supportable. If I read a book and retain some impression of that book in my mind, the copy in my mind isn't a copyright violation. It only becomes a violation of copyright if I reproduce it in some fashion that would violate copyright, and neither the storage mechanism of these AI nor the way they produce images inherently violates copyright. So no theft has occurred. If someone trains an AI on what is publicly available on the net, well, that was not an ethical violation that I could see. The whole point of intellectual property protection is to encourage innovation. It's not there to stop innovation. The writers of this software have done maybe the most innovative thing with human language since it was invented. It's not theft.
There is a lot here to unpack, and much of it doesn't fit in the current thread.

Most certainly, people do a lot of imagining of what other people are thinking. Often, they have a good sense of this. Also, often, they get a lot wrong. There seems to be a lot of this going on when people assign human feelings to material things, like a doll, or to purely imaginary things (just about any character in fiction).

What this says is that telling if an AI is "sentient" or has emotions is harder than it might otherwise seem, because of what seem to be trained-in (or possibly evolved) mechanisms that people use to project characteristics onto other people.

Maybe people and today's computers think the same. But the consensus is that, at least for now, how computers "think" is very different than how people think. This is shown by the differences in capabilities -- what computers do well compared with what people do well. What I've read suggests that as computers / software gains capabilities, that it would be a mistake to necessarily expect that computers will end up thinking in the same (or very similar) fashion to how people think.

I can't say how different it is for software to train on an input compared with a person viewing (and possibly learning from) the same input. (My opinion is that the processing seems different.) But if the work is copyrighted, uses (other than fair uses) are restricted. If authorization is not granted, then it's not granted. If a copyright holder has not authorized a computer to train on a copyrighted material, that seems to be the legal end of things. Certainly we can argue over whether training should be fair use or not, but is it not reasonable to posit that, unless definitely authorized, copyrighted material is not authorized for training software?

TomB
 

You first. This is not a situation where your position is the obvious default case. As I said, that's begging the question.
The burden of proof is on you. It's not begging the question to demand you substantiate your claims with evidence.

That answers that question. No you do not know the difference.
No, AI takes somebody else's work and at best regurgitates it in a way that appears different.
Using it is plagiarism by any definition.

Just wanted to note that if you want to construct a valid criticism of generative AI in the creative area, how Graham did it here is how you do it. It gets at the root of the potential issues in a way that doesn't try to make distinctions that are not clearly true, and doesn't require anyone to be able to look into the quasi-black boxes that are both human brains and neural networks in different ways.
Nobody has to deep-dive into how the code works to point out the problems with AI.

And it's not 'potential' issues, we can already see the problems in real life.

much like the industrial revolution, it is going to help streamline workflows. It's already made some jobs easier and that's just casual use by people using ChatGPT to help write a report or something.
There's no reason to believe that. The opposite's been proven in fact. Law AI that cites fake court cases, college students using AI to write essays, businesses using AI to get out of having to pay artists for their work, AI's thus far proven to be another "THE BLOCKCHAIN!"

Also I noticed nobody's addressed my point about how the top people in AI are incredibly shady. When this guy's in charge why should anybody trust it?
 

Yeah. Because AI isn't about culture. It is about the data you put into it. Cultural bits are only one kind of data.

Thinking about AI (generative or otherwise) from only the point of view of writing fictional text or making pretty pictures is wearing blinders so you don't see most of the possibilities.
Nuclear power also has many possibilities, but I would be a fool not to be keenly aware of its dangers first and foremost. I don't think of it from only one point of view. Assuming that, just because my concerns outweigh the positives I can acknowledge, I am blind to those positives is reductive and dismissive of clear and legitimate concerns. None of this is helped by the fact that many eagerly dance around the broad use of the AI term to encompass a number of actually disparate technologies. Furthermore, my concerns, while moral and cultural, are also professional. I am a teacher. AI proliferation is a legitimate issue in my field of work.
 
