AI/LLMs AI art bans are going to ruin small 3rd party creators

I think it's largely unavoidable for creators not to be influenced in anything they create, with some rare exceptions.
But generally, when looking at (for example) doing a painting and working out what to put in the corner, they are thinking about what they believe will go well there. Yes, that will be framed by background/history/influences, but they are choosing for themselves. They may decide differently depending on time of day/mood/weather etc.
The influences they have will be based on what they were able to be exposed to: schooling, their external environment (which is why you get some horrific reconstructions of animals they haven't seen personally), media, books they've purchased, etc.
An LLM in the same situation isn't thinking about what would go well there; it is instead performing calculations, and the calculation result will say what goes there, and it will take that as given. Given the same input/situation, it will give the same output each time (hence the need to be able to adjust/deepen prompts).
The calculations will be based on its training data and any iterations it has done since (so what you provide in prompts to LLMs, and the outputs you receive, will feed the cycle).

Yes, I acknowledge (most of) the above. Some of it is not quite accurate. But "close enough for guvment work" (as they used to say at the Bath Iron Works...)

In my preferred world, any inputs would have been ethically derived: purchases of training material where it wasn't made publicly and freely available, etc. Just as I can't use art from books for inspiration without buying or borrowing a copy (and following any restrictions on the use of said art, so I can't just copy and then sell unless the licence says I can), LLMs shouldn't be able to use inputs that weren't purchased/loaned with the ability to monetize.

And this is the question I was just asking Morrus.
 


My initial reaction is that the process doesn't matter, that it's the result that counts, and the process argument strikes me as almost an excuse to claim the two things are totally different.
Process almost always matters.

As I mentioned above, art students DO learn by copying. They often pay for the process, but not always. They often work from pieces in the public domain, but not always. They often work from pieces that are still covered by copyright with permission, but not always.

When they work from the copyrighted work of others, it is usually de minimis and/or not for commercial purposes.

AIs capable of making visual or musical works OTOH, are usually trained on MASSIVE amounts of data- more than a human could hope to internalize. There is no distinction made between training them on public domain or copyrighted material: copyright holders are not asked for permission nor given compensation. And the AI output is almost always intended for commercial purposes.

And what will AIs do as fewer and fewer creators share their work publicly? They won’t be innovating anything. That way lies stagnation.
 

And what will AIs do as fewer and fewer creators share their work publicly? They won’t be innovating anything. That way lies stagnation.

Well, they might have to actually start paying for the right to train, including possibly hiring legions of creators to generate content just for training. Either of which they definitely should have done the first time(s) around. I'm certainly not arguing with that.
 

In my preferred world, any inputs would have been ethically derived: purchases of training material where it wasn't made publicly and freely available, etc. Just as I can't use art from books for inspiration without buying or borrowing a copy (and following any restrictions on the use of said art, so I can't just copy and then sell unless the licence says I can), LLMs shouldn't be able to use inputs that weren't purchased/loaned with the ability to monetize.
Agreed with what you're saying, and to expand it further, my simple answer to the tool line of thinking is that we as a species have, probably since the advent of tools, chosen to value human way of life more than inanimate tools. A gen AI that respects humanity (or even doesn't so vividly and completely disrespect it, in terms of our laws and values, and also in terms of the environment we live in, and the space we have to inhabit, and etc., etc.) would be more permissible to many. It's not the fact that it's a tool doing it, it's that it's a really crummy tool towards people, in multiple dimensions.

Put another way: I believe humans learning to paint improves human-ness. Gen AI polluting a lake to do a terrible simulation of painting by applying heuristics doesn't.
 

No, I wasn't making an ends justifies the means argument.

When you refer to the 'means' do you mean that the AI companies trained their models without permission of the artists, and that it didn't constitute fair use, whereas the art student myelinating their own brain (and muscles) on the same content is doing so via copyright-respecting means?

EDIT: In other words, I thought by "process" you were referring to what happens once the prompt is submitted, but maybe you were also including the process of how the model got trained.
Two things. But I feel like I keep saying this over and over and over again.

It’s clearly pointless as I'm restating, again, the basic, primary objections to LLMs like it's new news this many months/years into the constant avalanche of identical threads. I find it utterly bizarre that anybody on these forums (or anywhere else, frankly) even vaguely interested in this topic could possibly not know the basic premises of the anti-LLM argument, even if they disagree with them. I certainly know the constantly repeated pro-LLM arguments verbatim, as they get repeated identically time and time again (usually by the same people). But here it is, again, for the record. I'm sure I'll be typing this out again next week.

1) the mass piracy (they've been caught illegally torrenting massive archives of pirated books) and the plagiarism (you can literally see artists' signatures in some of the outputs, which belies the claims that the LLMs are making something new, though they’ve coded the LLMs to hide that better these days)

2) the massive environmental impact with entire power plants being built to power enormous server farms

Both of these things do harm. And until LLMs can operate without doing harm, the process continues to matter.

whereas the art student myelinating their own brain (and muscles) on the same content is doing so via copyright-respecting means?
I would have a problem with an art student pirating millions of books and then burning down a rainforest, too.
 

And what will AIs do as fewer and fewer creators share their work publicly? They won’t be innovating anything. That way lies stagnation.
Yup, we end up in a world where art dies. LLMs eat their own tails and regurgitate the same slop out over and over ad infinitum, and human creators stop creating because they can’t afford to do so (and the moment they do, the LLMs just slurp it out and slop it out again). Eventually there’s nothing new being made. We’re just regurgitating the same meal over and over until all we eat is an identical grey pasty porridge.
 

I suspect I understand it better than you. I was writing back-propagating neural networks 30+ years ago, and have worked in all kinds of AI and A-Life techniques since then.

That said, I'm not disagreeing with you that the process is different. My initial reaction is that the process doesn't matter, that it's the result that counts, and the process argument strikes me as almost an excuse to claim the two things are totally different.

That said, unless you are sick of discussing this, I truly am curious why you think that difference is essential.
The Book of Numbered Lists (pg. 14)
“Three Steps of COMPETITION
1. Look
2. Learn
3. Stomp”


It is the unauthorized wholesale use of copyrighted material to train the LLM or AI that has some people crying “Foul!”
I’m one of them.

“They” are playing dirty.
That ruins the fun for everyone else.

IMHO.
Errors may be freely attributed to ignorance.
 

One can be creative (which includes imaginative) as all heck and yet still not know how to put that creativity into practice.

I can dream up all kinds of great images in my head but haven't got the art-making ability to put them on paper or screen. That's where I need some sort of processor, be it human or machine, to do it for me to my instructions.
I'm somewhere around the middle on those mind-blindness scales. I am not great at drawing. I got okay at characters with mountains of rote practice, but I am bad at landscapes and drawing that level of depth, and part of that is that the image I'm picturing in my head is pretty fuzzy.

That said:
Some art media are less about imagining and then producing, and more about a push and pull, refining something you can see.

Like 3d sculpting.
[image: e7cfb2ed_original.jpg]

{This piece is old, and it's been a couple of years now since I did any 3D sculpting or modeling. But it 3D printed okay at 35mm tall when I test printed it. Took me about a week to make. My avatar is a screenshot of a prototype 3D VTuber avatar made from scratch; that took me 4-5 weeks.}

And then there are other media that are more about understanding the process, where you can make changes to a nondestructive process to get varied results, like materials design, or using shader nodes and geometry nodes in Blender.

You still have things to learn, but it's a different skillset, and doesn't necessarily have the same prerequisites as being good at illustration, you know?
 

Yup, we end up in a world where art dies. LLMs eat their own tails and regurgitate the same slop out over and over ad infinitum, and human creators stop creating because they can’t afford to do so (and the moment they do, the LLMs just slurp it out and slop it out again). Eventually there’s nothing new being made. We’re just regurgitating the same meal over and over until all we eat is an identical grey pasty porridge.
[image: 1774309911672.png]
 

I don't think you can separate the idea from the execution like that. If you come up with a cool rhyme, you thought up and put together every bit of that rhyme. If you come up with an idea for a cool picture, there are a billion data points in the implementation of that idea that you haven't filled in. Thinking up 'Heh, a painting of a woman with a slight smile' is not the same as painting the Mona Lisa.
Worth noting: the Mona Lisa was a portrait of Lisa Gherardini, wife of a wealthy silk merchant, which da Vinci never delivered to the client. He certainly made decisions in its execution, but he didn't just conjure the image up out of his imagination.

There is another Mona Lisa (the Isleworth Mona Lisa) which some historians think was da Vinci's first draft, IIRC from a period when he was experimenting with canvas (he wrote about how he didn't like it and preferred working on wood). The theory goes that he didn't deliver it because he was unsatisfied with the piece, and that the famous Mona Lisa was made years later, using the original as reference, to prove he could remake it with his later, more refined skill. That's not a point or anything, just a fun history factlet.
 
