The AI Red Scare is only harming artists and needs to stop.

Wow, that /wasitai certainly creates a lot of false positives. I've been feeding it RPG art from before 2020, and it keeps saying it was produced by a robot.
Yikes. I fed it my self-portrait, which I drew in 2006:

Self-portrait, 2006. Marker on colored paper.

It says it was created by AI - which didn't even exist back then, almost 20 years ago. And it's a drawing of me that I did myself!

Undeterred, I fed it my portrait of Gary Oldman, one of the first commissions I ever did:

"Gary Oldman," 2006. Ink and pastel on colored paper.

It also says that Gary Oldman there was created by AI. I beg to differ!

Rob Thomas?

"Rob Thomas," 2008. Marker on paper.

Also created by AI, apparently. Which it certainly was not.

How about Alice Cooper?

"Poison," 2017 Inktober prompt. Marker on paper, 30-minute time limit.

FINALLY it gives me credit for my own work. It says that Alice there was generated by a human.

Yeah, I wouldn't trust /wasitai to tell me anything about a piece of art. Yuck.
 


This is the day that CleverNickName finally realizes he's been AI all along.
 



I want people to use a different word than stealing when... nothing has been stolen. Just like you don't accuse someone of theft when they burn down your house or hit you with a bat.

And more to the point: just because I might argue "arson isn't theft" does not mean I have stated "arson is not a crime".

I'm making a linguistic argument, not a legal one.

I'm fond of misappropriated.
 

See, CleverNickname’s post is what I thought the thread topic was meant to be about in the first place — not the issues with generative AI itself, but the fact that the reaction has led to actual human artists having their work classed as AI-created and therefore unethical, either by tools like the one CleverNickname cites or by overzealous human observers who “can just tell.”
 

Well, you know how the Internet is. People form alliances quickly, dig in their heels, and then yell back and forth without listening to each other.

But yes, that is a real concern--my little demonstration is just the tip of the iceberg. And it's depressing that whenever someone brings it up, the prevailing attitude is "eh, who cares?" As long as certain people can get text and images on demand without having to pay for them, they don't think it's a problem...they think it's progress.
 

The question I see not being answered is, "what comes next?"

Unfortunately, the answer is highly likely to be a political reaction in multiple marketplaces... and thus really isn't discussable here beyond the vaguest "deep pockets lead to ignoring the human costs."
 

Moral panics, unfortunately, happen. They are a common fallout of human cognition.

Saying that such a panic shouldn't happen, or has to stop, is equivalent to saying, "We should be much more rational than we are." Which is a great notion, but it's hard to get a handle on a practical solution.
That's more than a bit dismissive. He's arguing that this moral panic shouldn't happen, and he explains the specific reasons it will hurt those it claims to help.
 

"how art has always worked" - incorrect.

The models use statistical analysis to produce their output, which is a very different method from the one the human brain uses.

Let me use a simpler example: Markov chains. They're early predictive text - given this word, what words come next in a corpus, and how likely is each one? A chain can condition on several preceding words as well. It can put together sentences, much like your autocorrect can. But even if you trained it on every book you've ever read, it would not put text together the way a human author does. The way you put together your post.
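
To make that concrete, here's a minimal sketch of a word-level Markov chain in Python. The one-line corpus is made up purely for illustration; the point is that "generation" here is nothing but counting and sampling:

```python
# Minimal word-level Markov chain: given the current word, sample the next
# word according to how often it follows in the corpus.
import random
from collections import defaultdict

# A tiny made-up corpus, just to show the mechanics.
corpus = "the cat sat on the mat and the cat ran off the mat".split()

# Record every word that follows each word.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=8):
    """Walk the chain: at each step, sample a next word in proportion
    to how often it followed the current word in the corpus."""
    word, out = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:  # dead end: word only appears at the corpus's end
            break
        # choice over the raw occurrence list = frequency-weighted sampling
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat ran off the mat and the"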

It only looks the same when you don't know what happens in the middle for AI art and human art; if all you see is that both have inputs and outputs, you might assume the middle is the same. It's not. In a way that is very much not "how art has always worked".
Please explain how a human brain does it.

Edit: because here's the thing: in my profession (teaching), we are really struggling with what to do about AI, since in many ways it writes better than most humans - and because it suggests that a lot of the things we thought were exceptional about humans... maybe aren't.

For example, for decades I have taught students how to write essays. There are basically four ingredients to the process.

1. Help them perceive the underlying patterns, often through visual aids.
2. Study examples (typically concurrently with 1).
3. Practice writing essays, comparing them to examples.
4. Assess and give feedback.

Rinse and repeat. That...doesn't look particularly different from how LLMs seem to learn. Human brains are much more intuitive at recognizing patterns, and we don't need the vast sample pool of an LLM. We don't know exactly how a human brain manages these processes, but then we don't know exactly how an LLM manages it either. The human brain has a memory of this process and an idea of itself, but there's nothing magical about it.
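
For what it's worth, that four-step loop is recognizable in how models are trained, too. Here's a toy Python sketch - the straight-line "pattern" and all the numbers are invented for illustration, and real LLM training is this cycle at vastly larger scale - showing the same rhythm of attempt, comparison against examples, and feedback:

```python
# Toy training loop: show examples, let the model attempt a prediction,
# assess against the examples, and feed the error back as an adjustment.
import numpy as np

rng = np.random.default_rng(0)

# Steps 1 & 2: an underlying pattern (y = 2x + 1) plus noisy examples of it.
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0   # the model's current "understanding" of the pattern
lr = 0.1          # how strongly each round of feedback adjusts it

for epoch in range(200):
    pred = w * x + b    # step 3: practice - produce an attempt
    err = pred - y      # step 4: assess - compare attempt to the examples
    # Feedback: nudge the parameters to shrink the squared error (gradient step).
    w -= lr * (2 * err * x).mean()
    b -= lr * (2 * err).mean()

print(f"learned y = {w:.2f}x + {b:.2f}")  # close to the true 2x + 1
```

Note that the model never stores the examples themselves; the feedback just nudges two numbers until its attempts match the pattern.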

I do not see how training an LLM by having it read a lot is ethically distinct from training a human brain by having it read a lot.
 

