The AI Red Scare is only harming artists and needs to stop.

Yikes. I fed it my self-portrait, which I drew in 2006:
View attachment 367318
Self-portrait, 2006. Marker on colored paper.

It says it was created by AI, which didn't even exist in this form almost 20 years ago. And it's a drawing of me that I did myself!

Undeterred, I fed it my portrait of Gary Oldman, one of the first commissions I've ever done:
View attachment 367319
Gary Oldman, 2006, ink and pastel on colored paper

It also says that Gary Oldman there was created by AI. I beg to differ!

Rob Thomas?
View attachment 367320
"Rob Thomas," 2008. Marker on paper.

Also created by AI, apparently. Which it certainly was not.

How about Alice Cooper?
View attachment 367321
"Poison," 2017 Inktober prompt. Marker on paper, 30-minute time limit.

FINALLY it gives me credit for my own work. It says that Alice there was generated by a human.

Yeah, I wouldn't trust /wasitai to tell me anything about a piece of art. Yuck.
For their main assessment, my IB (International Baccalaureate) Theory of Knowledge students have to write an essay exploring one of six knowledge questions. They write it as a series of drafts, each of which gets teacher feedback, before uploading the final version for external moderation.

Before the essay is uploaded, we run it through TurnItIn.com to check for plagiarism and, as of last year, through an AI checker. It flagged two essays as AI, even though I knew, and had documented, that they were not: I had met with the students to review their initial outline and proposal, assessed an initial rough draft, and given further verbal feedback as they made revisions towards a final draft.

On the other hand, I had one student who was sketchy during the process, and handed in a final draft very late, and with most of the body paragraphs obviously written by AI. I busted him on it myself, and he had to rewrite the whole thing under supervision. But not before that draft was run through the AI checker...which declared that it was 100% the work of a human. A human who had already confessed to using an AI for most of it.
 


Well, if you unpack it, they said: "We used copyrighted material, but we are convinced that that doesn't violate copyright." So whether they admitted that they're violating copyright law or not depends on whether you consider what they did a violation of copyright. Them claiming that whatever they did wasn't a violation doesn't really enter into it.

Neither does it depend on "whether you consider what they did a violation of copyright"; the only thing that matters is whether the law considers what they are doing a violation of copyright.

Which has been ongoing since 2020 - https://www.courtlistener.com/docket/18689287/uab-planner5d-v-facebook-inc/

But there is a strong case that just training AI isn't a breach of copyright law.

"Copyright infringement requires not just copying of a work’s material form but also the unauthorized use of the work for its expressive purpose. Merely technical or non-communicative uses are not uses of a work for its expressive purpose and therefore are not copyright infringement."

"Copyright protects creative expression, but model training extracts unprotectable ideas and patterns from data."

source - https://btlj.org/wp-content/uploads/2023/02/0003-36-4Quang.pdf

"(b) In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work." source - 17 U.S.C. § 102(b) (Subject matter of copyright: In general)

Now, whether it is ethically right is a whole other kettle of fish.
 

If I am trying to publish an RPG and I pay for art from artists, I'm supporting my community. If I publish an RPG and I use an AI program that uses an unpaid artist's work to produce art, I'm not supporting my community.
If you publish an RPG and create the art yourself you're not supporting your community either, but - quite legitimately - nobody would bat an eye about it.

And like it or not, AI art does level the playing field in that it gives those who a) can't afford to pay artists and b) aren't good enough artists to do the art themselves an avenue to still get some art into their RPGs.
 

Please explain how a human brain does it.
Earlier in this thread we got into a discussion of how we don't know exactly how the brain does it, but we do know some ways it does not. So being able to explain how the brain does it isn't a requirement for discussing how AI does it in a way the brain does not.

But all of that said, let me take a stab at describing the process using human vs. AI art.

A human artist capable of producing a realistic image similar to what AI art does envisions the various objects as they exist in a 3D world and establishes a point of view from which to render them. They translate from 3D to 2D, including what can be seen and what blocks the view, where the light sources are, perspective and foreshortening, etc. They start with the real world and then work out how it looks in an image. Heck, we've had 3D rendering programs for a long time that are made to emulate that process, and it's what 3D games do.
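That 3D-to-2D translation is what a renderer automates. Here's a minimal sketch of the idea using a pinhole-camera projection (the function name and the sample points are invented for illustration, not from any particular engine):

```python
# Minimal pinhole-camera sketch: project a 3D point onto a 2D image
# plane, the same translation an artist (or a 3D renderer) performs
# after picking a point of view.

def project(point, focal_length=1.0):
    """Project a 3D point (x, y, z) onto a 2D image plane.

    The camera sits at the origin looking down the +z axis; points
    with larger z are farther away, so they land closer to the image
    center (foreshortening).
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal_length * x / z, focal_length * y / z)

# Two posts of the same height, one twice as far away:
near = project((1.0, 1.0, 2.0))  # (0.5, 0.5)
far = project((1.0, 1.0, 4.0))   # (0.25, 0.25) - farther = smaller
```

Because the 3D scene exists independently of the image, moving the camera a few degrees left and a foot forward just means re-running the same projection from a new origin - which is exactly the redo-the-scene ability described below.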

AI art is a model generating the image via statistical analysis. It does not involve that process at all, since there was never a 3D model to translate. A human artist could redo the exact same scene but drawn a few degrees to the left and a foot forward. An AI model can't.

Edit: because here's the thing: in my profession (teaching) we are really struggling with what to do about AI, since in many ways it writes better than most humans. But also since it suggests that a lot of the things we thought were exceptional about humans are... maybe not so much.
"Writes better."

One common issue with LLMs (Large Language Models - AI writing) is what are now called "hallucinations". I'm not fond of that as a descriptor, but it's in common usage. If they have information, they can use it. If they don't, they will often make information up. Not so different from a human - except that they can't tell they made it up. They can sprinkle in falsehoods and incorrect information without knowing it.

An example of this was with ChatGPT-3.5: we were playing a new board game, Dice Theme Park, and asked it for strategies. There were whole sections about Mascots and such that just don't exist in the game, but they were presented with the same confidence as everything else.

A human writer would know when they are bulling around. But there is no "they" to understand this with LLMs. We anthropomorphize them because it seems like someone is talking to us, and because we humans anthropomorphize lots of things. Pets. Cars. Computers. What have you.

Instead, it's taking the current and previous prompts and statistically generating words. It's spicy autocorrect. Yes, it's the Porsche of conversation compared to autocorrect's horse and buggy, but being more advanced just means it's better at its job - that it picks the right words - not that it's actually thinking about the concepts.
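To make "statistically generating words" concrete, here's a toy sketch of that sampling step. The probability table below is made up for illustration - a real LLM conditions on the entire context with billions of learned weights - but the core move is the same: pick the next token from a probability distribution.

```python
import random

# Toy "spicy autocorrect": given the previous word, sample the next
# one from a hand-made probability table. (The table is invented for
# illustration, not taken from any real model.)
NEXT_WORD = {
    "the":    [("dragon", 0.5), ("cave", 0.3), ("dice", 0.2)],
    "dragon": [("sleeps", 0.6), ("hoards", 0.4)],
}

def next_word(prev, rng):
    """Sample a likely next word given the previous one."""
    words, probs = zip(*NEXT_WORD[prev])
    return rng.choices(words, weights=probs, k=1)[0]

rng = random.Random(0)  # seeded so the run is repeatable
sentence = ["the"]
while sentence[-1] in NEXT_WORD:
    sentence.append(next_word(sentence[-1], rng))

generated = " ".join(sentence)
```

No step in that loop models the meaning of "dragon" - it only ever asks which word tends to follow which, which is the point being made above.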

Generating output from input that looks human - yes. Is generated by the same process - not at all.

Frankly, it's the anthropomorphism that's a big part of the perception issue. Because people treat it like a human, they mistakenly compare it to how a human would learn.
 
Last edited:

And like it or not, AI art does level the playing field in that it gives those who a) can't afford to pay artists and b) aren't good enough artists to do the art themselves an avenue to still get some art into their RPGs.
Free public domain art for commercial use was available before AI. And you will often have to pay anyway if you want better AI-produced art, since many generators operate as freemium services. So it's still mostly about the haves and have-nots. 🤷‍♂️
 


"Writes better."

One common issue with LLMs (Large Language Models - AI writing) is what are now called "hallucinations". I'm not fond of that as a descriptor, but it's in common usage. If they have information, they can use it. If they don't, they will often make information up. Not so different from a human - except that they can't tell they made it up. They can sprinkle in falsehoods and incorrect information without knowing it.

An example of this was with ChatGPT-3.5: we were playing a new board game, Dice Theme Park, and asked it for strategies. There were whole sections about Mascots and such that just don't exist in the game, but they were presented with the same confidence as everything else.

A human writer would know when they are bulling around. But there is no "they" to understand this with LLMs. We anthropomorphize them because it seems like someone is talking to us, and because we humans anthropomorphize lots of things. Pets. Cars. Computers. What have you.

Instead, it's taking the current and previous prompts and statistically generating words. It's spicy autocorrect. Yes, it's the Porsche of conversation compared to autocorrect's horse and buggy, but being more advanced just means it's better at its job - that it picks the right words - not that it's actually thinking about the concepts.

Generating output from input that looks human - yes. Is generated by the same process - not at all.

Frankly, it's the anthropomorphism that's a big part of the perception issue. Because people treat it like a human, they mistakenly compare it to how a human would learn.
Legal Eagle talks about AI hallucinations in a video about how two lawyers used ChatGPT for a court case, and ChatGPT made up fictitious court cases, which resulted in them getting into serious hot water with the judge. Legal Eagle has a number of videos on the legal issues of AI, including a few involving generated images.
 

It's not a scare; it is extremely real. I've commissioned a few pieces of art over the years, but the last three or four pieces I've made have been with a free AI creator, and they are far better than what I had purchased in the past. All it required was a simple text description and a few minutes of trial and error.
 

It's not a scare; it is extremely real. I've commissioned a few pieces of art over the years, but the last three or four pieces I've made have been with a free AI creator, and they are far better than what I had purchased in the past. All it required was a simple text description and a few minutes of trial and error.
I'd honestly be interested to see some good AI RPG art - because for now, I'm mainly looking at stuff thinking, "well, that looks like another ugly AI pic", not even knowing whether it's true or not. For example, if you would present this cover to me:
I'd probably say: "Yeah, that has to be AI", but really, is it? Or is it only in a certain style that I happen to dislike and associate with AI?

I think I have an easier time figuring out whether a text was created by an AI: for one, there are the hallucinations (like a text about Star Trek that recently came up on Facebook and mentioned an episode that simply doesn't exist); also, they tend to be extremely repetitive, saying the same thing over and over again in different words.
 

Neither does it depend on "whether you consider what they did a violation of copyright"; the only thing that matters is whether the law considers what they are doing a violation of copyright.

Which has been ongoing since 2020 - https://www.courtlistener.com/docket/18689287/uab-planner5d-v-facebook-inc/

But there is a strong case that just training AI isn't a breach of copyright law.

"Copyright infringement requires not just copying of a work’s material form but also the unauthorized use of the work for its expressive purpose. Merely technical or non-communicative uses are not uses of a work for its expressive purpose and therefore are not copyright infringement."

"Copyright protects creative expression, but model training extracts unprotectable ideas and patterns from data."

source - https://btlj.org/wp-content/uploads/2023/02/0003-36-4Quang.pdf

"(b) In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work." source - 17 U.S.C. § 102(b) (Subject matter of copyright: In general)

Now, whether it is ethically right is a whole other kettle of fish.

I'm not finding the Quang paper convincing: essential steps are simply declared to be true. For example:

https://btlj.org/wp-content/uploads/2023/02/0003-36-4Quang.pdf
Copyright law distinguishes between creative expression and unprotectable ideas. In this Note, "data mining" will specifically refer to the mining of expressive data (i.e., literary works, photographs, video) for functional, or non-expressive, purposes. Expressive applications of data mining (i.e., AI-generated art, music, and literature) are outside of the scope of this analysis.

Bold added by me. This is the heart of the analysis, and it cannot simply be declared to be true.

TomB
 
Last edited:

