WotC: 'Artists Must Refrain From Using AI Art Generation'

WotC to update artist guidelines moving forward.

After it was revealed this week that one of the artists for Bigby Presents: Glory of the Giants used artificial intelligence as part of their process when creating some of the book's images, Wizards of the Coast has made a short statement via the D&D Beyond Twitter (X?) account.

The statement is in image format, so I've transcribed it below.

Today we became aware that an artist used AI to create artwork for the upcoming book, Bigby Presents: Glory of the Giants. We have worked with this artist since 2014 and he's put years of work into books we all love. While we weren't aware of the artist's choice to use AI in the creation process for these commissioned pieces, we have discussed with him, and he will not use AI for Wizards' work moving forward. We are revising our process and updating our artist guidelines to make clear that artists must refrain from using AI art generation as part of their art creation process for developing D&D art.


-Wizards of the Coast​




Ilya Shkipin, the artist in question, talked about AI's part in his process during the week, but has since deleted those posts.

There is recent controversy on whether these illustrations I made were ai generated. AI was used in the process to generate certain details or polish and editing. To shine some light on the process I'm attaching earlier versions of the illustrations before ai had been applied to enhance details. As you can see a lot of painted elements were enhanced with ai rather than generated from ground up.

-Ilya Shkipin​

 


FrogReaver

As long as i get to be the frog
Well then we'd truly have artificial intelligence as it would be able to understand what it was experiencing. I think that's still quite a ways off. Believe me, I'm quite worried about Generalized AI, but I think we're not much closer to that despite the appearances.
IMO. That isn’t required either.

What we currently have is a very clever shortcut that relies on imitating human creativity by consuming patterns and using those patterns to produce new things but it has absolutely no understanding of the pattern and thus cannot regulate its inputs.
Kind of. It’s not exactly clear what ‘understanding’ means, though.

It’s also true that before kids understand they mimic.
 


FrogReaver

As long as i get to be the frog
Not only do they already do this, it is already causing a problem — outputs are degenerating as the AI regurgitates itself. Even its maths ability has degraded. Without the original input of people, it’s nothing.
One new technological advancement away from solving that problem.
 

Morrus

Well, that was fun
Staff member
One new technological advancement away from solving that problem.
That’s not an advancement. That’s a complete redefinition of the technology, making it a totally different thing from what it currently does. It’s literally a different subject, and not one even conceptually viable at present. AI can’t create, period. You’d need something radically new and different to do that, something that doesn’t exist. You might as well be talking about magic.
 

Not only do they already do this, it is already causing a problem — outputs are degenerating as the AI regurgitates itself. Even its maths ability has degraded. Without the original input of people, it’s nothing.

Yeah, this is why so much of the hype around AI destroying the world or taking over information/knowledge jobs is incredibly premature: without those people, they don't have a dataset to farm and suddenly you have the "trash in, trash out" problem. It still can't parse problems in a meaningful way that requires critical thought, which is why you get all the hilarity of people trying to use it to answer legal questions (not necessarily in court, but I've seen several people try it on Twitter and get hilariously smacked down).

One new technological advancement away from solving that problem.

That's like saying we're one step away from warp travel. What you're talking about is a huge advancement, one that we really don't have a hint of yet.
 

FrogReaver

As long as i get to be the frog
Yeah, this is why so much of the hype around AI destroying the world or taking over information/knowledge jobs is incredibly premature: without those people, they don't have a dataset to farm and suddenly you have the "trash in, trash out" problem. It still can't parse problems in a meaningful way that requires critical thought, which is why you get all the hilarity of people trying to use it to answer legal questions (not necessarily in court, but I've seen several people try it on Twitter and get hilariously smacked down).
This I agree with.

That's like saying we're one step away from warp travel. What you're talking about is a huge advancement, one that we really don't have a hint of yet.
I’d suggest this is more of a small iteration. Maybe it ends up not being possible, but it’s not the kind of jump that warp drive would be.
 


Clint_L

Hero
But it is. The reproduced signatures are dead giveaways.
Well, no. It's counterintuitive, but from the AI's perspective, the signature is simply part of the style that it is aping - with the Frazetta example, it can't easily tell the difference between his signature and his style of painting gorgeously sculpted muscles.
Exactly. It’s a meaningless distinction. And a reduction of the issue into one of semantics, which avoids the actual debate.
I really don't think this is the case - here the specifics are very important not just legally, but potentially ethically as well.

For example, when studying an author, a very typical assignment is to ask students to write their own piece in the style of that author - a pastiche. Similarly, art teachers routinely ask students to paint something in the style of Picasso, or whatever. Music teachers do the same with compositions. This is done to help students learn and understand what makes those artists interesting and distinct. I don't think anyone has a problem with that sort of copying, legally or ethically.

But what if you publish the work in some way? After the Beatles broke, huge numbers of bands immediately started aping everything from their suits to their haircuts to George Harrison's 12-string Rickenbacker, not to mention their style of musical composition. Again, mostly totally okay legally. Lots of folks looked down on The Monkees creatively, and maybe even ethically, but they weren't getting sued by EMI. On the other hand, if you get too close and copy something that is distinct enough, you can get into legal trouble, as several of the Beatles would themselves eventually find out. But that is a huge grey area and subject to continual litigation.

Similarly, I don't think using AI to copy an artistic style is a cut and dry legal or ethical issue - it is going to come down to how exact the copy is, and would probably have to be litigated on a case by case basis. The blanket legal issues seem to be coming from a different kind of copying, in how AIs are trained. But even here, it isn't cut and dry. For example, what about an AI trained completely on material that is in the public domain? And what makes how an AI "trains" in an artistic style different from how a human "trains" in an artistic style, given that we don't fully understand the former, and are considerably farther from understanding the latter?

I don't think these are semantic questions at all; I think they get right at the heart of why this issue is so confounding and why well-intentioned people can come to radically different conclusions.

Personally, I don't see much ethically wrong with the WotC artist in this case using AI to enhance their own art; I suspect they viewed it as similar to using Adobe or Grammarly, and I do as well. I do think that companies training AIs, since they are intended for commercial purposes, should work out fair compensation for the artists they are using, or stop using them, though I recognize that my reasoning for requiring this of an AI and not a human commercial artist might not be entirely logically coherent. And I think that these events are moving at such a rapid pace that all of our current beliefs are going to seem woefully outdated very quickly.
 

robus

Lowcountry Low Roller
Supporter
Yeah, this is why so much of the hype around AI destroying the world or taking over information/knowledge jobs is incredibly premature: without those people, they don't have a dataset to farm and suddenly you have the "trash in, trash out" problem. It still can't parse problems in a meaningful way that requires critical thought, which is why you get all the hilarity of people trying to use it to answer legal questions (not necessarily in court, but I've seen several people try it on Twitter and get hilariously smacked down).
Agreed, I'm really not concerned about AI taking away my programming job because it has zero understanding of the problems I'm trying to solve. The only way it's producing code at the moment is because there's a ton of sample code fragments (created by humans) that it's digested, and it's able to regurgitate fragments of that code in different forms. I guess it might have digested a ton of open source code too, but I think if I asked it to produce a new operating system it wouldn't get very far. :) A function to do some small thing, sure. A sophisticated piece of software? Forget about it.
 

It’s really not. It’s an entirely different technology. Creating new stuff just isn’t what AI does. It’s not an iteration, it’s a different technology (one that doesn’t exist).
The AI isn't working on its own. There is a human element, the person who enters the prompt and then picks the generated image that's closest to what they wanted, often over multiple iterations. The end result may not be strictly "new", but with a clever prompt it can be close enough to "new" for all practical purposes.
 

Snarf Zagyg

Notorious Liquefactionist
It’s really not. It’s an entirely different technology. Creating new stuff just isn’t what AI does. It’s not an iteration, it’s a different technology (one that doesn’t exist).

From This American Life, episode 803, discussing an experience with ChatGPT4-

Things got stranger, though. Sebastien woke up, middle of the night, with this thought-- I wonder if it can draw. Because again it's been trained on words. It has never seen anything.

Drawings seem completely outside its realm. There are other AI models trained specifically to create images, but this one, again, only knew words. It's just playing the game of "what is the next word I should spit out?" To test this, he needed a way for it to even be able to try to draw. So he does something clever.

He asks it to write a piece of computer code to draw something. And the coding language he asks it to use, he picks something intentionally obscure, not really meant for drawing pictures at all. It's called TikZ. OK, so he has this idea, gets out of bed, opens up his laptop, and types in draw me a unicorn in TikZ. He has two little kids asleep in the next room who are always talking about unicorns.

Sebastien Bubeck- And it started to output lines of code. I take those lines of code, put it into a TikZ compiler, and then I press enter. And then, boom, you know, the unicorn comes on onto the screen.

The thing she's describing is they took the code it had written for drawing the unicorn, they edited it to take out the horn, and turned the unicorn around so it was facing the opposite direction. Then they fed that code back to a new session of GPT-4 and said-- "This is code for drawing a unicorn, but it needs a horn. Can you add it?" It put it right on the head.
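For readers who haven't seen TikZ: it's a LaTeX package in which pictures are built entirely from text commands, which is why it works as a test of a text-only model. The following is my own minimal sketch of the kind of program being described, assembling a crude unicorn from basic shapes, and not the actual code GPT-4 produced in the episode:

```latex
% Hypothetical illustration only: compile with pdflatex to render.
\documentclass[tikz,border=5pt]{standalone}
\begin{document}
\begin{tikzpicture}
  \draw (0,0) ellipse (1.2 and 0.7);        % body
  \draw (1.4,0.9) circle (0.4);             % head
  \draw (1.6,1.25) -- (1.9,1.9);            % horn
  \foreach \x in {-0.7,-0.3,0.3,0.7}
    \draw (\x,-0.6) -- (\x,-1.2);           % four legs
\end{tikzpicture}
\end{document}
```

The point of the anecdote is that editing commands like the horn line above, then asking the model to restore it, probes whether it has any spatial notion of where a horn belongs.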

....


There are other examples. But as these models quickly evolve, the issue I think more people are grappling with is not what they can or can't do ... but rather, why do we keep insisting we are special?

I don't have good answers to the last question.
 
