
WotC: 'Artists Must Refrain From Using AI Art Generation'

WotC to update artist guidelines moving forward.

After it was revealed this week that one of the artists for Bigby Presents: Glory of the Giants used artificial intelligence as part of their process when creating some of the book's images, Wizards of the Coast has made a short statement via the D&D Beyond Twitter (X?) account.

The statement is in image format, so I've transcribed it below.

Today we became aware that an artist used AI to create artwork for the upcoming book, Bigby Presents: Glory of the Giants. We have worked with this artist since 2014, and he's put years of work into books we all love. While we weren't aware of the artist's choice to use AI in the creation process for these commissioned pieces, we have discussed with him, and he will not use AI for Wizards' work moving forward. We are revising our process and updating our artist guidelines to make clear that artists must refrain from using AI art generation as part of their art creation process for developing D&D art.


-Wizards of the Coast​




Ilya Shkipin, the artist in question, talked about AI's part in his process during the week, but has since deleted those posts.

There is recent controversy over whether these illustrations I made were AI generated. AI was used in the process to generate certain details or polish and editing. To shine some light on the process I'm attaching earlier versions of the illustrations before AI had been applied to enhance details. As you can see, a lot of painted elements were enhanced with AI rather than generated from the ground up.

-Ilya Shkipin​

 


robus

Lowcountry Low Roller
Supporter
The AI isn't working on its own. There is a human element, the person who enters the prompt and then picks the generated image that's closest to what they wanted, often over multiple iterations. The end result may not be strictly "new", but with a clever prompt it can be close enough to "new" for all practical purposes.
We're talking about the degeneration problem. AI can't consume its own content without going mad.
 




Clint_L

Hero
The AI isn't working on its own. There is a human element, the person who enters the prompt and then picks the generated image that's closest to what they wanted, often over multiple iterations. The end result may not be strictly "new", but with a clever prompt it can be close enough to "new" for all practical purposes.
I mean, now we are getting into semantics over what "new" means. Take my sarcastic office memo example from earlier in this (?) thread. That memo is definitely something that never existed in the world before. Yes, the individual words existed, but not arranged in that way and with that effect. It was created by ChatGPT in response to fairly minimal input.

If I had given that assignment to a writing student, we would not question that the resulting product was a new thing that they created. So we start to get into philosophical questions about what it means for something to be "new" or "creative," the extent to which art requires intentionality, and so on.
 

Lanefan

Victoria Rules
And of course, when artists can no longer functionally exist in a viable career, and AI simply regurgitates existing art instead of making new art, we end up in a world where new art ceases to be created.
That's a rather extreme position.

Even if AI ends up creating a lot of commercial/advertising/illustrative art, new art will always continue to be created by real people even if only for fun, and some of that human-made art will be good enough that other people (or even corporations) will buy it.
I also think that this slide is not as inevitable as some folks suggest, and that legislation can redirect this onto a more ethical path. Companies like Kickstarter and DTRPG are already enacting policies, and that is becoming more widespread; legislatures can do likewise. Requiring disclosure of data sets, opt-in for artists, and compensation when their work is regurgitated is not exactly an unreasonable or far-fetched possibility.
That compensation piece will get messy once public-domain art gets factored in: did that AI piece pull from this piece of compensatable art or that piece of public-domain art...and how can you tell?
 

Nikosandros

Golden Procrastinator
There are other examples. But as these quickly evolve, the issue I think more people are grappling with is not what they can or can't do ... but rather, why do we keep insisting we are special?

I don't have good answers to the last question.
Yeah, I think that those developments are challenging our assumptions about what is exactly human thought and we might not like the answer too much...
 

Umbran

Mod Squad
Staff member
Supporter
Well, there are different ethical and legal implications. AI art isn’t copied directly, it’s more “in the style of,” like pastiche. Talking about the product here, I’ll address the process too.

But it is. The reproduced signatures are dead giveaways.

So, I am not an expert on generative AI, but I've read up a bit because this is culturally relevant, and some of my early doctoral research was on training neural networks1. Actual software engineers, feel free to correct me.

How do generative AIs work? Broadly and generically, here's how:

1) Assemble a training set of data - generally, a bunch of actual examples that you want the AI to simulate. Usually a very large bunch, if you want good results. You also include metadata - with the actual Frazetta works you include in the training data, you tag them as being by Frazetta, or in Frazetta's style. You might also tag them as being fantasy, containing dragons, Conan, barbarians, axes, etc.

2) Train the AI - there are a bunch of ways to do this, but we can use one simple method to demonstrate some of the activity - you present the system with the entire set of tags, and one example from the training set, and ask the machine to guess what tags apply to the example.

If the system guesses right, parts of the algorithm that are responsible for that guess are strengthened. If it guesses wrong, the parts of the algorithm are weakened. In either case, it updates a "reference dataset" with the example, associated with the right tags, for later.

Lather, rinse, repeat. Each repeat alters its algorithm, and the reference set, to maximize its ability to answer correctly.

3) Then, to use the generative AI, you reverse the process. You hand it a collection of tags (the description of what you want it to produce), and it spits out a collection of stuff from its reference set that the algorithm says matches those tags.
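The tag-guessing loop in steps 1–3 can be sketched as a toy program. To be clear, this is a deliberate simplification of the description above, not how production image generators actually work (those use deep neural networks trained by gradient descent); the data set, tags, and feature words here are all made up for illustration.

```python
# Toy sketch of the strengthen/weaken training loop described above.
# Feature words stand in for pixels; tags are the metadata labels.
# All names and data below are hypothetical.

ALL_TAGS = ["frazetta", "dragon", "barbarian", "axe"]

# Step 1: a tiny "training set" - each example pairs feature words
# with the tags a human applied to it.
TRAINING_SET = [
    ({"signature", "muscles", "sword"}, {"frazetta", "barbarian"}),
    ({"signature", "wings", "fire"},    {"frazetta", "dragon"}),
    ({"wings", "fire", "scales"},       {"dragon"}),
    ({"muscles", "axe"},                {"barbarian", "axe"}),
]

# Weights: how strongly each feature is associated with each tag.
weights = {tag: {} for tag in ALL_TAGS}

def guess_tags(features):
    """Guess which tags apply, from the current feature->tag weights."""
    scores = {tag: sum(weights[tag].get(f, 0.0) for f in features)
              for tag in ALL_TAGS}
    return {tag for tag, s in scores.items() if s > 0.5}

# Step 2: train - strengthen associations the guess missed,
# weaken associations behind false alarms. Lather, rinse, repeat.
for _ in range(20):
    for features, true_tags in TRAINING_SET:
        guessed = guess_tags(features)
        for tag in ALL_TAGS:
            for f in features:
                w = weights[tag].get(f, 0.0)
                if tag in true_tags and tag not in guessed:
                    weights[tag][f] = w + 0.2   # missed: strengthen
                elif tag not in true_tags and tag in guessed:
                    weights[tag][f] = w - 0.2   # false alarm: weaken

# Step 3: "reverse" the process - given a tag, emit the features most
# strongly associated with it. Because "signature" appears in every
# frazetta-tagged example, it comes out tied to the frazetta tag,
# which is exactly the signature-regurgitation effect described below.
def generate(tag, top_n=2):
    ranked = sorted(weights[tag].items(), key=lambda kv: -kv[1])
    return [f for f, w in ranked[:top_n] if w > 0]
```

Even in this toy version, asking it to "generate" in the frazetta style hands back the signature feature, because the machine has no way to know the signature isn't part of the style.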

So, since Frazetta signed all his work, that signature appears in every example of his work presented, and his signature becomes strongly associated with the tag of his name. The AI will often spit that signature out in response to a query for his style - for the machine, his "style" includes his signature, you see.

Thus - assembling the training data is an act that likely violates copyright for prose or visual art generative AIs, because building that set means making digital copies that are not for personal use. The reference set will also retain snippets of the original data, like Frazetta's signature, which can violate copyright in much the same way that song sampling can infringe on a musician's copyright.



1 I was working on training a neural network to simulate high energy particle physics events. My datasets were publicly available data from high energy particle accelerators/colliders. No sketchy source data for me!
 

Morrus

Well, that was fun
Staff member
That's a rather extreme position.
No, it’s pretty middle of the road.

Even if AI ends up creating a lot of commercial/advertising/illustrative art, new art will always continue to be created by real people even if only for fun, and some of that human-made art will be good enough that other people (or even corporations) will buy it.

Not to the needed extent. A few artists making art for free which gets fed into the machine without compensation to them is not going to sustain entire industries.

That compensation piece will get messy once public-domain art gets factored in: did that AI piece pull from this piece of compensatable art or that piece of public-domain art...and how can you tell?
You require disclosure and oversight and you regulate it. Just like they do in many other industries. It’s not unusual.
 

Clint_L

Hero
There are other examples. But as these quickly evolve, the issue I think more people are grappling with is not what they can or can't do ... but rather, why do we keep insisting we are special?

I don't have good answers to the last question.
That is what teachers are really wrestling with. We have been teaching based on a theory of the mind that is rooted in a whole lot of cultural and metaphysical assumptions. We've known for some time that there are problems with this model, but education is a vast industry with a whole lot of inertia. Trillions of dollars and centuries worth of inertia. And this new technology is revealing that a lot of things we had assumed were uniquely special about human minds might not work at all like we thought.

We aren't close to wrapping our heads around the implications yet, or understanding what it means for education going forward. For us, this is an unparalleled existential crisis. This system with vast inertia has just hit a mighty big iceberg.
 

Umbran

Mod Squad
Staff member
Supporter
Even if AI ends up creating a lot of commercial/advertising/illustrative art, new art will always continue to be created by real people even if only for fun, and some of that human-made art will be good enoguh that other people (or even corporations) will buy it.

Why would a corporation buy it when they can wait for someone to post it online and scrape it?

That compensation piece will get messy once public-domain art gets factored in: did that AI piece pull from this piece of compensatable art or that piece of public-domain art...and how can you tell?

Part of regulation would need to be an audit trail on the AI training sets. And how can you tell? Well, right now, how does the music industry know when someone has sampled a song they didn't have rights to?
 
