WotC: 'Artists Must Refrain From Using AI Art Generation'

WotC to update artist guidelines moving forward.

After it was revealed this week that one of the artists for Bigby Presents: Glory of the Giants used artificial intelligence as part of their process when creating some of the book's images, Wizards of the Coast has made a short statement via the D&D Beyond Twitter (X?) account.

The statement is in image format, so I've transcribed it below.

Today we became aware that an artist used AI to create artwork for the upcoming book, Bigby Presents: Glory of the Giants. We have worked with this artist since 2014 and he's put years of work into books we all love. While we weren't aware of the artist's choice to use AI in the creation process for these commissioned pieces, we have discussed with him, and he will not use AI for Wizards' work moving forward. We are revising our process and updating our artist guidelines to make clear that artists must refrain from using AI art generation as part of their art creation process for developing D&D art.


-Wizards of the Coast​




Ilya Shkipin, the artist in question, talked about AI's part in his process during the week, but has since deleted those posts.

There is recent controversy on whether these illustrations I made were AI generated. AI was used in the process to generate certain details or for polish and editing. To shine some light on the process, I'm attaching earlier versions of the illustrations before AI had been applied to enhance details. As you can see, a lot of painted elements were enhanced with AI rather than generated from the ground up.

-Ilya Shkipin​

 


Thank you for the clarification. I had actually assumed that was the case, but I didn't know it for certain. I also think it is an important distinction.

From a legal point of view, it's an important distinction: you need a database of images to train a model, but the model itself (the file used to generate images) doesn't contain any image or part thereof, which is what would constitute a copyright infringement. It's also technically a good thing: models are already huge (1.4 GB for Stable Diffusion models and 6 GB for SD XL, for example), and they would be a challenge to distribute if they had to incorporate a library of images.

The model is a file that guides the generator toward the desired result. The generator starts by producing an image of pure random noise, then removes that noise in several steps. Of course, if the noise removal were random, the result would simply be another random mess of pixels. That's why a model is used at this step: so that the denoising isn't random.
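To make that loop concrete, here is a minimal sketch using the Hugging Face diffusers library. It uses an unconditional demo model ("google/ddpm-cat-256") purely to show the denoise-in-steps idea; Stable Diffusion itself adds text conditioning and works in a compressed latent space, but the loop has the same shape:

```python
import torch
from diffusers import UNet2DModel, DDPMScheduler

# An unconditional demo model, just to illustrate the denoising loop itself
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
scheduler.set_timesteps(50)  # denoise over 50 steps

# Start from a full-noise image...
sample = torch.randn(1, 3, 256, 256)

# ...and remove the noise step by step, guided by the model
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample          # model predicts the noise in the sample
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # strip a little of it away
```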

The person operating the generator provides a "prompt": a text with keywords that guides the denoising process toward generating an image that is (more or less, depending on the quality of the model) faithful to the prompt. It doesn't need to refer to any picture, only to the model, which contains (in layman's terms, and I am one) mathematical data that steers the denoising process toward a given result. By activating a few of those data specifically, the denoising isn't random; it's guided by the words in the prompt, leading to an image that should be acceptable to the viewer. No reference images are needed.
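That prompt-guided version is what tools built on Stable Diffusion expose directly. A minimal sketch with the diffusers StableDiffusionPipeline (the checkpoint name and prompt are just illustrative choices, and this assumes a CUDA GPU is available):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint; note the weights ship without any image library
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The prompt steers the denoising; guidance_scale controls how strongly it does so
image = pipe(
    "a dinosaur with scaly skin, fantasy illustration",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("dinosaur.png")
```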

It's also how you can use the process to enhance or modify an image: you take an existing image, add a certain quantity of noise to it, and then run the denoising process. If it's too little noise, you'll get something very close to the original image; if it's too much, you'll get a totally random image. In this case, I guess the artist took the dinosaur-leg part of the image (for example) and prompted that it should be dinosaur skin; the generator added noise to the image and denoised the leg following the prompt. The result keeps the original shape of the leg from the original image but gets its texture from denoising toward "dinosaur skin": a mix of the original concept art and the AI intervention.
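That add-noise-then-denoise workflow is commonly exposed as "img2img", and its strength parameter is exactly the quantity-of-noise trade-off described above. A hedged sketch, again with diffusers (the file names and prompt are invented for illustration):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical starting point: the artist's own concept sketch
init_image = Image.open("dinosaur_sketch.png").convert("RGB")

# strength sets how much noise is added before denoising:
# near 0.0 returns almost the original image; near 1.0 all but ignores it
result = pipe(
    prompt="dinosaur leg with detailed scaly skin",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("dinosaur_enhanced.png")
```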

If the artist criticized by WotC had worked on his own sketch, I wonder what their reaction would have been.
 

dave2008

Legend
If the artist criticized by WotC had worked on his own sketch, I wonder what their reaction would have been.
The artist did do the AI work on their own sketch. The sketches they made were sometimes very close to the concept artist's sketch, other times not so much, and still other times I don't believe there was any concept art at all. We have seen some of the artist's original sketches that they used along with the AI. Additionally, they had been producing art for WotC long before AI came on the scene (at least I assume it wasn't used back in 2014).
 

The artist did do the AI work on their own sketch. The sketches they made were sometimes very close to the concept artist's sketch, other times not so much, and still other times I don't believe there was any concept art at all. We have seen some of the artist's original sketches that they used along with the AI. Additionally, they had been producing art for WotC long before AI came on the scene (at least I assume it wasn't used back in 2014).

OK, I was under the impression that Shkipin had gotten concept art from another WotC contractor (April Prime) and modified it through AI (among, possibly, other processes) to make the published result, rather than merely taking inspiration from the concept art. That may have come from the other thread, and from my own cursory look at it, as I am less interested in the internal workings of WotC. Thanks for the clarification on the latest developments!
 



Art Waring

halozix.com
NYT Considers Legal Action Against OpenAI:

From NPR.org

So-called large language models like ChatGPT have scraped vast parts of the internet to assemble data that inform how the chatbot responds to various inquiries. The data-mining is conducted without permission. Whether hoovering up this massive repository is legal remains an open question.


If OpenAI is found to have violated any copyrights in this process, federal law allows for the infringing articles to be destroyed at the end of the case.
In other words, if a federal judge finds that OpenAI illegally copied The Times' articles to train its AI model, the court could order the company to destroy ChatGPT's dataset, forcing the company to recreate it using only work that it is authorized to use.


Federal copyright law also carries stiff financial penalties, with violators facing fines up to $150,000 for each infringement "committed willfully."


"If you're copying millions of works, you can see how that becomes a number that becomes potentially fatal for a company," said Daniel Gervais, the co-director of the intellectual property program at Vanderbilt University who studies generative AI. "Copyright law is a sword that's going to hang over the heads of AI companies for several years unless they figure out how to negotiate a solution."
 

This will prove interesting, should they go all the way. We'd be entering an interesting era, with a definite possibility of competitive lawmaking. McKinsey recently estimated that the economic impact of generative AI could reach up to $4.4 trillion annually, and some countries might be interested in granting this sector more freedom to capture the lion's share, especially as others shun it. That goes especially for countries that care less for artists, or where artists already live mostly off public subsidies, lessening the impact on their livelihood. While drawing RPG pictures, and pictures in general, is a very small part of this wealth, the law will certainly cover the general case of data mining for AI training (like the text and data mining exception in the EU, currently granted to public institutions without discriminating on the type of data). Therefore the outcome of the ChatGPT case and other text-based mining disputes could very well affect the current topic.


While I agree with Daniel Gervais's conclusion, I think the GAFAM companies will try to "figure out how to negotiate a solution" with governments, not with individual authors of board posts used in training.
 
