
WotC: 'Artists Must Refrain From Using AI Art Generation'

WotC to update artist guidelines moving forward.

After it was revealed this week that one of the artists for Bigby Presents: Glory of the Giants used artificial intelligence as part of their process when creating some of the book's images, Wizards of the Coast has made a short statement via the D&D Beyond Twitter (X?) account.

The statement is in image format, so I've transcribed it below.

Today we became aware that an artist used AI to create artwork for the upcoming book, Bigby Presents: Glory of the Giants. We have worked with this artist since 2014 and he's put years of work into books we all love. While we weren't aware of the artist's choice to use AI in the creation process for these commissioned pieces, we have discussed with him, and he will not use AI for Wizards' work moving forward. We are revising our process and updating our artist guidelines to make clear that artists must refrain from using AI art generation as part of their art creation process for developing D&D art.


-Wizards of the Coast​




Ilya Shkipin, the artist in question, talked about AI's part in his process during the week, but has since deleted those posts.

There is recent controversy on whether these illustrations I made were ai generated. AI was used in the process to generate certain details or polish and editing. To shine some light on the process I'm attaching earlier versions of the illustrations before ai had been applied to enhance details. As you can see a lot of painted elements were enhanced with ai rather than generated from ground up.

-Ilya Shkipin​

 


J.Quondam

CR 1/8
For anyone interested, here's a fairly non-technical overview of how LLMs work, though it's not a quick read. (It also discusses those unicorn pics created by ChatGPT that @Snarf Zagyg mentioned upthread.) It's just about the technical side, and doesn't really delve into legal, ethical, economic, or philosophical issues.


The big takeaway imo is that these researchers do understand how these sorts of AIs work but really do not understand why they do some of the things they do.
 


J.Quondam

CR 1/8
So, by my description, this becomes less mysterious. That episode of This American Life was from this June. But software engineers have been using ChatGPT as a coding tool since its release. The full data set for its training has not been revealed, but if it includes software man pages and GitHub, well, then its "guess the next word" will include code in its possible contexts.

And putting it into an obscure language is unremarkable, and may not be part of the AI proper - generative AIs have systems separate from the AI to help keep the output cogent and reasonable, and there are already systems that will translate code from one language to another. If a translation element is built into the formatting systems, that's not weird.
I don't quite understand you here. The code isn't the interesting bit. What's strange is that the GPT was able to output a "picture" of a unicorn as instructions in this graphical language, and did so without ever being trained on an image of a unicorn. It's an LLM, so at most what it had to reference was text descriptions of unicorns, maybe some ASCII images. I've also read speculation that it might have ingested TikZ code for drawing unicorns.

(Spoiler: The pics it generated look like something stuck to a fridge by the parents of a three year-old unicorn aficionado. They're not exactly Jeff Easley quality illustrations or anything!)
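For readers who haven't seen TikZ before, the "instructions in this graphical language" the post refers to are LaTeX drawing commands. The fragment below is a hand-written sketch (not anything the model actually produced) just to show what a "picture as code" looks like in that language:

```latex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % body: an ellipse
  \draw (0,0) ellipse (1.2 and 0.7);
  % head: a circle offset up and to the right
  \draw (1.4,0.8) circle (0.4);
  % horn: a thin triangle on top of the head
  \draw (1.5,1.2) -- (1.6,1.9) -- (1.7,1.15);
  % legs: four straight lines
  \foreach \x in {-0.7,-0.3,0.3,0.7}
    \draw (\x,-0.6) -- (\x,-1.4);
\end{tikzpicture}
\end{document}
```

The model's task, then, was to emit a plausible sequence of commands like these from a text prompt, which is why the result is impressive even though the drawings themselves are crude.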
 

Umbran

Mod Squad
Staff member
Supporter
Because scraping it isn't going to reproduce the original piece in its entirety, or even close.

You say that as if "in its entirety" is necessary for corporate concerns, and as if images are only useful down to being able to see brushstrokes. Neither is correct.

They usually can't until they hear it on the radio, which means it's become successful enough that it's worth suing over.

Well, yeah. We aren't in the Minority Report universe... yet.
 

Umbran

Mod Squad
Staff member
Supporter
I don't quite understand you here. The code isn't the interesting bit.

The quote speaks to the code as if it is an interesting bit. And I can't be expected to know what you, personally, find interesting, now can I?

What's strange is that the GPT was able to output a "picture" of a unicorn as instructions in this graphical language, and did so without ever being trained on an image of a unicorn. It's an LLM, so at most what it had to reference was text descriptions of unicorns, maybe some ASCII images. I've also read speculation that it might have ingested TikZ code for drawing unicorns.

Note that it will also have about a bazillion references to unicorns as "horse with a horn", which means all the examples of horses will also apply. And horses are quadrupedal animals, so those references apply...

Which all means that the volume of text references that leads you to something approximating a unicorn is probably larger than we expect. Unicorns are not obscure.

It would be interesting to see what happens if you try to get it to draw something more obscure. Like a squonk.
 

FrogReaver

As long as i get to be the frog
But I commented anyway ;)
Okay, let me be more clear. Generative AI isn't going to take entry-level jobs in most knowledge fields any more than Google or Siri did, at least without significant technological improvements. That's not the skill set LLM AI has. In fact, LLM AI is already moving toward more super-specialized use cases instead of being a generalized AI.

What LLM AI is good at is predicting the next word or next pixel or identifying something. They suck at math. They aren’t good at coding. They are amazing but they still have vast limitations.
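The "predicting the next word" framing can be made concrete with a toy model. The sketch below is a bigram counter, not how a real LLM works (real models use neural networks over tokens, not raw word counts), but it shows the core idea: given a word, guess the most likely continuation seen in training.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequent continuation seen in training, or None."""
    candidates = following.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# Hypothetical one-sentence "training corpus" for illustration.
corpus = "the unicorn is a horse with a horn on its head"
model = train_bigrams(corpus)
print(predict_next(model, "horse"))  # prints "with"
```

An LLM does the same kind of thing at vastly larger scale, over whole contexts rather than single words, which is exactly why it can be fluent while still being unreliable at math or code.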
 



FrogReaver

As long as i get to be the frog
I don't quite understand you here. The code isn't the interesting bit. What's strange is that the GPT was able to output a "picture" of a unicorn as instructions in this graphical language, and did so without ever being trained on an image of a unicorn. It's an LLM, so at most what it had to reference was text descriptions of unicorns, maybe some ASCII images. I've also read speculation that it might have ingested TikZ code for drawing unicorns.

(Spoiler: The pics it generated look like something stuck to a fridge by the parents of a three year-old unicorn aficionado. They're not exactly Jeff Easley quality illustrations or anything!)
It may have just needed to see any code for outputting a unicorn. Depending on the drawing libraries, there may be identical or nearly identical function calls between the languages outside of the specific syntax. I don't know TikZ at all. But writing hello world in Java is similar to writing hello world in C++, etc.
 

Now that's the correct analogy. What AI does is the same as music sampling.

So basically, if the AI generates something that is recognizable as part of an existing artwork, the one who generates it should get the rights from the artist before distributing the creation, and if the generated product displays nothing that can be recognized as taken from an artwork, or only extremely small samples, then they don't need authorization? That's the decision of the CJEU in the Kraftwerk case, after all, with regard to sampling. (Plus another exception for which I don't see any clear applicability.)

I'd say that most of what AI creates right now isn't part of an already existing work, except maybe for some artifact of the training (if there is only one artist who ever drew a sskurgz, it's possible any sskurgz ever generated would look exactly like the one learned, but normally the training parameters should prevent that).
 

Morrus

Well, that was fun
Staff member
I'd say that most of what AI creates right now isn't part of an already existing work,
And I'd say otherwise. And thus we distill the debate down to its basics. That's what it all hinges on.

Indeed, I dispute the use of the word 'creates'. AI doesn't create. It literally can't.
 
