WotC: 'Artists Must Refrain From Using AI Art Generation'

WotC to update artist guidelines moving forward.

After it was revealed this week that one of the artists for Bigby Presents: Glory of the Giants used artificial intelligence as part of their process when creating some of the book's images, Wizards of the Coast has made a short statement via the D&D Beyond Twitter (X?) account.

The statement is in image format, so I've transcribed it below.

Today we became aware that an artist used AI to create artwork for the upcoming book, Bigby Presents: Glory of the Giants. We have worked with this artist since 2014 and he's put years of work into books we all love. While we weren't aware of the artist's choice to use AI in the creation process for these commissioned pieces, we have discussed with him, and he will not use AI for Wizards' work moving forward. We are revising our process and updating our artist guidelines to make clear that artists must refrain from using AI art generation as part of their art creation process for developing D&D art.


-Wizards of the Coast​




Ilya Shkipin, the artist in question, talked about AI's part in his process during the week, but has since deleted those posts.

There is recent controversy on whether these illustrations I made were ai generated. AI was used in the process to generate certain details or polish and editing. To shine some light on the process I'm attaching earlier versions of the illustrations before ai had been applied to enhance details. As you can see a lot of painted elements were enhanced with ai rather than generated from ground up.

-Ilya Shkipin​

 


Vaalingrade

Legend
For the average person, the immediate threat from AI is going to be a massive glut of grifts replacing every digital good or service they turn to for a straight answer or entertainment.

Did you like being able to search for things and find what you want in a reasonable amount of time? Those days are over.
 


Mecheon

Sacabambaspis
Speaking of more AI mess-ups (hi, it's me, I got an avatar, it's everyone's favourite prehistoric jawless fish, sacabambaspis)

Just an archeological description of a scimitar getting its wires crossed with either Dungeons and Dragons or Elden Ring due to using AI to write the document

Are we really sure this is 'thought' and not just 'mindlessly regurgitating other content with no context'?

 

I am not sure it's an AI mess-up; it's more a fun webmaster mess-up, a case of unclear instructions. This is a valid answer if you don't add context to your question. Sure, the AI is mindless (because it's not emulating any mind); it's just collating the most probable elements that would answer your question. It delivers the "most probable" opinion from its database.

If the training was done on expert opinions, there is a great chance the output will be valuable (as with diagnostic-assistant AIs); if it's trained on messages pulled from the Internet, it will tell you the average idea you could hear by striking up a conversation with a random person at your neighbourhood watering hole, i.e., not something you'd rely on for anything. It is also prone to spouting outdated information since, contrary to popular belief, AIs are not training themselves without supervision. If you ask for public and easily verifiable information, like who holds a specific public office, you'll get the answer that was right when the documents the AI was trained on were collected. I just asked Bard to check, and I got ten-year-old information... and that's because the list of officeholders on Wikipedia hadn't been updated since 2013...

I really don't understand why people would assume that what a chatbot AI trained on Internet data says is true. It's just well presented, but anyone with a high-school education could make a well-presented essay in favour of patently false things. And on the Internet, a large part of questions about scimitars would be RPG-related, not history- and archeology-related. An AI failure, and there are many, would be if it included that content when the context was explicitly provided. AI isn't a mind-reading machine; it isn't more apt to do anything than its training data allows. That's why we need more education about AI in general, to avoid the same fiasco we had with the Internet, where conspiracy theories abound because somehow people lose a lot of their critical thinking when something is written on a computer (or told on TV...), and they shouldn't lose their critical thinking toward AI-presented data.
 


What a hoot. It was used to generate a reference image. The artwork is all their own.
  1. It was not used to generate a reference image. The reference image was created by a real artist who was hired to do concept art.

  2. They created the sketch of an image based on the concept art, then used AI enhancement to do the fine linework. Then it was put into the book as the final product.

  3. By its nature, the artwork is not "all their own", because for AI enhancement to work it must have a massive dataset to draw from to do things like fills, lines, and details. By its very nature, you really can't do AI work and have it be your own because of the data requirements.
 

dave2008

Legend
Not sure if this has been noted yet, but they removed all of the AI art from the DnDBeyond version of the book.

I believe they said they will replace it when the new art is ready
 

J.Quondam

CR 1/8
Not sure if this has been noted yet, but they removed all of the AI art from the DnDBeyond version of the book.

I believe they said they will replace it when the new art is ready
Can you roughly estimate how much of the art is being replaced? Is it just one or two pics, or a quarter of all the art, or two-thirds, or what?
 

Golroc

Explorer
Supporter
  1. It was not used to generate a reference image. The reference image was created by a real artist who was hired to do concept art.

  2. They created the sketch of an image based on the concept art, then used AI enhancement to do the fine linework. Then it was put into the book as the final product.

  3. By its nature, the artwork is not "all their own", because for AI enhancement to work it must have a massive dataset to draw from to do things like fills, lines, and details. By its very nature, you really can't do AI work and have it be your own because of the data requirements.

The last point isn't entirely correct. An AI system that performs image transformation of various kinds does not need a dataset to draw from in order to perform its function; it usually has no dataset available during operation. Data was used during the training process, but that's a different thing, and it can also be very different from the data used by image generation tools. This doesn't invalidate the legal and ethical discussions about the training data used, and in particular how it was acquired, but I think it is an important distinction. Some types of image transformation AI can be created without using any human-created art or photos at all. Purely synthetic (2D/3D rendered) images are sufficient for a fair number of transformative tools.

Is this the case for the specific tools used by this artist? We don't know. But it is possible to build transformative tooling without a massive dataset, and such tools will be commonplace very soon, with the training done purely on legal reference data, either by licensing it or by creating it synthetically through rendering (rasterization, raytracing, etc.).

I know this may sound like nitpicking or trying to pick a fight, but I honestly think it is important for those of us with technical insight (and this is not an appeal to authority; by all means, if anyone wants to refute my claims, I am open to constructive discussion) to help clear up misunderstandings. It is important that artists, publishers, and the wider public are aware of the technical possibilities. We will reach a point, if we haven't already, where certain uses of AI tooling are impossible to trace and also perfectly legal. It will be very hard in the coming months and years to figure out where to draw the lines, whether for a publisher making guidelines for contributors and staff, an artist trying to figure out which tools are useful/legal/ethical, a consumer trying to make purchases that support artists, or an activist trying to call out companies or individuals with questionable practices or policies.

I strongly believe "no AI assistance ever" will probably end up hurting people who could have benefitted from perfectly legal and ethical use of AI. And on the other hand, simply dividing it into "AI image creation = bad" and "AI image enhancement = ok" is also problematic. It's a tough topic, and the technical realities are changing so quickly that even many self-proclaimed experts are not fully aware of recent developments.
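The training-vs-inference distinction above can be shown with a toy sketch (not the actual tools involved, and every name here is hypothetical): a one-parameter "enhancement" model is fitted on purely synthetic pixel pairs, and at inference time only the learned parameter survives, with the training set discarded entirely.

```python
# Toy illustration: a trained image-enhancement model needs its dataset
# only while training; at inference it carries nothing but learned weights.

def train(pairs, lr=0.01, epochs=200):
    """Fit a single brightness-scale parameter w by gradient descent."""
    w = 1.0  # the model's only parameter
    for _ in range(epochs):
        for x, y in pairs:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

# Purely synthetic dataset: "dull" pixel values and 1.5x-enhanced targets.
# No human-made art is involved at any point.
synthetic_pairs = [(p / 10, 1.5 * p / 10) for p in range(1, 10)]
w = train(synthetic_pairs)

def enhance(pixels, w):
    """Inference: the dataset is gone; only the learned parameter remains."""
    return [w * p for p in pixels]

print(round(w, 2))                                     # close to 1.5
print([round(p, 2) for p in enhance([0.2, 0.4], w)])
```

The point of the sketch is that `enhance` never touches `synthetic_pairs`: once training ends, the data's influence lives entirely inside `w`, which is the sense in which a transformation model "has no dataset during operation".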
 

dave2008

Legend
Can you roughly estimate how much of the art is being replaced? Is it just one or two pics, or a quarter of all the art, or two-thirds, or what?
I haven't gone through the whole book, but it is all the giant dinosaur art and at least three giants (two frost giants and a demonized giant).

So, if there are 71 stat blocks, it is possibly around 10%?
 

dave2008

Legend
The last point isn't entirely correct. An AI system that performs image transformation of various kinds does not need a dataset to draw from in order to perform its function; it usually has no dataset available during operation. Data was used during the training process, but that's a different thing, and it can also be very different from the data used by image generation tools. This doesn't invalidate the legal and ethical discussions about the training data used, and in particular how it was acquired, but I think it is an important distinction. Some types of image transformation AI can be created without using any human-created art or photos at all. Purely synthetic (2D/3D rendered) images are sufficient for a fair number of transformative tools.

Is this the case for the specific tools used by this artist? We don't know. But it is possible to build transformative tooling without a massive dataset, and such tools will be commonplace very soon, with the training done purely on legal reference data, either by licensing it or by creating it synthetically through rendering (rasterization, raytracing, etc.).

I know this may sound like nitpicking or trying to pick a fight, but I honestly think it is important for those of us with technical insight (and this is not an appeal to authority; by all means, if anyone wants to refute my claims, I am open to constructive discussion) to help clear up misunderstandings. It is important that artists, publishers, and the wider public are aware of the technical possibilities. We will reach a point, if we haven't already, where certain uses of AI tooling are impossible to trace and also perfectly legal. It will be very hard in the coming months and years to figure out where to draw the lines, whether for a publisher making guidelines for contributors and staff, an artist trying to figure out which tools are useful/legal/ethical, a consumer trying to make purchases that support artists, or an activist trying to call out companies or individuals with questionable practices or policies.

I strongly believe "no AI assistance ever" will probably end up hurting people who could have benefitted from perfectly legal and ethical use of AI. And on the other hand, simply dividing it into "AI image creation = bad" and "AI image enhancement = ok" is also problematic. It's a tough topic, and the technical realities are changing so quickly that even many self-proclaimed experts are not fully aware of recent developments.
Thank you for the clarification. I actually assumed that was the case, but I didn't know it. Also, I do think it is an important distinction.
 
