WotC: 'Artists Must Refrain From Using AI Art Generation'

WotC to update artist guidelines moving forward.

After it was revealed this week that one of the artists for Bigby Presents: Glory of the Giants used artificial intelligence as part of their process when creating some of the book's images, Wizards of the Coast has made a short statement via the D&D Beyond Twitter (X?) account.

The statement is in image format, so I've transcribed it below.

Today we became aware that an artist used AI to create artwork for the upcoming book, Bigby Presents: Glory of the Giants. We have worked with this artist since 2014, and he's put years of work into books we all love. While we weren't aware of the artist's choice to use AI in the creation process for these commissioned pieces, we have discussed it with him, and he will not use AI for Wizards' work moving forward. We are revising our process and updating our artist guidelines to make clear that artists must refrain from using AI art generation as part of their art creation process for developing D&D art.


-Wizards of the Coast​




Ilya Shkipin, the artist in question, talked about AI's part in his process during the week, but has since deleted those posts.

There has been recent controversy over whether these illustrations I made were AI generated. AI was used in the process to generate certain details and for polish and editing. To shine some light on the process, I'm attaching earlier versions of the illustrations before AI was applied to enhance details. As you can see, a lot of painted elements were enhanced with AI rather than generated from the ground up.

-Ilya Shkipin​

 


Snarf Zagyg

Notorious Liquefactionist
The Jesuits? We’ve moved just a bit past the Jesuits. Respectfully, you are missing my point. My point is that AI is raising foundational questions about both how human minds work and how we should be training them.

I think that one of the reasons we are getting a certain type of pushback ... not the issues related to ethics, or to jobs ... but the specific pushback that, "Oh, they are just copying stuff. WE THINK!" ... is that at a very deep level, people are uncomfortable because these AIs call into question not whether or not they are thinking, but whether we are.

If you keep up with neuroscience (as I am sure some of us have), you know that there's a lot of well-known material out there that should give us pause. For example, there is a LOT of research that shows that a lot of the information that our bodies receive (sensory information) is never consciously perceived, yet nevertheless affects how we think. More importantly, our bodies can, and will, react to things before we can consciously act- and then our brain will "fill in" the idea that we acted after we already acted.

In other areas, it has long been known that our brains have localized (and specialized) areas to process certain things that we think of as conscious thought. An easy one is facial recognition. We like to think that when we see someone, we (as in some consciousness) recognize that person; that's not true. Instead, there's an area of the brain devoted to facial recognition; if it gets damaged (as can happen during a stroke or injury), people lose the ability to identify other people. In some cases, damage to the area can result in Capgras syndrome (the belief that people you know have been replaced by identical impostors), because you can still recognize people but have lost the connection to the emotional response.

Thinking about these things ... thinking about thought ... is not a comfortable experience for many people. When confronted with the latest generation of AI, which can accomplish many things that were once in our wheelhouse, a lot of people fall back on a kneejerk reaction-

Yeah, whatever. BUT THEY AREN'T THINKING!

That's a comfortable dodge. Because it assumes the answer. What is it that "they" aren't doing? Well, it's what we do! But ... what is it that we do, exactly? This doesn't answer questions, by the way. There is a difference between training a neural net on gigantic troves of information that we produced, as opposed to a person "learning" by experience and sensory information. But as we are seeing these models advance quickly, the questions people ask themselves can get more uncomfortable. What is "understanding" a connection, after all? Isn't that just having another connection?

IMO.
 


FrogReaver

As long as i get to be the frog
That's already bad enough. AI doesn't have to replace all of the 10 accountants behind the scenes hammering away at Excel, Salesforce and SAP in their cubicles.

If it replaces 8 of them with the last two kept with the changed role of finding and correcting the few errors the AI made, that's already disruptive enough.
It’s not even going to do that. Generative AI is terrible at math.
 

Nikosandros

Golden Procrastinator
I think that one of the reasons we are getting a certain type of pushback ... not the issues related to ethics, or to jobs ... but the specific pushback that, "Oh, they are just copying stuff. WE THINK!" ... is that at a very deep level, people are uncomfortable because these AIs call into question not whether or not they are thinking, but whether we are.

If you keep up with neuroscience (as I am sure some of us have), you know that there's a lot of well-known material out there that should give us pause. For example, there is a LOT of research that shows that a lot of the information that our bodies receive (sensory information) is never consciously perceived, yet nevertheless affects how we think. More importantly, our bodies can, and will, react to things before we can consciously act- and then our brain will "fill in" the idea that we acted after we already acted.

In other areas, it has long been known that our brains have localized (and specialized) areas to process certain things that we think of as conscious thought. An easy one is facial recognition. We like to think that when we see someone, we (as in some consciousness) recognize that person; that's not true. Instead, there's an area of the brain devoted to facial recognition; if it gets damaged (as can happen during a stroke or injury), people lose the ability to identify other people. In some cases, damage to the area can result in Capgras syndrome (the belief that people you know have been replaced by identical impostors), because you can still recognize people but have lost the connection to the emotional response.

Thinking about these things ... thinking about thought ... is not a comfortable experience for many people. When confronted with the latest generation of AI, which can accomplish many things that were once in our wheelhouse, a lot of people fall back on a kneejerk reaction-

Yeah, whatever. BUT THEY AREN'T THINKING!

That's a comfortable dodge. Because it assumes the answer. What is it that "they" aren't doing? Well, it's what we do! But ... what is it that we do, exactly? This doesn't answer questions, by the way. There is a difference between training a neural net on gigantic troves of information that we produced, as opposed to a person "learning" by experience and sensory information. But as we are seeing these models advance quickly, the questions people ask themselves can get more uncomfortable. What is "understanding" a connection, after all? Isn't that just having another connection?

IMO.
Yes, that's the point I tried to raise in a previous reply, but this is much more eloquently stated.
 

Thinking about these things ... thinking about thought ... is not a comfortable experience for many people. When confronted with the latest generation of AI, which can accomplish many things that were once in our wheelhouse, a lot of people fall back on a kneejerk reaction-

I am very comfortable postulating that I have no free will and no agency, only the illusion of them, while being manipulated by external stimuli and training. It's nothing new (Spinoza, at least, though I might be off by millennia...), but it makes me puzzled by debates about PCs having agency... :ROFLMAO:

To answer the more serious points, you're right, but I think the outrage is caused more by empathy toward RPG artists (it's easier to feel class empathy with them when their jobs are destroyed than with stock traders [who were thoroughly replaced by computer specialists and quant experts] or coal miners and factory workers [who were thoroughly replaced by machines or by unhappy, low-paid laborers in faraway countries]). I think the existential questions AI raises are a small motivator of the overall reactions.

Also, it helps that the legal questions can be answered without considering what a creative process is at all, because otherwise one would have to wonder why taking a picture of birds is an artistic, creative activity (all you do is push a button when you see one) while having a computer program take the picture when certain conditions are met (a bird is in front of the camera) isn't.
 

Art Waring

halozix.com
Here is a link to a website called Have I Been Trained? (If you are a visual artist with concerns, the website also offers the option for artists to opt out of the LAION-5B dataset).

It lets you search the LAION-5B dataset to see whether your art has been used to train an AI generative tool. Simply type in Frank Frazetta's name and you can see his entire catalogue of work, including his own artist signature in black and white, if you scroll down a page or two.

To the best of my knowledge his works are still under copyright (I think to his estate?). Why are copyrighted works present in a dataset that claims to exist for non-profit purposes?
 


Here is a link to a website called Have I Been Trained? (If you are a visual artist with concerns, the website also offers the option for artists to opt out of the LAION-5B dataset).

It lets you search the LAION-5B dataset to see whether your art has been used to train an AI generative tool. Simply type in Frank Frazetta's name and you can see his entire catalogue of work, including his own artist signature in black and white, if you scroll down a page or two.

To the best of my knowledge his works are still under copyright (I think to his estate?). Why are copyrighted works present in a dataset that claims to exist for non-profit purposes?

The LAION-5B dataset is a list of pairs of URLs and keywords describing the linked image. It doesn't embed any images at all. It's an original work composed of URLs and text.

Providing a link to copyrighted content is perfectly legal (otherwise referencing a scientific publication would be impossible and the Internet wouldn't be allowed).
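To make that structure concrete, here is a minimal sketch (with made-up records, not real LAION-5B entries) of how such a URL/caption index can be queried. The records and field names are illustrative assumptions, not the actual LAION schema:

```python
# Each entry in a LAION-style index pairs an image URL with its alt-text
# caption -- no pixels are stored in the dataset itself.

records = [
    {"url": "https://example.com/img/12345.jpg",
     "caption": "oil painting of a barbarian warrior, frank frazetta"},
    {"url": "https://example.com/img/67890.jpg",
     "caption": "photo of a red bicycle leaning against a wall"},
]

def search_captions(records, term):
    """Return the entries whose caption mentions the search term."""
    term = term.lower()
    return [r for r in records if term in r["caption"].lower()]

for hit in search_captions(records, "frazetta"):
    print(hit["url"])  # only a link is returned, never the image bytes
```

Searching by an artist's name, as the Have I Been Trained site does, is just this kind of caption lookup; what it surfaces are links and captions, which is the crux of the legal point above.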
 

FrogReaver

As long as i get to be the frog
It doesn't need to do math, it only needs to put data in the right column and then let Excel handle the math.
Excel already has features that do that: external data via ODBC connections, importing a comma-separated list, copying and pasting from one sheet to another. Getting data into the right column in Excel is trivial and probably wouldn’t even save 1 FTE across a 1,000-person organization.
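As a toy illustration of why "putting data in the right column" is the easy part (the data and target layout here are made up for the example):

```python
# Reorder incoming CSV fields to match a desired spreadsheet layout --
# a few lines of standard-library code, no AI required.
import csv
import io

raw = "name,amount,dept\nAlice,120.50,Sales\nBob,99.00,Ops\n"

# Hypothetical target column order for the destination sheet.
columns = ["dept", "name", "amount"]

reader = csv.DictReader(io.StringIO(raw))
rows = [[row[c] for c in columns] for row in reader]
print(rows)  # each row reordered to match the target columns
```

This is the sort of mechanical rearrangement that import wizards and ODBC mappings already automate, which is the point being made above.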
 

Excel already has features that do that: external data via ODBC connections, importing a comma-separated list, copying and pasting from one sheet to another. Getting data into the right column in Excel is trivial and probably wouldn’t even save 1 FTE across a 1,000-person organization.
All these tasks add up. I think we're at the beginning of the biggest workplace shakeup since the arrival of the WWW in the '90s. Many jobs will be eliminated, and the workers who get to keep their jobs will be the ones who embrace these new tools.

But this tangent is getting far off topic, so that's my last post on the matter :)
 

Art Waring

halozix.com
The LAION-5B dataset is a list of pairs of URLs and keywords describing the linked image. It doesn't embed any images at all. It's an original work composed of URLs and text.

Providing a link to copyrighted content is perfectly legal (otherwise referencing a scientific publication would be impossible and the Internet wouldn't be allowed).
That's not what I am talking about. Whether it's a link to an image or an embedded image, models trained on the dataset are still being used to create commercial content (see the use of artists' names as prompts).

This is going back around to the argument "if you don't want your work stolen, don't post it on the internet." And that's dismissive of artists who need to have their portfolios online to get freelance work.
 
