D&D 5E Glory of the Giants' AI-Enhanced Art

One of the book's credited artists used machine learning to enhance illustrations in Bigby Presents: Glory of the Giants.

The latest D&D sourcebook, Bigby Presents: Glory of the Giants, comes out in a couple of weeks. However, those who pre-ordered it on D&D Beyond already have access, and many are speculating about the presence of AI art in the book.

One of the artists credited is Ilya Shkipin, who does traditional, digital, and AI art. In an interview with AI Art Weekly in December 2022, Shkipin talked at length about their AI art, including the workflow involved.

On Twitter, Shkipin discussed the AI process used in Bigby in more detail [edit: the tweet has since been deleted, but its content is reproduced below], indicating that AI was used to enhance some of the art, and shared an example of the work.

There is recent controversy on whether these illustrations I made were ai generated. AI was used in the process to generate certain details or polish and editing. To shine some light on the process I'm attaching earlier versions of the illustrations before ai had been applied to enhance details. As you can see a lot of painted elements were enhanced with ai rather than generated from ground up.

-Ilya Shkipin​


[Image: ilya.png, an earlier version of an illustration before AI enhancement]

[Image: ilia2.png, another illustration before AI enhancement]


Online discussions examine more of the book's art and speculate about the extent of AI involvement, though there doesn't appear to be any evidence that any of the art is fully AI-generated.

AI art is controversial, with many TTRPG companies publicly stating that they will not use it. DriveThruRPG has recently added new policies requiring transparency around AI-generated content and banning 'standalone' AI art products, and Kickstarter has added similar transparency requirements, especially regarding disclosure of the data used to train the AI. Many artists have taken a strong stance against AI art, arguing that their work is being 'scraped' in order to produce the content.

UPDATE: Christian Hoffer reached out to WotC and received a response:

Have a statement from Wizards over the AI enhanced artwork in Glory of the Giants. To summarize, they were unaware of the use of AI until the story broke and the artwork was turned in over a year ago. They are updating their Artist guidelines in response to this.

Wizards makes things by humans for humans and that will be reflected in Artist Guidelines moving forward.

-Christian Hoffer​

The artist, Ilya Shkipin, has removed the initial tweet where the AI process is discussed, and has posted the following:

Deleted previous post as the future of today illustrations is being discussed.

Illustrations are going to be reworked.

-Ilya Shkipin​

 


Golroc

Explorer
Supporter
Would this be a correct statement: in order to ‘enhance’ the artwork created by artist 1, an AI algorithm will collate art from as many artists as is fed into it (potentially artists 2 through infinity) and add line work and coloring to artist 1’s work. Artists 2 through infinity may have no knowledge their work was placed in the algorithm.

Not entirely correct. The tricky thing about image-generation AI is that, contrary to how it is often described in the media, there isn't any active trawling of external images, nor is there any internal data storage of artistic (or other) imagery. The neural network has been trained to create images by removing noise. It generally hasn't been fed any image data at all. However, an adversarial AI, which has the job of guessing whether a particular image is AI-generated or not, has been trained on real data.
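To make the "removing noise" idea concrete, here is a minimal, hypothetical sketch of the training objective behind diffusion-style generators: corrupt an image with known noise and train a network to predict that noise. The tiny convolutional model and the random data are stand-ins, not any production system.

```python
import torch
import torch.nn as nn

# Toy denoiser: learns to predict the noise that corrupted an image.
class Denoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

clean = torch.rand(8, 3, 64, 64)   # stand-in for a batch of training images
noise = torch.randn_like(clean)
noisy = clean + 0.5 * noise        # corrupt the batch with known noise

opt.zero_grad()
pred = model(noisy)                # the network guesses the noise
loss = nn.functional.mse_loss(pred, noise)
loss.backward()
opt.step()
```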

The full training process of these paired AI systems is quite complicated, and I'm grossly simplifying things already. But essentially the image generator AI starts randomly creating garbage images from text prompts. It keeps doing this until it can produce images which look like "real" art to the adversary AI. This is really difficult to achieve without having an image generator that just creates the same image every time.
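A compressed, hypothetical sketch of that adversarial loop, with toy fully-connected networks standing in for the real generator and adversary. Text conditioning and the stabilization tricks that prevent the "same image every time" failure (mode collapse) are omitted.

```python
import torch
import torch.nn as nn

# Toy generator (noise -> flattened 8x8 "image") and adversary (real-or-fake score).
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, 64)  # stand-in for real artwork

for step in range(100):
    fake = G(torch.randn(32, 16))

    # Adversary: learn to score real art high and generated art low.
    d_loss = (bce(D(real), torch.ones(32, 1))
              + bce(D(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to produce images the adversary calls "real".
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```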

In reality, the number of datasets and AI systems involved is larger. This is not to say that artists should just give up their rights and accept that AI can mimic their work. But the tricky part is that an AI may be perfectly capable of mimicking a specific artist without ever having been trained directly on any work by said artist. Therefore training-set inclusion is not a good criterion for whether something is a violation or not.

Instead, I would say that the output is what matters, as with humans really. If an AI creates art that is clearly an imitation of an artist's work, that is a violation of said artist's rights. It doesn't matter which of the artist's works were included in any part of the chain, or how many, or even whether any were included at all! Because we will eventually have AI which can imitate without ever having been trained on something.

But to return to your question: in order to enhance this artwork as shown by this artist, there is no collation of, or access to, the work of artists 2 to infinity. Certain parts of a neural network are triggered in order to perform image editing operations. The training of this network is so complex that it is impossible to say which "neurons" resulted from which sources, because the AI was likely trained using other AI systems, some of which are trained on general concepts and some on specific art.
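For a sense of what an "image editing operation" of this kind looks like in practice, here is a hypothetical sketch using the open-source diffusers library's image-to-image pipeline. The model checkpoint, input file, prompt, and strength value are illustrative assumptions, not a claim about the Bigby artist's actual toolchain.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumed checkpoint for illustration; any Stable Diffusion weights would do.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

rough = Image.open("rough_painting.png").convert("RGB")  # hand-painted base

# Low strength preserves the painted composition; the model only refines
# detail rather than generating a new image from the ground up.
result = pipe(
    prompt="detailed fantasy illustration of a giant, painterly style",
    image=rough,
    strength=0.3,        # 0.0 = return the input unchanged, 1.0 = full repaint
    guidance_scale=7.5,
).images[0]
result.save("enhanced_painting.png")
```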

There is no algorithm, just a neural network. I am staunchly in favor of protecting artists from the commercial and legal impact of AI systems. But the best way to do this is by focusing on output. There will be so much complexity, obfuscation, and emergent behavior that proving the inclusion of a work is not possible. And an artist shouldn't have to prove anything. If a work is derivative, it should require the consent of the artist (or whoever holds the rights to the art, which in my country is always the artist, but in some countries can be a corporation).

I believe AI companies should gain approval from and compensate artists whose work is used for training, but I also think artists should inform themselves about the technical aspects. An artist should be able to contribute work to training without accepting that derivative works are created. Because when used by talented and creative individuals AI can create things that are novel.

I think Ilya Shkipin is an example of an artist showing the potential of AI to be a tool for productivity and creativity. I am optimistic that AI will in the end help artists work as artists and not as "human illustration robots" toiling away for very low wages, producing work that is under the tight creative control of others. It is something to be embraced, although sadly I think some corporations will fight this, as they want to exploit cheap human creative labor for as long as possible. They will not fight it for the sake of artists. They will do so to keep their position as gatekeepers in the creative industries, and to keep wages down.

We see this in other industries and professions as well. Some business owners do not want workers to be empowered. It will be sad if image-generation AI ends up being used only by AI "spamshops", driving down the wages of real artists. A real artist using AI can outcompete such companies easily, doing more and doing it better.
 


Abstruse

Legend
One part of the controversy around generative algorithmic art (aka "AI" art, though I hate that term because it evokes images of Data sitting at his easel from Star Trek: The Next Generation rather than what it is, autocorrect with delusions of grandeur) is the source data for the algorithm.

At least one Wizards of the Coast artist on Bluesky (not linked because it's still invite-only and you need an account to view) stated they were upset and felt betrayed by Wizards of the Coast using "AI" to "enhance" their work.

"I feel so genuinely betrayed by WotC allowing an artist to use AI as part of their process for interior art in the most recent DnD book. Having concepts I worked on ran through a scraping program and turned into slop isn't really how I wanted to spend August."

It's highly likely that this person's work, which was probably submitted back in the spring if not earlier, before concerns about generative algorithms were widespread and before artists began adding clauses to their contracts to prevent their use, is now part of those databases without their knowledge or consent, because the terms of service of almost every "AI" program on the market add all submitted work to its database. Meaning this person did work for Wizards of the Coast in good faith, and Wizards then provided it to someone else to "enhance", and that person handed the artwork over to some techbros to do with as they pleased.

If this doesn't sound like a big deal to you because artwork done for Wizards of the Coast is almost exclusively work-for-hire, so Wizards owns all the rights, allow me to explain a similar situation with SAG-AFTRA actors and why they're striking against the studios: film studios and television production companies want the right to scan actors for their likeness and to use that scan however they see fit in any future project, without further compensation or even informing the actors of what is being done with their image. And there are reports that Marvel Studios has already done this on recent productions by scanning background actors without telling them what the scans were for or how they would be used.

That is essentially what has happened to artists who worked on Bigby Presents: Glory of the Giants. Their work was submitted to the database of a third-party company to be copied and reused without their permission or any compensation, damaging those artists' ability to get future work. Just as a work-for-hire contract with an artist costs more because the company is buying all rights for future use, this sort of exploitation of an artist's work should also command higher compensation and be disclosed as part of the process.
 




Art Waring

halozix.com
[Quoting Golroc's post above in full.]
Yet it's already been established that AI image-generation tools have been trained on the LAION dataset, which was assembled by a non-profit for non-commercial purposes. Using the LAION dataset for commercial purposes (like paid subscription models) should be a red flag for anyone trying to defend the current state of AI-generation tools.
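For reference, LAION itself distributes URL-and-caption metadata rather than image files; the images are fetched from the original websites at training time. A hypothetical sketch of inspecting that metadata with the Hugging Face datasets library follows; the dataset id "laion/laion400m" and the URL/TEXT column names vary by mirror and are assumptions here.

```python
from datasets import load_dataset

# Stream the metadata so nothing is downloaded in bulk; the dataset id and
# column names are assumptions about one common mirror of LAION-400M.
rows = load_dataset("laion/laion400m", split="train", streaming=True)

for i, row in enumerate(rows):
    # Each record is a link plus the alt-text caption scraped alongside it;
    # the dataset itself ships no image pixels.
    print(row["URL"], "-", row["TEXT"])
    if i >= 4:
        break
```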

AI tools use machine learning; they don't possess a "neural network" or anything of the sort. They are trained on billions of images scraped from the internet without permission, proper credit, or compensation.

I am simply trying to stop disinformation from spreading further. AI tools have absolutely no human qualities; please stop trying to compare the methods of living artists to those of machines that learn by algorithm.
 


