D&D 5E Glory of the Giants' AI-Enhanced Art

Artist Ilya Shkipin used machine-learning tools to enhance illustrations in Bigby Presents: Glory of the Giants.

The latest D&D sourcebook, Bigby Presents: Glory of the Giants, comes out in a couple of weeks. However, those who pre-ordered it on D&D Beyond already have access, and many are speculating about possible AI art in the book.

One of the artists credited is Ilya Shkipin, who does traditional, digital, and AI art. In an interview with AI Art Weekly in December 2022, Shkipin talked at length about their AI art, including the workflow involved.

On Twitter, Shkipin said more [edit: the tweet has since been deleted, but the content is below] about the AI process used in Bigby, indicating that AI was used to enhance some of the art, and shared examples of the work.

There is recent controversy on whether these illustrations I made were ai generated. AI was used in the process to generate certain details or polish and editing. To shine some light on the process I'm attaching earlier versions of the illustrations before ai had been applied to enhance details. As you can see a lot of painted elements were enhanced with ai rather than generated from ground up.

-Ilya Shkipin​


[Attached images: earlier versions of the illustrations before AI enhancement]


Online discussions examine more of the art in the book, speculating about the extent of AI involvement. There doesn't appear to be any evidence that any of the art is fully AI-generated.

AI art is controversial, with many TTRPG companies publicly stating that they will not use it. DriveThruRPG has recently added new policies regarding transparency around AI-generated content and a ban on 'standalone' AI art products, and Kickstarter has added similar transparency requirements, especially regarding disclosure of the data which is used to train the AI. Many artists have taken a strong stance against AI art, indicating that their art is being 'scraped' in order to produce the content.

UPDATE: Christian Hoffer reached out to WotC and received a response:

Have a statement from Wizards over the AI enhanced artwork in Glory of the Giants. To summarize, they were unaware of the use of AI until the story broke and the artwork was turned in over a year ago. They are updating their Artist guidelines in response to this.

Wizards makes things by humans for humans and that will be reflected in Artist Guidelines moving forward.

-Christian Hoffer​

The artist, Ilya Shkipin, has removed the initial tweet where the AI process is discussed, and has posted the following:

Deleted previous post as the future of today illustrations is being discussed.

Illustrations are going to be reworked.

-Ilya Shkipin​

 


J.Quondam

CR 1/8
Yes, it's the hyped term, but it's not how it functions in reality. Machine learning is years away from mimicking a human-like neural network.
"Neural network" is just a technical term, and the method has been used in machine-learning and AI research (and biological brain research, of course) for a very long time. They have been deployed in a range of applications for years now. They are obviously a long way from human levels of complexity, but they are constructed to mimic the interactions between neurons in a brain. Wiki.
 


EzekielRaiden

Follower of the Way
I'm no expert on AI, but AFAIK, generative AIs, in general, do use neural networks.
Yeah, I would expect a neural network involved unless explicitly told otherwise. Otherwise, there's no "training" involved.

Edit: Oh, if anyone thought this meant literal neurons, as in something actually functioning the way human brain cells function, then no, there is no such thing.

A "neural network" in AI parlance is, as a general rule, a set of "layers" of "nodes." E.g. you could have 10 layers, each with 6 nodes in them. These nodes are the analogic "neurons' of the neural network. Generally, layers only take data from the layer before them (except the first layer, which naturally must take data from human input.) They get assigned assigned random instructions (weights) for how to pick up data from the layer which came before. So, for example, with a very small neural network of 3 layers with 4 nodes, you'd have...

Layer 1: A1, B1, C1, D1
Layer 2: A2, B2, C2, D2
Layer 3: A3, B3, C3, D3

The first layer receives input data, so its "training" is about what information it should factor into its calculation to determine what value it outputs. E.g., node A1 might pick out certain pixels of an image, while B1 would pick out a different set of pixels. A given node takes in all of its weighted input data, and then returns a number, which can be used by other nodes down the line. The sum total of the neural network is thus a HUGE list of parameters which all feed one layer into another until you reach the final layer, at which point its output is parsed for human use.
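To make that concrete, here's a rough Python sketch of the toy 3-layer, 4-node network above. The random weights and made-up input values are just placeholders for illustration, not anything from a real image model:

```python
# Toy fully connected network: 3 layers of 4 nodes each, random weights.
# Each node takes a weighted sum of every node in the previous layer (plus a bias),
# then applies a simple nonlinearity (ReLU). Purely illustrative.
import random

random.seed(0)

def make_layer(n_inputs, n_nodes):
    # One list of weights (plus a trailing bias term) per node in the layer.
    return [[random.uniform(-1, 1) for _ in range(n_inputs + 1)]
            for _ in range(n_nodes)]

def forward(layer, inputs):
    outputs = []
    for weights in layer:
        total = weights[-1]  # bias term
        total += sum(w * x for w, x in zip(weights, inputs))
        outputs.append(max(0.0, total))  # ReLU nonlinearity
    return outputs

# Layers 1-3: A1..D1, A2..D2, A3..D3 from the example above.
network = [make_layer(4, 4), make_layer(4, 4), make_layer(4, 4)]

data = [0.2, 0.9, 0.4, 0.7]   # stand-in for "pixels" fed to the first layer
for layer in network:
    data = forward(layer, data)

print(data)  # the final layer's output, which a human-facing program would interpret
```

Training is then just the process of nudging all those weights until the final layer's output is useful; with billions of weights instead of a few dozen, you get the scale of modern image models.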

For relatively simple applications of neural networks, such as optical character recognition, it is often possible to get at least a loose idea of what the network is doing, because you don't need very many layers or nodes to do that task. Often, nodes will develop weights which (for example) identify the presence of curved structures in specific areas, approximating certain kinds of convolutions (which are how regular software performs edge detection) in a much more application-specific way.
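For comparison, here's what one of those hand-written edge-detection convolutions looks like as code: a Sobel-style kernel run over a made-up grayscale grid, just to illustrate the kind of filter trained nodes sometimes end up approximating:

```python
# A hand-coded 3x3 convolution for vertical-edge detection (Sobel-style kernel).
# The "image" is a tiny invented grayscale grid: dark on the left, bright on the right.
KERNEL = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

image = [[0, 0, 0, 9, 9, 9],
         [0, 0, 0, 9, 9, 9],
         [0, 0, 0, 9, 9, 9],
         [0, 0, 0, 9, 9, 9]]

def convolve(img, kernel):
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            total = sum(kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
                        for ky in range(3) for kx in range(3))
            row.append(total)
        out.append(row)
    return out

# Large absolute values mark the boundary between the dark and bright halves.
for row in convolve(image, KERNEL):
    print(row)
```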

I don't think it's possible, at least at this time, to create software that can "enhance" artistic work in this way without using this layers-of-nodes structure. And, as said above, the common parlance for this "layers-of-nodes" structure, in computer science, is "a neural network."
 
Last edited:

Golroc

Explorer
Supporter
Yet its already been established that ai-image generation tools have been trained on the LAION dataset, a dataset which is non-profit and for non-commercial purposes. Using the LAION dataset for commercial purposes (like paid subscription models) should be a red flag for anyone trying to defend the current state of ai-gen tools.

Ai-tools use machine learning, they don't possess a "neural network" or anything of the sort. They are trained off of billions of images scraped off the internet, without permission, proper credit or compensation.

I am simply trying to stop disinformation from spreading further. AI-tools have absolutely no human qualities, please stop trying to compare the methods of living artists to that of machines that learn by algorithm.
I appreciate you trying to stop disinformation - but please read up on machine learning, before accusing others of spreading misinformation and doing the very same thing yourself because of an apparent lack of knowledge on the subject. It doesn't help your (admirable) fight for the livelihood of artists when you accuse others of lacking technical insight, and in the process of doing so expose yourself. And that's on top of the strawman argument you make - because nowhere did I compare the methods of living artists to that of generative AI. Please educate yourself before you start pointing fingers. And please don't put words in my mouth.

Addendum (edited in): I am not an advocate of equating generative AI tools with human artists. But I am an advocate for finding ways to use AI technology to empower artists and creatives, reduce exploitative labor practices, and of course do so in a way that maintains respect for the legal rights and reputation of artists: amateurs and professionals, commercial and fine art. I believe that turning the topic of generative AI into a polarized for/against discussion hands the initiative to the people with the worst motives and intentions. It is imperative to pursue ways to use AI in positive ways. You may disagree with my belief that AI can be used in positive ways. That's fine; disagreement can lead to interesting discussion. There are more than two possible positions on this topic.
 
Last edited:

Art Waring

halozix.com
I appreciate you trying to stop disinformation - but please read up on machine learning, before accusing others of spreading misinformation and doing the very same thing yourself because of an apparent lack of knowledge on the subject. It doesn't help your (admirable) fight for the livelihood of artists when you accuse others of lacking technical insight, and in the process of doing so expose yourself. And that's on top of the strawman argument you make - because nowhere did I compare the methods of living artists to that of generative AI. Please educate yourself before you start pointing fingers. And please don't put words in my mouth.
I'm not actually accusing anyone (apologies if that's how it seems); it's an attempt to separate what humans do from how machines are trained on datasets. Most of the pro-AI comments on these forums and elsewhere present AI output as akin to human creativity. It was a generalization, so for that I do apologize.

I do agree about educating yourself to be better informed on a subject.

EDIT: As to your post edit, I am not 100% against AI tools, but I do advocate for ethical guidelines for the use of AI tools, fair compensation for artists, credit, and the option for artists to opt in or out of any given dataset.
 
Last edited:

Scribe

Legend
We have at least 2 other AI focused threads at the moment, that have at times meandered into 'what exactly are these AI programs doing'.

I have to wonder whether Wizards knew (considering the artist's profile, I assume so), whether they would accept AI art now, and how long ago this work was accepted.
 

Abstruse

Legend
A lot of the terminology around generative algorithms is chosen specifically to evoke concepts that just aren't there, starting with the very name "AI". It's not artificial intelligence like having a chat with HAL 9000 or Data from Star Trek painting at his easel. It's autocorrect with delusions of grandeur. It's the Chinese Room claiming it speaks fluent Mandarin.

The same goes for "machine learning" and "neural network" and all the other buzzwords thrown around. They're attempting to make you think of the science fiction/futurist ideas behind those terms to describe something that they're not. It would be like taking a catapult and calling it a "Space Travel Device" because theoretically sometime in the future the technology might advance to the point of being able to send something into orbit despite it not doing that now and having no way of doing it in the near future.
 


Weiley31

Legend
While I do agree that using AI art to "make" images outright does suck, I don't see an issue with using it for minor touch-ups, like tightening up an image. The artist is still drawing and getting paid for their work, NOT having it replaced by completely AI-generated art.
 

bedir than

Full Moon Storyteller
The idea that generative AI was never trained on original art is proven false when generative AI art has included the signatures of original artists in the works.

Also, this usage isn't generative AI, so it seems a non sequitur to argue about generative AI when what was done wasn't that.
 

