D&D 5E Glory of the Giants' AI-Enhanced Art

AI artist uses machine learning to enhance illustrations in Bigby.

The latest D&D sourcebook, Bigby Presents: Glory of the Giants, comes out in a couple of weeks. However, those who pre-ordered it on D&D Beyond already have access, and many are speculating on the presence of possible AI art in the book.

One of the artists credited is Ilya Shkipin, who does traditional, digital, and AI art. In an interview with AI Art Weekly in December 2022, Shkipin talked at length about their AI art, including the workflow involved.

On Twitter, Shkipin said more about the AI process used in Bigby [edit: the tweet has since been deleted, but its content is reproduced below], indicating that AI was used to enhance some of the art, and showing an example of the work.

There is recent controversy on whether these illustrations I made were ai generated. AI was used in the process to generate certain details or polish and editing. To shine some light on the process I'm attaching earlier versions of the illustrations before ai had been applied to enhance details. As you can see a lot of painted elements were enhanced with ai rather than generated from ground up.

-Ilya Shkipin​


[Image: ilya.png (earlier version of an illustration, before AI enhancement)]

[Image: ilia2.png (earlier version of an illustration, before AI enhancement)]


Discussions online look at more of the art in the book, speculating on the amount of AI involvement. There doesn't appear to be any evidence that any of the art is fully AI-generated.

AI art is controversial, with many TTRPG companies publicly stating that they will not use it. DriveThruRPG has recently added new policies regarding transparency around AI-generated content and a ban on 'standalone' AI art products, and Kickstarter has added similar transparency requirements, especially regarding disclosure of the data which is used to train the AI. Many artists have taken a strong stance against AI art, indicating that their art is being 'scraped' in order to produce the content.

UPDATE: Christian Hoffer reached out to WotC and received a response:

Have a statement from Wizards over the AI enhanced artwork in Glory of the Giants. To summarize, they were unaware of the use of AI until the story broke and the artwork was turned in over a year ago. They are updating their Artist guidelines in response to this.

Wizards makes things by humans for humans and that will be reflected in Artist Guidelines moving forward.

-Christian Hoffer​

The artist, Ilya Shkipin, has removed the initial tweet where the AI process is discussed, and has posted the following:

Deleted previous post as the future of today illustrations is being discussed.

Illustrations are going to be reworked.

-Ilya Shkipin​

 


Zardnaar

Legend
Yeah, but if you take a real look at those rather than just the thumbnail, you can see that so much of the fine linework is just jumbled. Look at the railings, the tire spokes on the car, the driver's side chair (or whatever the hell that thing poking up in there is), the feet of the people, the window panes on the left side of the corner of that factory, most of the background left of center behind that skeleton cyborg...

AI is really good at looking impressive at a glance. Once you look closer, it's like realizing you're looking at an illusion: you begin to see the artifice and the rest of it starts to fall apart. Same deal with the pictures here: they look okay on a glance, but if you actually look at what's going on, it starts to fall apart.

Edit: I think the best way to put it is that AI is good enough at getting close to what we think should be there that our minds can fill in what we think is there and think it's really good... until we look and see that it's not what we were mentally filling in, but something far less refined and far more chaotic.

Good point. Cheers, I had to enlarge them.
 


Snarf Zagyg

Notorious Liquefactionist
Yeah, but if you take a real look at those rather than just the thumbnail, you can see that so much of the fine linework is just jumbled. Look at the railings, the tire spokes on the car, the driver's side chair (or whatever the hell that thing poking up in there is), the feet of the people, the window panes on the left side of the corner of that factory, most of the background left of center behind that skeleton cyborg...

AI is really good at looking impressive at a glance. Once you look closer, it's like realizing you're looking at an illusion: you begin to see the artifice and the rest of it starts to fall apart. Same deal with the pictures here: they look okay on a glance, but if you actually look at what's going on, it starts to fall apart.

Edit: I think the best way to put it is that AI is good enough at getting close to what we think should be there that our minds can fill in what we think is there and think it's really good... until we look and see that it's not what we were mentally filling in, but something far less refined and far more chaotic.

I am going to say this, only to make a rather banal point.

This is also true of human artists. I have a long relationship with an artist, and while I lack the gene myself, I am constantly told about certain things to look for. One of the things I can't unremember is that a lot of artists, even ones who work commercially, have particular issues with hands or feet, so you will often see pictures where the hands/feet are badly drawn, or, once the artist is aware of it, pictures framed in such a way that you won't see the problem areas.

For that matter, a lot of good illustration is meant to be looked at from a certain distance because the lines are impressionistic; once you zoom in on a given area, it falls apart. Because it's not photo-realistic.

Or artists that seem to have no idea how the human body moves. Once you start to think about this, a lot of comic book art is ... suspect.

It's certainly the case that AI art has some known issues (the fingers/hands is one of them) and produces artifacts. However, what is notable is how quickly it has improved and how the most recent generation has already dealt with the artifacts of the prior generation. It's also notable how many artists have become interested in the use of AI as a productivity tool.

That said, this is all orthogonal to the underlying ethical issues. But while the ethical issues (and the legal issues, which are separate and different) need to be discussed, I would probably avoid the "AI art is bad" argument, simply because that's unlikely to remain an argument for very long.
 

There are several ways of framing the problem.

If the problem is... AI art is not good.
Then let the market decide; no artist is threatened, since... it is not good. Only producers of bad art might be threatened, but if their goal was to create bad art, they can become prompt engineers and sell bad art to their usual clients for a fraction of the effort needed so far. I do feel it's the weakest argument, since progress has been huge over the last few months and it's only a question of time before AI art is of sufficient quality to satisfy a large part of the audience (who, also, might not be expert enough to mind problematic details. Mickey Mouse has become a popular character despite missing a finger).

If the problem is... AI is illegally trained (from the claim that the authors didn't agree to license their art for that purpose),
Then the problem is transient. It won't take long to have AI trained in Turkmenistan, which according to the US patent office doesn't have copyright laws (whether because they never needed them or because they don't believe in a state-sponsored monopoly granted to authors). Also, even without locating a subsidiary in Turkmenistan, at some point enough people will get an Adobe Firefly licence (or another model trained only on public domain art, like mitsua-diffusion). Maybe not everyone, and certainly not the masses who just use AI to recreate the likeness of their RPG character to put on the character sheet without having to learn painting, or the masses who just want to generate images of skimpily-clad waifus, but professionals like the one discussed in this thread can certainly afford a Photoshop licence.

If the problem is... the claim that an artwork created with AI is a derivative work of, err, anything it was trained on,
Then it's an interesting legal discussion, which will probably have a lot of different answers across 190+ countries. For example, according to the Berne Convention, a derivative work must be a creative work that can enjoy copyright protection. As the US (apparently; I read it in this thread) ruled that AI art can't be copyrighted, by definition it's not a derivative work. But other jurisdictions might take a different approach.

If the problem is... the claim that models contain copyrighted works and so they can't be distributed
Then the problem is a lack of understanding on the part of the speaker, and the discussion would focus on educating people about how AI works, as it will be an increasing part of our lives. These tools are around the corner, and understanding AI is certainly a useful skill to acquire.

If the problem is "only legal, not ethical",
Then it's only a transient problem. In 100 or so years, all of the current art will be public domain. In the interim, all the existing public domain art will have made its way into digital form as Internet capacity grows, and it will be trivial to build a legal AI model. If the problem isn't ethical, all the questions about the future of artists and whether they can remain competitive are temporary, lasting only until the public domain absorbs enough artwork to make AI training viable. These questions are also moot if any of the 190+ countries decides that AI training is fair use -- a possible outcome. Both Google and Adobe are pushing for a "do not train" tag, implying that they are looking forward to an implicit-authorization regime (the second-best outcome for them after fair use).

If the problem is ethical, in the sense that copyright laws don't protect artists enough,
Then it's an interesting ethical discussion on the correct level of protection an artist should have over their creations. But it's not limited to images and could very well branch into adventure writing. It is also a topic that would cover the questions about the future of artists (once the current art is public domain and automatic artist replacement is legally feasible without copyright considerations).

If the problem is ethical in the sense that art is the product of an artist, so a random guy typing a prompt isn't doing art...
Then it's an ethical problem, but a very different one from those usually discussed. It's an interesting take, though it would discount collective artworks (where you can hire a team of artists and instruct them to paint parts of a canvas according to your general specification, and you'd be the copyright holder once you publish the art, even if you don't know what a brush is). While generally accepted, that practice can be an ethical topic as well.

If the problem is ethical in the sense that using an artist's name in a prompt is very close to plagiarizing said artist,
Then the discussion is part ethical, part technical. The technical aspect is in prompting/training: unless an artist has a very distinctive style, like Mucha, the names in the prompt don't really do a lot (plug in the names of different Impressionist artists and you'll get an Impressionist artwork, but it will be difficult to distinguish between Pissarro and Morisot). So the technical part would be to identify keywords that lead to the same result as mentioning an artist's name ("worthy of WotC"?), and to improve the captioning and training algorithms to avoid silly things like "since all art by X has a signature, I get closer to a work by X by generating a signature". It's also part ethical, because nothing in generative AI forces one to prompt with "in the style of Corot" instead of "in the style of a Barbizon school painter". The ethical problem might not lie with the tool, but with the way people are using it.

If the problem is ethical, in the sense that artists deserve income to live and do art,
Then it's an interesting ethical discussion on whether employment versus capital is an appropriate means of distributing wealth in a post-labor society. It isn't limited to art: the issue with "art" is just that artists usually enjoy their work, and we can guess that miners or factory workers didn't enjoy breaking their backs at those jobs. But it's very possible that truck drivers and taxi drivers actually enjoy driving (or airline pilots...), and yet there will soon be a point where they become unneeded.

I am not sure all those discussions can be solved in a single thread, where people will be approaching different things and answering each other while actually discussing widely different problems.
 

shadowoflameth

Adventurer
Was Ilya responsible for the dinosaurs? That is the more egregious issue. Using AI to enhance your own art is very, very similar to using any other digital tool. Using AI to enhance someone else's art is plagiarism IMO.
Doing it without permission would be plagiarism. Doing it on a project for hire, with the go-ahead from the publisher paying both artists for their work, is working on the project. And it isn't a mistake to 'miss' that an AI tool was used. As WotC rightly points out, there was no guideline against it at the time, either given to the artists or in their policies. In hindsight, they're thinking there should have been, because clearly some customers don't like it now, but they likely never gave it a thought when these works were commissioned.
 

No, because the AI tool being used scrapes other people's art, not just yours. There's no AI tool that can extrapolate from your one single piece of art without outside input. That requires originality, which is human input.
Okay, so none of the art on page 1 is hers?
This is not remotely how AI generated art works.
I think I'm asking about something else. Maybe I'm misunderstanding something, or becoming hyperfocused on a trivial point. But, as I understand it, the artist made some art and then polished it with an AI tool. She already made the art on her own. She did not get the art from the AI and then polish it on her own.

Scenario 2 is a problem. I'm not sure how much of a problem Scenario 1 is.
 

dave2008

Legend
Doing it without permission would be plagiarism. Doing it on a project for hire, with the go-ahead from the publisher paying both artists for their work, is working on the project. And it isn't a mistake to 'miss' that an AI tool was used. As WotC rightly points out, there was no guideline against it at the time, either given to the artists or in their policies. In hindsight, they're thinking there should have been, because clearly some customers don't like it now, but they likely never gave it a thought when these works were commissioned.
I don't think you understand how concept art works. That being said, I haven't seen the concept art or final work to speak on this case in an informed manner.
 

Parmandur

Book-Friend
I don't think you understand how concept art works. That being said, I haven't seen the concept art or final work to speak on this case in an informed manner.
The concept art in question is actually in the book, too.

Ilya did do intermediate paintings in the meantime, too, so it's not that the concept art was plagiarized (it was not), but the detailing work was from an AI that assuredly plagiarized other, uncredited work.
 

dave2008

Legend
The concept art in question is actually in the book, too.

Ilya did do intermediate paintings in the meantime, too, so it's not that the concept art was plagiarized (it was not), but the detailing work was from an AI that assuredly plagiarized other, uncredited work.
I had the impression the concept artist thought Ilya used their concept art directly -- which would be plagiarism. If not, it is just the standard AI art issues.
 

I am going to say this, only to make a rather banal point.

This is also true of human artists. I have a long relationship with an artist, and while I lack the gene myself, I am constantly told about certain things to look for. One of the things I can't unremember is that a lot of artists, even ones who work commercially, have particular issues with hands or feet, so you will often see pictures where the hands/feet are badly drawn, or, once the artist is aware of it, pictures framed in such a way that you won't see the problem areas.

For that matter, a lot of good illustration is meant to be looked at from a certain distance because the lines are impressionistic; once you zoom in on a given area, it falls apart. Because it's not photo-realistic.

Or artists that seem to have no idea how the human body moves. Once you start to think about this, a lot of comic book art is ... suspect.

Those are different kinds of mistakes and not really comparable.

Yes, at a certain level detail can fall apart, but the two don't fall apart in the same ways. For example, look at the railing for the stairs in the pipe room: it's not straight, but closer to a tangle of strings. People can shorthand things, make mistakes, do weird things with perspective, but this is a mistake caused by not being able to think, only imitate. That sort of artifacting isn't really comparable to human problems with art, and you can see it in the "this is meant to be a bow but it also melds with my hand" effect, or the typical nonsense clothing that AI produces.

So yeah, Rob Liefeld can suck at drawing feet and we're always going to have giant chest Captain America, but then again I'm not sure it's really directly comparable unless Shaft's bow starts blending through his hand and bending in weird ways. :p

It's certainly the case that AI art has some known issues (the fingers/hands is one of them) and produces artifacts. However, what is notable is how quickly it has improved and how the most recent generation has already dealt with the artifacts of the prior generation. It's also notable how many artists have become interested in the use of AI as a productivity tool.

That said, this is all orthogonal to the underlying ethical issues. But while the ethical issues (and the legal issues, which are separate and different) need to be discussed, I would probably avoid the "AI art is bad" argument, simply because that's unlikely to remain an argument for very long.

It's certainly possible that this occurs in the future, but at this point I think it's arguably a much harder problem to solve than they'd let on. Right now we have imitation machines that copy things without understanding them, and until they can understand, these sorts of problems are going to happen. More than that, I think we're probably going to see a lot more AI drift as they try to hammer these sorts of problems out, like we are seeing with the LLMs at the moment.
 

Snarf Zagyg

Notorious Liquefactionist
Those are different kinds of mistakes and not really comparable.

Yes, at a certain level detail can fall apart, but the two don't fall apart in the same ways. For example, look at the railing for the stairs in the pipe room: it's not straight, but closer to a tangle of strings. People can shorthand things, make mistakes, do weird things with perspective, but this is a mistake caused by not being able to think, only imitate. That sort of artifacting isn't really comparable to human problems with art, and you can see it in the "this is meant to be a bow but it also melds with my hand" effect, or the typical nonsense clothing that AI produces.

So yeah, Rob Liefeld can suck at drawing feet and we're always going to have giant chest Captain America, but then again I'm not sure it's really directly comparable unless Shaft's bow starts blending through his hand and bending in weird ways. :p

Let me start by saying that we have a mutual belief that Rob "Pouches Everywhere" Liefeld is so terrible at drawing feet, even Quentin Tarantino is like, "Naw, I'll pass."

That said, he was still a viable and famous commercial artist. Yes, there can be different points of failure, but I think you don't fully grasp how quickly the technology is improving, as well as how much better it is (generally) than what most people can do, including some artists.

When you add in the fact that it can quickly generate tons of images, which actual artists can choose to refine further ... well, I will reiterate my earlier point. Arguing the artistic merits of AI is likely to be a losing proposition.

This is orthogonal to the issues of ethics and legality, by the way. Just remember: it was, what, a little more than a year ago when we had a thread where people were posting their own nightmare images from an AI program (remember the Wombo threads?). Now we are discussing the idea that they will displace human artists.

It's going that quickly.

It's certainly possible that this occurs in the future, but at this point I think it's arguably a much harder problem to solve than they'd let on. Right now we have imitation machines that copy things without understanding them, and until they can understand, these sorts of problems are going to happen. More than that, I think we're probably going to see a lot more AI drift as they try to hammer these sorts of problems out, like we are seeing with the LLMs at the moment.

Based on my understanding of what is going on, I think you are incorrect, and I think that we are already seeing the transition with the newest models.

I think that a lot of people have a limited understanding of what is actually happening, and believe these models are simply "regurgitating" what they are fed, or that they are just advanced auto-completes. That is a useful way of thinking about the iterative process, but it is also a simplification that leads to incorrect views.

But that, and $5, will get you a cup of coffee at Starbucks. If you are right, we don't have to worry. So ... hopefully you're right!
 
