D&D 5E Glory of the Giants' AI-Enhanced Art

AI artist uses machine learning to enhance illustrations in Bigby.

The latest D&D sourcebook, Bigby Presents: Glory of the Giants, comes out in a couple of weeks. However, those who pre-ordered it on D&D Beyond already have access, and many are speculating on the presence of possible AI art in the book.

One of the artists credited is Ilya Shkipin, who does traditional, digital, and AI art. In an interview with AI Art Weekly in December 2022, Shkipin talked at length about their AI art, including the workflow involved.

On Twitter, Shkipin talked more [edit--the tweet has since been deleted but the content is below] about the AI process used in Bigby, indicating that AI was used to enhance some of the art, showing an example of the work.

There is recent controversy on whether these illustrations I made were ai generated. AI was used in the process to generate certain details or polish and editing. To shine some light on the process I'm attaching earlier versions of the illustrations before ai had been applied to enhance details. As you can see a lot of painted elements were enhanced with ai rather than generated from ground up.

-Ilya Shkipin​


[Attached images: earlier versions of the illustrations before AI enhancement]


Discussions online look at more of the art in the book, speculating on the amount of AI involvement. There doesn't appear to be any evidence that any of the art is fully AI-generated.

AI art is controversial, with many TTRPG companies publicly stating that they will not use it. DriveThruRPG has recently added new policies regarding transparency around AI-generated content and a ban on 'standalone' AI art products, and Kickstarter has added similar transparency requirements, especially regarding disclosure of the data which is used to train the AI. Many artists have taken a strong stance against AI art, indicating that their art is being 'scraped' in order to produce the content.

UPDATE- Christian Hoffer reached out to WotC and received a response:

Have a statement from Wizards over the AI enhanced artwork in Glory of the Giants. To summarize, they were unaware of the use of AI until the story broke and the artwork was turned in over a year ago. They are updating their Artist guidelines in response to this.

Wizards makes things by humans for humans and that will be reflected in Artist Guidelines moving forward.

-Christian Hoffer​

The artist, Ilya Shkipin, has removed the initial tweet where the AI process is discussed, and has posted the following:

Deleted previous post as the future of today illustrations is being discussed.

Illustrations are going to be reworked.

-Ilya Shkipin​

 


Golroc

Explorer
Supporter
There are several ways of framing the problem.

If the problem is ... (snip)
Very good post. One angle that I feel is sorely lacking is that the focus should be on the act of publishing art (and intellectual property in general) - not on the process. What someone does to produce graphics - be that person an artist, an amateur or even a hack - should not be the focus. The focus should be on:

A) Published art and text
It is a problem if someone publishes/distributes/sells art, text or imagery, that is grossly derivative or plagiarizing, to the extent that it violates intellectual property (and here the different legal framework across the world is a complicating factor). It shouldn't matter whether the production involved reproduction by human hands, "classic" software and/or AI*. If you rip off someone's work - you're doing a bad thing.

B) Software and services
It is a problem if software (desktop application or web-based service) contains copyrighted material. It is also a problem if it is capable of producing imagery which violates the intellectual property of others. You can't sell an application which reproduces such works. This is a bit more fuzzy than it seems at first. At one extreme you have "give me the full text of the book Shadow of Abracadara"; at the other extreme you have a word processor, which you can use to type in the full text of a book and then store it locally on your machine. Word processors are obviously not a problem - but the point at which a tool is too good at following instructions isn't quite as easily definable as one might imagine.

C) Training data
High-quality training data for neural networks is an extremely valuable commodity. It is not hard to crawl the internet for everything and train an AI. It is very hard and very expensive to curate a collection of material (be it art, text or something else). Some websites, like say Deviant Art or certain places for people to share fan fiction - even discussion forums like this one - are very good sources of high quality training data. Companies which make use of such data must obtain the consent of the owners of the material.

D) Selling training data is not selling intellectual property
This one I think is probably the most overlooked. An artist who sells the right to use work in training data, is not selling the right to reproduce said work. Those things must be kept apart. Why would a company want training data if they are not allowed to reproduce it? Well, that's because AI tooling can, when used by humans (and in the future perhaps even autonomously), produce (or contribute to the production of) creative works. It might not be art. It might be ugly. It might be derivative. But it can be something that if done by a human would be considered legal. And therefore training data has a value in making the AI tooling better at creating such things (better might not mean quality, it could also mean other things, so consider it broadly). Artists need to retain their intellectual property rights regardless of whether they consent to their work being used for training.

Right now the focus is very much on the interplay between training data and the capacity/tendency to reproduce in a way which violates intellectual property - and defining those lines. That's an important discussion. But it's not the only discussion. And it's really important for the future livelihood of artists that this is not reduced to a question of training data and what workflows are acceptable for artists who use AI tooling.


* By AI I refer to the definition used in computer science, which covers neural networks, even if they do not possess general intelligence or cognition of any kind. I am simply using it as an umbrella term for certain types of systems, not as a claim of those systems having any particular qualities.
 


A) Published art and text
It is a problem if someone publishes/distributes/sells art, text or imagery, that is grossly derivative or plagiarizing, to the extent that it violates intellectual property (and here the different legal framework across the world is a complicating factor). It shouldn't matter whether the production involved reproduction by human hands, "classic" software and/or AI*. If you rip off someone's work - you're doing a bad thing.

Or even that good, old photocopy. Also, it should only matter if the work is redistributed, because you can't rip someone off just by consuming for yourself. I know it might be controversial for some, but a person who, say, downloads an image to print on their character sheet is doing no harm, since they're using the artwork privately for their own purposes. The danger of mixing these points is that it could result in an objection to a technology in general, or to a particular source of models, foreclosing the possibility of using the technology in a non-infringing way (the "non copy-protected outcome" chosen in the US) or for purely private use, since you can't infringe copyright through private use.


B) Software and services
It is a problem if software (desktop application or web-based service) contains copyrighted material. It is also a problem if it is capable of producing imagery which violates the intellectual property of others. You can't sell an application which reproduces such works. This is a bit more fuzzy than it seems at first. At one extreme you have "give me the full text of the book Shadow of Abracadara"; at the other extreme you have a word processor, which you can use to type in the full text of a book and then store it locally on your machine. Word processors are obviously not a problem - but the point at which a tool is too good at following instructions isn't quite as easily definable as one might imagine.

Indeed, and I'd defend the word processor analogy. We don't regulate word processors, even though they can be used to retype the whole of Shadow of Abracadara. We make it illegal to distribute your home-typed Shadow of Abracadara books. It should be the same with AI. If it somehow were able to actively infringe, as in "write me a derivative work of an existing work, like the true ending of the Game of Thrones series", then it's the outcome that should be monitored, not the tool that can type out the book.


C) Training data
High-quality training data for neural networks is an extremely valuable commodity. It is not hard to crawl the internet for everything and train an AI. It is very hard and very expensive to curate a collection of material (be it art, text or something else). Some websites, like say Deviant Art or certain places for people to share fan fiction - even discussion forums like this one - are very good sources of high quality training data. Companies which make use of such data must obtain the consent of the owners of the material.

It's a contentious point, for some at least. For example, some have claimed that Adobe "strongarmed" them into relinquishing their rights because they didn't know/understand what they were relinquishing, and others say that they couldn't have understood AI back when they entered into the contract. So you'll find people appealing to ethics even after giving consent contractually. At the other end of the spectrum, you have the data-mining exception validated in the EU, where a non-profit (in many cases, public) research institution can basically ignore copyright as long as it doesn't benefit financially from the outcome. Some will find that acceptable (the EU, obviously, which weighed the overall benefit of having AI companies flourishing and paying taxes for the greater good); others might find it unfair to rights holders who put their work on the Internet.

D) Selling training data is not selling intellectual property
This one I think is probably the most overlooked. An artist who sells the right to use work in training data, is not selling the right to reproduce said work. Those things must be kept apart. Why would a company want training data if they are not allowed to reproduce it? Well, that's because AI tooling can, when used by humans (and in the future perhaps even autonomously), produce (or contribute to the production of) creative works. It might not be art. It might be ugly. It might be derivative. But it can be something that if done by a human would be considered legal. And therefore training data has a value in making the AI tooling better at creating such things (better might not mean quality, it could also mean other things, so consider it broadly). Artists need to retain their intellectual property rights regardless of whether they consent to their work being used for training.

Indeed. I think, however, that nobody ever suggested (apart maybe from a few who equate training with theft) that artists were losing any rights over their own work by allowing it to be used in training. It would be extreme if the data in the training set had to be transferred and not merely licensed for this use. Also, it would prevent using public domain works (because you can't appropriate them). I could see some cases where the trainer would own exclusive rights over the material used in training: a private company that wouldn't disclose anything about its database, or a country-wide effort (if China or India tasked its art teachers with each drawing a few specific pieces and captioning them well, much better than the LAION-5B crap, they wouldn't need as much work to create an effective training model), and they might want to host the next "digital Hollywood without pesky actors and writers..." Those are fringe cases, though. And TBH, if a sovereign entity wanted a model like that, it would most probably create an exception for public training databases, much like the exceptions created for public libraries to lend books.

Right now the focus is very much on the interplay between training data and the capacity/tendency to reproduce in a way which violates intellectual property - and defining those lines. That's an important discussion. But it's not the only discussion. And it's really important for the future livelihood of artists that this is not reduced to a question of training data and what workflows are acceptable for artists who use AI tooling.

TBH I am rather pessimistic about the livelihoods of many, many people, including those in many highly-paid jobs. But wealth-sharing methods in a society are political, so I won't say much more on this.

As to the precise point of reproducing art in a way that doesn't infringe copyright, it is easily within reach (in manners satisfactory or not to the artists), especially since it's a transient problem: at some point, there will be enough well-captioned, varied pictures in the public domain or under free licenses that developing a model with them will be hassle-free and not the sole purview of Adobe.

I am pretty sure that worrying regulations, or the fear thereof, will only spur companies toward developing this database sooner rather than later. Also, it is a problem that can be lessened by explaining better how the technology works. There are people who honestly think that models are hundreds of GB large because they must contain all the training images in compressed form (despite functional models existing that are smaller than a GB).

I guess many people also think that "in the style of Greg Rutkowski" is a prompt that produces art with Greg Rutkowski's actual style in it, while that's not the case. It just happened that the auto-captioning of fantasy images displayed on ArtStation labelled them as "art by Greg Rutkowski", irrespective of who actually drew the picture. So it is a keyword that orients the generation toward generic fantasy art, with no real link to Greg Rutkowski. If the ArtStation art had been labelled as art by George Washington, then you could ask the AI to draw dragons and pretty elves in the style of the late president's famous paintings... errr, or not. Dispelling these misconceptions would help allay the fear of plagiarism (anyone can check by interrogating CLIP with their own artworks: it will generally propose the styles of several artists, none of whom are you). I once CLIP-ed a photograph of a colleague I had just taken, and she was described as "an artwork by X, Y and Z", where those three were 19th-century painters. I suppose my Android phone has some dead painter's soul spliced into it ;-)
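The CLIP interrogation described above works by comparing embeddings: the model maps an image and a set of candidate captions into the same vector space, then ranks the captions by how close their vectors sit to the image's vector. A minimal sketch of that matching step, using small made-up embedding vectors and invented caption labels in place of a real CLIP model (real CLIP embeddings are 512+ dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the two L2-normalised vectors.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

# Invented 4-d "embeddings" standing in for real CLIP encoder outputs.
image_embedding = np.array([0.9, 0.1, 0.3, 0.0])

caption_embeddings = {
    "fantasy art, dragons": np.array([0.8, 0.2, 0.4, 0.1]),
    "portrait photograph":  np.array([0.1, 0.9, 0.0, 0.2]),
    "abstract watercolour": np.array([0.2, 0.1, 0.9, 0.5]),
}

# The "interrogation": rank candidate captions by similarity to the image.
scores = {label: cosine_similarity(image_embedding, vec)
          for label, vec in caption_embeddings.items()}
best = max(scores, key=scores.get)  # the caption nearest the image
```

This is why a style keyword behaves the way the post describes: the model only knows which caption vectors land near which image vectors in training, not who actually painted anything, so a mislabelled caption becomes a generic pointer to whatever images it was attached to.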
 

jasper

Rotten DM
I mean, those would have to be actual facts and they very much aren't, especially if you've been following the discourse around this even at a cursory level. It's so weird that people want to find ways of making this some sort of benign mistake when Wizards pointedly did not take a stance on AI art for D&D when it absolutely did with MTG.
Montgomery Trade Gooses. Oh, Magic. Since I don't play, I don't care about MTG, nor follow the MTG policy. Was it a BLANKET policy or only on the Magic side of the house? If it was on the Magic side of the house, it doesn't matter to the D&D side of the house.
 

jasper

Rotten DM
It's not that simple. There are still a lot of questions around the dinosaur image, but what people are worried about is this: they hired April to do concept art, credited her as a concept artist, and compensated her as a concept artist. IF someone else then fed that concept art into an AI engine and the result was used as finished interior art without crediting and compensating April as an interior artist (and maybe even giving credit and compensation to whoever the second person was), then that is a far cry from "they got paid, the deal was done."

A closer comparison would be an author signing away the rights for a single movie, then someone else just changes the characters' names but keeps everything else about the story the same, then they make multiple movies out of it without crediting the original author or compensating them beyond the original rights buy.

Or someone tracing April's art on a separate layer of Photoshop, adding some different colors, and then having it published as finished interior art, again with no credit or compensation to April for finished interior art.

It's still unclear exactly what happened with that image and who did what with it, but at the very least there's more serious questions with that one than the others.
Like Starship Troopers. I don't recall the later cartoons or movies mentioning Heinlein. Did they even mention Robert in the first movie?
 

Montgomery Trade Gooses. Oh, Magic. Since I don't play, I don't care about MTG, nor follow the MTG policy. Was it a BLANKET policy or only on the Magic side of the house? If it was on the Magic side of the house, it doesn't matter to the D&D side of the house.

That was my point. D&D not having a policy while Magic does suggests neglect at best and possible encouragement at worst.
 

jasper

Rotten DM
That was my point. D&D not having a policy while Magic does suggests neglect at best and possible encouragement at worst.
This is called monday night quarter backing. I support three shops in my building doing software. These good people even eat lunch together, and I have to have multiple meetings with them to get one software standard. So neither neglect nor encouragement - just typical management.
 

This is called monday night quarter backing. I support three shops in my building doing software. These good people even eat lunch together, and I have to have multiple meetings with them to get one software standard. So neither neglect nor encouragement - just typical management.

It's called Monday Morning Quarterbacking, and the rest of your post is a non sequitur.
 

Commissioning concept art and then making a directly derivative work is incredibly lazy and silly - regardless of whether one is using AI or tasking an artist with turning the concept art into an illustration. Both of these are perfectly capable of creating an illustration based on the concept art which isn't directly and grossly derivative (and yes, the human artist will do a much better job, but AI systems can turn concept art into illustrations). The whole dinosaur thing thus highlights another problem with the art direction for this book. Concept art isn't "draft" art. Even if an artist signs over the rights when delivering the work - it's very odd for a publisher to treat concept art in this way. Ilya can certainly do better work (AI, mixed and traditional), so if that was the artist doing this (which I don't think has been confirmed?) then that is also weird. To me this feels like the book was made on an extremely tight budget, which isn't really acceptable considering the cost of the book and the expectations for such an established game and publisher.
Have you seen the concept art and the final works? Someone posted them a few days ago, and the final works do not appear to be copies of the concept art. They are original works IMO.
 

So it's about plagiarism. AI art doesn't create art. It regurgitates it. It doesn't make anything new. And you can see that when artists' mutated signatures appear in the output, which is a thing that happens. That's not what a human artist drawing 'inspiration' from their predecessors does.

Until there is an opt-in framework whereby artists can be compensated for having their art scanned and reprinted (albeit mutated), there are ethical issues.

And to those who compare it to the engine replacing the horse, or whatever--sure. But the engine doesn't keep the horse around and suck on its calories. It uses its own. AI needs the artist to copy from. No artist, no AI.

So we're not talking 'inspiration' or 'influence' - AI copies art to the extent that you can sometimes still see the original signatures.

That's the difference.
Although an AI could sample just an artist's own art. Ilya was producing art before AI. They could have the AI sample just their own work, couldn't they? Probably not as effective because of the sample size, but they are also not using it to create something from scratch either.
 

Morrus

Well, that was fun
Staff member
Although an AI could sample just an artist's own art. Ilya was producing art before AI. They could have the AI sample just their own work, couldn't they? Probably not as effective because of the sample size, but they are also not using it to create something from scratch either.
It could. But it doesn’t. If it did, there would be no ethical issues at all.
 
