Multiple "AI Art" Updates and Controversies in Tabletop Gaming

BackerKit bans algorithmically generated artwork, Wizards of the Coast replaces it, and Essen Spiel is caught using it.

Three news stories came out this week about algorithmic generation, aka "AI art", in the tabletop gaming industry.

[Image: BackerKit's AI policy announcement]
BackerKit announced that, effective October 4, no project will be allowed to include writing or art assets created entirely by algorithmic generation, aka “AI”. From the blog post:

At BackerKit, our team is passionate about people’s passions. For ten years, we’ve supported creators in their journey to launch projects and build thriving creative practices and businesses. We’ve developed deep relationships and respect for the people who breathe life into crowdfunding projects, and we are committed to defending their well-being on our platform.

That’s why we are announcing a new policy that aims to address growing concerns regarding ownership of content, ethical sourcing of data, and compensation for the process of creating content. […]

As part of this consideration, BackerKit has committed to a policy that restricts the use of AI-generated content in projects on our crowdfunding platform.

This policy goes into effect on October 4, 2023.

[…] This policy emphasizes that projects on BackerKit cannot include content solely generated by AI tools. All content and assets must first be created by humans.

This doesn’t impact content refined with AI-assisted tools like “generative content fill” or “object replacement” (image editing software functions that help blend or replace selected portions of an image), other standard image adjustment tools (saturation, color, resolution,) or AI language tools that refine human-created text with modifications to spelling, grammar, and syntax.

Software assisted by AI, such as transcribers or video tracking technology are permitted under these guidelines. However, software with the purpose to generate content using AI would not be permitted.

The post includes image examples of what content is and is not allowed. Additionally, BackerKit will add an option to the back end that will allow creators to “exclude all content uploaded by our creators for their projects from AI training”. This exclusion is the default: creators who want their work used for training generative algorithms must go in and specifically allow it.

[Image: altisaur artwork from Bigby Presents: Glory of the Giants]

This move comes alongside a pair of recent controversies in tabletop gaming. Last month, Wizards of the Coast came under fire when it was revealed that a freelance artist had used algorithmic generation for artwork included in Bigby Presents: Glory of the Giants. Wizards of the Coast quickly updated its stance on algorithmic generation with a statement that the artwork would be removed from the D&D Beyond digital copies of the book and that new language banning the use of algorithmic generation would be added to contracts.

This week, Gizmodo reporter Linda Codega reported that the artwork in the D&D Beyond version of Bigby Presents has now been replaced with new art. No announcement was made about the new artwork, and when Gizmodo contacted Wizards of the Coast for a statement, they were directed back to the statement made in August. The artist who used algorithmic generation, Ilya Shkipin, has been removed from the book's art credits, and the artwork has been replaced by works from Claudio Pozas, Quintin Gleim, Linda Lithen, Daneen Wilkerson, Daarken, and Suzanne Helmigh.


Meanwhile, Essen Spiel, the largest tabletop gaming convention in Europe, recently ran into the same controversy when algorithmically generated artwork appeared in promotional material for the event, including the convention's official app, promotional posters, and tickets.

Merz Verlag, the parent company for the convention, responded to a request for comment from Dicebreaker:

"We are aware of this topic and will evaluate it in detail after the show. Right now please understand that we cannot answer your questions at this moment, as we have a lot to do to get the show started today," said a representative for Merz Verlag.

"Regarding the questions about Meeps and timing, I can tell you quickly that the marketing campaign [containing AI artwork] has been created way before we had the idea to create a mascot. The idea of Meeps had nothing to do with the marketing campaign and vice versa."

Meeps, a board game-playing kitten entirely innocent of the controversy (because who could blame a cute kitty?), is the convention's new mascot. Announced this past July, the character was voted on by fans and designed by illustrator Michael Menzel.
 

Darryl Mott

J.Quondam

CR 1/8
For anyone who doesn't trust tech info on a D&D forum* and is interested in the actual methodologies behind generative "AI", there are a couple of fairly recent, LONG semi-technical articles on tech journalism sites aimed at the layperson. They're deep dives, but very informative. If those are too much, it's easy enough to find lighter-weight info on reliable tech sites.

The main takeaway, though, is that there are several different "AI" technologies out there, and they are all very complex, imperfectly understood, and evolving very rapidly. Thus simplistic declarations by laypersons - pro or con - about what AIs do or how AIs work are probably wrong.

Among non-technical readers, imo it's much more credible and productive to opine on the ethical questions surrounding these AIs, such as how to manage their training data; how people interact with them; or how their outputs are managed/used.




* Hint: Don't trust tech info on a D&D forum. That applies to other fields as well, including but not limited to: law, medicine, elections, dream interpretation, pie recipes, powerlifting, and also quantum psycho-transposition, probably.
 




Dire Bare

Legend
There's somebody standing there for when customers have a problem at self check-out.

At least, where I live. Everywhere I have lived.
Not where I live. It varies store by store.

Some stores have one employee monitoring 4 to 6 self-checkouts. That usually works out okay.

But at Wal-Mart . . . some (not all) of my local Wal-Marts are down to two old-fashioned checkout lanes staffed by cashiers, having replaced the rest with huge banks of self-checkouts monitored by just one "manager" (who can actually help if something goes wrong) and one or two employees who only seem capable of pointing out empty check-outs to customers . . . . it blows my mind that Wal-Mart gave me MORE reasons to avoid their stores as much as possible.

The reduction in staff doesn't bother me too much, as we have low unemployment in the US right now and many retail stores (and other frontline work) are still struggling to fill positions. Replacing manned check-outs with self check-outs is like AI in some ways . . . it can be done well, respecting workers, creating efficiency for workers and customers, and lowering prices . . . and it can be done poorly, making workers' and customers' experiences worse, creating chaos, and causing prices to rise as theft increases.

Short-term vs. long-term thinking. Guess which style of management most US based corporations are best at? Not the kind that respects workers and end-users . . . and that is our current problem with AI-generated art.
 

FrogReaver

As long as i get to be the frog
LLMs don't copy anything. Instead, when they are told to do Thing A, based on their training, they apply a probability of what follows A.
It doesn't matter how an LLM copies something; it matters whether it does. That it does so by assigning probabilities to what follows A is interesting but not particularly important. On another thread I provided a D&D example using elves: about 75% of the output the LLM gave for my prompt was taken directly, in sequence, from the 5e PHB. Done by any human, that would be plagiarism.
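To make that concrete, here is a minimal sketch of how verbatim, in-sequence overlap can be quantified with n-grams. This is my own illustration, not the method used in that thread, and the texts below are made-up stand-ins rather than actual PHB content:

```python
# Minimal sketch (illustrative only): estimate how much of a generated
# passage appears verbatim, in sequence, in a source text.

def ngrams(tokens, n):
    """Return the set of all consecutive n-token windows."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def verbatim_overlap(generated: str, source: str, n: int = 5) -> float:
    """Fraction of the generated text's n-grams found verbatim in the source."""
    gen_tokens = generated.lower().split()
    src_grams = ngrams(source.lower().split(), n)
    gen_grams = [tuple(gen_tokens[i:i + n])
                 for i in range(len(gen_tokens) - n + 1)]
    if not gen_grams:
        return 0.0
    return sum(g in src_grams for g in gen_grams) / len(gen_grams)

# Hypothetical stand-in texts, NOT actual rulebook content.
source = ("elves love magic and they love the natural world "
          "with a deep and abiding passion")
generated = "elves love magic and they love the natural world more than gold"
print(f"{verbatim_overlap(generated, source):.0%} of 5-grams copied in sequence")
```

A high score doesn't by itself prove copying, but it is the kind of measurable, in-sequence reproduction the post above is describing.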

We absolutely need to discuss the important issues of how generative systems are trained, but we shouldn't obfuscate the real issues by relying on imprecise and emotional terminology.
I agree here.

Generative AI is absolutely not going away, any more than photography might. The only way to integrate it into our creative future is to understand what it actually is and work to make sure it is ethically trained. Trying to force it back into the bottle is a fool's errand.
What if AI cannot be ethically trained?
 

FrogReaver

As long as i get to be the frog
I don't believe that imitating a signature is classified as plagiarism. It may be considered fraud if it's done with the intent to mislead about the origin of a work, but that won't apply here since AI doesn't have any intent.
I think the argument is more - if AI even copies signatures it’s extremely likely they are copying most of everything else too.
 

I think the argument is more - if AI even copies signatures it’s extremely likely they are copying most of everything else too.
Interesting that you word it as "AI even copies", as if signatures should be the last thing copied. The way the process works, the opposite is what's really happening. Signatures are often repeated in almost the same form across a large number of works, so the AI will calculate that images in the style of [artist] are statistically likely to contain something that looks like the signature of [artist]. But that only goes for the signature, not the rest of the image.
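As a toy illustration of that statistical point (my own sketch, with hypothetical "features" standing in for whatever an image model actually learns):

```python
# Toy sketch: an element repeated near-identically across an artist's works
# (a signature) becomes a near-certain prediction for "in the style of
# [artist]", while varied subject matter stays loosely determined.
from collections import Counter

# Hypothetical feature sets standing in for what a model learns per image.
works_by_artist = [
    {"signature_glyph", "castle", "sunset"},
    {"signature_glyph", "dragon", "forest"},
    {"signature_glyph", "knight", "sunset"},
    {"signature_glyph", "dragon", "mountain"},
]

counts = Counter(feature for work in works_by_artist for feature in work)
total = len(works_by_artist)

for feature, count in counts.most_common():
    print(f"P({feature} | style of artist) = {count / total:.2f}")

# signature_glyph comes out at 1.00, so it gets reproduced almost verbatim;
# individual subjects (dragon, sunset, ...) sit at 0.50 or below, so the
# model recombines them rather than copying any one image wholesale.
```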
 

talien

Community Supporter
Interesting that you word it as "AI even copies", as if signatures should be the last thing copied. The way the process works, the opposite is what's really happening. Signatures are often repeated in almost the same form across a large number of works, so the AI will calculate that images in the style of [artist] are statistically likely to contain something that looks like the signature of [artist]. But that only goes for the signature, not the rest of the image.
Here's the thing: the counterargument from AI developers is that AI doesn't actually copy signatures. It reproduces the style of a signature so faithfully, because it was "trained" on your signature, that it generates something that looks an awful lot like your signature.

That may be technically true. But the whole concept of a signature is imperfect -- signatures change over time as you age, for example -- so we humans assign significance to something the AI has no way of knowing is important.

Or to put it another way, we have programmers who never considered that ownership might be a problem when they scooped up billions of pieces of art. Signatures are the intersection of ownership and art, and AI is built by its very nature to treat all art as just data.

In short: it doesn't matter how the AI generated my signature. It still looks like my signature. It just has to look close enough for people (i.e., humans) to get confused, and then we have problems.
 
