Multiple "AI Art" Updates and Controversies in Tabletop Gaming

BackerKit bans it, Wizards of the Coast replaces it, and Essen Spiel is caught using algorithmically generated artwork.

Three news stories came out this week about algorithmic generation, aka "AI art," in the tabletop gaming industry.

BackerKit announced that, effective October 4, no project will be allowed to include writing or art assets created entirely by algorithmic generation, aka “AI”. From the blog post:

At BackerKit, our team is passionate about people’s passions. For ten years, we’ve supported creators in their journey to launch projects and build thriving creative practices and businesses. We’ve developed deep relationships and respect for the people who breathe life into crowdfunding projects, and we are committed to defending their well-being on our platform.

That’s why we are announcing a new policy that aims to address growing concerns regarding ownership of content, ethical sourcing of data, and compensation for the process of creating content. […]

As part of this consideration, BackerKit has committed to a policy that restricts the use of AI-generated content in projects on our crowdfunding platform.

This policy goes into effect on October 4, 2023.

[…] This policy emphasizes that projects on BackerKit cannot include content solely generated by AI tools. All content and assets must first be created by humans.

This doesn’t impact content refined with AI-assisted tools like “generative content fill” or “object replacement” (image editing software functions that help blend or replace selected portions of an image), other standard image adjustment tools (saturation, color, resolution), or AI language tools that refine human-created text with modifications to spelling, grammar, and syntax.

Software assisted by AI, such as transcribers or video tracking technology are permitted under these guidelines. However, software with the purpose to generate content using AI would not be permitted.

The post includes image examples of what content is and is not allowed. Additionally, BackerKit will add a back-end option for creators to “exclude all content uploaded by our creators for their projects from AI training”. This exclusion is on by default: creators who want their work used to train generative algorithms must go in and specifically allow it.


This move comes alongside a pair of recent controversies in tabletop gaming. Last month, Wizards of the Coast came under fire when it was revealed that a freelance artist had used algorithmic generation for artwork included in Bigby Presents: Glory of the Giants. Wizards of the Coast quickly updated its stance on algorithmic generation with a statement that the artwork would be removed from the D&D Beyond digital copies of the book and that new language banning the use of algorithmic generation would be added to contracts.

This week, Gizmodo reporter Linda Codega reported that the artwork in the D&D Beyond version of Bigby Presents has now been replaced with new art. No announcement was made about the new artwork, and Gizmodo’s attempts to contact Wizards of the Coast for a statement were redirected to the statement made in August. The artist who used algorithmic generation, Ilya Shkipin, has been removed from the art credits of the book, and the artwork has been replaced by works by Claudio Pozas, Quintin Gleim, Linda Lithen, Daneen Wilkerson, Daarken, and Suzanne Helmigh.


Meanwhile, the largest tabletop gaming convention in Europe, Essen Spiel, recently ran into the same controversy when algorithmically generated artwork was used in promotional material for the convention, including the convention’s official app, promotional posters, and tickets for the event.

Merz Verlag, the parent company for the convention, responded to a request for comment from Dicebreaker:

"We are aware of this topic and will evaluate it in detail after the show. Right now please understand that we cannot answer your questions at this moment, as we have a lot to do to get the show started today," said a representative for Merz Verlag.

"Regarding the questions about Meeps and timing, I can tell you quickly that the marketing campaign [containing AI artwork] has been created way before we had the idea to create a mascot. The idea of Meeps had nothing to do with the marketing campaign and vice versa."

Meeps, a board game-playing kitten entirely innocent of the controversy (because who could blame a cute kitty?), is the convention’s new mascot. Announced this past July, the mascot was voted on by fans and designed by illustrator Michael Menzel.
 


Darryl Mott


Umbran

Mod Squad
Staff member
Supporter
IT folks have been watching our jobs get shipped overseas for 20+ years, and crunch is a thing.

The sad bit is that crunch is a self-inflicted thing. Crunch is the result of fitting unrealistic timelines onto a project, and not creating an environment in which folks can talk realistically about timelines.
 


Bill Zebub

“It’s probably Matt Mercer’s fault.”
The sad bit is that crunch is a self-inflicted thing. Crunch is the result of fitting unrealistic timelines onto a project, and not creating an environment in which folks can talk realistically about timelines.

Born also of ignorance of Hofstadter's Law.
 

Scribe

Legend
The sad bit is that crunch is a self-inflicted thing. Crunch is the result of fitting unrealistic timelines onto a project, and not creating an environment in which folks can talk realistically about timelines.

"Self Inflicted" doesnt quite have the ring to it I would like. I doubt most of us get to set the timelines. ;)

Born also of ignorance of Hofstadter's Law.

This certainly is true.
 

How things work is usually the difference between ethical and not ethical.

Example: A pair of shoes made in a factory. Ethical! Except if the factory is using slave/child labor. Unethical!

I’d say the details actually matter a great deal!

The difference is that when we have a hypothetical problem of toast being poisonous, focusing on how the toaster works is a red herring. The toast only stops being poison if it isn't made.
 

FrogReaver

As long as i get to be the frog
The difference is that when we have a hypothetical problem of toast being poisonous, focusing on how the toaster works is a red herring. The toast only stops being poison if it isn't made.
This situation isn’t analogous to that at all.
 


aramis erak

Legend
I don't believe that imitating a signature is classified as plagiarism. It may be considered fraud if it's done with the intent to mislead about the origin of a work, but that won't apply here since AI doesn't have any intent.
In US jurisprudence, intent follows the bullet. Or the tool. Which is why, prior to a bit of regulatory jiggery-pokery, the tools to commit copyright violations were banned. But when the right to format shift was elucidated by the regulator, the tools to format shift were unbanned in the US. Except for those caught using them for copyright violations.

If the intent of the tool creator or the trainer was to infringe ...
 

aramis erak

Legend
Fun fact, it doesn't actually matter how the AI works. That is a complete non-sequitur and a frankly repetitive argument thats always trotted out in these discussions.

If a given image-based AI cannot do what it does without consuming the art of human beings, and in turn if said art is taken and used (nevermind sold) for this purpose without permission and compensation, then there is a huge problem. It literally does not matter how the AI works. At all. That was never the issue.
No human artist whose works are worth paying for hasn't had the same level of input from seeing others' works on TV, in movies, in books, and in life.

Every bit of art I do (not bloody much) is influenced by the works of my grandfather (skilled, but largely unknown), of Bob Ross, and of the dozens of history texts I've studied from, the hundreds of RPGs I've read...

At some point plagiarism ceases to be a valid claim and research becomes a valid claim.

At present, many are arguing that for AI it should be X, and for people Y; I find that not only unconvincing, but ethically more bankrupt than just arguing for a common standard.

I know guys who can draw in the same style as Kevin Siembieda; hell, many of the artists he hires do styles very close to his own. Palladium has a very narrow window of art style... Are the artists he hires plagiarists? NO! He pays them for that consistency.

FASA, now... '80s FASA was rife with rotoscoped portraits in the STRPG and Battletech lines... Some are of later-notable actors... one's a rotoscope of Michael Clarke Duncan's headshot from a magazine... Were they plagiarists?

No, they met the "15%" requirement of the era.

Very few artists lack a wide exposure to art. They just can't recall all of what has set their network up.
 


talien

Community Supporter
So now: Artists Are Losing the War Against AI

But as the article points out:

In theory, opting out should provide artists with a clear-cut way to protect a copyrighted work from being vacuumed into generative-AI models. They just have to add a piece of code to their website to stop OpenAI from scraping it, or fill out a form requesting that OpenAI remove an image from any training datasets. And if the company is building future models, such as a hypothetical DALL-E 4, from scratch, it should be “straightforward to remove these images,” Alex Dimakis, a computer scientist at the University of Texas at Austin and a co-director of the National AI Institute for Foundations of Machine Learning, told me. OpenAI would prune opted-out images from the training data before commencing any training, and the resulting model would have no knowledge of those works.

In practice, the mechanism might not be so simple. If DALL-E 4 is based on earlier iterations of the program, it will inevitably learn from the earlier training data, opted-out works included. Even if OpenAI trains new models entirely from scratch, it is possible, perhaps even probable, that AI-generated images from DALL-E 3, or images produced by similar models found across the internet, will be included in future training datasets, Alex Hanna, the director of research at the Distributed AI Research Institute, told me. Those synthetic training images, in turn, will bear traces of the human art underlying them.
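For reference, the "piece of code" the quoted article alludes to is most likely a robots.txt directive addressed to OpenAI's documented GPTBot crawler; the article itself doesn't name the mechanism, so treat this as a rough sketch rather than the exact procedure it describes. On a hypothetical artist's site (example.com is a placeholder), the file served at the site root would look like this:

# robots.txt at https://example.com/robots.txt
# Ask OpenAI's web crawler not to fetch anything from this site.
User-agent: GPTBot
Disallow: /

As the quote itself points out, this only affects future crawling, relies on the crawler voluntarily honoring robots.txt, and does nothing about work already scraped into earlier training sets.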
 
