ENnies To Ban Generative AI From 2025


The ENnie Awards have announced that, from 2025, products containing content made by generative AI will not be eligible for the awards.

Established in 2001, the ENnies are the premier tabletop roleplaying game awards, held every year in a ceremony at Gen Con. They were created right here on EN World, and remained affiliated with EN World until 2018.

The decision on generative AI follows a wave of public criticism of the policy announced in 2023, under which products containing generative AI were eligible but the generative AI content itself was not--so an artist whose art was on the cover of a book could still win an award for their work even if there was AI art inside the book (or vice versa). The new policy makes the entire product ineligible if it contains any generative AI content.

Generative AI as a whole has received widespread criticism in the tabletop industry over the last couple of years, with many companies--including D&D's owner Wizards of the Coast--publicly announcing their opposition to its use on ethical grounds.

The new policy takes effect from 2025.

The ENNIE Awards have long been dedicated to serving the fans, publishers, and broader community of the tabletop role-playing game (TTRPG) industry. The ENNIES are a volunteer-driven organization whose members generously dedicate their time and talents to celebrating and rewarding excellence within the TTRPG industry. Reflecting changes in the industry and technological advancements, the ENNIE Awards continuously review their policies to ensure alignment with community values.

In 2023, the ENNIE Awards introduced their initial policy on generative AI and Large Language Models (LLMs). The policy recognized the growing presence of these technologies in modern society and their nuanced applications, from generating visual and written content to supporting background tasks such as PDF creation and word processing. The intent was to encourage honesty and transparency from creators while maintaining a commitment to human-driven creativity. Under this policy, creators self-reported AI involvement, and submissions with AI contributions were deemed ineligible for certain categories. For example, products featuring AI-generated art were excluded from art categories but remained eligible for writing categories if the text was entirely human-generated, and vice versa. The organizers faced challenges in crafting a policy that balanced inclusivity with the need to uphold the values of creativity and originality. Recognizing that smaller publishers and self-published creators often lack the resources of larger companies, the ENNIE Awards sought to avoid policies that might disproportionately impact those with limited budgets.

However, feedback from the TTRPG community has made it clear that this policy does not go far enough. Generative AI remains a divisive issue, with many in the community viewing it as a threat to the creativity and originality that define the TTRPG industry. The prevailing sentiment is that AI-generated content, in any form, detracts from a product rather than enhancing it.

In response to this feedback, the ENNIE Awards are amending their policy regarding generative AI. Beginning with the 2025-2026 submission cycle, the ENNIE Awards will no longer accept any products containing generative AI or created with the assistance of Large Language Models or similar technologies for visual, written, or edited content. Creators wishing to submit products must ensure that no AI-generated elements are included in their works. While it is not feasible to retroactively alter the rules for the 2024-2025 season, this revised policy reflects the ENNIE Awards' commitment to celebrating the human creativity at the heart of the TTRPG community. The ENNIES remain a small, volunteer-run organization that values the ability to adapt quickly when necessary, despite the challenges inherent in their mission.

The ENNIE Awards thank the TTRPG community for their feedback, passion, and understanding. As an organization dedicated to celebrating the creators, publishers, and fans who shape this vibrant industry, the ENNIES hope that this policy change aligns with the values of the community and fosters continued growth and innovation.
 


Spell checkers now look at the whole grammar of a sentence and can guess what word you might have intended from the rest of the sentence, correcting words that sound like other words or bear a passing resemblance to the word you intended. It's the only way I'm able to type legible English on an iPhone with my sausage thumbs. The days of "there" being confused with "their" are gone.
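That "guess the intended word from the rest of the sentence" behaviour is essentially masked-word prediction. As a rough illustration (not what any particular phone keyboard actually runs, and assuming the Hugging Face transformers library and the bert-base-uncased model are available), a context-aware model can rank "their" against "there" for a given sentence:

```python
# Minimal sketch: rank candidate words by how well they fit the surrounding
# sentence, using a masked language model. Purely illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Mask the word in question and let the model score candidates from context.
sentence = "They left [MASK] dice at home, so we had to share."
for candidate in fill(sentence, targets=["their", "there"]):
    print(candidate["token_str"], round(candidate["score"], 3))
```

A real spell checker layers a lot on top of this (edit distance, a personal dictionary, on-device constraints), but the context-scoring idea is the same.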

Maybe just relying on it more? 😉

And one has to be deliberate and not just straight use the default Google screen when trying to check ...




I am against "prompt-generated material" creators winning the prizes... but I think the issue of how AI tools have become part of word processors, standard art editing programs, etc., makes it important to clarify exactly what is meant.

I am now kind of curious how faculty in art and design schools who cover the use of AI would suggest phrasing the rules to cut out exactly what things aren't wanted.
 



"This is a problem, but because others do it we're excused."

No. Do better.

We're talking about giving out awards for excellence. Work that is done entirely by humans and good enough to win in one of the ENnie categories can get tossed because someone else, unknown to them, uses generative AI? A whole RPG gets tossed from consideration because an interior artist used an AI tool to enhance part of their image against policy (as happened to WotC with Glory of the Giants) and it wasn't caught until after publication? Or what would have been the winning Cover Art is disqualified because, unbeknownst to the artist and completely outside their control, generative AI was used elsewhere in the product?

That's the result you are pushing for? That's fair?
Yes. Do better indeed.

If you are creating a product that you are submitting for an award that has a policy against generative AI, put the processes in place to make sure your product meets that policy.
 




No generative content and no AI use at all are two very different things.

I am mentioning it because it matters. "No AI use at all" means no spell check or grammar check. It means no filters for the images. It means no SEO algorithms for actually selling the thing.
If it's so important to distinguish generative AI and LLMs from other expert systems, maybe the marketing departments of the companies pushing generative AI shouldn't have worked so hard to blur the distinction.
 

These things aren't AI. The term is used too often to describe things which are just basic computer functions. Spellchecks have been around for decades--they weren't AI then, and they aren't now.
Yes, they can use AI. As I said, I use it to intelligently link together my corpus by understanding the context of its contents. That's not generating anything but linkages between documents, and answering questions based on my corpus when I ask for context and delve into it. That's definitely using AI, trained on my work and my writing. The same goes for GitHub Copilot trained on our own internal source code for our very large organization, which explicitly filters out external answers and/or marks them as such for further investigation. There are many uses that are specifically enabled by AI but are not generative in the fashion many infer they are.
 

If it's so important to distinguish generative AI and LLMs from other expert systems, maybe the marketing departments of the companies pushing generative AI shouldn't have worked so hard to blur the distinction.
They are not differentiated in the manner that you say because those are two disciplines within AI. NLP and LLMs are still applications of Machine Learning, enabled by the use of certain Generative AI methods.
 

I probably should mention that in theory I'm fine with this policy and with gatekeeping one's communities overall. However, in practice this doesn't tend to stay isolated to such communities, and those outside become fair game when it comes to threats and harassment.

They haven’t released the data yet that it used. May well be right. That is very much speculation at this point.
Ultimately it's all speculation, as they could be completely lying about everything, but if not, they've already admitted to training on 'synthetic' content--because, contrary to popular belief, iterative feedback is how these models get better.

But well within the reach of many more companies/entities than it was before.
I'm more concerned with individuals, and whether they can acquire the hardware/software to run it, are legally permitted to use it, and won't be harassed by others for it. Luckily folks are already running deepseek-r1:14b on Raspberry Pis.

I use it to give artists a better mock up of what I want them to create (something that I was doing, but it sucked, so there was a lot of wasted time), to edit images I already have the rights to (something I was doing manually, but it took more time), to search through and summarize and cross-reference my own work so I don't have to worry as much about consistency and can find my work in my world corpus easily. None of this is doing any plagiarism. But if I publish a work- does that use AI?
As far as these rules and the absolutist demands of the robophobes go, yes.

Are your child's crayon drawings that you hang on your fridge art? Yes. So we know art doesn't have to have a specific level of quality.
Let's avoid the debate over what constitutes 'art' and simply address context instead.

If the purpose of your engagement is social bonding, then such drawings are acceptable, because it's not about the quality of the art but the act of sharing your work with those closest to you. If, on the other hand, the purpose of your engagement is to create an appealing product that meets specific needs, then such art is not acceptable, as none of the emotional gravitas that gives it meaning exists in that space, only the inherent qualities of the work itself. There's a fine line to walk here as designers of inherently social products, but ultimately they are still products, and indie communities have a tendency to stick to their own and pass the same $20 bill among themselves.

Glad about this decision. The AI bubble is hopefully about to burst any time soon, and then we can finally put this grift behind us, just as we did with VR, NFTs, crypto, and all the other techbro bs I can't even remember.
Perfect example of being blinded by hate.

Yes, #TechBros are insufferable, but AI is already optimizing the workflows of every industry it touches, and is so robust that the only reason the current bubble burst was that a better AI was released. Meanwhile, crypto like #BitCoin is worth more than ever, and is one of the few safe payment methods available to those sex workers lefties so often advocate for. VR is not ready for prime time, but when it is, #Fortnite, #Roblox, and #MineCraft will be ready. And NFT 'art' was always an obvious grift, and nowhere near equivalent to any of the other technologies you hate and fear so much.

You show the artist’s process—the initial sketches, up to the final piece.
This is already being faked. And if the evidentiary requirements become too burdensome smaller creators will be gatekept.

A pedantic argument.
It's easy enough to recognize that those in favor of AI material generation are eager to muddy the waters of the term.
Because it's obvious you see, and clear language is dangerous.

Is that an accusation?
...and so it begins.
 
