D&D General Hasbro CEO Says AI Integration Has Been "A Clear Success"

However "people make the decisions and people own the creative outcomes".


We've known for some time that Hasbro CEO (and former Wizards of the Coast president) Chris Cocks is an avid AI supporter and enthusiast. He previously noted that of the 30-40 people he games with regularly, "there's not a single person who doesn't use AI somehow for either campaign development or character development or story ideas." In a 2025 interview he described himself as an "AI bull".

In Hasbro's latest earnings call, Cocks briefly addressed the use of AI within the company. While he mentions Hasbro, Wizards of the Coast and the digital studio teams, he doesn't specifically namecheck Dungeons & Dragons. However, he does tout Hasbro's AI integration as a "clear success", referring primarily to non-creative operations such as finances, supply chains, and general productivity enhancements, and emphasises that "people make the decisions and people own the creative outcomes". He also notes that individual teams choose whether or not to use AI.

So while it is clear that AI is deeply embedded in Hasbro's workflows, it is not clear to what extent that applies to Dungeons & Dragons. WotC has indicated multiple times that it will not use AI artwork, and its freelance contracts explicitly prohibit its use. The company also removed AI-generated artwork from 2023's Bigby Presents: Glory of the Giants.

Before I close, I want to address AI, and how we're using it at Hasbro. We're taking a human-centric, creator-led approach. AI is a tool that helps our teams move faster and focus on higher-value work, but people make the decisions and people own the creative outcomes. Teams also have choice in how they use it, including not to use it at all when it doesn't fit the work or the brand.

We're beyond experimentation. We're deploying AI across financial planning, forecasting, order management, supply chain operations, training and everyday productivity, under enterprise controls and clear guidelines around responsible use and IP protection. Anyone who knows me knows I'm an enthusiastic AI user, and that mindset extends across the enterprise. We're partnering with best-in-class platforms, including Google Gemini, OpenAI and ElevenLabs, to embed AI into workflows where it adds real value. The impact is tangible. Over the next year, we anticipate these workflows will free up more than 1 million hours of lower-value work, and we're reinvesting that capacity into innovation, creativity and serving fans. Our portfolio of IP and the creators and talent behind it are the foundation of this strategy. Great IP plus great storytelling is durable as technology evolves, and it positions us to benefit from disruption rather than being displaced by it.

In toys, AI-assisted design, paired with 3D printing, has fundamentally improved our process. We've reduced time from concept to physical prototype by roughly 80%, enabling faster iteration and more experimentation, with human judgment and human craft determining what ultimately gets selected and turned into a final product. We believe the winners in AI will be companies that combine deep IP, creative talent and disciplined deployment. That's exactly where Hasbro sits. As we enter 2026, we view Playing to Win, and more importantly the execution behind it by our Hasbro, Wizards of the Coast and digital studio teams, as a clear success.
- Chris Cocks, Hasbro CEO

Wizards of the Coast's most recent statement on AI said "For 50 years, D&D has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn't changing. Our internal guidelines remain the same with regards to artificial intelligence tools: We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes D&D great."

A small survey of about 500 users right here on EN World in April 2025 indicated that just over 60% of respondents would not buy D&D products made with AI.

 


I dunno man - I'm no artist, and I'd infinitely prefer something drawn by a human to something generated by an LLM. As someone said on here the other day, if you can't be bothered to write it, why should I be bothered to read it?
Except that most of the time, we can't tell the difference. So unless we're told it's by AI, we can't use this metric.
 


Except that most of the time, we can't tell the difference. So unless we're told it's by AI, we can't use this metric.
True. There's no point declaring, "I won't consume artwork if it looks like it was created using generative AI."

A better metric would be, "I won't pay for artwork unless the seller provides a written guarantee no perceivable part of the work was created using generative AI." In that case, anyone selling me artwork they know was wholly or partially created using generative AI is unambiguously doing something wrong (either breaching a contract or outright breaking the law).
 

True. There's no point declaring, "I won't consume artwork if it looks like it was created using generative AI."

A better metric would be, "I won't pay for artwork unless the seller provides a written guarantee no perceivable part of the work was created using generative AI." In that case, anyone selling me artwork they know was wholly or partially created using generative AI is unambiguously doing something wrong (either breaching a contract or outright breaking the law).
Ah... this is coming down to something I can get behind. Like an artist's mark, but for digital works. The only thing we have that's like that currently, however, is an even worse alternative.
 

Except that most of the time, we can't tell the difference. So unless we're told it's by AI, we can't use this metric.
To wit, the NYT has released a test for people to see if they can tell the difference:


I was surprised - I chose the human only 40% of the time.

 

To wit, the NYT has released a test for people to see if they can tell the difference:


I was surprised - I chose the human only 40% of the time.
I recognized the first three passages, so I could tell. It's a fun experiment, and it seems people in general can't. But these are still relatively short passages. And these two ideas are in tension:

"Skeptics have argued that A.I. can never be truly creative, because it lacks the kind of worldly experiences humans have. But several recent studies have suggested that, in blind tests, many readers prefer A.I.-generated writing to human-authored works."

"We asked A.I. to choose an existing piece of strong writing and then craft its own version using its own voice."

If you give an LLM a prompt in a specific style, it will replicate that style well. This has been useful for me, because I can get output that is close to my own style. But it doesn't reach the "truly creative" bar, as nebulous as that is, because there isn't yet a high-quality, uniquely AI style.
 

To wit, the NYT has released a test for people to see if they can tell the difference:
I can tell the difference; it's just that sometimes the AI reads smoother than the sample human paragraph, and I might prefer that over what some human wrote.

Me sometimes preferring one style over another for a paragraph is not something I would extrapolate to mean that, once we get to a page, a short story, or a novel, I would ever prefer the AI. The more varied human writing will become a plus even if it doesn't win every single paragraph of the whole.
 

"We asked A.I. to choose an existing piece of strong writing and then craft its own version using its own voice."
Well in that case, the human writer and the AI have been given two completely different tasks.

The human writer created the content.

Then the AI was asked to simply rewrite the content.

They're not doing the same thing.

They need to ask the AI to create its own original content, and compare that. Or ask the human to rewrite the AI's work. Either option, as long as the two being compared are doing the same task.
 

They need to ask the AI to create its own original content, and compare that. Or ask the human to rewrite the AI's work. Either option, as long as the two being compared are doing the same task.
To this point, I effectively run this experiment non-stop these days, since students now regularly try to pass off work that is entirely AI-generated. It used to be dead easy for me to spot. It's getting harder, quickly.
 

To this point, I effectively run this experiment non-stop these days, since students now regularly try to pass off work that is entirely AI. It used to be dead easy for me to spot. It's getting harder, quickly.
Do you find they're using the AI to generate ideas and then trying to state those in their own words, or using their own ideas and asking AI to help execute? (I assume some of both).
 
