D&D General Hasbro CEO Says AI Integration Has Been "A Clear Success"

However "people make the decisions and people own the creative outcomes".


We've known for some time that Hasbro CEO (and former president of Wizards of the Coast) Chris Cocks is an avid AI supporter and enthusiast. He previously noted that of the 30-40 people he games with regularly, "there's not a single person who doesn't use AI somehow for either campaign development or character development or story ideas." In a 2025 interview he described himself as an "AI bull".

In Hasbro's latest earnings call, Cocks briefly addressed the use of AI within the company. While he mentions Hasbro, Wizards of the Coast and the digital studio teams, he doesn't specifically namecheck Dungeons & Dragons. However, he does tout Hasbro's AI integration as a "clear success", referring primarily to non-creative operations such as finances, supply chains, and general productivity enhancements, and emphasises that "people make the decisions and people own the creative outcomes". He also notes that individual teams choose whether or not to use AI.

So while it is clear that AI is deeply embedded in Hasbro's workflows, it is not clear to what extent that applies to Dungeons & Dragons. WotC has indicated multiple times that it will not use AI artwork, and its freelance contracts explicitly prohibit its use. The company also removed AI-generated artwork from 2023's Bigby Presents: Glory of the Giants.

Before I close, I want to address AI, and how we're using it at Hasbro. We're taking a human-centric, creator-led approach. AI is a tool that helps our teams move faster and focus on higher-value work, but people make the decisions and people own the creative outcomes. Teams also have choice in how they use it, including not to use it at all when it doesn't fit the work or the brand. We're beyond experimentation. We're deploying AI across financial planning, forecasting, order management, supply chain operations, training and everyday productivity, under enterprise controls and clear guidelines around responsible use and IP protection. Anyone who knows me knows I'm an enthusiastic AI user, and that mindset extends across the enterprise. We're partnering with best-in-class platforms, including Google Gemini, OpenAI and ElevenLabs, to embed AI into workflows where it adds real value. The impact is tangible. Over the next year, we anticipate these workflows will free up more than 1 million hours of lower-value work, and we're reinvesting that capacity into innovation, creativity and serving fans. Our portfolio of IP and the creators and talent behind it are the foundation of this strategy. Great IP plus great storytelling is durable as technology evolves, and it positions us to benefit from disruption rather than being displaced by it.

In toys, AI-assisted design, paired with 3D printing, has fundamentally improved our process. We've reduced time from concept to physical prototype by roughly 80%, enabling faster iteration and more experimentation, with human judgment and human craft determining what ultimately gets selected and turned into a final product. We believe the winners in AI will be companies that combine deep IP, creative talent and disciplined deployment. That's exactly where Hasbro sits. As we enter 2026, we view Playing to Win, and more importantly the execution behind it by our Hasbro, Wizards of the Coast and digital studio teams, as a clear success.
- Chris Cocks, Hasbro CEO

Wizards of the Coast's most recent statement on AI said "For 50 years, D&D has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn't changing. Our internal guidelines remain the same with regards to artificial intelligence tools: We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes D&D great."

A small survey of about 500 users right here on EN World in April 2025 indicated that just over 60% of users would not buy D&D products made with AI.
 


People learn by doing. Students need to be able to summarize their own work by themselves in order to grow into independent researchers. PIs who use their students as a mere pair of hands are doing them a disservice.
By having an AI do the summarizing for you, you've freed yourself up to do more research, and if teaching students how to write is important, then rather than wasting your precious time with pedagogy, your institution can now hire a technical writing instructor.
 


Based on DnDBeyond and subreddit posts, AI's getting used out there. A lot of the posts are "my player/DM sent me pages of AI-gen writing, why should I read what they couldn't bother to write?" but occasionally there are posts that aren't so negative. So you're not alone... but in a creator-friendly space like many RPG forums, I think there'll be fewer AI supporters.
I definitely wouldn't send, like, pages and pages of stuff to read to a new player, AI-generated or not! I'm more of a "2 pages of bullet points and some short paragraphs" kind of guy. I can see how several pages of AI slop would be an annoying thing to be expected to read through, but IME my players are unlikely to read through 6 pages of masterful handwritten worldbuilding prose either!
 

The one guy at my workplace who’s weirdly pro-AI (there’s always one, isn’t there?) made a PowerPoint for a session during our team’s annual professional development summit last year… In his defense it was closer to 10 slides than 100, but every one of them had obviously AI-generated art, and I was like, “well, this session is a waste of my time, because I can’t trust a single word of this presentation knowing that any of it could have been written by Doctor Plagiarism the Always-Wrong Robot.” I didn’t say it, but I know I wasn’t the only one thinking it.
I've managed to slowly infect my library with AI skepticism up the chain. It helps that I do collections and acquisitions, so I have a lot of say in what gets bought, and I'm often making the case for why I think a given AI product is crap. (In general, I've found that purpose-built proprietary machine-learning tools that have to use "AI" branding because their hand has been forced are occasionally good, while the plagiarism machines and things derived from them are usually terrible.)
 


It's notable that the wording here is extremely careful.

All the actual examples given are ones that consumers don't really care about: "financial planning, forecasting, order management, supply chain operations, training and everyday productivity".
‘Everyday productivity’ can mean a lot of things that consumers might care about, including AI images and AI text.
 

It is interesting how opinions on AI use are diverging even further. Elsewhere, I'm seeing people who were not that high on AI previously thinking the world has fundamentally changed since the launch of Claude Code, Codex, and OpenClaw. I have only used Codex, but I share that feeling. I think we've passed the point where a large company can reasonably stay competitive without AI integration.

Obviously there are still many challenges. But we're well past the point where the LLM chatbot is a reasonable proxy for these tools' capabilities.
 

By having an AI do the summarizing for you, you've freed yourself up to do more research, and if teaching students how to write is important, then rather than wasting your precious time with pedagogy, your institution can now hire a technical writing instructor.
Wasting my time on pedagogy? My job is both to teach and to research; to ignore either facet is to do my job poorly. In an industry context this may not be such a big deal, but long term it may cause problems. If you cannot interpret and summarize your own data correctly, then you cannot check whether an AI has done it well.

Edit: I’m drifting too far from the main topic, so I’ll leave it there.
 


I’m so glad we spend gallons of water and pump even more carbon into the atmosphere to make that possible for you.
I understand the snark, but AFAIK a low-demand prompt like the ones I ask for typically produces CO2 on the order of microwaving something for a few seconds or spending several minutes typing on a laptop. For some of the activities I'm particularly slow at, I bet I use less electricity on the AI request than I would doing the alternative and typing for a couple of hours! The rough arithmetic is in the sketch below.
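For the curious, here is that back-of-envelope comparison as a quick Python sketch. Every figure in it is a loose assumption rather than a measurement: the ~0.3 Wh per prompt is one commonly cited estimate for a typical chatbot query, and the microwave and laptop wattages are ballpark guesses.

```python
# Back-of-envelope energy comparison. All figures are loose assumptions,
# not measurements; real numbers vary widely by model, hardware, and grid.

PROMPT_WH = 0.3     # assumed energy per chatbot prompt, in watt-hours
MICROWAVE_W = 1000  # assumed microwave power draw, in watts
LAPTOP_W = 50       # assumed laptop power draw under light load, in watts

# time = energy / power (Wh / W = hours), scaled to seconds or minutes
microwave_seconds = PROMPT_WH / MICROWAVE_W * 3600
laptop_minutes = PROMPT_WH / LAPTOP_W * 60

print(f"One prompt ~ {microwave_seconds:.1f} s of microwaving")    # ~1.1 s
print(f"One prompt ~ {laptop_minutes:.2f} min of laptop typing")   # ~0.36 min
```

On those (very rough) numbers, a single prompt costs about a second of microwave time and well under a minute of laptop time, so a couple of hours of typing would dwarf it.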
 

Did it, though? The work you’d have to do reading the summary, finding the parts it got wrong, and rewriting them probably doesn’t take significantly less time than just writing the summary yourself, and whatever time it may save, the result will be a lower-quality product.
Do you do these things on a regular basis? If you do, you realize that you don't just dash off a summary for target audience xyz; it takes many, many revisions to get a concise but accurate approximation that's understandable for the target audience.

Depending on the consistency of the output quality and the capabilities of the LLM, there might be a LOT of time saved.

There are of course challenges: research in subject matter abc can often take weeks, months, or years, and chances are that not all the facts are present in your head (depending on the complexity of the material). Heck, I often need to check facts when I write technical stuff. This of course assumes that the model can actually work with your results without being trained on them (which costs a lot of time and money), and that you have written extensive documentation it can reference, but not too extensive or it will also get confused.

If all the requirements are met, it can save a LOT of time. The problem is meeting the requirements. I don't use LLMs in my work because a) the customer doesn't have an LLM policy in place, b) I'm working on cutting-edge (IT) stuff for application x and the LLM hasn't been properly trained on it, and c) it's an edge case that doesn't have much information available in the first place, where the only way to get information is actual experimentation. Another thing is that I often work on proofs of concept where there is no extensively written documentation; a condensed report that's understandable for non-specialists is the whole point of the exercise, which then gets approved for further development, at which point extensive documentation gets written. In such a case you're essentially putting the cart before the horse if you want to use an LLM.

Another issue is not the LLM but human nature: not bothering to check the facts, either because people aren't qualified to or just can't be bothered (or because the output looks believable). A good example of that is the Deloitte case, but there are many, many such cases. And this is why I'm against many uses of LLMs: the humans don't do their job...
 
