D&D General Hasbro CEO Says AI Integration Has Been "A Clear Success"

However "people make the decisions and people own the creative outcomes".


We've known for some time that Hasbro CEO--and former president of Wizards of the Coast--Chris Cocks is an avid AI supporter and enthusiast. He previously noted that of the 30-40 people he games with regularly, "there's not a single person who doesn't use AI somehow for either campaign development or character development or story ideas." In a 2025 interview he described himself as an "AI bull".

In Hasbro's latest earnings call, Cocks briefly addressed the use of AI within the company. While he mentions Hasbro, Wizards of the Coast and the digital studio teams, he doesn't specifically namecheck Dungeons & Dragons. However, he does tout Hasbro's AI integration as a "clear success", referring primarily to non-creative operations such as finances, supply chains, and general productivity enhancements, and emphasises that "people make the decisions and people own the creative outcomes". He also notes that individual teams choose whether or not to use AI.

So while it is clear that AI is deeply embedded in Hasbro's workflows, it is not clear to what extent that applies to Dungeons & Dragons. WotC has indicated multiple times that it will not use AI artwork, and its freelance contracts explicitly prohibit its use. The company also removed AI-generated artwork from 2023's Bigby Presents: Glory of the Giants.

Before I close, I want to address AI, and how we're using it at Hasbro. We're taking a human-centric, creator-led approach. AI is a tool that helps our teams move faster and focus on higher-value work, but people make the decisions and people own the creative outcomes. Teams also have a choice in how they use it, including not using it at all when it doesn't fit the work or the brand.

We're beyond experimentation. We're deploying AI across financial planning, forecasting, order management, supply chain operations, training and everyday productivity, under enterprise controls and clear guidelines around responsible use and IP protection. Anyone who knows me knows I'm an enthusiastic AI user, and that mindset extends across the enterprise. We're partnering with best-in-class platforms, including Google Gemini, OpenAI and ElevenLabs, to embed AI into workflows where it adds real value. The impact is tangible: over the next year, we anticipate these workflows will free up more than 1 million hours of lower-value work, and we're reinvesting that capacity into innovation, creativity and serving fans.

Our portfolio of IP and the creators and talent behind it are the foundation of this strategy. Great IP plus great storytelling is durable as technology evolves, and it positions us to benefit from disruption rather than being displaced by it.

In toys, AI-assisted design, paired with 3D printing, has fundamentally improved our process. We've reduced time from concept to physical prototype by roughly 80%, enabling faster iteration and more experimentation, with human judgment and human craft determining what ultimately gets selected and turned into a final product. We believe the winners in AI will be companies that combine deep IP, creative talent and disciplined deployment. That's exactly where Hasbro sits. As we enter 2026, we view Playing to Win and, more importantly, the execution behind it by our Hasbro, Wizards of the Coast and digital studio teams as a clear success.
- Chris Cocks, Hasbro CEO​

Wizards of the Coast's most recent statement on AI said "For 50 years, D&D has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn't changing. Our internal guidelines remain the same with regards to artificial intelligence tools: We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes D&D great."

A small survey of about 500 users right here on EN World in April 2025 indicated that just over 60% of users would not buy D&D products made with AI.

 


Well in that case, the human writer and the AI have been given two completely different tasks.

The human writer created the content.

Then the AI was asked to simply rewrite the content.

They're not doing the same thing.

They need to ask the AI to create its own original content, and compare that. Or ask the human to rewrite the AI's work. Either option, as long as the two being compared are doing the same task.
The individual human did not create the language they write in, nor did they create the literary metaphors that exist in culture, our associations with concepts, nor expectations for plot structure. Hell, the format of the "novel" or "short story" or "novella" themselves were not created by any one man, but by aggregating the artistic memes of everyone who came before them.

Considering that a quote from Carl Sagan was used in that NYT game, let me bring up another one: "If you wish to make an apple pie from scratch, you must first invent the universe." Art in the 21st century is not an act of creation, it's an act of bricolage, even before LLMs became commercially available.

All art is regurgitative and iterative; it's all just smashing together ideas and memes until enough people feel an emotional response and start telling each other about it.
 


To wit, the NYT has released a test for people to see if they can tell the difference:


I was surprised - I chose the human only 40% of the time.

I recognized the first 3 passages, so I could tell. It's a fun experiment, and it seems people in general can't. But these are still relatively short passages. And these two ideas are in tension:

"Skeptics have argued that A.I. can never be truly creative, because it lacks the kind of worldly experiences humans have. But several recent studies have suggested that, in blind tests, many readers prefer A.I.-generated writing to human-authored works."

"We asked A.I. to choose an existing piece of strong writing and then craft its own version using its own voice."

If you give an LLM a prompt in a specific style, it will replicate that style well. This has been useful for me, because I can get output that is close to my style. But it doesn't reach the "truly creative" bar, as nebulous as that is, because there isn't yet a high-quality, uniquely AI style.
I got 60%. But that's a rather silly experiment. A single small paragraph doesn't reveal the inherent weakness of AI. You need to take a much larger body of work. That's where the human ability to structure a work so it isn't repetitive starts to show. I listened to a handful of AI-generated sci-fi short stories on YouTube. They're not bad. But they're also all kind of the same, and they take forever to get to the point. Their AI origin is fairly easy to spot, but not if you just take a single paragraph out of it.
 

There's something to this case. Charles Mee's remaking project develops it by creating plays that are sort of collages, cobbled together from extant material. (For Example). But, I think you can overstate the case here. Being influenced by does not mean derivative of. As Mee puts it:

And so, whether we mean to or not, the work we do is both received and created, both an adaptation and an original, at the same time. We re-make things as we go.

AI does adaptation. I don't think it has much originality. To the extent it does, the original ideas lie in the prompt, and so are still a sort of adaptation.
 

My issue is, effectively, very few human beings are capable of originality either.


The issue we're all dancing around with AI is, yeah, it's not good art, it's "good enough" art. Exceedingly few people want to engage with cutting-edge, original avant-garde art, and even fewer people are paying artists for originality. What are artists getting paid for? Large-breasted furries and corporate art of blocky people with tiny heads; stuff AI excels at because capitalism removed the humanity long before computers did.
 

Yeah, I think this is a good point. The challenge it poses is, maybe you only learn to be original by creating a lot of unoriginal stuff first, and if that gets offloaded to AI, it is harder to develop. But then, maybe it will push people to make bolder creative choices.
 

No, you’re absolutely right. Space data centers and lunar solar farms are terrible ideas, which is obvious to anyone who actually knows anything about space. Unfortunately, most people don’t know anything about space, and due to the aforementioned “AI makes you dumber” problem, a lot of people are confidently wrong about space. I’m not sure if the big tech moguls pitching these asinine concepts are among the confidently wrong group, or know it wouldn’t work and are actively grifting. There’s probably some of both going on.
sorry i know this is an older comment and this is not isolated to you but part of a recurring point of discussion i see so often surrounding AI

i think it would be helpful to move away from the "AI makes you dumber" argument. it is beyond ableist... but i know that is not your intention. truly this is about a bigger, more terrifying threat of AI/LLMs: the algorithm is reductive but also opaque. if knowledge (and creativity) can be owned and mass produced then it can be exploited. is AI making people "dumber" or is it molding their understanding of the world to particular viewpoints, the viewpoints and opinions and worldviews of techbros, billionaires, corporations and hedge funds... the interests of state power.

intelligence has never been a reflection of our value or contribution to our communities and the world at large (although we have been MADE to think so), what is "valuable" is the ability to sway, disinform, mold, and predetermine the choices of large swaths of people, especially if you can do it unobtrusively

people don't know about space (or AI or data centers or racial capitalism) because they are "dumb" but because they have been made to never know.
 

“Dumber” is a reductive phrasing, but it does very much seem to be the case that regular LLM use limits people’s capacity for critical thinking.
 
