Hasbro CEO Reiterates That AI Isn't Used to Make D&D Because of the Game's Audience and Creators

Cocks has spoken about AI extensively in recent months.


While Hasbro CEO Chris Cocks is a big fan of AI, he reiterated in a recent interview that the technology is not used to make Dungeons & Dragons and Magic: The Gathering. Cocks recently sat down with The Verge to discuss Hasbro's business, and in particular how the company uses AI. While Cocks gave several examples of how AI is integrated within the company (it has a Peppa Pig AI provide feedback on Peppa Pig toys, for instance), he stated that not every facet of the company currently uses AI. "From a creative context, I think you have to think about it very carefully," Cocks said. "There are some brands that the audience, the creators, just don’t want it, so we don’t even have it in our pipelines for our video games or for Magic: The Gathering, or D&D. For things like toys where we’re basing it on existing IP, or like a long legacy of ideas, we are able to use it and use it pretty effectively."

The Dungeons & Dragons brand has strongly come out against AI, specifically when it comes to creative work. The brand currently bans the use of AI-generated artwork in its games and has repeatedly talked about how the game is made for people, by people. However, Cocks has talked about his personal use of AI in his home D&D games and has strongly hinted at integrating the technology into Dungeons & Dragons in some form.

Cocks has previously bragged about how AI has been integrated into Hasbro's workflow, and The Verge interview covers how AI has supplemented the business, mentioning that AI has been used to brainstorm toy concepts and to simulate focus groups and playtest labs. While Cocks sees AI as a way to "level up" the work of creatives rather than replace them, he also admits that he has been wrong about technology disrupting the toy business before, specifically citing NFTs as an area he got wrong in the past.

The interview also briefly mentioned the upcoming video game Dungeons & Dragons: Warlock, with Cocks noting that the game will be released in the "later part" of 2027.
 


Christian Hoffer



It's getting closer, since they're not trying to do so on Section 230 content grounds, but on the grounds of creating an intentionally addictive product for young people.

Given that countries around the world are moving to ban social media for anyone under 16, I think it's certainly possible that social media is on the way out.
I hate social media, but this seems like wishful thinking to me. You can't legislate away what people want. Nobody is being forced to use social media, even if they feel like they are. It's the same with AI. We can and should try to control them, but they aren't going anywhere (and AI is only getting started).
 

I hate social media, but this seems like wishful thinking to me. You can't legislate away what people want.
Every lawmaker back to Hammurabi has been willing to try.

And maybe you can't legislate away drunk driving, but you can definitely make the cost of doing so very high (depending on the nation, or the state within the US).
Nobody is being forced to use social media, even if they feel like they are.
If it ends up that Meta/Instagram/Facebook/Whatsapp are forced to pay meaningful damages for intentionally targeting tweens and teens with their products -- and those damages are substantially upheld on appeal -- I think you'll definitely see social media companies wither, since that's a big part of their economic model. Some will no doubt fold or pivot to doing something else.
 


I hate social media, but this seems like wishful thinking to me. You can't legislate away what people want. Nobody is being forced to use social media, even if they feel like they are. It's the same with AI. We can and should try to control them, but they aren't going anywhere (and AI is only getting started).
If you had been a student in the last few years, odds are you would have been forced by at least one teacher too in love with new technologies, or with the control they provide, to follow their Facebook page or Instagram.

Being forced is relative, I guess. Nobody so far has had a gun to their head to force them to use Twitter. But a sufficiently pushy educator, employer, or bully can certainly force someone's hand.
 

You can't control that. Nor should you. Every generation decides what is "cool." Just because you hate GenAI doesn't mean GenAlpha is wrong for embracing it.

This idea that there is a fundamental, objective "right" regarding AI-generated content is old man BS. The world is rapidly changing. We all remember the discomfort of blogs overtaking traditional journalism in our information ecosystem. People really desperately need to stop pretending that their comfort zone is somehow "right." It's weakness.
No, it is objectively a moral ill to support generative AI.

It has set back climate gains, uses more power per data center than whole towns, actively harms cognition, and is a direct danger to psychologically vulnerable people.
 

31% of teens surveyed said AI will have a positive effect on society over the next two decades. 34% think the impact will be equally positive and negative. 26% said AI's impact will be negative.
That isn't very optimistic for AI.

60% think the best case scenario is an even split of harm and benefit. lol
 

Vehemently disagree. It sounds like your personal experience comes from going to educator conferences; mine comes from having children plus a bevy of nieces and nephews, up through university age (well, my eldest nephew just got his Masters), and them having wide swaths of online friends across geographical boundaries.

The people who sell things to educators want to integrate AI into everything. And please recognize that what the sellers are pushing and what kids want to learn has no inherent overlap. It can overlap, or it can be diametrically opposed; there's no correlation between them.

Students seem split between those who are incredibly strongly against AI and a smaller segment who use it to avoid doing any work. The anti-AI push makes the most vocal here at ENWorld seem milquetoast.

One of my kids has a YouTube series (Animator vs. Animation) that they've been watching for years. A few months back it had a sponsor segment for technical classes, and one of the classes offered was AI. My kid was ready to swear off this creator they loved; I had to talk them down by explaining this wasn't the creator's opinion, just a class offered by a sponsor that supported the series financially.
Yep, the kids who actually understand AI absolutely despise it, and think using it is extremely cringe.
 

If you had been a student in the last few years, odds are you would have been forced by at least one teacher too in love with new technologies, or with the control they provide, to follow their Facebook page or Instagram.

Being forced is relative, I guess. Nobody so far has had a gun to their head to force them to use Twitter. But a sufficiently pushy educator, employer, or bully can certainly force someone's hand.
I’m a teacher, actually. Forcing students onto your Facebook page or Instagram feed is not only unenforceable, it is also incredibly unethical. Not to mention creepy and weird. We are specifically forbidden to even invite students, or alumni less than a full year out of school, to join any social media. When we go on a trip and I collect phone numbers so I can contact students in case of emergency, I have to delete them the second the trip ends.

Getting back to the original topic, there are certain aspects of technology that you can, and IMO should, regulate. But blanket bans on AI use by consenting adults are way beyond the pale. And that horse left the stable long ago anyway. What remains is to manage the ongoing situation.
 


We aren't talking about copy-and-paste from Wikipedia. If a student uses AI to write summaries and essays, will the teacher be able to tell?

The risk of using AI is forgetting the traditional, "old-school" way. For example, AI could summarize the novel "Anna Karenina" for you, but if you don't read the book yourself, you miss the spiritual concerns of Konstantín "Kostya" Dmítrievich Levin (the main character of a parallel plot).

* I tried using AI to write a funny poem in English, but I wanted the punchline at the end, in the final phrase, to cause surprise. If the poem "spoiled" the punchline too soon, it wouldn't be so funny.
 
