Hasbro CEO Reiterates That AI Isn't Used to Make D&D Because of the Game's Audience and Creators

Cocks has spoken about AI extensively in recent months.


While Hasbro CEO Chris Cocks is a big fan of AI, he reiterated in a recent interview that the technology is not used to make Dungeons & Dragons and Magic: The Gathering. Recently, Cocks sat down with The Verge to discuss Hasbro's business and in particular how the company uses AI. While Cocks gave several examples of how AI is integrated within the company (it has a Peppa Pig AI provide feedback on Peppa Pig toys, for instance), he stated that not every facet of the company currently uses AI. "From a creative context, I think you have to think about it very carefully," Cocks said. "There are some brands that the audience, the creators, just don’t want it, so we don’t even have it in our pipelines for our video games or for Magic: The Gathering, or D&D. For things like toys where we’re basing it on existing IP, or like a long legacy of ideas, we are able to use it and use it pretty effectively."

The Dungeons & Dragons brand has come out strongly against AI, specifically when it comes to creative work. The brand currently bans the use of AI-generated artwork in its games and has repeatedly emphasized that the game is made for people, by people. However, Cocks has talked about his personal use of AI in his home D&D games and has strongly suggested that the technology could be integrated into Dungeons & Dragons in some form.

Cocks has previously bragged about how AI has been integrated into Hasbro's workflow, and The Verge interview discusses how AI has supplemented the business, mentioning that it has been used to generate toy ideas and to simulate focus groups and playtest labs. While Cocks sees AI as a way to "level up" the work of creatives rather than replace them, he also admits that he's been wrong about technology disrupting the toy business before, specifically mentioning NFTs as an area he got wrong in the past.

The interview also briefly mentioned the upcoming video game Dungeons & Dragons: Warlock, with Cocks noting that the game will be released in the "later part" of 2027.
Christian Hoffer

For the most part it seems more or less what I suspected. WotC doesn't want to use AI stuff, and Cocks is not going to force them to at this point.
Really? Even where Cocks stated the opposite?

Cocks is all in on AI. But he's aware that certain fan communities are really resistant, and pushing it on them would backfire. That could change in the future, but I see no reason to fret today.

EDIT: Oops. Missed the "not" in @MonsterEnvy's sentence there.

Maybe I'm reading too much into it, but it sounds like he's pointing to the troublesome consumers and saying there's nothing he can do when the stockholders come at him for more and more AI.
How do you come up with that reading? Not saying it isn't there, it's just a different impression than what I came away with. Trying to see if I'm being complacent.


Really? Even where Cocks stated the opposite?

Cocks is all in on AI. But he's aware that certain fan communities are really resistant, and pushing it on them would backfire. That could change in the future, but I see no reason to fret today.
Huh. Going by your second paragraph, I agree with you. Did you misread my comment?

Cocks has said in the past that he's not going to force AI use, which this lines up with.

Vote with wallet. Trust but verify. If they try to pull a fast one on us, push back.

But until then, why complain about the out of touch CEO seemingly being a bit more in-touch than he was initially?


Vote with wallet. Trust but verify. If they try to pull a fast one on us, push back.

But until then, why complain about the out of touch CEO seemingly being a bit more in-touch than he was initially?
Because CEO types like him take silence as permission. The more out of touch they are, the louder you have to be to be heard by them.

They do? They love it? Source?

Anecdotally, as a teacher of young teenagers . . . they use AI to cheat, but they were cheating before. They don't really understand AI. AI has evolved so fast it's left most teachers in the dust and we don't have the knowledge, experience, tools, training, or bandwidth to teach kids the ins and outs of using AI and how to do so ethically and effectively.

The AI cheating being done right now . . . is ridiculously easy to spot. But sadly, without me doing a forensic investigation to prove their guilt, my admin won't let me do anything about it but "grade fairly" . . .

But do my students love AI? They don't understand it enough to do so.

EDIT: Thanks for the links @Tea Cozy! Seems the current generation is split into thirds about the positivity of AI. That's not good, but it's also not, "The kids, they love AI!"
Using AI is very widespread in high school. This is a significant and time-consuming issue at my school, where plagiarism can cost a student their IB Diploma. However, there are lots of ways that students can productively and legitimately use AI to improve their writing. Here's what I posted to my Grade 12s as they were finishing their Higher Level Essay for Language and Literature, and their Theory of Knowledge essays (both get sent off to IB for assessment):

Acceptable use of AI: It occurs to me that I may have been too scary about the use of AI. There are legitimate uses for it that are not cheating. Getting it to write (or rewrite) your essay and then submitting the work as your own is obviously not permitted.

Inputting your essay and asking for feedback, reflecting on the feedback, and then making changes as you feel necessary is not only appropriate, I recommend that you do so. To make this kind of feedback useful, ask specific prompts. For example (ToK): "In this paragraph for an IB ToK essay, do my topic sentence, initial argument, and real life example make sense together?" (hint: I'm still seeing a lot of essays where they sometimes don't).

Or "flag any grammar or spelling errors so that I can fix them."

Or "check that I have cited my sources correctly."

Or (a really useful one) "paraphrase the main idea of this paragraph" - this allows you to see if the main idea that you are trying to communicate is actually what is coming out.

Or (HL essays): "In this IB Language and Literature HL essay, am I using my terminology accurately?"

Or (HL essays): "Do my thesis, topic sentences, and conclusion directly address my inquiry question?"

Or (ToK essay): "Do my introduction, topic sentences, and conclusion together present a cohesive argument?"

Remember, the most important thing is that this is an essay only you could have written. Don't do anything that crosses that line. But do use the tools allowable to you; everyone else will be.
Another issue is that it is becoming very, very hard to know when a kid is improperly using AI. AI checkers are very inaccurate; my own writing gets flagged as AI. I just had two students write essays that flagged strongly for AI, when both were able to show me the detailed, handwritten notes they had composed on the sight text they were given to analyze, and were able to accurately paraphrase and explain all of their key ideas independently.

We just had a pro-d day session on AI last month, and have another one to try to set school policy (for now) this Friday. It is a live issue that is absolutely transforming education in real time, and not one person in the world knows where it is going to come out.

