Hasbro CEO Reiterates That AI Isn't Used to Make D&D Because of the Game's Audience and Creators

Cocks has spoken about AI extensively in recent months.


While Hasbro CEO Chris Cocks is a big fan of AI, he reiterated in a recent interview that the technology is not used to make Dungeons & Dragons or Magic: The Gathering. Cocks recently sat down with The Verge to discuss Hasbro's business and, in particular, how the company uses AI. While Cocks gave several examples of how AI is integrated within the company (it has a Peppa Pig AI provide feedback on Peppa Pig toys, for instance), he stated that not every facet of the company currently uses AI. "From a creative context, I think you have to think about it very carefully," Cocks said. "There are some brands that the audience, the creators, just don’t want it, so we don’t even have it in our pipelines for our video games or for Magic: The Gathering, or D&D. For things like toys where we’re basing it on existing IP, or like a long legacy of ideas, we are able to use it and use it pretty effectively."

The Dungeons & Dragons brand has come out strongly against AI, specifically when it comes to creative work. The brand currently bans the use of AI-generated artwork in its games and has repeatedly emphasized that the game is made for people, by people. However, Cocks has talked about his personal use of AI in his home D&D games and has strongly suggested that the technology could be integrated into Dungeons & Dragons in some form.

Cocks has previously bragged about how AI is integrated into Hasbro's workflow, and The Verge interview covers how AI has supplemented the business, mentioning that it has been used to ideate toy concepts and to simulate focus groups and playtest labs. While Cocks sees AI as a way to "level up" the work of creatives rather than replace them, he also admits that he has been wrong about technology disrupting the toy business before, specifically citing NFTs as an area he got wrong in the past.

The interview also briefly mentioned the upcoming video game Dungeons & Dragons: Warlock, with Cocks noting that the game will be released in the "later part" of 2027.
 


Christian Hoffer


We shouldn't be too dependent. AI should be a tool to express our ideas, but it shouldn't do all the creative work.

Maybe playing with AI is not a true TRPG but a variant of a solo CRPG video game, or something like using AI to write your own fanfiction or amateur fantasy webnovel. But once the story is finished and you publish it on the internet, if others enjoy reading it, then it may be worthwhile.

Creativity may start with a brainstorming phase, even if those first ideas are very foolish. The selection can come afterwards.

Even if AI could write complete stories, in our hobby we enjoy creating with our own imagination.

* What about the Turing test?
 



Using AI is very widespread in high school. This is a significant and time-consuming issue at my school, where plagiarism can cost a student their IB Diploma. However, there are lots of ways that students can productively and legitimately use AI to improve their writing. Here's what I posted to my Grade 12s as they were finishing their Higher Level Essay for Language and Literature, and their Theory of Knowledge essays (both get sent off to IB for assessment):


Another issue is that it is becoming very, very hard to know when a kid is improperly using AI. AI checkers are very inaccurate. My writing gets flagged as AI. I just had two students write essays that flagged strongly for AI when both were able to show me their detailed, handwritten notes composed on the sight text they were given to analyze, and were able to accurately paraphrase and explain all of their key ideas independently.

We just had a pro-d day session on AI last month, and have another one to try to set school policy (for now) this Friday. It is a live issue that is absolutely transforming education in real time, and not one person in the world knows where it is going to come out.
At the middle school level . . . at least in my corner of the world . . . probably 99% of the kids are cheating with AI, and like your experience, that is an incredibly rapid acceleration over last year. But . . . they don't fully understand why it's cheating, or even why cheating is bad. My district has not helped teachers help students navigate this new facet of society . . . not that I blame them, it's happened so fast . . . so we're left to struggle on our own. And some of us (teachers) are probably ahead of the curve, some are very pro-AI, some are very anti-AI, and most of us don't understand the new AI tools very well at all. I'm certainly behind the curve, personally.

And sadly, educators in the US aren't given the time (or tools) during the work week to figure this stuff out and adapt on our own. So we either sacrifice personal time, or we simply don't. That might also be true in other countries, not sure.

I just registered for a course on "AI in Education" . . . but I'm wary, and it won't help me much with my current crop of students. And who knows where we'll be next fall!

I'm with Bernie Sanders . . . AI needs to be regulated and regulated NOW. Guardrails need to be in place. Of course, since this tech is evolving so rapidly, that's easier said than done . . .
 

In some disciplines, I have seen the old-school answer of grades being based on work done in class, by hand, without devices.
This grading period, all assessments in my classroom have been on paper. This creates more work for me, and less gets done, but it kills the kids using AI to cheat.

Man, they've been doing REALLY badly on these quizzes! Attention spans and engagement with their lessons are pretty awful post-pandemic, too.
 

I assume you mean this one? Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task – MIT Media Lab

I've read the abstract but the full study is behind a paywall. It broadly tracks with my anecdotal experience, but I am cautious because other studies have found contradictory results (human sciences, man. People are too complex). So I want to agree with it, but that makes me extra cautious because I'm as biased as the next hairless ape.
I’m pretty sure you can read the full study here if you want to:

But, regardless, wise to be cautious about results that agree with your own biases.
 

This tracks with my experience (from the study):

The convenience of instant answers that LLMs provide can encourage passive consumption of information, which may lead to superficial engagement, weakened critical thinking skills, less deep understanding of the materials, and less long-term memory formation [8]. The reduced level of cognitive engagement could also contribute to a decrease in decision-making skills and in turn, foster habits of procrastination and 'laziness' in both students and educators [13]. Additionally, due to the instant availability of the response to almost any question, LLMs can possibly make a learning process feel effortless, and prevent users from attempting any independent problem solving. By simplifying the process of obtaining answers, LLMs could decrease student motivation to perform independent research and generate solutions [15]. Lack of mental stimulation could lead to a decrease in cognitive development and negatively impact memory [15]. The use of LLMs can lead to fewer opportunities for direct human-to-human interaction or social learning, which plays a pivotal role in learning and memory formation [16]. Collaborative learning as well as discussions with other peers, colleagues, teachers are critical for the comprehension and retention of learning materials. With the use of LLMs for learning also come privacy and security issues, as well as plagiarism concerns [7]. Yang et al. [17] conducted a study with high school students in a programming course. The experimental group used ChatGPT to assist with learning programming, while the control group was only exposed to traditional teaching methods. The results showed that the experimental group had lower flow experience, self-efficacy, and learning performance compared to the control group.
Academic self-efficacy, a student's belief in their 'ability to effectively plan, organize, and execute academic tasks', also contributes to how LLMs are used for learning [18]. Students with low self-efficacy are more inclined to rely on AI, especially when influenced by academic stress [18]. This leads students to prioritize immediate AI solutions over the development of cognitive and creative skills. Similarly, students with lower confidence in their writing skills, lower 'self-efficacy for writing' (SEWS), tended to use ChatGPT more extensively, while higher-efficacy students were more selective in AI reliance [19].
The fundamental issue is that AI can now do a whole lot of tasks better* than most humans (*depending on your definition of "better"). Certainly more efficiently (though many AI costs are currently hidden), particularly when it comes to tasks that do not require or even outright reject creativity (a whole lot of office work falls into this category). But even when it comes to work that traditionally took significant academic specialization, if all you are looking for is passable quality, which, let's face it, is already incentivized for economic reasons, then AI just makes more sense.

So why am I teaching students to write an essay? I want to say because my job is to help grow brains, not teach particular content. But then I start to sound a lot like the guy insisting that every kid should still study Latin. Because we have no idea where this is going, and it is happening so fast, educators are really flailing around right now.
 

New Scientist has a great article about AI rapidly becoming able to replace mathematicians, and whether that's a good thing. Basically, doing the work of proofs has a lot of side benefits, so what would it mean if AI does all the heavy lifting?

Mathematics is undergoing the biggest change in its history

“Faced with a future in which an increasing share of mathematics is done by AI, some mathematicians, like Avigad, are raising the alarm about the detrimental effects this might have on our ability to practise and come up with new mathematics.”

“Using machines to solve the types of problem posed in First Proof may produce concrete proofs, says Anna Marie Bohmann at Vanderbilt University in Tennessee, but we lose the ‘learning opportunity’, she says. ‘Struggling to create and formulate new ideas and to solve new problems is one of the main ways in which both students and mathematics professionals consolidate their knowledge.’”

Tony Feng, one of the Aletheia team at Google DeepMind, feels similarly and is cautious about using the tool himself. “A lot of times I feel like I should be doing my own homework and going through the process of building my own intuition.”
 

These kids are not going to have the abhorrence Gen Z does for AI.

Upthread, I posted the findings of a recent Pew Research survey on Gen Alpha and their parents' attitudes toward AI in the US. Now, I'm reminded to find some data on Gen Z.

Interestingly, Pew's 2025 global report shows the US is a bit of an outlier when it comes to young adults' concern over AI. The US overall showed the most concern over AI in daily life.

[Pew Research chart: concern about AI in daily life, by age group, across countries]



Harvard Business Review did a more recent survey of US Gen Zers:


Our survey reveals that Gen Z’s relationship with AI is fraught. Even as they use AI extensively, they harbor concerns about its long-term effects on human capability.
Three out of four (74%) young adults in the U.S. used an AI chatbot at least once in the last month. This represents a considerable jump from the 58% of young adults in the U.S. who reported “ever” using ChatGPT in a February 2025 Pew survey. However, our estimates track a more recent (July 2025) study conducted by NORC at the University of Chicago showing 74% of young adults use AI to find information “at least some of the time.”

Specifically, 79% of young adults expressed concern that AI makes people lazier, and 62% worried it makes people less smart.
 

The fundamental issue is that AI can now do a whole lot of tasks better* than most humans (*depending on your definition of "better"). Certainly more efficiently (though many AI costs are currently hidden), particularly when it comes to tasks that do not require or even outright reject creativity (a whole lot of office work falls into this category). But even when it comes to work that traditionally took significant academic specialization, if all you are looking for is passable quality, which, let's face it, is already incentivized for economic reasons, then AI just makes more sense.

So why am I teaching students to write an essay? I want to say because my job is to help grow brains, not teach particular content. But then I start to sound a lot like the guy insisting that every kid should still study Latin. Because we have no idea where this is going, and it is happening so fast, educators are really flailing around right now.
A lot of this reads to me like a continuation of an existing phenomenon: an exacerbated issue rather than a new one.

I mean, I do like hearing about new things related to what I used to love studying back in college: linguistic and cultural anthropology. Watching people talk about and discuss these things is really fun. But I have to admit that I'm not really learning anything about it; I'm being entertained by it. With all these IG reels or YT shorts, I'm not putting in the effort to fact-check or cross-reference, or even really retaining anything for long if I'm just scrolling, scrolling, scrolling.

But on the subject of "why bother," I think AI itself provides the best argument. Even from a pro-AI stance, at the end of the day AI can only build and iterate upon what it is given to learn from. And that becomes more complex and expansive by the day. But what still hasn't changed, and might not ever change, is that AI cannot iterate upon itself to any valuable degree. When AI learns from AI, the hallucinations and inaccuracies begin to dominate. Even where AI use cases make the most sense, a human mind that can understand the fullness of what the AI is tasked with is still required for there to be anything other than a degradative loop.

Nothing would make AI more useless than to let AI's capabilities replace the need for education within its realm of expertise.
 

So, I can't speak to your personal experience, but I have a lot of professional experience and just did a conference with a lot of other educators, and I can assure you that AI uptake amongst high school and college-aged students in university prep classes is near 100% after rapid acceleration this year. Rapid as in, during Term 1 I had to deal with about half a dozen potential infractions over the term. Last week I had five on one day (these are a huge time sink, BTW, and soon we won't be able to enforce the issue in the same way).

What we are hearing from our alum who are currently in college, and from our friends and colleagues who teach at college, is that it is the Wild West right now. Some profs are vehemently anti-AI. Others endorse it. Many are 🤷‍♀️.

I ran a class on the ethics of AI art just last week with my Theory of Knowledge students, and their reactions were basically incoherent. They claim to hate AI but also all use it, and also think it can be used to create cool stuff, and also think it's stealing but also think it isn't... (Of course, they also get very passionate about artists' rights, but then tell me that downloading music without paying for it is perfectly ethical, so...)

As the parent of two kids who graduated during this whole era of COVID and the rise of AI, this whole post resonates for me. It's particularly painful when you know a kid did the work on their own, and yet they are being called out because someone independently says their writing “voice” flags as AI, and then they have to somehow defend their work.
 
