Hasbro CEO Chris Cocks Is Talking About AI in D&D Again

Chris Cocks, the CEO of Hasbro, is talking about the use of AI in Dungeons & Dragons again. In a recent interview with Semafor, Cocks once again brought up the potential use of AI in D&D and other Hasbro brands. Cocks described himself as an "AI bull" and offered up a potential subscription service that uses AI to enrich D&D campaigns as one way to integrate the technology. The full section of Semafor's interview is below:

Smartphone screens are not the toy industry’s only technology challenge. Cocks uses artificial intelligence tools to generate storylines, art, and voices for his D&D characters and hails AI as “a great leveler for user-generated content.”

Current AI platforms are failing to reward creators for their work, “but I think that’s solvable,” he says, describing himself as “an AI bull” who believes the technology will extend the reach of Hasbro’s brands. That could include subscription services letting other Dungeon Masters enrich their D&D campaigns, or offerings to let parents customize Peppa Pig animations. “It’s supercharging fandom,” he says, “and I think that’s just net good for the brand.”


The D&D design team and others involved with D&D at Wizards of the Coast have repeatedly stood by a statement posted back in 2023 that said that D&D was made by humans for humans. The full, official stance on AI in D&D by the D&D team can be found below.

For 50 years, D&D has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn't changing. Our internal guidelines remain the same with regards to artificial intelligence tools: We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes D&D great.
 


Christian Hoffer

Can't wait to see what an LLM entirely trained on House of Leaves spits out...

Garbage in, garbage out.

I don't know

You sound like the sort of person who would put "Q-learning" on a slide, connect it to an LLM in a flow diagram, and then, when someone asks which implementation of Q-learning you intend to use, argue that you totally know what you're talking about when you, in fact, do not.
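For what it's worth, "an implementation of Q-learning" is a concrete, answerable thing. Here is a minimal tabular sketch on an invented toy corridor environment (states, rewards, and hyperparameters are all made up for the example):

```python
import random

random.seed(0)
n_states, n_actions = 5, 2          # toy corridor; actions: 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(s, a):
    """Move one cell left/right along the corridor; reward 1 at the right end."""
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(2000):               # episodes
    s = 0
    for _ in range(20):             # step limit per episode
        if random.random() < epsilon:
            a = random.randrange(n_actions)        # explore
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1     # exploit current estimate
        s2, r = step(s, a)
        # The core Q-learning update rule:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if r > 0:
            break

# The learned policy should prefer "right" in every non-terminal state.
print(all(Q[s][1] > Q[s][0] for s in range(n_states - 1)))
```

Note this has nothing to do with how an LLM works, which is rather the point of the quip above: the two sit in different corners of machine learning.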

LLMs cannot result in AGI. Period, end of story, do not pass go, do not collect 200 dollars.

If someone makes AGI, it's probably more discovering something they made by accident than anything else... ;)

Thanks for infantilizing my education and research. Feels good man.
 


This is not true. The current generative AIs are already doing incredible work. Someone mentioned the protein folding scenario: using AI to effectively determine the structure of nearly every protein we are aware of. Work that would have been untouchable a few years ago.

You don't need AGI to have a radical leap in what is possible.
Factually incorrect. AlphaFold is not a generative AI; it is a narrow-focus deep learning system specifically built to brute-force complex data. It is essentially using computers for what they are good at: solving complex logic puzzles involving large volumes of data. It doesn't understand, or pretend to understand, what a protein is, but it generates algorithmic ways to reverse-engineer the data. The outputs are produced from technical specifications, presented as theoretical, and then analyzed by humans who know the machine's limitations and process.

AlphaFold gets given millions of designs, is asked to find the common rules of what does and doesn't happen in them, and invents new ones by solving trillions of logic puzzles. It will invent proteins that aren't possible and proteins that don't exist, and we may find proteins it concludes are impossible, but it narrows the spectrum of proteins likely to be discovered in the foreseeable future from infinite down to a workable array.

Generative AI doesn't do that, at all. It takes a prompt, breaks it down algorithmically, and then creates a collage based on training data. It doesn't allow for fine tweaking, it doesn't hold up to scrutiny by experts, and the only problem it solves is guessing what a human might produce in response. This is, in large part, because human communication by language and art does not fit into neat rules the way protein structures do.

There is no "correct" answer to "You see three figures approaching, what do you do?", so the generative AI just tries to find ways to produce "average" answers based on correlations. It doesn't have any mechanism to exclude nonsense answers; it doesn't even have a mechanism to properly parse the question for parameters. It just spits out data in response to data, based on correlations. Thus, some future generative AI may reply simply with "SHORT! You're a short [censored] and nobody likes you!" or "I'm gonna loot that body, loot that body, loot that body..." with the repetition continuing for a whole page.
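To make the "averages from correlations" point concrete, here is a toy bigram model (the corpus is invented for the example) that always emits the most frequent continuation it saw in training. It has no idea what any word means; it only counts which word tends to follow which:

```python
from collections import Counter, defaultdict

# Tiny invented training corpus of DM-ish prompts.
corpus = ("you see three figures approaching what do you do "
          "you see three doors what do you do "
          "you see three figures in the mist what do you do").split()

# Count which word follows which (bigram statistics).
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def continue_from(word, n=6):
    """Greedily emit the most common continuation n times."""
    out = [word]
    for _ in range(n):
        nxt = follows[out[-1]].most_common(1)
        if not nxt:
            break
        out.append(nxt[0][0])
    return " ".join(out)

print(continue_from("three"))
```

Real LLMs use vastly bigger contexts and learned weights rather than raw counts, but the failure mode sketched in the post is visible even here: the output is whatever was statistically common, not whatever was sensible.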

The outputs are entirely illusory, which is why "hallucination" has become such a problem. DeepSeek added a function to show you where it got training data from, and thus far lawyers have confirmed that yes, it still hallucinates, just in an arguably more dangerous way. Rather than invent new case law, it cites actual cases but misreports what they say. Because it's not designed to understand what it's reading; it's designed to pattern-match.

This is exactly what I mean when I say AI discussion invariably leads to people saying generative AI is good/important etc., then pointing to a different type of AI for examples. AlphaFold and generative AI are distant cousins at best, and you've presented them as the same guy. The only way we get an AI that has the user interface of ChatGPT and the potential of programs like AlphaFold is if we get AGI, which again nobody is really working on because we haven't got a foundation to build a framework on.

Again I suggest reading some of the materials so you can be better informed, because mistaking generative AI for an analytical AI is how you end up believing its hallucinations and doing something terrible you can never take back.

Can't wait to see what an LLM entirely trained on House of Leaves spits out...

Probably just random characters. Part of the reason OpenAI desperately wants special permission to be able to take people's work for its training data is that the volume required to even string basic sentences together is vastly more than they could afford if they had to pay even a modest licensing fee for every work. This is where theoretical projects like "I will make an AI trained only on ethically sourced, classic artworks" and "I will make an AI trained entirely on my own work" fall apart: the machine just goes "Feed me more data."
 

You sound like...
What part of "I don't know" did you not understand? I don't know you either, and you certainly don't know me (as is evident from what you said about slides and Q-learning). I don't know if it's possible or not, and I don't know you, so I don't trust your word on the matter either. For all I know you could live in your own scientific bubble. Don't take that personally, because again: I don't know... you, Q-learning, the road to AGI, etc. What I do know is that many insanely smart people have said or predicted things that didn't happen when and how they thought they would, or at all for that matter.

What I do expect is simply: it ain't happenin' anytime SOON(tm). As for disrespecting your profession or your training... What!?!? The medical profession has a far longer history, and even they discovered penicillin; is it any less of an achievement that they discovered it rather than made it? The chances that you personally will invent AGI are tiny, no matter how smart you (think you) are... So you're pissed that I think there's a good possibility your professional career was in vain? I think that says more about you than what I said. I find it likely that there won't be AGI as we define it now in our lifetime; heck, I could be wrong and someone could show up with it next week/month/year/decade.

The only thing I do know is that the big AI/LLM tech companies are trying to sell us something. And my professional experience in IT with (big) tech companies is that you don't trust sales! Not about products they have actually released, not about promised future features, and certainly not about technological promises they haven't even figured out how to build yet. I might be pleasantly surprised when I test X in use case Y for client Z, but that is rare. As for AI/LLM in my professional capacity, I would offer the pros and cons of a particular solution for a particular use case, indicate the business/legal risk, but focus primarily on the IT security risks.
 

Please forgive this little, almost off-topic aside:

I asked Grok, X/Twitter's AI, a question about a certain political topic (and we don't need to name it here), and its final phrase was "True leadership does not humiliate, it elevates those who follow it".
---
With the right software, an AI could create not only the geography of a fictional world but also describe the global climate and the movement of the tectonic plates (and consequently the areas with the most earthquakes or volcanic eruptions).

The AI could be useful for underage or rookie DMs who can't create new stories for the next games.

* There are serious risks; for example, a DM could want a story about a rebellion against slavery, but WotC's AI might say it is too controversial a theme.
 

The AI could be useful for underage or rookie DMs who can't create new stories for the next games.
Try asking DeepSeek R1 (671B) to make a D&D 5e encounter (you can add more specifics); the results will probably be interesting. But the real gem for new DMs is in the 'thinking' process; that is absolute gold. Not only do they get decent output (probably better than they could make at that time), they are also learning how that output is made...

The reality will probably be that no one looks at the 'thinking' process and just copy/pastes the output, not learning anything. That might be the bigger risk, but *shrugs* that's their problem; it's not as if smartphones have stopped kids from playing outside... Oh wait! ;)

Subject-matter control by WotC/Hasbro on an AI/LLM is entirely possible, but I also suspect that folks wanting to get around it might be able to circumvent those controls, or just find another AI/LLM service that doesn't do that.
 

With the right software, an AI could create not only the geography of a fictional world but also describe the global climate and the movement of the tectonic plates (and consequently the areas with the most earthquakes or volcanic eruptions).
Okay there's a lot going on here in this one claim.

"With the right software" is purely speculative. Software to do this does not exist, and while someone may be working on it, it is entirely unconnected to generative AI, and specifically to the generative AI Chris Cocks has championed: he's talking about ChatGPT, but one that's also trained on all the historic rulebooks, modules, novels, etc.

"describe the global climate and the movement of the tectonic plates (and consequently the areas with the most earthquakes or volcanic eruptions)": the actual processing to do this is not within the scope of machine learning. We only have data from one planet: Earth. We don't have the masses of data required to allow machine learning to generate predictive outcome rules and adapt them. This kind of software would need to be designed by specialists for this special role.

Also I'm not convinced it would add any value to a campaign where there are dragons. Just saying.

This is like taking an old car to a hairdresser and then explaining that, with the right tools, workspace, training, and experience, this hairdresser could convert the broken-down gasoline-dependent car into a self-driving EV. It is not technically untrue, but it is not within the scope of reasonable expectations, and it ignores everything else that would have to happen to make it possible.

The AI could be useful for underage or rookie DMs who can't create new stories for the next games.
I'm not convinced anyone has ever had a first session go wrong because they got the information about tectonic plates wrong. I am sure people have had first sessions go wrong by being overloaded with information that they don't understand, being given bad advice or advice that isn't relevant to their particular group, etc.

And just to be super clear on this point: there are things machine learning could do for D&D, but it's not going to help individual DMs. The tasks it's more suited for are things like analysing all the data in D&D Beyond character sheets and adventure logs, and coming back with discoveries our biases prevent us from looking for, thus making the people who make the game aware of trends that may or may not be interesting to them.

Because a question like "at what level do Rogues start frequently making skill checks for Stealth and Sleight of Hand" is obvious, but we might not think to check how many Paladins take the Skilled feat at level 8 so that they can add Arcana.
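That kind of aggregate question is a simple counting job once you have the records. A sketch on invented toy data standing in for D&D Beyond character records (the field names and entries here are assumptions, not a real schema):

```python
from collections import Counter

# Invented stand-in records; a real dataset would have millions of these.
characters = [
    {"cls": "Paladin", "level": 8,  "feats": ["Skilled"]},
    {"cls": "Paladin", "level": 8,  "feats": ["Sentinel"]},
    {"cls": "Paladin", "level": 10, "feats": ["Skilled", "Sentinel"]},
    {"cls": "Rogue",   "level": 8,  "feats": ["Skilled"]},
]

# How often does each class take each feat?
by_class = Counter((c["cls"], f) for c in characters for f in c["feats"])

print(by_class[("Paladin", "Skilled")])   # → 2
print(by_class[("Rogue", "Skilled")])     # → 1
```

The interesting part at scale isn't the counting itself but deciding which of these cross-tabs to look at, which is exactly the "trends our biases prevent us from looking for" problem described above.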

* There are serious risks; for example, a DM could want a story about a rebellion against slavery, but WotC's AI might say it is too controversial a theme.
Other serious risks:
  • The AI will derail the adventure by mixing up things: telling the DM to teleport a party on a light-hearted adventure to rescue a gnome into Tomb of Horrors, then creating a Wanton Wench to advise them their job is now to kill Strahd.
  • The AI may get all onboard with the slavery campaign, but advise the slavers are Lawful Good and the slaves Chaotic Evil - telling the DM to make the adventure about suppressing the slaves who need to remain slaves for their own good.
  • The AI may get confused mid-way and start writing about how the PCs actually deserve to be slaves and should be grateful for the slavers helping them
  • The AI might be right onboard up until the final act then decide it'd be a cool plot twist if the slaves turned on the PCs, due to a "we wanted to be slaves" plot it invents on the spot, thus creating a campaign which frames itself as about helping people but just leaves you with a pile of corpses.
  • The AI may get very confused with the training information and start making the campaign just nonsensical.
Try asking DeepSeek R1 (671B) to make a D&D 5e encounter (you can add more specifics); the results will probably be interesting. But the real gem for new DMs is in the 'thinking' process; that is absolute gold. Not only do they get decent output (probably better than they could make at that time), they are also learning how that output is made...
You know what works more reliably and even better than this? Just, like, looking at existing guides written by actual humans with lived experience who share opinions, ideas, etc. Seth Skorkowsky has, at time of writing, 324 videos up on gaming, covering everything from specific systems to particular modules. He's one guy.

You can also try communicating with your fellow humans - there are countless platforms to talk about nerd stuff and share knowledge and ideas.

Again, I encourage people to look at the actual studies I posted - AI kinda sucks at doing anything specific. "will probably be interesting" is highly speculative and works entirely on the premise you claimed you don't abide by - believing the sales pitch. DeepSeek can't pick an "interesting" idea because it doesn't know what is interesting to you.

One of the funniest things (to me) on Reddit is a post where someone announces they've been using the "rules sheet" someone made for Google Gemini to make good D&D content... the rules sheet is like 8 pages long because it has to be specific about literally everything because, it turns out, they have a particular idea of what "good" content is.

Imagine how much time people doing this spend thinking of ways to write prompts for the AI rather than thinking about stuff that's actually interesting to them.

And so, with that considered, why would anyone assign any significance to the "thinking" process? It's not a human thought process, it's not the process of someone with lots of experience, and it doesn't show you a compelling clash of ideas. It's just a search history from a computer that doesn't know anything until it starts searching. There's no reason to believe it vetted its sources, searched for the best, etc. It just grabbed what was at hand.

I mean, I tried the experiment, and all it did in the "thinking" section was babble out generic ideas, many of which were insultingly obvious and kinda condescending if you were to give them to a newbie DM. Also, it doesn't follow its own ideas/advice, so, like, there's an endorsement. If you do it multiple times you quickly find it falls into a pattern, because it only knows one way to structure things because... it's AI.

It also actively encourages bad DMing practices like planning the players' actions for them (e.g. "The druid will wildshape into a mastiff to track the bandit"), gives strategic advice that just isn't sound (recommending spells and abilities the PCs might not have), and makes random encounters that would totally unbalance/derail your plot.

It's not good. Just as various crypto blockchains are not an authority on ownership and the NFTs on them are worthless as such, so generative AI is not an authority on anything. It can't know what good encounter design is; it can only try to correlate information, and sadly there's a lot more bad advice that labels itself as good than there is accurately labelled good advice. Remember, these things need terabytes of training data.

The reality of it is that it's not very good at anything, other than being a thing for people to project their dreams and fantasies onto. The sales pitch is that you won't have to do research, thinking, etc. for yourself, but it's actually going to have someone or something else do it for you; it's just going to pretend it did, and smile.
 


