Hasbro CEO Chris Cocks Is Talking About AI in D&D Again



Chris Cocks, the CEO of Hasbro, is talking about the use of AI in Dungeons & Dragons again. In a recent interview with Semafor, Cocks once again brought up the potential use of AI in D&D and other Hasbro brands. Cocks described himself as an "AI bull" and floated a potential subscription service that would use AI to enrich D&D campaigns as one way the technology could be integrated. The full section of Semafor's interview is below:

Smartphone screens are not the toy industry’s only technology challenge. Cocks uses artificial intelligence tools to generate storylines, art, and voices for his D&D characters and hails AI as “a great leveler for user-generated content.”

Current AI platforms are failing to reward creators for their work, “but I think that’s solvable,” he says, describing himself as “an AI bull” who believes the technology will extend the reach of Hasbro’s brands. That could include subscription services letting other Dungeon Masters enrich their D&D campaigns, or offerings to let parents customize Peppa Pig animations. “It’s supercharging fandom,” he says, “and I think that’s just net good for the brand.”


The D&D design team and others involved with D&D at Wizards of the Coast have repeatedly stood by a statement posted in 2023 saying that D&D is made by humans for humans. The D&D team's full official stance on AI can be found below.

For 50 years, D&D has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn't changing. Our internal guidelines remain the same with regards to artificial intelligence tools: We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes D&D great.
 

Christian Hoffer



The problem with the ethical argument is...when it comes to technologies that generate "power," you have to get all sides to play ball. If you want to ban AI for ethical reasons, fine...

Wow. I made no call for any such thing. Jump to conclusions much?

There are a bunch of uses for generative AI that can be really valuable to the human race, and have no ethical issues attached. So, no, I don't want to broadly ban its use or development on ethical grounds.

but if you don't get the US, Europe, Russia, China, India, etc., to agree not to push AI research and utilize AI, then it doesn't matter, because these technologies are too powerful not to use if a competitor country is going to use them.

Quite a grandiose strawman you've got there.

Not using generative AI to make commercial fantasy art isn't going to make the US fall behind in the coming Great AI Wars. India churning out pics of unicorns and orcs isn't going to put us at a major economic disadvantage.
 

The article you link to is about Meta, not Google.
Ok, I mistyped. People make mistakes, but that doesn't affect my argument one bit. I'll go back and correct my post.

But that's something entirely different imho from what others are talking about.
People in this thread have been arguing many different points of view on this situation, but we had been replying to each other several times, so it was our own conversation I was responding to. I'm not sure how this point is relevant to our back-and-forth.

I do have more I would like to respond to but I have to go back to work.
 

I think Chris Cocks should stick to talking about nominative determinism.

Mod Note:
Someone already got red text on Saturday for puerile jokes about his name.

Continuing to make those, when a moderator is active in the discussion and highly likely to see it, can only be considered... not the brightest of moves.

Have a Warning Point. Keep it clean, folks.
 


What problem is it solving?
IMXP, AI solves the problem of not attracting enough investor attention, since a lot of investors are aggressively in favor of it (mostly because of the supposed labor cost savings).

Long term, I don't think that's a good strategy for D&D, because the value of a D&D product for me is significantly tied up in the labor used to produce it, and if a D&D product ever felt like it wasn't done by human beings, it would lose a significant amount of value for me. I'm also skeptical of the wisdom of chasing investor trends.
 

I'm reminded of the beginning of Douglas Adams' novel Dirk Gently's Holistic Detective Agency, where he talks about how—after inventing answering machines to talk on the phone for us, and VCRs to watch TV for us—we (humans) invented electric monks to believe things for us.

Now we've reached the next step: having AI to play games for us.
Finally freeing me up to do the important things in life, like work at a soul-crushing job for the man!
 

I'm also skeptical of the wisdom of chasing investor trends.

As I've pointed out a few times earlier: the techbros are selling snake oil. I assume that they're doing this because they believe that in the background, they can actually get the machine to do the crazy things they've claimed.

They can't. It's that simple. The technologies and approaches they've married themselves to explicitly prohibit them from achieving AGI -- or anything more than a glorified Chinese room, really -- because there will never not be a context window problem. Not to mention that an LLM's approach to causality is incredibly limited: it isn't rooted in actual causal reasoning or temporal logic, but is "inferred" (insofar as an LLM can infer things) from text, and that will have consequences for what these models can do.
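
To make the context window point concrete, here's a rough sketch of why older conversation simply falls out of scope. Everything here is illustrative and made up for this post -- the word-count "tokenizer," the 8,000-token budget, and the fit_to_window helper are assumptions, not any real model's API:

```python
# Minimal sketch of the fixed-context-window problem. The token counting
# and the 8,000-token budget are illustrative assumptions, not any real
# model's limits or API.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def fit_to_window(messages: list[str], budget: int = 8000) -> list[str]:
    """Keep only the most recent messages that fit in the budget.

    Everything older is silently dropped -- the model never sees it,
    no matter how important it was to the campaign or conversation.
    """
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):   # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break                    # older history falls off right here
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order
```

No matter how clever the truncation or summarization strategy on top, something like that cutoff always exists, which is the point: the model reasons only over whatever survives the window, not over the actual history.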
 
