Hasbro CEO Chris Cocks Is Talking About AI in D&D Again



Chris Cocks, the CEO of Hasbro, is talking about the use of AI in Dungeons & Dragons again. In a recent interview with Semafor, Cocks once again brought up the potential use of AI in D&D and other Hasbro brands. Cocks described himself as an "AI bull" and floated a subscription service that uses AI to enrich D&D campaigns as one way to integrate the technology. The full section of Semafor's interview is below:

Smartphone screens are not the toy industry’s only technology challenge. Cocks uses artificial intelligence tools to generate storylines, art, and voices for his D&D characters and hails AI as “a great leveler for user-generated content.”

Current AI platforms are failing to reward creators for their work, “but I think that’s solvable,” he says, describing himself as “an AI bull” who believes the technology will extend the reach of Hasbro’s brands. That could include subscription services letting other Dungeon Masters enrich their D&D campaigns, or offerings to let parents customize Peppa Pig animations. “It’s supercharging fandom,” he says, “and I think that’s just net good for the brand.”


The D&D design team and others involved with D&D at Wizards of the Coast have repeatedly stood by a statement posted back in 2023 saying that D&D is made by humans for humans. The D&D team's full, official stance on AI in D&D can be found below.

For 50 years, D&D has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn't changing. Our internal guidelines remain the same with regards to artificial intelligence tools: We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes D&D great.
 


Christian Hoffer

Either AI just turns out to not be monetizable enough for its cost on its own (which is unlikely)
The variety of AI we are discussing is already being confirmed as cost-inefficient by Microsoft and OpenAI. Much like robots that make pizza mid-delivery, it doesn't really solve any big problems and is, at best, a more expensive solution than those currently available.

I think the biggest fundamental legal question is....does an AI have to pay people for the data it is trained on?
A sample of the legal problems:
  • Do creators of training materials need to be consulted and approve the use of their material before it can be used for training? This is a question that can't really have a universal answer, since data privacy rights vary wildly from jurisdiction to jurisdiction, as does copyright law. Many jurisdictions would also require that people be allowed to remove their data/works.
  • Do institutions such as OpenAI owe money to the people whose material they have already used? If so, are they owed a one-off payment or an ongoing one?
  • To what degree is the AI provider responsible for the output of the AI? If it's "not at all", then they are essentially offering no value, and if it's "kind of", then it raises concerns about whether they can be held liable for misinformation, offensive content, etc. There's already case law where an AI bot made a promise that the company had to uphold.
  • To what degree can a customer hold the provider to the promises of the AI? If D&D Beyond comes up with an AI that is supposed to generate unique encounters for a party I specify - and it keeps giving me the same 3 terrible encounters - am I entitled to a refund?
  • To what extent must holders of AI surrender information under things like New Zealand's Privacy Act? If I submit a query and it gives me what I feel is an oddly personal response, how much digging is required to establish what information they hold about me? Is "it's obfuscated because it's been processed by machine learning" a sufficient answer?
  • If an employee is told to rely on an AI, and it hallucinates a terrible answer that the employee acts on, who is responsible? The employee? The employer? The company that produced the AI? The person who produced the training material containing the terrible advice?
  • If the AI effectively engages in a protected activity, such as giving legal or medical advice, but was not intended to, who is responsible? The creator of the AI? The person who entered the prompt? The person who provided training to the person who supplied the prompt? The person who created the training material?
  • Can an AI defame someone? Under most defamation law, publishing the defamatory comment to even one person can be enough to start an action. And if an AI can't defame, who takes responsibility for any harm it causes?
I have covered numerous studies and reports on how AI is already creating a lot of problems and not really any solutions - so I think if you want to be an advocate for AI, you should look at those and think about them, rather than simply assuming there aren't many problems and that it'll all be okay because of the vibe.

But until that time, normal businesses have every incentive to utilize AI. It is an extremely powerful tool that has the potential to offer enormous value to customers. It would be absolutely FOOLISH not to include AI into dnd in the current climate.
Again, studies and reports say otherwise. Many of them say it would be objectively foolish to include it.

I don't think anyone is arguing it is of "no value".
Part of the complication here is that AI is a very broad concept, so the conversation often starts with generative AI and then people start incorporating other aspects of AI which have been shown to be of value. Many areas of machine learning have lots of applications when used with an awareness of their limitations, etc.

When it comes to the generative AI that Chris Cocks is advocating for...
 


Wow. I made no call for any such thing. Jump to conclusions much?
I would agree you jumped to a big conclusion there. I was discussing ethics arguments (including a full ban) as a discussion on that point; my intention was not to state your position as "we must ban AI for ethical reasons", simply to discuss the issues with ethical arguments in general, since the ethical side was one you brought up.
 

Quite a grandiose strawman you've got there.

Not using generative AI to make commercial fantasy art isn't going to make the US fall behind in the coming Great AI Wars. India churning out pics of unicorns and orcs isn't going to put us at a major economic disadvantage.
but the "Great AI wars" IS what pushes that technology forward into greater commercial viability. The core technology is not going to be stopped by ethics concerns because of its global power potential was the point I was getting at.
 

Frankly, I think more Enworlders could benefit from using AI.

WOTC has no interest in stuff like D&D's economy, a low magic game, grittiness, etc.

But ChatGPT can and will make that stuff for you!!!
 


"Great AI wars" IS what pushes that technology forward into greater commercial viability. The core technology is not going to be stopped by ethics concerns because of its global power potential was the point I was getting at.
The technology for the "great AI wars" neither exists nor is being worked on.

All discussion of the great AI wars relates to Artificial General Intelligence ("AGI"), a type of computing we don't even have a framework for - even though people have been saying "it's here" and "it's 20 years away" ever since I was 6 (for context, I'm presently 45).

The "AI" being discussed here is Generative AI which not even vaguely connected to AGI - it doesn't understand causality and concepts like evolving law - it essentially uses statistical math to try to predict what kind of response the user wants to see. This makes it doubly dangerous as telling people what they want to see can convince them of things that are simply not true (hence that paper about reading financial statements getting retracted, for example).

Ethically, it is essentially going to require consensus on that, because information in all the areas you've cited is a carefully controlled commodity... as it is in literally every other jurisdiction. Everything from conventional censorship to specialist market protections - every nation has to be concerned about this. Additionally, there's still no evidence that there's any plan beyond trying to devalue the work of knowledge workers and highly skilled employees - so it's vulnerable to public backlash.

Again, I would encourage you to look into how many of the claims you've made have been debunked. I would also encourage you to try to find a single positive use case - because OpenAI still has to do conventional VC funding and seek government grants, and X ended up giving access to Grok away for free because nobody wanted to pay for it. If you strip away all the fantasies of a new form of computing so valuable that it warrants any risk and any expense, what's left?
 

I don't know if it needs to be said, but what we call AI is artificial, not intelligent. It's amazing at taking input, throwing it all into a blender, and spitting out something comprehensible. But it does not think, and many people do not see a path towards it ever "thinking" - at least not for LLMs.

However, there are applications where AI is making strides in medicine, materials science, and the like. It can take years for a person to figure out how a protein folds (it took a team more than 20 years to figure out the first structure); even before AI, it could still take years to decode one. AI can now figure out the structure of any given protein in seconds.

The thing is, though, while the tool is truly amazing and seemed to disrupt many lines of research initially, what scientists soon realized was that it just opened up new lines of inquiry and study. The tool is not the end of study into protein structures; it's just a tool that lets scientists spend time on other questions and solve different problems.

So I think AI will cause changes, unless it all falls apart for some reason. That doesn't mean it ever will or ever can replace people; it just means we will be freed up to pursue other things. Hopefully. :)
 

The technology for the "great AI wars" neither exists nor is being worked on.

All discussion of the great AI wars relates to Artificial General Intelligence ("AGI"), a type of computing we don't even have a framework for - even though people have been saying "it's here" and "it's 20 years away" ever since I was 6 (for context, I'm presently 45).
This is not true. Current generative AIs are already doing incredible work. Someone mentioned the protein folding scenario: using AI to effectively determine the structure of nearly every protein we are aware of. Work that would have been untouchable a few years ago.

You don't need AGI to have a radical leap in what is possible.
 

The technologies and approaches they've married themselves to explicitly prohibit them from achieving AGI -- or anything more than a glorified Chinese room, really -- because there will never not be a context window problem. Not to mention that an LLM's approach to causality is incredibly limited and isn't rooted in actual causal reasoning or temporal logic, but "inferred" (insofar as an LLM can infer things) from text, and that will have consequences in terms of what their models can do.

Can't wait to see what an LLM entirely trained on House of Leaves spits out...
 

...prohibit them from achieving AGI...
I don't know, but I do believe it's similar to the Jetsons' flying cars or the robot help in the house. While there are technically flying cars (people have built them) and humanoid robots, they're not a practical implementation (yet?), and we certainly don't all have them the way so many people in the '70s believed we would by now. If someone makes AGI, it's probably more a matter of discovering something they made by accident than anything else... ;)

But some things, like the household robots, do exist, just in different forms - just not a humanoid robot pushing a vacuum cleaner. They come with their own caveats, but they do vacuum/mop your house when properly set up. The same goes for AI/LLMs: what some people are selling is indeed snake oil, but isn't there an American expression, "Fake it until you make it!"? That doesn't mean AI/LLMs are useless and not getting any better. It just limits what you use them for.

To be honest, I wouldn't trust an LLM at all to do anything important at the moment without human oversight. As an example, olmOCR was recently released; it pretty much reads PDFs/images and assembles them into readable text, removing page breaks from the middle of sentences (as you would get when you export a PDF to a txt file) while adding markdown. It's relatively fast and there are some known issues (like tables missing things, etc.), but I noticed that the LLM missed a page and inserted the same page's text again in place of the one it missed, dropped blocks of text, arranged text blocks wrongly, and so on. Suddenly a solution that would normally be able to process a million pages in ~72 hours for ~$190 needs a human to double-check the results... Depending on how accurate that human needs to be, a million pages is going to take a LOT longer to check than three working days... Will it be faster than a human doing it manually (OCR and removing the formatting/columns)? Absolutely! Will it be as fast as advertised? Probably, it just needs a far slower human to check all the results!

And that is what a LOT of people don't get. They believe there are magical systems that will do everything correctly, when fundamentally LLMs make stuff up if they don't have the answer (and they don't really know they're making stuff up).
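To put "a LOT longer than three working days" in rough numbers, here's a quick back-of-the-envelope sketch. The machine figures (~1M pages, ~72 hours, ~$190) are the ones quoted above; the per-page review time and reviewer rate are assumptions I'm making up for illustration, not figures from olmOCR's documentation:

```python
# Back-of-the-envelope: olmOCR-style batch processing vs. human double-checking.

PAGES = 1_000_000
MACHINE_HOURS = 72             # quoted batch processing time from the post above
MACHINE_COST = 190             # quoted batch processing cost (USD) from the post above

REVIEW_SECONDS_PER_PAGE = 30   # assumption: a quick human sanity check per page
REVIEWER_RATE = 25             # assumption: USD per hour for the human checker

review_hours = PAGES * REVIEW_SECONDS_PER_PAGE / 3600
review_cost = review_hours * REVIEWER_RATE

print(f"Machine pass:  {MACHINE_HOURS:>9,.0f} h   ${MACHINE_COST:>11,.0f}")
print(f"Human review:  {review_hours:>9,.0f} h   ${review_cost:>11,.0f}")
# Machine pass:         72 h   $        190
# Human review:      8,333 h   $    208,333
```

Even with a fairly cursory 30-second check per page, the human review dwarfs the machine pass by two orders of magnitude in both time and cost, which is the whole point: the verification step, not the generation step, ends up being the bottleneck.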

But with fiction and RPGs, making stuff up is what we do. In roleplaying we sometimes make mistakes that wind up being canon in the campaign, often counter to what was intended or believed. Most DMs don't correct that and go with the flow: instead of making a big deal about an error they (or someone else) made, they integrate it into the rest of the story. "Bob the NPC said something different from what the King said last session!" Eh... "YES! Maybe Bob is lying... Or how much do you trust the King?" So when an LLM hallucinates, often you can just go with the flow. That's going to be problematic for people with a "control" disorder when things don't go exactly like they had in mind, but for most people that isn't a problem at all!

I've already seen single-player text adventures that are way more flexible than the old choose-your-own-adventure books or even computer RPGs. Are they as flexible as a good human DM? I doubt it. But maybe as good as an average DM, or better than a bad DM? Not all humans are created equal...
 
