Hasbro CEO Chris Cocks Is Talking About AI in D&D Again

Status
Not open for further replies.


Chris Cocks, the CEO of Hasbro, is talking about the use of AI in Dungeons & Dragons again. In a recent interview with Semafor, Cocks once again raised the potential use of AI in D&D and other Hasbro brands. Cocks described himself as an "AI bull" and floated a subscription service that uses AI to enrich D&D campaigns as one way to integrate the technology. The relevant section of Semafor's interview is below:

Smartphone screens are not the toy industry’s only technology challenge. Cocks uses artificial intelligence tools to generate storylines, art, and voices for his D&D characters and hails AI as “a great leveler for user-generated content.”

Current AI platforms are failing to reward creators for their work, “but I think that’s solvable,” he says, describing himself as “an AI bull” who believes the technology will extend the reach of Hasbro’s brands. That could include subscription services letting other Dungeon Masters enrich their D&D campaigns, or offerings to let parents customize Peppa Pig animations. “It’s supercharging fandom,” he says, “and I think that’s just net good for the brand.”


The D&D design team and others involved with D&D at Wizards of the Coast have repeatedly stood by a statement posted back in 2023 saying that D&D is made by humans for humans. The full, official stance of the D&D team on AI in D&D can be found below.

For 50 years, D&D has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn't changing. Our internal guidelines remain the same with regards to artificial intelligence tools: We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes D&D great.
 


Christian Hoffer


I'm not a lawyer, but the fair use argument doesn't seem to apply here, at least when the material being scraped from the internet is copyrighted content that isn't offered for free. If I went to a torrent site, downloaded a few of this year's best-selling novels and art books, and got caught, I'd face a real chance of being smacked down by the legal system and the copyright holders, regardless of what I used the material for afterwards (say, posting a review, which would be fair use). The AI companies are stealing, and that's the issue, not what they do with the material afterwards. Fruit of the poisoned tree. As for it being hard to prove, you might be right there, but we won't know until the various court cases are resolved.

I have a friend who is an author, mostly writing books on fitness. His agent was able to figure out that his books were probably being used to train AI, so he's involved in at least one of the ongoing lawsuits. He's not happy about his work being stolen, but he also feels that at this point it won't matter what the results of the cases are, because it's too late to rein in the technology.
The AI companies claim they're only training AIs on content they're legally allowed to access, either free or paid. But Meta recently got caught torrenting a collection of books, so there might be more skeletons in the closet.
 



I'm pretty sure everyone in this conversation has already settled on their moral philosophy regarding the use of AI, and that few are open to persuasion, whether the argument concerns the harm to already-vulnerable creators, the harm to the environment, or the legality of stealing through a proxy. So I'd like to talk about the thing that probably should matter to TTRPG players (who are infamously competitive): what relying on AI does to you.

In Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality, Harvard Business School recruited Boston Consulting Group to see how large language models ("LLMs", i.e. AI that generates text) affected their work. While the results were initially promising for corpos who prioritize short-term metrics, the overall outcomes were not so good. The "jagged frontier" of the title is the unpredictable boundary between tasks where AI is helpful, where it is not helpful, and where it actively impedes people. Due to the opaque nature of machine learning ("ML"), you can't predict when it will make mistakes or what mistakes it will make.

The study found that for tasks where the AI was an asset, it helped low-skill workers shrink the gap between themselves and high-skill workers, but it noted concerns about whether those workers were actually learning the core skills or just relying on the AI. For tasks "outside" the frontier, the AI often introduced mistakes that even experienced workers overlooked (because AI is mostly good at sounding persuasive, not at being right).

Generative AI at Work found, again, that low-skill and inexperienced support centre workers achieved short-term gains in productivity but didn't seem to upskill as they worked, and that they often received feedback about sounding mechanical and inauthentic. (Not exactly the feedback I want from my D&D group.) How Knowledge Workers Think Generative AI Will (Not) Transform Their Industries found that most people in knowledge and creative work who lean on AI lean on it for the non-creative tasks they don't want to do (which has different implications in a business context).

Young Coders Are Using AI for Everything, Giving "Blank Stares" When Asked How Programs Actually Work seems to confirm the concerns raised in the formal studies: using AI as a crutch in the short term blocks you from developing skills, because you stop growing when you no longer challenge yourself to do the work.

So, whether Chris Cocks is being honest or not, what he's essentially telling us is that he doesn't have the skills to do the things he wants to lean on the AI for (remember, he's a corpo, not an author or a pro DM), and that he doesn't really care if the output has problems, feels inauthentic, or leads to a shallower, flatter experience, because it upticks the "efficiency" of his game. Instead of the long-dreamed-of "infinite adventures" machine that provides endless new stories, it's a machine for building an infinite quantity of the legendary "super generic" adventure: massive amounts of content that you skip over because you've seen and read it all before.

And, while his group might not want to tell him, it's almost certainly bad for him and bad for his group. "If the Machine Is As Good As Me, Then What Use Am I?" - How the Use of ChatGPT Changes Young Professionals' Perception of Productivity and Accomplishment covered how, while the tool tends to induce periods where people feel super productive, it also leads to spirals where people get frustrated because they spend more time tweaking prompts than learning or exercising creativity, which can leave them feeling worse about the quality of their output, or of their game.

In short, it is the latest and most expensive in a long line of gimmicks that promise to revolutionize your game but end up making people hate playing, because it stops being fun, stops being about what they show up for, and increasingly becomes about an unrelated challenge. People start feeling their work is getting worse, get frustrated that they can't fix it by the method on offer, blame themselves, spend more time trying to compensate, and burn out.

But it appeals to people who view everything through metrics, because word machine go brrr!
 

Here's what baffles me. These AI companies can scrape (pirate, steal) the internet for copyrighted material and use it publicly and it's OK, at least so far, but if some individual pirates or steals content from the internet they could be looking at devastating fines.
That's because it isn't the same thing. And the law doesn't react quickly to new things. Of course, power and money come into the picture when the law tries to figure it out.

I've seen something similar happen over the last ~35 years, as computers became common and powerful enough to rip music, movies, etc. Because of how our laws were written in the Netherlands, 'piracy' wasn't illegal for around a decade. The distribution part was; 'making a copy for your own use' from an 'illegal' source was not.

Even now, looking at how computers technically work, much of copyright is a mess. Our only guidance is how judges interpret the law, and the economic impact is never ignored. It often takes years, if not decades, before companies are fined for monopolistic practices, and even then the fines are often partially or wholly overturned in the years that follow.

You can read a book from the library; there's still copyright on that work, but because of how libraries work, they (not the owner of the copyrighted work) have been granted a certain dispensation from the law. And there's a certain amount of common sense applied when you lend a book to a friend. How does that work with digital books? What if you made a backup (copy) of that book? If you read a book from the library, there's a 'copy' of it in your memory. When I stream a movie to my computer, there's a copy of it on my PC (legal), but if I were to somehow keep that copy on my PC, it wouldn't be legal anymore. That's without even going into all the intervening network infrastructure the movie is copied to on its way from the streaming server to my PC. The copyright owner doesn't know which networks those actually are, nor does it grant explicit permission to those networks. If you implemented the copyright laws as written, you would break the internet, and (almost) no one thinks that is a good idea...

AI/LLM usage of copyrighted material isn't a simple 'problem' for the law and judges to solve, especially when the people making those decisions don't really understand how AI/LLMs or computers work in the first place. The people advising them all have their own agendas as well. And a country's laws aren't exactly immune to inside and outside influences.

In the US, at least, I don't see AI/LLMs disappearing under the current administration. I don't see that happening in China anytime soon either. And the EU might have its own point of view on the matter, but when two other global powers aren't stopping it, I don't see it happening there anytime soon either. And when investment in these technologies reaches the trillions, the economic weight becomes far too great to pull the rug from under the AI/LLM steamroller, especially when very good AI/LLM models are open-sourced and people can run them on local hardware. Think of it as Pandora's box: even if you manage to close the box, the things that were in it are already out there in the world.

Another fun example from RPGs: when you first introduce your players to a new game, many of them won't own the rulebook, but you're still distributing a copyrighted work to them, just not a physical copy. It's like how watching one of your DVDs with friends is tolerated, but showing it to your whole school isn't.

Quite a few people were talking about drastic changes to copyright and patent law decades before this AI/LLM stuff became mainstream and relevant. Maybe society will now be forced to change (again) under enormous economic pressure. It's not as if we've always had copyright and patents in the first place: 1710 in England, 1790 in the US, around 1840 in the German-speaking countries, and only 1886 internationally (the Berne Convention).
 

That's because it isn't the same thing.
I agree with a lot of what you said, but this line stands out as something I have to disagree with. AI companies certainly are doing the same thing. Meta got caught torrenting pirated books to train AI, and the only difference between me doing the torrenting and Meta doing it is the financial resources available for legal representation, for lobbying the government, and for paying the potential fines. As I understand it, fines for torrenting can run up to $30,000 per work downloaded, plus what I'd have to pay a lawyer (not cheap). Considering that Meta pirated at least tens of millions of books, and that the company definitely knew better, it should be hit with the maximum fine per work, but I doubt that will happen. Meanwhile, a Google search will turn up stories of individuals hit with massive fines, fines the average American cannot afford and that would ruin them financially.

In the US, at least, this is a problem to be solved by judges after laws are passed. Right or wrong, judges and courts have the final say in interpreting what a law means, and different jurisdictions will hold to different legal theories or philosophical/political views that guide their decision-making; even then, they can be wildly inconsistent in applying those standards. Also, as you say, it is the courts that will levy fines, which may or may not hold up on appeal. The average citizen has neither the time nor the financial resources to fight the same fight.

Edited to correct the company committing the crimes.
 
Last edited:

And of course there’s the danger of having an LLM directly interface with users.

The Los Angeles Times had to pull their AI Bias Detector (lol) after only ONE DAY because it started defending the Klan.


In the morally sometimes grey and sometimes evil campaigns of TTRPGs, I cannot imagine this not being a TREMENDOUS risk to the brand.
 

Here's what baffles me. These AI companies can scrape (pirate, steal) the internet for copyrighted material and use it publicly and it's OK, at least so far, but if some individual pirates or steals content from the internet they could be looking at devastating fines. Other than the difference in financial resources between these companies and most individuals, what's the difference? Is this a case of might, or money, making right? If stealing works from the internet isn't OK for an individual, then AI companies should be held to the same standards (including devastating fines when caught).
It's like laundering money. If you steal some money and you're caught with that money, you're going to jail. But steal some money, pass it through a bunch of hands (bank accounts, cryptocurrency, etc.), and it becomes a lot more difficult to tell that it's the same money that was stolen.

GenAI is an IP-laundering machine.
 


AI, by its nature, is a subscription-based model. With how the tech works right now, you can't offer it up as anything other than a subscription. Combine that with the staff cutbacks they think it enables, and it's every C-suite executive's dream.

The root issue is that we play these games because we like the challenge of being creative. Using AI here is like building an escalator up the side of Mount Everest. This isn't toiling away on a vapid PowerPoint or trudging through a TPS report.

It's actually fun, but I'm not sure they get that. We don't play TTRPGs despite the creative demands. We play them because of them.
The Ollama server I have running locally begs to differ.
 

We're in a transition period, with AI taking the stage more and more often. The thing is, it's coming, and in certain areas it's already here. There's nothing that can be done about it. If you're giving a talk to your investors about the future, especially about a growing part of your business, you're going to talk about it.

And D&D is transitioning to being more online, so this is going to be a bigger and bigger part of WotC's approach. As my old friend Kosh said, "The avalanche has begun. It's too late for the pebbles to vote." So as much as I can appreciate the sentiment that use of AI should be stopped, the likelihood of that happening is almost zero. Will it change how my players and I play the game? Not at the moment, but down the road? Likely.

In my own work in tech, I can see things changing radically in the next 10 to 15 years, which is about when I'll be retiring. I'm just trying to guide my daughter in the best possible way for her to be successful at this point. She's 8, so she embraces tech as much as she can, and the world she's going to live in is likely to be very different from mine.
 
