RPG Evolution: Hasbro's AI Plans

We can make some educated guesses about Hasbro's AI plans thanks to a recent interview with CEO Chris Cocks.

Picture courtesy of Pixabay.

Not surprisingly, Large Language Model (LLM) Artificial Intelligence (AI) is in every business's plans, and Hasbro is no different. The question is how the company plans to use it ethically in light of several missteps in which Wizards of the Coast, the Hasbro division overseeing Dungeons & Dragons, failed to disclose that AI was involved in certain pieces of art. The ongoing controversies were enough to make WOTC update its AI policy.

An AI Product Every Two to Three Months

That hasn't stopped Chris Cocks, former CEO of WOTC and current CEO of Hasbro, from expounding on his plans for AI:
...we’re trying to do a new AI product experiment once every two to three months. That’s tending to be more game-focused for us, a little more gamified. We’re trying to keep it toward older audiences, to make sure all the content is appropriate...You’ll see more of how we’re thinking about how we can integrate AI, how we can integrate digital with physical gaming over time...I think most major entertainment and IP holders are at least thinking about it.
What Cocks is alluding to is how LLM AIs are sourced. The LLM controversies revolve around, among other things, the fact that the AIs are trained on content without the owners' permission. In other words, although LLMs are often trained on publicly available content, the users sharing that content never imagined a robot would be hoovering up their dialogue to make money for someone else. The throughline to art is easier to detect (though, as the above controversies show, harder to prove); but when it comes to text, user-generated content like Reddit's is invaluable. These AIs are only as valuable as the content they have at their disposal to train on. This is why Poe.com and other customizable AIs, trained on your own content, can be so useful to Dungeon Masters who want a true assistant that can sort through decades of homebrew content in seconds; a rough sketch of that idea follows below. I'll discuss using Poe.com in a future article.
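To make that concrete, here is a minimal sketch of the "sort through decades of homebrew content" idea, using nothing more than keyword scoring over local text files. It's an illustration only, not how Poe.com or any commercial assistant actually works, and the folder and query below are hypothetical.

```python
from pathlib import Path

def search_homebrew(notes_dir: str, query: str, top_n: int = 5):
    """Score each .txt file in notes_dir by how often the query's words appear in it."""
    terms = query.lower().split()
    scored = []
    for path in Path(notes_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        score = sum(text.count(term) for term in terms)
        if score:
            scored.append((score, path.name))
    return sorted(scored, reverse=True)[:top_n]

# Hypothetical usage: dig up old material about a lich's phylactery from a folder of campaign notes.
print(search_homebrew("my_campaign_notes", "lich phylactery tower"))
```

A real assistant would put an LLM on top of that same pile of notes, but the value proposition is identical: the content you already own is what makes the tool useful.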

Respecting Creators, Works of Art, and Ownership

Cocks is keenly aware of AI's controversies, having weathered both the Open Game License debacle and the issues with AI-generated art:
We certainly weren’t at our best during some points on the Open Game License. But I think we learned pretty fast. We got back to first principles pretty quickly ... The key there is the responsible use of it. We have an even higher bar we need to hit because we serve audiences of all ages. We go from preschoolers on up to adulthood. I don’t think we can be very cavalier in how we think about AI...That said, it’s exciting. There’s a lot of potential for delighting audiences. We need to make sure that we do it in a way that respects the creators we work with, respects their works of art, respects their ownership of those works, and also creates a fun and safe environment for kids who might use it.
And now we come to it: how would WOTC and Hasbro use AI in a way that respects creators, their work, and their ownership, and is still fun to use?

How Might WOTC Use AI for D&D?

Cocks gives us some hints in his answers:
The 20-plus years that the Open Game License has been in existence for something like D&D, I think that gives us a lot of experience to navigate what will emerge with AI, and just generally the development of user-based content platforms, whether it’s Roblox or Minecraft or what Epic has up their sleeves.
The Open Game License (OGL), by its very nature, is meant to be used in much the same way LLMs try to use the entirety of the Internet. What was likely a thorn in the side of lawyers may well seem like an opportunity now. Unlike the Internet, though, the OGL has a framework for sharing -- even if its creators never envisioned sharing with a machine. More to the point, everyone using the Open Game License is potentially adding to LLM content; databases of OGL content in wiki format are just more fodder for LLMs to learn from. WOTC could certainly leverage that content to train an AI on Dungeons & Dragons just as much as anyone else if they so chose; however, a large company using OGL content to fuel their AI doesn't seem like it respects those creators and their ownership.

So it's possible WOTC may not use OGL content at all to train its AI. They don't need it -- there's plenty of content the company can leverage from its own vaults:
The advantage we have ... This is cutting-edge technology, and Hasbro is a 100-year-old company, which you don’t usually think is ... a threat ... But when you talk about the richness of the lore and the depth of the brands–D&D has 50 years of content that we can mine. Literally thousands of adventures that we’ve created, probably tens of millions of words we own and can leverage. Magic: The Gathering has been around for 35 years, more than 15,000 cards we can use in something like that. Peppa Pig has been around for 20 years and has hundreds of thousands of hours of published content we can leverage. Transformers, I’ve been watching Transformers TV shows since I was a kid in Cincinnati in the early ‘80s. We can leverage all of that to be able to build very interesting and compelling use cases for AI that can bring our characters to life. We can build tools that aid in content creation for users or create really interesting gamified scenarios around them.
The specific reference to 35 years of Magic: The Gathering content "that we can leverage" has a precedent. TSR, which published D&D before WOTC took over, churned out the Spellfire card game in response to Magic: The Gathering, relying heavily on what was then 20 years of TSR's art archives. One can easily imagine AI generating this type of game with art WOTC owns in a very short period of time.

But Cocks is thinking bigger than that for Dungeons & Dragons. He explains how he uses AI with D&D specifically:
I use AI in building out my D&D campaigns. I play D&D three or four times a month with my friends. I’m horrible at art. I don’t commercialize anything I do. It doesn’t have anything to do with work. But what I’m able to accomplish with the Bing image creator, or talking to ChatGPT, it really delights my middle-aged friends when I do a Roll20 campaign or a D&D Beyond campaign and I put some PowerPoints together on a TV and call it an interactive map.
In the future, WOTC could easily change their contracts to explicitly state that any art they commission may be used to train a future AI (if they don't already). For content they already own -- and WOTC owns decades of art created for Magic: The Gathering -- they may already be within their rights to do this.

Add all this up, and companies like Hasbro are looking at their archives of information -- be it text, graphics, or examples of play -- as a competitive advantage: material to train their AIs on in a way their rivals can't.

The Inevitable

In short, it's not a question of if WOTC and Hasbro are going to use AI, just when. And by all indications, that future will involve databases of content that are either clearly open source or owned by Hasbro, with LLMs that will then do the heavy lifting on the creative side of gaming that was once filled by other gamers. For Dungeons & Dragons in particular, the challenge in getting a game started has always been finding a Dungeon Master, a tough role for any gamer to fill and the linchpin of every successful D&D campaign. With D&D Beyond now firmly in WOTC's grasp, they could easily provide an AI platform on that service, using the data it learns from thousands of players there to refine its algorithms and teach it to be a better DM. Give it enough time, and it may well become a resource for players who want a DM but can't find one.

We can't know for sure what WOTC or Hasbro has planned. But Cocks makes it clear AI is part of Hasbro’s future:
While there are definitely areas of concern that we have to be watchful for, and there are definitely elements to the chess game that we have to think about before we move, it’s a really cool technology that has a lot of playfulness associated with it. If we can figure out how to harness it the right way, it’ll end up being a boon for users.
In three to five years, we might have officially sanctioned AI Dungeon Masters. Doesn't seem realistic? Unofficial versions are already here.
 

Michael Tresca

Raiztt

Adventurer
It is "intent". Because if the AI can "understand" context and pursue a goal, then that is teleological intent.
So there is something that it is like to be the AI? Like it has an internal subjective experience of intentionality?

But, also, what Jer is saying is that the intent still belongs to the humans who create the AI - it is an expression of human intentionality; the AI doesn't have any of its own.
 

Considering that WotC owns the entire print run of Dungeon and Dragon magazines, there's about 500 adventures right there. Add Adventurers League, the various Living campaigns of the past, and the mountain of RPGA adventures, most of which they own, and "thousands" isn't a stretch.
Thousands for certain, and quite possibly an order of magnitude greater. Dungeon alone ran over 200 issues counting the online ones, and has always had what, 4-5 adventures of varying lengths per issue?
 

Jer

Legend
Supporter
There are threats concerning how people could use AI*, just like there are threats from nuclear weapons, biological warfare, whatever the heck was going on with weird pop trends like the ice bucket challenge. But robots taking over ala Terminator? Lower down on my list.
Yup. Robots taking over ala Terminator is way down my list. But a corporation/government deciding to let robots go out and make decisions based on flawed AI programming is much higher up on my list. I'm not worried about Boston Dynamics robots that are being used by police forces deciding that they can do better than humans and rising up to take over. But I am worried that BD robots used by police forces might start misidentifying kids running towards them as threats and respond as if they're being attacked. It's the misuse of tech by people that worries me far more than the tech itself becoming a threat.
 

Blue

Ravenous Bugblatter Beast of Traal
I mean they could, if they hired people to do it for them and built one from scratch without using any other materials and were willing to spend beaucoup bucks hiring people in developing countries to do the kind of reinforcement training that companies like OpenAI have done. Those image models and LLMs don't come cheap and require a lot of manual labor to build and adjust. Lots of people looking at the output and correcting it and feeding it back into the system.
I'm not really sure where you are getting this, but hobbyists literally build their own models all the time. You can train them on a home PC with a decent graphics card, or cheaply rent some AWS or other cloud compute. Amateurs have been doing this with Stable Diffusion for well over a year; it's old technology as far as AI art is concerned.
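For reference, a minimal sketch of loading a pretrained Stable Diffusion checkpoint locally with Hugging Face's diffusers library; the model ID and the CUDA GPU are assumptions, and hobbyist fine-tuning (e.g., the LoRA/DreamBooth example scripts that ship with diffusers) typically starts from a checkpoint like this rather than training from scratch:

```python
import torch
from diffusers import StableDiffusionPipeline

# Download (or load from cache) a pretrained checkpoint and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate a single image from a text prompt on consumer hardware.
image = pipe("a ruined wizard's tower at dusk, oil painting").images[0]
image.save("tower.png")
```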
 

Jer

Legend
Supporter
It is "intent". Because if the AI can "understand" context and pursue a goal, then that is teleological intent.

It is especially intent when it isn't a subroutine that specifically changes hands, but rather an ability to detect any patterns, then select and consistently maintain a pattern, while discerning which data is inconsistent with this pattern (such as any kind of hallucination, including visual hallucinations about hands).
I don't even think we can have a discussion about this because you're using words in ways that they aren't really used in the field.

Sure if you want to go down that route then it has "intent". So does a mechanical thermostat that turns the A/C on when it gets too hot and the heat on when it gets too cold. I'm also not worried about the technology in my thermostat suddenly deciding to murder me in my sleep because it's decided it doesn't need people anymore. Because it is limited in the kinds of goals it can pursue.

And these models are the same way. They're limited in the goals they can pursue - they learn a function from data and maximize the correctness of their output according to that function. They cannot "decide" to do anything else any more than my thermostat can decide it wants to learn how to be an accountant. All they can do is "decide" what word comes next or what pixel should go where based on the statistics they've learned.
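A minimal sketch of what that next-word "decision" looks like in code, assuming the Hugging Face transformers library and the public gpt2 weights (neither is mentioned in the thread; this is purely illustrative):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The dragon opened its eyes and"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# The model's entire "decision" is this probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for token_id, p in zip(top.indices.tolist(), top.values.tolist()):
    print(f"{tokenizer.decode([token_id])!r}: {p:.3f}")
```

Generation is just that step repeated: pick a token from the distribution, append it, and ask again.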
 

Jer

Legend
Supporter
I'm not really sure where you are getting this, but hobbyists literally build their own models all the time. You can train them on a home PC with a decent graphics card, or cheaply rent some AWS or other cloud. This has been being done by amateurs using stable diffusion for well over a year, it's old technology as far as AI art is concerned.
On the art side sure - on the LLM side you need a lot more text to make it work. I was thinking more on the LLM side than on the art side. And even on the art side what the hobbyists are doing generally starts from open sourced models NOT from scratch, which is what the question was about - if you're starting completely from scratch with just your own art you need a lot more art than if you're adjusting an existing model.
 

Yaarel

He Mage
So there is something that it is like to be the AI? Like it has an internal subjective experience of intentionality?

But, also, what Jer is saying is that the intent still belongs to the humans that create the AI - it is an expression of human intentionality, it doesn't have any of its own.
Today, the AI lacks "intent". But all of the big AI creators are working on it.

This has nothing to do with subjective "experience". But it does have to do with subjective data processing.

Right now, the things one might expect a "computer" to be bad at, like creativity and reinvention, the AI is actually pretty good at. But the things one might expect it to be good at, like repeating a list or answering a math equation, the AI is actually bad at.

Bridging these two relates to understanding context, which is part of the goal of maintaining an "intention".
 

Yaarel

He Mage
I don't even think we can have a discussion about this because you're using words in ways that they aren't really used in the field.

Sure if you want to go down that route then it has "intent". So does a mechanical thermostat that turns the A/C on when it gets too hot and the heat on when it gets too cold. I'm also not worried about the technology in my thermostat suddenly deciding to murder me in my sleep because it's decided it doesn't need people anymore. Because it is limited in the kinds of goals it can pursue.
These terms like "teleological" are used in the philosophical areas of AI research.

Even decades ago, there were researchers who asserted that a "thermostat" is conscious. Pure animism.



And these models are the same way. They're limited in the goals they can pursue - they learn a function from data and maximize the correctness of their output according to that function. They cannot "decide" to do anything else any more than my thermostat can decide it wants to learn how to be an accountant. All they can do is "decide" what word comes next or what pixel should go where based on the statistics they've learned.
I agree about the limitations of Dall-E, ChatGPT, etcetera.

At the same time, it is absurd to talk about AI while only referring to what exists today in the marketplace.
 

Jer

Legend
Supporter
These terms like "teleological" are used in the philosophical areas of AI research.

Even decades ago, there were researchers who asserted that a "thermostat" is conscious. Pure animism.
I know what teleological means. I mean you're not using "intent" in the way the folks in the field mean when they are discussing non-trivial examples of it.

Yes - a thermostat can be used as an example of an "agent" and it is used in textbooks as a classic example of agent-based AI. But if you're boiling down intent to just teleological intent you're trivializing the problem to a degree that isn't useful in a discussion like this as shown by the thermostat example. The intent you're talking about in a neural network is the reflex intent of a thermostat towards goal maximization. That's all it is. And it's no more complex than what goes on in a mechanical thermostat. As such it's not really a useful tool to describe a concern that AIs might decide they don't need humans anymore and start working to get rid of us for the exact same reason I don't need to worry about my thermostat.
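To make the comparison concrete, here's that textbook-style reflex thermostat as code; the temperatures and deadband are made-up numbers:

```python
def thermostat_agent(temperature_f: float, target_f: float = 70.0, deadband_f: float = 2.0) -> str:
    """A simple reflex agent: its entire 'intent' is one fixed condition-action rule."""
    if temperature_f < target_f - deadband_f:
        return "heat_on"
    if temperature_f > target_f + deadband_f:
        return "cool_on"
    return "off"

for reading in (65.0, 70.0, 76.5):
    print(reading, "->", thermostat_agent(reading))
```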
 

Raiztt

Adventurer
These terms like "teleological" are used in the philosophical areas of AI research.

Even decades ago, there were researchers who asserted that a "thermostat" is conscious. Pure animism.
You mean panpsychism, not animism, and that conclusion is for a radically different reason than you are implying. The panpsychist might say that the thermostat is conscious, not because of some internal mechanistic process that it has achieved, but because they argue that literally everything has some degree of preconsciousness - including atoms.

They are not in the "consciousness can be created" camp - they are in the "everything is always already conscious" camp. For a popular-level work on this, see Philip Goff's "Galileo's Error".
I agree about the limitations of Dall-E, ChatGPT, etcetera.

At the same time, it is absurd to talk about AI while only referring to what exists today in the marketplace.
I don't know what this means - you cannot borrow from the future. The only thing we have to discuss is what exists today.
 
