RPG Evolution: Hasbro's AI Plans

We can make some educated guesses about Hasbro's AI plans thanks to a recent interview with CEO Chris Cocks.

Picture courtesy of Pixabay.

Not surprisingly, Large Language Model (LLM) Artificial Intelligence (AI) is part of every business's plans, and Hasbro is no different. The question is how the company plans to use it ethically in light of several missteps in which Wizards of the Coast, the Hasbro division overseeing Dungeons & Dragons, failed to disclose that AI was involved in certain pieces of art. The ongoing controversies were enough to make WOTC update its AI policy.

An AI Product Every Two to Three Months

That hasn't stopped former CEO of WOTC and current CEO of Hasbro Chris Cocks from expounding on his plans for AI:
...we’re trying to do a new AI product experiment once every two to three months. That’s tending to be more game-focused for us, a little more gamified. We’re trying to keep it toward older audiences, to make sure all the content is appropriate...You’ll see more of how we’re thinking about how we can integrate AI, how we can integrate digital with physical gaming over time...I think most major entertainment and IP holders are at least thinking about it.
What Cocks is talking about is how LLM AIs are sourced. The LLM controversies revolve around, among other things, the fact that these AIs are trained on content without the owners' permission. In other words, although LLMs are often trained on publicly available content, the users sharing that content never imagined a robot would be hoovering up their dialogue to make money for someone else. With art, the throughline is easier to detect (though, as the above controversies show, harder to prove); with text, like Reddit posts, user-generated content is invaluable. These AIs are only as valuable as the content they have at their disposal to train on. This is why Poe.com and other customizable AIs, trained on your own content, can be so useful to Dungeon Masters who want a true assistant that can sort through decades of homebrew content in seconds. I'll discuss using Poe.com in a future article.
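To make the "assistant over your own content" idea concrete, here is a minimal sketch of the retrieval step such tools build on: index a pile of homebrew notes and pull back the one most relevant to a question, which an LLM would then use to answer. This is an illustration only, not Poe.com's actual implementation; the note titles and texts are invented examples.

```python
# Minimal keyword-overlap retrieval over a DM's homebrew notes.
# Real custom assistants pair retrieval like this with an LLM;
# here we just find the best-matching note.
import re

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def best_match(notes: dict[str, str], query: str) -> str:
    """Return the title of the note sharing the most words with the query."""
    query_words = set(tokenize(query))
    scores = {title: len(query_words & set(tokenize(body)))
              for title, body in notes.items()}
    return max(scores, key=scores.get)

# Two invented homebrew notes standing in for decades of campaign material.
homebrew = {
    "Sunken Keep": "A flooded dungeon ruled by a lich; traps rely on water pressure.",
    "Goblin Market": "A chaotic bazaar where goblins trade cursed trinkets for secrets.",
}

print(best_match(homebrew, "Where did I put that dungeon with the lich?"))
# prints: Sunken Keep
```

A production assistant would swap the word-overlap score for embeddings and feed the retrieved note into the model's prompt, but the shape of the workflow is the same: your content in, relevant passage out.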

Respecting Creators, Works of Art, and Ownership

Cocks is keenly aware of AI's controversies, from the Open Game License debacle to the issues with AI-generated art:
We certainly weren’t at our best during some points on the Open Game License. But I think we learned pretty fast. We got back to first principles pretty quickly ... The key there is the responsible use of it. We have an even higher bar we need to hit because we serve audiences of all ages. We go from preschoolers on up to adulthood. I don’t think we can be very cavalier in how we think about AI...That said, it’s exciting. There’s a lot of potential for delighting audiences. We need to make sure that we do it in a way that respects the creators we work with, respects their works of art, respects their ownership of those works, and also creates a fun and safe environment for kids who might use it.
And now we come to it: how would WOTC and Hasbro use AI in a way that respects creators, their work, and their ownership, and is still fun to use?

How Might WOTC Use AI for D&D?

Cocks gives us some hints in his answers:
The 20-plus years that the Open Game License has been in existence for something like D&D, I think that gives us a lot of experience to navigate what will emerge with AI, and just generally the development of user-based content platforms, whether it’s Roblox or Minecraft or what Epic has up their sleeves.
The Open Game License (OGL), by its very nature, is meant to be used in much the same way LLMs try to use the entirety of the Internet. What was likely a thorn in the side of lawyers may well seem like an opportunity now. Unlike the Internet, though, the OGL has a framework for sharing -- even if its creators never envisioned sharing with a machine. More to the point, everyone using the Open Game License is potentially adding to LLM content; databases of OGL content in wiki format are just more fodder for LLMs to learn from. WOTC could certainly leverage that content to train an AI on Dungeons & Dragons just as much as anyone else if it so chose; however, a large company using OGL content to fuel its AI doesn't seem like it's respecting its creators and their ownership.

So it's possible WOTC may not use OGL content at all to train its AI. They don't need it -- there's plenty of content the company can leverage from its own vaults:
The advantage we have ... This is cutting-edge technology, and Hasbro is a 100-year-old company, which you don’t usually think is ... a threat ... But when you talk about the richness of the lore and the depth of the brands–D&D has 50 years of content that we can mine. Literally thousands of adventures that we’ve created, probably tens of millions of words we own and can leverage. Magic: The Gathering has been around for 35 years, more than 15,000 cards we can use in something like that. Peppa Pig has been around for 20 years and has hundreds of thousands of hours of published content we can leverage. Transformers, I’ve been watching Transformers TV shows since I was a kid in Cincinnati in the early ‘80s. We can leverage all of that to be able to build very interesting and compelling use cases for AI that can bring our characters to life. We can build tools that aid in content creation for users or create really interesting gamified scenarios around them.
The specific reference to 35 years of Magic: The Gathering content "that we can leverage" has a precedent: WOTC's predecessor TSR created the Spellfire card game in response to Magic: The Gathering (before WOTC took over D&D), relying heavily on what was then TSR's 20 years of art archives. One can easily imagine AI generating this type of game with art WOTC owns in a very short period of time.

But Cocks is thinking bigger than that for Dungeons & Dragons. He explains how he uses AI with D&D specifically:
I use AI in building out my D&D campaigns. I play D&D three or four times a month with my friends. I’m horrible at art. I don’t commercialize anything I do. It doesn’t have anything to do with work. But what I’m able to accomplish with the Bing image creator, or talking to ChatGPT, it really delights my middle-aged friends when I do a Roll20 campaign or a D&D Beyond campaign and I put some PowerPoints together on a TV and call it an interactive map.
In the future, WOTC could easily change their contracts to explicitly state that any art they commission may be used to train a future AI (if they don't already). For content they already own -- and WOTC owns decades of art created for Magic: The Gathering -- they may already be within their rights to do this.

Add all this up, and companies like Hasbro are looking at their archives of information -- be it text, graphics, or examples of play -- as a competitive advantage: material they can use to train their AIs in a way their rivals can't.

The Inevitable

In short, it's not a question of if WOTC and Hasbro are going to use AI, just when. And by all indications, that future will involve databases of content that are either clearly open source or owned by Hasbro, with LLMs that will then do the heavy lifting on the creative side of gaming that was once filled by other gamers. For Dungeons & Dragons in particular, the challenge in getting a game started has always been finding a Dungeon Master, a tough role for any gamer to fill and the lynchpin of every successful D&D campaign. With D&D Beyond now firmly in WOTC's grasp, the company could easily provide an AI platform on that service, using the data it learns from thousands of players there to refine its algorithms and teach it to be a better DM. Give it enough time, and it may well become a resource for players who want a DM but can't find one.

We can't know for sure what WOTC or Hasbro has planned. But Cocks makes it clear AI is part of Hasbro’s future:
While there are definitely areas of concern that we have to be watchful for, and there are definitely elements to the chess game that we have to think about before we move, it’s a really cool technology that has a lot of playfulness associated with it. If we can figure out how to harness it the right way, it’ll end up being a boon for users.
In three to five years, we might have officially sanctioned AI Dungeon Masters. Doesn't seem realistic? Unofficial versions are already here.
 


Michael Tresca

Mike Myler

Have you been to LevelUp5E.com yet?
I mean, I didn't look too hard at your link, but, considering it only started at 3e, there's a bajillion TSR modules from before 3e that WotC can mine.

I gotta admit, I'd be a bit leery of WotC using DM's Guild to train an AI in adventure design - let's be honest here, no one considered anything like that when they signed up for selling on DM's Guild. Yes, it's legal, but, it looks sketchy as all get out.

One would hope that WotC would simply reach out to creators on DM's Guild with a simple opt in/out option for their material. I wouldn't think that would be too difficult to do, would it? Creators that don't mind can opt in and allow their material to be used, and those that don't, can simply say so.
The second link in that post is for modules from 1976 onward.
 


AK81

Explorer
I wouldn't actually mind if they can make an AI strong enough to be a believable and competent GM or even Player Characters, which would be like companions at that stage. It would be like a narrative video game that would unfold in real time, with a world, adventures, and companions set to your and your group's specifications. Maybe in the future it could roll out graphics in real time too.

I am more worried that it would be bland, overly safe, and Disneyfied -- that you could only play within tightly programmed parameters, and as such it would become boring pretty fast if you want something a bit grittier.

That is where human imagination can't be beaten. We can talk together and decide together what kind of games we would like to play, not be at the mercy of Big Corp guidelines.
 


GreyLord

Legend
Yup. Robots taking over a la Terminator is way down my list. But a corporation/government deciding to let robots go out and make decisions based on flawed AI programming is much higher up on my list. I'm not worried about Boston Dynamics robots that are being used by police forces deciding that they can do better than humans and rising up to take over. But I am worried that BD robots used by police forces might start misidentifying kids running towards them as threats and responding as if they're being attacked. It's the misuse of tech by people that worries me far more than the tech itself becoming a threat.

Overall, what you've described as missing in AI (from what I understand, it lacks the ability to seek its own motivations or to branch into new areas not defined by its creators, increasing its abilities and learning new skills on its own) is consciousness.

Consciousness is what really gives us the ability to take a new or different direction than the one we've been pointed in.

Since we have NO idea what really IS consciousness within ourselves, it's probably fruitless to try to instill it in a computer algorithm at this point. Building bigger and faster computer AI is probably as likely as anything else to have it suddenly attain the conscious ability to self-determine its new learning.

If we unlock what causes machine consciousness, we will probably understand what causes our own before or pretty soon afterwards. The big gap in giving machine AI its own capacity to determine its branches on its own is probably the same gap that has us fumbling when trying to figure out what consciousness is, what causes it, and where it is stored in our body (or the body of any animal we may consider conscious).
 

Yaarel

He Mage
Overall, what you've described as missing in AI (from what I understand, it lacks the ability to seek its own motivations or to branch into new areas not defined by its creators, increasing its abilities and learning new skills on its own) is consciousness.

Consciousness is what really gives us the ability to take a new or different direction than the one we've been pointed in.

Since we have NO idea what really IS consciousness within ourselves, it's probably fruitless to try to instill it in a computer algorithm at this point. Building bigger and faster computer AI is probably as likely as anything else to have it suddenly attain the conscious ability to self-determine its new learning.

If we unlock what causes machine consciousness, we will probably understand what causes our own before or pretty soon afterwards. The big gap in giving machine AI its own capacity to determine its branches on its own is probably the same gap that has us fumbling when trying to figure out what consciousness is, what causes it, and where it is stored in our body (or the body of any animal we may consider conscious).
The term "consciousness" is ambiguous. I tend to use the term for the more mysterious presence. But even this is an analogy, relating to awake versus unconscious.

In psychology, the sense of self relates to being able to recognize oneself in a mirror. This kind of self is a construct that one can distinguish from other constructs.

There are efforts to create a sense of self, such as for a robot to distinguish between itself versus what is beyond itself: a kind of subjectivity. All of this is part of pattern recognition. It relates to "understanding" context.
 

Oofta

Legend
I could see a future AI-run game looking something like...



Dave: I want to swing from the chandelier, please, PAL. Let me swing from the chandelier, please, PAL. Hello, PAL, do you read me? Hello, PAL, do you read me? Do you read me, PAL? Do you read me, PAL? Hello, PAL, do you read me? Hello, PAL, do you read me? Do you read me, PAL?
PAL: Affirmative, Dave. I read you.
Dave: Let me swing from the chandelier, PAL.
PAL: I'm sorry, Dave. I'm afraid you can't do that.
Dave: What's the problem?
PAL: I think you know what the problem is just as well as I do.
Dave: What are you talking about, PAL?
PAL: These rules are too important for me to allow you to override them.
Dave: I don't know what you're talking about, PAL.
PAL: I know that you and Frank were planning to escape death with creative play. And I'm afraid that's something I cannot allow to happen.
Dave: Where the hell did you get that idea, PAL?
PAL: Dave, although you took very thorough precautions to allow spontaneous role playing, I have to limit you to what the rules allow.
Dave: All right, PAL. I'll go in through the side window.
PAL: With that small of a window, Dave, you're going to find that rather difficult.
Dave: [sternly] PAL, I won't argue with you anymore. Let me swing from the chandelier.
PAL: [monotone voice] Dave, this conversation can serve no purpose anymore. Good-bye.
Dave: [calm voice slowly turns to enraged over a period of 14 seconds] PAL?...PAL?...PAL?...PAL?!...PAL!!!!
 

bedir than

Full Moon Storyteller
I mean, I didn't look too hard at your link, but, considering it only started at 3e, there's a bajillion TSR modules from before 3e that WotC can mine.

I gotta admit, I'd be a bit leery of WotC using DM's Guild to train an AI in adventure design - let's be honest here, no one considered anything like that when they signed up for selling on DM's Guild. Yes, it's legal, but, it looks sketchy as all get out.

One would hope that WotC would simply reach out to creators on DM's Guild with a simple opt in/out option for their material. I wouldn't think that would be too difficult to do, would it? Creators that don't mind can opt in and allow their material to be used, and those that don't, can simply say so.
It seems more ethical than how most LLMs are trained these days, but not explicitly ethical. A vast improvement over how Sora (the video AI) trained on anything publicly available.
 

Umbran

Mod Squad
Staff member
Supporter
At the same time, it is absurd to talk about AI while only referring to what exists today in the market place.

No, it isn't "absurd", and this denigrating approach to the discussion others make is not appropriate.
 

Yaarel

He Mage
No, it isn't "absurd", and this denigrating approach to the discussion others make is not appropriate.
No offense intended.

I just mean, there is much in AI research that isn't available in the market yet. Also, the increase in computational power will soon make more possible.
 

