RPG Evolution: Hasbro's AI Plans

We can make some educated guesses about Hasbro's AI plans thanks to a recent interview with CEO Chris Cocks.

Picture courtesy of Pixabay.

Not surprisingly, Large Language Model (LLM) Artificial Intelligence (AI) is part of every business's plans, and Hasbro is no different. The question is how the company plans to use it ethically in light of several missteps in which Wizards of the Coast, the Hasbro division overseeing Dungeons & Dragons, failed to disclose that AI was involved in certain pieces of art. The ongoing controversies were enough to make WOTC update its AI policy.

An AI Product Every Two to Three Months

That hasn't stopped former WOTC CEO and current Hasbro CEO Chris Cocks from expounding on his plans for AI:
...we’re trying to do a new AI product experiment once every two to three months. That’s tending to be more game-focused for us, a little more gamified. We’re trying to keep it toward older audiences, to make sure all the content is appropriate...You’ll see more of how we’re thinking about how we can integrate AI, how we can integrate digital with physical gaming over time...I think most major entertainment and IP holders are at least thinking about it.
What Cocks is talking about is how LLM AIs are sourced. The LLM controversies revolve around, among other things, the fact that the AIs are trained on content without the owners' permission. In other words, although LLMs are often trained on publicly available content, the users sharing that content never imagined a robot would be hoovering up their words to make money for someone else. The through line is easier to detect with art (though, as the above controversies show, harder to prove); when it comes to text, user-generated content on sites like Reddit is invaluable. These AIs are only as valuable as the content they have at their disposal to train on. This is why Poe.com and other customizable AIs, trained on your own content, can be so useful to Dungeon Masters who want a true assistant that can sort through decades of homebrew content in seconds. I'll discuss using Poe.com in a future article.
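To make that concrete, here's a minimal sketch in Python of the retrieval idea behind such assistants: index a folder of homebrew notes, then surface the files most relevant to a question. The folder name and query are hypothetical, and a real service like Poe.com would use far more sophisticated embedding search than TF-IDF, but the principle is the same: the assistant is only as good as the content you feed it.

```python
# A minimal sketch of searching your own homebrew notes, using
# scikit-learn's TF-IDF as a stand-in for real embedding search.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def build_index(notes_dir: str):
    """Read every homebrew text file and build a TF-IDF index over it."""
    docs = [(p.name, p.read_text(encoding="utf-8"))
            for p in Path(notes_dir).glob("*.txt")]
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(text for _, text in docs)
    return docs, vectorizer, matrix


def search(query: str, docs, vectorizer, matrix, top_k: int = 3):
    """Return the top_k notes most similar to the DM's question."""
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    ranked = sorted(zip(docs, scores), key=lambda x: x[1], reverse=True)
    return [(name, score) for (name, _), score in ranked[:top_k]]


# "homebrew_notes" and the query are hypothetical examples.
docs, vec, mat = build_index("homebrew_notes")
for name, score in search("the lich's phylactery in the sunken keep", docs, vec, mat):
    print(f"{score:.2f}  {name}")
```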

Respecting Creators, Works of Art, and Ownership

Cocks is keenly aware of AI's controversies, both with the Open Game License and with AI-generated art:
We certainly weren’t at our best during some points on the Open Game License. But I think we learned pretty fast. We got back to first principles pretty quickly ... The key there is the responsible use of it. We have an even higher bar we need to hit because we serve audiences of all ages. We go from preschoolers on up to adulthood. I don’t think we can be very cavalier in how we think about AI...That said, it’s exciting. There’s a lot of potential for delighting audiences. We need to make sure that we do it in a way that respects the creators we work with, respects their works of art, respects their ownership of those works, and also creates a fun and safe environment for kids who might use it.
And now we come to it. How would WOTC and Hasbro use AI in a way that respects creators, their work, and their ownership, while still being fun to use?

How Might WOTC Use AI for D&D?

Cocks gives us some hints in his answers:
The 20-plus years that the Open Game License has been in existence for something like D&D, I think that gives us a lot of experience to navigate what will emerge with AI, and just generally the development of user-based content platforms, whether it’s Roblox or Minecraft or what Epic has up their sleeves.
The Open Game License (OGL), by its very nature, is meant to be used in much the same way LLMs try to use the entirety of the Internet. What was likely a thorn in the side of lawyers may well look like an opportunity now. Unlike the Internet, though, the OGL has a framework for sharing -- even if its creators never envisioned sharing with a machine. More to the point, everyone using the Open Game License is potentially adding to LLM content; databases of OGL content in wiki format are just more fodder for LLMs to learn from. WOTC could certainly leverage that content to train an AI on Dungeons & Dragons just as much as anyone else if it so chose; however, a large company using OGL content to fuel its AI doesn't seem like respecting its creators and their ownership.
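If a company did want to gate training data on licensing, the mechanics could be as simple as tagging every document with its license and filtering before anything enters the corpus. This is purely a hypothetical sketch -- the tags and record layout are illustrative, not any real WOTC pipeline:

```python
# Hypothetical sketch: only admit documents with an explicitly
# permitted license tag into a training corpus.
ALLOWED_LICENSES = {"ogl-1.0a", "company-owned", "cc-by-4.0"}

documents = [
    {"title": "SRD 5.1 excerpt", "license": "ogl-1.0a", "text": "..."},
    {"title": "Fan wiki homebrew", "license": "unknown", "text": "..."},
    {"title": "In-house adventure", "license": "company-owned", "text": "..."},
]

corpus = [d for d in documents if d["license"] in ALLOWED_LICENSES]
skipped = [d["title"] for d in documents if d["license"] not in ALLOWED_LICENSES]
print(f"kept {len(corpus)} docs; skipped: {skipped}")
```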

So it's possible WOTC may not use OGL content at all to train its AI. They don't need it -- there's plenty of content the company can leverage from its own vaults:
The advantage we have ... This is cutting-edge technology, and Hasbro is a 100-year-old company, which you don’t usually think is ... a threat ... But when you talk about the richness of the lore and the depth of the brands–D&D has 50 years of content that we can mine. Literally thousands of adventures that we’ve created, probably tens of millions of words we own and can leverage. Magic: The Gathering has been around for 35 years, more than 15,000 cards we can use in something like that. Peppa Pig has been around for 20 years and has hundreds of thousands of hours of published content we can leverage. Transformers, I’ve been watching Transformers TV shows since I was a kid in Cincinnati in the early ‘80s. We can leverage all of that to be able to build very interesting and compelling use cases for AI that can bring our characters to life. We can build tools that aid in content creation for users or create really interesting gamified scenarios around them.
The specific reference to 35 years of Magic: The Gathering content "that we can leverage" has a precedent. WOTC's predecessor, TSR, churned out the Spellfire card game in response to Magic: The Gathering (before WOTC took over D&D), relying heavily on what was then TSR's 20 years of art archives. One can easily imagine AI generating this type of game with art WOTC owns in a very short period of time.

But Cocks is thinking bigger than that for Dungeons & Dragons. He explains how he uses AI with D&D specifically:
I use AI in building out my D&D campaigns. I play D&D three or four times a month with my friends. I’m horrible at art. I don’t commercialize anything I do. It doesn’t have anything to do with work. But what I’m able to accomplish with the Bing image creator, or talking to ChatGPT, it really delights my middle-aged friends when I do a Roll20 campaign or a D&D Beyond campaign and I put some PowerPoints together on a TV and call it an interactive map.
In the future, WOTC could easily change their contracts to explicitly state that any art they commission may be used to train a future AI (if they don't already). For content they already own -- and WOTC owns decades of art created for Magic: The Gathering -- they may already be within their rights to do this.

Add all this up, and companies like Hasbro are looking at their archives of information -- be it text, graphics, or examples of play -- as a competitive advantage for training their AIs in a way their rivals can't.

The Inevitable

In short, it's not a question of if WOTC and Hasbro are going to use AI, just when. And by all indications, that future will involve databases of content that are either clearly open source or owned by Hasbro, with LLMs doing the heavy lifting on the creative side of gaming that was once filled by other gamers. For Dungeons & Dragons in particular, the challenge in getting a game started has always been finding a Dungeon Master, a tough role for any gamer to fill and the linchpin of every successful D&D campaign. With D&D Beyond now firmly in WOTC's grasp, the company could easily provide an AI platform on that service, using data from the thousands of players there to refine its models and teach it to be a better DM. Give it enough time, and it may well be a resource for players who want a DM but can't find one.
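What might such a platform look like under the hood? Here's a minimal sketch of an AI DM loop, assuming an OpenAI-style chat API; the model name and system prompt are my own illustrative choices, and nothing here reflects an actual D&D Beyond feature:

```python
# A minimal sketch of an "AI Dungeon Master" chat loop.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

history = [{
    "role": "system",
    "content": ("You are a Dungeon Master running a D&D 5e adventure. "
                "Describe scenes vividly, ask the players what they do, "
                "and resolve their actions using the rules."),
}]

while True:
    player = input("Player> ")
    if player.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": player})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=history,     # full history so the DM keeps context
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"DM> {answer}")
```

Everything a real product would add -- dice mechanics, persistent campaign state, safety filters for younger players -- would layer on top of a loop like this.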

We can't know for sure what WOTC or Hasbro has planned. But Cocks makes it clear AI is part of Hasbro’s future:
While there are definitely areas of concern that we have to be watchful for, and there are definitely elements to the chess game that we have to think about before we move, it’s a really cool technology that has a lot of playfulness associated with it. If we can figure out how to harness it the right way, it’ll end up being a boon for users.
In three to five years, we might have officially sanctioned AI Dungeon Masters. Doesn't seem realistic? Unofficial versions are already here.
 


Michael Tresca

Jer

Legend
Supporter
Yes. Even about a decade ago, some people in AI research were saying there was no such thing as AI, and it was a flawed and failed endeavor.

But the technology ACCELERATES. There is no steady state of linear progression.
No, what I'm saying is that nobody is actually researching giving AI models "intent". It's not a thing that anyone is throwing money at. Even the folks who are trying to build a "superintelligent AI" like the folks at OpenAI don't seem to be doing any practical research in that area, instead operating on the (flawed) assumption that if you build a big enough neural network and train it on enough text then superintelligence will come from it as an emergent property. Which is a religious/philosophical belief not a scientific one. And even they aren't working on "intent" but just seem to assume that intention is some property of superintelligence. The AI researcher Yann LeCun summed it up well when he said that LLMs hold a lot of knowledge but they don't have any intelligence themselves.
 


Oofta

Legend
The AI "intent" will happen soon enough. Currently, there is a race for AI that can write computer codes. This will accelerate the process of creating humanlike simulations, including "understanding" context and pursuing goals.
It really depends on who you ask. For some, it's right around the corner. For others it's like fusion, the miracle energy source that's just 5-10 years away and always will be.

I see no steps to go from an LLM to a general AI; LLMs are just very advanced autocomplete functions. I was just reading an article talking about how the "emergent" nature of LLMs is just a mirage caused by how we measure it, and it's really just a logical, steady progression. There are a lot of people who see what they want to see, not what's actually happening.
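For what it's worth, the "advanced autocomplete" framing is easy to demonstrate: a language model just keeps predicting the next token given everything so far. A toy illustration using the Hugging Face transformers library and the small GPT-2 model (my choice for the example, not anything from the thread):

```python
# Toy demo of the "autocomplete" framing: the model extends a prompt
# one predicted token at a time. Requires the transformers package.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The party entered the dungeon and"
out = generator(prompt, max_new_tokens=20, do_sample=True)
print(out[0]["generated_text"])
```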
 

Yaarel

He Mage
No, what I'm saying is that nobody is actually researching giving AI models "intent".
What are you talking about? Right now, figuring out a way for the generative AI to "understand" context is a big effort among all the AI creators.

Visually, to even recognize what a human hand should look like, is part of this effort. Likewise the effort to recognize and prevent "hallucinations" is part of this effort to code intentionality.
 

Raiztt

Adventurer
AI players as a playtesting tool would be of some use to a GM as well.

I think that people that can't think beyond "AI bad" are going to have a tough next decade or so.
Can always cross our fingers and bank on a civilization resetting catastrophe. We've got war, environmental, and economic problems that could really come in and save us.
 

Raiztt

Adventurer
What are you talking about? Right now, figuring out a way for the generative AI to "understand" context is a big effort among all the AI creators.

Visually, to even recognize what a human hand should look like, is part of this effort. Likewise the effort to recognize and prevent "hallucinations" is part of this effort to code intentionality.
Intentionality is a feature of consciousness, as is understanding. Whether or not it can be simulated is a different question from whether or not it can be created.
 

Jer

Legend
Supporter
What are you talking about? Right now, figuring out a way for the generative AI to "understand" context is a big effort among all the AI creators.

Visually, to even recognize what a human hand should look like, is part of this effort. Likewise the effort to recognize and prevent "hallucinations" is part of this effort to code intentionality.
That's not intent. That's building better models that can incorporate context better.

The model builder in what you're describing has the intent - make the model better at creating hands. They do that through selection of training data, changing model topologies, lots of reinforcement feedback from humans reviewing the outputs and telling the system "no" or "yes", etc. The only thing the system can do is what it's designed to do - produce images like the ones its been trained on.

That kind of model is never going to decide that it "doesn't need humans" because it's not part of the model. And no model built in that kind of way is going to be capable of taking that decision either because there's no way to get "intent" from that model.
 

Oofta

Legend
No, what I'm saying is that nobody is actually researching giving AI models "intent". It's not a thing that anyone is throwing money at. Even the folks who are trying to build a "superintelligent AI" like the folks at OpenAI don't seem to be doing any practical research in that area, instead operating on the (flawed) assumption that if you build a big enough neural network and train it on enough text then superintelligence will come from it as an emergent property. Which is a religious/philosophical belief not a scientific one. And even they aren't working on "intent" but just seem to assume that intention is some property of superintelligence. The AI researcher Yann LeCun summed it up well when he said that LLMs hold a lot of knowledge but they don't have any intelligence themselves.

Even assuming an emergent intelligence formed (which I agree has no scientific basis) out of a neural network as we currently use them, there's no knowing what kind of intelligence it would be. There's no reason to believe it would have biological drives to procreate or compete. There's no reason to even think it would care about its ongoing existence.

There are threats concerning how people could use AI*, just like there are threats from nuclear weapons, biological warfare, whatever the heck was going on with weird pop trends like the ice bucket challenge. But robots taking over à la Terminator? Lower down on my list.

*I even dislike calling what we have "AI" because while it's artificial, it's not intelligent.
 

Yaarel

He Mage
That's not intent. That's building better models that can incorporate context better.

The model builder in what you're describing has the intent - make the model better at creating hands. They do that through selection of training data, changing model topologies, lots of reinforcement feedback from humans reviewing the outputs and telling the system "no" or "yes", etc. The only thing the system can do is what it's designed to do - produce images like the ones its been trained on.

That kind of model is never going to decide that it "doesn't need humans" because it's not part of the model. And no model built in that kind of way is going to be capable of taking that decision either because there's no way to get "intent" from that model.
It is "intent". Because if the AI can "understand" context and pursue a goal, then that is teleological intent.

It is especially intent when it isn't a subroutine that specifically changes hands, but rather an ability to detect any patterns, then select and consistently maintain a pattern, while discerning which data is inconsistent with this pattern (such as any kind of hallucination, including visual hallucinations about hands).
 


Blue

Ravenous Bugblatter Beast of Traal
I have used some AI tools like ChatGPT and MidJourney for generating quick ideas. It can be useful for some basic things, but my experience has proven that they take a lot of effort for what little they give me.

Would I use an AI tool built with the content WotC owns? Maybe, I don't know how I feel about that yet. Would I pay for it? Less likely.
Would I buy content made with AI from WotC? 100% no. I want art made by humans.

Let's see how this develops.
I've found the exact opposite, but it's likely differences in how we use it. I can use it to generate a large list of filler NPCs at a ball with quirks, or shops with names, or other time-consuming tasks, filter through it quickly to keep the top 25%, and have a heck of a lot of non-rewarding work done quickly so that I can spend my prep time on things that human creativity is good at* and I enjoy.

I think we all need to find out "is this a hammer, is this a wrench" - what the tool does well so we can use it correctly.

* Not saying that the LLM can't stitch together things in unexpected and "creative" ways, just that those tasks are ones I do well and find enjoyable.
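For the curious, the workflow described above is easy to script. A rough sketch assuming an OpenAI-style chat API; the prompt, model name, and the manual "keep the best ones" step are all illustrative:

```python
# Rough sketch of batch-generating filler NPCs, then hand-picking
# the keepers. Assumes the openai package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": ("Generate 20 one-line filler NPCs for a masquerade "
                    "ball: name, occupation, and one memorable quirk each."),
    }],
)
npcs = [line for line in resp.choices[0].message.content.splitlines() if line.strip()]

# The human filtering step: skim the list, keep roughly the top 25%.
for i, npc in enumerate(npcs, 1):
    print(i, npc)
keep = input("Numbers to keep (comma-separated)> ")
shortlist = [npcs[int(n) - 1] for n in keep.split(",") if n.strip()]
print("\n".join(shortlist))
```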
 
