
RPG Evolution: The AI DM in Action

How might WOTC launch an AI-powered DM assistant?


Picture courtesy of Pixabay.

We know Wizards of the Coast is tinkering with Artificial Intelligence (AI)-powered tools for its multiple properties, including Dungeons & Dragons. But what might that look like in practice?

Interactive NPCs

Large Language Model (LLM) AIs have been used extensively to create non-player characters of all stripes on Character.AI. It's not a stretch to imagine that Wizards might have official NPCs included as part of the digital purchase of an adventure, with the rough outline of the NPC acting as parameters for how it would interact. DMs might be able to create their own or modify existing NPCs so that the character drops hints or communicates in a certain way. Log outputs could then be available for DMs to use later.
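As a rough sketch of how such an NPC might be wired up (everything here -- the field names and the call_llm stand-in -- is hypothetical, not any actual WotC or Character.AI interface), the "rough outline" could amount to a system prompt plus a running log:

```python
import json
from dataclasses import dataclass, field

def call_llm(system: str, user: str) -> str:
    # Placeholder: swap in whatever LLM backend the DM actually uses.
    return f"(in character) ...responding to: {user[:40]}..."

@dataclass
class NPCProfile:
    """Rough outline of an NPC, acting as parameters for how it interacts."""
    name: str
    persona: str                    # personality and speech style
    hints: list[str]                # hints the DM wants dropped during play
    transcript: list[dict] = field(default_factory=list)

    def system_prompt(self) -> str:
        return (f"You are {self.name}, an NPC in a D&D adventure. "
                f"Persona: {self.persona}. "
                f"Work these hints in naturally: {', '.join(self.hints)}.")

    def talk(self, player_line: str) -> str:
        reply = call_llm(self.system_prompt(), player_line)
        self.transcript.append({"player": player_line, "npc": reply})
        return reply

    def save_log(self, path: str) -> None:
        """Log output the DM can review after the session."""
        with open(path, "w") as f:
            json.dump(self.transcript, f, indent=2)
```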

There are several places today where you can create NPC bots powered by AI that are publicly available, although the DM might need to monitor the output in real time to record the conversation. Character.AI and Poe.com both provide the ability to create publicly available characters that players can interact with.

Random Generators

There are already dozens of these in existence. What's particularly of note is that AI can go deep -- not just randomize what book is in a library, but provide snippets of the text inside that book. Not just detail the name of a forgotten magic item, but provide stats for the item. For WOTC products, this could cover details that no print product could possibly encompass, and do so within parameters (for example, only a library with books on necromancy).
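A bare-bones version of that kind of parameterized generator is easy to sketch; the book entries below are invented purely for illustration:

```python
import random

# Tiny library generator: each book carries a topic and a snippet of its text,
# so a DM can constrain the roll (e.g. "only books on necromancy").
BOOKS = [
    {"title": "Whispers Below the Barrow", "topic": "necromancy",
     "excerpt": "The third rite requires grave soil gathered at moonset..."},
    {"title": "A Treatise on Planar Tides", "topic": "planes",
     "excerpt": "The Astral Sea ebbs wherever mortal dreams run thin..."},
    {"title": "On the Binding of Restless Dead", "topic": "necromancy",
     "excerpt": "No ward holds a spirit that remembers its own name..."},
]

def random_book(topic: str | None = None) -> dict:
    """Pick a random book, optionally restricted to a single topic."""
    pool = [b for b in BOOKS if topic is None or b["topic"] == topic]
    return random.choice(pool)

print(random_book("necromancy"))
```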

AI RPG companion is a great example of this, but there are many more.

Tabletop Assistants

Hasbro recently partnered with Xplored, with the goal of developing a "new tabletop platform that integrates digital and physical play." Of particular note is how Xplored's technology works: its system "intelligently resolves rules and character behaviors, and provides innovative gameplay, new scenarios and ever-changing storytelling events. The technology allows players to learn by playing with no rulebook needed, save games to resume later, enables remote gameplay, and offers features like immersive contextual sound and connected dice."

If that sounds like it could be used to enhance an in-person Dungeons & Dragons game, Xplored is already on that path with Teburu, a digital board game platform that uses "smart-sensing technology, AI, and dynamic multimedia." Xplored's platform could keep track of miniatures on the table, dice rolls, and even the status of your character sheet, all managed invisibly behind the scenes by an AI that communicates with the (human) DM.
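Xplored hasn't published how Teburu represents any of this, so purely as a guess, the table state such a system relays to the DM might look something like the sketch below (all names are illustrative, not Xplored's API):

```python
from dataclasses import dataclass, field

@dataclass
class TableState:
    """Guess at the table state a Teburu-style platform might track."""
    mini_positions: dict[str, tuple[int, int]] = field(default_factory=dict)  # mini id -> grid square
    dice_rolls: list[tuple[str, int]] = field(default_factory=list)           # e.g. ("d20", 17)
    character_hp: dict[str, int] = field(default_factory=dict)                # character -> current HP

    def move_mini(self, mini_id: str, square: tuple[int, int]) -> None:
        self.mini_positions[mini_id] = square

    def record_roll(self, die: str, result: int) -> None:
        self.dice_rolls.append((die, result))

    def dm_summary(self) -> str:
        """What the AI might quietly relay to the (human) DM."""
        return (f"Minis: {self.mini_positions} | "
                f"Last rolls: {self.dice_rolls[-3:]} | HP: {self.character_hp}")
```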

Dungeon Master

And then there's the most challenging aspect of play, one WOTC struggles with to this day: having enough Dungeon Masters to go around. Wizards could exclusively license these automated DMs, which would come with all the materials necessary to run a game. Some adventures would be easier for an AI DM to run than others -- straightforward dungeon crawls necessarily limit player agency and keep the AI within its parameters, while a social encounter could easily confuse it.

Developers are already pushing this model with various levels of success. For an example, see AI Realm.

What's Next?

If Hasbro's current CEO and former WOTC CEO Chris Cocks is serious about AI, this is just a hint at what's possible. If the past battles over virtual tabletops are any indication, WOTC will likely take a twofold approach: ensure its AI is well-versed in how it engages with its adventures, and defend its branded properties against rival AI platforms that try to do the same thing. As Cocks pointed out in a recent interview, WOTC's advantage isn't in the technology itself but in its licenses, and it will likely all have a home on D&D Beyond. Get ready!
 


Michael Tresca

"AI Free" is going to become the next "Non-GMO" label slapped on things to make very shallow concern trolls feel better about their culpability in supporting terrible business practices in order to enjoy cheap goods. And like GMO Karen, AI Kyle doesn't actually know why he is mad, what the technology does or means, or anything else beyond "that thing bad."
This is such a funny "I'm an American and I can't imagine not being one" take.

It's also a straight up misunderstanding of what "concern troll" means. That's not what it means, @Reynard. A concern troll is someone who actually supports something (or at least in no way opposes it), but pretends to be concerned about it in order to cause a problem - typically to derail debate in a different direction:

"a person who disingenuously expresses concern about an issue with the intention of undermining or derailing genuine discussion."

Some might even suggest that bringing up general labour issues in art whilst making wild and sweeping assertions about art is exactly that kind of behaviour, which is beyond ironic in this context, though I doubt that was your intent.

And no, artists not being paid as much as they "should" be doesn't mean it's okay for them to be replaced entirely, and it's clear that, by and large, the public agrees.

But in many cases, commercial art is already exploitative.
Sure but that doesn't support your point at all. It just looks like the "concern trolling" you brought up. I'm not saying you are doing that - but that's exactly what concern trolling looks like! It looks like someone bringing up a separate issue that they pretend to care about, in order to distract from or derail discussion of a different subject.

The reality is, as much as "Marvel" (not exactly a great example, given they've been known for bad business practices since the 1960s or earlier!) might be using somewhat exploitative practices, a huge number of actual artists around the world are getting paid money for actual art, and further, AI art needs them to keep making art, because it'll stagnate really rapidly if they don't get new input.

or, go on Fiverr and see how low you can bid your art. that's where a lot of the stuff you see in small press work is coming from.
Again, you don't seem to understand how this directly undermines your own point. Fiverr or the like lets people get paid for art - art has, sadly, never been a big money business for most people involved. That doesn't support the idea that AI art is just opposed by "Karens" (jesus wept lol - Karens are the exact sort of people who see no problem with it). Sure, small press stuff has never paid well. That's not going to change. That was true in 1984, in 1964, in 1904, just as it is in 2024.

You essentially seem to be saying "Well, art isn't a big money business, so it's fine if it becomes one where there is big money, but that money just goes to a small number of corporate entities in California rather than to actual artists".

But, really, we have had the AI art argument over and over.
No, we haven't* - your comparison to GMOs is a very bad one, unsupported by any rational argumentation on your part (merely an admittedly amusing bit of vitriol), and your painting of all opposition to GMOs as mindless Karen-ism is hilariously culturally American and shows you have blinders on re: GMO usage and regulation worldwide. If we switched your place of birth on your character sheet to, say, Germany or Britain or India and pressed "update character", you simply wouldn't even have considered saying that.

The terrible business practices associated with early GMO usage (particularly that of then-Monsanto, now-Bayer) essentially convinced even countries which were initially pro-GMO to heavily restrict and monitor their usage. The real risk, as we've seen, isn't horizontal gene transfer or other such exotic scenarios, but very simply that the businesses who promoted GMOs initially couldn't be trusted and were aggressively litigious - and that's not actually profitable for them in the longer run! Those practices (of which the aggressive, litigious use of GMOs was only one part) sank Monsanto and caused Bayer's merger with them to be described in the following terms:

"Owing to the massive financial and reputational blows caused by ongoing litigation concerning Monsanto's herbicide Roundup, the Bayer-Monsanto merger is considered one of the worst corporate mergers in history."

Divorced from the bad business practices of a lot of the early companies involved, GMOs seem so far to be broadly harmless, though the evidence of their real benefits (as opposed to good marketing) is surprisingly scarce. To be honest I expected GMOs to offer such obvious benefits that they'd be undeniable across all crop farming sectors, but that hasn't happened - benefits have been basically non-existent for most crops, the exceptions being maize and soybeans (and to a lesser extent cotton).

But GMO vs non-GMO is a completely separate issue that's disingenuous to mix in with this issue. It doesn't even resemble this issue, and it's notable that outside the US, GMO crop usage tends to be pretty heavily regulated - allowed, often, to be clear - but regulated in ways normal crops are not. It's not been the game-changer people suggested it would be either.

* = EDIT - Wait, you probably mean we've had the AI art argument repeatedly literally, not metaphorically right? I'm keeping this because I think it's important context re: GMOs but otherwise, yeah we have - note that I did not start that argument this time!

Re: an AI assistant doing certain tasks, sure - but I already agreed that was pretty valid and helpful - AI has a real role as a gap-filler for certain tasks (so long as energy/computation costs can be managed/reduced/onshored/inhoused) - what it's not so great for is replacing more creative roles.
 


As a teacher in the US . . . no, not every student has an art class in K-12.

Kids in elementary usually do art as a part of their broader lessons, but a dedicated art class has become increasingly rare. At the middle and senior high levels, art classes are usually electives. Some kids take them, some don't.

You seem to be immersed in art . . . good for you! That is not the experience of many folks in the US.
That's sad to hear and it certainly helps to explain why attitudes to art and artists in the US are more "insane" than in most of the world. You don't typically see European posters attempting to imply that they've never had an opportunity to create art, that cruel artists are basically stealing away all the artistic talent and unfairly refusing to work without pay and so on, but those are exactly the sentiments you see from some of AI art's biggest boosters on, say, Twitter.

But the fact that AI art can be "good-looking" art and/or hard to distinguish from human-created art? Why is that so hard to accept?
The issue here is really simple - the vast majority of AI art that we are asked to see as "good looking" or told "doesn't look like AI art" very obviously doesn't actually meet those criteria. As I've pointed out, it tends to have two characteristics:

1) It doesn't actually look "good", it looks "detailed" - these are different things. Most AI art that people claim "looks great" tends to merely be detailed and to broadly match their prompt. Often it looks absolutely horrid - but there's backslapping going on between AI art fans assuring each other it looks great. We can't pretend that isn't happening - especially given it's been happening since the days when Cthulhu Mythos-esque hands were basically the norm. We've been told pieces "look great" and are "indistinguishable from real art" by people posting 22-fingered monstrosities with boobs the size of a basketball, and you expect people to just magically agree it's good now that the monstrosity only has 10 fingers most of the time?

Again, as I've already pointed out, there is AI art that does look good - but you have to generate dozens or more images to get there - and most people stop a long way before that, instead getting something that looks kind of "bleh" or even "ugh" but matches their prompt, is brightly coloured, and is detailed, and labelling it "good". And in a sense it is, to them. Not actually good - but "good enough".

2) It's not typically as hard to distinguish as people make out, especially not when you have large images. There will absolutely be stuff where you simply have no idea, sure. But there are still specific tells which aren't just mistakes - overdetail, and use of multiple different shading techniques are the classic two. Ironically if AI art wants greater acceptance, they probably need to recalibrate it to produce less detailed images, not more. I don't think that's actually possible, however, with the current technical approach.

I think a lot of people, especially some small-press RPG creators, would really like AI art to be harder to spot than it is, simply so they could get away with using it without people looking into it. At this point some of the "false claims" are so obviously dumb they look like people intentionally falsely claiming art is AI, so they can act out a little morality play as they pretend to be surprised to find out it wasn't and then handwring and start talking about witch trials. But I suspect it's nothing so cunning - it's just that those people are dimwits.

And the tools are getting better.
I've discussed this and you've just ignored what I've said in favour of bland generalizations, which, let me be clear, are very unhelpful to your argument.

The reality is, these tools are simply an evolutionary dead end. That's what a lot of people don't seem to understand. AI art will do something pretty amazing at some point, but not with the current scrape-and-compute approach, or whatever you want to call it. The current approach isn't one of creating art, it's one of creating images computationally. It can be refined - errors can be worked out - at the cost of making it more computationally expensive in most cases. But fundamentally the very way it works limits it.

I don't know for sure what the non-dead-end product is, but it's probably something that, instead of trying to crunch images instantly into existence, actually builds them up - likely with input from a human during the process. I think that's likely to produce much more persuasive results, and much less likely to be a dead end that's already up against the wall of what it can do.
 

Oofta

Legend
....
But, really, we have had the AI art argument over and over. That isn't really what this thread is about. AI-assisted, LLM-based tools for running your campaign are a really good idea and are definitely coming. Even with tools at the stage they are now, not tweaked specifically for it, you can get a lot of great mileage out of AI in relation to gaming. And it isn't stealing any jobs. You weren't going to hire a meatbag DM assistant anyway.

What??? Actually addressing the topic of the thread after the first two pages instead of posting how terrible AI is? You should put that kind of shocking language into a spoiler so people are prepared for it. :eek:
 

Reynard

Legend
Supporter
What??? Actually addressing the topic of the thread after the first two pages instead of posting how terrible AI is? You should put that kind of shocking language into a spoiler so people are prepared for it. :eek:
To be fair, I only did so after myself indulging in the same old debate, which was a mistake on my part.

But, yes, I think we can actually discuss the usefulness and inevitable integration of AI in the TTRPG space.

I want to do some testing with Adobe and Daggerheart, but I need to break up the DH book into 100-page pieces first. I am curious whether the Adobe LLM (is it proprietary, or are they licensing something? I don't even know) can actually parse the language in the playtest doc well enough to summarize it and make cheat sheets.
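(For what it's worth, splitting the PDF into 100-page chunks is only a few lines with pypdf - the filenames below are placeholders:)

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("daggerheart_playtest.pdf")  # placeholder filename
chunk = 100
for start in range(0, len(reader.pages), chunk):
    writer = PdfWriter()
    for i in range(start, min(start + chunk, len(reader.pages))):
        writer.add_page(reader.pages[i])
    with open(f"daggerheart_part_{start // chunk + 1}.pdf", "wb") as out:
        writer.write(out)
```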
 

Sulicius

Adventurer
The solution to this is to keep vigilant and keep learning AI's new tricks, as well as putting pressure on corporate to not use it. For one thing, the more you learn about art, the easier it is to spot AI. The solution is to keep learning. Also, the more you learn, the more you notice AI art DOES look really freaking bad.

-snip-

Come on, I literally gave an example in this thread about why art I generated with MidJourney was better than human art. I don’t get how you still think all AI art looks bad.

Do I need to give more examples?
 

Oofta

Legend
To be fair, I only did so after myself indulging in the same old debate, which was a mistake on my part.

But, yes, I think we can actually discuss the usefulness and inevitable integration of AI in the TTRPG space.

I want to do some testing with Adobe and Daggerheart, but I need to break up the DH book into 100-page pieces first. I am curious whether the Adobe LLM (is it proprietary, or are they licensing something? I don't even know) can actually parse the language in the playtest doc well enough to summarize it and make cheat sheets.

With various games there are a lot of options. Helping with prep by giving me ideas and thought starters based on current events and the established lore of my game. Helping me customize existing monsters or come up with new ideas based on a theme. Planning out a dungeon, city, or kingdom and helping me populate it with NPCs and potential plot hooks that tie back to specific NPCs. Helping me set up encounters for my specific group of players.

At the table it could answer questions, or help me generate some simple stuff on the fly when the players go left when I really thought they'd go right. Give me a voice assistant, where I provide the text or just prompts and it has the conversation, so that bullywug NPC really doesn't sound like my bad Kermit the Frog impersonation. For VTT it could build the visuals for that village and populate it.

There are some gray areas here of course. Let's say someone creates a Forgotten Realms LLM that sucks in everything ever written about the setting, including articles, novels, and setting material. If you're happy with that, does it get to the point where nobody ever needs to write a module again? Does it just mean that the people writing the module could use it as an aid so we could have more modules with more branching storylines?

I don't know what the future brings. But the only constant is change. I think AI can ethically be used to enhance the hobby and enhance our games. Because whether we like it or not it is coming and the technology we call AI today is going to continue to evolve. I don't think we're going to reach the singularity anytime soon, but aids for TTRPGs like D&D are prime targets for the type of technology being developed.
 

There are some gray areas here of course. Let's say someone creates a Forgotten Realms LLM that sucks in everything ever written about the setting, including articles, novels, and setting material.
I mean, that's going to have a substantial subscription attached to it, if you're proposing it be used user-side.

It's also going to be absolutely wildly biased towards a 2nd Edition-era vision of the FR, because the vast majority of all that FR material is from that era.
If you're happy with that, does it get to the point where nobody ever needs to write a module again?
Not any time soon - you'd need a huge number of modules to draw from for the edition you needed to generate for - and 5E simply doesn't have that - especially not to the standard needed by a company like WotC.
Does it just mean that the people writing the module could use it as an aid so we could have more modules with more branching storylines?
Probably not. I don't think there's a demand for "more branching storylines" really - especially not low-quality knocked-out-in-a-hurry ones.

The real usage I think would be for an LLM not to just synthesize junk but to act as a powerful search tool for all the existing FR material - used internally at WotC or perhaps also offered (at some low-ish cost) to people writing for the DM's Guild and the like, who could use it to summarize FR material, list FR material that discusses certain subjects or areas, or directly pull out those descriptions in response to a query.

Current and foreseeable (not AGI) AI is a wonderful tool for searching and summarizing stuff - but it needs crazy numbers of examples to generate stuff well - I think that'd be a lot more likely.
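To make that concrete, here's a toy sketch of the retrieval half of that idea - plain keyword overlap over text snippets, with the actual summarizing left to whatever LLM sits on top. The snippets and function are mine, invented for illustration:

```python
# Score setting-text snippets by keyword overlap with a query and return the
# best matches for an LLM (or a human) to summarize or quote.
SNIPPETS = [
    "Shadowdale lies along the River Ashaba and has long resisted Zhentarim influence.",
    "The Cult of the Dragon works to raise dracoliches in line with Sammaster's prophecy.",
    "Waterdeep is ruled by masked Lords whose identities are hidden from the public.",
]

def search(query: str, snippets: list[str], top_k: int = 2) -> list[str]:
    terms = set(query.lower().split())
    return sorted(
        snippets,
        key=lambda s: len(terms & set(s.lower().split())),
        reverse=True,
    )[:top_k]

print(search("who rules waterdeep", SNIPPETS))
```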

Helping with prep by giving me ideas and thought starters based on current events and the established lore of my game. Helping me customize existing monsters or come up with new ideas based on a theme. Planning out a dungeon, city, or kingdom and helping me populate it with NPCs and potential plot hooks that tie back to specific NPCs. Helping me set up encounters for my specific group of players.
The only issue right now is that the AIs we have available are ghastly at this*. Primarily because they always go for ultra-bland OTT fantasy tropes. The more generic and obvious something is, the more attractive it is to LLM AI. That's just how it works.

They don't have to be ghastly - one that was trained on more specific material, that was refined by WotC or whoever to provide more D&D-appropriate, more intriguing, less trope-y results could be pretty great.

Right now perhaps the ideal would be not getting an AI to itself think of stuff (which it's often terribly bad at), but somehow getting an AI to roll on countless charts, and combine and collate the results on a scale that you wouldn't want to do by hand - literally using random charts tends to produce much stronger and more distinctive results than getting the LLM to actually try and come up with stuff.
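Something like this, in toy form - the chart entries are invented, the point is the roll-and-collate loop rather than asking the LLM to dream something up:

```python
import random

# Roll once on each chart and collate the results into a scenario seed.
CHARTS = {
    "location": ["ruined lighthouse", "flooded crypt", "gnomish toll bridge"],
    "complication": ["a rival party got there first", "the locals are lying", "it is cursed"],
    "antagonist": ["a bored devil", "a guild of cartographers", "an exiled druid"],
}

def roll_scenario(charts: dict[str, list[str]]) -> dict[str, str]:
    return {name: random.choice(entries) for name, entries in charts.items()}

print(roll_scenario(CHARTS))
```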

* = They're a bit less ghastly about NPCs, but AI does tend to give them hilariously lurid and convoluted backstories unless you ask it not to (and even then sometimes...) - this is because I think it's drawing on books, scripts, and so on.
 

EzekielRaiden

Follower of the Way
I would never use, whether as a player or as a DM, any AI service genuinely meant to replace a DM. I find the very idea risible.

I would make use of AI tools as suggestion engines when I'm out of ideas. I have, in fact, done exactly that. None of the specific, nitty-gritty bits ChatGPT gave me were usable as-is. But at the high-concept level, the general notion, it can be a great source of alternative perspectives. A digital sounding board, if you like, which you can then leverage to produce better work by your own hand.

For my current DW game, it actually gave me a great concept for the temple my players are exploring. It's dedicated to a forgotten aspect of the One called the "Whimsy Lord," who represents humor, joy, tenacity, and dignity in the face of a chaotic and ever-changing universe. ChatGPT suggested a nice puzzle idea, with statues depicting people in various poses, to which the players must match various riddles and jokes, representing both the humor angle and the need for cleverness and wisdom, which would fit very well with the doctrine associated with this facet.
 

ART!

Deluxe Unhuman
Come on, I literally gave an example in this thread about why art I generated with MidJourney was better than human art. I don’t get how you still think all AI art looks bad.
I would think one example would definitely be insufficient to prove anything, if proving such a thing were even possible.

"better than human art" is how to say you haven't seen enough "human art" without saying you haven't seen enough "human art".
 

