D&D General: Deep Thoughts on AI - The Rise of DM 9000

Cadence

Legend
Supporter
How much of how things look, feel, and smell beyond the standard written descriptions informs what happens in games?

Also, how will the AI DM deal with the resident teenager leaving out the nerf guns to distract the players until he is done with his homework so that they don't get far in the dungeon before he can join? (Actually, on that one it probably can't do worse).
 



At a certain point, you have to ask, "What is understanding?" If an AI can understand natural language, can respond in natural language, and can create art- then how is that materially different than what we do?
I mean, what this kind of AI is going to do is make it increasingly obvious that there is a material difference. It's already started.

That's one of the upsides here. The AIs we're seeing absolutely cannot genuinely understand anything that's being said to them. They're merely reacting using a logic-based language model. That's why they fail in the peculiar ways that they do, and until a fundamentally different approach to AI is taken, they'll continue to fail in those ways. Humans will carefully guard them, prune them, constrain them, and limit them in ways that hide these fundamental failings, but the failings will be present.

Take double-checking, for example - something all these AIs are terrible at. Humans know to check things. That's not a language-logic-level response; it's below that, I'd suggest. Humans exist in the world, are aware of the world, and know how to figure things out in ways that don't just involve logic based on language. With these kinds of AIs, that's not possible - you have to essentially cheat and bolt on more basic computer functions: if someone is talking about the date, go and check the date at some authoritative source. A human doesn't need to be told to do that - they can figure it out - a language-logic-based AI will never in a million years figure it out.
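The "bolting on" described above is essentially what tool augmentation looks like in practice. Here is a minimal sketch of the idea; `model_answer` and the keyword routing are hypothetical stand-ins, not any real system's API:

```python
import datetime

def model_answer(prompt: str) -> str:
    """Stand-in for a language model's unaided reply (hypothetical).
    Illustrates the failure mode: confidently inferring the date from text."""
    return "It is 2022, because most text about Avatar 2 says it hasn't released yet."

def answer(prompt: str) -> str:
    # The bolted-on check: if the question concerns the date,
    # bypass the language model entirely and consult an authoritative
    # source (here, the system clock) instead.
    if "date" in prompt.lower() or "year" in prompt.lower():
        return f"Today is {datetime.date.today().isoformat()}."
    return model_answer(prompt)
```

The point is that the routing rule lives outside the model: a human had to anticipate the failure and hard-wire the workaround, which is exactly the "cheat" being described.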

This is a false dawn.

We will see "true" AI eventually - i.e. self-aware and able to genuinely figure stuff out, not merely respond to prompts, but that's not what this generation is. People are very impressed because it's basically Turing-test compliant, but as has been pointed out for decades - almost since Turing suggested it - that's a godawful test for whether something is intelligent. The Chinese room argument is both correct and incorrect - a machine could be and undoubtedly will be made that is intelligent - but what we have here, right now, are mere Chinese rooms. The full philosophical argument is rather fatuous and humanocentric, but the specific thing that's described is essentially what we have.

So ... I am less confident than you are. I would be shocked if we don't have AI-capable DMs within five years- even if that isn't the use case for them. And that's not a statement I would have made a year ago.
We could have language-logic-based AIs right now if someone just wanted to build them and had a good enough data set. There's a text-adventure AI tool - the name escapes me - that's somewhat similar.

The big problem, though, is the data set. Almost all DMing is live and unrecorded. It is lost like tears in the rain. Over the last few years, we've had a lot of podcasts and streams which are recorded. However, most of them are edited down rather than released in full, and they tend to represent a peculiar, showy branch of DMing rather than a more typical approach. They're also very time-bound, and the majority of them somewhat similar in tone, so it's not a huge data set. The players are also highly atypical - far less argumentative and far better at improv than 95% of tabletop groups.

It'll also have peculiarities and freak-outs where a real DM never would. Depending on the way it's modelled/built too, it could have a peculiar approach to the rules.

But I agree that we'll see it - you don't need anything beyond a language-model to build one that's basically functional in the same way that other DM replacements are (i.e. like a flashier version of Ironsworn's Oracle), from a technological perspective.
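A DM replacement in the Ironsworn-Oracle mould really is only a few lines of code. This is a toy yes/no oracle with illustrative odds - a sketch of the general technique, not the published Ironsworn tables:

```python
import random

# Likelihood -> percent chance of "yes". These numbers are illustrative,
# not taken from any published game.
ODDS = {
    "almost certain": 90,
    "likely": 75,
    "50/50": 50,
    "unlikely": 25,
    "small chance": 10,
}

def ask_oracle(likelihood="50/50", rng=None):
    """Roll a d100 against the chosen likelihood and answer yes or no."""
    rng = rng or random.Random()
    roll = rng.randint(1, 100)
    return "yes" if roll <= ODDS[likelihood] else "no"
```

A language model layered on top of something like this would just dress the "yes"/"no" up in flavourful prose - which is why a basically functional version is so doable.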

To get one that could understand maps, write and map coherent adventures which weren't dungeon crawls/railroads and so on, you'd need a bit more complexity - to pair a language-model with something funkier. But if you just want an "ask it what happens", that's pretty doable.

Oh, there's another major difficulty too - keeping track of the fiction. In a one-on-one environment, where the only interaction method is text, this is simple. But once you get an entire party of PCs involved, and they're talking rather than writing, it's going to be pretty hard for the AI to keep track of the fiction/fictional positioning, where it'd be intuitive for a human. So five years may actually be optimistic unless language-model AIs become better at dealing with multiple different people talking to them about the same thing.
 

Clint_L

Hero
I think what a lot of folks are misunderstanding is that ChatGPT is not simply replicating what it finds on the internet. It is generating novel responses according to increasingly powerful pattern recognition.
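One way to see that "pattern recognition" can produce genuinely novel output is with a toy word-level bigram model - vastly simpler than anything behind ChatGPT, but it makes the point: the generator learns which word tends to follow which, and can then emit sequences that never appear verbatim in its training text.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words follow which - crude pattern recognition."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the learned patterns to produce a new word sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = ("the dragon guards the hoard and the knight guards the gate "
          "and the dragon guards the gate")
bigrams = train_bigrams(corpus)
print(generate(bigrams, "the"))  # output varies with the seed
```

Run it a few times with different seeds and it will produce phrases like "the knight guards the hoard" that occur nowhere in the corpus - recombination, not retrieval. Modern models do something far more sophisticated, but the same principle applies.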

The other thing to keep in mind is that this is not an all-or-nothing situation, as if ChatGPT were worthless unless it could attain the standards of the best writers. Think of it as a writing tool you can add to your own arsenal. You can work with it, guide it, add your own stuff.

For several tasks in my Creative Writing class, I have asked students to use it by imagining that they have a team of fast, technically capable, but very unimaginative writers working for them (so, any sitcom writing room, in effect), then getting the AI to riff on their ideas, constantly guiding it and adding their own writing.
 

Snarf Zagyg

Notorious Liquefactionist
I mean, what this kind of AI is going to do is make it increasingly obvious that there is a material difference. It's already started.

That's one of the upsides here. The AIs we're seeing absolutely cannot genuinely understand anything that's being said to them. They're merely reacting using a logic-based language model. That's why they fail in the peculiar ways that they do, and until a fundamentally different approach to AI is taken, they'll continue to fail in those ways. Humans will carefully guard them, prune them, constrain them, and limit them in ways that hide these fundamental failings, but the failings will be present.

I will respectfully disagree.

Sure, some of the models fail in predictable ways. But what's shocking and interesting is that they also fail (and succeed) in ways that we don't understand.

As someone who follows the field and was relatively unperturbed by all of this a year ago, I am truly shocked. Not just because of the leap in the forward-facing tech, but more importantly, because of the consumer-facing nature of it, which means that adoption and dissemination will be spreading that much faster.

In other words, the more use-cases it has, the more use-cases it will generate. Twelve months ago, there wasn't a widespread worry about AI artists. Six months ago, there wasn't a widespread concern about AI-generated essays. Two weeks ago, I would have thought that the idea of having an AI draft code for me for a home project was laughable- now, I can't imagine not having an AI generate the code.

Life moves pretty fast. If you don't stop and look around once in a while, you could miss it.
-A line from the forthcoming ChatGPT's Day Off
 

Snarf Zagyg

Notorious Liquefactionist
Wow! In the spirit of great (or, um, "aspiring to the heights of mediocrity" in my case) minds thinking alike, I just saw this video linked from John Gruber's blog-


Synopsis-
Roughly the same points I was making, but with no D&D (duh), and more of a focus on the adoption of the technology. I am actually quite shocked as to how closely this tracks my observations, even to the extent of knowing someone using it to write a letter! More on the coding, though. It tracks the same feeling of ... not fear, but unease as to what comes next. This feels different. This isn't just Clippy, or Alexa, or something similar packaged for a new generation with a new coat of paint. This is the beginning of something qualitatively different.
 

nevin

Hero
What is "bad data" here? Sounds like a pretty dangerous concept, because who determines that? And what is their agenda?

To me it looks like Bing isn't "biased by bad data", but rather the model itself is fundamentally flawed, because however it works, it's not allowed to do what a human would do in this situation, which is go and check what date most people/sites thought it was. Instead it's relying on the language and logic rules - if most of the discussion of Avatar 2 refers to it having a future date, then it must be in the future, QED it is 2022. And it's willing to aggressively call people liars for telling the truth, and not check, which even a terrible poster is unlikely to do.

We'll see some interesting stuff in the future where AIs attempt to enforce the normative behaviour set by their creators, even when those norms are outdated. We've seen this in sci-fi for decades, but we'll get to see it for real. The idea that they'll improve the quality of discussion, though, seems far-fetched. More likely they'll lower it considerably.
At least half of the Internet.
 

nevin

Hero
I think what we'll see, though how fast I'm not sure, is that as AI becomes better and eventually becomes real AI, a huge portion of the boring, repetitive tasks that people do will be done by AI. That's awesome in many ways. But what happens to all the mediocre, average, and below-average people whose jobs can be done better by AI? Technology is all about making things better and easier for us. Eventually we will reach the point where AI devices can do everything better than us. If our entire planet weren't driven by economics and money-making, that would be amazingly awesome. Instead, it will be a complete rewrite of what an economy is. Some days I'm hopeful for that future; some days my fellow humans make me think we'll end up in a dystopian hellhole. It'll probably be somewhere in between.
 

OB1

Jedi Master
Well that was fast. There is now a paid gateway to ChatGPT that runs $20 a month to get 'priority access' and faster response times.
 

Oofta

Legend
Not to get too philosophical here, but I think you are treading close to the Chinese room argument.

At a certain point, you have to ask, "What is understanding?" If an AI can understand natural language, can respond in natural language, and can create art- then how is that materially different than what we do?

Put another way, you say that all the AI does is a decent job of getting input and finding the corresponding output. Which is true! Arguably, of course, that's what we do too.

The idea that certain things (art, natural language, writing, coding, and so on) were the "special sauce" that we had over computers, unlike, say, chess (Deep Thought, get it?) was something I assumed would be the case for a long time- after all, any experience with Siri or Alexa would quickly disabuse you of the notion that there was much there, there. But that's why this is different.

At this point, we are just seeing the beginning of the public-facing aspects of the new technology. But these things happen quickly. Again ... we are already seeing that they can easily write essays that are better than those written by most high school students. They create art better than that of the vast majority of people (who can't do art at all). And unlike us, they advance quickly and are scalable.

So ... I am less confident than you are. I would be shocked if we don't have AI-capable DMs within five years- even if that isn't the use case for them. And that's not a statement I would have made a year ago.

Literally billions of dollars have been spent on AIs that can safely pilot a car. With input sources significantly better than human eyesight and the ability to analyze and react in a fraction of the time it would take a person, we still have Teslas running into parked semis.

That AI simply doesn't "understand" the world in the way we do. It doesn't "think". We are nowhere near creating an AGI (artificial general intelligence). The best guess is that we might get there by 2060 or so, but people working on this also tend to be overly optimistic. See this article: When will singularity happen? 1700 expert opinions of AGI [2023]

I think you would need an AGI to replace a DM, if you even wanted to do so. Then again, I remember a short story about the creation of an all-knowing AGI (not that they called it that), and the first thing they asked was "Is there a God?" - and the response was "There is now." So I'm not sure we should be looking forward to the singularity. ;)
 
