D&D General: Deep Thoughts on AI - The Rise of DM 9000


mamba

Legend
While human writing has a tendency to opinionate within subjective parameters, the opinion-less AI is synthesizing all those POVs into a whole.

That could be an incredibly useful tool to zoom out and see the bigger picture first, and then dive deeper into our thesis.
That depends on what you are looking for. There are plenty of topics where a lot of uninformed opinion is out there, and mixing that in with the informed ones might get you the ‘big picture’ in the sense that you can see what people think, but it does not get you an accurate, informed article (or whatever else you were looking for).

Basically this…
I asked ChatGPT a few questions yesterday (what were lines 2 and 3 of a particular poem by Wright, given the title and line one; which of three fictional chefs I named was best according to the literature; tell me about the Thirty Years' War in a part of Germany) and it kind of lied on all three. It made up verses to the poem, assigned one of the chefs to a book they weren't in, and said there were important battles there but when pressed couldn't name any.
except that ChatGPT is not lying. Because it simply does not understand anything, it cannot lie; it is just aggregating information, unable to distinguish good data from bad data as it does so.
 

mamba

Legend
I think you're greatly underestimating an AI's potential ability. Even now, ChatGPT can answer hundreds or even thousands of questions of all shapes, sizes, and topics. And this is the infant stage.
That is not as much of an accomplishment as you seem to believe it is. I can answer any question, as long as I do not particularly care whether my answer is accurate - which seems to be pretty much the spot ChatGPT is in ;)
 

Fanaelialae

Legend
Regarding the example with the stone construct and the beverage, I'm surprised that no one's bothered to point this out: it's not exactly a great example of DMing, in no small part because the "AI" doesn't understand what a stone construct is.

As a thinking human being, it seems strange to me that this stone construct would drink the beverage. You could "yes and" this in a way that makes more sense. Perhaps the construct is a vessel for a human soul. It gazes at the drink longingly and thanks the PC for treating it as a human being, then steps aside. Drinking it? Kind of an odd thing. Actually being poisoned by it? Improbable as heck (as anyone who's actually played D&D for longer than 5 minutes would know).

But, of course, the AI doesn't know this because it doesn't actually understand what a stone construct is. Replace the construct with an orc, a demon, or a giant talking frog, and the scene might play out in exactly the same way. Because the AI doesn't actually understand the difference between these in the way that a human does.

Regarding the claim that we'll have AI DMs within 5 years? I think that depends on the quality you're intending. A mediocre DM that can suffice if you have nothing better, requiring periodic reprompting? Sure. The current AIs can more or less do that already, at least one on one. Something equivalent to a capable and imaginative human DM that can run a seamless campaign for an entire table? IMO, that's easily at least a decade off, and more likely several.

That's not to say that what AI can already do isn't impressive.

I've played with AI Dungeon, and while it's not the most creative thing in the world, it is kind of fun. However, it constantly gave me responses like "You don't find anything". Not exactly great DMing, unless the goal is to leave your players frustrated that nothing they try produces anything even resembling a result. There were also multiple times when the output seemed entirely disconnected from my input, as if it had ignored what I wrote and just generated something random.

I used ChatGPT just last week to help with game prep. I gave it my ideas for an adventure and it helped to build out the skeleton, saving me some time and waffling. Something to note: my first input was something of a run-on sentence. A human would have been able to parse it, though they might have made a snarky comment about punctuation. ChatGPT just errored out entirely. That said, it was quite helpful in getting me past the blocks I had, which is impressive given what it is (a clever application of math to a large amount of data). Ultimately, though, I was the one who supplied the real creativity to the adventure.

It's a useful tool for DMs, no doubt. But as a DM replacement? I think it'll be a long time before we see a genuinely viable product in that space. Not merely a bare minimum viable product, but rather something capable of competently substituting for the average human DM, with little to no perceived difference in quality of experience.
 

Emoshin

So Long, and Thanks for All the Fish
That depends on what you are looking for. There are plenty of topics where a lot of uninformed opinion is out there, and mixing that in with the informed ones might get you the ‘big picture’ in the sense that you can see what people think, but it does not get you an accurate, informed article (or whatever else you were looking for).

Basically this…

except that ChatGPT is not lying. Because it simply does not understand anything, it cannot lie; it is just aggregating information, unable to distinguish good data from bad data as it does so.
That's fair. And to my point about synthesizing different POVs, I think it's useful, if we look upthread, to see that there are at least two sides on this general topic:

We all see the current situation with the public-access version of ChatGPT, AI research & development, etc., and perhaps we split into at least two* sides or extremes:

AI Optimists: Those who feel confident that future AI will definitely get good overall at distinguishing good from bad data
AI Pessimists: Those who feel confident that future AI will definitely not get really good overall at distinguishing good from bad data

BUT what if we raise the stakes? What if there's a gun to your head, and if your prediction turns out to ever be wrong, your brains will be blasted out? Or, if you prefer, Zeus will strike you down with a lightning bolt for your hubris if you are wrong.

In that scenario, just the feeling of confidence or hope is probably not going to cut it. The stakes are higher: if we turn out to be wrong, then we are dead.

In a high-stakes scenario, I think my statement was accurate that AI "could be an incredibly useful tool to zoom out and see the bigger picture first, and then dive deeper into our thesis", and I think your statement was also accurate that "that depends on what you are looking for". So as humans, what would we do next? Or where is the AI R&D going in this regard?

* It's not actually as binary as just two sides; some people will be undecided, or have one foot in one camp and the other in another, etc.
 

Cadence

Legend
Supporter
Basically this…

except that ChatGPT is not lying. Because it simply does not understand anything, it cannot lie; it is just aggregating information, unable to distinguish good data from bad data as it does so.

I will amend it to "gives the impression of lying" :)

I wonder how much slower it would be if it did a quick double-check of the things it claimed, and didn't attribute things it just spliced together to someone else.

I'm tempted to open two at once and have it fact check itself in real time.
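
Something like this hypothetical sketch, where `ask` is an assumed wrapper around whichever chatbot instances you open (nothing here is a real API):

```python
# Hypothetical sketch of the "open two at once" idea: one instance answers,
# a second instance is asked to fact-check the first. `ask` is an assumed
# stand-in, not a real chatbot API.

def ask(instance: str, prompt: str) -> str:
    # Stand-in: send `prompt` to the named chatbot instance, return its reply.
    return f"[{instance}'s reply to: {prompt!r}]"

def checked_answer(question: str) -> tuple[str, str]:
    answer = ask("instance-A", question)
    critique = ask(
        "instance-B",
        f"Fact-check the following answer to {question!r}. "
        f"List any claims you cannot verify:\n{answer}",
    )
    return answer, critique

answer, critique = checked_answer("What are lines 2 and 3 of the poem?")
# Caveat: instance B aggregates from the same kind of training data as
# instance A, so agreement between the two is not a guarantee of accuracy.
```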
 

Dausuul

Legend
And another.

"Deeply unsettled" is the reaction of someone who anthropomorphizes the chatbot. Me, I about died laughing when I read that conversation. (Actual transcript here.) It showcases exactly why you shouldn't anthropomorphize the thing.

What it is doing is essentially "yes-anding" itself into lunacy. Each of its responses has the following inputs:

1. The prompts from the user.
2. Possibly some facts pulled from a data store or Internet search. (I don't know if the Bing chatbot does this. If it doesn't now, I suspect it soon will.)
3. Its own previous responses.

#3 is the crucial thing here. The bot is based on pattern recognition: It's trying to extend a perceived pattern, based on text harvested from the entire Internet -- every kind of text there is, from Twitter threads to novels. So if the bot looks over its previous responses and sees something that resembles a conversation with a moody teenager, its future responses will be even more like a moody teenager. If it sees something that resembles science fiction about AI gone bad, it will build on that too.

The more you ask it to elaborate, the more it reinforces the pattern, and its responses get more and more extreme. This gives the impression of a sapient being opening up and sharing more of their inner thoughts. But it's just that, an impression. What it's really doing is mirroring humanity back to ourselves. It's basically just a turbocharged ELIZA... which also had people anthropomorphizing it, way back in the 1960s.
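
To make that concrete, here's a minimal sketch of the loop in Python. (The `generate` and `retrieve_facts` functions are hypothetical stand-ins, not any vendor's actual API.)

```python
# Hypothetical sketch of the three inputs listed above; nothing here is a
# real chatbot API, just stand-ins to show the shape of the loop.

def generate(context: str) -> str:
    # Stand-in for the language model: a real system runs a next-token
    # predictor over `context`, extending whatever pattern it perceives.
    return "(model output conditioned on the transcript so far)"

def retrieve_facts(prompt: str) -> str:
    # Stand-in for input #2: an optional data-store or web lookup.
    return ""

transcript: list[str] = []  # input #3: grows with every turn

def chat_turn(prompt: str) -> str:
    facts = retrieve_facts(prompt)                      # input #2
    context = "\n".join(transcript + [facts, f"User: {prompt}", "Bot:"])
    reply = generate(context)                           # inputs #1 + #2 + #3
    transcript.extend([f"User: {prompt}", f"Bot: {reply}"])
    return reply

# Because `transcript` feeds the bot's own earlier replies back in, any
# drift toward "moody teenager" or "rogue AI" patterns compounds each turn.
```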
 

Maxperson

Morkus from Orkus
"Deeply unsettled" is the reaction of someone who anthropomorphizes the chatbot. Me, I about died laughing when I read that conversation. (Actual transcript here.) It showcases exactly why you shouldn't anthropomorphize the thing.

What it is doing is essentially "yes-anding" itself into lunacy. Each of its responses has the following inputs:

1. The prompts from the user.
2. Possibly some facts pulled from a data store or Internet search. (I don't know if the Bing chatbot does this. If it doesn't now, I suspect it soon will.)
3. Its own previous responses.

#3 is the crucial thing here. The bot is based on pattern recognition: It's trying to extend a perceived pattern, based on text harvested from the entire Internet -- every kind of text there is, from Twitter threads to novels. So if the bot looks over its previous responses and sees something that resembles a conversation with a moody teenager, its future responses will be even more like a moody teenager. If it sees something that resembles science fiction about AI gone bad, it will build on that too.

The more you ask it to elaborate, the more it reinforces the pattern, and its responses get more and more extreme. This gives the impression of a sapient being opening up and sharing more of their inner thoughts. But it's just that, an impression. What it's really doing is mirroring humanity back to ourselves. It's basically just a turbocharged ELIZA... which also had people anthropomorphizing it, way back in the 1960s.
Humans.

1. The prompts from a person. "Do you like anchovies on your pizza?"
2. Possibly some facts pulled from a data store in your head. Possibly from a search of the internet or other sources.
3. Your own previous experiences.

I'm really not seeing a lot of difference.
 

Stalker0

Legend
We all see the current situation with the public-access version of ChatGPT, AI research & development, etc., and perhaps we split into at least two* sides or extremes:

AI Optimists: Those who feel confident that future AI will definitely get good overall at distinguishing good from bad data
AI Pessimists: Those who feel confident that future AI will definitely not get really good overall at distinguishing good from bad data

BUT what if we raise the stakes? What if there's a gun to your head, and if your prediction turns out to ever be wrong, your brains will be blasted out? Or, if you prefer, Zeus will strike you down with a lightning bolt for your hubris if you are wrong.

In that scenario, just the feeling of confidence or hope is probably not going to cut it. The stakes are higher: if we turn out to be wrong, then we are dead.

In a high-stakes scenario, I think my statement was accurate that AI "could be an incredibly useful tool to zoom out and see the bigger picture first, and then dive deeper into our thesis", and I think your statement was also accurate that "that depends on what you are looking for". So as humans, what would we do next? Or where is the AI R&D going in this regard?

* It's not actually as binary as just two sides; some people will be undecided, or have one foot in one camp and the other in another, etc.
The simple truth is… most predictions about the future are wrong, often hilariously so. We humans just aren't great at it, because we never see the curveballs coming; we can only extrapolate from our current understanding.

Extending out the current AI scenario: if any of the current fundamental AI models can accurately simulate human thinking "reasonably well", and it's simply a matter of training, tweaks, and refinement, then we should expect incredible progress in the next few years; anyone who thinks it will take 10 years is kidding themselves.

On the other hand, if the current models have a fundamental flaw that gets us 80% of the way but no further, then we will likely see a short spurt of rapid growth followed by a brick wall of stalled progress, with years of effort to inch forward. AI will be disruptive, but not the all-encompassing entity we fear it to be. It won't be until the next paradigm shift, when someone rethinks the model entirely, that we could reach that ultimate level.

Now again, we can't see the curveballs. There could be worldwide disasters: a virus that takes down the entire internet, oil running out and technology shutting down, and so on.

But on the curve we are on, we have a large number of smart people with tons of resources working on the problem. Further, this isn't an "impossible" problem like faster-than-light travel (which current physics tells us is impossible); nature has shown us a thinking mind is possible. So to bet against AI is to bet against human ingenuity's ability to mimic something found in nature… and I would never take that bet.

The biggest thing ChatGPT has done is remind us: whether it's 1 year, 10, or 50, AI is coming. It probably won't take the form we expect, and we aren't ready. Maybe this will be a wake-up call for governments to start getting serious about how AI will move in the future, because if they don't, they will be helpless to catch up as it very quickly takes over.
 

Clint_L

Hero
"Deeply unsettled" is the reaction of someone who anthropomorphizes the chatbot. Me, I about died laughing when I read that conversation. (Actual transcript here.) It showcases exactly why you shouldn't anthropomorphize the thing.

What it is doing is essentially "yes-anding" itself into lunacy. Each of its responses has the following inputs:

1. The prompts from the user.
2. Possibly some facts pulled from a data store or Internet search. (I don't know if the Bing chatbot does this. If it doesn't now, I suspect it soon will.)
3. Its own previous responses.

#3 is the crucial thing here. The bot is based on pattern recognition: It's trying to extend a perceived pattern, based on text harvested from the entire Internet -- every kind of text there is, from Twitter threads to novels. So if the bot looks over its previous responses and sees something that resembles a conversation with a moody teenager, its future responses will be even more like a moody teenager. If it sees something that resembles science fiction about AI gone bad, it will build on that too.

The more you ask it to elaborate, the more it reinforces the pattern, and its responses get more and more extreme. This gives the impression of a sapient being opening up and sharing more of their inner thoughts. But it's just that, an impression. What it's really doing is mirroring humanity back to ourselves. It's basically just a turbocharged ELIZA... which also had people anthropomorphizing it, way back in the 1960s.
ChatGPT is a generalist. Imagine taking those basic capacities, and putting them within the constraints of an RPG adventure. In other words, giving it specific information within which to contextualize its responses.

People have been using AI dungeon masters for years in the form of increasingly complex RPG video games. But those are always constrained by, in effect, option trees provided by the human writers: basically, really complex "pick a path to adventure" stories. What technology like ChatGPT suggests is the possibility of getting rid of the option trees so that players can suggest almost anything.

Take my example from a few pages back, and imagine that, because it is optimized for a specific adventure, the AI knows that the guardian is an ogre with certain goals and tendencies.
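
In rough code terms, that kind of optimization might look like this hypothetical sketch; the adventure facts, the ogre's name, and the `generate` stand-in are all made up for illustration, not any real product's API:

```python
# Hypothetical sketch: constraining a general text model with
# adventure-specific context. `generate` is a stand-in for the model.

ADVENTURE_FACTS = """\
You are the DM. The bridge guardian is an ogre named Gruk.
Gruk is greedy but cowardly: he accepts bribes of food or coin,
flees if badly hurt, and never abandons the bridge.
Stone constructs in this dungeon do not eat or drink.
"""

def generate(context: str) -> str:
    # Stand-in for the language model.
    return "(narration that is, we hope, consistent with the facts above)"

def dm_response(player_action: str) -> str:
    # Every turn re-supplies the adventure facts, so the model's
    # pattern-matching is anchored to them rather than to the whole
    # Internet at once.
    return generate(f"{ADVENTURE_FACTS}\nPlayer: {player_action}\nDM:")

print(dm_response("I offer the guardian my waterskin."))
```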

As I posted earlier, I think there is a tendency in this discussion to take a black-or-white position: either we have an amazing, all-purpose AI that can replace a human in every situation within a few years, or AI will never be able to supplant a person. It won't be cut and dried. There will be... there already are... situations in which text AIs can outdo people, and there will be other situations where that will be a long time coming.
 
