D&D General Deep Thoughts on AI- The Rise of DM 9000

Sure, some of the models fail in predictable ways. But what's shocking and interesting is that they also fail (and succeed) in ways that we don't understand.
What would be some examples of succeeding in "ways we don't understand"?

I ask because I have yet to see such. Some stuff works surprisingly decently, like better than one might instinctively expect, but I'm not aware of any "not understood" successes. And a lot of "not understood" failures in all fields through history have been extremely well-understood by basically everyone but the people creating the thing that failed!

As someone who follows the field and was relatively unperturbed by all of this a year ago, I am truly shocked. Not just because of the leap in the forward-facing tech, but more importantly, because of the consumer-facing nature of it, which means that adoption and dissemination will be spreading that much faster.
I also follow the field (I dunno if more or less closely - probably less) and I have to say, I've not been shocked.

It's not that there's been some stunning novel advance - it's that tech that's fundamentally been around for a few years is being tested on the public, essentially. The sheer amount of it isn't anything to do with some flipped state - it's to do with greedy businesses trying to roll out unoriginal tech before their competitors roll out nigh-identical tech.

In other words, the more use-cases it has, the more use-cases it will generate.
Maybe? I'm not seeing that many genuine use-cases for any of this yet - i.e. ones which aren't just "It does a crap job, but at least we don't have to pay someone". AI code generation is a possible genuine use case, but I'm somewhat (tangentially) familiar with it through my job, and at least in my sphere, it doesn't actually work very well, in terms of creating real time savings. Perhaps it does in other fields.

AI art gets less impressive every day as it becomes more and more obvious how limited it is, and how easy to spot it is - not just weird hands and teeth, but stuff like how it's obsessed with facial lines and makes everyone look like they're wearing a ton of contouring makeup if it tries to do photographs. I'm not sure there are many real use cases that aren't extremely suspect (deception, semi-illegal/illegal niche porn, etc.). It's not bad for creating a scene to work from if your imagination is failing you, I guess.

Language models demonstrate a very impressive ability to sound human in text, and that's great, but they're constantly wrong, inaccurate, and misleading, don't attribute sources, and have been programmed to be variously irresponsible, evasive, and stubborn, and fundamentally keen to create misunderstanding, which is, frankly, horrifying and I'd say borders on actively unethical design.

This feels different.
False dawn, I'm telling you man. In five years or whatever we'll revisit and see how much of this really created change and how much was a cool but largely useless deal.

People will lose jobs and stuff, but AI right now is just the outsourcing/offshoring of the 2020s. In the 2010s and a little before, outsourcing and offshoring was all the rage. You could save a huge amount of money if you outsourced various tasks, or even entire departments, or if you weren't willing to outsource, offshored them somewhere cheap and with low standards. Millions, probably tens of millions or more of people lost jobs because of this.

But did they stay lost? Nah. From about 2014 onwards there was a ton of very quiet re-shoring and in-housing. Why? Because whilst the other options were cheaper on the balance sheet, they didn't produce the results that companies wanted. I could go into extreme detail, but I strongly suspect the same will happen here to a large extent. AI will be useful in a lot of ways - particularly for identifying stuff like the spread patterns of malware - but I question how much further it will go until a new generation of AI with fundamentally different models comes in.
 


J.Quondam

CR 1/8
Well that was fast. There is now a paid gateway to ChatGPT that runs $20 a month to get 'priority access' and faster response times.
And so it will be recorded in the Annals of Civilization that the first truly major problem solved by AI was "How to make wealthy people even wealthier."
See, AI works!
 



gorice

Hero
Interesting discussion. A couple of thoughts, which I'll try and tie back to RPGs:

As a writer, what strikes me about AI-generated text is that it's full of cliches and unable to cope with details, except as 'themes'. It's all just sort of... Vague and anodyne. I was reading an example of an AI-generated dungeon previously, and it was so relentlessly generic and boring, it could have been written by WotC. AI DMs are going to be railroading, force-using gaslighters with the creativity of a Forgotten Realms supplement.

So, they'll fit right in? Something doesn't have to be good to be successful. I already worry that a lot of mainstream RPG products are antithetical to play as such. If influential brands start pushing this stuff, we might just have to live with it.
 

On a surface level that's not a bad answer, except that it's wrong: for those not as familiar with the Forgotten Realms as I am, Baldur's Gate is actually on the River Chionthar; I think the name of the road is also incorrect. (Notably, it's actually not a bad description of the journey from Waterdeep to Candlekeep.) The thing is, you can just tell it the correction and it will give a better description.

Well, somehow the AI has to be told stuff. If "programming" is the wrong word, it still has to get the information somehow. Like it would seem some clueless person gave the AI the 5E Sword Coast book to read, right? That is why the whole Sword Coast is blank between Waterdeep and Baldur's Gate.....just endless miles of blank graph paper, right? And, of course, in 5E the "Sword Coast" is the whole planet of Toril.

The part that does stand out to me is "They will need to cross the river by ferry or by a bridge." So, are those the ONLY two ways across the river? If we were playing AI DM, and I said "my character swims across the river", is it just going to go "does not compute!" What if I fly? How about ride across on the back of a giant turtle? "Does not compute!"

I'm sure the AI gets asked the same 100 questions, so it gets good at answering those questions.....well, at least for the people who ask those questions.

Honestly, I'm not sure that's true. Considering the MOUNTAIN of available information about Waterdeep (for example) on the Internet, I could easily see an AI that could run a canon-based open world game in Waterdeep. I mean, good grief, we can do that now practically without AI. And, again, say you start in Waterdeep, fair enough, but if the AI has access to the FR wiki and the ability to generate adventures (which isn't all that far-fetched, we're seeing the beginnings of it now), suddenly an AI-generated open world Forgotten Realms game isn't all that out of reach.
Can the AI read the FR Wiki? Will the AI understand the "everything on the wiki is in the past tense" dumb rule? Will the AI understand the mess of timelines and information on the Wiki? Will the AI understand all the horrible edition changes and the way the Wiki does them?

And sure, the FR Wiki is, sadly, "the best" place to go for online Waterdeep information. Still, it does not really have all the Waterdeep data....not that any of that old 1E/2E/3E data matters for the 5E Realms anyway. Sure, using 2E/3E books there is a TON of information about Waterdeep, but like 99% of that is wasted paper after 5E, centuries of in-world time, and two apocalypses. And it's not like there will EVER be a detailed 5E update for even like 1% of the lore.

I'm NOT trying to knock the FR Wiki (that would be a whole thread), I'm trying to point out that any "one" source may not be 100% reliable.

Does "pattern recognition" equal Imagination? Like my dog can do "pattern recognition" that "when I walk towards the door I'm going outside" and then run over to join me......but my dog won't be writing a novel any time soon (or playing D&D).
 

Clint_L

Hero
Playing around with ChatGPT will answer a lot of questions people have. It takes seconds to sign on with your Google account and start trying it out. I recommend doing so if you are curious about it. I've had a lot of fun with it.

To answer one question above, if you told it you were going to fly using your magic carpet or something, it would have no problem handling that.

Again, it is not a person. It is not sentient. There are significant limitations. But what it can do is extremely impressive, with an enormous range of applications. And it is already a done deal, so we may as well figure out how best to employ it.
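For anyone wondering what "employing it" might actually look like under the hood, here is a minimal sketch of wiring a chat model up as a DM. It assumes the official OpenAI Python client; the system prompt, model name, and helper function are my own illustrative inventions, not anything canonical:

```python
# Illustrative sketch of prompting a chat model to act as a DM.
# Assumes the official OpenAI Python client (pip install openai);
# the prompt text and build_messages() helper are hypothetical examples.

DM_SYSTEM_PROMPT = (
    "You are a Dungeon Master running a Forgotten Realms campaign. "
    "Describe scenes vividly, let players attempt anything (swimming "
    "the river, flying, riding a giant turtle), and improvise fair, "
    "consistent consequences."
)

def build_messages(history, player_action):
    """Assemble the message list a chat-completion API expects:
    one system message, then prior turns, then the new player action."""
    messages = [{"role": "system", "content": DM_SYSTEM_PROMPT}]
    messages.extend(history)
    messages.append({"role": "user", "content": player_action})
    return messages

# Actually sending the request would look roughly like this
# (needs an API key, so it is not executed in this sketch):
#
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-4o-mini",  # hypothetical choice of model
#       messages=build_messages(history, "I swim across the river."),
#   )
#   print(reply.choices[0].message.content)

if __name__ == "__main__":
    msgs = build_messages([], "I swim across the Chionthar.")
    print(len(msgs), msgs[0]["role"], msgs[-1]["role"])
```

The point of the system message is exactly the flexibility discussed above: because the model conditions on that instruction rather than a fixed decision tree, "I fly" or "I ride a giant turtle" gets improvised around instead of producing a "does not compute."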
 

Clint_L

Hero
Literally billions of dollars have been spent on AIs that can safely pilot a car. With input sources significantly better than human eyesight and the ability to analyze and react in a fraction of the time it would take a person, we still have Teslas running into parked semis.
It turns out that writing an essay is significantly easier for AI to handle than driving a car. Also, the stakes are a lot lower. A poorly written conclusion is unlikely to kill anyone.
 

Hex08

Hero
I don't think I saw it mentioned previously, but CNET used AI (not ChatGPT) to write some articles for their site. The articles "sound" fine and the average reader probably wouldn't have noticed the problems, but the AI was wrong about the information it presented. This emphasizes the problem with current AI: it is good at producing good-sounding work but has not yet developed to the point where it is trustworthy as a source for expert-level information. That doesn't mean it won't get there. Depending on the quality of the DM you are used to working with, ChatGPT probably won't be able to run a campaign, but after messing around with it I can say it is at least as useful a tool for DMs for campaign development as material I've seen published by some RPG companies.
 

Oofta

Legend
It turns out that writing an essay is significantly easier for AI to handle than driving a car. Also, the stakes are a lot lower. A poorly written conclusion is unlikely to kill anyone.

I think a human DM reacting in real time to players can be more difficult than piloting a car. Design a city, dungeon, even an entire campaign? Maybe. It can certainly help as a starting point. I could see turning over certain aspects of the game.

But running a game? Completely replacing a DM with the flexibility a human has? I simply don't see that in the near future. Then again the future is hard to predict, especially when it hasn't happened yet.
 
