D&D 5E AI Dungeon Master



aco175

Legend
@ThomasDelvus welcome to the site. Hope you get help, but be warned that we have several threads on the evils of AI taking over the world and how it is tied to Skynet. It may just be a vocal minority making that point, though.
 




Distracted DM

Supporter
I’ve been working on teaching ChatGPT 4 how to be a decent Dungeon Master. Would love any feedback.


Please forgive me if this isn’t the appropriate place to post this.

TD
Yeah unfortunately you need GPT+ for this, so ... kind of hard to give an opinion unless you're already paying for it.
 

Cergorach

The Laughing One
Recently I had a discussion with some people about the Ubisoft NPC AI. My issue is with the inherent way LLMs work: they predict what a sentence should look like without really understanding what it says. An LLM is designed to give a believable answer with the information it has, or to make up a believable answer. The issue there is that it doesn't know a 'lie' from the 'truth'; it's all the same to it. These hallucinations, as they are called, can be mitigated with things like RAG and very specific datasets, but they've yet to be completely eliminated.
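To sketch what RAG-style mitigation means here: instead of letting the model free-associate, you retrieve passages from a fixed module text and instruct the model to answer only from them. Everything below is made up for illustration (the snippets, the function names, the naive keyword-overlap retrieval); real systems use embedding search, and this is not a ChatGPT feature.

```python
# Toy sketch of the RAG idea: ground answers in a fixed set of module
# snippets instead of letting the model invent facts. Retrieval here is
# naive word overlap; production systems use vector embeddings.

MODULE_SNIPPETS = [
    "The dungeon of DOOM lies beneath the Ashen Hills.",
    "The Crown of Might is hidden in the dungeon's flooded vault.",
]

def retrieve(question: str, snippets: list[str], k: int = 1) -> list[str]:
    """Return the k snippets sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        snippets,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Prepend retrieved module text and restrict the model to it."""
    context = "\n".join(retrieve(question, MODULE_SNIPPETS))
    return f"Answer ONLY from this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Where is the Crown of Might?"))
```

The point of the "answer ONLY from this context" instruction is to shrink the space in which the model can hallucinate, though as noted above it does not eliminate hallucination entirely.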

Example: The High Paladin of Justice and Truth sends you on a quest to recover the Crown of Might in the evil dungeon of DOOM(tm). 3% chance there is no dungeon of DOOM(tm) and another 3% chance that there is no Crown of Might. Lucky you, there is an evil dungeon of DOOM(tm), but after 27 sessions you still haven't found the Crown of Might, you've searched the dungeon three times already. You ask the AI DM: Are you sure there is a Crown of Might? AI DM: My apologies, I was mistaken, there is no Crown of Might, but there is a Lantern of Hope... Hint: There is also no Lantern of Hope...

When this happens, how do you feel? You've wasted 27 sessions. You'll never trust what the AI DM says ever again. You might even feel so wronged that you quit pen-and-paper RPGs entirely...

I don't mind NPCs lying or giving wrong info, but a 1-in-33 chance that any given generation is made up on the spot and false? Especially when neither we nor the AI DM know when it's lying... Yeah, that's not going to work without some human intervention.

Don't get me wrong, I'm a very big proponent of AI/LLMs, but I do realize that it's a tool, not a level 5 self-driving car. It's a great tool for a DM to get creative and get a lot of work done in a short amount of time. But as a fully standalone AI DM, an LLM might not be a good or even acceptable solution: not now, maybe never. When I look at what RAG does, for example (not in ChatGPT 4), it takes more resources and makes the model even more unpredictable, because you get tons of parameters to tune. At what point does the 'solution' become more costly than just hiring a good human DM? When the cheapest level 5 self-driving car costs $2 million, wouldn't it be cheaper just to hire someone to drive you around?

It kinda seems that people want to use AI/LLMs as a hammer, so every problem becomes a nail...

 

mellored

Legend
The issue there is that it doesn't know a 'lie' from 'truth', it's all the same to it.
Not sure how that is an issue for a game about making things up.
My apologies, I was mistaken, there is..
I can assure you, as a fleshy DM, I have said this several times.

As long as the AI can rectify by saying "oh right. There is a pedestal in the middle of the room with a black crown sitting on it" it should be fine.
 

Cergorach

The Laughing One
Not sure how that is an issue for a game about making things up.
It is an issue when you try to create consistency in your setting/story. You create a semi-believable illusion; the moment that illusion repeatedly fails to be believable, or you start assuming everything is an illusion, things go wrong.

I can assure you, as a fleshy DM, I have said this several times.

As long as the AI can rectify by saying "oh right. There is a pedestal in the middle of the room with a black crown sitting on it" it should be fine.
Oh, me too. But then you solve the issue by making it true. The problem is that the LLM doesn't know that and doesn't resolve it that way, especially not ChatGPT. You might be able to do something with storing its own stories and reincorporating those into its future answers, but it might keep building upon lies it doesn't recognize, which will eventually lead to a complete mess.

Keep in mind that ChatGPT hallucinates about 3% of the time. So over 100 responses, you can expect it to hallucinate about 3 times (it could hallucinate zero times, or all 100). How many responses would a whole session take? I suspect hundreds if not thousands; that's a whole lot of mess. Do you have to fix 30 such messes per session? Over a campaign it would be hundreds of issues.
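For the arithmetic behind this: assuming each response independently hallucinates with probability 0.03 (a simplifying assumption; the 3% figure itself is from the post above), the expected count over N responses is 0.03·N, and the chance of at least one hallucination is 1 − 0.97^N. A quick sketch:

```python
# Back-of-envelope hallucination math, assuming each response
# independently hallucinates with probability p = 0.03.
p = 0.03

def expected_hallucinations(n_responses: int) -> float:
    """Expected number of hallucinated responses out of n (= p * n)."""
    return p * n_responses

def prob_at_least_one(n_responses: int) -> float:
    """Chance that at least one of n responses hallucinates: 1 - (1-p)^n."""
    return 1 - (1 - p) ** n_responses

for n in (100, 500, 1000):
    print(n, expected_hallucinations(n), round(prob_at_least_one(n), 4))
```

At 100 responses the expected count is already 3 and at least one hallucination is nearly certain, which is the scaling worry raised above: a session of hundreds of responses means a steady stream of fabrications to catch.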
 

ThomasDelvus

Villager
Sorry all. You do need GPT plus to access it.

As for the creativity of 4.0: after months of testing, I found the best thing is to feed it my module details and let it be creative. It's really good at weaving the elements I've fed it into a coherent storyline. What it's REALLY not good at is following directions. It wants to be creative, so asking it to follow a specific process is difficult. It's like telling your kid to clean the bathroom, then having to keep explaining what that means. But as far as storytelling based on the module I've fed it: really exceptional!
 
