AI GMs

The next experiment I'm thinking of doing is writing up a 5-10 page description of a genre/setting I want to play with my kids (sub-light space opera; learn about our solar system while flying around, trading, and getting into trouble), but with some twists:
1) Simultaneous action resolution: we each type what we're doing and submit the moves all at once; the AI resolves and narrates the results.
2) Not based on any specific RPG: the seed document will describe some parameters, with instructions not only to make up whatever rules are appropriate, but also to hide those mechanics from us. So from our point of view the game will be entirely narrative, with invisible mechanics (a rough loop for this setup is sketched below).
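For the curious, here's a minimal Python sketch of how that play loop might look. Everything in it is an assumption, not a working implementation: `call_llm` is a hypothetical stand-in for whatever chat-model API you use, and `space_opera_seed.txt` is a made-up filename for the seed document.

```python
# Minimal sketch of the simultaneous-resolution loop described above.
# The seed document goes in as a system prompt, with standing instructions
# to invent mechanics and keep them hidden from the players.

SEED_DOC = open("space_opera_seed.txt").read()  # hypothetical 5-10 page setting writeup

SYSTEM_PROMPT = (
    SEED_DOC
    + "\nInvent whatever mechanics fit the fiction, resolve all submitted "
    "actions at once, and never reveal dice, stats, or rules to the players."
)

def call_llm(system: str, transcript: list[str]) -> str:
    """Hypothetical stand-in; replace with your LLM provider's chat API."""
    raise NotImplementedError

def play_round(players: list[str], transcript: list[str]) -> str:
    # Collect every player's move before the AI sees any of them,
    # so resolution is genuinely simultaneous.
    moves = {name: input(f"{name}, your action: ") for name in players}
    turn = "\n".join(f"{name}: {action}" for name, action in moves.items())
    transcript.append(turn)
    narration = call_llm(SYSTEM_PROMPT, transcript)
    transcript.append(narration)
    return narration
```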
 



If we're clarifying, LLMs don't know things. An LLM knows nothing. They just guess based on what an algorithm decides is the most likely next word, not the most accurate one.
Human knowledge is also just pattern recognition and appeals to authority or popularity. Human decisions are based not on certainty but on what is most likely to occur next, given previously found patterns, popular consensus, or authority.

That's why they hallucinate nonsense so much. They guess at everything because they don't have the capacity or functionality to know anything. That's just not how they work.
The Mandela effect, pareidolia, etc.: in many cases we just guess and ‘hallucinate’ based on any semblance of a pattern as well, regardless of how accurate that guess is.
 

Oh Christ.

Ok, if we're getting all pedantic about anthropomorphisms, the LLM doesn't "guess," either. It selects ("samples") from among weighted probabilities using pseudo-random number generation.
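For anyone who wants to see what that actually means, here's a toy illustration in Python. The tokens and scores are made up; a real model does this over tens of thousands of candidate tokens at every step.

```python
# Toy illustration of sampling: convert the model's scores (logits) for
# each candidate token into probabilities, then draw one pseudo-randomly.
import math
import random

logits = {"sword": 2.1, "wand": 1.3, "lute": 0.2}  # made-up scores

# Softmax: exponentiate and normalize so the weights sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# random.choices picks proportionally to the weights via a PRNG,
# so the most likely token usually (but not always) wins.
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", token)
```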

And while it's correct that the LLM doesn't know anything about RPGs in a sapient sense, it has been trained on RPG content, and that information is factored into its responses.

When I ask for specific information about Shadowdark, it is able to retrieve that information accurately. I think "know" is a pretty useful shorthand for that trick, but if you have something concise and clear and non-anthropomorphic, my circuits are practically humming with anticipation to have that vocabulary input into my memory banks.
I agree with most of this as well, but I think people in general have an overly optimistic view of what human knowing is.

Take something as basic as knowing the world is round. I've never done the experiments myself, and even if I had, I could self-deceive. I've seen pictures, but photos can be faked; I've read expert explanations, yet experts can lie or be mistaken. But I believe it's true because internally, I think, we all do something very similar to assigning probabilistic weights to every source: experts, consensus, our own experience. Once something passes a certain threshold, we consider it known.
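To make that concrete (my framing, not a claim about how brains actually work), here's a toy Python sketch: each evidence source carries a made-up reliability weight, sources are combined, and a claim counts as "known" once combined support clears a threshold.

```python
# Toy belief model: each source has an invented reliability weight.
SOURCE_WEIGHTS = {"own_experiment": 0.9, "expert": 0.7, "photo": 0.5, "consensus": 0.6}

def belief(evidence: list[str], threshold: float = 0.95) -> bool:
    """Treat sources as independent: the claim fails only if every source misleads."""
    p_all_wrong = 1.0
    for src in evidence:
        p_all_wrong *= 1.0 - SOURCE_WEIGHTS[src]
    return 1.0 - p_all_wrong >= threshold

print(belief(["photo"]))                               # False: 0.5 support < 0.95
print(belief(["photo", "expert", "own_experiment"]))   # True: 1 - 0.5*0.3*0.1 = 0.985
```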

Right now LLMs seem more primitive than humans in one regard: they appear to weight every input the same. But that limitation doesn't seem like it must always hold, and if it's surpassed, the difference to me seems to shrink and shrink and shrink.
 

I have fairly high hopes that AI GMs will eventually form the basis for more user-driven video game stories. At that point, pen and paper is probably dead, though ;)
 
