Reynard (aka Ian Eller)
> In case anyone missed it, the author of the OP offers just such a service, or aspires to. See the "Source" link at the very bottom of that post.

What, you have never had a fruitful discussion inspired by a commercial?
> This reads like it was written by ChatGPT. Will forum posts be replaced with AI?

They already have been.
> They already have been.

I have been training an open source LLM exclusively on the work of @Snarf Zagyg.
I suppose AI could run a prewritten module if it knew all of the PCs' capabilities. Can it improvise when a player tries something off the wall?
> Yes. The problem is that current-generation AI has no memory, and the way it fakes memory is highly unreliable. So the problem with playing a game of any sort with a large language model, whether it's chess or an RPG, is that very quickly the AI loses track of the game state, or even that it is a game, or what its role in the game is.

Depends on how you define LLM memory. You either train on it, use RAG, or use the existing context window.
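A minimal sketch of the "existing context window" approach: keep the authoritative game state in ordinary program state and re-inject a serialized copy into every prompt, so the model never has to remember anything itself. The `GameState` fields and `build_prompt` helper here are hypothetical illustrations, not from any specific framework:

```python
import json
from dataclasses import dataclass, field

@dataclass
class GameState:
    """Authoritative game state, tracked outside the LLM."""
    location: str = "tavern"
    hp: dict = field(default_factory=lambda: {"Alia": 12, "Borf": 9})
    inventory: list = field(default_factory=lambda: ["rope", "torch"])
    log: list = field(default_factory=list)  # running event summaries

def build_prompt(state: GameState, player_input: str) -> str:
    """Re-inject the full state on every turn; the model stays stateless."""
    return (
        "You are the DM. Current game state (authoritative):\n"
        + json.dumps(vars(state), indent=2)
        + f"\n\nPlayer: {player_input}\nDM:"
    )

state = GameState()
state.log.append("The party rescued the innkeeper's cat.")
prompt = build_prompt(state, "I try to bribe the guard with my torch.")
print(prompt)
```

Because the program, not the model, owns the state, the model cannot "forget" hit points or inventory; at worst it misreads the state it was just shown.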
Most of the mainstream LLMs offer context windows around the 128k-token size, which should fit about a novel. One of Google's LLMs offers a million tokens of context (~7.5 novels), and there's one odd duck that offers a 100-million-token context window (~750 novels).
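As a rough sanity check on those figures, assuming ~100,000 words per novel and the common rule of thumb of ~0.75 words per token (real tokenizers vary):

```python
# Rough estimate of how many novels fit in an LLM context window.
# Assumptions: ~100,000 words per novel, ~0.75 words per token.

WORDS_PER_NOVEL = 100_000
WORDS_PER_TOKEN = 0.75

def novels_per_window(context_tokens: int) -> float:
    """How many average-length novels fit in a context window."""
    return context_tokens * WORDS_PER_TOKEN / WORDS_PER_NOVEL

for tokens in (128_000, 1_000_000, 100_000_000):
    print(f"{tokens:>11,} tokens = {novels_per_window(tokens):,.2f} novels")
```

That gives roughly 1 novel for 128k tokens, 7.5 for a million, and 750 for a hundred million, matching the estimates above.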
You could also summarize the interactions, something certain LLMs are very proficient at, making that "novel" a lot shorter.
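The summarize-to-compress idea can be sketched as a rolling loop: once the transcript grows past a budget, fold the oldest turns into a short summary. The `summarize` function below is a stub standing in for a real LLM summarization call, and the word budget is deliberately tiny for demonstration:

```python
# Rolling summarization sketch: when the transcript exceeds a word
# budget, collapse the oldest entries into one short summary entry.

WORD_BUDGET = 50  # tiny budget, for demonstration only

def summarize(turns: list[str]) -> str:
    # Stub: a real system would ask an LLM for a faithful summary.
    return f"[summary of {len(turns)} earlier turns]"

def compact(transcript: list[str]) -> list[str]:
    while sum(len(t.split()) for t in transcript) > WORD_BUDGET and len(transcript) > 2:
        # Fold the two oldest entries into a single summary entry.
        transcript = [summarize(transcript[:2])] + transcript[2:]
    return transcript

turns = [f"Turn {i}: the party does something elaborate and wordy indeed"
         for i in range(10)]
compacted = compact(turns)
print(len(compacted), compacted[0])
```

The trade-off, as noted below, is that the summary is only as reliable as the model producing it.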
> The problem in my experience is in how it scans back through the context to find the relevant information.

"Not fast enough" shouldn't be an issue. Hallucinations are a problem, always! There are some methods to limit them, but they tend to be complex and compute-intensive. As relative compute cost goes down (faster hardware running lighter LLMs), the people building LLM systems should be able to tackle that.
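A toy illustration of why "scanning back" for relevant information is fragile: rank past log entries by word overlap with the query (a crude stand-in for embedding-based retrieval). A paraphrased but relevant entry can rank below a superficially similar one. All names and log entries here are invented for the example:

```python
import string

def words(s: str) -> set[str]:
    """Lowercased words with leading/trailing punctuation stripped."""
    return {w.strip(string.punctuation) for w in s.lower().split()}

def overlap_score(query: str, entry: str) -> int:
    """Naive relevance: count of shared words."""
    return len(words(query) & words(entry))

log = [
    "The party accepted the duke's quest to find the silver chalice.",
    "Borf insulted the blacksmith and was thrown out of the forge.",
    "Alia hid the stolen goblet inside her bedroll.",  # relevant, but paraphrased
]

query = "where is the silver chalice now"
ranked = sorted(log, key=lambda e: overlap_score(query, e), reverse=True)
print(ranked[0])  # the quest entry wins on overlap; the bedroll entry ranks low
```

Real systems use semantic embeddings rather than word overlap, but the failure mode is the same in kind: retrieval can surface the wrong passage, and the model then answers confidently from it.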
> One of AI's biggest limitations I've noticed is that its attempts to summarize frequently invert the meaning.

Are you using the right LLM for the job?
> And as usual, we're still probably "25 years" away from functional AI.

What kind of AI are you talking about? That robot from the Jetsons? Or just another program/tool/service that does what it does reasonably well? The latter is already happening, as long as you keep your expectations realistic and don't take your experiences with a few LLMs and project them onto all LLM tools and services. "Perfect" AI is probably not happening in 25 years either. What counts as functional is somewhere in the middle, and that depends on what is good enough for you.
> AI DMs may replace people or human DMs in the future to a limited degree, but I think you'll always have some pushback against AI being the DM by some people and some groups.

That kind of depends on the expectations of the group. Certain people playing pen-and-paper RPGs back in the day looked at cRPGs the same way: never going to happen, never going to replace <insert xyz>. But we've been playing cRPGs for a long time! I played D&D (Champions of Krynn) 35 years ago on my Amiga 500, and even earlier on my C64 (Pool of Radiance)... Later, certain people of a newer generation thought the same about the success of MMORPGs...
