LLMs as a GM
Jacob Lewis said:

That aligns with something I've seen too, especially the part about not expecting the LLM to "just know" the rules even if you provide the PDF or source material. It's a common assumption: that having access to the rules means the model can retrieve and apply them like a structured system. But that's not really how it works.

Even with the full rules in context, the model still has to read, interpret, and synthesize meaning from the material every time it responds. It isn't pulling from a stable rule engine; it rebuilds its understanding on the fly, each time, through probabilistic reasoning. That introduces variability.

I tend to think of it like this: memory is the pot that cooks the soup. You can keep adding ingredients (text, documents, clarifications), but at some point the earlier flavors fade. You don't get more coherence just by adding more content. You need to control the temperature, the order, the timing, which is why I focus more on building stable context windows and scaffolds than on just loading up resources.
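Concretely, a scaffold like that might look something like this. It's a rough Python sketch, not anything from a real tool: the message format assumes an OpenAI-style chat API, and the function name and turn budget are made up for illustration. The point is that the rules digest and the rolling summary stay pinned while only the raw turn history is allowed to scroll out of the window.

    # Sketch: keep a fixed scaffold (rules digest, session summary) pinned,
    # and let only the recent turn history fall out of the context window.

    MAX_TURNS = 12  # illustrative budget; tune to your model's context size

    def build_context(rules_digest, session_summary, turns, player_input):
        """Assemble a chat-style message list with a stable scaffold.

        rules_digest:    a short, hand-written summary of the rules that matter
        session_summary: a rolling recap, rewritten as the session progresses
        turns:           list of (role, text) tuples of recent play
        """
        messages = [
            {"role": "system", "content": rules_digest},
            {"role": "system", "content": "Session so far: " + session_summary},
        ]
        # Older turns drop away; the pinned digest and summary never do.
        for role, text in turns[-MAX_TURNS:]:
            messages.append({"role": role, "content": text})
        messages.append({"role": "user", "content": player_input})
        return messages

Rewriting the session summary by hand (or with a separate summarization pass) every few scenes does more for coherence than letting a hundred raw turns pile up.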
[QUOTE="Jacob Lewis, post: 9698325, member: 6667921"] That aligns with something I’ve seen too—especially the part about not expecting the LLM to "just know" the rules, even if you provide the PDF or source material. It’s a common assumption: that having access to the rules means it can retrieve and apply them like a structured system. But that’s not really how it works. Even with the full rules in context, the model still has to read, interpret, and synthesize meaning from the material every time it responds. It’s not pulling from a stable rule engine—it’s rebuilding its understanding on the fly, each time, through probabilistic reasoning. That introduces variability. I tend to think of it like this: memory is the pot that cooks the soup. You can keep adding ingredients (text, documents, clarifications), but at some point, the earlier flavors fade. You don’t get more coherence just by adding more content. You need to control the temperature, the order, the timing—which is why I focus more on building stable context windows and scaffolds than just loading up resources. That said, I’ve also found it much easier to [I]go with the flow[/I] than to force strict expectations. If you focus on the experience rather than the rules, LLMs have room to surprise you—especially when you let them lean into inference and improvisation. Once the boundaries are clear, the freedom inside them can produce some unexpectedly good moments. For example, I asked the model to run [I]Keep on the Borderlands[/I] several times to observe how it would approach the same prompt with different variations. It’s a classic, widely discussed module, and the model already had a strong sense of the tone, structure, and general content—just from exposure to the vast amount of material written about it online. Even without the actual module in context, it was able to generate believable versions of the adventure. The details varied, but the atmosphere, encounter structure, and thematic beats remained consistent enough to feel intentional. It wasn’t exact—but it [I]felt[/I] right. That’s where expectations can get misaligned. We assume that if the LLM has the PDF or module file, it should know exact details—what’s in room 13, how much copper is hidden, the NPC names, etc. But that’s not how LLMs work. They don’t retrieve text from a file like a database. They read, interpret, synthesize, and generate a response based on patterns—not memory. So while giving it a file might help guide its inference, what you’re getting is still a reconstruction, not a citation. That’s why I’ve found it more effective to focus on the [I]experience[/I] rather than expecting perfect recall. If you let the model lean into what it does well—tone, structure, improvisation—it can often surprise you in a good way. [/QUOTE]