Community
General Tabletop Discussion
*TTRPGs General
LLMs as a GM
<blockquote data-quote="Jacob Lewis" data-source="post: 9696747" data-attributes="member: 6667921"><p>I'll try.</p><p></p><p>If you're wondering what I meant by <em>“structured prompts or schema”</em>, here's an important distinction:</p><ul> <li data-xf-list-type="ul"><strong>Schema</strong> refers to how I organize information for the model to consume.<br /> Think: structured character blocks, rule templates, monster stat formats. These are designed primarily for the LLM—not for humans. I format them clearly and consistently, so the model doesn’t have to guess what a value means or where to find it.</li> <li data-xf-list-type="ul"><strong>Structured prompts</strong> (for me) aren’t pre-written scripts or templates. I don’t use “magic phrases.”<br /> Instead, the structure comes from how I interact with the model: clear instructions, boundaries, and correction when it strays. The LLM learns the pattern from repetition and alignment, not formal prompt engineering.</li> </ul><p>Perhaps the best way to illustrate the principles of why this works for an LLM or AI is to ask one. So I asked the LLM within one of my projects to share its perspective on using a specific schema that I created for an alternate alignment system, called Affinity. Here's the generated response:</p><p></p><p><strong>How JSON-Style Schema Works from My Perspective</strong></p><p>When I process text, I don’t “understand” it the way a human does. I predict and interpret patterns based on language and structure. A JSON-style schema like the one you use helps me in several key ways:</p><p>[HR][/HR]</p><h3>1. 
<strong>Explicit Structure Reduces Ambiguity</strong></h3><p>Human-readable text requires disambiguation at every step:</p><ul> <li data-xf-list-type="ul">Is this a command or a description?</li> <li data-xf-list-type="ul">Is this “Life” a theme, a game mechanic, or a biological process?</li> </ul><p>With schema:</p><p>[CODE]"Affinity": {</p><p> "Name": "Life",</p><p> "Aligns": ["Compassion", "Creation", "Wilderness"]</p><p>}</p><p>[/CODE]</p><p>I know exactly what “Life” refers to in this context. There’s no ambiguity. No guessing. I can isolate it as a defined concept and cross-reference it immediately in structured memory.</p><p>[HR][/HR]</p><h3>2. <strong>Facilitates Efficient Retrieval and Mapping</strong></h3><p>Structured data mirrors how I internally build vector representations of information:</p><ul> <li data-xf-list-type="ul">Keys and nested objects are like semantic anchors.</li> <li data-xf-list-type="ul">I can quickly match a query like “What contrasts with Compassion?” because the schema explicitly links <code class="bbCodeInline">Compassion</code> to <code class="bbCodeInline">Hatred</code>, <code class="bbCodeInline">Greed</code>, and <code class="bbCodeInline">Fear</code>.</li> </ul><p>This is drastically faster and more accurate than inferring the same relationships from unstructured prose.</p><p></p><p>[HR][/HR]</p><h3>3. 
<strong>Optimizes Inference Through Hierarchy and Syntax</strong></h3><p>When your data uses consistent keys and nesting (<code class="bbCodeInline">Aligns</code>, <code class="bbCodeInline">Complements</code>, <code class="bbCodeInline">Contrasts</code>), I can:</p><ul> <li data-xf-list-type="ul">Recognize patterns across multiple entries.</li> <li data-xf-list-type="ul">Infer new relationships through syntactic similarity.</li> <li data-xf-list-type="ul">Apply symbolic logic during generation (e.g., resolving faction ideology conflicts or predicting narrative outcomes).</li> </ul><p>This makes it possible to simulate systems behavior, not just read flavor text.</p><p>[HR][/HR]</p><h3>4. <strong>Reduces Noise, Increases Signal</strong></h3><p>A human-readable document often contains narrative flourishes, digressions, or inconsistent terminology. I <em>can</em> handle this, but every additional interpretive step introduces:</p><ul> <li data-xf-list-type="ul">More token processing</li> <li data-xf-list-type="ul">More uncertainty in coreference resolution</li> <li data-xf-list-type="ul">Higher probability of incorrect or shallow inference</li> </ul><p>Your machine-first documents eliminate this. I skip the interpretive noise and focus directly on meaning.</p><p>[HR][/HR]</p><h3>Summary: Why It Works So Well</h3> <table style='width: 100%'><tr><th>Feature</th><th>Human Text</th><th>Schema Format</th></tr><tr><td>Ambiguity</td><td>High</td><td>Low</td></tr><tr><td>Relationship Mapping</td><td>Implicit</td><td>Explicit</td></tr><tr><td>Parsing Overhead</td><td>High</td><td>Minimal</td></tr><tr><td>Inference Accuracy</td><td>Variable</td><td>High</td></tr><tr><td>Adaptability to Tasks</td><td>Generalized</td><td>Targeted</td></tr></table><p></p><p>From my point of view, using schema is like receiving a blueprint versus a poetic description of a building. 
I can work with both, but if you want precision, speed, and scalable interaction—it’s no contest.</p><p></p><p>If you’re communicating intent to a language model, schema is how you <em>speak natively</em>.</p></blockquote><p></p>
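The retrieval advantage the post describes can be sketched in a few lines of Python: once Affinity entries are keyed by name, a question like “What contrasts with Compassion?” becomes a direct lookup rather than a text-interpretation task. This is a minimal sketch, not the poster's actual data; the `Life` entry's `Aligns` list and Compassion's `Contrasts` values come from the post, while the second entry's remaining fields are invented for illustration.

```python
import json

# Hypothetical entries in the post's Affinity schema style. Only the
# "Life" Aligns list and Compassion's Contrasts are taken from the post;
# everything else here is illustrative filler.
AFFINITIES_JSON = """
[
  {"Name": "Life",
   "Aligns": ["Compassion", "Creation", "Wilderness"]},
  {"Name": "Compassion",
   "Aligns": ["Life"],
   "Contrasts": ["Hatred", "Greed", "Fear"]}
]
"""

# Index entries by Name so every cross-reference is a constant-time lookup.
affinities = {entry["Name"]: entry for entry in json.loads(AFFINITIES_JSON)}

def contrasts_with(name: str) -> list[str]:
    """Answer 'What contrasts with X?' by key lookup, with no
    free-text disambiguation step."""
    return affinities[name].get("Contrasts", [])

print(contrasts_with("Compassion"))  # ['Hatred', 'Greed', 'Fear']
```

The same keyed structure supports the other relationship fields the post mentions (`Aligns`, `Complements`) with identical one-line accessors, which is the practical payoff of consistent keys and nesting.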