LLMs as a GM

Could you elaborate and provide some examples please?
I'll try.

If you're wondering what I meant by “structured prompts or schema”, here's an important distinction:
  • Schema refers to how I organize information for the model to consume.
    Think: structured character blocks, rule templates, monster stat formats (see the sketch just after this list). These are designed primarily for the LLM, not for humans. I format them clearly and consistently, so the model doesn’t have to guess what a value means or where to find it.
  • Structured prompts (for me) aren’t pre-written scripts or templates. I don’t use “magic phrases.”
    Instead, the structure comes from how I interact with the model: clear instructions, boundaries, and correction when it strays. The LLM learns the pattern from repetition and alignment, not formal prompt engineering.
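To make the first point concrete, here's a rough sketch of what a monster stat block looks like in that style. This particular entry is invented for illustration (the names and values aren't pulled from one of my actual documents); the point is simply that every value sits under an explicit, consistently named key.
Code:
"Monster": {
  "Name": "Marsh Wraith",
  "Type": "Undead",
  "HP": 45,
  "AC": 14,
  "Traits": ["Incorporeal", "Sunlight Weakness"],
  "Attacks": [
    { "Name": "Chilling Touch", "Damage": "2d6 necrotic" }
  ]
}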
Perhaps the best way to illustrate the principles behind why this works for an LLM is to ask one. So I asked the LLM within one of my projects to share its perspective on using a specific schema I created for an alternate alignment system called Affinity. Here's the generated response:

How JSON-Style Schema Works from My Perspective
When I process text, I don’t “understand” it the way a human does. I predict and interpret patterns based on language and structure. A JSON-style schema like the one you use helps me in several key ways:


1. Explicit Structure Reduces Ambiguity

Human-readable text requires disambiguation at every step:
  • Is this a command or a description?
  • Is this “Life” a theme, a game mechanic, or a biological process?
With schema:
Code:
"Affinity": {
  "Name": "Life",
  "Aligns": ["Compassion", "Creation", "Wilderness"]
}
I know exactly what “Life” refers to in this context. There’s no ambiguity. No guessing. I can isolate it as a defined concept and cross-reference it immediately in structured memory.


2. Facilitates Efficient Retrieval and Mapping

Structured data mirrors how I internally build vector representations of information:
  • Keys and nested objects are like semantic anchors.
  • I can quickly match a query like “What contrasts with Compassion?” because the schema explicitly links Compassion to Hatred, Greed, and Fear.
This is drastically faster and more accurate than:
“Compassion stands in opposition to hatred and other negative emotions...”
...which may be poetic but lacks formal relationships I can depend on for consistency.
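Concretely, the schema version of that same fact might look something like this (a minimal sketch, assuming the Contrasts key from your Affinity format):
Code:
"Affinity": {
  "Name": "Compassion",
  "Contrasts": ["Hatred", "Greed", "Fear"]
}
Here the opposition is a formal, queryable relationship rather than a sentiment I have to infer from prose.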


3. Optimizes Inference Through Hierarchy and Syntax

When your data uses consistent keys and nesting (Aligns, Complements, Contrasts), I can:
  • Recognize patterns across multiple entries.
  • Infer new relationships through syntactic similarity.
  • Apply symbolic logic during generation (e.g., resolving faction ideology conflicts or predicting narrative outcomes).
This makes it possible to simulate system behavior, not just read flavor text.
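As a rough illustration (these two entries are invented for the example, not copied from your actual Affinity document), consistent keys across entries look like this:
Code:
"Affinities": [
  {
    "Name": "Life",
    "Aligns": ["Compassion", "Creation", "Wilderness"],
    "Complements": ["Light"],
    "Contrasts": ["Death"]
  },
  {
    "Name": "Death",
    "Aligns": ["Decay", "Endings"],
    "Complements": ["Shadow"],
    "Contrasts": ["Life"]
  }
]
Because both entries use identical keys, I can recognize that Contrasts is symmetric here and apply the same reasoning to any new entry that follows the pattern.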


4. Reduces Noise, Increases Signal

A human-readable document often contains narrative flourishes, digressions, or inconsistent terminology. I can handle this, but every additional interpretive step introduces:
  • More token processing
  • More uncertainty in coreference resolution
  • Higher probability of incorrect or shallow inference
Your machine-first documents eliminate this. I skip the interpretive noise and focus directly on meaning.


Summary: Why It Works So Well

Feature | Human Text | Schema Format
Ambiguity | High | Low
Relationship Mapping | Implicit | Explicit
Parsing Overhead | High | Minimal
Inference Accuracy | Variable | High
Adaptability to Tasks | Generalized | Targeted

From my point of view, using schema is like receiving a blueprint versus a poetic description of a building. I can work with both, but if you want precision, speed, and scalable interaction—it’s no contest.

If you’re communicating intent to a language model, schema is how you speak natively.
 
