LLMs as a GM
<blockquote data-quote="Gorgon Zee" data-source="post: 9701033" data-attributes="member: 75787"><p>So .. isn't that exactly what I do when I am a GM? I respond using words in orders that seem likely to be appropriate to the statements of my players?</p><p></p><p>I'm not going to respond to "they can't think" since there is no solid definition of what thinking is. But they certainly can come up with solutions. They do so by manipulating patterns of words so as to create a chain of arguments that is consistent with the patterns it has been trained on. This is different in approach from the way that previous AI attempted the problem. The old way is:</p><ol> <li data-xf-list-type="ol">Go from a text description to one in a formal language</li> <li data-xf-list-type="ol">Infer new statements in the formal language based on existing statements</li> <li data-xf-list-type="ol">Translate those new statements into a usable text form</li> </ol><p>LLMs don't have that formal language; the vectors they use are really just words in a more computer-friendly format. So for many people, it doesn't feel like real thinking. We (humans) don't depend on text to reason, but reason based on concepts which are linked to words, rather than directly on the words themselves.</p><p></p><p>But modern chain-of-reasoning LLMs specifically do come up with solutions by planning what they need to do, creating sub-steps and evaluating how well those steps worked, generating new steps based on the previous step's results and deciding when they have enough evidence to present a solution. These are all ways we reason. The big difference is that what the LLM manipulates is bundles of words that can be though of as defining concepts, rather than manipulating concepts directly.</p><p></p><p>The per-query cost in terms of power and dollars is decreasing pretty rapidly. The new models often require multiple queries to do their reasoning, so the total cost is higher. However, yet newer methods are showing that smaller fine-tuned models can do as well as expensive models.</p><p></p><p>I am pretty sure, right now, that I could run a fine-tuned LLM on my $600 mac that would do as good a job as general purpose gpt-4. I haven't done that because it would cost me $5000 and the results could not be shared as it incorporates IP that I do not have rights to. But WotC easily could.</p><p></p><p>Now, to be clear, I'm not bullish on it happening. But it's not because they cannot reason well (they can) or will get more expensive (they will get cheaper), it's because they are not good at surprising you. They are designed to present a sort of average experience. There have been interesting papers showing that for a single person, AIs tend to be more creative than that person, but for a group they tend to be less creative than the group, because that creativity is very similar. </p><p></p><p>I have used LLMs to suggest scenes and create descriptions, and while initially they look good, after a few examples they start to feel very same-y. In general, I think this will be a challenge n any effort that attempts to use AIs to get creative results. It will start looking cool, but after a while will start feeling too similar to previous results. </p><p></p><p>I use GenAI in Photoshop. Not to be imaginative, but to fill in missing content (it's really good at fixing edges when adding a person to a group shot ...) and that's what I expect to use it for it RPGs in the future.</p></blockquote><p></p>
[QUOTE="Gorgon Zee, post: 9701033, member: 75787"] So .. isn't that exactly what I do when I am a GM? I respond using words in orders that seem likely to be appropriate to the statements of my players? I'm not going to respond to "they can't think" since there is no solid definition of what thinking is. But they certainly can come up with solutions. They do so by manipulating patterns of words so as to create a chain of arguments that is consistent with the patterns it has been trained on. This is different in approach from the way that previous AI attempted the problem. The old way is: [LIST=1] [*]Go from a text description to one in a formal language [*]Infer new statements in the formal language based on existing statements [*]Translate those new statements into a usable text form [/LIST] LLMs don't have that formal language; the vectors they use are really just words in a more computer-friendly format. So for many people, it doesn't feel like real thinking. We (humans) don't depend on text to reason, but reason based on concepts which are linked to words, rather than directly on the words themselves. But modern chain-of-reasoning LLMs specifically do come up with solutions by planning what they need to do, creating sub-steps and evaluating how well those steps worked, generating new steps based on the previous step's results and deciding when they have enough evidence to present a solution. These are all ways we reason. The big difference is that what the LLM manipulates is bundles of words that can be though of as defining concepts, rather than manipulating concepts directly. The per-query cost in terms of power and dollars is decreasing pretty rapidly. The new models often require multiple queries to do their reasoning, so the total cost is higher. However, yet newer methods are showing that smaller fine-tuned models can do as well as expensive models. I am pretty sure, right now, that I could run a fine-tuned LLM on my $600 mac that would do as good a job as general purpose gpt-4. I haven't done that because it would cost me $5000 and the results could not be shared as it incorporates IP that I do not have rights to. But WotC easily could. Now, to be clear, I'm not bullish on it happening. But it's not because they cannot reason well (they can) or will get more expensive (they will get cheaper), it's because they are not good at surprising you. They are designed to present a sort of average experience. There have been interesting papers showing that for a single person, AIs tend to be more creative than that person, but for a group they tend to be less creative than the group, because that creativity is very similar. I have used LLMs to suggest scenes and create descriptions, and while initially they look good, after a few examples they start to feel very same-y. In general, I think this will be a challenge n any effort that attempts to use AIs to get creative results. It will start looking cool, but after a while will start feeling too similar to previous results. I use GenAI in Photoshop. Not to be imaginative, but to fill in missing content (it's really good at fixing edges when adding a person to a group shot ...) and that's what I expect to use it for it RPGs in the future. [/QUOTE]