LLMs as a GM
<blockquote data-quote="Ruin Explorer" data-source="post: 9697629" data-attributes="member: 18"><p>I kind of agree, but I don't think it's because of the fundamental functionality so much as <em>what</em> some LLMs can autocomplete. What you're describing isn't <em>fundamentally</em> that different from fancy autocomplete. It's just got more layers to it, more of a recognition of syntax and so on, so maybe "on steroids" is underselling it - it's more like a kind of steam engine that powered some temple doors vs. all the steam boilers on the Titanic and their attached mechanisms - but at the same time they're both totally mindless tools for putting words in order, based on having seen words be in order before.</p><p></p><p>The bigger difference I can see is that many of these LLMs have scraped (legally or illegally) insanely huge amounts of data, and thus can do some fairly epic feats of what is essentially autocompletion - entire essays, small to moderately-sized blocks of code, increasingly same-y and obvious "art", and so on.</p><p></p><p></p><p>More, sure, but not a great deal more when it really comes down to it. They're still just stringing words together - there are just a lot more rules and connections. I guess in practical terms it's more like a hybrid of autocomplete and Google search, but with the added ability to go dreadfully wrong and hallucinate stuff.</p><p></p><p>Though, that said, even some autocomplete could hallucinate!</p><p></p><p>I have a specific example - 12-15 years ago I was working very late in the law library, going through a gigantic Excel document with all our books, resources, and so on in it (I forget why, probably some kind of annual audit). Excel, as you may recall, already had fairly extensive autocomplete back then, based on attempting to recognize patterns. 
So I dragged down, having accidentally selected a bunch of books (probably hundreds at least, maybe more), and what I expected was blank cells, but what I got was very uncanny and unsettling - a list of almost-words, almost-book-names, I guess because that data was sufficiently large that Excel tried to work from it. If I hadn't been kind of freaked out, I would have screenshotted it, because in retrospect it was fascinating.</p><p></p><p></p><p>I disagree completely. LLMs are not capable of <em>any</em> problem solving <em>by themselves</em>, no matter how well-used. And people have tried. Non-LLM generative AI can solve some chemistry/biology problems, but that stuff has been around for literally decades; it's just benefitting slightly from the hype re: LLMs.</p><p></p><p>All real problem-solving relating to LLMs has to be done by <em>humans</em> using the LLMs. Saying that they're "solving problems" is like saying your shoes "take you to work" - on a metaphorical level, as a bit of whimsical semi-poetic language, sure, but on a factual level? No.</p><p></p><p>All they can do is essentially dig up <strong>solutions to problems that other humans already solved</strong>, solutions they absorbed into their vast net and that you happened to be good enough at manipulating them to extract. They fundamentally can't work alone, and we've seen that experiments with them working with each other have... not gone well. If an LLM (specifically) comes up with a novel solution, it won't be skill on the part of the operator, or brilliance on the part of the programming - it'll likely be sheer luck, and probably a hallucination.</p><p></p><p></p><p>I'm skeptical that LLMs can do much more than they're doing now. 
I'm sure they'll be refined somewhat, but so far the majority of the "advancements" with LLMs in the last, what, two years or more have simply been throwing more processor power, energy usage, heat generation, and water consumption at the problem. And 10x the resources doesn't get you a 10x better result; it gets you a 1.125x better result, or even the same kinda-crappy result that you still have to spend minutes checking, except it was delivered in 1 second instead of 5. Is that worth it? It's worth it to some tech exec who gets a $10m bonus because he convinced the absolute rubes at SoftBank to get hoodwinked yet again and hand over billions on billions for "data centres", but to anyone else?</p><p></p><p>So I don't really see any path forward for them beyond sort of "lingering". I can see them continuing to be useful in certain ways, but almost all the forward-looking hype about generative AI/LLMs, especially anything suggesting they're even a step on the path to AGI, seems false to me.</p><p></p><p>There are other forms of AI with a lot more potential, frankly (many of them older than LLMs).</p></blockquote><p></p>
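The post's back-of-the-envelope scaling claim can be made concrete. As a purely illustrative sketch (the 10x/1.125x figures are the post's rhetorical numbers, not benchmark data, and the power-law form is an assumption), if quality scales as compute^alpha, "10x resources gets you 1.125x quality" implies a very small exponent:

```python
import math

# Assumed model (illustration only): quality ∝ compute ** alpha.
# Solve 10 ** alpha == 1.125 for the implied exponent.
alpha = math.log(1.125) / math.log(10)   # ≈ 0.051

# Under that assumed curve, even 1000x more compute buys a modest gain:
gain_1000x = 1000 ** alpha               # ≈ 1.125 ** 3 ≈ 1.42

print(f"implied exponent: {alpha:.3f}")
print(f"quality multiplier at 1000x compute: {gain_1000x:.2f}")
```

On this hypothetical curve, three successive 10x scale-ups compound to only about a 1.42x quality gain, which is the shape of the diminishing-returns argument the post is making.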