Community
General Tabletop Discussion
*Dungeons & Dragons
Can ChatGPT create a Campaign setting?
<blockquote data-quote="Jer" data-source="post: 8934174" data-attributes="member: 19857"><p>This is an open question in AI research: whether "strong" AI is possible or not. The jury is still out. But LLMs are not that, and may not even be a fruitful path toward getting there. The jury is still out on that too (though I'm on the skeptical side - I think they're likely to end up being a neat toy that can do some things very well but ultimately a dead end on the path toward strong AI).</p><p></p><p></p><p>I mean, yes, if you take away everything that makes us human and reduce us to a state machine that gives the same performance as a GPT algorithm, you'll get the same performance. That's a tautology.</p><p></p><p></p><p>I know what you're getting at: "is there something magical about humans such that an AI can't replicate us?" And the answer to that is "the jury is still out" (see above). But I can tell you we ARE more than machines that collect statistics on the syntax of a language and repeat back convincing-sounding text by randomly perturbing our way through those statistics. If that were all we were, entire branches of philosophy would never have been invented, for starters. A large language model cannot analyze things; it can only generate text based on the distribution of words it has discovered in the data it's trained on. (That's also why they can't really be stopped from lying. "The sun is the source of its own light" and "The moon is the source of its own light" are both quite reasonable sentences for it to construct, and the first is only slightly more probable than the second, absent any ability to do more than assemble words into a plausible, syntactically correct sentence. Depending on the training set, they may be equally likely. Or the second may even be more likely.)</p><p></p><p></p><p>But this is part of the basic idea of a large language model: it's a model built by collecting statistics across large datasets of text. So if you want true novelty, you need to be thinking about a different algorithm. This technology is not going to get better at novelty; it will only get better at generating realistic-sounding text and keeping a consistent "train of thought" going as resources increase and it can keep a wider and wider context of the previous discussion. What you'd want is a new breakthrough - a new algorithm that approaches the problem in a different way.</p><p></p><p>Now, the LLM could be used as a front end to an algorithm that generates novel ideas - a way to express the ideas that algorithm comes up with. That was the original purpose of LLMs at one point: to be used in tandem with another algorithm to make the generated text read more like actual natural language. Using them by themselves is an artifact of researchers being somewhat surprised at how much of the meaning of what we say can be gleaned from the syntax the model has learned. (I'm actually not sure how well this research generalizes to other languages right now - I haven't seen a lot of publications about non-English LLM algorithms. I should go looking, I suppose.)</p></blockquote><p></p>
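The post's sun/moon example can be illustrated with a toy bigram model. This is only a sketch: real LLMs are neural networks over subword tokens, not raw bigram count tables, and the corpus here is invented for the demonstration. But it shows concretely how a model that only tracks word co-occurrence statistics assigns probability to a false sentence with no regard for its truth:

```python
from collections import defaultdict

# Invented toy corpus: the model sees only word co-occurrence
# statistics, never the truth or falsity of any statement.
corpus = (
    "the sun is the source of its own light . "
    "the moon is the source of its own light . "
    "the sun is bright . the moon is bright ."
).split()

# Build bigram counts, estimating P(next word | current word) by frequency.
bigrams = defaultdict(lambda: defaultdict(int))
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def sentence_prob(words):
    """Probability of a word sequence as a product of bigram probabilities."""
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        counts = bigrams[w1]
        total = sum(counts.values())
        p *= counts[w2] / total if total else 0.0
    return p

true_sentence = "the sun is the source of its own light".split()
false_sentence = "the moon is the source of its own light".split()

# Under this corpus the true and false sentences come out equally probable:
# the statistics alone cannot distinguish them.
print(sentence_prob(true_sentence), sentence_prob(false_sentence))
```

Because "the sun" and "the moon" occur equally often in the corpus, the model assigns both sentences the same probability; a corpus with more moon-talk would make the false one *more* likely, which is the post's point about why such models "can't be stopped from lying."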