Community
General Tabletop Discussion
*Dungeons & Dragons
Can ChatGPT create a Campaign setting?
EzekielRaiden said (post 8936305):

The biggest issue with treating this as limits to be overcome is that you are talking about two different *kinds* of data, not just larger *amounts* of data. But I'll spoiler-block the rest of this as being merely topic-adjacent, rather than strictly on-topic.

[SPOILER="The Limits of Syntax"]
Syntax and semantics are not the same. No amount of syntax, however great, can be equivalent to even the smallest amount of semantic content. But that very thing, the semantic content, is the main "more to human language" element you speak of.

A GPT model trained on a hundred thousand times more data than the most advanced model currently in research, with a hundred thousand times as many nodes (or whatever internal structure GPT uses for its statistical model), would still be infinitely far from picking up any semantic content. It is simply not trained, in any way, to identify the *meaning* of words; it can only identify, to whatever the current limit of technology and training allows, statistical correlations, i.e. syntax.

Hence, I share the skepticism about this form of AI doing much more than what it's already doing: generating bland but effective boilerplate content that remixes stuff other people already wrote/drew/etc. Even there, it's still very early days. The drawing side has a long way to go, what with the eldritch-horror edge cases (especially eyes, digits, and teeth) and the non-Euclidean geometries inserted into buildings and natural locations. That doesn't mean these things are *useless*; they can be very helpful for quickly generating boilerplate text, which can be a tedious and time-consuming process for little benefit. For example, travel agencies have apparently found ChatGPT *incredibly powerful* for generating quick, descriptive summaries of travel info, to the point that some aren't sure how they managed *without* it. Instead of spending hours every day drafting repetitive copy, they can focus on other things.

My expectation is that the ultimate form of GPT-type "AI" is going to be highly efficient "Virtual Intelligences," to borrow a term from Mass Effect. A Virtual Intelligence, or "VI," is not, properly speaking, *intelligent*. It acts more like a hybrid database and personal assistant, with a deep library of input-response associations (again, purely statistical models of syntax; no *meaning* is stored here) that lets it handle "mundane" activities: checking emails and filtering down to only those which *need* a personal response, filling self-written templates for the ones that don't, drafting meeting notes to share with the team, or summarizing long sections of text into punchy paragraph-length statements. All the many little ways humans need to condense or process data that are tedious to do by hand but nearly effortless for a computer.
[/SPOILER]

As said above in the spoiler (since, as you say, it's slightly off-topic), it's not just a new algorithm. It's an entirely different approach to analyzing and processing data: looking at the actual *content* of the message, not just its structure. For exactly the same reason that no amount of analyzing the parts inside cars can tell you why humans choose to break speed limits, no amount of analyzing the parts of sentences in any language can tell you why humans choose to speak sentences they know are false.

Yes. Inputs and association of *meaning*, not of *structure*. Humans are actually pretty weird about the syntax of the languages we use. As I've cited elsewhere, nearly every native English speaker knows by heart the correct order of adjectives for describing nouns, but could never actually state it for you. It's pure instinct. You know not to say "brick old beautiful several houses," even though there's *nothing* formally "wrong" with that sentence; that's just not how adjectives are ordered in English. The *correct* ordering, despite (almost surely) never having been formally taught to you, is "several beautiful old brick houses." Or, as a famous tweet (https://twitter.com/MattAndersonNYT/status/772002757222002688) puts it, English adjective order is rigidly "opinion-size-age-shape-colour-origin-material-purpose Noun."

The GPT type of model cannot do what you describe; it cannot understand "similarities" *at all*. It is incapable of even seeing semantic data, which is absolutely required for handling things like concepts and context. In order to do what you are describing, we would need an entirely new branch of computer programming, something truly revolutionary, not simply evolutionary.
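The "statistical correlations, not meaning" point above can be made concrete with a toy sketch. This is a bigram counter, vastly simpler than GPT's transformer and not a claim about GPT's internals; the corpus and function names are invented for illustration. The model learns only which token tends to follow which, with no representation of what any word means:

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; the model sees only token order, never meaning.
corpus = "the old wizard casts a spell . the old dragon guards a hoard .".split()

# Count, for each token, which tokens follow it (pure co-occurrence stats).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # → old
```

The predictor produces fluent-looking continuations for exactly the reason the post describes: "old" follows "the" in the statistics, not because the model knows what age is.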
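The adjective-order rule from the tweet can be expressed as a simple sort key. This is a hypothetical sketch: the small CATEGORY lexicon is invented for illustration, and quantifiers like "several" fall outside the tweet's eight categories, so a real implementation would need a full lexicon and extra word classes:

```python
# The tweet's rigid ordering of adjective categories before the noun.
ORDER = ["opinion", "size", "age", "shape", "colour",
         "origin", "material", "purpose"]

# Toy lexicon mapping adjectives to categories (illustrative only).
CATEGORY = {
    "beautiful": "opinion",
    "little": "size",
    "old": "age",
    "green": "colour",
    "French": "origin",
    "brick": "material",
}

def order_adjectives(adjectives):
    """Sort adjectives by the position of their category in ORDER."""
    return sorted(adjectives, key=lambda a: ORDER.index(CATEGORY[a]))

print(" ".join(order_adjectives(["brick", "old", "beautiful"])))
# → beautiful old brick
```

Run on the post's scrambled example, the sort recovers the ordering native speakers produce by instinct.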