AI isn't all that great when it comes to D&D
<blockquote data-quote="EzekielRaiden" data-source="post: 8863219" data-attributes="member: 6790260"><p>That's what I said, but in layman's terms rather than formal ones. "Syntax" means the <em>form</em>, the rules for manipulating the symbols. E.g., one small part of English syntax is that, for standard sentences, you have "subject verb object" or "SVO" order (sometimes also phrased "noun verb predicate.") A more complex example of English syntax is that there is a nearly-fixed adjective order, which almost everyone knows without realizing it: you would never say "brick old beautiful several houses," because you know the correct sequence is "several beautiful old brick houses." (The only exception to this ordering is when something becomes a compound noun, e.g. "green great dragons" would normally be forbidden, but if "great dragon" has become a compound noun--e.g. there are "lesser dragons" and "Great Dragons"--then "green Great Dragons" becomes acceptable.)</p><p></p><p>GPT and other highly advanced models have an <em>extremely extensive</em> description of the syntax of English sentences, allowing them to draw correlations across multiple paragraphs. The designers built this up from training the neural network on an absolutely stupidly massive text dump of accessible Internet sources. The longer the work becomes, however, the more difficult it becomes to retain these correlations; combinatoric explosion takes over eventually. (Hence the famous "scientists discover unicorns" text generated, IIRC, by GPT-2, which gets ridiculous after about the third paragraph.)</p><p></p><p>The program does not, and <em>cannot</em>, hold information about the <em>meaning</em> of "the North Pole" or "spirit of Christmas" or the like. It just contains parameters which recognize that those two statements have much higher correlation than would be expected of any two random three-word strings would have, and thus fits them into a probabilistic model. The program, in effect, does one and only one thing: predict what the next word should be in a sentence. (It might be "the next few words" or even "the next letter," depending on the exact implementation, but the principle remains the same.) It contains <em>literally nothing</em> other than information related to how likely the next word(s) should be given the words it's already generated and the words it was given as its prompt. (This is why longer, precise prompts are almost always better than shorter, vague prompts, unless you specifically want the program to hare off on its own.)</p><p></p><p>Grammar is the easy part (English <em>spelling</em> is a nightmare, but its <em>grammar</em> is actually pretty simple.) Logic is a little bit harder, but not much harder. What's extremely hard is <em>long-term preservation</em> of that logic. Because the longer you go, the wider the spectrum of information, and the harder it is to keep a hold on where you're supposed to be when you are narrowly limited to "predict the next word." Every GPT has a finite horizon of words--dozens, scores, perhaps a few hundred. Once you get beyond that horizon, things get wild and wooly pretty quickly.</p><p></p><p>Syntax becomes less and less useful as a guide for what to say next as a text grows. Semantics, on the other hand, becomes <em>more</em> useful--the more meaning you understand about something, the better you will be at generating <em>new</em> meaning relevant to it.</p><p></p><p></p><p>If it can process <em>semantic</em> content, it understands <em>meaning</em>. 
Syntax becomes less and less useful as a guide for what to say next as a text grows. Semantics, on the other hand, becomes *more* useful--the more meaning you understand about something, the better you will be at generating *new* meaning relevant to it.

If it can process *semantic* content, it understands *meaning*. A system which can grapple with both syntax *and* semantics--with both the *form* of the statements and what the statements actually *mean*--would be capable of the same spectrum of responses as a human. It would almost certainly have a different distribution of responses (e.g., it might differ strongly from most or even all humans in terms of its *value system*), but it would be effectively capable of all the same sorts of information-processing actions ("thoughts") that humans are.

To be clear, though, I agree with you. I don't think we're going to develop a totally artificial intelligence; I think it will instead hinge on developing a structure which does, in fact, mimic how brains process information. Further, I think current efforts at AI will end up an incredibly fascinating dead end, with useful applications in areas other than "true AI." But one of the reasons I think that is that you *need* semantic-processing capability baked into the core of the system. That's what brain-mimicking AI will acquire, IMO: the ability to manipulate semantic content, not just syntactic content.
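Circling back to the "correlation, not meaning" point above: the association between "the North Pole" and "spirit of Christmas" can be captured purely by counting how often the phrases appear together, with nothing anywhere representing what either phrase refers to. The toy corpus and the sentence-level pointwise mutual information score below are invented for illustration; real models learn such associations implicitly in their parameters rather than by explicit counting, but the kind of information being exploited is the same.

```python
import math

# Toy corpus: the only thing the score below "knows" is which phrases show up
# in the same sentences. Nothing here encodes what a pole or Christmas *is*.
CORPUS = [
    "santa lives at the north pole",
    "the spirit of christmas fills the north pole workshop",
    "reindeer graze near the north pole in december",
    "the stock market fell sharply in march",
    "the committee approved the budget in march",
]


def cooccurrence_score(phrase_a, phrase_b, corpus):
    """Sentence-level pointwise mutual information: how much more often two
    phrases appear together than chance alone would predict."""
    n = len(corpus)
    in_a = sum(phrase_a in s for s in corpus)
    in_b = sum(phrase_b in s for s in corpus)
    both = sum(phrase_a in s and phrase_b in s for s in corpus)
    if 0 in (in_a, in_b, both):
        return float("-inf")  # never observed together
    return math.log2((both / n) / ((in_a / n) * (in_b / n)))


print(cooccurrence_score("north pole", "spirit of christmas", CORPUS))  # positive: associated
print(cooccurrence_score("north pole", "stock market", CORPUS))         # -inf: never seen together
```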
[QUOTE="EzekielRaiden, post: 8863219, member: 6790260"] That's what I said, but in layman's terms rather than formal ones. "Syntax" means the [I]form[/I], the rules for manipulating the symbols. E.g., one small part of English syntax is that, for standard sentences, you have "subject verb object" or "SVO" order (sometimes also phrased "noun verb predicate.") A more complex example of English syntax is that there is a nearly-fixed adjective order, which almost everyone knows without realizing it: you would never say "brick old beautiful several houses," because you know the correct sequence is "several beautiful old brick houses." (The only exception to this ordering is when something becomes a compound noun, e.g. "green great dragons" would normally be forbidden, but if "great dragon" has become a compound noun--e.g. there are "lesser dragons" and "Great Dragons"--then "green Great Dragons" becomes acceptable.) GPT and other highly advanced models have an [I]extremely extensive[/I] description of the syntax of English sentences, allowing them to draw correlations across multiple paragraphs. The designers built this up from training the neural network on an absolutely stupidly massive text dump of accessible Internet sources. The longer the work becomes, however, the more difficult it becomes to retain these correlations; combinatoric explosion takes over eventually. (Hence the famous "scientists discover unicorns" text generated, IIRC, by GPT-2, which gets ridiculous after about the third paragraph.) The program does not, and [I]cannot[/I], hold information about the [I]meaning[/I] of "the North Pole" or "spirit of Christmas" or the like. It just contains parameters which recognize that those two statements have much higher correlation than would be expected of any two random three-word strings would have, and thus fits them into a probabilistic model. The program, in effect, does one and only one thing: predict what the next word should be in a sentence. (It might be "the next few words" or even "the next letter," depending on the exact implementation, but the principle remains the same.) It contains [I]literally nothing[/I] other than information related to how likely the next word(s) should be given the words it's already generated and the words it was given as its prompt. (This is why longer, precise prompts are almost always better than shorter, vague prompts, unless you specifically want the program to hare off on its own.) Grammar is the easy part (English [I]spelling[/I] is a nightmare, but its [I]grammar[/I] is actually pretty simple.) Logic is a little bit harder, but not much harder. What's extremely hard is [I]long-term preservation[/I] of that logic. Because the longer you go, the wider the spectrum of information, and the harder it is to keep a hold on where you're supposed to be when you are narrowly limited to "predict the next word." Every GPT has a finite horizon of words--dozens, scores, perhaps a few hundred. Once you get beyond that horizon, things get wild and wooly pretty quickly. Syntax becomes less and less useful as a guide for what to say next as a text grows. Semantics, on the other hand, becomes [I]more[/I] useful--the more meaning you understand about something, the better you will be at generating [I]new[/I] meaning relevant to it. If it can process [I]semantic[/I] content, it understands [I]meaning[/I]. 
A system which can grapple with both syntax [I]and[/I] semantics--with both the [I]form[/I] of the statements and what the statements actually [I]mean[/I]--would be capable of the same spectrum of responses as a human. It would almost certainly have a different distribution of responses (e.g., it might differ strongly from most or even all humans in terms of its [I]values-system[/I]), but it would be effectively capable of all the same sorts of information-processing actions ("thoughts") that humans are. To be clear, though, I agree with you. I don't think we're going to be able to develop a totally artificial intelligence, and that it will instead hinge on developing a structure which does, in fact, mimic how brains process information. Further, that current efforts at AI will end up an incredibly fascinating dead end, with useful applications in other areas besides "true AI." But one of the reasons I think that is that I think you [I]need[/I] to have semantic-processing capability baked into the core of the system. That's what brain-mimicking AI will acquire, IMO: the ability to manipulate semantic content, not just syntactic content. [/QUOTE]