How Generative AIs work
Gorgon Zee (post 9293033) said:

Nice article -- thanks for the reference. As is often the case, a lot depends on what we mean by "understanding". The core technology absolutely is simply predicting the next word from a sequence of words. However, this is done with a vast number of parameters, and the question of understanding, then, is whether those parameters indicate understanding.

Now the article is generally good and well worth reading, but it has a bit of a straw man in it:

    The team is confident that it proves their point: The model can generate text that it couldn't possibly have seen in the training data, displaying skills that add up to what some would argue is understanding.

It is very clear that the model does not simply generate text it saw in its training data -- it generates combinations of fragments of text that it saw in its training data. If I ask it to write a poem about hobbits, St Bridgit and calculus, it will absolutely "generate text that it couldn't possibly have seen in the training data".

The article builds a separate model of text that ties "skill nodes" to "word nodes", the idea being that you can then correlate word usage with skill usage, and skill usage is what defines understanding. So LLMs can be said to have understanding if their word output shows that they are using skills in sensible ways. Apologies to the authors for this huge simplification of their argument.

I have some issues with this:

- The researchers are really proving not that LLMs understand anything, but that they behave the same way as something that understands. Their quantification is helpful for science, but honestly, if you read some AI-generated text, it's pretty clear that they behave the same way we do -- and we (hopefully) are understanding engines, so this isn't really anything new.
- Their statement that understanding is equivalent to skill usage is one definition of understanding, but I'm not sure I'm 100% on board with that as sufficient.
- They state: "What [the team] proves theoretically, and also confirms empirically, is that there is compositional generalization, meaning [LLMs] are able to put building blocks together that have never been put together. This, to me, is the essence of creativity." Is it, though? Is it really creative to randomly put together things that have never been put together before? I feel there needs to be a bit more to it than that.

Overall, a great paper, and the use of bipartite knowledge graphs is a very clever idea that will hopefully allow us to quantify the skill level of an LLM. I look forward to seeing this used in the future. However, I still feel that the LLM is a stochastic parrot -- it's just that the stochastic process is so complex that the results simulate understanding without there being actual understanding.

I also realize that there is a strong and valid philosophical position that if the results look like understanding, then it is understanding (the "if it looks like a duck" argument). Totally valid, and if that's your feeling, I cannot refute it. For me, though, it's not.
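The post's point that the core technology is "simply predicting the next word from a sequence of words" is easy to make concrete. Below is a minimal, purely illustrative sketch: a toy bigram model stands in for an LLM's billions of parameters, but the generation loop -- sample a next word, append it, repeat -- has the same shape, and the sampling step is where the "stochastic" in "stochastic parrot" lives. The corpus, names, and temperature handling here are my own invention, not anything from the article.

```python
# Toy next-word predictor: a bigram model as a stand-in for an LLM.
# Hypothetical corpus; real models condition on far longer contexts.
import random
from collections import Counter, defaultdict

corpus = ("the wizard casts a spell the wizard reads a scroll "
          "the rogue reads a map").split()

# "Training": count how often each word follows each context word.
# Wrap around so every word has at least one follower.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    counts[prev][nxt] += 1

def next_word(prev: str, temperature: float = 1.0) -> str:
    """Sample the next word from the follower distribution."""
    followers = counts[prev]
    words = list(followers)
    # c ** (1/T): low temperature sharpens the distribution toward the
    # most likely word, high temperature flattens it toward random.
    weights = [followers[w] ** (1.0 / temperature) for w in words]
    return random.choices(words, weights=weights)[0]

# Generation is just repeated sampling from the learned distribution.
word = "the"
out = [word]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Run it a few times and you get different grammatical-looking strings that never appear verbatim in the corpus, which is the small-scale version of "generating text it couldn't possibly have seen in the training data".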
[QUOTE="Gorgon Zee, post: 9293033, member: 75787"] Nice article -- thanks for the reference. As often is the case, a lot depends on what we mean by "understanding". The core technology absolutely is simply predicting the next word from a. sequence of words. However, the way in which this is done is with a vast number of parameters. The question of understanding, then, is whether those parameters indicate understanding. Now the article is generally good and well with reading, but it has a bit of a straw man in it: [INDENT][I]The team is confident that it proves their point: The model can generate text that it couldn’t possibly have seen in the training data, displaying skills that add up to what some would argue is understanding.[/I][/INDENT] It is very clear that the model does not simply generate text it say in its training data -- it generates combinations of fractions of text that it saw in its training data. If I ask it to write a poem about hobbits, St Bridgit and calculus, it will absolutely "[I]generate text that it couldn’t possibly have seen in the training data". [/I] The article builds a separate model for text that ties "skill nodes" to "word nodes" with the idea being that you van then correlate word usage to skill usage, and skill usage is what defines understanding. So LLMs can be said to have understanding if their word output shows that they are using skills in sensible ways. Apologies to the authors for this huge simplification of their argument. I have some issues with this: [LIST] [*]The researchers are really proving not that LLMs understand anything, but that they behave the same way as something that understands. Their quantification is helpful for science, but honestly, if you read some AI generated text, it's pretty clear that they behave the same way we do -- and we (hopefully) are understanding engines, so this isn't really anything new. [*]Their statement that understanding is equivalent to skill usage is one definition of understanding, but I'm not sure I'm 100% onboard with that as sufficient. [*]They state: “What [the team] proves theoretically, and also confirms empirically, is that there is compositional generalization, meaning [LLMs] are able to put building blocks together that have never been put together. This, to me, is the essence of creativity.” -- is it, though? Is it really creative to randomly put stuff together that have never been put together before? I feel there needs to be a bit more than that. [/LIST] Overall, a great paper, and the use of bipartite knowledge graphs is a very clever idea that will hopefully allow us to quantify how the skill level of an LLM. I loom forward to seeing this use in the future. However, I still feel that the LLM is a stochastic parrot, but the stochastic process is so complex that the results simulate understanding without having actual understanding. I also realize that there is a strong and valid philosophical position that if the results look like understanding, then it is understanding (the "if it looks like a duck" argument). Totally valid, and if that's your feeling, I cannot refute it. For me, though, it's not. [/QUOTE]