The AI Red Scare is only harming artists and needs to stop.
Gorgon Zee said:

Clint,

I can see you're a big fan both of anthropomorphizing computer behavior and of reducing human behavior to computer-analytic terms. I'm not sure that is terribly helpful, though, for people who want a better understanding.

This is ancient philosophy that ends up with the only reality being "cogito ergo sum". Of course every perception is filtered and modified -- but that is part of the process of perception. There is a lot of transformation going on, certainly, but calling it "statistical analysis" is not really a good description, which is why there is a field called "image analysis" that is distinct from statistical analysis.

I have a couple of issues with this statement. A minor one: LLMs are not finding workarounds at all; people are finding workarounds using LLMs. More importantly, LLMs are [I]memoryless[/I] -- once you train them, they do not change their state, so every time you use one with the same inputs and the same randomization, it will produce the same output. I'm not really sure what you're referring to here, and I'm quite familiar with the literature. Are you talking about self-fine-tuning? Or using agents to store data to be used later by a RAG system? My best guess is that you're talking about the context window and means to expand it. But as far as I am aware, the efforts there are to squeeze more information into the limited window by quantization and specialized training rather than actually increasing its size.

If it's not too much bother, I'd love to see a reference to these techniques. A large component of my work is using LLMs to summarize large sets of text documents in very specific ways, so I have a professional interest in anything that makes that easier to do!

While that is a potential difference, I think most people in the LLM business might disagree. The big question for us is whether an LLM can be thought of as capable of conceptualization -- of being able to read text and have an understanding of the concepts involved -- or whether it is simply a stochastic parrot that pattern-matches input text to produce statistically plausible output text. The latter is definitely what they are [I]designed[/I] to do, but it's a bit of an open question whether that ability has led to the ability to build concepts. There's a lively literature on this, but not really anything on consciousness.

Well, to be honest, it's not a terrible analogy. LLMs are designed specifically to say which word (token) is plausible in a sentence (string of tokens) given the preceding words (tokens). Autocorrect does indeed do much the same thing. Google, for example, used to publish frequency tables of word combinations that did exactly what LLMs do, albeit over a much smaller window and with a significantly different architecture; essentially, it was the same statistical, frequency-based predictive approach.
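To make that frequency-table comparison concrete, here is a minimal sketch of a bigram-style "most likely next word" lookup. The toy corpus and the predict_next helper are illustrative inventions, not Google's actual tables or any LLM internals.

[CODE=python]
from collections import Counter, defaultdict

# Toy corpus standing in for the large published n-gram tables mentioned above.
corpus = ("the dragon breathes fire . the dragon breathes smoke . "
          "the dragon hoards gold . the wizard hoards scrolls").split()

# Count how often each word follows each preceding word (a bigram frequency table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent continuation of `word`, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("dragon"))  # -> 'breathes' (2 occurrences beat 'hoards' with 1)
print(predict_next("wizard"))  # -> 'hoards'
[/CODE]

An LLM makes the same kind of next-token prediction, just over a vastly longer context and with learned weights rather than raw counts, which is the point of the comparison.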
Well, no. Autocorrect and LLMs both feed input words into a single process that determines the best next word without trying to abstract or conceptualize. It's possible that LLMs create concepts internally as part of that process, but they are definitely not explicit about it, whereas human language production, as far as I understand it (and I am in no way an expert), depends heavily on explicit conceptualization. Very different.

When you feed "You may as well call human language production spicy autocorrect" into an LLM, it simply determines which words would come next. Chat-GPT will reply:

[I]That's an interesting way to think about it! Language generation, like what I do, involves predicting and producing words and phrases based on patterns and context, which can be seen as an advanced form of autocorrect. The "spicy" part adds a fun twist, suggesting the creativity and variability in human language.[/I]

But if I ask "You may as well call human language production spicy backup" it replies:

[I]That’s a unique perspective! Describing human language production as "spicy backup" implies that when we communicate, we're not just sharing thoughts but also preserving them—like a backup—with a bit of personal flair or spice. It adds an interesting layer to how we think about memory and expression.[/I]

Humans will notice the difference between the computer operations of "autocorrect" and "backup" and realize the concepts are radically different. But the "autocorrect nature" of LLMs sees no disconnect and continues to embrace the idea as a good one: although the analogy makes no sense in terms of concepts, words can still be generated that tie the two together, even though the concepts cannot be tied.
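The post doesn't say how Chat-GPT was queried; for anyone who wants to rerun the two-prompt comparison above, here is a hedged sketch using the OpenAI Python client. The model name is an assumption, and replies will naturally vary from the ones quoted.

[CODE=python]
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompts = [
    "You may as well call human language production spicy autocorrect",
    "You may as well call human language production spicy backup",
]

for prompt in prompts:
    # Send each statement as a user message and compare how readily the model
    # rationalizes both the plausible analogy and the nonsensical one.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the original post does not name a model
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print("->", resp.choices[0].message.content, "\n")
[/CODE]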
[QUOTE="Gorgon Zee, post: 9374728, member: 75787"] Clint, I can see you're a big fan of both anthropomorphizing computer behavior, and reducing human behavior to computer-analytic terms. I'm not sure that is terribly helpful though for people who want a better understanding. This is ancient philosophy that ends up with the only reality being "cogito ergo sum". Of course every perception is filtered and modified -- but that is part of the process of perception. There is a lot of transformation going on, certainly, but calling it "statistical analysis" is not really a good description. Which is why there is a field called "image analysis" that is distinct from statistical analysis. I have a couple of issues with this statement. Minorly, of course, LLMs are not finding workarounds at all; people are finding workarounds using LLMs. But more importantly is that LLMs are [I]memoryless [/I]-- once you train them, they do not change their state and so every time you use one, with the same inputs and same randomization, it will produce the same output. I'm not really sure what you're referring to here, and I'm quite familiar with the literature. Are you talking about self-fine-tuning? Or using agents to store data later to be used by a RAG system? My best guess is that you're talking about the context window and means to expand it. But as far as I am aware, the efforts there are to squeeze more information into the limited window by quantization and specialized training rather than actually increasing its size. If it's not too much bother, I'd love to see a reference to these techniques. My work has a large component of using LLMs to summarize large sets of text documents in very specific ways, so I have a professional interest in anything that makes it easier to do so! While that is a potential difference, I think most people in the LLM business might disagree. The big question for us is whether or not an LLM can be though of as capable of conceptualization -- of being able to read text and have an understanding of the concepts involved -- or whether it is simply a stochastic parrot that can simply pattern matches input text to produce statistically plausible output text. The latter is definitely what they are [I]designed [/I]to do, but it's a bit of an open question as to whether that ability has led to the ability to build concepts. There's a lively literature on this. But not really anything on consciousness. Well, to be honest, it's not a terrible analogy. LLMs are designed specifically to say what word (token) is plausible in a sentence (strong of tokens) given the preceding words (tokens). Autocorrect does indeed do much the same thing. Google, for example, used to publish frequency tables of word combinations that did exactly what LLMs do, but on a much tinier window and a significantly different architecture, but essentially, they had the same statistical frequency-based predictive approach. Well, no. Autocorrect and LLMs both feed input words into a single process that determines the best next word without trying to abstract or conceptualize. It's possible that LLMs create concepts internally as part of that process, but they are definitely not explicit about it. Whereas human language production, as far as I understand it and I am in no way an expert, depends heavily on explicit conceptualization. Very different. When you feed "You may as well call human language production spicy autocorrect" into an LLM, it simply determines which words would come next. 
Chat-GPT will reply: [I]That's an interesting way to think about it! Language generation, like what I do, involves predicting and producing words and phrases based on patterns and context, which can be seen as an advanced form of autocorrect. The "spicy" part adds a fun twist, suggesting the creativity and variability in human language.[/I] But if I ask "You may as well call human language production spicy backup" it replies: [I]That’s a unique perspective! Describing human language production as "spicy backup" implies that when we communicate, we're not just sharing thoughts but also preserving them—like a backup—with a bit of personal flair or spice. It adds an interesting layer to how we think about memory and expression[/I] Humans will notice the difference between the computer operations of "autocorrect" and "backup" and realize the concepts are radically different. But the "autocorrect nature" of LLMs does not see any disconnect and continues to embrace the ideas as a good one because although it makes no sense in terms of concepts, we can generate words that tie the two together even though the concepts cannot be. [/QUOTE]