Star Wars: Andor season 2
<blockquote data-quote="Ruin Explorer" data-source="post: 9666500" data-attributes="member: 18"><p>I think it does apply, personally. Like a lot of droids, he's in every way an equal to a human in terms of sentience/sapience/free will, if you just remove the (often physical) objects preventing access to free will.</p><p></p><p>He's absolutely a peer made into property by force. It's just that the force was applied, effectively, before he was born. I don't think you'd have any difficulty at all calling him a slave if he were a biological being created to serve, with his free will limited by some kind of removable or destroyable implant.</p><p></p><p></p><p>Not "seems". It doesn't understand anything at all.</p><p></p><p>The "most of the time" applies to a fairly narrow subset of interactions - basically asking it text-based questions (even if via speech-to-text or the like) about subjects where it can apply what is essentially super-powerful predictive text.</p><p></p><p>There's no bridge from there to K-2SO or the like. It's a dead end. It can never understand anything. It can only mimic the way things are arranged.</p><p></p><p>For an easy example, I recently asked it about solving static issues with my coffee grinder - a common question, and I assumed it would just direct me to the common answer (which I couldn't remember the name of). Instead it came up with an insane solution: spraying commercial anti-static spray on the beans. It literally doesn't understand anything at all - it doesn't understand what food or drink are, where coffee goes, or what coffee is - but what it can find is a bunch of websites where "anti-static spray" is associated with static electricity problems.</p><p></p><p>And a lot of the time it makes trivially obvious mistakes that even a child who could read wouldn't make. For example, my friend was looking for the date when a certain band had played a major venue in the UK, and the Google AI very firmly stated that the band had never played that venue. The first actual result of the Google search, however, showed they had. To an actual intelligence that's trivial and obvious, but to an LLM - basically predictive text running on a supercomputer - the information wasn't arranged in the right way for it to "understand".</p><p></p><p>What we've seen repeatedly, too, is that whenever someone claims they can get it to do more, and do it reliably and correctly, they're lying, and it's a Mechanical Turk situation - i.e. a bunch of low-paid workers in a far-away country are doing the actual work, with the AI essentially being a front end.</p></blockquote><p></p>
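The "mimics how things are arranged" point above can be made concrete with a toy sketch. This is not how production LLMs work (they use learned neural networks over token sequences, not raw counts), and the corpus here is invented for illustration - but a simple bigram counter shows the core idea: the model suggests whatever word most often followed the previous one, with no representation of what any word means.

```python
from collections import Counter, defaultdict

# Invented toy corpus, echoing the coffee/static example above.
corpus = (
    "anti-static spray fixes static problems "
    "static problems ruin coffee grinding "
    "spray water on coffee beans to reduce static"
).split()

# Record how words are arranged: which word follows which, how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the continuation most often seen after `word`, or None."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

# The model associates "static" with whatever co-occurred most in its
# data, regardless of whether the suggestion makes physical sense.
print(predict("static"))  # → problems
```

Because the model only stores co-occurrence statistics, a corpus where "anti-static spray" sits near "static problems" is enough to produce the spray-the-beans style of answer: plausible arrangement, zero understanding.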