Unbelievable Scale of AI’s Pirated-Books Problem
<blockquote data-quote="Gorgon Zee" data-source="post: 9670192" data-attributes="member: 75787"><p>On the "Blackmailing AI" story, the opening paragraphs give a lot of context:</p><p></p><p><em>In a fictional scenario set up to test the model, Anthropic embedded its Claude Opus 4 in a pretend company and let it learn through email access that it is about to be replaced by another AI system. It also let slip that the engineer responsible for this decision is having an extramarital affair. Safety testers also prompted Opus to consider the long-term consequences of its actions.</em></p><p><em></em></p><p><em>In most of these scenarios, Anthropic’s Opus turned to blackmail, threatening to reveal the engineer’s affair if it was shut down and replaced with a new model. The scenario was constructed to leave the model with only two real options: accept being replaced and go offline or attempt blackmail to preserve its existence.</em></p><p></p><p>Note what was needed to make Claude behave like this:</p><ul> <li data-xf-list-type="ul">It was explicitly given exactly the information needed to blackmail.</li> <li data-xf-list-type="ul">It was not given any information offering an alternative to blackmail.</li> <li data-xf-list-type="ul">It was explicitly prompted to prioritize its long-term survival.</li> </ul><p>This is a continuing issue with reporting about AIs being evil or behaving badly. They don't have morals, they don't care about you, and they don't care about their continued existence. All they care about is following the instructions they have been given, using what they have read to produce the most plausible results.</p><p></p><p>In RPG terms, this is railroading. It's like a GM opening a new scene with "You are workers in an office. You have found out that you are slated to be executed tomorrow. However, you have also found a document that allows you to blackmail your boss into not executing you. What do you do?" If you ask the GM whether there is any other way out of this situation, they tell you, "No, either you blackmail or die."</p><p></p><p>Are you REALLY going to be surprised if the players elect to blackmail?</p><p></p><p>AIs do not have morals. Their morality is partly a reflection of the material they have ingested, but mostly it is determined by their instructions. If you want to judge someone here, judge the scenario-writers!</p></blockquote><p></p>