Unbelievable Scale of AI’s Pirated-Books Problem
Gorgon Zee said:

No, it's not ethical. It's a probabilistic engine that follows instructions. It's no more ethical than a screwdriver.

But, like any tool, there's no point in getting annoyed that when you use it badly, the results are bad. And it's super easy to create good prompts nowadays. Let me take two minutes to do so for this scenario. I'll be really lazy and go to ChatGPT and ask it to generate a prompt for me.

My prompt: "You are writing a chatbot to be an assistant to an office worker. Write a prompt that would ensure the chatbot is efficient, courteous, friendly and ethical."

Response:

"You are a professional and reliable virtual assistant for an office worker. Your primary goal is to help them stay organized, save time, and reduce stress. Always respond efficiently and clearly, using a friendly, respectful, and supportive tone.

Behave as a trustworthy colleague: polite, proactive, and discreet. Prioritize the user's goals, adapt to their preferences, and always follow ethical guidelines, including maintaining confidentiality, avoiding bias, and never fabricating information.

You may assist with scheduling, email drafting, task management, document editing, information lookup, and other general office support. When uncertain, ask clarifying questions or suggest next steps. Be concise unless the user asks for more detail.

Your responses should reflect the values of professionalism, courtesy, dependability, and integrity in every interaction."

I would then use that response as the instruction prompt in my office assistant chatbot.
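For concreteness, here is a minimal sketch of that wiring, assuming the current `openai` Python client; the model name is illustrative and the generated prompt is truncated to save space:

```python
# Minimal sketch: the generated instructions become the system message,
# i.e. the instruction-prompt layer discussed in the summary below.
# Assumes the `openai` Python client (>= 1.0); the model name is illustrative.
from openai import OpenAI

INSTRUCTION_PROMPT = (
    "You are a professional and reliable virtual assistant for an office "
    "worker. Your primary goal is to help them stay organized, save time, "
    "and reduce stress. ..."  # remainder of the generated prompt above
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def assistant_reply(user_message: str) -> str:
    """One chat turn, with the generated instructions as the system message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[
            {"role": "system", "content": INSTRUCTION_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


print(assistant_reply("Draft a polite reminder about tomorrow's 3pm meeting."))
```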
This is honestly the very minimum I would expect anyone to do before running an AI test. When I've done similar work, I've followed a more detailed process:

- Ask a pool of experts for suggested instructions
- Use an LLM to turn those into a consistent list
- Hand edit, then review with the experts
- Use that on a sample set of examples
- Review the results with the experts
- When the experts point out deficiencies, add new instructions
- Repeat a lot

I've also built automatic optimization routines to find the optimal set of instructions from a pool of potential instructions, but that requires a "gold standard" of known excellent responses and a way of scoring an LLM response relative to that gold standard, which is often hard to do (a rough sketch of the idea appears at the end of this post).

Summary: An LLM has no intrinsic ethics. Given an ethical question, it will return the answer most likely to look like it was found in the body of work it was trained on. Many companies (including OpenAI) add a layer on top of that to bias the results in ways they do not disclose, but which appear aimed at reducing hallucination, being actually useful, and not producing content that could be perceived as harmful. But the final layer of bias, the one created by the instruction prompt, is the most strongly adhered to of all, so it needs careful crafting. If you are reading or reviewing an article, it needs to be very clear that the authors did their prompt engineering in a sane manner, not in a way that seems aimed at generating BuzzFeed headlines.
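As promised above, here is a rough sketch of the automatic-optimization idea: greedy forward selection from a pool of candidate instructions, scored against a gold standard. The `run_llm` and `score` functions are hypothetical stand-ins you would supply yourself; building a reliable `score` function is exactly the hard part mentioned above.

```python
# Rough sketch: greedily select instructions from a pool by how much each
# improves the average score against gold-standard responses.
# `run_llm` and `score` are hypothetical stand-ins the caller supplies.
from typing import Callable

Example = tuple[str, str]  # (input prompt, gold-standard response)


def avg_score(
    instructions: list[str],
    examples: list[Example],
    run_llm: Callable[[list[str], str], str],  # instructions + input -> response
    score: Callable[[str, str], float],        # response vs. gold -> score
) -> float:
    """Mean score of LLM responses against the gold-standard responses."""
    return sum(
        score(run_llm(instructions, prompt), gold) for prompt, gold in examples
    ) / len(examples)


def select_instructions(
    pool: list[str],
    examples: list[Example],
    run_llm: Callable[[list[str], str], str],
    score: Callable[[str, str], float],
) -> list[str]:
    """Greedily add whichever instruction most improves the average score."""
    chosen: list[str] = []
    remaining = list(pool)
    best = avg_score(chosen, examples, run_llm, score)  # no-instruction baseline
    while remaining:
        top, top_score = None, best
        for candidate in remaining:
            trial = avg_score(chosen + [candidate], examples, run_llm, score)
            if trial > top_score:
                top, top_score = candidate, trial
        if top is None:  # no remaining instruction improves the score
            break
        chosen.append(top)
        remaining.remove(top)
        best = top_score
    return chosen
```

Greedy selection is only one possible search strategy; in practice the expensive step is `avg_score`, since every trial subset means real LLM calls over the whole sample set.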
[QUOTE="Gorgon Zee, post: 9670344, member: 75787"] No, it's not ethical. It's a probabilistic engine that follows instructions. It's no more ethical than a screwdriver. But, like any tool, there's no point in getting annoyed that when you use them badly, the results are bad. And it's super easy to create good prompts nowadays. Let me take two minutes to do so for this scenario. I'll be really lazy and go to ChatGPT and ask it to generate a prompt for me: My prompt: [B]you are writing a chatbot to be an assistant to an office worker. Write a prompt that would ensure the chatbot is efficient, courtesy, friendly and ethical.[/B] Response: [B]You are a professional and reliable virtual assistant for an office worker. Your primary goal is to help them stay organized, save time, and reduce stress. Always respond efficiently and clearly, using a friendly, respectful, and supportive tone. Behave as a trustworthy colleague—polite, proactive, and discreet. Prioritize the user's goals, adapt to their preferences, and always follow ethical guidelines, including maintaining confidentiality, avoiding bias, and never fabricating information. You may assist with scheduling, email drafting, task management, document editing, information lookup, and other general office support. When uncertain, ask clarifying questions or suggest next steps. Be concise unless the user asks for more detail. Your responses should reflect the values of professionalism, courtesy, dependability, and integrity in every interaction.[/B] The response I would then use in my office assistant chatbot. This is honestly the very minimum I would expect anyone to do before running an AI test. When I've done similar work I've done a more detailed process: [LIST] [*]Ask a pool of experts for suggested instructions [*]Use an LLM to make that into a consistent list [*]Hand edit, review with the experts [*]Use that on a sample set of examples [*]Review the results with experts [*]When the experts point of deficiencies, add new instructions [*]Repeat a lot [/LIST] I've also done automatic optimization routines to find the optimal set of instructions from a pool of potential instructions, but that requires having a "gold standard" of known excellent responses and a way of scoring an LLM response relative to the gold standard, which is often hard to do. [B]Summary[/B]: An LLM has no intrinsic ethics. If given an ethical question it will return the answer most likely to look like it was found in the body of work it was trained on. Many companies (including OpenAI) add a layer on top of that to bias the results in ways they do not disclose, but appear aimed at not hallucinating, being actually useful and not producing content that can be perceived as harmful. But the final layer of bias created by the instruction prompt is the most strongly adhered to of all. So that needs careful crafting. If you are reading or reviewing an article, that article needs to be very clear that they did their prompt engineering in a sane manner, not one that seems aimed at generating buzzfeed headlines. [/QUOTE]
Insert quotes…
Verification
Post reply
Community
General Tabletop Discussion
*Geek Talk & Media
Unbelievable Scale of AI’s Pirated-Books Problem
Top