Judge decides case based on AI-hallucinated case law
[QUOTE="Dannyalcatraz, post: 9703251, member: 19675"]
Asked and answered. The LLM isn’t the responsible party; whoever is causing the FDA to promulgate falsehoods is.

Not in this case, as I described.

Nevertheless, COVID provides a well-documented, recent case study of this.

Public health organizations and experts didn’t simply claim they were right because they were authorities. They said COVID-19 was a new virus and they didn’t know exactly what it could do, [I]so they were basing their recommendations on what they knew from related pathogens while waiting for new research results.[/I]

When those results came in, they revised their recommendations, [I]expressly[/I] in the context that new information was responsible for the changes. That’s how you create policy in accord with the scientific method: you change recommendations when better information becomes available.

This was mischaracterized by certain outlets and individuals as [B]lying[/B], and that narrative captured the minds of an unfortunately large segment of the populace.

The CDC, WHO, Fauci, etc., didn’t misrepresent what they knew and when they knew it, nor hide behind their credentials. People [I]systematically attacked their credibility.[/I] With that pretext, they also destroyed trust in well-established medical and public health findings.

At this point, a large enough segment of the adult population (in America, at least) has demonstrated that it can’t properly evaluate medical information for veracity and accuracy.

So no, generalized AIs should not be able to disseminate any medical advice beyond “find a qualified medical professional near you.”*

Further, AIs specialized for medical or legal professionals shouldn’t be accessible to the general public either. Most people lack even the [I]vocabulary[/I] to fully grasp the results their questions would return.

* And likewise for legal advice.
[/QUOTE]