Judge decides case based on AI-hallucinated case law
<blockquote data-quote="Jfdlsjfd" data-source="post: 9703200" data-attributes="member: 42856"><p>The issue at hand isn't what can be done, or what the US can reasonably do within its current legal framework. It's "what should we do?" Killing all AI researchers and burning all the books containing knowledge about LLMs is certainly impractical, and some might find it ethically questionable, but it's a position one could hold as an answer to what we should do about AI, because such an answer isn't tied to what we can reasonably do. People explaining the reasoning that leads them to their proposed solution shouldn't be constrained by feasibility. After all, at some point, the answer to "what should we do about slavery?" was "we should ban it", even though the issue was complicated and nuanced (and doing so in one country led to a civil war and several constitutional amendments, so it was indeed complicated). </p><p></p><p></p><p></p><p>I think the Sackler case, to be honest, would be difficult to replicate outside the US legal and cultural framework, where, for example, marketing for drugs is allowed, including firms promoting their products directly to medical professionals with little oversight. And the fact that those companies misrepresented the risks associated with their products seems (though of course I claim no expertise in this domain) extremely different from the way general-purpose LLMs are marketed right now, where vendors actively discourage anyone from using them for several use cases. </p><p></p><p></p><p></p><p>Sure, but when people here propose to ban AI (and not only general-public LLMs) from giving medical advice, they oppose both types of use. If they don't want to conflate the two, they should say so. </p><p></p><p></p><p>You're addressing, again, what can be done, not what should be done. 
There are LLMs that aren't made by corporations, and if you don't want to trust those, you could choose to trust models developed by state actors (Falcon), semi-state actors (Mistral), universities (Bloom)... Even if it is unlikely that you could convince all countries to ban commercial use of AI, it is a position one could, after all, defend. </p><p></p><p>But realistically, if one considers that corporations are corrupt and states are corrupt, then all we have to do is just lie down and die.</p></blockquote><p></p>