Judge decides case based on AI-hallucinated case law
[QUOTE="Jfdlsjfd, post: 9705733, member: 42856"]
As long as enough don't, we're golden. The glue-on-pizza and mushroom-taste-testing answers are the exploding pressure cookers of decades past: a risk that existed but won't recur. They have all been corrected, and yet will be talked about for years. Right now, LLMs lack training on enough legal data to provide specialized legal advice (only broad, general descriptions), and that's where they struggle -- for now.

This is quite easy to test, given that there is ample computing power available to measure the error rate by submitting synthetic questions. Not that anyone seems interested in running such a trial, unfortunately. There are also corrective measures: simply ask a second LLM to analyze and check what the first LLM outputs. It will catch most hallucinations (while possibly introducing others to be corrected by the first).

With regard to non-specialized AI dispensing medical advice, do we have a rate of accidents linked to improperly understood medical advice found via Google? Accidents caused by misunderstanding what an LLM dispenses are bad, but if the rate is equal to or lower than the alternative (people googling for health advice on random boards, which apparently two-thirds of Internet users do), then maybe it's a [I]public health improvement[/I] over the current situation.

What is true of an LLM from 2023 isn't true of an LLM from late 2024, and what was true in late 2024 might not be true in mid-2025. Forbidding LLMs to discuss a topic will disincentivize improvement in this field (a lawmaker will be loath to take the political risk of allowing something previously disallowed...), so there will be less research to improve the products. Acknowledging that they are imperfect chatbots and nothing more at this point, until they can be proven for more serious use, is certainly the best way to go.
[/QUOTE]