Community
General Tabletop Discussion
*Geek Talk & Media
Judge decides case based on AI-hallucinated case law
<blockquote data-quote="Jfdlsjfd" data-source="post: 9699685" data-attributes="member: 42856"><p>Yeah. We're seeing reports of hallucinations. No one, as far as I know, denies that they exist (though there is no proof they were actually involved in the specific case mentioned in this thread), and there is no reason to suppose they don't happen when asking about law. But the question that must be answered to determine whether this tool is useful is: "How often do hallucinations occur, and do they wipe out any productivity gain for a specific task?"</p><p></p><p>That they get reported is logical, just as plane crashes are over-reported compared to planes landing successfully. There is also a risk that an undetermined number of hallucinations are happening unnoticed.</p><p></p><p>Evaluating the tool involves:</p><ul> <li data-xf-list-type="ul">Assessing the frequency of hallucinations (if it is wrong X% of the time, it's not useful, with X varying across use cases);</li> <li data-xf-list-type="ul">Assessing how easily hallucinations can be weeded out (if the AI is tasked with providing evidence for each claim, that should lower them substantially; if they are obvious, it's easy to identify them; if the workflow involves running another AI to detect them and it can do so 100% of the time, then hallucinations are not a problem... but if they happen often in a way that makes them difficult to detect, it's not a useful tool);</li> <li data-xf-list-type="ul">Assessing the consequences for the use case (I had an AI hallucinate my sorcerer with glowing green eyes in the AI image thread, and it gave the glowing green eyes to the dragon instead... that doesn't bother me too much since I like the end result anyway, while a hallucination from an AI self-driving my car would be more problematic).</li> </ul><p>Reporting the existence of hallucinations doesn't say anything about AI in general, especially when people use it in violation of its licence (it might matter to the people involved, and to the specific publisher of the AI involved, but not to AI in general). We'll see if Husband's lawyer sues ChatGPT for providing bogus legal precedents... possibly invoking a few more bogus precedents?</p></blockquote><p></p>
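The first step of the checklist above (measuring how often a tool is wrong) can be sketched in code. This is a minimal illustration, not anything from the thread: it assumes you have manually audited a sample of answers and labeled each one hallucinated or not, and it reports the observed rate with a 95% Wilson score interval so a small sample isn't over-interpreted. The audit numbers are hypothetical.

```python
# Sketch: estimating a tool's hallucination rate from a hand-labeled sample.
# "Hallucinated" here means a claim that failed a manual source check.
import math

def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for k failures observed in n trials."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical audit: 200 answers checked, 14 contained hallucinated claims.
checked, hallucinated = 200, 14
low, high = wilson_interval(hallucinated, checked)
print(f"Observed rate: {hallucinated / checked:.1%}")
print(f"95% interval:  {low:.1%} - {high:.1%}")
```

Whether the resulting X% is acceptable is exactly the use-case question the post raises: a 7% error rate might be tolerable for brainstorming character art and disqualifying for legal citations.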