Judge decides case based on AI-hallucinated case law
<blockquote data-quote="Jfdlsjfd" data-source="post: 9698932" data-attributes="member: 42856"><p>It's not irrelevant. It's just that we generally don't ban tools that have both positive and negative outcomes. Many tools are dangerous when handled by unskilled users (cars, guns, and medications come to mind) and yet remain broadly available, even if they can be conditioned on a licence when the negative outcome is as harsh as "people dying".</p><p></p><p></p><p></p><p>Indeed. Bad lawyers didn't need AI to hallucinate cases or generally be awful -- I have been in several situations where I actively thought that the defendant's lawyer worsened his client's position -- and with AI they might very well be more prone to do it. The existence of bad lawyers says more about our bar exams than anything about AI. It can't be used to attack AI any more than bad drivers can be used to justify banning cars.</p><p></p><p></p><p></p><p>The jury is still out, and I think the initial effect might be worse because we're transitioning to a tool we're not used to. Back when books had to be copied by hand, every book tended to be considered reliable, because it would be foolish to copy down nonsense. As printed books became more widespread, people had to get accustomed to not trusting books. When the Internet started and there were only three scientists on it, I suppose it could be considered a reliable source of information... until we had to change our view. Photographic evidence was extremely convincing until a few doctored photographs appeared, and nowadays we're still transitioning to a state where "I saw an image of Elvis Presley with a smartphone" doesn't translate to "Elvis is alive!" but to "Yawn, it's photoshopped". Here we have a tool that might cause bad habits initially because of the learning curve, and some of its users might be unaware of its limitations and let their guard down.</p><p></p><p></p><p></p><p></p><p>Here? No, I don't think so. The 58 partners of the FindLaw website that wanted to track me when I visited the site certainly are very interested in the number of views, but no one here. I used it as a proxy for popular interest. To clarify, it meant:</p><p></p><p>"The article would garner much less interest from viewers if it didn't contain the unproven claim that the bogus cases were AI-hallucinated rather than invented by the careless attorney. Most notably, we might not be discussing it right now."</p><p></p><p>There is a strong chance the careless attorney used ChatGPT to invent the cases, because that's easier than making up your own bogus cases, and other evidence hints at her following the path of least effort, but the article isn't telling us anything about AI, not even proof that it was involved.</p></blockquote><p></p>