Judge decides case based on AI-hallucinated case law
<blockquote data-quote="Jfdlsjfd" data-source="post: 9705090" data-attributes="member: 42856"><p>This is incorrect. You mentioned the Sackler case as an illustration of how you said liability works (and of how actual use, even if unintended, will lead to the AI company being liable despite disclaimers). Except that it illustrates how liability works in the US and possibly other common law countries (I noticed you've narrowed the scope of your explanation this time). So it isn't particularly useful for supporting an argument about "how liability works"; it is only an example of "how liability works in a particular system". Several key elements of the case you used as an illustration don't work the same way, or even exist, elsewhere (the perimeter of liability, the amounts awarded, the scope of the problem, the possibility of a settlement, even the concept of a settlement...), so it can't support a general statement about how liability works everywhere, if such a statement were even possible. To be clear, it's not the example I reject; it's the idea that liability works exactly the same everywhere, the way you say it does ("as a bedrock principle of the law", no less), which can't be shown by identifying a single example of <em>anything</em>. If you say "all countries use the dollar as a currency", you can't prove it by showing, however correctly, that New Zealand uses dollars. Especially when you say it with authority to someone from the UK, who kind of knows what their own currency is. </p><p></p><p>If things worked as you say, the EU lawmakers and their legal advisors would all be complete morons, having spent the last three years trying to draft a directive on AI liability (and ultimately failing to agree) based on the <em>explicitly stated premise</em> that it is exceedingly difficult to make AI operators liable under existing Member States' laws. They must surely be mistaking tort for a cake and have no grasp of what they're doing.</p><p></p><p>And even the liability aspect was a tangent to the question of whether AI should be able to give legal or medical advice to the general public: for the operator to be liable for bad advice, the system must be able to give advice in the first place, or there would be nothing to complain about.</p><p></p><p></p><p></p><p>With the context added, it is a perfectly fine position to hold. On a board where people routinely say "doing X is illegal", or "the supreme court* has ruled against that...", or "the constitution has provisions against that, so one can't support this [or denounce this, depending on the topic]", I feel we took a big step forward when opinions on law started specifying the country (or group of countries) they were intended to cover. At last!</p><p></p><p>Despite the clear warnings given to users, and given that the US grants lawyers a broad monopoly, it may be entirely justified there for UPL penalties to apply to companies operating a general-purpose LLM that agrees to provide a list of cases supporting a position. I have no reservation about that statement. But it is a different statement from "AI shouldn't be allowed to give legal advice" or "AI giving legal advice is breaking the law".</p><p></p><p></p><p></p><p>* Not to single out the US, but I honestly have never seen anyone quote the Bundesverfassungsgericht to support an argument about what one can or cannot do.</p></blockquote><p></p>