Judge decides case based on AI-hallucinated case law

And isn't that the point? It's a tool. You treat it like other tools. We have examples of how you handle tools, including liability of the manufacturer.

I have no problem making the manufacturer liable through the regular, common means of establishing liability. The idea that was discussed was preventing AI from giving legal advice, which isn't making the manufacturer liable; it's banning the tool altogether.
 
And I would say a little bottom text doesn't absolve the creator of liability for improper use. There would likely need to be a click-through portal stating limitations. But that's the stuff of prolonged legal wrangling, not the musings of some barely legally literate guy like me.

Technically, when you sign up, you end up being exposed to a wall of text, including this:


Which does force you to agree, among other things, with this:

Don’t perform or facilitate the following activities that may significantly impair the safety, wellbeing, or rights of others, including:
  1. Providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations
  2. Making high-stakes automated decisions in domains that affect an individual’s safety, rights or well-being (e.g., law enforcement, migration, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance)

Note: answering your question shouldn't be taken to imply that I think such a contract is enough to protect OpenAI from litigation. I was just pointing out that the click-through portal is in effect, and that the bottom-text reminder AND the in-text warning when you touch "risky" subjects come on top of the contractual agreement not to use AI to provide tailored legal advice or to make important decisions.
 

I have no problem making the manufacturer liable through the regular, common means of establishing liability. The idea that was discussed was preventing AI from giving legal advice, which isn't making the manufacturer liable; it's banning the tool altogether.
It's like banning an outboard motor from being used as a kitchen mixer, not outright banning the tool. Again, regulating use. There are actual legal and medical AIs in development. Limiting a general-purpose AI from doing specialized tasks wouldn't be that onerous a regulation.
 

I do, and, much like the First Amendment and the definition and limits of free speech, it's something country-specific, which I covered when I said "there will be no consensus on the solution" earlier.

I very much doubt that. But try me. Explain to me, like I'm a slightly dumb golden retriever, why I took issue with your posts above, which tried to draw an analogy between a person giving legal advice to another person and a company selling a product that provides legal advice upon request.
 

Technically, when you sign up, you end up being exposed to a wall of text, including this:


Which does force you to agree, among other things, with this:
And there have been cases in which that sort of one-shot blurb has been ruled insufficient warning.

I recall an incident that resulted in a particular motorsports shop ending their 20+ year practice of renting out a major Canadian racetrack, the day prior to the Nationals, for a track day. They had a waiver stating the usual risks of injury in any sporting event, notably motorcycle racing, and everyone signed it. One guy went out on his bike without fastening his own helmet strap, crashed, and ended up a quadriplegic. He won a hefty settlement from the organizers based on the argument that the waiver gave insufficient warning that he had to make sure his own safety gear was worn properly. < HeadDesk >
 

And there have been OTHER threads & comments posted on ENWorld about AI hallucinations in other fields of work.

I saw an example recently of Google's AI Summary assistant saying that a steamer or iron could be used to take the wrinkles off of men's private parts.

And you can easily see how this happens. Text-based generative AI is mostly a word-association machine. Steamers and irons are associated with wrinkles. You find wrinkles in fabric and...
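
To make the word-association point concrete, here's a toy sketch in Python (purely illustrative; the mini-corpus and everything in it are invented for this example, and a real LLM is vastly more complicated): a bigram counter that "suggests" whatever word it has most often seen next to another word, with zero understanding of whether the wrinkles belong to a shirt or to a person.

Code:
# Toy word-association model: count which words sit next to each other,
# then echo the strongest association. Purely illustrative; real LLMs
# are far more complex, but the failure mode (association without
# understanding) is similar.
from collections import Counter, defaultdict

# Invented mini-corpus for the example.
corpus = (
    "use a steamer to remove wrinkles from fabric . "
    "use an iron to remove wrinkles from a shirt . "
    "skin develops wrinkles with age ."
).split()

neighbors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    neighbors[prev][nxt] += 1  # count each word's immediate follower
    neighbors[nxt][prev] += 1  # and its immediate predecessor

# Ask what goes with "wrinkles": the counter cheerfully answers
# "remove", no matter whose wrinkles they are.
print(neighbors["wrinkles"].most_common(2))
# -> [('remove', 2), ('from', 2)]

The skin sentence feeds "wrinkles" into the same counter, but the strongest association stays "remove", which is roughly how you get from fabric-care advice to the suggestion above.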
 

And there have been cases in which that sort of one-shot blurb has been ruled insufficient warning.

I recall an incident that resulted in a particular motorsports shop ending their 20+ year practice of renting out a major Canadian racetrack, the day prior to the Nationals, for a track day. They had a waiver stating the usual risks of injury in any sporting event, notably motorcycle racing, and everyone signed it. One guy went out on his bike without fastening his own helmet strap, crashed, and ended up a quadriplegic. He won a hefty settlement from the organizers based on the argument that the waiver gave insufficient warning that he had to make sure his own safety gear was worn properly. < HeadDesk >

If that’s a true and complete summary, then this is a prime example of why people don’t trust the legal system.
 

I very much doubt that. But try me. Explain to me, like I'm a slightly dumb golden retriever, why I took issue with your posts above, which tried to draw an analogy between a person giving legal advice to another person and a company selling a product that provides legal advice upon request.

I can't explain why you took issue; you'd be the best person to identify why you did.

The point was that:

1. We notice that sometimes, AI can give bad legal advice.
2. A person said what I understood to be "AI should be prevented from giving legal advice because bad legal advice, contrary to bad hairstyle advice, can be catastrophic".
3. I said "We do ban a lot of problematic speech, so why not ban giving bad legal advice altogether? That way, the goal of protecting people from bad advice would be met with a single piece of legislation, in the case of AI and in the case of the many, many other sources of bad legal advice. Or alternatively, maybe there isn't enough consensus that getting bad legal advice is something we should ban at all, since we've grown accustomed to hearing bad legal advice, we know not to trust someone who isn't an expert, and the appropriate measures are already in place."

Note that I have no problem with making a company that operates a service offering legal advice liable for the bad advice it gives, especially if it sells it. I never addressed liability in any of my posts.
 