Judge decides case based on AI-hallucinated case law

I was going to say my CivPro professor would say the same thing, but I think that all CivPro professors are cut from the same cloth.

The greatest power of a truly awesome attorney is to turn all questions of fact and law into questions of procedure.
Not just that; he’s a BIG fan of the Socratic method, and “So what?” quickly became the most terrifying syllables uttered to his students.
 


> We make laws that limit the way that tools can be produced all the time. Take a look at table saws, for example, and proposed legislation regarding SawStop-style technology. When something is dangerous, we will frequently either mandate how it must be used, or change how it must be made in order to mitigate risk.

Saws are risky. Risk mitigation is taken. In the wider context, we already have laws preventing people from sawing off other people. We are introducing restrictions on how tools are made to prevent people from accidentally sawing themselves off.

When something is dangerous... we frequently have already regulated humans doing this dangerous thing to others.

Here, we have a proposal to mitigate a risk that is generally accepted in every other context. It would be like introducing a SawStop mechanism while allowing people to use saws on other people. I think your analogy is flawed.
 

> Saws are risky. Risk mitigation is taken. In the wider context, we already have laws preventing people from sawing off other people. We are introducing restrictions on how tools are made to prevent people from accidentally sawing themselves off.
>
> When something is dangerous... we frequently have already regulated humans doing this dangerous thing to others.
>
> Here, we have a proposal to mitigate a risk that is generally accepted in every other context. It would be like introducing a SawStop mechanism while allowing people to use saws on other people. I think your analogy is flawed.
I would call AI a dangerous technology, on several fronts. Even tools that only cause harm when used improperly have regulations and laws that control them, their manufacture, and use. If you insist on calling AI a "tool", then the analogy fits.
 

We started from the situation where AI is giving bad legal advice, accompanied by a warning about its fallibility and a recommendation to consult a lawyer.

The argument was made that it should be configured to refuse to give any sort of legal information, since warnings aren't sufficient, because giving bad legal advice can have catastrophic consequences.

I agree (maybe we should configure AI to refuse to address legal topics), and I say that if the goal is to protect people from getting bad legal advice and acting upon it with catastrophic consequences, the solution isn't to modify AI; the solution lies in dealing with the problem as a whole. There is no reason to make an AI-specific law when a general "anti-bad-legal-advice law" would be much more effective.
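For what it's worth, the "configure AI to refuse legal topics" idea could be as crude as a keyword gate in front of the model. This is a purely hypothetical sketch, not any vendor's actual safety layer; the keyword list, the call_model() stub, and the refusal message are all made up for illustration:

```python
# Hypothetical sketch of a "refuse legal topics" gate in front of a model.
# Nothing here reflects any real product's safety layer.
LEGAL_KEYWORDS = {"lawsuit", "contract", "liability", "statute", "sue", "attorney"}

def call_model(prompt: str) -> str:
    # Stand-in for a real model call; replace with an actual API client.
    return f"(model answer to: {prompt!r})"

def guarded_reply(user_message: str) -> str:
    # Refuse outright if the message looks like a legal question.
    words = {w.strip(".,?!").lower() for w in user_message.split()}
    if words & LEGAL_KEYWORDS:
        return "I can't help with legal questions; please consult a lawyer."
    return call_model(user_message)

print(guarded_reply("Can I sue my landlord?"))      # triggers the refusal
print(guarded_reply("What's a good pizza dough?"))  # passes through
```

The crudeness is the point: a gate like this both over-blocks ("liability" comes up in board games too) and under-blocks (a legal question needn't use any listed word), which is part of why the thread keeps circling on where the responsibility should sit.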

My view, however, is that people can deal with bad advice irrespective of its source, and we might simply have to do nothing, but I can see how the other point of view can be adopted.

Look, no offense, but... this is a very complicated area. It has to do with UPL (the unauthorized practice of law), state bars, and the First Amendment. If you have to google UPL without knowing immediately what I'm talking about, I'm not sure what to say.

Which is why I think that discussion of this topic isn't helpful when the analogies and metaphors keep shifting. Your last post, and the fact that you thought I was talking about regulation (instead of pointing out tort liability), indicated that you aren't familiar with what I was talking about, and that's okay. We can leave it at that.

@Dannyalcatraz There was a case a while back that was pretty funny, IIRC. My recollection was that it was a defamation case against an AI firm. Anyway, they ended up defaulting. Do you know why? They removed to federal court. But it got sent back to state court because the corporate entity refused to prove the diversity of the LLC.


*Remember, CIVPRO NERDS, when it's an LLC, if you have members that are LLCs, you have to divulge the members of those LLCs. And if those have LLC members, it's the members of those LLCs. LLCs all the way down.
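That "all the way down" rule is genuinely recursive: for diversity purposes, an LLC takes the citizenship of every one of its members, and members may themselves be LLCs. A toy sketch of the recursion (the Person/LLC classes are made up for illustration, not any real docketing tool):

```python
# Illustrative only: an LLC's citizenship for diversity jurisdiction is the
# union of its members' citizenships, recursing through nested LLCs.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    state: str  # state of citizenship

@dataclass
class LLC:
    name: str
    members: list  # Persons and/or other LLCs

def citizenships(entity) -> set[str]:
    """Return every state of citizenship, recursing through nested LLCs."""
    if isinstance(entity, Person):
        return {entity.state}
    states = set()
    for member in entity.members:
        states |= citizenships(member)  # LLCs all the way down
    return states

# Example: an LLC whose member is itself an LLC with members in two states.
inner = LLC("Inner Holdings LLC", [Person("A", "Texas"), Person("B", "Ohio")])
outer = LLC("Outer LLC", [inner, Person("C", "Georgia")])
print(citizenships(outer))  # {'Texas', 'Ohio', 'Georgia'}
```

Which is exactly why, in the case recounted above, refusing to divulge the membership chain meant the entity couldn't establish complete diversity, and back to state court it went.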
 

> I would call AI a dangerous technology, on several fronts. Even tools that only cause harm when used improperly have regulations and laws that control them, their manufacture, and use. If you insist on calling AI a "tool", then the analogy fits.

What would you call it, other than a tool? (Real question; there is absolutely no sarcasm.) It's a piece of software.
 

> Do you think AI wasn’t tested at all? Or just not to your personal satisfaction?

Restricting this to the general-use, generally-available things (like ChatGPT, or Claude, or Grok)...

The question isn't whether they were tested at all. The questions are what they were tested to do, and whether they were tested to properly do the things people are using them for. In addition, there are questions of whether the makers have done enough to stop or mitigate harmful use.

I don't think ChatGPT was tested to be a medical diagnostician, financial analyst, or legal consultant, for example.
 


> Look, no offense, but... this is a very complicated area. It has to do with UPL (the unauthorized practice of law), state bars, and the First Amendment. If you have to google UPL without knowing immediately what I'm talking about, I'm not sure what to say.

I do, and, much like the First Amendment and the definition and limits of free speech, it's something country-specific, which I covered earlier when I said "there will be no consensus on the solution".


> Your last post, and the fact that you thought I was talking about regulation (instead of pointing out tort liability), indicated that you aren't familiar with what I was talking about, and that's okay.

Umbran was talking about banning the tool, so there would be no liability involved (as bad advice couldn't happen, since the AI would refuse to answer). We also have a language barrier; maybe "regulation" wasn't the correct term to use.

I haven't taken a side on the idea that companies selling an AI service should (or should not) be liable for the consequences of acting upon the advice given by the AI. It's just one of the use cases of AI technology, and the one I find the least appealing.
 

> Restricting this to the general-use, generally-available things (like ChatGPT, or Claude, or Grok)...
>
> The question isn't whether they were tested at all. The questions are what they were tested to do, and whether they were tested to properly do the things people are using them for. In addition, there are questions of whether the makers have done enough to stop or mitigate harmful use.
>
> I don't think ChatGPT was tested to be a medical diagnostician, financial analyst, or legal consultant, for example.
And I would say a little disclaimer text at the bottom doesn't absolve the creator of liability for improper use. There would likely need to be a click-through portal stating limitations. But that's the stuff of prolonged legal wrangling, not the musings of some barely legally literate guy like me.
 

