Judge decides case based on AI-hallucinated case law

Quite. I'm getting a lot of, "It's all burning down anyway so let's just set off a nuke!" energy in this thread.
Well, to be optimistic, maybe humanity will find a way to stupid itself to extinction that is less damaging to other species, such as deporting the people who harvest the food and then starving to death.
 


That notion will always be hilarious to me. It’s like someone claiming they’re a Michelin-starred chef because they ordered food at a Michelin-starred restaurant. “I told the waiter/program what I wanted and it was delivered; that means I made it.”

It’s more like the head chef told their helper exactly what to prepare and how to prepare it. Does the head chef get the credit, or the helper who followed those instructions?
 

Sure.

By the same token, you cannot then use, "Well, a good lawyer would do X, Y, and Z, so it is fine," as a defense of the tool. We have demonstrated that bad lawyers exist, and so our use case for generative AI needs to account for that issue. It cannot be dismissed as irrelevant.

There's an adage in the software-development field: "Software will not and cannot fix a fundamentally broken process." AI won't make the failings of lawyers better, and may indeed make them worse.

What I have not seen you address yet are the patterns of behavior that develop in the users of AI as they come to depend upon it. Does a good lawyer stay a good lawyer when using the tool repeatedly, or do they slip into bad habits?

The jury is still out on that one, but early indications are that if you make a habit of using generative AI to prepare materials, you pay less attention to the content. The study I saw showed that drafting just one essay a month with it leads to lowered retention of the subject matter and less holistic thought about its contents. That does not bode well for using it as a regular tool while depending on the user to double-check it.



Is anyone here using the number of views as a metric for anything? Because I wasn't. Why is the number of views relevant?

As an aside, this follows the general flow of most discussions I see here. When things go bad, they go really bad. Rebuttal: but in most cases those bad things won’t occur, and there’s a lot of benefit when things don’t go to hell. Counter-rebuttal: but it doesn’t matter how often they go right; it only matters that they will eventually, inevitably go wrong, and then this worst-case scenario occurs.

Same pattern on repeat. It’s almost comforting when you notice it. Almost…
 


Sure.
It’s more like the head chef told their helper exactly what to prepare and how to prepare it. Does the head chef get the credit, or the helper who followed those instructions?

Given how head chefs are presented as "9-star chefs" because they own three 3-star restaurants, none of which they actually cook in, I'd say it's clearly the head chef who gets the credit.

When it comes to art, we credit Sol LeWitt for his wall drawings, yet they are executed by assistants at the museum that installs them. Or Rirkrit Tiravanija is credited for Pad Thai, when the piece is, more often than not, his "not being there to cook a pad thai". He is, materially, doing absolutely nothing. Yet the thought process, the conception of his not being there, is his own, and that's the art piece. The realisation (which can occur when he is present and does cook a pad thai) is very secondary.
 

Well, to be optimistic, maybe humanity will find a way to stupid itself to extinction that is less damaging to other species, such as deporting the people who harvest the food and then starving to death.
Humans certainly seem to be built to want that which works to our ultimate detriment.
 


Sure.


Given how head chefs are presented as "9-star chefs" because they own three 3-star restaurants, none of which they actually cook in, I'd say it's clearly the head chef who gets the credit.

When it comes to art, we credit Sol LeWitt for his wall drawings, yet they are executed by assistants at the museum that installs them. Or Rirkrit Tiravanija is credited for Pad Thai, when the piece is, more often than not, his "not being there to cook a pad thai". He is, materially, doing absolutely nothing. Yet the thought process, the conception of his not being there, is his own, and that's the art piece. The realisation (which can occur when he is present and does cook a pad thai) is very secondary.

Exactly. Thanks for the better examples.
 


As an aside, this follows the general flow of most discussions I see here. When things go bad, they go really bad. Rebuttal: but in most cases those bad things won’t occur, and there’s a lot of benefit when things don’t go to hell. Counter-rebuttal: but it doesn’t matter how often they go right; it only matters that they will eventually, inevitably go wrong, and then this worst-case scenario occurs.

Same pattern on repeat. It’s almost comforting when you notice it. Almost…

Thank you for a lead-in to another direction to consider...

Does anyone here think that devices used in medical practice should be thrown out into the market with no testing as to their efficacy or safety? Like, if someone built a new heart-lung machine to use during heart transplants, you'd want that thoroughly tested before it got used on you, right?

Now, your lawyer handles your life, liberty, and financial and legal well-being. If they are using an untested tool, shouldn't that give you pause?

There is no such thing as a life with zero risk. However, in areas of particularly significant consequences, we typically do what we can to reduce and manage that risk. We build purpose-made tools, and we test the heck out of them. We keep redesigning and refining until the risks come down to a manageable level and the remaining risks are known and can be communicated and managed. And the seller accepts some liability if the tool causes harm.

That hasn't happened with AI tools. They throw ChatGPT out there, and folks use it for whatever they darned well feel like, and risk to third parties be darned, hey what?
 

Thank you for a lead-in to another direction to consider...

Does anyone here think that devices used in medical practice should be thrown out into the market with no testing as to their efficacy or safety? Like, if someone built a new heart-lung machine to use during heart transplants, you'd want that thoroughly tested before it got used on you, right?

Now, your lawyer handles your life, liberty, and financial and legal well-being. If they are using an untested tool, shouldn't that give you pause?

There is no such thing as a life with zero risk. However, in areas of particularly significant consequences, we typically do what we can to reduce and manage that risk. We build purpose-made tools, and we test the heck out of them. We keep redesigning and refining until the risks come down to a manageable level and the remaining risks are known and can be communicated and managed. And the seller accepts some liability if the tool causes harm.

That hasn't happened with AI tools. They throw ChatGPT out there, and folks use it for whatever they darned well feel like, and risk to third parties be darned, hey what?
We make sure such things are safe, to the point that they frequently lag well behind cutting-edge tech, and with reason (as you say). There's a whole genre of fiction about what happens when untested medical tech is prematurely put into use.
 
