Judge decides case based on AI-hallucinated case law

There is significant energy usage involved. But there is also significant energy usage for things like meat, which at least in the US is a luxury good.

Do I see "whataboutism" rearing its head? I think I do!

The fact that we use inefficient meat as a food source cannot be used to shield generative AI from its own energy use and environmental impact. The technology must stand on its own cost/benefit merits, not on the basis that we fail to make what you feel is the right call in other areas.
 


I mean, yeah. “AI” is the future. Get on board or get left behind. /s

 

I mean, yeah. “AI” is the future. Get on board or get left behind. /s

This kind of stands as a lead-in to another issue:

Generative AI is not objective or neutral. It can, and will, be skewed for various reasons.

As an example, Musk just announced a new release of his social media platform's AI "Grok", because he didn't like the answers it was giving. Musk has chosen to introduce bias in his generative AI to meet his own personal needs and goals.

Think of that in context of legal briefs produced with an AI.
 

Sure.

By the same token, you thereby cannot use, "Well a good lawyer would do X, Y, and Z, so it is fine," as a defense of the tool. We have demonstrated that bad lawyers exist, and so our use-case for generative AI needs to include that issue. It cannot be dismissed as irrelevant.

It's not irrelevant. It's just that we generally don't ban tools that have both positive and negative outcomes. Many tools are dangerous if handled by unskilled users (cars, guns and medications come to mind) and yet are mostly available, even if they can be conditioned on a licence when the negative outcome is as harsh as "people dying".

There's an adage in the software-development field: "Software will not and cannot fix a fundamentally broken process." AI won't make the failings of lawyers better, and may indeed make them worse.

Indeed. Bad lawyers didn't need AI to hallucinate cases or generally be awful -- I have been in several situations where I actively thought that the defendant's lawyer worsened his client's position -- and with AI they might very well be more prone to do it. The existence of bad lawyers says more about our bar exams than anything about AI. It can't be used to attack AI any more than bad drivers can be used to justify banning cars.

What I have not seen you address yet are the patterns of behavior that develop in the users of AI, as they come to depend upon it. Does a good lawyer stay a good lawyer when using the tool on a repeated basis, or do they slip into bad habits?

The jury is still out, and I think the initial effect might be worse because we're transitioning to a tool we're not used to. Back when you had to copy books by hand, every book tended to be considered reliable, because it would be foolish to copy down nonsense. As written books became more widespread, people had to get accustomed to not trusting books. When the Internet started and there were only three scientists on it, I guess it could be considered a reliable source of information... until we had to change our view. Picture evidence was extremely convincing until a few doctored photographs appeared, and nowadays we're still transitioning to a state where "I saw an image of Elvis Presley with a smartphone" doesn't translate to "Elvis is alive!" but to "Yawn! It's photoshopped."

Here we have a tool that might cause bad habits initially because of the learning curve, and some of its users might be unaware of its limitations and let their guard down.


Is anyone here using the number of views as a metric for anything? Because I wasn't. Why is the number of views relevant?

Here? No, I don't think so. The 58 partners of the findlaw website that wanted to track me when I visited the site are certainly very interested in the number of views, but no one here. I used it as a proxy for popular interest. To clarify, it meant:

"The article would garner much less interest for the viewers if it didn't contain the unproven claim that the bogus cases where AI-hallucinated and not invented by the careless attorney. Most notably, we might not be discussing it right now."

There is a strong chance the careless attorney used ChatGPT to invent cases, because that's easier than making up your own bogus cases and other evidence hints at her following the path of least effort, but the article doesn't tell us anything about AI, not even proof that it was involved.
 


Do I see "whataboutism" rearing its head? I think I do!

The fact that we use inefficient meat as a food source cannot be used to shield generative AI from its own energy use and environmental impact. The technology must stand on its own cost/benefit merits, not on the basis that we fail to make what you feel is the right call in other areas.

I won't speak for the person you're responding to, but I feel that it's not whataboutism. Whataboutism would be if both things were unrelated.

Here, the argument is "the fact that we use inefficient/unsustainable/unethical meat (or that we massively use plane and car travel burning oil, or that we water our gardens to keep the grass green, or that we use outdoor air conditioning (as seen in Dubai)...) shows that we're pretty happy as a society having no concern for the environment whenever it suits our fancy, so why should we start doing things differently with AI?"

It's not dismissing the ecological impact of AI, it's acknowledging it, and countering the argument by stating that we, as a society, don't care about the ecological impact of anything unless we are directly affected by it.
 

Except that this is in use right now and hasn't resulted in lower costs for attorney representation; just what seems to be poorer representation.

I don't think it's that widespread, to be honest. People trying ChatGPT to see what it does? Sure. But as a core part of their work, I don't think it's massively used. Professionals might be more inclined to use dedicated tools, and LexisNexis' AI assistant is only a few months old; I don't think it is widespread enough to have a significant effect on the market yet. Even if it had, we wouldn't have any statistical data showing its effect on the price of representation for at least a year.
 

Indeed, but you don't test the exact same skills in a timed exam as in work you can do at home over a longer period. There are some exams done under supervision over a few days to prevent outside communication, but they are logistical nightmares and impractical for large numbers of students. So you can check qualification for key civil service jobs this way, but not university exams.
I think the ability to retrieve and analyse information quickly is quite important for many sectors, and you’d certainly want more emphasis on exams (not 100% or anything, more like 40% and you can’t get a higher class degree on coursework alone) for anything that required professional qualification. Certainly law and medicine, probably many others.
 

This kind of stands as a lead-in to another issue:

Generative AI is not objective or neutral. It can, and will, be skewed for various reasons.

As an example, Musk just announced a new release of his social media platform's AI "Grok", because he didn't like the answers it was giving. Musk has chosen to introduce bias in his generative AI to meet his own personal needs and goals.

Think of that in context of legal briefs produced with an AI.
For example, when it stated that he was a leading source of incorrect information on the Internet. I guess we're leaning into the world of "Goodspeech" vs. "Crimespeech." Double-plus ungood.
 

I don't think it's that widespread, to be honest. People trying ChatGPT to see what it does? Sure. But as a core part of their work, I don't think it's massively used. Professionals might be more inclined to use dedicated tools, and LexisNexis' AI assistant is only a few months old; I don't think it is widespread enough to have a significant effect on the market yet. Even if it had, we wouldn't have any statistical data showing its effect on the price of representation for at least a year.
It has already gotten to the point that it's being used in law school, by both students and professors, in Canada, as a matter of course. I can only imagine the same can be said elsewhere.

I won't speak for the person you're responding to, but I feel that it's not whataboutism. Whataboutism would be if both things were unrelated.

Here, the argument is "the fact that we use inefficient/unsustainable/unethical meat (or that we massively use plane and car travel burning oil, or that we water our gardens to keep the grass green, or that we use outdoor air conditioning...) shows that we're pretty happy as a society having no concern for the environment whenever it suits our fancy, so why should we start doing things differently with AI?"

It's not dismissing the ecological impact of AI, it might just be the argument that we, as a society, don't care about the ecological impact of anything unless we are directly affected by it.
Then I think you're operating on a fundamentally different definition of whataboutism than is commonly used.
 
