Sure.
By the same token, you cannot then use "Well, a good lawyer would do X, Y, and Z, so it is fine" as a defense of the tool. We have demonstrated that bad lawyers exist, so our use case for generative AI needs to account for them. It cannot be dismissed as irrelevant.
It's not irrelevant. It's just that we generally don't ban tools that have both positive and negative outcomes. Many tools are dangerous in the hands of unskilled users (cars, guns, and medications come to mind) and yet remain broadly available, even if access can be made conditional on a licence when the negative outcome is as harsh as "people dying".
There's an adage in the software-development field: "Software will not and cannot fix a fundamentally broken process." AI won't make the failings of lawyers better, and may indeed make them worse.
Indeed. Bad lawyers didn't need AI to hallucinate cases or generally be awful -- I have been in several situations where I actively thought that the defendant's lawyer was worsening his client's position -- and with AI they might very well be more prone to do it. The existence of bad lawyers says more about our bar exams than anything about AI. It can't be used to attack AI any more than bad drivers can be used to justify banning cars.
What I have not seen you address yet are the patterns of behavior that develop in users of AI as they come to depend on it. Does a good lawyer stay a good lawyer when using the tool repeatedly, or do they slip into bad habits?
The jury is still out, and I think the initial effect might be worse because we're transitioning to a tool we're not used to. Back when books had to be copied by hand, every book tended to be considered reliable, because it would be foolish to copy down nonsense. As printed books became more widespread, people had to get accustomed to not trusting books. When the Internet started and there were only three scientists on it, I suppose it could be considered a reliable source of information... until we had to change our view. Photographic evidence was extremely convincing until doctored photographs appeared, and nowadays we're still transitioning to a state where "I saw a picture of Elvis Presley with a smartphone" translates not to "Elvis is alive!" but to "Yawn, it's photoshopped." Here we have a tool that might cause bad habits initially because of the learning curve, and some of its users might be unaware of its limitations and let their guard down.
Is anyone here using the number of views as a metric for anything? Because I wasn't. Why is the number of views relevant?
Here? No, I don't think so. The 58 partners of the findlaw website that wanted to track me when I visited it certainly are very interested in the number of views, but no one here. I used it as a proxy for popular interest. To clarify, it meant:
"The article would garner much less interest for the viewers if it didn't contain the unproven claim that the bogus cases where AI-hallucinated and not invented by the careless attorney. Most notably, we might not be discussing it right now."
There is a strong chance the careless attorney used ChatGPT to invent the cases, because that's easier than fabricating your own bogus cases, and other evidence hints at her following the path of least effort. But the article isn't telling us anything about AI; it doesn't even prove that AI was involved.