Sure.
By the same token, you cannot use "well, a good lawyer would do X, Y, and Z, so it's fine" as a defense of the tool. We have established that bad lawyers exist, so any use case for generative AI needs to account for them. That issue cannot be dismissed as irrelevant.
There's an adage in the software-development field: "Software will not and cannot fix a fundamentally broken process." AI won't remedy the failings of lawyers, and may well make them worse.
What I haven't seen you address yet are the patterns of behavior that develop in AI users as they come to depend on the tool. Does a good lawyer stay a good lawyer when using it repeatedly, or do they slip into bad habits?
The jury is still out on that one, but early indications are that people who make a habit of using generative AI to prepare materials pay less attention to the content. The study I saw found that producing even one AI-assisted essay a month led to lower retention of the subject matter and less holistic thinking about its contents. That does not bode well for using it as a regular tool while depending on the user to double-check the output.
Is anyone here using the number of views as a metric for anything? Because I wasn't. Why is the number of views relevant?