Judge decides case based on AI-hallucinated case law

It has already gotten to the point that it's being used in law schools, by both students and professors, in Canada, as a matter of course. I can only imagine the same is true elsewhere.

I'd say it's used by students to cheat on exams (an example of early adoption). It's an urgent concern that hasn't been properly addressed at the law school I work with. Using ChatGPT to create course material and not checking the end result would certainly be viewed dimly... and in a real trial? Where you risk being disbarred for doing that?

Side note: a few months ago, a student cut and pasted text from a book found on Google Scholar into a graded essay. It's sad that they didn't even try to reword the text with ChatGPT...
 


I'd say it's used by students to cheat on exams (an example of early adoption). It's an urgent concern that hasn't been properly addressed at the law school I work with. Using ChatGPT to create course material and not checking the end result would certainly be viewed dimly...

Side note: a few months ago, a student cut and pasted text from a book found on Google Scholar into a graded essay. It's sad that they didn't even try to reword the text with ChatGPT...
You would be right, but also wrong. While it's undoubtedly being used to cheat (and users who are caught, as they inevitably are, face a charge of academic misconduct), it is also being used as a reference source.
 

Then I think you're operating on a fundamentally different definition of whataboutism than is commonly used.

To me, whataboutism refers to the practice of the USSR responding to accusations about its blatant disregard for human rights by asking the US about the rights of its own minorities.

It was sophistry, because the USSR had pledged to defend many individual rights through international agreements, so asking "what about the black people in the US?" was no defence against criticism of how they handled their own minorities. The US practising segregation had no bearing on the USSR practising segregation. They couldn't answer "we don't care about segregation" because they had signed several agreements and generally wanted to present themselves as the good guys.

Here, we have a fundamentally different thing: an attack on the environmental impact of AI, answered by "we don't care about environmental impact as a rule, as evidenced by one example: meat consumption". That is a rational counter-argument (even if I find it a pessimistic view of us), not whataboutism.

I might have missed a shift in common usage.
 

I think the comparison to drugs is so far off the mark that it probably isn't worth continuing on this topic. I'll just say I find it useful and leave it at that.
It’s exactly on the mark.

It is very strongly comparable to abusing prescriptions or other “white collar” drugs.

Here is an article that references the research. I’m at work, so if you want more, you’ll have to google it. Maybe their AI will get one right and lead you to a direct link to the science. Or maybe it will hallucinate something entirely fictitious! Very useful!

 


To me, whataboutism refers to the practice of the USSR responding to accusations about its blatant disregard for human rights by asking the US about the rights of its own minorities.

It was sophistry, because the USSR had pledged to defend many individual rights through international agreements, so asking "what about the black people in the US?" was no defence against criticism of how they handled their own minorities. The US practising segregation had no bearing on the USSR practising segregation. They couldn't answer "we don't care about segregation" because they had signed several agreements and generally wanted to present themselves as the good guys.

Here, we have a fundamentally different thing: an attack on the environmental impact of AI, answered by "we don't care about environmental impact as a rule, as evidenced by one example: meat consumption". That is a rational counter-argument (even if I find it a pessimistic view of us), not whataboutism.

I might have missed a shift in common usage.
You apparently missed the clear parallel in your own example.

"You are disregarding the rights of your citizens." "What about how you are disregarding the rights of your citizens."

"You are contributing to a climate crisis." "What about how you are contributing to a climate crisis?"
 

I think the ability to retrieve and analyse information quickly is quite important for many sectors, and you’d certainly want more emphasis on exams (not 100% or anything, more like 40%, and you can’t get a higher-class degree on coursework alone) for anything that requires a professional qualification. Certainly law and medicine, probably many others.
Testing, especially high-stakes and standardized testing, is not a great way for students to actually learn the material. Have you ever studied for a test and then forgotten the material within a few days or weeks? You're not alone. Making tests even more high-stakes and more common is the wrong way to go.

 

By the same token, you cannot use "well, a good lawyer would do X, Y, and Z, so it is fine" as a defense of the tool. We have demonstrated that bad lawyers exist, and so our use case for generative AI needs to account for that issue. It cannot be dismissed as irrelevant.
Eh. I think we can do exactly that. A good driver will drive well... a good author will not use their books to spread lies... a good pilot will land the plane successfully.

If every lawyer is a bad actor, it doesn't matter what the legal system is.
 

Testing, especially high-stakes and standardized testing, is not a great way for students to actually learn the material. Have you ever studied for a test and then forgotten the material within a few days or weeks? You're not alone. Making tests even more high-stakes and more common is the wrong way to go.

I think it depends on how the exams work, specifically whether they encourage a useful mixture of rote learning and information analysis.

When I was doing my medical finals and my specialist exams, it did feel at times that I was simply swallowing and regurgitating info I would never remember in future, but this wasn’t correct - much of what I learned for my finals 22 years ago is still with me now, and I don’t think it would be burned into my long-term memory if I hadn’t had to prepare so completely for my exams. I’ve basically never had to use my knowledge of lysosomal storage disorders - they affect fewer than one in 10,000 people on average, and I’ve maybe seen one case in my life - but I still remember them, know they exist, and know how to look up more information about them.

Basically, I don’t think you should be allowed to be a doctor unless you’ve passed comprehensive and rigorous exams. Sure, take them several times if you have to - I’ve failed several exams too - but you need to retain and be able to use that information.
 

Do I see "whataboutism" rearing its head? I think I do!

The fact that we use meat, an inefficient food source, cannot be used to shield generative AI from its own energy use and environmental impact. The technology must stand on its own cost/benefit merits, not on the grounds that we fail to make what you feel is the right call in other areas.
Careful. I'm not saying gen AI doesn't have to stand on its own. I'm not saying "those of us who eat meat can't criticize AI".

I'm saying that our society makes cost-benefit analyses about energy usage all the time, and that by those standards, gen AI does stand on its own.

To me, whataboutism refers to the practice of the USSR responding to accusations about its blatant disregard for human rights by asking the US about the rights of its own minorities.

It was sophistry, because the USSR had pledged to defend many individual rights through international agreements, so asking "what about the black people in the US?" was no defence against criticism of how they handled their own minorities. The US practising segregation had no bearing on the USSR practising segregation. They couldn't answer "we don't care about segregation" because they had signed several agreements and generally wanted to present themselves as the good guys.

Here, we have a fundamentally different thing: an attack on the environmental impact of AI, answered by "we don't care about environmental impact as a rule, as evidenced by one example: meat consumption". That is a rational counter-argument (even if I find it a pessimistic view of us), not whataboutism.
Yeah, the way 'whataboutism' is deployed is, in my experience, not very careful or conducive to good argument. It's a nice applause line you can use to shut down a line of inquiry: "That sounds like whataboutism, I win!" You know? Kind of like how people used "that is a logical fallacy" back in the early days of the internet.

It actually applies when someone is calling the other person a hypocrite... i.e., "you can't criticize my drinking because of your gambling problem", or "you can't criticize my human rights record because of your human rights record".
 
