Judge decides case based on AI-hallucinated case law

I think it depends on how the exams work, specifically whether they encourage a useful mixture of rote learning and information analysis.

When I was doing my medical finals and my specialist exams, it did feel at times that I was simply swallowing and regurgitating info that I would never remember in future, but that wasn’t the case - much of what I learned for my finals 22 years ago is still with me now, and I don’t think it would be burned into my long-term memory if I hadn’t had to prepare so completely for my exams. I’ve basically never had to use my knowledge of lysosomal storage disorders - they affect fewer than one in 10,000 people on average and I’ve maybe seen one case in my life - but I still remember them, know they exist, and know how to look up more information about them.

Basically, I don’t think you should be allowed to be a doctor unless you’ve passed comprehensive and rigorous exams. Sure, take them several times if you have to - I’ve failed several exams too - but you need to retain and be able to use that information.
Everyone learns differently. For me, it was better to learn and remember first principles of electronic engineering. Cramming for an exam was a useless endeavour if I didn't take the basics on board during the semester; in fact, doing so was frequently counterproductive. I was never the first one out of an exam, but I was top 1%, and was frequently excused from writing finals at all.
 

I think it’s also fair to say that different subjects and sectors need different kinds of learning. Medicine does require a great deal of rote learning and the ability to correlate multiple apparently unrelated facts into recognisable patterns, and that’s what the exams are generally designed to test. If you don’t know that sarcoidosis can present in certain ways, and then recognise a collection of unrelated symptoms and signs as possibly being sarcoidosis, you don’t have much business being a doctor. This isn’t stuff you can look up on the fly in the consultation room.
 

You apparently missed the clear parallel in your own example.

"You are disregarding the rights of your citizens." "What about how you are disregarding the rights of your citizens."

"You are contributing to a climate crisis." "What about how you are contributing to a climate crisis?"

Except that is not the argument that was made, as I understood it.

In the first case, referencing the other country's citizens doesn't make the criticism about the USSR mistreating its citizens invalid, because the USSR cared about the rights of its citizens (or at least pretended to). So the "what about..." was changing the subject to something unrelated to the accusation being made.

"AI is contributing to the climate crisis." "We don't care about the climate crisis, as evidenced by X". It's not "You're eating meat, so you can't criticize the use of AI". Not caring about environment is a perfectly logical (if possibly pessimistic) answer, while the former one isn't. Unless whataboutism is just using a sentence starting with what about, but then it's not invalidating the counter-argument.
 

Eh. I think we can do exactly that. A good driver would drive well... a good author will not use their books to spread lies... a good pilot will land the plane successfully.

If every lawyer is a bad actor, it doesn't matter what the legal system is.
Allowing the use of LLMs is like putting point-and-click video games on the media screens of automobiles. It will make every quality level of driver worse if it becomes ubiquitous. It will cause catastrophic harm.
 


It's not dismissing the ecological impact of AI, it's acknowledging it, and countering the argument by stating that we, as a society, don't care about the ecological impact of anything unless we are directly affected by it.

Careful. I'm not saying gen AI doesn't have to stand on its own. I'm not saying "those of us who eat meat can't criticize AI".

I'm saying that our society makes cost-benefit analyses about energy usage all the time, and that by those standards, gen AI does stand on its own.
In the opinion of those who care, this is the line that they have drawn in the sand, a sort of "it stops here, no more". It is basically icing on the cake of being anti-generative AI for them.
 

Now for some real fun, imagine a generation of doctors who think they can just ask chatGPT-MD or some such for the answers to patient diagnosis and treatment.


There is a distinctly possible future ahead that is much much much worse in all sectors of life than the present, and LLMs are tugging the steering wheel toward that road.
 

Except that is not the argument that was made, as I understood it.

In the first case, referencing the other country's citizens doesn't make the criticism about the USSR mistreating its citizens invalid.

"AI is contributing to the climate crisis." "We don't care about the climate crisis, as evidenced by X". It's not "You're eating meat, so you can't criticize the use of AI". Not caring about environment is a perfectly logical (if possibly unethical) answer, while the former one isn't. Unless whataboutism is just using a sentence starting with what about, but then it's not invalidating the counter-argument.
Isn't it though? To me, the rephrasing doesn't invalidate the point.
 

Or it's also a matter of scale. Massive use of AI becomes a huge issue given that the datacentres are growing to the point of needing their own power plants and submerged cold-water cooling rigs. AI doesn't need to be ubiquitous in order to have a positive impact in areas where it has clearly been shown to have at least some benefit.
 

True of medicine. Then again, there are electronics formulas that need to be remembered in order to work through designs and problems. If (generic) you don't get E=IR and P=EI, and fully understand them, then just give up.
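As a minimal worked sketch of those two formulas (the component values below are invented purely for illustration):

```python
# Ohm's law, E = I * R, and the power formula, P = E * I,
# applied to an arbitrary example resistor.
E = 12.0  # voltage across the resistor, in volts (illustrative value)
R = 6.0   # resistance, in ohms (illustrative value)

I = E / R  # Ohm's law rearranged for current -> 2.0 A
P = E * I  # power dissipated in the resistor -> 24.0 W

print(f"I = {I} A, P = {P} W")
```

Knowing the relationships cold is what lets you rearrange them on the fly like this, which is the point about first principles above.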
 
