Judge decides case based on AI-hallucinated case law

Isn't it though? To me, the rephrasing doesn't invalidate the point.

It does. You might disagree with the idea that we (collectively) don't care about environmental damage, but that disagreement doesn't make the counter-argument about us generally not caring irrelevant. And being irrelevant is what a whataboutism is all about.

The argument "AI is contributing to the climate crisis" can be absolutely countered by "we don't value a technology on the basis on its climate impact."

The latter argument might be false (you could try to demonstrate that we do care, and that we have banned convenient technologies with no easy replacement over energy-cost concerns), but it can't be dismissed as irrelevant.
 


In the opinion of those who care, this is the line they have drawn in the sand, a sort of "it stops here, no more." It is basically icing on the cake of being anti-generative AI for them.

Sure. But then they must accept that they are in the minority. While they might individually want to stop here (and might choose not to use AI themselves), banning the technology would require a majority of people to share their views, which I don't think is the case: some democratic countries saw increased use of fracking, for example, when they could have said "no more".

Also, they would be exposed to criticism of their stance of "it stops here, no more", because there is no reason the current (or yesterday's) situation is optimal. China is making this point to Western countries: "sure, you got to pollute to your heart's content, and now you're saying 'no more' just when it's others' turn to benefit from massive energy consumption... that's a conveniently placed line in the sand!"
 

Now for some real fun, imagine a generation of doctors who think they can just ask chatGPT-MD or some such for the answers to patient diagnosis and treatment.


There is a distinctly possible future ahead that is much much much worse in all sectors of life than the present, and LLMs are tugging the steering wheel toward that road.
That’s definitely a possibility. If medical students are mostly judged on coursework rather than exams (as horrible and stressful as exams are), that’s definitely an option. And if, after graduating, you rely on machine learning for all your diagnoses, your skills will rust, even assuming the models are reliable.
 

That’s definitely a possibility. If medical students are mostly judged on coursework rather than exams (as horrible and stressful as exams are), that’s definitely an option. And if, after graduating, you rely on machine learning for all your diagnoses, your skills will rust, even assuming the models are reliable.

I think the risk will increase as the tools become more effective. As long as they are wrong 80% of the time and it's easy to spot the errors, doctors will get trained as they are now, taught the limitations of AI tools, and will correct the errors (only benefitting from increased productivity on "easy" cases). If the system is 100% right, then the skills are no longer useful outside of emergency situations where the tools are unavailable. But when the system is 99-99.9% right, that's when doctors (or any professionals) will be tempted to accept what the system says as true, unless specifically trained not to.

[Numbers used as illustration, with 100% standing in for the cases where human doctors make the correct diagnosis. The difficulty will arise when tools are 99% right and humans are 95% right.]
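To put those toy numbers into a quick sketch (a purely illustrative calculation using the hypothetical figures above, not real accuracy data; the variable names and the 80% "catch rate" are my own assumptions):

```python
# Toy illustration of the accuracy thresholds discussed above.
# All figures are hypothetical illustrations, not real data.
tool_accuracy = 0.99      # assume the AI tool gets 99% of diagnoses right
human_accuracy = 0.95     # assume an unaided doctor gets 95% right
review_catch_rate = 0.80  # assumed share of the tool's errors a vigilant doctor catches

blind_trust_errors = 1 - tool_accuracy                           # accept the tool's answer as-is
unaided_errors = 1 - human_accuracy                              # ignore the tool entirely
reviewed_errors = (1 - tool_accuracy) * (1 - review_catch_rate)  # tool output checked by a vigilant doctor

print(f"blind trust : {blind_trust_errors:.1%} of cases wrong")  # 1.0%
print(f"unaided     : {unaided_errors:.1%} of cases wrong")      # 5.0%
print(f"tool+review : {reviewed_errors:.1%} of cases wrong")     # 0.2%
```

On those toy numbers, blindly trusting the tool already beats the unaided doctor, which is exactly why the temptation arises; reviewing the tool's output is still better, but only for as long as the reviewing skill is actually maintained.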
 

Or it's also a matter of scale. Massive use of AI becomes a huge issue given that the data centres are growing to the point of needing their own power plants and submerged cold-water cooling rigs. AI doesn't need to be ubiquitous in order to have a positive impact in areas where it has clearly been shown to have at least some benefit.
If not for the scale, then it would be back to the "death of a thousand tiny cuts" that goes on with server farms, cars, jets, and so on. Which goes back to my "stop here, no more" line, or, to put it another way, as someone else said here in another thread on the topic: "we're screwing up the planet, why add more?"
 

If not for the scale, then it would be back to the "death of a thousand tiny cuts" that goes on with server farms, cars, jets, and so on. Which goes back to my "stop here, no more" line, or, to put it another way, as someone else said here in another thread on the topic: "we're screwing up the planet, why add more?"
Because of the current growth in scale, we'll not be able to mitigate the damage. Keeping things to a lesser scale, with known beneficial applications, would result in something we might be able to mitigate by reducing the damage caused by other sources: for example, moving to electric or hydrogen-powered vehicles with less impactful sources for the electricity (nuclear, solar, tidal, wind), or using sources of protein other than inefficient farming methods to reduce methane production. A small increase could be manageable. A geometric increase definitely is not.
 

I don't think it's that widespread, to be honest. People trying ChatGPT to see what it does? Sure. But as a core part of their work, I don't think it's massively used. Professionals might be more inclined to use dedicated tools, and LexisNexis' AI assistant is only a few months old; I don't think it is widespread enough to have a significant effect on the market yet. Even if it had, we wouldn't have statistical data showing an effect on the price of representation for at least a year.
As mentioned in another thread, CLE courses are already being offered on how to ethically use AI tools in a legal practice, and bar associations are leaning towards making familiarity with them a standard of basic legal competence.

AI is being incorporated into the legal profession with alarming speed, and IMHO, that’s not a good thing.
I think it’s also fair to say that different subjects and sectors need different kinds of learning. Medicine does require a great deal of rote learning and the ability to correlate multiple apparently unrelated facts into recognisable patterns, and that’s what the exams are generally designed to test. If you don’t know that sarcoidosis can present in certain ways, and can’t then recognise a collection of unrelated symptoms and signs as possibly being sarcoidosis, you don’t have much business being a doctor. This isn’t stuff you can look up on the fly in the consultation room.
As the son of an MD and someone who has been a medical “zebra” more than once, the art and skill of diagnosing afflictions is one of the more amazing aspects of medical practice.

So many things have overlapping symptoms; so many have radically esoteric symptoms. There’s a reason why dedicated diagnostic programs are still under 75% accurate (as compared to trained MDs).
Also, they would be exposed to criticism of their stance of "it stops here, no more", because there is no reason the current (or yesterday's) situation is optimal. China is making this point to Western countries: "sure, you got to pollute to your heart's content, and now you're saying 'no more' just when it's others' turn to benefit from massive energy consumption... that's a conveniently placed line in the sand!"
Thing is, in certain contexts, “it stops here, no more” is perfectly reasonable and justified. In a closed system, as damage accumulates, the system can be utterly destroyed. “Fairness” and taking turns don’t matter unless and until the damage can be repaired.
 

Because of the current growth in scale, we'll not be able to mitigate the damage. Keeping things to a lesser scale, with known beneficial applications, would result in something we might be able to mitigate by reducing the damage caused by other sources: for example, moving to electric or hydrogen-powered vehicles with less impactful sources for the electricity (nuclear, solar, tidal, wind), or using sources of protein other than inefficient farming methods to reduce methane production. A small increase could be manageable. A geometric increase definitely is not.
Frankly, I think the bets are moving on from mitigation at this point. At least in academia, funding decisions, faculty hires, and the rhetoric of climate scientists are all gradually moving towards geoengineering solutions.

Based on the amount of forcing and the sensitivity of the climate system, and the political infeasibility of mitigation, changing this or that just doesn't get close. Mitigation requires a worldwide, fundamental shift in the human relationship with energy. I don't think it's in the cards.
 

Generative AI is not objective or neutral. It can, and will, be skewed for various reasons.

As an example, Musk just announced a new release of his social media platform's AI "Grok", because he didn't like the answers it was giving. Musk has chosen to introduce bias in his generative AI to meet his own personal needs and goals.
Aaaaand it looks like he’s finally getting his wish.

 

Frankly, I think the bets are moving on from mitigation at this point. At least in academia, funding decisions, faculty hires, and the rhetoric of climate scientists are all gradually moving towards geoengineering solutions.

Based on the amount of forcing and the sensitivity of the climate system, and the political infeasibility of mitigation, changing this or that just doesn't get close. Mitigation requires a worldwide, fundamental shift in the human relationship with energy. I don't think it's in the cards.
Perhaps. Perhaps not. Mitigation still means that less needs to be done in order to alter course.
 
