Judge decides case based on AI-hallucinated case law

dopamine is also a driver of being online

Yes, when I say it's not complicated, I mean it!

That cheap, 24/7-accessible dopamine hit that is the internet, that is the 'likes', that is the 'views', that is the doom-scrolling, meme-guzzling, YouTube-algorithm, Twitch/Twitter/Bluesky WHATEVER firehose of nonstop content?

THAT is the problem.

There are studies on that as well. People are all naughty word up on cheap dopamine, and do not even know how to get that 'natural high' anymore.

Too hard? Maybe an AI can generate the feeling for us.
 


AI work is also extremely generic and easily recognizable. And yes, I am very aware of the many hacks students can use to try to disguise their use of AI tools. They are mostly very obvious unless the student knows the material so well that they would have been better off without it. The most important thing about student writing is that it reads like something only they could have written, and if it doesn't...well, what's the point of them? To be an AI-delivery system?

At some point in the future, maybe, we can rely entirely on automated systems to replace jobs completely. This utopia isn't what we get today, unfortunately. Today, at best, AI can be integrated into production workflows to increase productivity. It can write a speech for you, but if you're unable to write a speech yourself, you'll probably just end up with... an AI speech. If you're good at writing speeches, you can use it to do the job in less time than starting from scratch, or to proofread, or to ask the AI whether it notices any consistency problems you might have overlooked.

In the example you gave about your student, unskilled users are penalized because they can't find the faults. They are penalized for misusing the tool -- some might say that's a good thing, as it teaches them not to rely on tools they don't understand and to listen to their professors. The fact that he was overconfident might be a sign of his young age; he thought he knew better than you.

Among more adult students, the "what is the point of you" question is more striking. If your whole job can be done by an AI, then you'd better not be defined by your ability to do a specific job. Which, fortunately, most of us aren't.

When/If it reaches the job replacement stage, we'll need to think collectively about how we share wealth in our societies. But this is quite a way down the road -- even if such things are better dealt with before the fact than after. While the extremes of the 19th century's industrialization led to a century of improvement in quality of life for workers, it didn't happen without a few bloody conflicts that could have been avoided. But, as I said, we aren't there yet.

What do we have right now? We're just seeing another new tool which will change how we do things. For some jobs, it might be a big change (as much as industrial clothing all but replaced tailored clothing); for some it might be challenging (the way teaching maths had to adapt when pocket calculators with formerly-supercomputer-level power became widespread) and need some evolution in the way things are done; for others it might just be a new tool that increases productivity if used correctly; and for a few it will simply have no effect. It is getting the spotlight because it's new, which is why we got the initial article: I guess there would have been no article published if Husband's attorney had said "OK, the cases are made up because I asked an intern to do the job, and he was unhappy because I commented he shouldn't smoke weed in the office, and I didn't bother to proofread the result".

But it's no different from any other technological development.
 

Well, actually....

"“What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload,” says Noman Bashir, lead author of the impact paper, who is a Computing and Climate Impact Fellow at MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL)."
...
"“The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants,” says Bashir."
...
"Plus, generative AI models have an especially short shelf-life, driven by rising demand for new AI applications. Companies release new models every few weeks, so the energy used to train prior versions goes to waste, Bashir adds. New models often consume more energy for training, since they usually have more parameters than their predecessors."


And, in addition to the energy use in training...
"Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search."

Emphasis mine.
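To put that multiplier in rough perspective, here is a back-of-envelope sketch. Everything in it other than the 5x figure quoted above is an assumption for illustration: the ~0.3 Wh per web search is a commonly cited ballpark, and the query volume is purely hypothetical.

```python
# Back-of-envelope: extra electricity from AI queries vs. plain web searches.
# Only the 5x multiplier comes from the estimate quoted above; the other
# numbers are illustrative assumptions.

SEARCH_WH = 0.3                   # Wh per plain web search (assumed ballpark)
AI_MULTIPLIER = 5                 # "about five times more electricity"
QUERIES_PER_DAY = 1_000_000_000   # hypothetical daily AI query volume

ai_query_wh = SEARCH_WH * AI_MULTIPLIER             # Wh per AI query
extra_wh_per_day = (ai_query_wh - SEARCH_WH) * QUERIES_PER_DAY
extra_mwh_per_day = extra_wh_per_day / 1e6          # Wh -> MWh

print(f"Extra energy per query: {ai_query_wh - SEARCH_WH:.1f} Wh")
print(f"Extra energy per day:   {extra_mwh_per_day:,.0f} MWh")
# -> 1.2 Wh per query and ~1,200 MWh/day: the output of a ~50 MW
#    plant running around the clock, under these assumptions.
```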



We are spending a lot of power and water to have an infrastructure that allows us to have silly discussions... and much more. Most of American (perhaps human) commerce happens over that infrastructure, for example, as well as much of our personal and business communication.



If we depend on AI to do the job, and it fails, that's the opposite of justice and fairness.
When it fails. But yeah, that.
A lot of people find AI useful for a lot of reasons.
A lot of people find it useful to do drugs, too. 🤷‍♂️

It’s literally making people dumber, decreasing the quality of work, generating misinformation, harming the planet on a scale that vastly outdoes the rest of the global digital workload combined, and contributing to increased isolation and disconnect from society.

It is only beneficial to the people who own and run the companies using it to sell cheap slop and pretend it’s the same thing they made before with human labor.
I would tend to agree, which is why I made that assumption when replying to a previous post. At best we can expect it to be used as it currently seems to be: replacing clerks, researchers, and office staff, resulting in no reduction in costs to the client.
Yep. And a reduction in quality of work.
 


An anecdote: the International Baccalaureate diploma is very rigorous - it is basically modeled on British A-levels, and students earn university transfer credit if they earn high enough marks in a subject. Although some of the assessment is done internally, all of it is moderated by IB, and most courses conclude with sets of rigorous exams that are anonymized and externally assessed.

In Language and Literature, which I teach, the last exam students write is their Paper 2, an exacting compare/contrast essay discussing two studied works of literature in response to one of four prompts that are unknown until the start of the exam (example: "In the work of two authors you have studied, compare how the physical setting creates a tension that sustains the reader’s interest."). In order to succeed, students cannot discuss their texts in general terms; they must know them inside out and be able to support their argument with specific references, which means memorized quotations. So one thing they do when revising and prepping their focus texts is to memorize a number of key quotations that could be unpacked in the context of various essay prompts - basically, quotations that speak to the heart of their texts.

This year, as the students gathered for the exam and we teachers were there to cheerlead them, a number were flexing their knowledge a bit - "look at how well I know my stuff" - the usual business. And one student started rattling off quotations from British poet Simon Armitage to me.

Except they weren't. I knew this instantly as I had, just this term, prepared a new study guide for Armitage. The student argued with me, and then even showed me his study notes, with a huge number of quotations that purported to be from Armitage but were instead generic pablum that approximated the themes of his work as generated by, say, an algorithmic program with no actual sentience or comprehension. The student then confessed that he had searched them up the night before using, you guessed it, ChatGPT. Which had just invented BS, but because he hadn't actually studied, he bought it.

Unfortunately, this was literally as he was walking into an exam worth 25% of his final mark. So I told him not to use any of the BS and just do his best with what he knew and...we will see. Results are published later today, actually. Not good, though.

And what struck me was how confident he was in the AI. As mentioned, I had personally created a rigorous study guide for them, which included tons of relevant quotations, but required effort. And instead, he had come in confident that a few hours with AI could replace putting in the work.
That’s a really good argument for making exams more important for qualifications, I think. Coursework can always be written by genAI but exam answers would be harder to duplicate effectively (as long as the examiners are knowledgeable).
 


That’s a really good argument for making exams more important for qualifications, I think. Coursework can always be written by genAI but exam answers would be harder to duplicate effectively (as long as the examiners are knowledgeable).

Indeed, but a timed exam doesn't test the exact same skills as work you can do at home over a longer time. There are some exams that are done under supervision over a few days to prevent outside communication, but they are logistical nightmares and impractical for large numbers of students. So you can check qualification for key civil service jobs this way, but not university exams.
 

It’s literally making people dumber, decreasing the quality of work, generating misinformation, harming the planet on a scale that vastly outdoes the rest of the global digital workload combined, and contributing to increased isolation and disconnect from society.
I think the comparison to drugs is so far off the mark that it probably isn't worth continuing on this topic. I'll just say I find it useful and leave it at that.
 


A combination of a bad lawyer (probably using an LLM) and a bad judge (who couldn't care less, and could have been fooled just as easily by a human-invented [or misunderstood] case) let that happen. I nonetheless propose we don't ban lawyers and judges yet...

Sure.

By the same token, you thereby cannot use, "Well a good lawyer would do X, Y, and Z, so it is fine," as a defense of the tool. We have demonstrated that bad lawyers exist, and so our use-case for generative AI needs to include that issue. It cannot be dismissed as irrelevant.

There's an adage in the software-development field: "Software will not and cannot fix a fundamentally broken process." AI won't make the failings of lawyers better, and may indeed make them worse.

What I have not seen you address yet are the patterns of behavior that develop in the users of AI, as they come to depend upon it. Does a good lawyer stay a good lawyer when using the tool on a repeated basis, or do they slip into bad habits?

The jury is still out on that one, but early indications are that if you make a habit of using generative AI to prepare materials, you pay less attention to the content - the study I saw showed that doing just one essay a month this way leads to lowered retention of the subject matter of the resulting piece, and less holistic thought about its contents. This does not bode well for using it as a regular tool and depending on the user to double-check it.

I also think the article would get fewer views if it were titled differently.

Is anyone here using the number of views as a metric for anything? Because I wasn't. Why is the number of views relevant?
 
