Judge decides case based on AI-hallucinated case law

Well, actually....

We are spending a lot of power and water on an infrastructure that allows us to have silly discussions... and much more. Most of American (perhaps human) commerce happens over that infrastructure, for example, as well as much of our personal and business communication.
Another user in this thread explained that things were better 40 years ago, so maybe we should also stop doing all those other things and get back to doing them over the phone like we used to. We're simply choosing to do them conveniently with this infrastructure, considering the benefit outweighs the cost.


If we depend on AI to do the job, and it fails, that's the opposite of justice and fairness.

That's why you don't use AI and style yourself a lawyer (or worse, handle your own cases). You go to a lawyer who, with the help of modern tools, can accept your case and bill you fewer hours than it would have cost before the tools appeared. Sure, the tools might fail sometimes, but as long as they don't fail too often or too sneakily for a skilled user to detect, they'll lower the lawyer's workload. If they don't, then the tools probably won't be on the market for long. So, before the tool, you couldn't afford a lawyer, and now you can. That's working exactly in the direction of justice and fairness.

On the judge's side, if one could automate part of the job so the judge can focus on the most essential work, and decrease the time taken so people going to court don't have to wait months for a result, that's also a net positive, even when money isn't involved.

This is true whether the tool is a searchable database of cases, an AI, or a word processor, all of which consume more resources than reading a paper code in a library.
 


Nope. I’m stating factually that generative AI is never going to be a good replacement for a lawyer, because the only context in which AI makes the legal system accessible to more people is by “replacing” lawyers.
I would tend to agree, which is why I made that assumption when replying to a previous post. At best we can expect it to be used as it currently seems to be: replacing clerks, researchers, and office staff, resulting in no reduction in costs to the client.
 

we could all just lie down and die /s
So, this is hyperbolic, yet it also gets at the heart of the roles we assign to AI. Because it really does raise the question "what is the point of you?" in a lot of situations.

I have unfortunately become something of an expert in this as an IB Diploma teacher. Cheating using AI is a huge and growing problem, and it completely undermines my job. A student handing in bad writing is great - it means there is lots to work with and learning can happen. A student handing in AI-massaged or even entirely AI written work may as well not even bother.

AI work is also extremely generic and easily recognizable. And yes, I am very aware of the many hacks students can use to try to disguise their use of AI tools. They are mostly very obvious unless the student knows the material so well that they would have been better off without them. The most important thing about student writing is that it read like something only they could have written, and if it doesn't...well, what's the point of them? To be an AI-delivery system?

I do think AI has an important role when it is used to extend human creativity, perhaps by handling rote tasks or helping us past certain creative boundaries. I'm not anti-AI. I use it myself! But we are only starting to grapple with the implications for how we live and work and learn.
 


I would tend to agree, which is why I made that assumption when replying to a previous post. At best we can expect it to be used as it currently seems to be: replacing clerks, researchers, and office staff, resulting in no reduction in costs to the client.

Lower production costs have historically translated to lower prices for goods (former luxury goods have become commonplace, even necessities) and services, like banking, travel, and accountancy...
 


Hmmm… the use case for disabled students is interesting, I’ll have to think about that. There are of course all sorts of entirely valid use cases for machine learning, such as distribution logistics or analysing vast amounts of scientific data (like radio telescope pictures of distant galaxies, according to a talk I went to last month).

There are lots of problems with generative AI, one of which is that it’s very much a solution looking for a problem, thanks to private capital and corporations having invested so much money into developing it, which is why there’s so much genAI in everything we’re offered now, from Duolingo to Google. Does any of it help? Not really so far, as far as I can tell.

In my specific sector (medicine, specifically primary care, but also epidemiology) genAI is of very little use so far. The limited use case is genAI scribing for doctors who have to write notes not in their first language (a common issue for international medical graduates) but those functions still hallucinate a lot and a doctor who is not writing in their first language is, not surprisingly, less likely to catch the mistakes. So not really a viable solution, sadly.

GenAI has so far not been any use for decision support (“do you think this might be cancer?” and so on), which is what people keep hoping for. So it’s not going to replace clinicians in diagnosis (let alone treatment) any time soon.

The NHS is talking about rolling out a machine learning triage tool for the latest version of its app - basically Dr Google - and if Dr Google (a much better developed system) is anything to go by, it will just increase unnecessary consultations and anxiety. Which is why health secretaries should not take policy advice from investment-based think tanks.
 

An anecdote: the International Baccalaureate diploma is very rigorous - it is basically modeled on British A-levels, and students earn university transfer credit if they earn high enough marks in a subject. Although some of the assessment is done internally, all of it is moderated by the IB, and most courses conclude with sets of rigorous exams that are anonymized and externally assessed.

In Language and Literature, which I teach, the last exam students write is their Paper 2, an exacting compare/contrast essay discussing two studied works of literature in response to one of four prompts that are unknown until the start of the exam (example: "In the work of two authors you have studied, compare how the physical setting creates a tension that sustains the reader’s interest."). In order to succeed, students cannot discuss their texts in general terms; they must know them inside out and be able to support their argument with specific references, which means memorized quotations. So one thing they do when revising and prepping their focus texts is to memorize a number of key quotations that could be unpacked in the context of various essay prompts - basically, quotations that speak to the heart of their texts.

This year, as the students gathered for the exam and we teachers were there to cheerlead them, a number were flexing their knowledge a bit - "look at how well I know my stuff" - the usual business. And one student started rattling off quotations from British poet Simon Armitage to me.

Except they weren't. I knew this instantly, as I had, just this term, prepared a new study guide for Armitage. The student argued with me, and then even showed me his study notes, with a huge number of quotations that purported to be from Armitage but were instead generic pablum that approximated the themes of his work as generated by, say, an algorithmic program with no actual sentience or comprehension. The student then confessed that he had looked them up the night before using, you guessed it, ChatGPT. Which had just invented BS, but because he hadn't actually studied, he bought it.

Unfortunately, this was literally as he was walking into an exam worth 25% of his final mark. So I told him not to use any of the BS and just do his best with what he knew and...we will see. Results are published later today, actually. Not good, though.

And what struck me was how confident he was in the AI. As mentioned, I had personally created a rigorous study guide for them, which included tons of relevant quotations, but required effort. And instead, he had come in confident that a few hours with AI could replace putting in the work.
 

We'll have to put you on the design team for the next version of humans.

I mean, feel free to look into the research.

The more 'online', 'connected', and 'social media engaged' one is, the less happy, more anxious, and less healthy one is.

The research is reporting what some of us have been calling out for the last 5-8 years; it's not that complicated.

On the other hand, those who get out into the fresh air and engage with the natural world (forests, rivers, fields, aka touching grass) are healthier, happier, and less stressed.

We do not need a next version of humans, we need to go back to being human.
 

Dopamine is also a driver of being online.
 
