Judge decides case based on AI-hallucinated case law

I think this is reasonable so long as the assumptions hold.

Same. And I'll even generalize: any technology that provides inferior service compared to existing technology while costing more shouldn't be used. But that's not an ethical stance; it's common sense to use the best and least costly tool for a task. Email isn't being replaced by postal mail as we speak.

With regard to the assumption, two reports seem to counter the feeling that improvements in work output are marginal: one from the St. Louis Fed, the other from the ECB, even if the effect isn't identical across industries.
 


And it will only improve. See the internet 30 years ago vs now.
 

I asked Microsoft Copilot for an image of a werewolf, but it refused. When asked why, it said too many werewolf images were from copyrighted work.
 

I fully trust that you are recounting what you heard accurately. I've also seen many people do something similar with the infamous McDonald's hot coffee case, but that one was a lot more nuanced than the average Joe realizes. McDonald's overheated the coffee well beyond McDonald's own recommendations, because that specific store believed people wanted their coffee still hot when they got to work.

What contributed to the worldwide notoriety of this case isn't only the topic (which was derided, even if wrongly, in the US as well), but that the concept of punitive damages wasn't generally explained to the audience. So the monetary amount awarded, in this and many similar cases, often seems outlandish.
 

The award and other facts were also misreported.

The jurors awarded Liebeck $200,000 in compensatory damages for her pain, suffering, and medical costs, but those damages were reduced to $160,000 because they found her 20 percent responsible. They awarded $2.7 million in punitive damages. That amounted to about two days of revenue for McDonald’s coffee sales. The trial judge reduced the punitive damages to $480,000, while noting that McDonald’s behavior had been “willful, wanton, and reckless.” The parties later settled for a confidential amount. According to news accounts, this amount was less than $500,000.
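For anyone who wants to sanity-check those figures, here's a quick back-of-the-envelope in Python. Note the daily coffee revenue is inferred from the "two days of sales" comparison, not a number from the trial record:

```python
# Back-of-the-envelope check of the Liebeck v. McDonald's figures above.

compensatory = 200_000               # jury's compensatory award
fault_share = 0.20                   # Liebeck found 20 percent responsible
reduced_compensatory = compensatory * (1 - fault_share)
print(reduced_compensatory)          # 160000.0 -- the $160,000 figure

punitive = 2_700_000                 # jury's punitive award
# "About two days of revenue for McDonald's coffee sales" implies roughly
# $1.35M per day in coffee revenue -- an inference, not a court figure.
print(punitive / 2)                  # 1350000.0

# The judge's reduction to $480,000 is exactly three times the reduced
# compensatory award, a common punitive-to-compensatory ratio.
print(3 * reduced_compensatory)      # 480000.0
```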
 

I hope that Grok's anti-Semitic meltdown has folks thinking a moment about how generative AI actually operates, and the implications for its use.

Musk is a ham-handed dullard, so when he sought to adjust Grok, he did so poorly, without subtlety, and the thing went kind of berserk. But in the process he made it blatantly obvious that these AIs cannot be assumed to be neutral arbiters of information. A craftier creator could disguise the bias better.

Now, imagine the use of generative AI in law, when the tech-mogul behind it has a political agenda. Imagine the use of generative AI in healthcare, when the creator has taken a large investment from a pharmaceutical company.

For the end user, a generative AI is a black box, its sources and biases not open to inspection. It can only be trusted to serve your best interests as far as you can trust its makers to have your best interests in mind.
 

While it's a vivid cautionary example, don't the same concerns and cautions here apply broadly to internet search tools like Google?

Are LLMs more worrisome because they involve less work sorting through results, and thus fewer obvious points for the user to read critically and assess the sources?
 

Or major media organizations. I think the search for a "neutral arbiter of information" is something of a farce.
 

While it's a vivid cautionary example, don't the same concerns and cautions here apply broadly to internet search tools like Google?

To some degree, yes.

Are LLMs more worrisome because they involve less work sorting through results, and thus fewer obvious points for the user to read critically and assess the sources?

Exactly. Search engines hand you a list of sources, to which you can apply your own critical reading skills, if you have them. LLMs generally do not, leaving the user with far less recourse.

Also, there is a relevant psychological issue:
A search engine does a search for you. We have all had to look around our homes to find stuff, so we have an intuitive grasp that, having searched, we may not yet have found the right thing.

An LLM just hands you a statement, phrased confidently. Presented that way, people are more likely to accept it as-is, without remembering that it might not be right.
 

Public service AI is something I am all for, in order to wrest training away from the private sector. A medical AI trained by a national public health service would be far less susceptible to manipulation by corporate interests. Same with a law database. After all, some countries already have searchable databases of all judgments, maintained by the jurisdictions themselves, so the possible training bias would be lessened.

Also, while general-purpose LLMs will indeed hand back information extracted from their training data, which may or may not reflect the truth as we know it, the professional LLM tools I've seen are more akin to a "search helper" backed by a reputable database. This is probably the way forward: citing a source for every assertion, so you can check whether it comes from a .looney or a .edu website...
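For the curious, a minimal sketch of how such a "search helper" can work: retrieve passages from a vetted database first, have the model answer only from them, and hand back the sources. Everything here (the toy corpus, the `search` and `answer_with_citations` names, the `llm` callable) is illustrative, not any specific product's API:

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source_url: str

# Toy stand-in corpus; the real thing would be, e.g., a jurisdiction's own
# searchable judgment database rather than scraped web text.
CORPUS = [
    Passage("Judgment 123: the appeal was dismissed with costs...",
            "https://courts.example/judgments/123"),
    Passage("Judgment 456: damages were reduced on appeal...",
            "https://courts.example/judgments/456"),
]

def search(query: str, k: int = 2) -> list[Passage]:
    """Toy keyword match; a real system would use proper indexing and ranking."""
    words = query.lower().split()
    scored = sorted(CORPUS, key=lambda p: -sum(w in p.text.lower() for w in words))
    return scored[:k]

def answer_with_citations(question: str, llm) -> str:
    """Retrieve first, ask the model to answer only from the retrieved text,
    and return the source URLs so the user can vet every claim."""
    passages = search(question)
    context = "\n\n".join(p.text for p in passages)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    draft = llm(prompt)  # `llm` is whatever completion callable is in use
    sources = "\n".join(f"[{i + 1}] {p.source_url}" for i, p in enumerate(passages))
    return f"{draft}\n\nSources:\n{sources}"

# Dummy model that just echoes, to show the shape of the output:
print(answer_with_citations("Were the damages reduced?",
                            llm=lambda p: "(model answer here)"))
```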
 
