Judge decides case based on AI-hallucinated case law

My trust in AI, such as it is, erodes further:



 


My trust in humans, who connected something they were testing to a production database, has eroded to the point that humans shouldn't really be used for mission-critical tasks.

This article made the news worldwide:
While interns or contracted workers making mistakes with a database are a daily occurrence, they generally go unreported. This article, for example, estimates the cost of human mistakes causing data loss in the range of trillions of USD.

The difference in treatment is interesting. While it's impossible to know whether the detected AI errors are just a few among a sea of undetected ones, leading to under-reporting, this coverage is a prime example of over-reporting, due to AI being in the limelight right now. I am pretty sure each and every automated-car crash will get ample coverage, while the thousands of daily crashes involving human drivers will barely rise above local media attention, if that.
 

The difference in treatment is interesting. While it's impossible to know whether the detected AI errors are just a few among a sea of undetected ones, leading to under-reporting, this coverage is a prime example of over-reporting, due to AI being in the limelight right now. I am pretty sure each and every automated-car crash will get ample coverage, while the thousands of daily crashes involving human drivers will barely rise above local media attention, if that.
Cars are a good example. When AI performs poorly at something we find easy, like not stopping forever because someone put a cone on top of your car, it is taken as evidence of how stupid it is. But when it performs well, e.g. decreasing crashes by an order of magnitude, that doesn't fix the initial perception.

Likewise, an AI can delete the codebase and look stupid. But a different model just demonstrated gold-medal performance at the IMO (International Mathematical Olympiad). Sometimes brilliant, sometimes stupid. More brilliant than stupid, if you're using them right.
 


It’s neither “brilliant” nor “stupid” because it’s incapable of thinking. Which is the problem: people react to it as if it could think, when what it is really doing is faking it.
Ironically, I think exactly the same thing. But in my reading, the problem is that people treat it like a human and judge its errors as they would human error.
 

Ironically, I think exactly the same thing. But in my reading, the problem is that people treat it like a human and judge its errors as they would human error.

Who makes the error does not change the damage done by the error. Human crash or self-driving crash, what matters is that someone is injured or killed.
 

Who makes the error does not change the damage done by the error. Human crash or self-driving crash, what matters is that someone is injured or killed.

Indeed, so we should rationally use the system that lessens the overall damage. However, I think perception will prevent that. If, at some point in the future, it is demonstrated that fully self-driving cars would reduce the overall number of injuries, I am pretty sure there will still be people pleading to allow human driving anyway.
 

It’s neither “brilliant” nor “stupid” because it’s incapable of thinking. Which is the problem: people react to it as if it could think, when what it is really doing is faking it.
Extended automatic wilderness random encounter table, with stacked results. Every now and then your level 1 party is going to get "Tarrasque."
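A minimal sketch of that analogy in Python, using only the standard library; the entries and weights below are invented for illustration, not taken from any actual encounter table:

```python
import random

# Hypothetical stacked wilderness encounter table. Entries and weights
# are made up; the point is that the absurd outlier is always in the
# distribution, so roll often enough and it will come up.
ENCOUNTERS = ["goblins", "wolves", "bandits", "owlbear", "Tarrasque"]
WEIGHTS = [40, 30, 20, 9, 1]  # out of 100: roughly a 1% "Tarrasque"

def roll_encounter() -> str:
    """Draw one weighted result from the table."""
    return random.choices(ENCOUNTERS, weights=WEIGHTS, k=1)[0]

# Ten rolls are usually mundane; a long campaign eventually isn't.
for _ in range(10):
    print(roll_encounter())
```

The low-weight entry is the whole point of the analogy: most draws look sensible, but the tail event never leaves the table.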
 


Who makes the error does not change the damage done by the error. Human crash or self-driving crash, what matters is that someone is injured or killed.
There are errors humans make that are interpreted as signs of more general failings. E.g., if an engineer deleted your database by mistake, you'd say "this fellow is probably not good at other tasks". Our intuition for these connections is pretty good.

But it isn't good when applied to robots. If a human stopped driving because of a cone, we'd say "this fellow is not a good driver". But self-driving cars do fall for this exploit, and yet seem to be better drivers than humans.

Likewise, your database-deleting engineer is probably not going to get IMO gold. But AI did. Or, a human lawyer who makes up fake cases would be an awful research assistant. But LLMs do hallucinate and yet are useful in this regard.

It seems to me people are treating AI failures as they would human failures rather than recognizing that AI works differently. The same failings don't have the same implications.

Indeed, so we should rationally use the system that lessens the overall damage. However, I think perception will prevent that. If, at some point in the future, it is demonstrated that fully self-driving cars would reduce the overall number of injuries, I am pretty sure there will still be people pleading to allow human driving anyway.
Waymo's data purportedly shows this; that is where I got the order-of-magnitude statistic from. If anything, I have seen more opposition to these cars now that they are widespread in some areas.

Whether their statistics hold up will need an independent party to take a look, though.
 
