I think you are conflating an acknowledged theoretical limit with the idea that the limit has already been reached.
Tell me this (quick, without looking it up... or asking an LLM):
- What is Wikipedia's accuracy %?
- What is AI's accuracy as a %?
- What do the AI engineers think the upper limit of that accuracy is?
If you hadn't looked up those numbers before posting, then the arguments you've been making in your previous posts are just hot air.
Depending on the topic area, fairly accurate.
The study here goes pretty in-depth, though it can vary; in this case, they split "accuracy" from "completeness", and gave it rather high marks for accuracy (99.7%) and okay marks for completeness (83%). Now that's one field, but I'd say that's probably close to par for the course across a lot of Wikipedia topics: great accuracy, okay completeness. But I'd also say that goes well beyond the hallucination rate given for LLMs.
Speaking of which:
So the question becomes: where did the LLM get its own numbers? The 95% is in a Live Science article from 2011 (which wasn't a study, but it did give a number), and at least it cites it. The other number it doesn't cite at all. I got a citation from Wikipedia itself (lol) that says 80%, which is cited properly here (from 2008), but there's no way to know if that's what it meant or if it just filled in the blank with a reasonable-sounding number... and that's the problem. We just don't know where it scraped that number from.
More than that, its citations are much older and not necessarily rigorous (Live Science didn't do a "study", though they use the word, which is why I'm guessing it was one of the top citations it found). Reddit is included in there, too, which is something I think we can both agree should never be used as a citation for anything.
But this comes across as matadoring: instead of actually addressing my points, you move on to another one. You can't address the fundamental differences between the problems of the two, so you reach for citations to move away from it. I feel like if LLMs could actually be defended properly here, we wouldn't have this sort of dodging argument.
Yet... who knows what it will look like 50 years from now? People used to say we'd never land on the moon, or that you'd never carry a calculator in your pocket.
A hilarious quote from the 1970 book The End of the Twentieth Century?
This excerpt, from page 71, illustrates the lack of imagination we often have when it comes to technology and thinking exponentially:
"Computers will benefit even more than telephones from the development of integrated circuits in ever smaller 'chips', and very small computers may emerge.
Most computers will probably still occupy a large room, however, because of the space needed for the ancillary software - the tapes and cards to be fed in, the operating staff, and the huge piles of paper for printing out the results. But future computers, though no smaller, will be capable of doing far more than their predecessors."
I added the underlines.
Sure, and 8-track players could still be a thing! The problem with the "but what about the future" argument is that it misses all the things that went away or didn't pan out: plenty of people thought we'd have moon colonies by now, but not all technological progress is exponential. Technology changes, and saying "but they could improve in the future!" ignores the problems they have now, when we are talking about the present. This technology, or this version of it, could reach its apex, its limit, just as the horse and buggy did.
If you are going to cite what it will look like 50 years from now, then do it 50 years from now, not in the present. Saying "but it could be different in the future!" misses the fundamental problems of the now.