AI/LLMs AI art bans are going to ruin small 3rd party creators

LLMs simply can't, for reasons that have been described to you and that you seemingly write off with "Well, it will improve," without recognizing that even the companies themselves say it can't. It's a limitation of the model itself.

I think you are conflating an acknowledged theoretical limit with the idea that the limit has already been reached.

Tell me this (quickly, without looking it up... on an LLM):
  1. What is Wikipedia's accuracy %?
  2. What is AI's accuracy according to the same metric?
  3. What do the AI engineers think the upper limit of that accuracy is?
If you hadn't looked up those numbers before posting, then the arguments you've been making in your previous posts are just hot air.
 


In fact, it fundamentally misses what solved Wikipedia's problem and why the LLM's cannot be fixed: actual intelligence. What makes Wikipedia's solution work is that it has actual people evaluating and deciding things, improving it and making it better. An LLM's code can be improved, but it is not sentient and can't evaluate things like a person, which is why hallucinations will always be a problem and can never be solved. At the end of the day, it's merely algorithmic, not intelligent.
Yet... who knows what it will look like 50 years from now; people used to say we'd never land on the moon, or that you'd never carry a calculator in your pocket.

How about a hilarious quote from the 1970 book The End of the Twentieth Century?

This excerpt, from page 71, illustrates the lack of imagination we often have when it comes to technology and thinking exponentially:

"Computers will benefit even more than telephones from the development of integrated circuits in ever smaller 'chips', and very small computers may emerge. Most computers will probably still occupy a large room, however, because of the space needed for the ancillary software - the tapes and cards to be fed in, the operating staff, and the huge piles of paper for printing out the results. But future computers, though no smaller, will be capable of doing far more than their predecessors.

I added the underlines
 

Yet... who knows what it will look like 50 years from now; people used to say we'd never land on the moon, or that you'd never carry a calculator in your pocket.

How about a hilarious quote from the 1970 book The End of the Twentieth Century?

This excerpt, from page 71, illustrates the lack of imagination we often have when it comes to technology and thinking exponentially:

"Computers will benefit even more than telephones from the development of integrated circuits in ever smaller 'chips', and very small computers may emerge. Most computers will probably still occupy a large room, however, because of the space needed for the ancillary software - the tapes and cards to be fed in, the operating staff, and the huge piles of paper for printing out the results. But future computers, though no smaller, will be capable of doing far more than their predecessors.

I added the underlines

But... but... physicists know there's an upper limit to how small transistors can get, so those predictions had to be accurate!
 

I think you are conflating an acknowledged theoretical limit with the idea that the limit has already been reached.

Tell me this (quickly, without looking it up... on an LLM):
  1. What is Wikipedia's accuracy %?
  2. What is AI's accuracy according to the same metric?
  3. What do the AI engineers think the upper limit of that accuracy is?
If you hadn't looked up those numbers before posting, then the arguments you've been making in your previous posts are just hot air.

Depending on the topic area, fairly accurate. The study here goes pretty in-depth, though it can vary; in this case, they split "accuracy" from "completeness", and gave it rather high marks for accuracy (99.7%) and okay marks for completeness (83%). Now, that's one field, but I'd say that's probably close to par for the course on a lot of Wikipedia topics: great accuracy, okay completeness. But I'd also say that goes well beyond the hallucination rates given by LLMs.

Speaking of which:

[Screenshot 2026-04-04 181351.png: the LLM's response giving its own accuracy figures for Wikipedia, with citations]

So the question becomes: where did the LLM get its own numbers? The 95% figure is in a Live Science article from 2011 (which wasn't a study, though it did give a number), and the LLM does cite it. The other figure it doesn't cite at all. I found a citation on Wikipedia itself (lol) that says 80%, cited properly there (from 2008), but there's no way to know if that's what the LLM meant or if it just filled in the blank with a reasonable-sounding number... and that's the problem. We just don't know where it scraped that number from.

More than that, its citations are much older and not necessarily rigorous (Live Science didn't do a "study", though they use the word, which is why I'm guessing it was one of the top citations the LLM found). Reddit is included in there too, which is something I think we can both agree should never be used as a citation for anything.

But this comes across as matadoring: you don't really address my points, you move to another point instead. You can't address the fundamental differences between the two problems, so you instead seek citations to move away from them. I feel like if LLMs could actually be defended properly, we wouldn't have this sort of dodging argument.

Yet... who knows what it will look like 50 years from now; people used to say we'd never land on the moon, or that you'd never carry a calculator in your pocket.

How about a hilarious quote from the 1970 book The End of the Twentieth Century?

This excerpt, from page 71, illustrates the lack of imagination we often have when it comes to technology and thinking exponentially:

"Computers will benefit even more than telephones from the development of integrated circuits in ever smaller 'chips', and very small computers may emerge. Most computers will probably still occupy a large room, however, because of the space needed for the ancillary software - the tapes and cards to be fed in, the operating staff, and the huge piles of paper for printing out the results. But future computers, though no smaller, will be capable of doing far more than their predecessors.

I added the underlines

Sure, and 8-track players could still be a thing! The problem with "But what about the future" is that it misses all the things that went away or didn't pan out: plenty of people thought we'd have moon colonies by now, but not all technological progress is exponential. Technology changes, and saying "But they could improve in the future!" ignores the problems they have now, and we are talking about the present. This technology, or this version of it, could reach its apex, its limit, just as the horse and buggy did.

If you are going to cite what it will look like in 50 years, then do it 50 years from now, not in the present. Saying "But it could be different in the future!" misses the fundamental problems of the now.
 

But this comes across as matadoring: you don't really address my points, you move to another point instead. You can't address the fundamental differences between the two problems, so you instead seek citations to move away from them.

If you are unable to understand why those differences... which I acknowledge exist... don't matter, then I don't know what else I can do for you.

Happy gaming.
 

If you are unable to understand why those differences... which I acknowledge exist... don't matter, then I don't know what else I can do for you.

Happy gaming.

I'm glad that you can smooth over fundamental differences like that for the purposes of your argument, but ultimately I don't find it particularly compelling. It is too big a gap to say "There are problems with both" without acknowledging what the problems are, why they are fundamentally different problems, and why one can be minimized while the other can't. That sort of smoothing feels less like trying to convince me of the benefits of LLMs and more like trying to cover up their glaring weaknesses.

And so, I wish you well.
 

Plus, well, given what just happened to Sora (and that had Disney money behind it), "what about the future" may not necessarily look good in AI's case. When one of the big gen AI products just got completely shut down, it doesn't exactly speak to the product's continued reliability.

If you are unable to understand why those differences... which I acknowledge exist... don't matter, then I don't know what else I can do for you.
Wikipedia is a regularly updated system where mistakes can be corrected. Gen AI is not, and that leads to situations like the seahorse emoji incident, where there is no check on bad data being fed in and the system breaks itself. The "which is more accurate" comparison is doomed because Wikipedia is intended to be accurate about things and gen AI is not. So why would you want numbers from technicians about something gen AI isn't intended to be?

I think the overall thing is this: Wikipedia strives for accuracy. AI doesn't. Gen AI is fed on the internet's detritus and all of the nonsense that goes on on Reddit. And if you know Reddit? You'd know why the numbers don't matter; the sheer fact that that's where gen AI is pulling from is enough to discard it as a useful source in the first place.
 

Yet... who knows what it will look like 50 years from now; people used to say we'd never land on the moon, or that you'd never carry a calculator in your pocket.

How about a hilarious quote from the 1970 book The End of the Twentieth Century?

This excerpt, from page 71, illustrates the lack of imagination we often have when it comes to technology and thinking exponentially:

"Computers will benefit even more than telephones from the development of integrated circuits in ever smaller 'chips', and very small computers may emerge. Most computers will probably still occupy a large room, however, because of the space needed for the ancillary software - the tapes and cards to be fed in, the operating staff, and the huge piles of paper for printing out the results. But future computers, though no smaller, will be capable of doing far more than their predecessors.

I added the underlines
Your argument is just, repeatedly, the tired old "luddites" approach, which utterly ignores the fact that technology doesn't 100% improve and benefit humanity: many, many technologies either prove to be bad (nuclear weapons?) and dominate the world anyway, prove to be bad and consequently die on the vine, or just don't work out. It's a simplistic, repetitive argument that requires extensive cherry-picking to work.

This slavish stanning of technology as some kind of panacea for all the world's problems shows either a fundamental misunderstanding of the history of technology and, indeed, the world, or a conscious choice to cherry-pick examples in order to promote an agenda for... reasons. I'm not sure which, but neither option is great.
 

Your argument is just, repeatedly, the tired old "luddites" approach, which utterly ignores the fact that technology doesn't 100% improve and benefit humanity: many, many technologies either prove to be bad (nuclear weapons?) and dominate the world anyway, prove to be bad and consequently die on the vine, or just don't work out. It's a simplistic, repetitive argument that requires extensive cherry-picking to work.

This slavish stanning of technology as some kind of panacea for all the world's problems shows either a fundamental misunderstanding of the history of technology and, indeed, the world, or a conscious choice to cherry-pick examples in order to promote an agenda for... reasons. I'm not sure which, but neither option is great.

Honestly, a lot of it reminds me of the blockchain and NFT arguments from several years back.
 

