AI/LLMs AI art bans are going to ruin small 3rd party creators

Of course not. Wikipedia can always improve its material, but there's a point beyond which LLM output will not get better.

So both curves (accuracy vs. time) are asymptotic to 100%, and of course those curves won't have the exact same shape, but you are saying there is a qualitative difference?
 



I notice that once again you have made quite a big leap, from 'just about all works of art and design that exist' to 'Netflix movies', but nonetheless:

You are saying that your position is that the writing, direction, acting, and production of such movies (or all movies) are all purely commercial calculations, with the only consideration being projected revenue, and no consideration is given whatsoever to craft or art or message or ethics? If they could make $1 more by doing something abhorrent, they would do so?
<Rewatches Krull>

Yeah, pretty sure if money were all that mattered, more than two characters would be beautiful women, and it wouldn't be nearly as weird.
 

Whereas Wikipedia's improved processes have made it immune to false edits, right?

Touché. I concede defeat.

Well, it's a false comparison: "false edits" are a problem, but they are a problem that can be mitigated by human processes, in the same way we can create processes through our own discernment to mitigate things like misinformation. Wikipedia does this: it can revert bad edits (and has a habit of doing so quickly), can lock articles, and can hold discussions about the best way to frame controversial and potentially biased facts in an article. It is not perfect (and, again, because it can be modified at any moment, even a corrected article makes for a shaky source), but there are processes that can lower and mitigate these problems.

As Umbran said, hallucinations just can't be solved: you are giving a problem to an algorithm that can't think on its own, and by design it tries to solve that problem even when there isn't a clear answer. It will take what information it has and stitch it together to look coherent enough to be believable, because it is told to find an answer and, as the article says, it doesn't really do "uncertainty". And, I'll be honest, I think the hallucination rates given in the article probably exceed Wikipedia's bad-edit rates by an order of magnitude or two.

So in the end, the problems are different: both are problems, but one has mitigating processes meant to make things more accurate. The other simply can't, by the nature of the beast.
 

I don’t know what to say that has not been said about AI.

However I disagree with the main premise of the thread in this case.

Art is a one-time cost before publication. After that, you print what you have.

If the launch of a game is wrecked by, say, $5,000 (just an example), then the print run we are talking about must be very small indeed.

Small publishers are selling right now! If the big boys resort to AI do we really assume they will sell product at a discount?

I don’t think they will, and the 60 bucks I just paid for Shadowdark was not influenced by WotC selling a book 4 dollars cheaper.

The little guys' margins are thin and will remain thin. The big boys are not going to Aldi them out of business by passing along a little portion of their savings.
 

Several years ago, in an AI thread, I wrote that AI bans would help the major corporations and hurt 3rd party creators. I faced some criticism for this.

Now exactly that is coming to pass. Foundry recently banned AI art on its Marketplace. As a result, most of the small mom-and-pop 3rd party creators will need to remove their products or remove the art in their products.

Paying for art for 100 different monsters in a monster supplement, or dozens of NPCs in an adventure, is not a viable financial option on a publication that might sell 100 copies. Removing the art will make their products inferior to what is being published by the mega corporations that are selling thousands of copies and can absorb the cost to pay artists.
You're upset that AI art bans prevent you from exploiting artists as easily as the larger soulless corporations do?

cry me a river, then go bankrupt
 

Well, it's a false comparison: "false edits" are a problem, but they are a problem that can be mitigated by human processes,

But at any given time the information you are relying on might be false, because the article you are looking at might have mistakes that have not yet been corrected by human editors.

Over time Wikipedia has improved its accuracy, and will continue to do so. But it will never be perfect.

Over time LLMs have improved their accuracy, and will continue to do so. But they will never be perfect.
 

But at any given time the information you are relying on might be false, because the article you are looking at might have mistakes that have not yet been corrected by human editors.

Over time Wikipedia has improved its accuracy, and will continue to do so. But it will never be perfect.

Over time LLMs have improved their accuracy, and will continue to do so. But they will never be perfect.

This is a massive overgeneralization of what we are talking about. Saying "Nothing is ever 100% accurate!" doesn't suddenly close the gap between the problems with Wikipedia and the problems with LLMs, nor does it negate the inability of LLMs to solve said problem.

In fact, it fundamentally misses what solved the former's problem and why the latter's cannot be fixed: actual intelligence. What makes Wikipedia's solution work is that it has actual people evaluating and deciding things, improving it and making it better. An LLM's code can be improved, but it is not sentient and can't evaluate things like a person, which is why hallucinations will always be a problem and can never be solved. At the end of the day, it's merely algorithmic, not intelligent.
 

This is a massive overgeneralization of what we are talking about. Saying "Nothing is ever 100% accurate!" doesn't suddenly close the gap between the problems with Wikipedia and the problems with LLMs, nor does it negate the inability of LLMs to solve said problem.

In fact, it fundamentally misses what solved the former's problem and why the latter's cannot be fixed: actual intelligence. What makes Wikipedia's solution work is that it has actual people evaluating and deciding things, improving it and making it better. An LLM's code can be improved, but it is not sentient and can't evaluate things like a person, which is why hallucinations will always be a problem and can never be solved. At the end of the day, it's merely algorithmic, not intelligent.

In terms of "what we were talking about": that was the similarity between encyclopedia companies trying to scare users away from Wikipedia because of its untrustworthiness, and people today trying to scare users away from LLMs, using the same tactics and (largely) the same motivations. (By which I mean, they don't really care whether high school students put incorrect facts in their term papers, but they definitely see Wikipedia/AI as a threat and want it to die.)

All that noise about intelligence vs. algorithms, as true as it may be, is an irrelevant distraction from that.
 

In terms of "what we were talking about": that was the similarity between encyclopedia companies trying to scare users away from Wikipedia because of its untrustworthiness, and people today trying to scare users away from LLMs, using the same tactics and (largely) the same motivations. (By which I mean, they don't really care whether high school students put incorrect facts in their term papers, but they definitely see Wikipedia/AI as a threat and want it to die.)

All that noise about intelligence vs. algorithms, as true as it may be, is an irrelevant distraction from that.

Yeah, but those problems aren't similar, and saying "scare users" betrays your own bias: you are not interested in how they are different, but only in the similarity that people call them "untrustworthy". Your argument ignores why they are untrustworthy, how these problems come about, and how they are addressed on either side.

For Wikipedia, it has made an incredibly earnest effort, using dedicated humans, to improve to where it is today. And even then, that's not enough for most of us to use it as a proper citation... but it's doing the work. LLMs simply can't, for reasons that have been described to you and that you seemingly write off as "well, it will improve", without recognizing that even the companies say it can't be fixed. It's a limitation of the model itself.

To wit, the distraction here is not pointing out the differences; it's you trying to conflate the two as a way of giving LLMs more legitimacy than they have earned. You're trying to stealth in magical improvements that are unlikely, on the grounds that other "untrustworthy" sources have managed to improve themselves, without recognizing that the improvements they used relied on human beings and their ability to parse, organize, and evaluate, something that LLMs simply can't do.
 
