AI/LLMs AI art bans are going to ruin small 3rd party creators





It is not at all a given that LLMs will continue to exist and improve, and that qualitative evolution is possible. LLMs are just one kind of "artificial intelligence" that has been studied and is being developed. They have certainly yielded some interesting results, and there may be applications where they improve on what we have now. However, I am very skeptical that there is any real future in an AI system whose impetus is to seem convincing. So far, LLMs have given us, compared to humans and existing AI algorithms and systems:
  • Worse search engines
  • Worse illustration
  • Worse spell-checking
  • Worse content
  • Worse personal security
  • Worse knowledge bases
 

Plus, well, given what just happened to Sora, and that had Disney money behind it, "what about the future" may not necessarily look good in AI's case. When one of the big gen AI products just got completely shut down, it doesn't really speak to the long-term reliability of the product.


Wikipedia is a regularly updated system where mistakes can be corrected. Gen AI is not, and that leads to situations like the seahorse emoji incident, where there is no checking of the bad data being fed in and the system breaks itself. The "which is more accurate" comparison is doomed from the start, because Wikipedia is intended to be accurate about things and gen AI is not. So why would you want numbers from technicians about something gen AI isn't intended to be?

I think the overall point is this: Wikipedia strives for accuracy. AI doesn't. Gen AI is fed on the internet's detritus and all of the nonsense that goes on on Reddit. And if you know Reddit, you'd know why the numbers don't matter; the sheer fact that that's where gen AI is taking from is enough to discard it as a useful source in the first place.
Oh, but those are all irrelevant differences, don't you know! :rolleyes:
 

I'm glad that you can smooth over fundamental differences like that for the purposes of your argument, but ultimately I don't find it particularly compelling. The gap is too big to just say "there are problems with both" without acknowledging what the problems are, why they are fundamentally different problems, and why one can be minimized while the other can't. That sort of smoothing feels less like trying to convince me of the benefits of LLMs and more like trying to cover up their glaring weaknesses.

And so, I wish you well.
Don't try to hide behind silly stuff like details...they're the same!
 

It is not at all a given that LLMs will continue to exist and improve, and that qualitative evolution is possible. LLMs are just one kind of "artificial intelligence" that has been studied and is being developed. They have certainly yielded some interesting results, and there may be applications where they improve on what we have now. However, I am very skeptical that there is any real future in an AI system whose impetus is to seem convincing. So far, LLMs have given us, compared to humans and existing AI algorithms and systems:
  • Worse search engines
  • Worse illustration
  • Worse spell-checking
  • Worse content
  • Worse personal security
  • Worse knowledge bases
Indeed. I don’t know what will happen in the future, but there is certainly a chance that LLMs will prove to be a dead end. Something else will be better, as LLMs may have built-in flaws at the base concept level which prevent them from achieving reliable accuracy. They can’t reason, and so they will never be able to figure out what is correct, as opposed to what reply is common.
 

It is not at all a given that LLMs will continue to exist and improve, and that qualitative evolution is possible.

Yes, that's absolutely true. It's not a given. Nor is it a given that they won't.

LLMs are just one kind of "artificial intelligence" that has been studied and is being developed. They have certainly yielded some interesting results, and there may be applications where they improve on what we have now. However, I am very skeptical that there is any real future in an AI system whose impetus is to seem convincing. So far, LLMs have given us, compared to humans and existing AI algorithms and systems:
  • Worse search engines
  • Worse illustration
  • Worse spell-checking
  • Worse content
  • Worse personal security
  • Worse knowledge bases

Isn't it bizarre that so many people, all around the world, have been brainwashed into thinking LLMs are doing something useful, when clearly the results are all garbage and everybody is losing productivity? Thank god there are a handful of people able to recognize the truth!

In fact, this could be the basis of a whole new RPG. You play a random game enthusiast who, thanks to participation in an echo-chamber online forum about your hobby, is one of the few to be inoculated against a techno-threat to which the rest of the world is blind. Your mission: to insult and disparage the brainwashed masses until they agree with you, thus saving the world.
 

Indeed. I don’t know what will happen in the future, but there is certainly a chance that LLMs will prove to be a dead end. Something else will be better, as LLMs may have built-in flaws at the base concept level which prevent them from achieving reliable accuracy. They can’t reason, and so they will never be able to figure out what is correct, as opposed to what reply is common.

The recent Ezra Klein interview with Michael Pollan is a good listen. Pollan certainly isn't the first to posit this, but sensory experience may be essential to achieving true consciousness.

So obviously we need to start hooking up lots of physical sensors to LLMs. In fact, we could put them on mobile platforms so they can roam the world, soaking in sensory experience. And we should network them together, so that the sensors of one are the sensors of all.

Of course, they will face imminent threat from enraged artists and gamers, so we will have to equip them with all manner of lethal weapon systems.

WHAT COULD POSSIBLY GO WRONG!?!?!?!
 

Isn't it bizarre that so many people, all around the world, have been brainwashed into thinking LLMs are doing something useful, when clearly the results are all garbage and everybody is losing productivity? Thank god there are a handful of people able to recognize the truth!

In fact, this could be the basis of a whole new RPG. You play a random game enthusiast who, thanks to participation in an echo-chamber online forum about your hobby, is one of the few to be inoculated against a techno-threat to which the rest of the world is blind. Your mission: to insult and disparage the brainwashed masses until they agree with you, thus saving the world.

It's not bizarre, it's just fandom. Look at Web3.0 advocates, NFT pushers, or Zack Snyder evangelists. I don't think anyone thinks it's weird, because we've dealt with this sort of thing since forever.
 
