The AI Red Scare is only harming artists and needs to stop.

The AI fear-mongering is driven by scared elites who fear that regular folks will realize they no longer need them and can get their needs met by AI, shifting the balance of power.

Does any human artist benefit from the fact that I can now generate my own art to a pretty sufficient degree, for free, from the comfort of any device connected to the internet?

These are not 'elites'; these are human beings who wanted to take a craft, an art, and provide for themselves and their families.

Instead, however, I do it for free. Well, not really for free, as my prompts are almost certainly being used to refine these tools further at the 'cost' of my time.

So again, which elites are 'scared' and not benefitting? Would it be the elites who dumped billions into these tools?
 


Wait a sec...you're saying that I'm an "elite"?

Spider Man Lol GIF
 

The AI fear-mongering is driven by scared elites who fear that regular folks will realize they no longer need them and can get their needs met by AI, shifting the balance of power.
You view starving artists as "scared elites"? That AI is going to take away all their "power"? All their starving artist power?

Really, dude? Really? The billionaire Silicon Valley tech bros are, what, modern-day Robin Hoods?

Really?

So what are you going to do with all that starving artist power once you've wrested it from their elite grasp?
 


No more than parking my car on a public road makes it public domain.
Because "in the public space" and "in the public domain" (= for everybody's use) are different things. It's a subtlety easy to overlook but now increasingly important to take into account with the ever increasing proliferation of web scraping and information resale among companies. A lot of what these companies engage follows the same principles as money laundering, just for data.
 

This is my impression too. Those with established skills in a domain get better results from an AI than a newcomer does. This holds true for image, text, and code generation at least. It's partly because those with subject matter expertise create more precise prompts. And it's partly because they detect when the outputs are wonky and correct them. I've seen examples where a skilled programmer has taken raw code produced by an AI and improved it by applying their experience. In these cases, the AI augments rather than replaces human skill. I suspect this will become the norm across many industries within a decade. I don't think generative AI will replace human artists, but it will gradually get integrated into their workflows.
This is not what I said at all...

The article states that recent MIT research shows it does not help or augment individuals who already have top-notch professional training and skills, but it does help those with low skills.
 

As someone who hasn't experimented much (or at all) with AI imagery, I can see the use as sort of concept art. As in, "Can you draw me something that looks kind of like this?"
And that's fine, if that is what you are aiming for with a project. Loads of artist commissions go along the lines of "can you draw me something that looks kind of like this"; that has been the case for as long as freelancing has been around. That's all good: if you have a concept for your game that is more generic (for lack of a better word), like your typical fantasy heartbreakers that want to continue a well-known aesthetic, this works just as well as any other current method.

If you are like me, and you spend years creating highly detailed projects that look nothing like the current mainstream stuff, then you will have to do all the work yourself (coming up with concept art and finding artists who can execute your concepts professionally). Gen-AI cannot replace real creativity; all it can do is rehash what already exists. That is just my opinion, of course. But currently, if I want specificity, as in detailed main characters that stay consistent across multiple iterations, then I will be sticking with real artists. That will likely never change for me.
 

Hmm...only a few pages in, but it's apparent the conversation has shifted away from the main topic and into moral and legal discussions about AI.

Regardless of your stance on the morality of AI, the OP's point that the scare is hurting real artists is tangible: people are being falsely accused of using AI in their art and are being discriminated against or harassed for it. That is not okay.

Even more so, the moral grandstanding of harassing someone who is using AI isn't as effective as people think it is, beyond stroking one's own ego. The people who use AI to make art know what they're doing, and when they're caught, they will just shrug their shoulders.

So now we're in a position where, if you call them out and you're right, nothing happens; but if you're wrong (and it's likely you are), then someone gets bullied and ultimately hurt.
 

I’ve mostly heard of cases of professionals misusing AI, where the professional deserved chastisement for their clumsy use of it and for producing sub-par output.

The most prominent cases I’ve read about have been lawyers using generative AI and submitting unvalidated information.

The other harm is the speculation that professionals will be displaced. I don’t think we are there yet, but there are definitely strong pushes towards greater use, which would be displacing.

I agree with the statements regarding the effects on productivity.

TomB
 

Ed drops some cold water on AI video hype (Expectations Versus Reality)

"These stories only serve to help Sam Altman, who desperately needs you to believe that Hollywood is scared of Sora and generative AI, because the more you talk about fear and lost jobs and the machines taking over, the less you ask a very simple question: does any of this sh!t actually work?

The answer, it turns out, is “not very well.” In a piece for FXGuide, Mike Seymour sat down with Shy Kids, the people behind Air Head, and revealed how Sora is, in many ways, totally useless for making films. Sora takes 10-20 minutes to generate a single 3 to 20 second shot, something that isn’t really a problem until you realize that until the shot is rendered, you really have absolutely no idea what the hell it’s going to spit out...

...

The AI hype bubble, as I’ve noted before, is one entirely reliant on us accepting the idea of what these companies will do rather than interrogating their ability to actually do it. Sora, much like other generative AI products, suffers from an imprecision and unreliability caused by hallucinations — an unavoidable result of using mathematics to generate stuff — and massive power and compute requirements that are, at this time, prohibitively expensive for any kind of at-scale usage."
 

