The AI Red Scare is only harming artists and needs to stop.

I think there is a lot of hype around the current capabilities of AI. Right now, AI offers something of a solution for producers who can't afford to pay for artwork, but I can't even reliably use it to illustrate my own home campaign. I can't imagine anyone with the money wanting to send out uncorrected AI content that a human illustrator hasn't touched up, so it comes down to something like Disney's reuse of animation cels: is it harder to touch up existing animation than it is to draw completely new frames? Ultimately, Disney found back in the day that tracing over animation cels offered no cost savings compared to new animation.
ATM, from the perspective of a client who hires professional artists, I agree that starting from scratch is more cost effective. Every artist has their own artistic process and workflow, which IMHO is disrupted by incorporating gen-AI outputs. The artist has to clean up the gen-AI image, remove artifacts, fix anatomy and fingers, and adjust colors, lighting, shading, and posing. It is a complete waste of time for professional artists who already have the relevant skills, not to mention a waste of the client's time as well.

Now, I do think the writing is on the wall: the industry is going to change at some point, and these tools are extremely powerful in the hands of good designers and will only get more powerful. For example, more or less perfectly drawing the in-between cels for any two closely related cels is certainly going to be a thing in the medium term. And certainly, if you are an artist, you ought to be heavily investing in getting used to these tools, learning how they work, and learning how to train them for specific tasks. If you are in college right now and some enterprising professor isn't teaching that, then you're going to have to take up the task yourself, because it will become a thing.
This is where I have to disagree, as recent research from MIT concludes that gen-AI tools help lower-skilled workers more than those already working at the highest professional levels.

“Generative AI seems to be able to decrease inequality in productivity, helping lower-skilled workers significantly but with little effect on high-skilled workers,”

It certainly helps low-skill workers, but it can actually hinder top-tier workers, since they possess professional training and are (currently) more skilled than gen-AI tools.
 


So the ability to perform horribly expensive computations is OpenAI's only competitive advantage? If that's the case, all it would take to disrupt the entire AI industry is one sufficiently large decentralized computing application.

Does the cryptocurrency community know about this? They have lots of technical expertise with decentralized computing, and they seem to enjoy disrupting the status quo. Sounds like they'd be well equipped to pull the rug right out from under Big AI.
While there have been dramatic improvements in the energy cost of cryptocurrency-related computation, I'm not sure crypto is a good direction to go:

An answer provided by Google:

How much energy does cryptocurrency consume?
Of course, crypto is more than just Bitcoin. The energy consumption of all crypto assets combined is between 0.4% and 0.9% of annual global electricity usage, or between 120 and 240 billion kilowatt-hours per year. That's more energy usage than all the world's data centers combined. Crypto is a big energy user.
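
For what it's worth, those two numbers hang together arithmetically. Here's a rough back-of-the-envelope check; the ~27,000 TWh/year figure for global electricity consumption is my own ballpark assumption, not part of the quote:

```python
# Quick sanity check on the quoted range (toy arithmetic, not a citation).
# Assumption: global electricity consumption is roughly 27,000 TWh per year.
GLOBAL_ELECTRICITY_TWH = 27_000  # assumed ballpark figure

low_share, high_share = 0.004, 0.009  # 0.4% to 0.9%, from the quote

low_twh = GLOBAL_ELECTRICITY_TWH * low_share    # ~108 TWh
high_twh = GLOBAL_ELECTRICITY_TWH * high_share  # ~243 TWh

# 1 TWh = 1 billion kWh, so this works out to roughly 108-243 billion kWh
# per year, in the same ballpark as the quoted 120-240 billion kWh.
print(f"{low_twh:.0f}-{high_twh:.0f} TWh/yr (= billion kWh/yr)")
```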

TomB
 

No. Just because you are obstinate and refuse to accept any challenge to your emotionally tied opinions does not mean that my premises have been refuted.

Mod Note:
Yes, but making it personal means you're done in this discussion, refuted or not.
 

ATM, from the perspective of a client who hires professional artists, I agree that starting from scratch is more cost effective. Every artist has their own artistic process and workflow, which IMHO is disrupted by incorporating gen-AI outputs. The artist has to clean up the gen-AI image, remove artifacts, fix anatomy and fingers, and adjust colors, lighting, shading, and posing. It is a complete waste of time for professional artists who already have the relevant skills, not to mention a waste of the client's time as well.
As someone who hasn't experimented much (or at all) with AI imagery, I can see its use as a sort of concept art. As in, "Can you draw me something that looks kind of like this?"
 


I'd hardly call AI being able to steal intellectual property for itself progress.

And we've already seen what happens with self-training. AIs filling the internet with endless streams of ever more jumbled nonsense is not only not progress, it's a threat.

It's like when people kept claiming that "THE BLOCKCHAIN!" would completely change finance forever, when in reality all it did was enable a lot of scams and help criminals move their money without being caught.
In the US, at least, the conceptual portions are not, and cannot be, protected past a decade, and that's only by patent.
Most people seem to be utterly ignorant of the different types of IP - Patent, Copyright, Trademark, and Trade Secret... and that fourth isn't federally protected outside DoD and CIA applications, under various obscure laws.
 


Then we can also get rid of DMs with AI. Talk about overrated obsolete human labor and "creativity."
We were talking about just this during the game session I just ran - the potential use of AI as either a DM or an adventure writer.

And yes, I see it as entirely possible that within a decade or so RPGs could be quite competently run by AI. Even then, some will still prefer real-life DMs, and there'll be real-life DMs willing to do it; same as today, where even though online gaming is a thing, some people still play in person around a real table because that's what they want to do.
 

And we've already seen what happens with self-training. AIs filling the internet with endless streams of ever more jumbled nonsense is not only not progress, it's a threat.
Indeed, at the moment self-training AI leads to a death spiral where the result becomes ever less coherent. The output is still consistently less than the sum of its inputs.
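
As a toy illustration of that death spiral (purely my own sketch under a simplifying assumption, not anyone's actual training pipeline): treat the "model" as a Gaussian fit to its training data that, like real generative models, underrepresents the tails of what it was trained on. Retrain it on its own outputs each generation and the diversity of the data steadily collapses:

```python
import random, statistics

# Toy sketch of "model collapse" (illustrative assumption only, not a real
# training pipeline): the "model" is just a Gaussian fit to its training
# data, and, like real generative models, it underrepresents the tails of
# that data (crudely mimicked by clamping output to +/- 2 standard
# deviations). Each generation then trains only on the previous
# generation's outputs, so the spread of the data steadily shrinks.
random.seed(0)

def generate(mu, sigma, n):
    """Sample from the fitted model, dropping the rare tail outputs."""
    out = []
    while len(out) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= 2 * sigma:  # the tails are lost every generation
            out.append(x)
    return out

data = [random.gauss(0.0, 1.0) for _ in range(2000)]  # "human-made" data
for generation in range(1, 11):
    mu, sigma = statistics.fmean(data), statistics.pstdev(data)
    data = generate(mu, sigma, 2000)  # train the next model on AI output only
    print(f"gen {generation:2d}: stdev of outputs = {sigma:.3f}")
```

In this toy version the spread drops from about 1.0 to roughly a third of that within ten generations - nothing new is learned, variety is just forgotten.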

But that won't always be the case. The training algorithms will inevitably get better, to the point where the AI can equal its input with its output and then extrapolate beyond the bounds of that input - as in, take inputs A, B, and C and come up with a D output that's greater than the sum of its parts. At that point it will be able to train itself.

That AI today is (sadly) mostly controlled by a bunch of capitalistic clowns doesn't diminish the potential of the tech itself.
 

When you write something on EN World, your agreement covers all the copying needed for a human to read your posts - but no more than that. So scraping EN World for content to train an LLM would be a violation of copyright.
::blink::

I'd have thought posts on an open public forum that anyone can read (this post on this forum, for example) would be considered public domain at time of posting.

That, and there's a case to be made (and I really hope one of these days someone makes it in a way that'll stick) that the entire internet is and should always be public domain. Want to retain copyright? Don't put it online.
 
