The AI Red Scare is only harming artists and needs to stop.





IMO it's not about streamlining. It's about experts fear of losing their jobs or ability to work in the field as they know/knew it.

Photographers made the same complaints when digital cameras started becoming available to the masses. When they became ubiquitous on cell phones, at an image quality that exceeds traditional 35mm film, "we" knew that the career of professional photographer would be obsolete. But that didn't happen. The market changed, certainly, and many photographers had to find a new career. But now "we" can all document exciting points of our lives at a quality like never before.

Engineering drafters had the same fears when 5th generation CAD programs became available.
Scribes had the same fears when typeset printing was invented.

This is normal. The fears are normal, but people need to accept change. Yes we need to monitor it and guide it. But it's inevitable.
You are right, but there are some differences if the art I make "programs" the AI. If you feed it in, pay a fee.

Frankly, there is no real way to monitor this. There are so many digital images out there, how would folks ever keep tabs?

(I will let the computer people educate me).

I don’t weep for laborers who had to do something different after the tractor, nor do
I feel bad that my use of a word processor obviates a typist. It's the way of the world.

But I think it is good to think of ways to minimize impact on the individual and a good sign that we have some worry for them.

I am frankly concerned less about income issues and more about people losing purpose on a very big scale in mere decades. There is nothing I can do about it so I will roll some d20s, drink some beer and live with a now that is real enough and good enough. Tomorrow who knows?
 

And this is why the argument that it's "immoral" or "stealing" to learn from someone's art style and make something original based on that learning is bunk. It's how art has always worked but now we're incensed because someone found a more efficient way to do it. Somehow, when it takes more time and effort it's acceptable but when the process is streamlined it's suddenly "immoral".
"how art has always worked" - incorrect.

The models use statistical analysis to produce, which is a very different method than the human brain.

Let me go for a simpler example: Markov chains. It's early predictive text - given this word, what words come next in a corpus, and with what probability? It can condition on multiple preceding words as well. It can put together sentences, much like your autocorrect can. Even if you trained it on every book you've ever read, it is not the same way as a human author putting something together. Like you put together your post.
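The Markov-chain idea above can be sketched in a few lines. This is a minimal illustration, not any particular product's implementation; the function names and the toy corpus are my own:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` words to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain: pick a random start, then sample each next word."""
    random.seed(seed)
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: this context never continued in the corpus
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the rat"
chain = build_chain(corpus)
print(generate(chain, length=5))
```

The generator can only ever emit word sequences it has seen follow each other in the corpus - it has frequency statistics, not intent - which is the point being made about the "middle" of the process.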

It only looks the same when you don't know the middle section of AI art and human art and, since both have similar inputs and outputs, assume that the middle is the same. It's not. In a way that is very much not "how art has always worked".
 

I love that word, "Inevitable." It doesn't solve the problem, but it sure makes it sound like it's someone else's problem. There's nothing we can do about it, it's inevitable. It would have happened anyway, it's inevitable. It was only a matter of time. Eventually it would have sprung up from the ground fully-formed, all by itself. It's inevitable.

But it isn't. It was created by specific people, doing specific things, for specific reasons.

It is true that AI technology is here to stay, and it isn't going away...but that's not what "inevitable" means. You're thinking of the word "irrevocable." But that lacks impact. It doesn't set the right tone.

"Inevitable" sounds cool, like an edgy comic book supervillain making an astute observation about the state of society, people get philosophical and say "hmm yes, I see." "Irrevocable" sounds bureaucratic, like a lawmaker trying to sneak something onto a ballot, people poke their heads up and say "oh? is that so?"
 

Photographers made the same complaints when digital cameras started becoming available to the masses.
Digital cameras didn't involve massive copyright violations and general theft of intellectual property.

Artists didn't consent to their art being used to train AI, weren't compensated, and weren't credited.

If AI-creators actually believed they weren't violating copyright, they'd be ripping off Disney's copyrighted work. Instead they prefer to steal art from small creators who'd have a lot more difficulty bringing a case against them, since they know they're one large corporate lawsuit away from getting squashed.

AI can be created ethically, AI-creators are just choosing not to because it's quicker and cheaper not to.

The people claiming that opposing AI makes a person a Luddite might as well be claiming that campaigning against clothing made using underpaid child laborers means someone's a nudist.
 

This is normal.

As others have already noted, this is different. This new technology does not operate without taking the work of prior artists in the older forms.

If this were merely a new medium, your arguments would hold. But this technology has been implemented by taking their work without paying them for it and then using it to drive them out of work.

That is not what happened in any other technological change. In essence, you are declaring that corporate assumption of eminent domain over IP is okay.

The issue isn't the technology. It is the business practice. The argument comes down to, "I like it, so I shouldn't have to pay market value for it."
 

I love Ed Zitron taking on AI 'promises'.
https://www.wheresyoured.at/sam-altman-is-full-of-naughty word/



The second possible explanation — and the most plausible in my opinion — is that OpenAI simply pirated [Johansson's] likeness and then tried to bring her onboard in a failed attempt to eliminate any potential legal risk. Given OpenAI’s willingness to steal content from the wider web to train its AIs, for which it’s currently facing multiple lawsuits from individual authors and media conglomerates alike, it’s hardly a giant leap to assume that they'd steal a person’s voice.
 

It only looks the same when you don't know the middle section of AI art and human art and, since both have similar inputs and outputs, assume that the middle is the same. It's not. In a way that is very much not "how art has always worked".

This goes by the assumption that the distinction is significant. If both processes utilize extant art and turn out new (albeit sometimes obviously derivative) art, it's in no way self-evident that's true.
 

The models use statistical analysis to produce, which is a very different method than the human brain.

Given that we haven't a clue how the human brain works, that you would confidently declare that amazes me. How the heck do you know what method the human brain uses? Go ahead and win a Nobel prize and a lot of other acclaim by revealing such secrets of the mind.

Let me go for a simpler example: Markov chains. It's early predictive text - given this word, what words come next in a corpus, and with what probability? It can condition on multiple preceding words as well. It can put together sentences, much like your autocorrect can. Even if you trained it on every book you've ever read, it is not the same way as a human author putting something together.

That's not clear to me at all. When I was a younger, naive software engineer, I always imagined that one day we'd get these Turing-grade AIs and I'd interact with them and be forced to conclude that they were intelligent because I couldn't distinguish them from a human. But that's not what has happened at all. Instead, it's been obvious from the start that the current generation of AI is as sentient as bricks, but the really strange thing is that the more I interact with them, the more I realize interactions with humans have the same flaws and patterns. The more I interact with AI, the less obviously sentient or intelligent, in the sense that I had assumed, humans seem. It's not at all clear how humans produce speech or why they produce speech, but it could be that underneath there is just some predictive text rendered in biological form. I've had to overturn all my preconceptions about how intelligence worked and how language worked. The sense/reference model is no longer big enough and complete enough to describe what is going on.

There are currently missing elements and algorithms that humans have that AI lack or which haven't been integrated together in interesting ways, sure, but that's coming fast.

I was watching Deep Blue live against Kasparov about 25 years ago, and in the final match Deep Blue began playing an unusual sequence while Kasparov had a pawn advanced to the seventh row, and the commentators - experts in chess - were saying on the broadcast, "Well, this is typical of computer play. The AI is unable to reason about the impact of a promoted pawn on the board, or else it's foreseen Kasparov's win and it's stalling. Computers will never be able to defeat humans in chess because they lack true imagination and true creativity. You need a human spirit to truly understand chess." (I'm not making this up. I may forget the exact words, but this is the sort of stuff they were saying.) And in the middle of this rant, Kasparov suddenly resigned. And the commentators were dumbfounded. "Why has Kasparov resigned?" And several seconds passed, and one of these experts said, "Because... it's mate in two?!?!" In two, mind you? In two moves! It wasn't just that it suddenly turned out that imagination and creativity and actually understanding chess were just algorithms and predictive ability, as I had fully expected that. What I really discovered then was that humans weren't very good at chess at all, because the chess world was watching this and it took all of them until the last moment to even see what the computer was doing. Maybe Kasparov had seen it earlier, or not. But the chess world was by and large oblivious. I'd witnessed my first Turing-grade AI, and I realized that being indistinguishable from human was strictly domain dependent.

The exact text or the exact form of an image isn't being stored in the neural networks generated by reading the text or looking at the images. We don't know exactly what it is that is being stored, but we do know for sure it isn't a copy or a compression or anything like that. So if an AI mind stores something it learns from reading a text or scanning an image, how is that fundamentally different from me with my meat brain storing something I learn from reading a text or scanning an image? And if you digitize my mental process so that it can be done faster, does it become a copyright violation just because you now find it more threatening? And if an AI-produced image wouldn't be a copyright violation if it was produced by a human mind, how does it become a copyright violation if it was produced by an artificial mind?

There is a fundamental axiomatic assumption by the zealots that this process is inherently theft, but I think that assumption is unwarranted and not really supportable. If I read a book and retain some impression of that book in my mind, the copy in my mind isn't a copyright violation. It only becomes a violation of copyright if I reproduce it in some fashion that would violate copyright, and neither the storage mechanism of these AIs nor the way they produce images inherently violates copyright. So no theft has occurred. If someone trains an AI on what is publicly available on the net, well, that is not an ethical violation as far as I can see. The whole point of intellectual property protection is to encourage innovation. It's not there to stop innovation. The writers of this software have done maybe the most innovative thing with human language since it was invented. It's not theft.
 
