The AI Red Scare is only harming artists and needs to stop.

The burden of proof is on you. It's not begging the question to demand you substantiate your claims with evidence.

You've made the claim that AI art processing is fundamentally different from a human doing so. That's as much a claim as to the contrary, and as such the burden is on the person who wants to make a distinction. So, as I said, "you first". Otherwise you're essentially insisting your position is the default for the discussion, and that's the classic form of begging the question.

If you don't get why that is, I'm not going to go around with you about it a third time.
 


You've made the claim that AI art processing is fundamentally different from a human doing so. That's as much a claim as to the contrary, and as such the burden is on the person who wants to make a distinction. So, as I said, "you first". Otherwise you're essentially insisting your position is the default for the discussion, and that's the classic form of begging the question.

If you don't get why that is, I'm not going to go around with you about it a third time.
This is a weird position to take. The claim that would have to be proven is naturally that AI art processing is basically the same as human art processing, because the idea that they would be basically the same rests on the flimsiest of evidence: a somewhat similar output.

It's like saying: "A fire produces warmth, and so does a bear, so they're basically the same. Prove me wrong!"
 

This is a weird position to take. The claim that would have to be proven is naturally that AI art processing is basically the same as human art processing, because the idea that they would be basically the same rests on the flimsiest of evidence: a somewhat similar output.

Nope. Neither one needs to be proven to make an argument for or against AI. A claim that it's different, on the other hand, needs to be made to support the position that using extant material to train an AI--which, as has been noted, does not store the material in any meaningful way--is intrinsically copyright infringement.

It's like saying: "A fire produces warmth, and so does a bear, so they're basically the same. Prove me wrong!"

No, it's like saying "A fire and the sun both produce heat, but their difference matters enormously in the situation at hand". If you're going to make the latter claim, then you have to show the difference is relevant. The particular claims of relevance are heavily based on assumptions, and nothing more; until you can demonstrate what those differences are, not just by assumption but by proof, they shouldn't even be in the discussion.
 

And if you really want to continue pushing for 'ethical' training, just remember that indies are unlikely to ever afford the rights to enough content to train on, while Big Tech already has rights to all the content they'll ever need. And even if indies did there's no way for them to prove it. I'll let you decide who benefits more from that state of affairs.
This is how I paraphrase this particular paragraph, where you speak about the case of ethical training.

"If the choice is either compensating artists, but then only those who can afford to do so will get to play in AI, or for everyone to stiff artists so that there can also be indy AI, I am firmly in the camp that the artists shouldn't be paid even by those that could so that everyone can train their AIs."

Is this your stance on ethical training? That even if we could, we shouldn't?

Or am I misinterpreting this in such a way that you aren't advocating not paying artists simply because only "Big Tech" can afford to do so?
 

Nope. Neither one needs to be proven to make an argument for or against AI. A claim that it's different, on the other hand, needs to be made to support the position that using extant material to train an AI--which, as has been noted, does not store the material in any meaningful way--is intrinsically copyright infringement.

Sorry, but to me that sounds like you're saying: "I'd like to put the burden of proof on the other side because it suits my argument better to do so."

You wrote:
You've made the claim that AI art processing is fundamentally different from a human doing so. That's as much a claim as to the contrary, and as such the burden is on the person who wants to make a distinction.

So you're saying that claiming that AI art processing is fundamentally different from human art processing is "as much a claim as to the contrary", and I have to call BS on that. I'm not saying that it necessarily is fundamentally different, but the burden of proof definitely lies with the theory that it is fundamentally the same as human art processing. Otherwise, you could claim pretty much anything about any new technology on the flimsiest of evidence and say "and now prove me wrong." And that has implications. If you say "a car works basically in the same way as a horse, because both are used to transport people from A to B and need some kind of fuel to do so", you certainly haven't successfully put the burden of proof on the other side; and you certainly can't claim from that that the same moral and legal standards should apply to both cars and horses until someone proves you wrong.

And here it's the same. You can't act as if the same moral and legal standards that apply to a person should apply to an AI; a human being has the right and the need to use their senses to broaden their horizons, learn and let themselves be inspired; an AI can be directed to let itself be "inspired" and "learn", but these are actually nothing but potentially misleading metaphors for whatever it does - as far as I can see, there is no indication that AI "learning" and "inspiration" are processes that are in any way similar to human processes of learning and inspiration; the AI outputs I have seen certainly don't suggest any but the most superficial similarities. If you want to claim that AI "learning" and "inspiration" are substantially similar to human learning and inspiration, beyond the metaphor of calling what the AI does "learning" and "inspiration", the burden of proof is squarely on you.
 



Not all progress is immediately beneficial, and "we" need to direct and focus it. But adapting to change is critical.

So, the thing that is missing here is the fact that "direct and focus" and "adapt" do not mean "roll over and take it the way the current providers choose to make it".

Making those who train AIs pay for the data they don't own would "direct and focus" the technology, preserve most of the value of the tech to society - which isn't even in art and writing anyway - and allow the technology to mature in those other domains, while protecting artists as we become accustomed to its presence in our lives.
 

Can we split this into separate issues?

* AI will likely have profound impacts, regardless of how it works or how it is trained.

* Those creating AI models seem to be doing so by unfair use of protected works.

* There are questions of how similar (or different) the way that AIs work in comparison to how people work. (I’m using “work” as a coarse stand-in for “function”, “think”, or “effect the creation of outputs”. I don’t have a better term to use.)

TomB
 

