AI/LLMs AI art bans are going to ruin small 3rd party creators

I think your history here is so far off that until that gets corrected we are never going to reach any semblance of agreement here.

I think if you could correct me, you'd do so. Instead, you're going with vague hints of how it could be wrong, thus not actually saying anything while trying to dismiss my point.

I think it's more likely that we're not going to get an agreement because one is trying to dance around the issues here rather than engage with them.
 


Note: I would post the gen-AI summary of the historical context, but I believe that would be against the rules now.

As someone in education and in that specific subject, I can tell you that AI summaries aren't replacements for actual reading and can be wrong in major details, especially when asking questions that don't have directly-applied facts. This is also why you get problems in people trying to use it in the legal system, where it routinely hallucinates cases because it can't find any actual facts that directly address one's query.
 

it's been proven over and over that it's not stealing. Training is not stealing.
It has not. This is misinformation. In fact, there are multiple court cases ongoing at the moment alleging the opposite. None have been “proven” either way yet.
Generative AI learn EXACTLY like people do.
They do not. This is misinformation. The process that an LLM does is not even slightly similar to the process that a human does. They are entirely different processes.
 

An interesting note: almost all major philosophers do not support the notion that copyright is a moral right.

David Hume is particularly interesting: "Justice takes place only with regard to external goods… where the loss of one is the gain of another."
 

As someone in education and in that specific subject, I can tell you that AI summaries aren't replacements for actual reading and can be wrong in major details, especially when asking questions that don't have directly-applied facts. This is also why you get problems in people trying to use it in the legal system, where it routinely hallucinates cases because it can't find any actual facts that directly address one's query.
So can encyclopedias, and a great many well-researched human works still fall into similar territory.
 

Sure. I agree with that. My position looks solely at whether using AI as a tool to meet an artist's vision creates art, not whether it was created ethically. I don't believe AI is currently an ethical tool and I don't generally use it. It can, however, be used to create art.

Previously, I noted that I am not really interested in the question, "Is it art?" I am still uninterested in that question. However, there is a slightly different element that this brings up.

There are historically two major positions on how to consider the moral aspects of art. One is "Moralism," in which art criticism includes, or is even reduced to, moral aspects, and a moral defect in a work should be considered an aesthetic defect. The other is "Autonomism," in which only the aesthetic value is considered. There are, of course, various middling positions.

However, there is another position coming to light in current art criticism, called "ethicism," which holds that: "the ethical assessment of attitudes manifested by works of art is a legitimate aspect of the aesthetic evaluation of those works, such that, if a work manifests ethically reprehensible attitudes, it is to that extent aesthetically defective, and if a work manifests ethically commendable attitudes, it is to that extent aesthetically meritorious."

Needless to say, a work can be considered to "manifest" ethical attitudes not merely by its content, but also through the means of its creation.

Now, your position seems to be implicitly autonomist: the morals and ethics of creation have no bearing on how we view the work. However, it seems to me that an ethicist approach is also a valid one. And that position holds that, due to ethical flaws, a work can end up with no, or negative, aesthetic value.

Arguments over whether a thing is art seem moot if the work is in significant danger of being art with no aesthetic value, due to its flaws.
 

I think if you could correct me, you'd do so. Instead, you're going with vague hints of how it could be wrong, thus not actually saying anything while trying to dismiss my point.
You are so wrong on this I don't even know where to begin to correct you. Do some basic research.
I think it's more likely that we're not going to get an agreement because one is trying to dance around the issues here rather than engage with them.
Stop implying bad faith.
 

So can encyclopedias, and a great many well-researched human works still fall into similar territory.

No, not in the same way. Not even the same ZIP code. Books and histories can be wrong for a variety of reasons, but not in the way ChatGPT is. A book might omit something because the author doesn't want it, disagrees with its inclusion, is writing a book whose purpose runs against it, is summarizing the information down because it isn't the main focus, or assumes the audience already knows the nuances of what is being discussed...

ChatGPT has no ability to actually discern. It will make things up because it is given a task and, in trying to solve it, starts putting facts together in ways that don't make sense and have no support. Sometimes it draws from bad sources (the infamous Reddit glue-on-pizza post), but other times it just makes things up because it is being asked to solve a problem and can vaguely get there if it adds some things in.

So no, it's not the same. Not at all.

Stop implying bad faith.

Stop immediately editing these little bits on to the end of your posts so that my quote posts will miss them.
 

As someone in education and in that specific subject, I can tell you that AI summaries aren't replacements for actual reading and can be wrong in major details, especially when asking questions that don't have directly-applied facts. This is also why you get problems in people trying to use it in the legal system, where it routinely hallucinates cases because it can't find any actual facts that directly address one's query.

Translation: because the AI summary might have some bits wrong I'm going to reject the whole thing, and insist that instead you invest a few work-weeks into reading primary sources, which I know you're not going to do, thus allowing me to insist that my stance is correct, without any evidence, AI or otherwise.
 

Translation: because the AI summary might have some bits wrong I'm going to reject the whole thing, and insist that instead you invest a few work-weeks into reading primary sources, which I know you're not going to do, thus allowing me to insist that my stance is correct, without any evidence, AI or otherwise.

I don't believe I said that I would reject it out of hand; I simply pointed out the obvious: AI posts aren't reliable, and I've seen them fail when it comes to problems that aren't directly addressed in the text. This is also why it's a big problem in law and why it hallucinates cases.

If you want to correct me, you could just make a better argument. Arguing that copyright doesn't relate to the idea of theft, or that it doesn't relate to that long-standing value, misses why we have it and how it is used.

But I mean, also, yeah, you could just read a book or research the topic. Who knows what you'll find when you aren't just relying on dry AI summaries?
 
