The AI Red Scare is only harming artists and needs to stop.

Looking further, I’m finding evidence of contrary legal opinions on whether data mining is fair use. My conclusion is that “data mining is fair use” is a position held by some, including, unsurprisingly, companies that use data mining for AI training.

For example:

Text and data mining in US | Entertainment and Media Guide to AI | Perspectives | Reed Smith LLP
Copying copyright protected works for the sole purpose of text and data mining has traditionally been considered a case of fair use by the technology sector. The creative sector disagrees, and the launch of generative AI solutions capable of producing photos, paintings and music at the push of a button has seen copyright holders rally behind the “unfair use” banner to condemn the use of their content by AI businesses.

TomB

Edit: The following seems a decent summary of the current legal state of affairs:


In particular:

Many observers mistakenly assert that TDM is categorically excused by fair use principles, and cite to the decisions in Google Books and HathiTrust for support for the notion that data and text extraction is definitionally transformative and, almost by definition, squarely within fundamental fair use principles. This is a mistake, and represents a misreading of those decisions, and of US law — perhaps as best illustrated in the TVEyes decision.

TomB^2
 


Earlier in this thread we got into a discussion of how we don't know exactly how the brain does it, but we do know how it does not. So being able to explain how the brain does it isn't a requirement for being able to discuss how AI does it in a way the brain does not.

But all of that said, let me take a stab at the process using human vs. AI art.

A human artist capable of producing a realistic image similar to what AI art does envisions the various objects which exist in a 3D world and establishes a point of view from which to render them. They translate from 3D to 2D, including what can be seen and what blocks the view, where light sources are, perspective and foreshortening, etc. They start with the real world and then work out from that how it looks in an image. Heck, we've had 3D render programs for a long time that are made to emulate that process, and it's what 3D games do.
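To make that concrete, here's a minimal sketch of that 3D-to-2D translation, a pinhole perspective projection (the function name and the numbers are just illustrative, not any particular renderer):

```python
# Minimal sketch of the 3D-to-2D step described above: a pinhole
# perspective projection. Names and numbers are just illustrative.

def project(point3d, focal_length=1.0):
    """Project a 3D point (x, y, z) onto a 2D image plane.

    The camera sits at the origin looking down +z. Dividing by z is
    what produces perspective and foreshortening: farther objects
    land closer to the image center.
    """
    x, y, z = point3d
    if z <= 0:
        return None  # behind the camera: not visible
    return (focal_length * x / z, focal_length * y / z)

cube_corner = (1.0, 1.0, 5.0)
print(project(cube_corner))        # original viewpoint
print(project((1.0, 1.0, 4.7)))    # camera moved roughly a foot forward
```

Redoing the same scene from a few degrees to the left, or a foot forward, is just a different camera transform applied to the same 3D points; the scene itself never changes.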

AI art is a model generating the image via statistical analysis. It does not involve that process at all, since there was never a 3D model to translate. A human artist could redo the exact same scene but drawn from a few degrees to the left and a foot forward. An AI model can't.
A human brain does not perceive the real world at all, and never can. It only ever has access to electro-chemical signals (data), which it then assembles into an interface that allows us to successfully survive and reproduce. Eons of evolution have created this interface, the purpose of which is not to reveal the "real world", whatever that is, but to enhance reproduction. Whatever reality is, your sensory organs only sample the tiniest slice of it in order to create the human umwelt, which is of course distinct from that of other species.

When a human brain is creating an image we don't know exactly what is going on, but we do know that we are not perceiving reality but a presentation of it driven by internal algorithms. We also know that pattern recognition and prediction are integral to the process, which is why, for example, you can never directly perceive your blind spot. Your brain covers it up with a statistical prediction of what "should" occupy it.


In other words...there's a lot of statistical analysis going on. I don't understand your final point; AI modelling routinely envisages the same scene from different angles and perspectives with an accuracy that crushes anything a human can do. If you mean that this would be a challenge for some current generative AI models, then that might be so; I don't know the current research on that particular aspect as I am more interested in generative AI that works with language.
"Writes better."

One common issue with LLMs (Large Language Models - AI writing) is what they are now calling “hallucinations”. I'm not fond of that as a descriptor, but it's in common usage. If they have the information, they can use it. If they don't, they will often make information up. Not so different from a human - except that they can't tell that they made it up. They can sprinkle in falsehoods and incorrect information without knowing it.
Yes, they are not conscious and have very limited memory (though research is showing that LLMs are finding workarounds to create more de facto memory than they were designed with, which is fascinating).
An example of this was with ChatGPT-3.5: we were playing a new board game, Dice Theme Park, and asked it for strategies. There were whole sections about Mascots and such that just didn't exist in the game, but they were presented with the same confidence as everything else.
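To be clear about the mechanism, here's a toy stand-in (a bigram model, which is emphatically not how ChatGPT works internally, and the "corpus" below is invented). A system that only knows which word tends to follow which will generate fluent advice with no way to check it against the actual rules:

```python
import random

# Toy stand-in for an LLM: a bigram model that only knows which word
# tends to follow which. It has no notion of true vs. false, so it
# emits fluent-looking "strategy advice" with uniform confidence.
corpus = ("place your mascot near the coaster to boost income "
          "place your staff near the entrance to boost happiness "
          "upgrade the coaster to boost income every round").split()

bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start, n=12, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# "Mascot" shows up in the output purely because it is in the word
# statistics (say, scraped from other games' guides); the model
# cannot check it against the rules of the game in front of you.
print(generate("place"))
```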

A human writer would know when they are BSing. But there is no "they" to understand this with LLMs. We anthropomorphize them because it seems like someone talking to us, and because we as humans anthropomorphize lots of things. Pets. Cars. Computers. What have you.
You are assuming a lot, here. For one thing, humans generally don't know when we are BSing. We only know when we are intentionally BSing. In fact, we are BSing (or "hallucinating," in LLM parlance) all the time. Most of what you remember? It never happened, certainly not exactly as you remember it. All of what you perceive? It's a statistical model driven by the imperatives of evolution, not reality.

The big difference is that we have evolved a sense of self, an ongoing story of our own consciousness. No one understands precisely why this happened or how it works, but there is tons of research showing that this is an emergent property of human brains and not some sort of magical event (I mean, we know it evolved so presumably it offers significant reproductive advantages, but thus far we can only speculate). LLMs don't have this. As it turns out, you don't need it to be very good at a lot of writing and artistic endeavours that until scant years ago we thought were exclusively human.
Instead it's taking the current and previous prompts and statistically generating words. It's spicy autocorrect.
I'll be honest: whenever someone uses that analogy for LLMs I am tempted to just politely ignore anything else they write. Sure, it's "spicy" autocorrect if you are using the word "spicy" to cover a LOT of heavy lifting. You may as well call human language production spicy autocorrect. Most of what you do in conversation is taking current and previous prompts and statistically generating words. That's most of what we are doing in this interaction.
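For what it's worth, here is the step the two have in common, as a minimal sketch (the candidate words and their scores are invented for illustration): turn scores for possible next words into probabilities and sample one. The "spice" is literally a temperature knob; what actually separates autocorrect from an LLM is how those scores get computed (n-gram counts versus a deep network conditioned on the whole context).

```python
import math, random

def sample_next(scores, temperature=1.0, seed=0):
    """Sample a next word from softmax(score / temperature)."""
    random.seed(seed)
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    r, cum = random.random() * total, 0.0
    for word, e in exps.items():
        cum += e
        if r <= cum:
            return word

# Invented scores for the word after "the ancient ...":
scores = {"dragon": 2.0, "wizard": 1.5, "spreadsheet": -1.0}
print(sample_next(scores, temperature=0.2))          # near-greedy: "dragon"
print(sample_next(scores, temperature=2.0, seed=3))  # flatter: any can win
```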
Yes, it's the Porsche of conversation compared to autocorrect's horse and buggy, but being more advanced just means it's better at its job, that it picks the right words, not that it's actually thinking about the concepts.
See, this is the issue that keeps coming up. Consciousness. But we don't know exactly what consciousness is or how it connects to how humans produce language, art, etc. As it turns out, you don't need consciousness to produce good, original writing and art. I find that frankly mind-blowing and difficult to accept, but the evidence is right in front of me.

I'm looking through the telescope and seeing the moons of Jupiter orbiting. I can't deny it. The former paradigm ain't working anymore. You can make art without consciousness.
Generating output from input that looks human - yes. Is generated by the same process - not at all.
People keep asserting this. But we don't know the processes that human brains are using. There are obviously some differences in components and approaches, but at a fundamental level there seem to be large similarities as well. And the output is undeniably similar, and not on a superficial level.

There is also the question of whether the process really matters. The output is the thing that is affecting careers and livelihoods. Right now, a lot of the discussion is concerned with process because that's what the law can handle, but at an output level, the battle is already over. The toothpaste is not going back in the tube.
Frankly, it's the anthropomorphism that's a big part of the perception issue. Because people treat it like a human, they mistakenly compare it to how a human would learn.
Frankly, anthropomorphism is a red herring that is typically used to write off different opinions as ignorant. I am looking at outputs, and at ongoing research into the astonishing and often unpredicted capacities of generative AI. I am interested at a personal level but more so at a professional level. There are vast implications for better understanding how humans learn, and what direction education needs to take in the dawning era of generative AI.

Edit: for example, here is one question that we are currently wrestling with: why should we continue to teach students how to write essays when LLMs can do it better and much more efficiently? I think there are good reasons for teaching students the fundamental principles of essay writing, as they have to do with persuasive argumentation and can be applicable to a large number of real world endeavours. I also think understanding these structures is useful for developing human cognition.

But should we be spending so much time on having the students actually craft essays? Or should we be moving on to having the students guide LLMs through the grunt work, much as math teachers teach students the basics but then allow them to use calculators when it is time for the heavy computation?
 

And like it or not, AI art does level the playing field in that it gives those who a) can't afford to pay artists and b) aren't good enough artists to do the art themselves an avenue to still get some art into their RPGs.
Why does there need to be a level playing field between people who could learn to do good art, or invest in paying people… And those who don’t?

Fairness can often be good but it’s not inherently moral to make two sides of things the same. I've put out free RPG products for years, with either my own photography, scribbles, or no art. Not one complaint!
I like that more than a future where the cost of custom artwork approaches zero.
 

Maybe's AI Art Detector did a better job of identifying my artwork as original--all four of the samples I gave it (from my previous post) were correctly identified as having come from a human.

Unfortunately, it also said that THIS came from a human:
[attached image]

(Source: PromptHunt)

And this:
[attached image]

(Source: PromptHunt)

And this:
[attached image]

(Source: SeaArtAI)

But not this:
[attached image]

(“The Mona Lisa,” by Leonardo da Vinci. Source: )

At least it got my favorite painting right:
[attached image]

("The Angel of Death I," by Evelyn DeMorgan, 1880)
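For the record, here's the tally of everything above as a quick sketch (treating the detector as a plain human/AI classifier, which is a simplification of whatever it actually reports):

```python
# Tallying the detector verdicts reported in this post:
#   human-made: my 4 samples, the Mona Lisa, and the DeMorgan (6 total)
#   AI-made:    2 PromptHunt images and 1 SeaArtAI image (3 total)
results = [
    # (true_label, detector_said)
    *[("human", "human")] * 4,         # my four samples: correct
    ("ai", "human"), ("ai", "human"),  # PromptHunt images: missed
    ("ai", "human"),                   # SeaArtAI image: missed
    ("human", "ai"),                   # Mona Lisa: wrongly flagged
    ("human", "human"),                # the DeMorgan: correct
]

correct = sum(t == d for t, d in results)
missed_ai = sum(t == "ai" and d == "human" for t, d in results)
print(f"overall accuracy: {correct}/{len(results)}")  # 5/9
print(f"AI images passed as human: {missed_ai}/3")    # 3/3
```

So it caught none of the AI images, which is the half of the job that matters here.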
 

Why does there need to be a level playing field between people who could learn to do good art, or invest in paying people… And those who don’t?
The exact wording varies from person to person, but the argument is usually something like "I don't want to pay for art."

There are specific goals at play here. The goal of devaluing art is to ultimately convince people that art doesn't have worth, and that they should get it for free. The goal of discrediting artists is to convince people that artists don't deserve money, and they shouldn't expect payment for art. The goal of defunding art education is to convince people that art isn't a worthwhile career, and artists should find other means of earning a living.

This has been going on for generations, at least. And it's working.
 


Why does there need to be a level playing field between people who could learn to do good art, or invest in paying people… And those who don’t?
Why does there need to be a level playing field between the rich and poor? There doesn't, though I prefer a world in which there is a more or less even playing field. You make a good point in that a minority of the (generally) poor ("people who could learn to do good art"; i.e. artists) might have interests aligned with the generally rich (those who can "invest in paying people"). That makes this a complicated ethical situation.

I'm not rich, or an artist. Does that mean I should not have access to high quality art for my D&D games? Or is it a moral good that more people will have access to bespoke art? I don't think there is a simple answer.
Fairness can often be good but it’s not inherently moral to make two sides of things the same. I've put out free RPG products for years, with either my own photography, scribbles, or no art. Not one complaint!
I like that more than a future where the cost of custom artwork approaches zero.
Nothing is inherently moral, IMO. I'm not sure how I feel about a future where the cost of custom artwork approaches zero (e.g. today). My gut reaction is to prefer the status quo, but then that is a typical gut reaction for most people. There's a lot to wrap one's head around when it comes to this issue.

Though the genie is already out of the bottle, so I'd best start wrapping my head around it.
 

Well, hard to argue against getting the thing you want for less or no money. Most folks IME will jump at that.
Yep. The trouble is that they will jump without thought to how it affects others. And not just with regard to art, either: this is the impetus behind sweatshops, child labor, and other more odious parts of our history.

Like I said, it's a very old problem.
 

Yep. The trouble is that they will jump without thought to how it affects others. And not just with regard to art, either: this is the impetus behind sweatshops, child labor, and other more odious parts of our history.

Like I said, it's a very old problem.
Yes, but it's also why most people have a much higher standard of living (by typical measures) and longer lifespan. We've been automating things for a long time, and most people seem to enjoy the results. In the short term. In the long term, it might be a disaster. It might be utopia. Humans generally default to prioritizing short term benefits.

Cf. the environment.
 

I'm not rich, or an artist. Does that mean I should not have access to high quality art for my D&D games? Or is it a moral good that more people will have access to bespoke art? I don't think there is a simple answer.

....

Though the genie is already out of the bottle, so I'd best start wrapping my head around it.
I say this as someone who has won political office as a socialist:

The distribution of resources for human welfare is not the same thing as wanting to push a button and get a picture of an elf.

Poverty is something that leads to social, moral, physical, and spiritual consequences. It’s squid ink in the water to say “Our society is unjust, therefore AI is a land of contrasts.”

And I feel like I could just quote the “inevitable” explanation on every page. Tech is not inevitable and unquestionable.

The genie can go back in the bottle, because an LLM is not a magical genie from a folktale. It's a product of massive companies releasing and supporting products that are extremely energy-intensive.
 
