The AI Red Scare is only harming artists and needs to stop.

Golden Bee

Explorer
Whenever I feel the need to score points in the argument about AI, I find it's best to just talk about the CEOs. Trustworthy products generally don't come from untrustworthy companies.

More from Ed Zitron’s excellent newsletter, Where’s your Ed At:


[In March], the Wall Street Journal published a 10-minute-long interview with OpenAI CTO Mira Murati, with journalist Joanna Stern asking a series of thoughtful yet straightforward questions that Murati failed to satisfactorily answer. When asked about what data was used to train Sora, OpenAI's app for generating video with AI, Murati claimed it used publicly available data, and when Stern asked her whether it used videos from YouTube, Murati's face contorted in a mix of confusion and pain before saying she "actually wasn't sure about that." When Stern pushed a third time, asking about videos from Facebook or Instagram, Murati shook her head and said that if videos were "publicly available...to use, there might be the data, I'm not sure, I'm not confident about it."

Stern did well to get Murati to answer, but it's deeply concerning that the Chief Technology Officer of the most "important" AI company in the world can't answer a very basic question about training data.
 


Clint,

I can see you're a big fan both of anthropomorphizing computer behavior and of reducing human behavior to computer-analytic terms. I'm not sure that's terribly helpful, though, for people who want a better understanding.

A human brain does not perceive the real world at all, and never can.
This is ancient philosophy that ends up with the only reality being "cogito ergo sum". Of course every perception is filtered and modified -- but that is part of the process of perception. There is a lot of transformation going on, certainly, but calling it "statistical analysis" is not really a good description. Which is why there is a field called "image analysis" that is distinct from statistical analysis.

[GenAIs] are not conscious and have very limited memory (though research is showing that LLMs are finding workarounds to create more de facto memory than they were designed with, which is fascinating).

I have a couple of issues with this statement. Minorly, of course, LLMs are not finding workarounds at all; people are finding workarounds using LLMs. But more important is that LLMs are memoryless -- once you train them, they do not change their state, so every time you use one with the same inputs and the same randomization, it will produce the same output. I'm not really sure what you're referring to here, and I'm quite familiar with the literature. Are you talking about self-fine-tuning? Or using agents to store data to be used later by a RAG system? My best guess is that you're talking about the context window and means to expand it. But as far as I am aware, the efforts there are to squeeze more information into the limited window by quantization and specialized training rather than actually increasing its size.
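
To make the "memoryless" point concrete, here's a minimal sketch (assuming the Hugging Face transformers library and the small public GPT-2 checkpoint -- illustrative choices, not anyone's specific product): frozen weights plus greedy decoding means the same prompt yields the same text, run after run.

```python
# Minimal sketch of the "memoryless" point above: the weights are frozen after
# training, so the same prompt with no sampling randomness always yields the
# same continuation. Assumes the `transformers` library and the public GPT-2
# checkpoint (illustrative choices).
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Whenever I feel the need to", return_tensors="pt")
out_a = model.generate(**inputs, max_new_tokens=10, do_sample=False)  # greedy: no randomness
out_b = model.generate(**inputs, max_new_tokens=10, do_sample=False)

assert tok.decode(out_a[0]) == tok.decode(out_b[0])  # identical every run
```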

If it's not too much bother, I'd love to see a reference to these techniques. My work has a large component of using LLMs to summarize large sets of text documents in very specific ways, so I have a professional interest in anything that makes it easier to do so!
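
For concreteness, the RAG-style approach I have in mind keeps the "memory" entirely outside the model, along the lines of this bare-bones sketch (assuming the sentence-transformers library and numpy; the stored snippets and query are made-up placeholders):

```python
# Bare-bones sketch of external "memory" via retrieval: the model never
# changes; we just fetch the most relevant stored text and prepend it to the
# prompt. Assumes `sentence-transformers` and numpy; snippets are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
memory = [
    "The party found the derelict ship on day 12.",
    "The reactor room was flooded with coolant.",
    "The captain's log ends abruptly mid-sentence.",
]
mem_vecs = encoder.encode(memory, normalize_embeddings=True)

query = "What happened to the reactor?"
q_vec = encoder.encode([query], normalize_embeddings=True)[0]

best = int(np.argmax(mem_vecs @ q_vec))  # cosine similarity (vectors are unit-normalized)
prompt = f"Context: {memory[best]}\nQuestion: {query}"  # what actually reaches the LLM
print(prompt)
```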

The big difference is that we have evolved a sense of self, an ongoing story of our own consciousness.
While that is a potential difference, I think most people in the LLM business might disagree. The big question for us is whether or not an LLM can be thought of as capable of conceptualization -- of being able to read text and have an understanding of the concepts involved -- or whether it is simply a stochastic parrot that pattern-matches input text to produce statistically plausible output text. The latter is definitely what they are designed to do, but it's a bit of an open question as to whether that ability has led to the ability to build concepts. There's a lively literature on this. But not really anything on consciousness.

[Re LLMs being "spicy autocorrect"]

I'll be honest: whenever someone uses that analogy for LLMs I am tempted to just politely ignore anything else they write. Sure, it's "spicy" autocorrect if you are using the word "spicy" to cover a LOT of heavy lifting.

Well, to be honest, it's not a terrible analogy. LLMs are designed specifically to say what word (token) is plausible in a sentence (string of tokens) given the preceding words (tokens). Autocorrect does indeed do much the same thing. Google, for example, used to publish frequency tables of word combinations that did exactly what LLMs do, albeit over a much tinier window and with a significantly different architecture; essentially, they had the same statistical, frequency-based predictive approach.
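
If it helps, the frequency-table version fits in a dozen lines of Python (the corpus is a made-up toy; obviously an LLM's learned function is vastly richer, but the predict-the-next-token framing is the same):

```python
# Toy version of the frequency-table "autocorrect" described above: predict
# the next word purely from bigram counts.
from collections import Counter, defaultdict

corpus = ("the dragon hoards gold . the dragon hoards gems . "
          "the dragon sleeps . the knight hoards glory .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word: str) -> str:
    """Most frequent word observed after `prev_word`."""
    return bigrams[prev_word].most_common(1)[0][0]

print(predict("dragon"))  # -> "hoards" (seen twice vs. "sleeps" once)
```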

You may as well call human language production spicy autocorrect. Most of what you do in conversation is taking current and previous prompts and statistically generating words. That's most of what we are doing in this interaction.
Well, no. Autocorrect and LLMs both feed input words into a single process that determines the best next word without trying to abstract or conceptualize. It's possible that LLMs create concepts internally as part of that process, but they are definitely not explicit about it. Whereas human language production, as far as I understand it (and I am in no way an expert), depends heavily on explicit conceptualization. Very different.

When you feed "You may as well call human language production spicy autocorrect" into an LLM, it simply determines which words would come next. ChatGPT will reply:

That's an interesting way to think about it! Language generation, like what I do, involves predicting and producing words and phrases based on patterns and context, which can be seen as an advanced form of autocorrect. The "spicy" part adds a fun twist, suggesting the creativity and variability in human language.

But if I ask "You may as well call human language production spicy backup" it replies:

That's a unique perspective! Describing human language production as "spicy backup" implies that when we communicate, we're not just sharing thoughts but also preserving them—like a backup—with a bit of personal flair or spice. It adds an interesting layer to how we think about memory and expression.

Humans will notice the difference between the computer operations of "autocorrect" and "backup" and realize the concepts are radically different. But the "autocorrect nature" of LLMs does not see any disconnect and continues to embrace the idea as a good one, because although it makes no sense in terms of concepts, words can be generated that tie the two together even though the concepts cannot be.
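
Anyone who wants to reproduce that probe can do it in a few lines. A sketch assuming the OpenAI Python client (v1.x) and an API key in the environment; the model name is my assumption, any chat model will do:

```python
# Sketch of the one-word-swap probe above: same sentence, "autocorrect" vs.
# "backup", to compare how readily the model rationalizes both. Assumes the
# OpenAI Python client (v1.x); the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for word in ("autocorrect", "backup"):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever you have access to
        messages=[{"role": "user",
                   "content": f"You may as well call human language production spicy {word}"}],
    )
    print(f"{word}: {reply.choices[0].message.content[:120]}...")
```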
 



CleverNickName

Limit Break Dancing (He/They)
Yes, but it's also why most people have a much higher standard of living (by typical measures) and longer lifespan. We've been automating things for a long time, and most people seem to enjoy the results. In the short term. In the long term, it might be a disaster. It might be utopia. Humans generally default to prioritizing short term benefits.

cf. the environment.
Completely agree, especially with the part about long-term consequences. I'd like to carefully remind everyone that it wasn't automation that ended child labor, workplace hazards, sweatshops, and other bad practices... it was pushback from labor unions, organized community involvement, public awareness campaigns, and government regulation. Historically, corporations tend to embrace automation as a last resort.

One possibility is simply value. Is the image worth the cost of an artist? Does it have enough value to justify the relatively astronomical cost of a human art commission? Does a free-range chicken have enough additional value to you to buy one for dinner over the farm-raised bird? Most people say no; are they wrong? (And you can even ask whether free-range chicken is too much of a commodity and whether you should raise and grow your own food.)
Value, price--like I said, the wording varies from person to person. And I appreciate that you use both terms very well: Value (how much something is worth to someone) and Price (the amount that is paid for the finished work). But your questions aren't very helpful.

It's hard to know if a piece of art has enough value to justify its price. The value will depend on who needs the piece of art done, and what they intend to use it for, and how urgently they need it. The price of that commission will vary, too, depending on the artist's schedule, the number of iterations the client requests, the materials that are used, and whether or not the artist retains control and ownership of the finished work.

But I won't dodge the question, I'll answer it: yes. The artwork has to have enough value to justify its price, otherwise the artist won't get hired to do the work, and then the artwork won't exist. I told you, it's not a helpful answer. $20 worth of chicken is easy to measure; $20 worth of artwork isn't.

Instead of consumable products, maybe it's better to compare it to other highly-trained, skilled labor. Is welding worth $100/hour? Well, that depends on what's being welded, and by whom, and where, and the strength the weld needs to have. Is carpentry worth $100/hour? That depends also: are we adding a door, building a backyard deck, or moving a staircase? Is it getting built out of plywood from Home Depot, or imported hardwood from Europe? Many factors will affect the price, and most people understand that. But if these services are understood to vary in price, and are understood to be worth ~$100/hour, why shouldn't artwork be?

I think it's because people no longer value art. That's not to say people don't enjoy art, or want art, or need art...they just aren't willing to pay for it.

EDIT: I'm stressing myself out with memories of the early 2000s, when I was a struggling freelance artist. Hoo boy.
 


Aldarc

Legend
Sure, and honestly most of it is unfit for RPG use. I've used it. I've spent loads of time searching for such sources. It's really not a good option for many use cases. Yet free AI can provide images that are suitable, and it will only get better with time. So no, it's not about the haves and have-nots.
Did you miss the part where the better AI image creators are often hidden behind freemium paywalls? Do you not understand that AI will be just as monetized by corporations as everything else? AI is not about saving you, the consumer or small-time business owner, money. It's about saving money for the corporations by not having to pay for labor. And you, the consumer, will be monetized. It's not a matter of IF but WHEN and HOW MUCH. The people who control these AI services will continue to perpetuate, if not widen, the divide between the haves and have-nots. That is the lesson learned from industrialization and automation.
 

Lanefan

Victoria Rules
Did you miss the part where the better AI image creators are often hidden behind freemium paywalls? Do you not understand that AI will be just as monetized by corporations as everything else? AI is not about saving you, the consumer or small-time business owner, money. It's about saving money for the corporations by not having to pay for labor. And you, the consumer, will be monetized. It's not a matter of IF but WHEN and HOW MUCH. The people who control these AI services will continue to perpetuate, if not widen, the divide between the haves and have-nots. That is the lesson learned from industrialization and automation.
In the short term, sadly, you're right.

Soon enough, though, both hardware and software advances will allow some of those AI programs to become streamlined enough that you can run them on your own computer/smartphone/tablet.

A historical example is digital photo editors: at first, only big companies with big machines could use them, their editing abilities were rudimentary by today's standards, and the person-in-the-street user had to pay through the nose for access. Today, just about any smartphone on the market has a much better photo editor built right in.
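
In fact, for image generation we're partway there already. Something like this sketch (assuming the diffusers library and a public Stable Diffusion checkpoint -- the exact repo id is my assumption, mirrors exist) already runs on a decent consumer GPU, no cloud service involved:

```python
# Sketch of running an image generator locally: no cloud service, no per-image
# fee. Assumes the `diffusers` library and a public Stable Diffusion v1.5
# checkpoint; CPU works too, just slowly (use float32 there).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # or .to("cpu") with torch_dtype=torch.float32

image = pipe("abandoned starship corridor, dramatic lighting").images[0]
image.save("prop.png")
```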
 

Blue

Ravenous Bugblatter Beast of Traal
A human brain does not perceive the real world at all, and never can. It only ever has access to electro-chemical signals (data), which it then assembles into an interface that allows us to successfully survive and reproduce.
Yes. I started with a discussion of us not knowing much about the brain, but then talked about the conscious process we use. You've spent a good number of words on points that don't actually address what I wrote and are moot for this discussion.

In other words...there's a lot of statistical analysis going on. I don't understand your final point; AI modelling routinely envisages the same scene from different angles and perspectives with an accuracy that crushes anything a human can do.
No, AI art does not. Flat, period, not up for discussion. AI art is not generating each item in 3D, determining where it is, what's visible, what the light is doing, and the like.

As mentioned, there are definitely products that do duplicate how we do this, but that's not what's being trained on the art out there. Nor are they "creative".
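
To be clear about what those generators do instead: they iteratively denoise a flat tensor of numbers. A deliberately dumbed-down sketch (the denoiser stand-in, shapes, and step count are all illustrative, not any real model):

```python
# Deliberately simplified sketch of diffusion-style generation: everything
# happens to a flat latent tensor. No object list, no 3D positions, no light
# sources. The denoiser below is a stand-in, not a real trained network.
import torch

def denoiser(latent: torch.Tensor, t: int) -> torch.Tensor:
    """Stand-in for a trained noise predictor (hypothetical)."""
    return 0.1 * latent

latent = torch.randn(1, 4, 64, 64)          # pure noise, not a scene description
for t in reversed(range(50)):               # illustrative denoising schedule
    latent = latent - 0.02 * denoiser(latent, t)
# A decoder would then turn `latent` into pixels; geometry was never represented.
```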

You are assuming a lot, here. For one thing, humans generally don't know when we are BSing. We only know when we are intentionally BSing. In fact, we are BSing (or "hallucinating," in LLM parlance) all the time. Most of what you remember? It never happened, certainly not exactly as you remember it. All of what you perceive? It's a statistical model driven by the imperatives of evolution, not reality.
You were okay until the last sentence. If you want to assert that our brain is a statistical model, you need some supporting evidence.

The big difference is that we have evolved a sense of self, an ongoing story of our own consciousness.
Which is relevant to the topic at hand how? We're discussing whether the way AI models are trained and produce output is the same as how humans do it. Sentience is a whole different ball game.

I'll be honest: whenever someone uses that analogy for LLMs I am tempted to just politely ignore anything else they write. Sure, it's "spicy" autocorrect if you are using the word "spicy" to cover a LOT of heavy lifting. You may as well call human language production spicy autocorrect.

In the same discussion where I called it "spicy autocorrect", I talked about the huge number of iterations, from horse-and-buggy to a top-of-the-line sports car. Don't try to dismiss the point via rhetoric.

Also, NO, you can't call human language production spicy autocorrect, because it doesn't work the same way. Which is the point at hand. That is absolutely not a given. Don't confuse your assumptions for truth.

The first L in LLM is also something to look at; it stands for "Large". The amount of input data needed to generate that statistical data is far, far beyond what a human being is exposed to in order to generate speech. Many orders of magnitude. Because humans are not using the same method.
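
Back-of-the-envelope, using rough published estimates (roughly ten million words heard per year in childhood, versus training corpora on the order of 10^13 tokens, about the reported Llama 3 scale):

```python
# Rough arithmetic behind "many orders of magnitude". Both figures are coarse
# published estimates, used only for the order-of-magnitude comparison.
words_per_year = 10_000_000                 # ~words a child hears per year (estimate)
human_words_by_20 = words_per_year * 20     # ~2e8 words
llm_tokens = 15_000_000_000_000             # ~1.5e13 tokens (reported Llama 3 scale)

ratio = llm_tokens / human_words_by_20
print(f"~{ratio:,.0f}x")                    # ~75,000x, i.e. about five orders of magnitude
```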

Most of what you do in conversation is taking current and previous prompts and statistically generating words.
Nope. Not in the slightest. Strike that all-important word "statistically" and replace it. I'm not sure with what, maybe "conceptually", but I'm not married to that. Studies on single- vs. multiple-language households are interesting, especially when the grammars differ, so "statistically" there are very different word orders for what would come next; yet humans can still substitute a concept from one language into the grammar of the language the sentence was started in, when the word for that concept is only known in the other language.

See, this is the issue that keeps coming up. Consciousness.
Nope, didn't bring it up. That's a completely different conversation than the one I've been having. I bring up concepts because it's how humans do it, which is different from the statistical analysis of the LLMs. I'm not attributing concepts to consciousness. I am saying that the method used under the covers differs, since the brain is not doing statistical analysis as the primary or sole method to generate either art or conversation.
 

I'd honestly be interested to see some good AI RPG art - because for now, I'm mainly looking at stuff thinking, "well, that looks like another ugly AI pic", not even knowing whether it's true or not. For example, if you presented this cover to me:
I'd probably say: "Yeah, that has to be AI", but really, is it? Or is it only in a certain style that I happen to dislike and associate with AI?

I think I have an easier time figuring out whether a text was created by an AI: For one, there are the hallucinations (like a text about Star Trek that recently came up on Facebook and mentioned an episode that simply doesn't exist); also, they tend to be extremely repetitive, saying the same thing over and over again in different words.

[Attached image: 4.Passages.jpg]


This is a prop I used to set the mood for an abandoned starship the PCs found adrift and searched. Took two tries, maybe 5 minutes total.
 
