What's the difference between AI and a random generator?

FrogReaver

As long as i get to be the frog
So, here's the bit that gets missed:

You are correct, in that the "connections" (the word does not accurately describe the action of a generative AI, but we can use it for now anyway) the generative AI draws are not generally known to the user. But then, how a smartphone works is not generally known to the user, either.
I don’t think that’s a fair comparison.

The root assertion I was responding to was that they were not just unknown, but unknowable.
I think there are parts that are knowable, but when I consider whether a person could predict an AI's output (even in aggregate over repeated prompts), I have very little faith that's possible. Do you?

And if a person cannot predict its behavior - how can one claim it’s knowable?
Sure. And it is interesting. But it is not a mystery of the universe, or something. If we really want to know, we can look and see.
Hindsight is always 20-20

And just to reiterate - I agreed with most of your first post.
 


tomBitonti

Adventurer
Where to begin?

The artist has their own thoughts, emotions, dreams and aspirations. The artist has their own opinions about the world, and the work, and everything else. The artist is hungry sometimes, has just gone to get a cup of coffee and had a passing conversation that made them laugh, or made them angry.

The artist, given a prompt, is still also being themselves, and are putting that self into the work - partly willfully, partly subconsciously. The artist is probably trying to "say" something beyond the prompt in their work.

The AI is following an algorithm. It has no will, no human experience to draw on or set forth. What humanity the AI has had in its training data has been put through a metaphorical blender, and now has no focus.

What prevents an AI from having its own unique internal state? While probably not describable as "thoughts, emotions, dreams, and aspirations" -- AI processes information in notably different ways than humans do -- is it not possible for AI to have similar, unique features?

Much of what you describe seems reductive: Algorithms (which are implemented in a computer) cannot think, because algorithms can't think. Algorithms can't accrete experience, or have will, because algorithms can't do either.

Certainly, what an AI experiences is very probably different than what we experience: (Current) computers process information differently than people do; one ought not to expect an exact match to human thinking. However, that doesn't mean that AI can't develop unique internal processes.

I'd say that computers can gain experience. They could achieve a sense of will if training data were gathered at the direction of the AI itself.

That's not to say that current AI is anywhere close to these capabilities. I just don't see the change of implementation (neurons vs circuits) as ultimately mattering.

TomB
 

Umbran

Mod Squad
Staff member
Supporter
What prevents an AI from having its own unique internal state? While probably not describable as "thoughts, emotions, dreams, and aspirations" -- AI processes information in notably different ways than humans do -- is it not possible for AI to have similar, unique features?

To be perfectly clear - we are not talking about "AI" in some general, potential, possible future or science fiction sense. We are talking about the technology that is present today, spoken about as "generative AI".

A generative AI system does have what we might call a unique internal state, but that state fails to be analogous to the human brain and mind in many ways. Some that are salient are...

1) A generative AI's internal state only changes through training. The AI, once trained, is static. It does not continue to take in information and change its state. It does not continue to "live". (There is a short code sketch after this list that illustrates the point.)

2) A generative AI is not able to take in general world information - the thing is set up to take in information of a specific type and format - the AI used to generate visual art cannot process text, and the one that processes text can't take in information about high energy subatomic particle interactions. And the prompts users type in are not, in general, in the same form as the data used to train the AI.

3) A generative AI does not have biological imperatives or instincts, hormones, or allergies, or anything else that impacts its operation other than its fixed state. The generative AI didn't sleep badly last night, or have a really fun date planned for tomorrow.
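To put point 1 in concrete terms, here is a minimal sketch of what "frozen after training" looks like in practice. It assumes a PyTorch-style setup, and the tiny network and the prompt embedding are stand-ins, not any real generator:

```python
import torch
import torch.nn as nn

# Stand-in for a trained generator: in reality this is a huge network
# whose weights were fixed by a training run that has already finished.
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 3 * 64 * 64))

model.eval()                        # inference mode: the model only maps input -> output
for p in model.parameters():
    p.requires_grad_(False)         # weights are frozen; user prompts never change them

prompt_embedding = torch.randn(1, 512)   # stand-in for an encoded prompt
with torch.no_grad():                    # no gradients, hence no learning, at generation time
    image = model(prompt_embedding)      # the internal state is identical before and after this call
```

Nothing in the generation step writes back into the weights; to change them, you have to go back and run training again.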

Much of what you describe seems reductive: Algorithms (which are implemented in a computer) cannot think, because algorithms can't think. Algorithms can't accrete experience, or have will, because algorithms can't do either.

No, it isn't "can't because it can't". It is "can't because it isn't designed to do so". It is like saying a horse cart can't go uphill on its own without a horse - it has no way to do so, because none has been built in. What you are talking about is outside the design parameters of the current technology.

That's not to say that current AI is anywhere close to these capabilities. I just don't see the change of implementation (neurons vs circuits) as ultimately mattering.

Oh, the implementation probably does matter, a lot. The operation of the neural network of a generative AI is deterministic, because the action of all its parts is deterministic, while the action of your brain, as far as we can tell, is not.
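For what it's worth, "deterministic" here just means that with the weights fixed and any random seed fixed, the same input gives the same output every time. A toy sketch (PyTorch-style; the tiny network is a stand-in, not a real generative model):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 4))
x = torch.randn(1, 8)

print(torch.equal(net(x), net(x)))   # True: fixed weights + same input = same output

# Even the "creative" sampling step is driven by a pseudo-random number
# generator; re-seed it and the whole pipeline repeats exactly.
probs = torch.tensor([0.1, 0.6, 0.3])
torch.manual_seed(42)
a = torch.multinomial(probs, num_samples=5, replacement=True)
torch.manual_seed(42)
b = torch.multinomial(probs, num_samples=5, replacement=True)
print(torch.equal(a, b))             # True: the "randomness" is reproducible
```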
 

mamba

Legend
I’d qualify that as knowing what a penguin is. At least within the context of the training data provided - which is just images.
As long as it sticks to showing it like that, I'd agree. But you will also find some penguins swimming like a duck (bent back, head held high) in the generated images, or one that is a cross between a seal and a penguin - things that would not happen if the AI understood 'penguin' as anything more than 'black-and-white birdlike figure, usually standing upright, frequently in an icy landscape, associated with birds and seals' (the latter because seals seem to pop up with penguins at random, without being requested).
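A toy way to picture this: what the model "knows" about penguins is largely a bundle of co-occurrence statistics mined from its training data. The captions below are made up purely for illustration, not taken from any real training set:

```python
from collections import Counter

# Made-up captions standing in for image/caption training data.
captions = [
    "penguin standing on ice",
    "penguin and seal on an iceberg",
    "penguin swimming near a seal",
    "black and white penguin on snow",
]

# Count which words appear alongside "penguin".
co_occurrence = Counter()
for caption in captions:
    words = set(caption.split())
    if "penguin" in words:
        co_occurrence.update(words - {"penguin"})

print(co_occurrence.most_common(5))
# Roughly: [('on', 3), ('seal', 2), ('and', 2), ('ice', 1), ...]
# To a system built on associations like these, "penguin" is inseparable
# from ice, seals and black-and-white - which is why seals wander into
# penguin pictures unrequested, and a seal-penguin hybrid never registers as wrong.
```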
 

Epic Meepo

Adventurer
I think there are parts that are knowable, but when I consider whether a person could predict an AI's output (even in aggregate over repeated prompts), I have very little faith that's possible.
Can a randomly selected person predict an AI's aggregate output? No.

Can the developers of an AI predict its behavior in aggregate? In my experience, yes. I have been sitting in the room when AI developers have done it. They looked at the training data, they looked at the code, and they said, "Add training data X to get result Y." My team added training data X, and the AI produced result Y, exactly as predicted.
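For a toy illustration of that kind of prediction (a deliberately tiny "model" in plain Python, not any real system): if you know the training data and the sampling rule, you can say in advance how adding more of X shifts the aggregate output.

```python
import random
from collections import Counter

def train(corpus):
    # "Training" here is just counting word frequencies - a stand-in for
    # fitting a real model to its data.
    return Counter(corpus)

def generate(model, n=10_000, seed=0):
    # Sample words in proportion to their training-data frequency.
    rng = random.Random(seed)
    words, weights = zip(*model.items())
    return Counter(rng.choices(words, weights=weights, k=n))

corpus = ["dragon"] * 50 + ["goblin"] * 50
before = generate(train(corpus))

# Prediction, made from the data alone: tripling the goblin examples
# should push goblins to roughly 75% of the aggregate output.
corpus_v2 = ["dragon"] * 50 + ["goblin"] * 150
after = generate(train(corpus_v2))

print(before["goblin"] / 10_000)   # ~0.50
print(after["goblin"] / 10_000)    # ~0.75, as predicted
```

The real systems are enormously bigger, but this is the flavour of prediction being described: it comes from knowing the data and the code, not from reading the model's mind.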
 

tomBitonti

Adventurer
I’d qualify that as knowing what a penguin is. At least within the context of the training data provided - which is just images.
I noticed at least two problems in the generated image. Many of the creatures were incorrect. Also, the creatures (and other elements) were not correctly sized for their distance from the viewer. Curious that the AI has no validation step, and perhaps has a poor sense of proper scale.
TomB
 

tomBitonti

Adventurer
To be perfectly clear - we are not talking about "AI" in some general, potential, possible future or science fiction sense. We are talking about the technology that is present today, spoken about as "generative AI".

A generative AI system does have what we might call a unique internal state, but that state fails to be analogous to the human brain and mind in many ways. Some that are salient are...

1) A generative AI's internal state only changes through training. The AI, once trained, is static. It does not continue to take in information and change its state. It does not continue to "live".

2) A generative AI is not able to take in general world information - the thing is set up to take in information of a specific type and format - the AI used to generate visual art cannot process text, and the one that processes text can't take in information about high energy subatomic particle interactions. And the prompts users type in are not, in general, in the same form as the data used to train the AI.

3) A generative AI does not have biological imperatives or instincts, hormones, or allergies, or anything else that impacts its operation other than its fixed state. The generative AI didn't sleep badly last night, or have a really fun date planned for tomorrow.
No, it isn't "can't because it can't". It is "can't because it isn't designed to do so". It is like saying a horse cart can't go uphill on its own without a horse - it has no way to do so, because none has been built in. What you are talking about is outside the design parameters of the current technology.
Oh, the implementation probably does matter, a lot. The operation of the neural network of a generative AI is deterministic, because the action of all its parts is deterministic, while the action of your brain, as far as we can tell, is not.

I don't know enough about generative AI to say whether it can continue to take in new information and change its state after training. AlphaGo certainly was able to learn by playing itself.

Whether AI, or generative AI in particular, can do more than it was designed to do seems an open question. There are (if I remember correctly) reports of AI which exceeded their original intended function.

Also, whether brains are deterministic is another open question. And I don't know whether generative AI is deterministic. If it isn't, that may be because steps are built in where multiple equally plausible choices are possible and one is made at random. Also, there are deterministic systems which are effectively random, in the sense that the output cannot be predicted from the input. That calls into question whether determinism matters in this context.
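On the "deterministic but effectively random" point, a textbook example (not anything taken from a generative AI system) is a chaotic map: every step is a fixed formula, yet an unmeasurably small difference in the input makes the output unpredictable in practice.

```python
def logistic_map(x, steps=60, r=3.99):
    # Iterate x -> r * x * (1 - x): completely deterministic.
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_map(0.400000000)
b = logistic_map(0.400000001)   # input differs only in the 9th decimal place

print(a, b)
# After 60 steps the two trajectories are nowhere near each other, so
# knowing the input "almost exactly" tells you essentially nothing about
# the output: deterministic, yet unpredictable for any practical purpose.
```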

TomB
 

FrogReaver

As long as i get to be the frog
Can a randomly selected person predict an AI's aggregate output? No.

Can the developers of an AI predict its behavior in aggregate? In my experience, yes. I have been sitting in the room when AI developers have done it. They looked at the training data, they looked at the code, and they said, "Add training data X to get result Y." My team added training data X, and the AI produced result Y, exactly as predicted.
One might ask - if the output can be predicted, why didn't they get it right the first time?
 

Epic Meepo

Adventurer
One might ask - If the output can be predicted why didn’t they get it right the first time?
Because predictable output is what happens once you have error-free code that covers all desired use cases, and that sort of code is the end result of an incremental process. As a developer, you (ideally) know the logic you want your code to follow, but due to human limitations, it's unlikely your first attempt at writing your code successfully implements that logic (the same way it's unlikely that the first draft of a novel is going to be perfect).

With AI, you have the added step of assembling a training set, which follows the same incremental process as software development. You (ideally) know what metadata you would like to include to avoid biases, but your data set is so large, it's not human-readable in its entirety. So you have to process it with software before you know what it contains. Only then can you identify its deficiencies and propose solutions.
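To make the "process it with software" step concrete, here is a small sketch of the kind of audit being described. The manifest file, its "subject" column, and the labels are all hypothetical; the point is just that imbalances you would never spot by eye fall straight out of the counts:

```python
import csv
from collections import Counter

def audit(manifest_path):
    # One row of metadata per training image; in a real project this file
    # runs to millions of rows - far too long for anyone to read.
    counts = Counter()
    with open(manifest_path, newline="") as f:
        for row in csv.DictReader(f):      # assumes a "subject" metadata column
            counts[row["subject"]] += 1
    total = sum(counts.values())
    for subject, n in counts.most_common():
        print(f"{subject:>12}: {n:8d}  ({n / total:.1%})")

# audit("training_manifest.csv")
# A skewed printout (say, 92% "castle" and 0.3% "penguin") is the kind of
# deficiency you can only see after summarising - and only then can you
# say "add training data X to get result Y".
```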
 

FrogReaver

As long as i get to be the frog
Because predictable output is what happens once you have error-free code that covers all desired use cases, and that sort of code is the end result of an incremental process. As a developer, you (ideally) know the logic you want your code to follow, but due to human limitations, it's unlikely your first attempt at writing your code successfully implements that logic (the same way it's unlikely that the first draft of a novel is going to be perfect).
This seems to align with my claim that AI is not currently predictable. (Not to be read as saying that nothing about AI can be predicted, but rather that not everything about AI can currently be predicted.)

No existing AI is unbiased and error-free, is it?
With AI, you have the added step of assembling a training set, which follows the same incremental process as software development. You (ideally) know what metadata you would like to include to avoid biases, but your data set is so large, it's not human-readable in its entirety. So you have to process it with software before you know what it contains. Only then can you identify its deficiencies and propose solutions.
If the data set is so large that it's not human-readable, then at best software can summarize it for us - but summaries often hide important details. Which suggests that humans cannot perfectly predict the behavior of even perfectly written AI. Basing predictions on the summary will get quite a bit right, but not necessarily everything.
 
