What's the difference between AI and a random generator?

Epic Meepo

Adventurer
Yes, but to be able to understand the result, you need to know the weights; that is the whole point of why it is basically impossible to explain the result.
I agree with much of your argument, but I disagree with the conclusion drawn in the quoted statement. The fact that an AI assigns weights to training data has no bearing on how easy or how hard it is to create a human-readable explanation for the AI's output.

It is neither impossible nor technically difficult to determine the weights an AI assigns to training data. Weights are either hard-coded (in which case you know them if you know the code) or dynamically generated (in which case you can derive them from the code and the input data). This is no more or less difficult than determining the current internal state of, say, the operating system on the computer you're using to read this message.

All you need to reproduce a program's current internal state is the code the program is running and the input data that led to that state (including all seeds used by random number generators, if any). A step-by-step log explaining how you got there would take a long time for a human to read, but producing that explanation is fairly trivial.
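To make the point concrete, here's a quick sketch (Python, a toy program of my own, not any real AI): same code, same input, same seed, same internal state, and the step-by-step log falls out for free.

import random

def run(seed, data):
    # Same code + same input + same seed => same internal state, every time.
    rng = random.Random(seed)
    state = 0.0
    log = []  # the step-by-step explanation a human could (tediously) read
    for x in data:
        w = rng.random()  # a "dynamically generated" weight
        state += w * x
        log.append(f"weight={w:.6f} input={x} state={state:.6f}")
    return state, log

a, log_a = run(seed=42, data=[1, 2, 3])
b, log_b = run(seed=42, data=[1, 2, 3])
assert a == b and log_a == log_b  # fully reproducible, step by step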
 


Umbran

Mod Squad
Staff member
Supporter
It may do well to make this clear:

I, and perhaps others, are pushing back on the "black box" description because it unnecessarily and inaccurately makes the thing look mysterious. And mystery breeds anxiety. As we consider policy about new technology, we need folks to be literate on the matter, reacting to solid information, not boogeymen.

It is a computer. To most of us every computer is as much a "black box" as a generative AI is. If your laptop or smartphone doesn't make you wary because you don't know how it works, the AI shouldn't either.
 

Art Waring

halozix.com
It may do well to make this clear:

I, and perhaps others, are pushing back on the "black box" description because it unnecessarily and inaccurately makes the thing look mysterious. And mystery breeds anxiety. As we consider policy about new technology, we need folks to be literate on the matter, reacting to solid information, not boogeymen.

It is a computer. To most of us every computer is as much a "black box" as a generative AI is. If your laptop or smartphone doesn't make you wary because you don't know how it works, the AI shouldn't either.
Perhaps, then, it might help to lay down some solid definitions of what AI is. You are, after all, asking that folks be literate. Though it does seem like this is a dig aimed right in my direction; thanks, I guess?

As I showed in my previous posts, there is a wide variety of AI models. "Narrow" AI (this is a professional term in the field of AI science; they are referred to as Narrow AI, look it up, ya know, literacy) is the most common type of AI seen today. It is absolutely NOT a black box, is very narrow (yup) in its application, and its process is typically not hard to understand. That should agree with your avoidance of the term black box.

However, there is substantial evidence (listed in my previous posts) that the term "black box" DOES apply to more advanced models, because that is the term scientists have chosen to use. If that's a problem, then that is on the technicians and scientists that document their research in the field. Advanced models are becoming increasingly complex, and the scientists who research them have concluded that there is indeed a "black box" of hidden layers inside. Respected scientists in the field of AI would agree with this; I am not sure why you don't.
 

Umbran

Mod Squad
Staff member
Supporter
If that's a problem, then that is on the technicians and scientists that document their research in the field.

I am not interested in placing blame. I'm interested in reducing the mystery.

The amusing thing is we see this same dynamic show up in our discussions of games: what happens when you take jargon - normal words given special meaning within a technical domain - and take it outside that domain.

"GNS theory" has its own definitions of what they mean by "gamism", "narrativism" and "simulationism". Those definitions do not generally match what comes into the minds of other gamers when those words are used. Arguments ensue based on different understandings of what the term should mean.

"Black box" - do we all remember that has at least two meanings? It can mean a box into which we cannot see that has mysterious operation OR it references something like a flight recorder - the box of recorded data we use after a major failure to work out what happened.

So, the black box is either a mystery we cannot understand, or it is the thing we open up to solve mysteries! And that's before we look into what is meant by AI researchers.

Maybe the fact that the term is used within the domain is not a good reason for using it here.
 

Art Waring

halozix.com
I am not interested in placing blame. I'm interested in reducing the mystery.

The amusing thing is we see this same dynamic show up in our discussions of games: what happens when you take jargon - normal words given special meaning within a technical domain - and take it outside that domain.

"GNS theory" has its own definitions of what they mean by "gamism", "narrativism" and "simulationism". Those definitions do not generally match what comes into the minds of other gamers when those words are used. Arguments ensue based on different understandings of what the term should mean.

"Black box" - do we all remember that has at least two meanings? It can mean a box into which we cannot see that has mysterious operation OR it references something like a flight recorder - the box of recorded data we use after a major failure to work out what happened.

So, the black box is either a mystery we cannot understand, or it is the thing we open up to solve mysteries! And that's before we look into what is meant by AI researchers.

Maybe the fact that the term is used within the domain is not a good reason for using it here.
I do agree that the term may be problematic for laying down solid definitions (something that is also challenging in discussing TTRPGs). I surely don't want to add more confusion.

I am just pointing out that it is a term in common use within the spheres of AI technology and science. Scientists don't always provide the best explanations (to us laymen, as we are not scientists working in the field of AI) of how these things work, but the term is commonly used (in reference to advanced systems).

Even if you don't like the term, though, there needs to be some kind of term or phrase for referring to highly advanced AI systems that can't be easily understood, even by the scientists who created them.
 

Committed Hero

Adventurer
A random generator is limited by the content its creator provides. I assume the AI has a set of instructions that tells it where to look, but it could be pretty far-reaching.
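For illustration, a toy generator (Python, made-up tables): it can only ever recombine what its creator typed in.

import random

# The generator's entire "universe" is whatever its creator provides.
monsters = ["goblin", "owlbear", "lich"]
locations = ["crypt", "swamp", "tower"]

def random_encounter(rng):
    return f"{rng.choice(monsters)} in the {rng.choice(locations)}"

print(random_encounter(random.Random()))  # never anything outside the lists above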

The consent issue is a tricky one, because the AI can do what a human can do, just much faster.
 

Epic Meepo

Adventurer
Even if you don't like the term, though, there needs to be some kind of term or phrase for referring to highly advanced AI systems that can't be easily understood, even by the scientists who created them.
Computer scientists like to use terms like "black box" and "hidden layers" as technical jargon. Personally, I think it would be more accurate to use terminology from chaos theory: every sufficiently complex computer program is a chaotic system. Roughly speaking, a chaotic system is one whose end state can change dramatically if you make even small changes to its initial state.

If you have perfect knowledge of a chaotic system's initial conditions, you can perfectly predict the end state. But you can't use that knowledge to perfectly predict what would happen if you made even a small change to the initial conditions. The only way to perfectly predict the outcome you would get by changing the initial conditions is to go back and recalculate your prediction from scratch using the new initial conditions.
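A quick illustration (Python, using the logistic map, a textbook chaotic system; nothing AI-specific): two initial conditions differing by one part in a billion end up with completely decorrelated end states.

def logistic(x, steps=50, r=4.0):
    # x_{n+1} = r * x_n * (1 - x_n); chaotic at r = 4.
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.300000000)
b = logistic(0.300000001)  # a one-part-in-a-billion change
print(a, b)  # after 50 steps the two trajectories have fully decorrelated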

If we're talking about a computer program, recalculating your prediction about the end state from scratch is identical to just running the program and seeing what happens; in other words, brute force. To make meaningful predictions about large numbers of possible end states without resorting to brute-force calculation, you have to take shortcuts and make predictions based on what's statistically likely to happen given your understanding of the system.

That statistical modeling is the "science" part of computer science. Presumably, that's also the source of all the "black box" and "hidden layers" jargon. The actual behavior of the software is one step removed from the simplified model you are using to predict its behavior. Since you can't realistically perform brute-force calculations to determine every possible outcome, you must study patterns and make testable hypotheses about what the software is most likely to do.
 

tomBitonti

Adventurer
Computer scientists like to use terms like "black box" and "hidden layers" as technical jargon. Personally, I think it would be more accurate to use terminology from chaos theory: every sufficiently complex computer program is a chaotic system. Roughly speaking, a chaotic system is one whose end state can change dramatically if you make even small changes to its initial state.

If you have perfect knowledge of a chaotic system's initial conditions, you can perfectly predict the end state. But you can't use that knowledge to perfectly predict what would happen if you made even a small change to the initial conditions. The only way to perfectly predict the outcome you would get by changing the initial conditions is to go back and recalculate your prediction from scratch using the new initial conditions.

If we're talking about a computer program, recalculating your prediction about the end state from scratch is identical to just running the program and seeing what happens; in other words, brute force. To make meaningful predictions about large numbers of possible end states without resorting to brute-force calculation, you have to take shortcuts and make predictions based on what's statistically likely to happen given your understanding of the system.

That statistical modeling is the "science" part of computer science. Presumably, that's also the source of all the "black box" and "hidden layers" jargon. The actual behavior of the software is one step removed from the simplified model you are using to predict its behavior. Since you can't realistically perform brute-force calculations to determine every possible outcome, you must study patterns and make testable hypotheses about what the software is most likely to do.

Not sure this helps. There are plenty of completely open systems which exhibit complex, unpredictable behaviors. See, for example, the Mandelbrot equation, which is the essence of simplicity but which has complex behavior near critical values.
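To illustrate (Python, toy values): the whole iteration is z -> z*z + c, yet values of c that sit close together near the boundary behave completely differently.

def escape_time(c, max_iter=1000):
    # Iterate z -> z*z + c and count steps until |z| exceeds 2 (if ever).
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return None  # never escaped: c is (apparently) inside the set

print(escape_time(0.2499))  # None: stays bounded forever
print(escape_time(0.2501))  # escapes after a few hundred iterations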

The thesis which seems to be central to the discussion is that AI systems increasingly contain internal parts which are largely indecipherable. Here "black box" is confusing, in that the internal parts may be entirely exposed, their operation wholly visible to anyone interested. My sense is that the systems cannot be understood because there are too many parts (here, weights count as parts), such that meaning is diffused across the many parts.
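A toy version of that diffusion (Python, random made-up weights): every one of the 40 weights below is wholly visible, yet no single weight "means" anything, because the output depends on sums over all of them at once.

import math
import random

random.seed(0)

# A tiny two-layer network: 4 inputs -> 8 hidden units -> 1 output.
# All 40 weights are completely exposed -- and individually uninterpretable.
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
W2 = [random.uniform(-1, 1) for _ in range(8)]

def forward(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

x = [0.5, -1.0, 0.25, 2.0]
print(forward(x))
W1[3][2] += 0.01   # nudge a single weight
print(forward(x))  # the output shifts, but no one weight carries the "meaning"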

TomB
 

tomBitonti

Adventurer
To follow up on the issue of bias in AI: this article seems OK. Notably, it has links to other articles which discuss the issue:


I'm not endorsing the accuracy of the article one way or the other. The article seems to be a good starting point for looking for information about the issue.

Lots of the discussion of bias goes into areas not allowed on this board. With regard to the current discussion, the important point is that handling bias is made harder by the difficulty of understanding AIs. Part of this is caused by systems being closed, usually for commercial reasons, but also (possibly) to cover problems such as bias. Another part is the difficulty of understanding how an AI reaches particular conclusions.

TomB
 

tomBitonti

Adventurer
If I may throw out another, possibly illuminating question: how is an RNG, plus extensive technical ability, plus lots of examples, plus an AI to tie these together, different from an actual artist? When asked to create a scene with several elements, as was done earlier in this thread, how is the artist functioning differently from the AI system?

TomB
 
