What's the difference between AI and a random generator?

Umbran

Mod Squad
Staff member
Supporter
Yes, they do consist of layers, but the larger and more complex the model, the more "hidden" layers exist that are currently unable to be observed by humans.

So, with all respect to the gentleman, "hidden" is in quotes for a reason.

Hidden layers are not accessible to a typical user - the person writing a prompt to get ChatGPT to put out a limerick in the style of Herman Melville can't see them. But their data is stored in quite regular places and formats, and their operation is well-defined. There is nothing in principle keeping someone with appropriate engineering skill from looking at the details.

The action of the "hidden" layers is "unobservable" in the same way that the action of your smartphone is "unobservable" - a layman who cracked open the case of their iPhone still wouldn't understand its operation, because they don't understand how microprocessors work, but that doesn't make them magic, or something.
 


mamba

Legend
Software engineers who program AI professionally absolutely do know the logic their AI uses to produce results, to the same extent they know the logic used by any non-AI software they write. Specifically, they understand the intended logic perfectly, but they need to debug and refine their code over time to ensure it implements that logic correctly. And even bugs introduced by human error produce results one can predict once those bugs are identified.
So let's see where I go wrong, feel free to correct me...

I did not mean they do not understand their code or cannot debug it; the latter is obviously true, and the former had better be true.

The code is only half the equation, however; the weights that are being automatically adjusted and used by the AI are the other half. Given that the AI keeps tweaking these 'on its own' and there are so many of them, you cannot really say how it arrived at its result without debugging it.
In contrast, a random generator uses fixed lookup tables, and the only thing that is random is the number it starts with; if you know that number (through debugging again), you know what the result will be without having to debug the rest of the logic.
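
To make that concrete, here is a rough Python sketch of the kind of generator I mean (the table name and entries are just made-up examples): once you know the seed, the result follows mechanically from the fixed table.

import random

# A fixed, hand-authored lookup table - the kind a random generator draws from.
ENCOUNTERS = ["goblin ambush", "wandering merchant", "sudden storm", "ancient ruin"]

def random_encounter(seed):
    """Pick an entry from the fixed table; the seed is the only source of randomness."""
    rng = random.Random(seed)
    return ENCOUNTERS[rng.randrange(len(ENCOUNTERS))]

print(random_encounter(42))  # some entry from the table
print(random_encounter(42))  # identical - same seed, same lookup, same result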

How easy would it be to 'predict' that given the following input 'artic ice cliff, penguins, seals, black stone temple, high quality digital painting' the AI would create the following picture

[attached image: AI-generated arctic scene with penguins, seals, and a black stone temple]


or explain and fix the highlighted penguins

Generative AI isn't magic. It's a calculator that takes all the information in a massive data set as input instead of taking a few small numbers as input. Due to that difference in scale, it's tedious to track every single step performed by an AI, but tracking that internal logic or, more practically, a small subset thereof is entirely feasible.
I did not disagree with this, especially the 'small subset' part. Of course it is a giant predictive calculator; the problem lies in its complexity.

There is no mystery here. If one wanted to, one could take all the weights, and walk through the entire operation of the network by hand. We don't, not because we don't know the logic, but because it would be, for people, a long and tedious process.
Or in other words, we do not know. But yes, 'no one knows' might have been a bit strong, even if it is essentially true for a given result and too tedious to figure out. I did not mean 'we do not have the slightest idea what is going on'...
 


Art Waring

halozix.com
So, with all respect to the gentleman, "hidden" is in quotes for a reason.

Hidden layers are not accessible to a typical user - the person writing a prompt to get ChatGPT to put out a limerick in the style of Herman Melville can't see them. But their data is stored in quite regular places and formats, and their operation is well-defined. There is nothing in principle keeping someone with appropriate engineering skill from looking at the details.

The action of the "hidden" layers is "unobservable" in the same way that the action of your smartphone is "unobservable" - a layman who cracked open the case of their iPhone still wouldn't understand its operation, because they don't understand how microprocessors work, but that doesn't make them magic, or something.
I placed the word hidden in quotations, not the article I posted, which clearly presents evidence that there is indeed a problem; it's called the Explainability problem. I feel like you didn't read anything I posted in my previous reply.

I have posted quotes and sources from experts in the field of AI technology; I spent two hours gathering and presenting my evidence.

A definition of Explainability from IBM:
As AI becomes more advanced, humans are challenged to comprehend and retrace how the algorithm came to a result. The whole calculation process is turned into what is commonly referred to as a "black box" that is impossible to interpret. These black box models are created directly from the data. And, not even the engineers or data scientists who create the algorithm can understand or explain what exactly is happening inside them or how the AI algorithm arrived at a specific result.

Or, I guess you know better than the experts in the field of AI.

Thanks for your time.
 

One other aspect is bias. Everything has bias because no random number generator is perfect. Most dice are imperfect and do not have a perfect distribution curve after a million rolls, but they are usually close enough. But some dice are really badly weighted or have some kind of damage or defect that makes some numbers less likely to come up. By the same token, most computer RNGs are perfectly adequate for most uses, but there are cases where their bias can be exploited.
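
As a rough illustration (the weights here are invented, not measured from a real die), this is how that kind of bias shows up once you roll enough times:

import random
from collections import Counter

def roll_many(face_weights, n=100_000, seed=0):
    """Roll a d6 n times with the given per-face weights and tally the results."""
    rng = random.Random(seed)
    return Counter(rng.choices([1, 2, 3, 4, 5, 6], weights=face_weights, k=n))

print(roll_many([1, 1, 1, 1, 1, 1]))    # an ideal die: roughly uniform counts
print(roll_many([1, 1, 1, 1, 1, 0.7]))  # a warped die: noticeably fewer sixes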

An LLM or whatever flavor of AI is based on training data that adds its own skew. Imagine an AI trained on data that never included the number 4. It might have a sentence or two of counting so it can say "2+2=four" but if asked "what's 4x2?" it would have no clue. Alternatively, "lucky number 7" could appear so often that it's the most probable result of any numeric request.

And that gets to the icky part. A random generator uses an RNG to select from lists of items assembled by a person. Generative AI is using patterns found in the training data to extrapolate a list. If you do that on some public domain work like Shakespeare, that's ok.

But most LLMs are full of internet data, ebooks, who knows what, but it's often pretty current, meaning not public domain. Maybe the lists it generates were scraped from some other list created by some person and it's just regurgitating that. Or it had articles about Shakespeare and used the words that appeared most often (which could result in references to the Globe Theater).

Personally, I look forward to an LLM based on Project Gutenberg and a visual transformer exclusively made on pre-20th century art that I can use guilt free to glitz up the games I run.

Until then, I shun it. A full fledged shunnin'.
 

Epic Meepo

Adventurer
The code is only half the equation, however; the weights that are being automatically adjusted and used by the AI are the other half. Given that the AI keeps tweaking these 'on its own' and there are so many of them, you cannot really say how it arrived at its result without debugging it.
When talking about the internal logic of an AI, I wouldn't say "the code is only half the equation." I would say the code is the internal logic of the AI. Full stop. The fact that there are millions of weights being automatically adjusted doesn't make the logic any more or less complex than the code which sets the values of those weights. The weights are just numerical variables, and their values are not relevant to any of the internal logic involved.
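
To make the distinction concrete, here is a toy Python sketch (illustrative numbers only, nothing like a real model): the code for one neuron is a few fixed lines, and the weights are just numbers passed into it.

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of the inputs, then a simple ReLU activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)

x = [0.5, -1.2, 3.0]

# The code is identical in both calls; only the stored numbers differ.
print(neuron(x, weights=[0.1, 0.4, 0.2], bias=0.05))   # about 0.22
print(neuron(x, weights=[-2.0, 1.3, 0.9], bias=-0.5))  # 0.0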

mamba said:
In contrast, a random generator uses fixed lookup tables, and the only thing that is random is the number it starts with; if you know that number (through debugging again), you know what the result will be without having to debug the rest of the logic.
Incidentally, you don't need to debug a program to determine how it arrives at a given end state. You just need it to create a log of the individual steps it's taking. Creating that kind of log can be useful during the debugging process, but you don't have to be engaged in debugging to create that kind of log. The same process can be used for a perfectly-understood program with no bugs whatsoever.
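
A rough sketch of what that kind of step log looks like (the pipeline steps below are stand-ins, not real model internals):

import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

def traced_pipeline(value, steps):
    """Run a value through a list of named steps, logging every intermediate result."""
    for name, fn in steps:
        value = fn(value)
        logging.info("after %-10s -> %r", name, value)
    return value

# No debugger involved: the log alone shows how the end state was reached.
traced_pipeline(6.0, [
    ("normalize", lambda v: v / 10),
    ("weight",    lambda v: v * 0.73),
    ("activate",  lambda v: max(0.0, v - 0.2)),
])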

mamba said:
How easy would it be to 'predict' that given the following input 'artic ice cliff, penguins, seals, black stone temple, high quality digital painting' the AI would create the following picture

[attached image: AI-generated arctic scene with penguins, seals, and a black stone temple]

or explain and fix the highlighted penguins
You're asking two separate questions, so I'll address them in order:

Q. How easy would it be to predict that an AI would create the indicated picture?

A. It would be time-consuming for a human to predict the indicated picture, the same way it would be time-consuming for a human to predict the position of the Earth at a given time in a computer simulation of the solar system. That's not due to a lack of understanding of the system, nor is it necessarily because the system is hard to understand. That's just because computers process large numbers of explicit variables faster than human brains.

Predicting what output an AI gives for a given input is like trying to watch a bullet as it flies through the air. To observe a bullet in flight, you have to film the bullet, then slow the film down to a speed the human brain can process and use that to draw a map. To predict what output an AI gives for a given input, you have to log the steps the AI is taking, then read those steps back at a pace a human brain can process.

In the case of a sophisticated generative AI, the actual logic involved in a single use case isn't that complicated (for highly-trained computer scientists specialized in AI). The real challenge is simply tracking all the variables in a human-readable way. That's because the prompt you entered isn't the input the AI used to generate that picture. The input used is the prompt you entered, plus the entire training set used, plus every prior recorded interaction with that training set, plus (if the AI incorporates one or more random generators) whatever seed or seeds were used in the AI's RNGs.
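
A toy illustration of that last point (VOCAB and toy_generate are invented for the example and nothing like a real image model): once every input, including the seed, is pinned down, the output is fully determined, just tedious to trace.

import random

# Toy stand-in for a generative step: a fixed word list plus a seeded RNG.
# A real model also mixes in the prompt and its learned weights, but the
# reproducibility point is the same.
VOCAB = ["penguin", "seal", "ice", "cliff", "temple", "stone", "black"]

def toy_generate(seed, length=5):
    rng = random.Random(seed)
    return " ".join(rng.choice(VOCAB) for _ in range(length))

print(toy_generate(123))  # some sequence of words
print(toy_generate(123))  # identical - same seed, same output
print(toy_generate(456))  # different only because the seed changed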

Q. How hard would it be to fix the highlighted parts of the picture?

A. That depends upon the functionality programmed into the AI which generated the picture. In theory, a sufficiently-advanced AI will incorporate tools users can use to iterate their images with specified changes, allowing them to adjust for gaps or biases in the dataset. Failing that, one can improve output from an AI which has a proven ability to interpret and respond to prompts by simply using better training data.

In the case of the picture under discussion, there appears to be a gap in the AI's training set. The AI doesn't have enough metadata specific to penguins but not seals. This could be fixed on the back end by improving the training set to include more penguin and seal metadata, or it could be done by allowing users to select and combine elements from images until a desired result is achieved. Implementing either of these solutions requires a fair amount of programming knowledge and a lot of time, but the actual logic involved is fairly straightforward.

mamba said:
or in other words, we do not know, but yes 'no one knows' might have been a bit strong, even when this essentially is true for a given result and too tedious to figure out. I did not mean 'we do not have the slightest idea what is going on'...
If your claim is that it's simply impractical to determine every AI-generated output in advance, then I would agree with you. The whole reason someone programmed AI was so users could rely on AI algorithms to do millions of steps they would otherwise have to do to produce complex output they aren't trained to produce manually.
 


tomBitonti

Adventurer
One other aspect is bias. Everything has bias because no random number generator is perfect. Most dice are imperfect and do not have a perfect distribution curve after a million rolls, but they are usually close enough. But some dice are really badly weighted or have some kind of damage or defect that makes some numbers less likely to come up. By the same token, most computer RNGs are perfectly adequate for most uses, but there are cases where their bias can be exploited.

An LLM or whatever flavor of AI is based on training data that adds its own skew. Imagine an AI trained on data that never included the number 4. It might have a sentence or two of counting so it can say "2+2=four" but if asked "what's 4x2?" it would have no clue. Alternatively, "lucky number 7" could appear so often that it's the most probable result of any numeric request.

And that gets to the icky part. A random generator uses an RNG to select from lists of items assembled by a person. Generative AI is using patterns found in the training data to extrapolate a list. If you do that on some public domain work like Shakespeare, that's ok.

But most LLMs are full of internet data, ebooks, who knows what, but it's often pretty current, meaning not public domain. Maybe the lists it generates were scraped from some other list created by some person and it's just regurgitating that. Or it had articles about Shakespeare and used the words that appeared most often (which could result in references to the Globe Theater).

Personally, I look forward to an LLM based on Project Gutenberg and a visual transformer exclusively made on pre-20th century art that I can use guilt free to glitz up the games I run.

Until then, I shun it. A full fledged shunnin'.
This is different from what was presented to me re: bias. It's not about problems in random number generation. It's about bias in the data sets used to train the AI. A noted example is facial recognition failure on particular groups because those groups were not well represented in the training data. Other examples appear, in particular, when training AI for law enforcement and crime prediction.
TomB
 

tomBitonti

Adventurer
Also, folks seem to be talking past each other re: the logic of how the AI works. One part of that logic is how the AI is trained. Another part is how neural networks operate in general. Another part has to do with the operation of the trained AI when handling a particular input. This is like the difference between knowing how a neuron signals another neuron vs. understanding the emergent behavior of many neurons working together.
TomB
 

Clint_L

Hero
There is a large and developing body of research studying the ways that different generative AI models, particularly ChatGPT and ChatGPT-type AIs, exceed their design parameters in currently inexplicable ways. For example, ingenious experiments have shown generative AI that is able to solve problems that seem to require some level of language understanding rather than only the ability to predict words in sequence.

So returning to the OP, a random generator and generative AI are not even in the same conceptual ballpark...but no one completely understands what game generative AI is playing.
 

mamba

Legend
When talking about the internal logic of an AI, I wouldn't say "the code is only half the equation." I would say the code is the internal logic of the AI. Full stop. The fact that there are millions of weights being automatically adjusted doesn't make the logic any more or less complex than the code which sets the values of those weights.
yes, but to be able to understand the result, you need to know the weights; that is the whole point of why it is basically impossible to explain the result.

Of course you understand the code; that just does not get you there.

Incidentally, you don't need to debug a program to determine how it arrives at a given end state. You just need it to create a log of the individual steps it's taking. Creating that kind of log can be useful during the debugging process, but you don't have to be engaged in debugging to create that kind of log.
fair enough, makes no difference to my point

Q. How hard would it be to fix the highlighted parts of the picture?

A. That depends upon the functionality programmed into the AI which generated the picture. In theory, a sufficiently-advanced AI will incorporate tools users can use to iterate their images with specified changes, allowing them to adjust for gaps or biases in the dataset.
I meant fix the AI so it does not happen in the first place, and yes, not easy… much more involved than fixing a bug in the code

If your claim is that it's simply impractical to determine every AI-generated output in advance, then I would agree with you.
that, and to explain after the fact why it is what it is. Not literally impossible but much too much effort to be feasible
 
