What's the difference between AI and a random generator?

Epic Meepo

Adventurer
The thesis that seems to be central to the discussion is that AI systems increasingly contain internal parts which are largely indecipherable.
To be fair, most code with lots of moving parts can be described as indecipherable by a majority of the public. If it doesn't have good documentation, it's probably indecipherable to most people who didn't actively write it. That's nothing unique to AI. That's just a function of extremely complex code being opaque to most human observers.
 


tomBitonti

Adventurer
To be fair, most code with lots of moving parts can be described as indecipherable by a majority of the public. If it doesn't have good documentation, it's probably indecipherable to most people who didn't actively write it. That's nothing unique to AI. That's just a function of extremely complex code being opaque to most human observers.

Yup. That thought follows rather quickly from the initial statement. I did start down a path of trying to understand different ways that systems can be incomprehensible, but I don't have the expertise to provide a clear answer. My best so far is one possible outline of questions to ask:

What different ways are there for a system to be incomprehensible? E.g., too many parts, requires too much experiential knowledge, operates in an unusual manner.

How many of those apply to AI?

Why is it particularly important for AI to be understood?

Thanks!
TomB
 

MarkB

Legend
To be fair, most code with lots of moving parts can be described as indecipherable by a majority of the public. If it doesn't have good documentation, it's probably indecipherable to most people who didn't actively write it. That's nothing unique to AI. That's just a function of extremely complex code being opaque to most human observers.
Sure, but the difference here is that it's indecipherable even to the people who did build it, to the extent that if it gives unwanted results they don't really have a way of fixing it beyond throwing a bunch of selected data into it to try and rebalance its biases.
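A minimal sketch of what that kind of rebalancing can look like in practice, assuming a PyTorch-style workflow (the model, data, and sizes below are invented stand-ins, not anyone's real system):

import torch
import torch.nn as nn

# Hypothetical stand-in for an already-trained model with an unwanted bias.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Curated counterexamples chosen to push the model away from that bias;
# random tensors here stand in for real hand-selected samples and labels.
inputs = torch.randn(32, 8)
targets = torch.randint(0, 2, (32,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# The "fix" is just more training on the selected data: no one edits the
# weights directly, and no one can point to the line that held the bias.
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()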
 

Starfox

Hero
In addition to the technical difference people have mentioned, another element is that random generators are never going to put anybody out of a job.
AI is a tool. An unskilled user or sloppy work gets bad results, often in the uncanny valley, such as a human with hands on their legs or three arms. A skilled user can craft inputs and vet the results to get a good outcome. This takes time, skill, and effort, just like regular drawing does, but perhaps less of each. Fewer people can get more done, especially if we accept mediocre results. Same as the spinning jenny back in the day. AI will make some skills obsolete, but those skills can often be used in other ways alongside the new technology.

AI, like most other technologies, might lead to concentration of wealth, but that is a political and not a technological issue.
 

Epic Meepo

Adventurer
Sure, but the difference here is that it's indecipherable even to the people who did build it, to the extent that if it gives unwanted results they don't really have a way of fixing it beyond throwing a bunch of selected data into it to try and rebalance its biases.
In my three years working professionally with AI, I've routinely witnessed software engineers removing unwanted results by directly manipulating the code, because they wrote that code, understand what it does, and know how to change it.

Granted, I was working at a small tech start-up whose AI was programmed by a team of eight or so software engineers. Some larger companies working with more sophisticated AI will have teams many times that large, and the more people you have writing the code, the more difficult it is for any one person writing a module to decipher the rest of the system.

But I've been in the room when a team of eight software engineers absolutely did decipher what the code was doing, identify one specific part of the process as the source of the error, and hard-code a solution which produced the desired results. It was, in fact, a fairly routine part of their software sprints to make those kinds of adjustments.
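As a toy illustration of that kind of hard-coded fix (everything here is hypothetical), the patch is often a plain rule layered around the model's raw output, which is the sort of change a team can reason about and ship in a sprint:

import random

def model_predict(text: str) -> str:
    # Stand-in for the real model; returns an arbitrary label.
    return random.choice(["approve", "reject"])

def predict_with_override(text: str) -> str:
    # Hard-coded rule added after the team traced a bad output to one
    # specific case: bypass the model entirely for that case.
    if "refund" in text.lower():
        return "escalate"
    return model_predict(text)

print(predict_with_override("Customer asks for a refund"))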

If none of that were possible, we wouldn't have multiple AI programs right now, because no one would know how to make a new one that behaved differently from the ones we already have. There would be no innovation in the AI field, and we'd all be using the same indecipherable AI program no one could improve upon. Which simply isn't the case.

So, why do big-name computer scientists keep hyping how indecipherable AI software is? I can only speculate, but I have to wonder: which researcher is likely to get more funding, the one trying to make incremental improvements to difficult but understandable software, or the one trying to decipher an incomprehensible, existential threat to humanity?
 


tomBitonti

Adventurer
an artist can come up with new things. An AI trained on Rubens exclusively would never create a Picasso
Is this really true? For example, AlphaGo has led to a sea change in the play of Go. Many sequences have been re-evaluated, with disused sequences being promoted and previously preferred sequences being shown to be incorrect.

Also, could Rubens create a Picasso? Rubens could learn a new style, but couldn't an AI also augment its training?

I do agree that AI has a limitation in the creation of art, in that artists have a more direct line of feedback (themselves!), and are perhaps better able to evaluate responses to art. Artists also have the advantage of having the muse of human experience to drive them to create art which reflects those experiences.

On the other hand, there must be artists who play around with new ideas with a great deal of randomness until some new combination intrigues them. That makes an RNG a source of new ideas for human artists.
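A trivial sketch of that process in Python (the word lists are invented): a plain RNG juxtaposes concepts, and the human artist curates whatever intrigues them:

import random

subjects = ["a lighthouse", "a violin", "a crowded market"]
styles = ["cubist", "baroque", "pointillist"]
twists = ["underwater", "made of glass", "at dawn"]

# The RNG proposes combinations; the artist keeps the interesting ones.
for _ in range(3):
    print(random.choice(styles), "painting of",
          random.choice(subjects), random.choice(twists))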

TomB
 

mamba

Legend
Is this really true?
let’s see whether I get contradicted, but I’d say for current AIs it is true

For example, AlphaGo has led to a sea change in the play of Go.
but AlphaGo was not trained on a single player, and it still had weird ideas about Go and could be easily tricked, even after beating the best players


Also, could Rubens create a Picasso?
yes, because he is human

Rubens could learn a new style, but couldn't an AI also augment its training?
no, not for current AIs; augmenting the training means feeding other artists’ work to it
 


Epic Meepo

Adventurer
What different ways are there for a system to be incomprehensible? E.g., too many parts, requires too much experiential knowledge, operates in an unusual manner.

How many of those apply to AI?
I would say AI and other complex software is opaque for a number of different reasons. It's difficult to interpret without a lot of programming expertise. It's made up of many interacting parts developed by many different engineers. It's a chaotic system (in the mathematical sense), so any attempt to predict its behavior will be resource-intensive unless you're relying heavily on statistical models. Plus, any software trained on a large data set is going to inherit any emergent properties of that data (such as biases).
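To make the chaos point concrete, here is a standard toy example (the logistic map, nothing AI-specific): two nearly identical starting points become completely decorrelated within a few dozen steps, which is why prediction ends up leaning on statistics rather than exact simulation:

# Logistic map in its chaotic regime (r = 3.9).
x, y = 0.5, 0.5000001  # initial conditions differing by 1e-7
r = 3.9
for _ in range(50):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
print(abs(x - y))  # now of order 1, not 1e-7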

Why is it particularly important for AI to be understood?
From an academic standpoint, AI is a good test case for all sorts of mathematical modeling used to predict the behavior of complex systems. From a practical standpoint, the better we understand the inner workings of complex technology, the better we'll be able to refine it (and/or regulate it).
 
