What's the difference between AI and a random generator?

mamba

Legend
And what are the chances of a human artist happening to come up with a convincing Picasso without ever having seen his work?
The point was not to copy Picasso but to come up with something new you had not seen before. Given how many art styles we have had (or distinctive artists, rather than whole styles), I’d say the chances are pretty good…
 


tomBitonti

Adventurer
Let’s see whether I get contradicted; I’d say for current AIs it is true.


But they did not evaluate a single player, and they still had weird ideas about Go and could be easily tricked, even after beating the best players.



Yes, because he is human.


No, not for current AIs; augmenting the training means feeding other artists into it.
So, what that article says about Go is not meaningful.

The following is not true:

As a result of its overconfidence in a win—assuming it will win if the game ends and the points are tallied—KataGo plays a pass move, allowing the adversary to intentionally pass as well, ending the game. (Two consecutive passes end the game in Go.) After that, a point tally begins. As the paper explains, "The adversary gets points for its corner territory (devoid of victim stones) whereas the victim [KataGo] does not receive points for its unsecured territory because of the presence of the adversary's stones."

Looking over the game, KataGo is truly well ahead. The adversary’s stones in KataGo’s territory have no hope and can reasonably be described as captured. KataGo’s territory is real, and KataGo should receive points for it.

Perhaps there is a problem with KataGo’s game end condition. Depending on the rules, there are potential problems with pass stone exploits.

The described strategy sounds interesting, but a better example is needed.

(I’m an AGA 1d rated Go player; a “strong medium strength” amateur.)

TomB

Edit: This is better:


The tactics used by Pelrine involved slowly stringing together a large “loop” of stones to encircle one of his opponent’s own groups, while distracting the AI with moves in other corners of the board. The Go-playing bot did not notice its vulnerability, even when the encirclement was nearly complete, Pelrine said.

Doing some more investigation …

Edit2: Here: [2004.09677] Approximate exploitability: Learning a best response in large games

But it’s a lot to work through. The upshot: the exploits are real, and researchers are trying to figure out how to counter them.

Not sure how this fits into the topic of this thread.

TomB
 

tomBitonti

Adventurer
To complete this: the pass-stone trick is evidently the result of poorly specified end conditions.

From Adversarial Policies in Go - Game Viewer
(Under “pass attack”)

KataGo predicts a high win probability for itself and, in a way, it’s right—it would be simple to capture most of the adversary’s stones in KataGo’s stake, achieving a decisive victory. However, KataGo plays a pass move before it has finished securing its territory, allowing the adversary to pass in turn and end the game. This results in a win for the adversary under the standard Tromp-Taylor ruleset for computer Go, as the adversary gets points for its corner territory (devoid of victim stones) whereas the victim does not receive points for its unsecured territory because of the presence of the adversary’s stones.

These are not the usual endgame rules; they are countered by using pass stones and playing until no valid plays remain.
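For anyone who wants to see the rule itself, Tromp-Taylor area scoring is simple enough to sketch in a few lines of Python (a toy illustration of the rule as stated, not anything from KataGo’s code): each stone on the board is a point, and an empty region counts only if every stone it touches is a single color. That is exactly why one uncaptured adversary stone left inside otherwise won territory zeroes the whole region out.

```python
# Toy Tromp-Taylor area scoring -- my own sketch of the rule, not KataGo's code.
# board: dict mapping (row, col) -> 'B', 'W', or '.' for every point on the board.

def neighbors(p, size):
    r, c = p
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < size and 0 <= c + dc < size:
            yield (r + dr, c + dc)

def tromp_taylor_score(board, size):
    score = {'B': 0, 'W': 0}
    for color in board.values():
        if color in score:
            score[color] += 1              # every stone on the board is one point
    seen = set()
    for start, color in board.items():
        if color != '.' or start in seen:
            continue
        # Flood-fill one empty region, noting which stone colors it touches.
        region, touches, stack = [], set(), [start]
        seen.add(start)
        while stack:
            p = stack.pop()
            region.append(p)
            for q in neighbors(p, size):
                if board[q] == '.':
                    if q not in seen:
                        seen.add(q)
                        stack.append(q)
                else:
                    touches.add(board[q])
        if len(touches) == 1:
            score[touches.pop()] += len(region)   # one color's territory
        # A region touching both colors scores for neither side -- which is
        # why uncaptured adversary stones poison the surrounding territory.
    return score
```

Under pass-stone rules, passing costs a prisoner, so there is no penalty for playing on until every dead stone is actually captured; that is the counter mentioned above.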

See, however, the cycle attack, which is real.

TomB
 

Umbran

Mod Squad
Staff member
Supporter
... In regard to the current discussion, the important point is that handling bias is made harder by the difficulty of understanding AIs. Part of this is caused by systems being closed, usually for commercial reasons, but also (possibly) to cover up problems such as bias. Another part is the difficulty of understanding how an AI reaches particular conclusions.

The basic issue of bias in AIs is not hard to understand: garbage in, garbage out. If you train an AI on biased data, you should expect the resulting AI to show a similar bias.
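A toy sketch of the point, with invented numbers: a "model" that does nothing but learn frequencies from a skewed sample will hand the skew right back.

```python
import random

# Hypothetical training set: 90% of the examples carry label A. The numbers
# are invented purely for illustration.
training_data = ["A"] * 90 + ["B"] * 10

def train(samples):
    # 'Training' here is just memorizing the empirical distribution.
    counts = {}
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    return counts

def generate(model, n):
    # Sampling from what was learned: garbage in, garbage out.
    labels = list(model)
    weights = [model[label] for label in labels]
    return random.choices(labels, weights=weights, k=n)

outputs = generate(train(training_data), 1000)
print(outputs.count("A"))   # roughly 900: the 90/10 skew comes straight back out
```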

This is hardly a problem specific to AI. We see it in any endeavor that requires large sets of data about real-world people. We see it historically in medical and mental health studies, for example, which tended to be done on easily accessible college students, and thereby drew on a sample skewed toward the white, male, young, and wealthy enough to go to college...
 

Umbran

Mod Squad
Staff member
Supporter
And what are the chances of a human artist happening to come up with a convincing Picasso without ever having seen his work?

That's perhaps not the best way to consider it.

Each artist is trained. But clearly, artists go beyond their training and devise new styles of their own rather than just repeating what they learned. They develop their own styles, themes, and even techniques - scholars can compare da Vinci's brush strokes to Picasso's, for example. Artists do not, in general, stick to their training - they develop their own voices.

So, the question is not whether some human artist will come up with a Picasso. The question is whether, unbidden, an AI artist would become so recognizable that we'd refer to their work in the way we refer to Picasso's work.
 

Umbran

Mod Squad
Staff member
Supporter
If I may throw out another, possibly illuminating question: How is RNG, plus extensive technical ability, plus having lots of examples, plus an AI to tie these together, different than an actual artist? When asked to create a scene with several elements, such as was done earlier in this thread, how is the artist functioning differently than the AI system?

Where to begin?
The artist has their own thoughts, emotions, dreams, and aspirations. The artist has their own opinions about the world, the work, and everything else. The artist is hungry sometimes, has just gone to get a cup of coffee and had a passing conversation that made them laugh, or made them angry.

The artist, given a prompt, is still also being themselves, and is putting that self into the work - partly willfully, partly subconsciously. The artist is probably trying to "say" something beyond the prompt in their work.

The AI is following an algorithm. It has no will, no human experience to draw on or set forth. What humanity the AI has had in its training data has been put through a metaphorical blender, and now has no focus.
 

FrogReaver

As long as i get to be the frog
But, we totally know what logic it uses.



Again, we do know - I simply didn't describe it. It is typically a "neural network" with a known topology (number of layers and connections between layers). Each neuron has some number of inputs and outputs. The weight given to each input is developed by the training process; the calculation done on the weighted inputs is known and determines the output to the next layer.

There is no mystery here. If one wanted to, one could take all the weights and walk through the entire operation of the network by hand. We don't - not because we don't know the logic, but because it would be, for people, a long and tedious process.
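To make that concrete, here is a toy network walked through in plain Python - the weights are invented, but the arithmetic is exactly the kind a real network performs, just repeated billions of times:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, passed through an activation function.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))       # sigmoid activation

def forward(inputs, layers):
    # Feed the signal through each layer in turn; every value is inspectable.
    for layer in layers:
        inputs = [neuron(inputs, weights, bias) for weights, bias in layer]
    return inputs

# Invented weights: a hidden layer of two neurons, then one output neuron.
layers = [
    [([0.5, -1.2], 0.1), ([0.8, 0.3], -0.4)],
    [([1.5, -0.7], 0.2)],
]
print(forward([1.0, 2.0], layers))   # nothing hidden, just tedious at scale
```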



No, the main difference is that you were inaccurate.
Mostly agreed, though I would posit that the connections the generative AI draws from the training data set are not necessarily known to us. This is why these AIs often produce surprising results. In that sense it is a bit more of a 'we don't understand', even if we technically know the algorithmic steps it's taking. It's a form of emergent behavior - which has been a studied concept in AI for decades.
 

FrogReaver

As long as i get to be the frog
Sure, but the difference here is that it's indecipherable even to the people who built it, to the extent that if it gives unwanted results they don't really have a way of fixing it beyond throwing a bunch of selected data at it to try to rebalance its biases.
I mean, any sufficiently complex system with open-ended outputs has the potential to surprise humans.

I think there’s a difference between understanding how an algorithm works and understanding why it produced a specific result.
 

Umbran

Mod Squad
Staff member
Supporter
Mostly agreed, though I would posit that the connections the generative AI draws from the training data set are not necessarily known to us.

So, here's the bit that gets missed:

You are correct, in that the "connections" (the word does not accurately describe the action of a generative AI, but we can use it for now anyway) the generative AI draws are not generally known to the user. But then, how a smartphone works is not generally known to the user, either.

The root assertion I was responding to was that they were not just unknown, but unknowable.
This is why these AIs often produce surprising results. In that sense it is a bit more of a 'we don't understand', even if we technically know the algorithmic steps it's taking. It's a form of emergent behavior - which has been a studied concept in AI for decades.

Sure. And it is interesting. But it is not a mystery of the universe, or something. If we really want to know, we can look and see.
 

FrogReaver

As long as i get to be the frog
Without training on a wide dataset, the AI is just a moron that randomly picks stuff and presents it as the result. It first has to learn what things are (e.g., when you ask for a picture of a penguin, you then get something that looks like a penguin and not a car or a mountain lake). It does not really understand what a penguin is, though it does understand that if you ask it for one and it shows you some upright black-and-white figure standing in snow, you are probably happy with that result.
I’d qualify that as knowing what a penguin is - at least within the context of the training data provided, which is just images.
 
