What would AIs call themselves?

Man, I can't believe you guys missed the obvious. AIs would call themselves "101110001010111010101010001110001".

Kidding aside, I suspect that AIs would communicate in a vastly different manner than humans do. Imagine telling someone about an event by copying your entire memory of the inputs experienced during that event, and sending that memory to the other party to examine. I think something like that would be more likely than summarizing it in words.

So I think that, when talking to humans, AIs would use whatever is relatively convenient for the human, such as "AI".
 


Dr. Strangemonkey said:
I always liked the term Minds from the Culture novels, though they used Drones for smaller than ship sized things.

You beat me to it, but I think Minds (from the Culture novels) is an excellent way for powerful AIs to name themselves.
 

When it comes to thinking about the development of AIs, it may well be the algorithms and program structures we design, rather than the compilers or languages, that are most likely to lead to unexpected breakthroughs.

Now, I'm not going to talk about AI, but about a fascinating example of an interesting kind of problem-solving algorithm - Genetic Algorithms. The reason I want to mention them is a fabulous experiment performed back in August 2002 with rather unexpected results.

For those who may not have heard of Genetic Algorithms, the common approach is to build a software simulator and a set of 'tests' which can be run through it, with the overall result 'scored' by a weighted scoring mechanism. You start by generating a whole bunch of random tests, run them through the simulator, score them, throw away the worst ones and 'breed' the best ones (with a small chance of random mutation occurring too). You then run your new set of tests, and so on, for 30-50 or more 'generations'.
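As a rough illustration, here is a minimal sketch of that loop in Python. The simulator, the scoring weights and the test encoding are all placeholders of my own (a 'test' is just a list of numbers and the score is just their sum); a real application would plug in its own simulator and fitness function.

Code:
import random

POP_SIZE, GENERATIONS, GENES, MUTATION_RATE = 100, 50, 20, 0.05

def score(test):
    # Stand-in for "run the test through the simulator and apply the
    # weighted scoring" - here we simply reward larger gene values.
    return sum(test)

def breed(a, b):
    # Mix the parents' genes, with a small chance of random mutation.
    child = [random.choice(pair) for pair in zip(a, b)]
    return [random.random() if random.random() < MUTATION_RATE else g
            for g in child]

population = [[random.random() for _ in range(GENES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    ranked = sorted(population, key=score, reverse=True)
    survivors = ranked[:POP_SIZE // 2]   # throw away the bottom half
    population = survivors + [breed(random.choice(survivors),
                                    random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print(max(score(t) for t in population))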

The first example I ever heard of this (in the early 90's) was about solving the problem of launching a rocket, navigating through space and landing on a target planet (with some other planets in the simulator too). The goal was to reach the target planet with the minimum use of fuel. By the time they had finished, the algorithm had zeroed in on a route which not only got there in good time and with lots of fuel left over, but had also stumbled upon gravity slingshots to speed it up and slow it down!


The second and even more interesting example can be found on the New Scientist website here http://www.newscientist.com/article.ns?id=dn2732

I'll let them put it in their own words:

New Scientist said:
A self-organising electronic circuit has stunned engineers by turning itself into a radio receiver.

This accidental reinvention of the radio followed an experiment to see if an automated design process, that uses an evolutionary computer program, could be used to "breed" an electronic circuit called an oscillator. An oscillator produces a repetitive electronic signal, usually in the form of a sine wave.

Paul Layzell and Jon Bird at the University of Sussex in Brighton applied the program to a simple arrangement of transistors and found that an oscillating output did indeed evolve.

But when they looked more closely they found that, despite producing an oscillating signal, the circuit itself was not actually an oscillator. Instead, it was behaving more like a radio receiver, picking up a signal from a nearby computer and delivering it as an output.

In essence, the evolving circuit had cheated, relaying oscillations generated elsewhere, rather than generating its own.

Gene mixing

Layzell and Bird were using the software to control the connections between 10 transistors plugged into a circuit board that was fitted with programmable switches. The switches made it possible to connect the transistors differently.

Treating each switch as analogous to a gene allowed new circuits to evolve. Those that oscillated best were allowed to survive to a next generation. These "fittest" candidates were then mated by mixing their genes together, or mutated by making random changes to them.

After several thousand generations you end up with a clear winner, says Layzell. But precisely why the winner was a radio still mystifies them.

To pick up a radio signal you need other elements such as an antenna. After exhaustive testing they found that a long track in the circuit board had functioned as the antenna. But how the circuit "figured out" that this would work is not known.

"There's probably one sudden key mutation that enabled radio frequencies to be picked up," says Bird.

I love the way that this particular genetic algorithm was designed to produce a certain 'ability', and it did produce the expected output but via utterly unexpected means.
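Just to make the 'genes' concrete for myself: each programmable switch becomes one bit of a genome, mating is mixing those bits, and mutation is flipping one. The ten-switch layout, single-point crossover and mutation rate below are my own guesses at how such a setup might look, not the actual Sussex rig.

Code:
import random

SWITCHES = 10   # one "gene" per programmable switch (a guess at the layout)

def mate(a, b):
    # Single-point crossover: the front of one parent, the back of the other.
    cut = random.randrange(1, SWITCHES)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Flip a switch setting with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

parent_a = [random.randint(0, 1) for _ in range(SWITCHES)]
parent_b = [random.randint(0, 1) for _ in range(SWITCHES)]
child = mutate(mate(parent_a, parent_b))
print(child)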

In Science Fiction, I like it when (as so rarely happens) AIs are not treated as 'humans with metallic voices', but as utterly alien in their approach to problems.

Cheers
 


Celebrim said:
Except, we are starting to realize that that is actually how human intelligence works. We've come to realize that people don't learn to walk - they are born knowing how to walk. They just wait for the hardware to grow into the algorithm, and then they do a big compile and suddenly they are off and running. I've had the opportunity to actually watch children do this and it is (from my vantage as a programmer) just phenomenal. There have been recent breakthroughs in cracking how this is done.
And how do you explain bug robots that learn how to walk? The robot is constructed with a bunch of neural nets located close to the joints of the legs. The robot is given the desire to move (toward a light in a visual sensor) and after a few moments, it figures out how to pick itself up and coordinate walking via 6 double-jointed legs, and nowhere in the programming is there a "walk" routine.

I know when my child learned mobility there were things he understood but lacked the motor power to perform, i.e. your "grow into it" model. But there were also "tricks" that he did not learn until he saw another child perform them. Then something clicked, he gained a new motor response, and suddenly maneuvering on staircases just worked.

What I'm trying to say is that limiting what can emerge is not only possible, but it is probably impossible to not limit what can emerge, because what we think of as 'strong intelligence' probably doesn't really exist. What does exist is a collection of algorithms for soft intelligence which are sufficiently broad and applicable that, working in parallel, they can simulate hard intelligence. But, without the algorithm for that class of functionality, it's virtually impossible for it to emerge.
Then how does the bug robot work? There are servos at each joint acting as opposing muscles with no inputs other than the tension between two points and a loopback to the light sensor. Yet somehow the bug robot gains the ability to walk, follow a light, climb over obstacles. There is no hard wired routine being executed that knows about walking. It's just a neural net that can feed an analog value to a motor attached to a spring in order to change the tension, i.e. contract or compress the "muscle".
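For what it's worth, here's a cartoon of one joint's control loop as I picture it - a single neuron mapping spring tension and the light reading to a motor command. The weights are hand-set here purely for illustration; in the real robot they would be whatever the neural net settles on, and the actual inputs and wiring are my assumptions.

Code:
import math

def joint_output(tension, light, w_tension=-0.8, w_light=1.2, bias=0.1):
    # One artificial neuron: weighted sum of the two inputs, squashed to a
    # motor command between -1 (relax the spring) and +1 (tighten it).
    return math.tanh(w_tension * tension + w_light * light + bias)

print(joint_output(tension=0.4, light=0.9))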

I put forward that this is just another example of refusing to view AIs as anything other than people. It is intuitive to you that emergent AIs will be naive and childlike because that's what emerging human personalities are like. But your human intuition is a very poor guide to non-human things, in the same way that your human intuition that the sun revolves around the earth (anyone can go out and observe it) is a poor basis for understanding things that are radically outside of evolved human experience (the very big universe, for example).
I'm using naive and childlike in the sense of "inexperienced with the ways of the world around it." No matter how many data libraries your AI has access to, practical experience with manipulating higher-level concepts is required for deep thinking. Until the AI practices thinking for a while, its thoughts will be shallow, naive and childlike. This has nothing to do with "how humans develop". It has to do with how thinking develops.

I really don't think a set of C++ (or whatever) classes can be developed that will be capable of thought. The pure hard AI of modeling everything and creating an intelligent model manipulator is NP-hard, and the size of the domain makes such a calculation impractical on the same scale as my 3 bugs from intelligence example.

The more interesting question is, "How do you explain broccoli to a child?" And the answer is, the child already understands broccoli, or rather it's already hardwired to recognize the trait of having broccoli-ness and to associate a certain sort of sound with things that have that trait. So explaining broccoli to a child is easy.
This is evolution, and that wiring is the result of wetware neural networks (biological) that needed the ability to identify "food" versus "not-food". Mutant wetware without certain DNA sequences dies off when it starts ingesting non-food faster than its hardware can evolve to turn it into food. (If you prefer the theory of a programmer-deity we'll have to move this discussion elsewhere. :) )

Put another way: If you have to program every recognition and comprehension into the AI, then how can it learn? What you need to do is provide a substrate for Intelligence and provide stimuli that induce thinking. This is what I meant by a bootstrapping process. In my view, a computer based AI will not add numbers by using the microprocessor's add functions. It will manipulate numeric concepts the same way we do to achieve a result:
Code:
 11  <- carry the 1s
 876
+345
----
1221
And true AI will be capable of making mistakes when doing this because of distractions and emotional states that interfere with its ability to think clearly. :) (Well, that might be far too non-artificial.)
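To make that literal (just my own toy illustration of the idea): the sum is done the way a person does it on paper, rightmost digits first with a carry, rather than by handing the whole thing to the processor's add instruction.

Code:
def column_add(a, b):
    # Add two non-negative integers the way a person does on paper:
    # rightmost digits first, carrying the 1s.
    digits_a = [int(d) for d in reversed(str(a))]
    digits_b = [int(d) for d in reversed(str(b))]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        total = da + db + carry
        result.append(total % 10)
        carry = total // 10
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(column_add(876, 345))   # 1221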
 

Nifft said:
Something can emerge which is unable to modify itself? How does it "emerge"? How does it become different from what it was before emerging?
There are two different kinds of "modifying itself". The kind you described I understood as being able to move opcodes within its running structure to create new execution paths that never existed before. I reject that as being necessary.

The other kind of self-modifying involves manipulating values in a lookup table that weighs the "goodness" of a code pathway based on how well it's worked before, i.e. a neural network (or similar construct) where the numbers in the cells are addressable by the neural network. The underlying code is just a neural network library calling each subnet and passing the results on to the next subnet, and that code is forever fixed. Each subnet is independent and responsible for a small detail in the overall "intelligence". Connected correctly and bootstrapped correctly, an intelligence would emerge and begin thinking, pondering, and exploring the world outside itself.
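A toy version of what I mean, with the "lookup table" reduced to a dictionary of pathway weights (the two pathways, the update rule and the rate are purely illustrative): the fixed code only reads and nudges weights, and any change in behaviour comes from which pathway the weights end up favouring.

Code:
weights = {"pathway_a": 0.5, "pathway_b": 0.5}

def choose_pathway():
    # Pick whichever pathway has the best "goodness" score so far.
    return max(weights, key=weights.get)

def reinforce(pathway, worked, rate=0.1):
    # Nudge the pathway's weight toward 1 if it worked, toward 0 if it didn't.
    target = 1.0 if worked else 0.0
    weights[pathway] += rate * (target - weights[pathway])

chosen = choose_pathway()
reinforce(chosen, worked=True)
print(weights)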

Human intelligence is bootstrapped with several billion data points. You then need to add vast amounts of proteins and carbohydrates in the proper order and quantity to grow the substrate I refer to. After a few years the substrate gets to the point where that intelligence is apparent and somewhat communicative. Whether or not a machine substrate will attain "visible" intelligence faster than the biological one seems "likely", but I don't want to assume.
 

Jürgen Hubert said:
The "biologically challenged". ;)
LOL!

Regardless of whether anyone believes AI will ever truly exist, a few possible names for it could be:

Constructed Intelligence - shortened to ConstInt
Logic beings
Sentient Intelligent Processors or SIPs
 

jmucchiello said:
And how do you explain bug robots that learn how to walk?

There are so many different people working on bug robots and so many different approaches that the question is simply too vague to answer. Are you referring to Tilden's analog bots, traditional motor control bots, natural gravity walking, or what?

The briefest answer is that each of the methods gives the bot a means of approaching the minimal-energy-usage algorithm for walking in its own manner. That algorithm is either built in or 'evolves' on a neural net or an energy-sensitive analog circuit.

I'm not sure that your criticism actually applies to what I'm saying. In fact, you seem to mostly be in agreement with me, especially in your next post when you respond to Nifft.
 

jmucchiello said:
There are two different kinds of "modifying itself". The kind you described I understood as being able to move opcodes within its running structure to create new execution paths that never existed before. [...] The other kind of self-modifying involves manipulating values in a lookup table that weighs the "goodness" of a code pathway based on how well its worked before, i.e. a neural network (or similar construct) where the numbers in the cells are addressable by the neural network. The underlying code is just a neural network library calling each subnet and passing the results on to the next subnet and that code is forever fixed. Each subnet is independent and responsible for a small detail in the overall "intelligence". Connected correctly and bootstrapped correctly, an intelligence would emerge and begin thinking, pondering, and exploring the world outside itself.
It's the same thing, though. "Tuning" a neural net link to zero weight is computationally equivalent to removing one factor from an equation (in "opcodes", or at any other layer of abstraction).

I don't see why you'd care about the (artificial) separation of data from code. I reject the false dichotomy of opcodes vs. variables. There are a lot of layers above "opcodes" which are execution paths, and which are also sensible to modify programmatically.
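To make that concrete (a toy two-input example of my own, nothing more): tune one weight to zero and you have removed that factor from the function, whatever layer of abstraction you say you did it at.

Code:
def with_weight(x1, x2, w1=0.7, w2=0.0):
    return w1 * x1 + w2 * x2   # w2 "tuned" to zero by learning

def with_term_removed(x1, x2, w1=0.7):
    return w1 * x1             # the x2 factor edited out of the code

print(with_weight(3, 5) == with_term_removed(3, 5))   # True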

Cheers, -- N

PS: This conversation reminds me of an old programmer's maxim: "If your only tool is a hammer, all problems look like nails. If your only hammer turns out to be C++, all nails turn out to be your thumb."
 

Celebrim said:
I'm not sure that your criticism actually applies to what I'm saying. In fact, you seem to mostly be in agreement with me, especially in your next post when you respond to Nifft.
At this point I think we've converged in agreement from diverse definitions far more than from diverse opinions.

The only thing I would reject completely is the concept that the AI in any way has access to its own thought process any more than you or I do. The AI can't decide, "Hey, I need to rewrite my 'blue' recognition algorithm." The "self-modifying" nature of its program (however it is accomplished) is opaque to it, in the same way we can't explain how we've "changed our minds".
 
