Celebrim said:
Except, we are starting to realize that this is actually how human intelligence works. We've come to realize that people don't learn to walk - they are born knowing how to walk. They just wait for the hardware to grow into the algorithm, and then they do a big compile and suddenly they are off and running. I've had the opportunity to actually watch children do this, and it is (from my vantage as a programmer) just phenomenal. There have been recent breakthroughs in cracking how this is done.
And how do you explain bug robots that learn how to walk? The robot is constructed with a bunch of neural nets located close to the joints of the legs. The robot is given the desire to move (toward a light in a visual sensor), and after a few moments it figures out how to pick itself up and coordinate walking via six double-jointed legs, and nowhere in the programming is there a "walk" routine.
I know that when my child learned mobility, there were things he understood but lacked the motor power to perform, i.e. your "grow into it" model. But there were also "tricks" that he did not learn until he saw another child perform them. Then something clicked, he gained a new motor response, and suddenly maneuvering on staircases just worked.
Celebrim said:
What I'm trying to say is that limiting what can emerge is not only possible, but it is probably impossible to not limit what can emerge, because what we think of as 'strong intelligence' probably doesn't really exist. What does exist is a collection of algorithms for soft intelligence which are sufficiently broad and applicable that, working in parallel, they can simulate hard intelligence. But without the algorithm for that class of functionality, it's virtually impossible for it to emerge.
Then how does the bug robot work? There are servos at each joint acting as opposing muscles, with no inputs other than the tension between two points and a loopback to the light sensor. Yet somehow the bug robot gains the ability to walk, follow a light, and climb over obstacles. There is no hard-wired routine being executed that knows about walking. It's just a neural net that can feed an analog value to a motor attached to a spring in order to change the tension, i.e. contract or compress the "muscle".
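To make that concrete, here is a toy sketch of the kind of controller I'm describing - my own illustration in C++ (the language I keep invoking), with completely made-up dynamics, not anything taken from the actual robot. One "joint" is driven by three weights and a hill-climbing rule: perturb the weights, and keep the perturbation only if the light gets brighter. Nothing in it knows about walking; it only sees tension, light, and a motor value.
Code:
// Hypothetical single-joint controller: inputs are spring tension and a light
// reading; output is an analog motor value. Weights adapt by trial and error.
#include <cstdio>
#include <cmath>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 0.1);

    double w[3] = {0.0, 0.0, 0.0}; // weights: tension, light, bias

    double tension = 0.0;   // spring tension between the two attachment points
    double position = 0.0;  // crude 1-D "distance travelled" toward the light
    double best = -1e9;     // best light intensity reached so far

    for (int step = 0; step < 2000; ++step) {
        // Randomly perturb the weights for this trial.
        double trial[3];
        for (int i = 0; i < 3; ++i) trial[i] = w[i] + noise(rng);

        // Simulate a short burst of movement with the trial weights.
        double t = tension, p = position;
        for (int k = 0; k < 20; ++k) {
            double light = 1.0 / (1.0 + std::fabs(10.0 - p)); // brighter as p nears 10
            double motor = std::tanh(trial[0] * t + trial[1] * light + trial[2]);
            t += motor * 0.1;   // motor winds or unwinds the spring
            p += t * 0.05;      // tension changes translate into motion
        }
        double light = 1.0 / (1.0 + std::fabs(10.0 - p));

        // Keep the perturbed weights only if the light got brighter.
        if (light > best) {
            best = light;
            for (int i = 0; i < 3; ++i) w[i] = trial[i];
            tension = t;
            position = p;
        }
    }
    std::printf("final position %.2f (light source at 10.0)\n", position);
    return 0;
}
Run it and the "leg" blindly stumbles its way toward the light; there is no walk routine anywhere, just a reward for getting closer.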
Celebrim said:
I put forward that this is just another example of refusing to view AIs as anything other than people. It is intuitive to you that emergent AIs will be naive and childlike because that's what emerging human personalities are like. But your human intuition is a very poor guide to non-human things, in the same way that your human intuition that the sun revolves around the earth (anyone can go out and observe it) is a poor basis for understanding things that are radically outside of evolved human experience (the very big universe, for example).
I'm using naive and childlike in the sense of "inexperienced with the ways of the world around it." No matter how many data libraries your AI has access to, practical experience with manipulating higher-level concepts is required for deep thinking. Until the AI practices thinking for a while, its thoughts will be shallow, naive, and childlike. This has nothing to do with "how humans develop." It has to do with how thinking develops.
I really don't think a set of C++ (or whatever) classes can be developed that will be capable of thought. The pure hard-AI approach of modeling everything and creating an intelligent model manipulator is NP-hard, and the size of the domain makes such a calculation impractical, on the same scale as my "3 bugs from intelligence" example.
Celebrim said:
The more interesting question is, "How do you explain broccoli to a child?" And the answer is, the child already understands broccoli, or rather it's already hardwired to recognize the trait of having broccoli-ness and to associate a certain sort of sound with things that have that trait. So explaining broccoli to a child is easy.
This is evolution, and that wiring is the result of wetware neural networks (biological ones) that needed the ability to identify "food" versus "not-food". Mutant wetware without certain DNA sequences dies off when it starts ingesting non-food faster than its hardware can evolve to turn it into food. (If you prefer the theory of a programmer-deity, we'll have to move this discussion elsewhere.)
Put another way: if you have to program every recognition and comprehension into the AI, then how can it learn? What you need to do is provide a substrate for intelligence and provide stimuli that induce thinking. This is what I meant by a bootstrapping process. In my view, a computer-based AI will not add numbers by using the microprocessor's add function. It will manipulate numeric concepts the same way we do to achieve a result:
Code:
 11    <- carry the 1s
 876
+345
----
1221
And true AI will be capable of making mistakes when doing this because of distractions and emotional states that interfere with its ability to think clearly.

(Well, that might be far too non-artificial.)
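For what it's worth, here is a minimal sketch of that kind of column-by-column manipulation, again in C++ only because that's the language I keep mentioning; the columnAdd name and the whole structure are illustrative assumptions on my part, not a claim about how a real AI would represent numbers. It works on the digits as symbols, carrying the ones, rather than handing the whole numbers to the processor's add instruction.
Code:
// Add two non-negative integers written as decimal digit strings,
// one column at a time, carrying into the next column - as in the
// worked example above.
#include <algorithm>
#include <iostream>
#include <string>

std::string columnAdd(const std::string& a, const std::string& b) {
    std::string result;
    int carry = 0;
    int i = static_cast<int>(a.size()) - 1;
    int j = static_cast<int>(b.size()) - 1;
    while (i >= 0 || j >= 0 || carry) {
        int column = carry;                     // start with the carried 1 (or 0)
        if (i >= 0) column += a[i--] - '0';     // digit from the first number
        if (j >= 0) column += b[j--] - '0';     // digit from the second number
        result.push_back(static_cast<char>('0' + column % 10)); // write the ones digit
        carry = column / 10;                    // carry the tens into the next column
    }
    std::reverse(result.begin(), result.end());
    return result;
}

int main() {
    std::cout << columnAdd("876", "345") << "\n"; // prints 1221, as in the example
    return 0;
}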