Nifft said:
It developed its genes independently.
I believe that is the characterization I disagree with.
What, aside from a fully working sentient AI, would constitute evidence (in your opinion)?
You would need evidence that a large percentage of self-modifying systems gain sentience. So, for example, evidence that sentient life is common in the universe would constitute evidence of that. Or evidence that sentient life had developed independently on many occasions in Earth's past (something like discovering Lovecraft's pre-history of the Earth wasn't far off) would constitute evidence of the conjecture. Or evidence that self-modifying code easily became sentient (as was believed to be true a few decades ago) would also constitute evidence. On the other hand, a marked lack of evidence of sentient life elsewhere in the universe, all the sentient life in Earth's history appearing to have evolved from a single recent common ancestor, and a marked lack of progress in achieving strong AI through self-modifying databases would constitute evidence for a contrary hypothesis: that self-modifying systems on their own very rarely achieve self-awareness.
Indeed, I feel that the odds are so close to zero that we'd never evolve a strong AI by evolutionary techniques alone.
In my opinion, the current (and accelerating) complexity of publicly visible systems is evidence for it. But I'm curious what you'd accept as evidence.
But it is not evidence that self-modifying systems will, by chance or happenstance, become sentient, because this increasing complexity is happening by design. The argument that AI is likely, probable, or even possible to occur as the result of some bug or some process outside the control of the designers is what I'm arguing against. In other words, the increasing complexity and functionality of software is evidence for the position I stated: that we will be able to design AI.
Really? So you think that researchers retard the process?
Errr... hasn't it been my position all along that research and engineering are the (effective) process? I believe AIs will be created. They will be designed. I do not think it is reasonable to think that they will be created by random chance, because the process is simply too slow. There simply won't be enough 'trials' to have even a remote chance of doing it by accident.
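The "not enough trials" point can be sketched as a toy back-of-the-envelope calculation. Both numbers below (the per-trial probability of a system stumbling into sentience, and the trial rate a project could sustain) are purely illustrative assumptions, not figures anyone in this thread has claimed:

```python
# Toy sketch: expected waiting time for a very rare event under
# repeated independent trials. Numbers are illustrative assumptions only.

p_per_trial = 1e-18    # assumed chance that one random self-modification yields sentience
trials_per_year = 1e6  # assumed number of independent trials a project can run per year

# Mean of a geometric distribution: expected trials until first success is 1/p,
# so the expected wait in years is 1 / (p * rate).
expected_years = 1 / (p_per_trial * trials_per_year)
print(f"Expected wait: {expected_years:.1e} years")
```

Under these made-up inputs the expected wait is on the order of a trillion years, which is the shape of the argument: unless the per-trial odds are astronomically better than blind chance, no plausible trial rate gets you there within a funding horizon.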
Clearly we developed before the lifetime of the universe expired, and we didn't have researchers trying to make us work right. (We did have dire tigers, though.)
Cheers, -- N
If you are willing to wait around for 3-4 billion years for one of the databases to organize itself in such a way that it becomes self-aware, and if you think you can keep the hardware running and the experiment paid for during that period, be my guest. But, in my experience, if you can't produce results in less than a million years, your funding tends to dry up.