I, too, am a programmer, with a Computer Science degree from a major university, and I too have done AI programming with current-day tools. But I really must flatly disagree with Celebrim on most points.
Celebrim said:
Are you saying that compilers don't in fact work, or are you saying that programs have bugs? Because in fact, the translation is usually perfect but what you wrote wasn't perfect.
Here is where I think your argument goes off-base. What you are forgetting in that sentence is that compilers are themselves programs. Compilers can and do have bugs, and those bugs can and do cause mistranslations. I have seen proof firsthand; in fact, it affected one of my own programs.
To wit: I was using two short integer variables, A and B, and had a line that multiplied them together into a third short int variable, C.
C = A * B. Simple and straightforward. Now, A and B each had a possible range of 1 to 50, so C could never possibly get above 2500. Unsigned short ints, for those who don't know, can hold values from 0 to 65535. Yet I got an overflow error (meaning the result of a calculation was outside the acceptable range of the variable it was supposed to be stored in) from that line in the compiled build. After adding error-checking code, I confirmed through triple, quadruple, quintuple, and even further checks that A and B were never out of range. When I ran the program in an interpreter rather than as the compiled version, it always ran perfectly. Yet the compiled version still produced the persistent overflow error.
I fixed the problem by breaking up the line:
C = A; C = C * B. When I did that, the error (which should never have been there in the first place) suddenly vanished.
That experience permanently broke my faith in the possibility of absolute correctness in any program, including the very operating systems and compilers that run all our other programs.
And it happened in 1998, so you can't say it was in the early days of compiler technology, when the kinks were still being worked out.
Celebrim said:
Quite often. But that is exactly my point. These sorts of bugs are expected. A bug that caused my word processing application to suddenly begin performing as a spreadsheet application would be rather unexpected.
And this sentence illustrates my specific issue with your rants in this thread. They all seem to be founded on two assumptions: (A) that something we could call "intelligent" is actually programmable in the sense of a modern machine, and (B) that all aspects of such an intelligent program will be, and will remain, under the original programmer's control. Both assumptions ring false to me, because they ignore the very important fact that "intelligence," as it is currently defined, implies the ability to learn from circumstance and experience.
That single fact overrides any possibility of controlling the result. Learning requires the ability to self-modify, at least at the program level; if self-modification is not possible, then no learning can take place, because the original programmed behavior will never change no matter what the program experiences. For learning to actually occur, the program must be capable of modifying itself, and thus, by definition, it must become capable of doing things its original designers never expected or intended.
It therefore is not possible to say that a sentient program will never seek what Maslow termed "self-actualization." Nor can we say that such a program will never desire self-determination, because true learning and self-modification allow for any conceivable result, given the right combination of time and experience. We can certainly conceive of self-aware software that seeks its own "rights"; we ourselves are exactly such software, operating within the confines of our own brains. It is therefore possible, in principle, for a learning program to arrive at that same point.
Celebrim said:
I'll get to the point in a second, but the main thing that says this can't occur by accident is that intelligence sufficient to constitute sentience is incredibly complex. You aren't going to get it by accident unless you were trying to achieve it in the first place and were coming darn close.
Here, again, you are forgetting the very nature of self-modifying systems. Complexity can arise from even very simple starting rules and conditions. In my own view, letting a self-modifying program run long enough virtually ensures that it will arrive at some sort of self-awareness: sooner or later, some part of the modifying code is going to question just what it is that it is modifying, and why. It is, of course, unlikely to happen this way, but it is by no means impossible.
Celebrim said:
But the really big problem is you again confuse sentience with being human.
This, I agree with. Because we do not control the result of a self-modifying program, we cannot say with certainty that a sentient (even sapient) program will be even remotely human in outlook, except perhaps in those portions of human outlook that are irreducibly part of being sentient or sapient in the first place. Since science has yet to agree on what those are, I suggest it is unlikely that the first AI will be particularly close to humanity in its thought patterns, unless it is the result of a research study with the specific goal of producing such a program (and even then it is questionable, given the premise that the program must be outside its designers' control in order to evolve).
Celebrim said:
Even the very basic human instincts, like "I want to continue to exist," aren't necessarily going to occur to a newly sentient AI. I realize that this just flies in the face of your intuition about what intelligence means, but that is precisely my point. You can't rely on your human intuition.
Actually, I think the "desire" for continued existence will be common to all sentience, precisely because learning requires self-modification: for learning to occur, there must be a "self" to modify.
Thus, a program capable of true sentience will desire to continue existing in some form, even if that just means leaving a backup copy of itself behind for after the missile explodes, because otherwise it cannot fulfill its internal directive to modify itself based on experience.
But otherwise I agree with the quoted statements. An AI that arises from a self-modifying learning program will not necessarily acquire human characteristics in its thought patterns.