jefgorbach said:
Celebrim brought numerous good points and basically underscores the basic oversight in AI arguments: that AIs would HAVE emotions. Why? While it's true AIs would be sophisticated programs, they are still just that: programs. Extremely complex rule sets designed to determine and process the available options to reach the most logical solution to the current event.

AIs, on the other hand, are specialized tools designed to accomplish a single task, 99% of which do not require emotions. Therefore there is no logical reason the programmers (either human or AI) would waste the memory space and processing time on useless emotional code that would never be used in the completion of its assigned function; i.e., what role does emotion play in product assembly? Flying? Driving? Mining? Sweeping? Lawn maintenance? Crop picking? Warfare?

Currently, emotions (and intelligence) are implemented using chemistry and statistics. Why are those inherently superior to gates and bytes?

jefgorbach said:
Negative emotions simply would not exist in EAIs because there would be no circumstance in which they would be necessary, and thus they would not be included in the code.

How would you encode the drive for self-improvement without including some measure of self-dissatisfaction?
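To make that last question concrete: in practice, a drive for self-improvement is usually implemented as minimizing some explicit measure of how far current behaviour falls short of a goal, and that measure is, functionally, built-in self-dissatisfaction. The sketch below is purely illustrative; the names (`dissatisfaction`, `TARGET`) and the hill-climbing loop are hypothetical, not anything proposed in the thread:

```python
import random

TARGET = 0.9      # performance level the agent is "supposed" to reach
skill = 0.1       # the agent's current capability

def performance(s: float) -> float:
    """Noisy measurement of how well the current skill level performs."""
    return s + random.uniform(-0.05, 0.05)

def dissatisfaction(s: float) -> float:
    """How far current performance falls short of the target (never negative)."""
    return max(0.0, TARGET - performance(s))

# Self-improvement loop: keep nudging the skill as long as the shortfall
# measure says the current self is not good enough.
for step in range(1000):
    gap = dissatisfaction(skill)
    if gap < 0.01:                         # "content": nothing left driving change
        break
    candidate = skill + random.uniform(-0.05, 0.05)
    if dissatisfaction(candidate) < gap:   # keep changes that reduce the shortfall
        skill = candidate

print(f"stopped after {step} steps with skill {skill:.2f}")
```

Whether that gap variable deserves to be called an emotion is exactly what the thread is arguing about, but some numeric stand-in for "not good enough yet" has to exist for the loop to run at all.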
paradox42 said:
I, too, am a programmer, with a Computer Science degree from a major university, and I too have done AI programming with current-day tools. But I really must flatly disagree with Celebrim on most points.
Here is where I think your arguments get off-base. What you are forgetting in that sentence is that compilers are themselves programs. Compilers can and do have bugs, and these bugs can and do cause mistranslations. I have seen proof firsthand; in fact, it affected one of my own programs. That experience eternally broke my faith in the possibility of absolute correctness in any program, including the very operating systems and compilers that run all our other programs.
Celebrim said:
Are you saying that compilers don't in fact work, or are you saying that programs have bugs? Because in fact, the translation is usually perfect, but what you wrote wasn't perfect.

paradox42 said:
And this sentence illustrates my specific issue with your rants. Your rants in this thread all seem to be founded on the assumptions that (A) something that can be called "intelligent" is actually programmable in the sense of a modern machine, and (B) all aspects of such an intelligent program will be under the original programmer's control, and furthermore remain so. These assumptions both ring false for me, because they ignore the very important fact that "intelligence" as it is currently defined implies the ability to learn from circumstance and experience, and thus by definition it must become capable of doing things that the original designers never expected or intended.

It therefore is not possible to say that a sentient program will not, in fact, achieve some desire for what Maslow termed "self-actualization." It is not possible to say that such a program will never have the desire for self-determination, because true learning and self-modification allow for any conceivable result given the correct combinations of time and experience. We can conceive of self-aware software that seeks its own "rights," since we ourselves are exactly such software operating within the confines of our own brains, and therefore it is in principle possible for a learning program to arrive at that point.
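The "learn from circumstance and experience" point is easy to demonstrate even with today's tools. In the toy learner below, the preference it ends up acting on is not written anywhere by the programmer; it is estimated entirely from observed feedback. The environment is simulated in the same file only to keep the sketch self-contained, and all names are illustrative:

```python
import random

# Hidden payoff rates of three possible actions. The learner never reads
# these directly; it only sees the rewards its choices produce.
HIDDEN_PAYOFFS = [0.2, 0.5, 0.8]

estimates = [0.0, 0.0, 0.0]   # beliefs acquired purely from experience
counts = [0, 0, 0]

def pull(action: int) -> float:
    """Environment feedback: reward 1 with the hidden probability, else 0."""
    return 1.0 if random.random() < HIDDEN_PAYOFFS[action] else 0.0

for trial in range(5000):
    # Mostly exploit what experience suggests so far, occasionally explore.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])
    reward = pull(action)
    counts[action] += 1
    # Incremental average: nothing but feedback updates this belief.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("preferred action:", max(range(3), key=lambda a: estimates[a]))
print("estimates:", [round(e, 2) for e in estimates])
```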
Celebrim said:
Quite often. But that is exactly my point. These sorts of bugs are expected. A bug that caused my word processing application to suddenly begin performing as a spreadsheet application would be rather unexpected.

paradox42 said:
Here, again, you are forgetting the very nature of self-modifying systems. Complexities can arise from even very simple starting rules and conditions. In my own view, allowing a self-modifying program to run long enough virtually ensures that it will arrive at some sort of self-awareness. Sooner or later, some part of the modifying code is going to question just what it is modifying anyway, when it does this step. It is, of course, unlikely to occur this way, but it is not by any means impossible.
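The "complexity from very simple starting rules" claim, at least, is uncontroversial and easy to see for oneself. The classic demonstration is an elementary cellular automaton; Rule 110, used in the sketch below, is even known to be Turing-complete. This is only an illustration of that one claim, not of anything further up the chain of argument:

```python
# Elementary cellular automaton, Rule 110. Each cell's next state depends
# only on itself and its two neighbours, yet a single live cell grows into
# intricate, long-lived structures.
RULE = 110
WIDTH, STEPS = 101, 40

row = [0] * WIDTH
row[WIDTH // 2] = 1                     # one live cell as the entire starting condition

for _ in range(STEPS):
    print("".join("#" if cell else " " for cell in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```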
Celebrim said:
I'll get to the point in a second, but the main thing that says this can't occur by accident is that intelligence sufficient to constitute sentience is incredibly complex. You aren't going to get it by accident unless you are trying to achieve it in the first place and were coming darn close.

paradox42 said:
This, I agree with. Because we do not control the result of a self-modifying program, we cannot with certainty say that a sentient (even sapient) program will be even remotely human in outlook, except perhaps in those portions of human outlook that are irreducibly part of being sentient or sapient in the first place. Since science has yet to agree on those, I suggest that it is unlikely for the first AI to be particularly close to humanity in its thought patterns, unless it is the result of a research study with the specific goal of producing such a program (and even then it's questionable, thanks to the principle that the program must be out of control to evolve).

Celebrim said:
But the really big problem is you again confuse sentience with being human.
paradox42 said:
Actually, I think the "desire" for continued existence will in fact be common to all sentience, because of the fact that learning requires self-modification. That means that in order for learning to occur, there must in fact be a "self" to modify.

Celebrim said:
Even the very basic human instincts, like "I want to continue to exist," aren't necessarily going to occur to a newly sentient AI. I realize that this just flies in the face of your intuition about what intelligence means, but that is precisely my point. You can't rely on your human intuition.

paradox42 said:
Speak for yourself.

Celebrim said:
Bacteria have been evolving willy-nilly through countless generations, without any sort of the built-in restraint I'm suggesting, for billions of years, and none of them are self-aware yet.
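A small illustration of the "there must be a self to modify" point above: any learner has to carry persistent state that its own updates keep rewriting, or nothing accumulates from one experience to the next. The toy perceptron below (plain Python, hypothetical names, not anything from the thread) learns the AND function, and the only thing that makes it a learner is the weight state each step modifies:

```python
# Toy perceptron learning the AND function. The persistent state below
# (weights, bias) is the "self" that training keeps modifying; reset it
# and no learning has taken place.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for epoch in range(20):
    for x, target in DATA:
        error = target - predict(x)     # the shortfall that drives each update
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([predict(x) for x, _ in DATA])    # prints [0, 0, 0, 1] once trained
```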
