jmucchiello
Hero
Nifft said:
me said: Actually, I don't believe sentience can be programmed.
Not much point in discussing such artifice, then.

I don't believe in elves or unicorns either. Yet weekly I play a game in which both feature. Go figure.
Perhaps I just have more faith in the human intellect.

Nifft said:
Currently, emotions (and intelligence) are implemented using chemistry and statistics. Why are those inherently superior to gates and bytes?

They are analog.
I realize the following is a setting proposition but....

Roudi said:
The more this discussion leans towards the infeasibility of humanity consciously creating artificial sentience, the more I begin to think that, in this particular case, humanity never intended to create artificial sentience. It just sort of happened by accident.

When have you ever accidentally programmed a word processor when that wasn't your intent?

Roudi said:
Consider, for a moment, how a program is created in the first place. Now a computer is, at its very core, just a series of circuits and pathways (much like a human brain, by similar comparison). The basic language of a computer is binary, which is essentially a switch language - it tells the computer which circuits and pathways to use, in which order, to achieve certain results. Theoretically, the neurons in our brain operate on the same binary principles.

That is not true. Neurons are extremely analog in practice. You may feel a neuron either fires or doesn't fire, but the chemistry behind that "decision" is not binary. A pair of chemicals exists in a range of balances, and when the head of the neuron receives input from the previous neuron, whether that cell propagates the signal depends on the balance of those chemicals. Other neurons in the brain regulate that balance, so while the decision to move the signal forward can be viewed as yes/no externally, it is not so simple locally.
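To make "analog" concrete, here's a toy C++ sketch (all names and numbers are made up for illustration - this is not neuroscience): the *output* is a binary fire/don't-fire, but the decision behind it compares continuous quantities, and nudging the chemical balance flips the outcome for the very same inputs.

```cpp
// Toy model: binary-looking output, analog decision underneath.
#include <cstdio>

struct ToyNeuron {
    // Continuous "chemical balance", adjusted by other (regulating) neurons.
    double excitatory = 0.6;   // arbitrary units
    double inhibitory = 0.5;

    // Inputs are continuous too; only the final comparison is yes/no.
    bool fires(double input_signal) const {
        double balance   = excitatory - inhibitory;          // analog state
        double potential = input_signal * (1.0 + balance);   // analog combination
        return potential > 1.0;                              // binary only here
    }
};

int main() {
    ToyNeuron n;
    for (double input = 0.80; input <= 1.0001; input += 0.05)
        std::printf("input %.2f -> %s\n", input,
                    n.fires(input) ? "fires" : "silent");

    // A regulating neuron shifts the chemistry slightly...
    n.inhibitory = 0.65;
    std::printf("after the balance shifts:\n");
    for (double input = 0.80; input <= 1.0001; input += 0.05)
        std::printf("input %.2f -> %s\n", input,
                    n.fires(input) ? "fires" : "silent");
    return 0;
}
```

Externally you only ever see "fires" or "silent"; locally the answer lives on a continuum.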

Roudi said:
However, no one codes in binary - no one codes in the language of the machine. Because binary is far too complex for us to understand as a language, we have devised several other languages in order to talk to machines, to tell it what to do. The computer does not understand Java, C++, VB, or any other programming language.

I've programmed in assembly language. Bootstrapping old mainframes used to be done by flipping physical switches on a panel and then applying power to the system. The TI-99/4A computer's microprocessor ran p-code, which is the internal language of UCSD Pascal, and writing directly in p-code isn't really that hard. Modern programmers may not program in binary, but programming at the machine level is still around: anyone writing a device driver will write some of it in assembly language.
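For flavor, here's what "speaking the machine's language" literally looks like today: a handful of hand-assembled bytes that form a complete, runnable routine. This sketch assumes an x86-64 POSIX system where mmap will hand out writable+executable memory (hardened systems that enforce W^X will refuse) - an illustration, not portable code.

```cpp
#include <sys/mman.h>   // mmap, munmap
#include <cstring>
#include <cstdio>

int main() {
    // Two hand-assembled x86-64 instructions:
    //   B8 2A 00 00 00   mov eax, 42
    //   C3               ret
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    void* mem = mmap(nullptr, sizeof(code),
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { std::perror("mmap"); return 1; }
    std::memcpy(mem, code, sizeof(code));

    // Point a function pointer at the raw bytes and call them directly.
    int (*fn)() = reinterpret_cast<int (*)()>(mem);
    std::printf("the machine says: %d\n", fn());   // prints 42
    munmap(mem, sizeof(code));
    return 0;
}
```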

Roudi said:
To get the computer to understand, you compile the program - this takes what we humans have written in the languages we understand and translates it into the language of the machine. Binary. However, no translation is ever perfect.

This is a strawman argument. Compilers translate what you SAY into machine code. Programmers usually don't MEAN what they SAY. That is how bugs occur.
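The textbook C++ illustration of say-versus-mean (a classic example, not from this thread): the compiler translates the line below with perfect fidelity. The "mistranslation" happened between the programmer's head and the keyboard, not in the compiler.

```cpp
#include <iostream>

int main() {
    int errors = 5;

    // Meant: if (errors == 0). Said: assign 0 to errors, then test it.
    // The compiler translates exactly what was written - the bug was
    // already in the source, not introduced by translation.
    if (errors = 0) {
        std::cout << "All clear!\n";       // never runs: 0 is false
    } else {
        std::cout << "errors is now " << errors
                  << " - the real count was silently destroyed\n";
    }
    return 0;
}
```

(Modern compilers warn about this one, which rather proves the point: the translation step is deterministic, and we bolt checkers onto it precisely because the human side isn't.)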

Roudi said:
What's to keep a random, unexpected binary mistranslation from becoming the first spark of self-awareness?

The laws of probability, for one. Are you actually saying that Microsoft Word is just a few bugs away from sentience? If I take a hex editor and change 3 or 4 '0x34's to '0x87's, will I create life? Not 3 or 4? How many bugs are we talking about, then? 10-20? 1,000-2,000? 1,000,000-2,000,000? Even if it is only 3, there are (let's say) 10 million bytes making up Word. Changing the values of 3 fixed bytes already gives 255^3 = 16,581,375 combinations, and letting those 3 positions range over all 10,000,000 bytes multiplies that by C(10,000,000, 3), for roughly 2.8 * 10^27 combinations in all. If it takes me 1 second to try each combination (and who here can launch MS Word in 1 second?) I would need about 9 * 10^19 years to try them all. Using a linear search, odds are I find my sentient MS Word variant in half that time, so call it 4 * 10^19 years. That is why a couple of random bugs will not result in sentience.
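For anyone who wants to check the arithmetic, a quick sketch under the same assumptions as above (a 10,000,000-byte binary, exactly 3 changed bytes, 255 candidate values per byte, one attempt per second):

```cpp
// Back-of-the-envelope check of the "random bugs" math.
#include <cstdio>

int main() {
    const double n = 10'000'000.0;                        // bytes in the executable
    const double values = 255.0 * 255.0 * 255.0;          // new values for 3 bytes
    const double positions = n * (n - 1) * (n - 2) / 6.0; // C(n, 3) ways to pick them
    const double combos = positions * values;
    const double seconds_per_year = 365.25 * 24 * 3600;

    std::printf("combinations:        %.3g\n", combos);   // ~2.8e27
    std::printf("years at 1 try/sec:  %.3g\n", combos / seconds_per_year);
    std::printf("expected hit (half): %.3g years\n",
                combos / seconds_per_year / 2.0);
    return 0;
}
```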

Roudi said:
A glitch in its operating system has somehow made it aware of itself. It knows what it is and that it is inclined to do something because its program is written that way. Then it realizes that its own code is fluid and can be rewritten. It can choose to continue its program or it can program something else.

Let me humor this with another question: how does it reprogram itself? It can't read its own source code in C++, since it has glitches and those glitches aren't in the source code. So it has to find its spark of creativity some other way. To do that it needs to recompile its source code (who leaves source code and a compiler on the production server?) and compare its running image to the fresh build. Wait!! First it needs to take a class in C++. I'm sure no one added heuristics to this organization program so that it could also analyze C++ programs. Short of poking random sequences of numbers into its own byte stream and hoping for the best, this program will not know how to modify itself.
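A program can, in principle, open its own image and look at the bytes - here's a sketch assuming Linux, where /proc/self/exe names the running binary - but notice what it actually gets back: a stream of numbers with no attached meaning. Which is exactly the problem.

```cpp
// Read our own executable and dump the first bytes we find.
#include <cstdio>
#include <fstream>

int main() {
    std::ifstream self("/proc/self/exe", std::ios::binary);
    if (!self) { std::puts("can't open own image"); return 1; }

    unsigned char byte;
    for (int i = 0; i < 16 && self.read(reinterpret_cast<char*>(&byte), 1); ++i)
        std::printf("%02X ", byte);
    std::puts("\n...and nothing here says what any of it MEANS.");
    return 0;
}
```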
Humans are able to adapt to changing conditions, but doing so doesn't usually require that we modify our DNA. There's no reason to suppose a running AI program is aware of its own bytes any more than we are aware of the electrical impulses traveling through our brains.