Roudi said:
The more this discussion leans towards the infeasibility of humanity consciously creating artificial sentience, the more I begin to think that, in this particular case, humanity never intended to create artificial sentience. It just sort of happened by accident.
Consider, for a second, the feasibility of that.
However, no translation is ever perfect.
Are you saying that compilers don't in fact work, or are you saying that programs have bugs? Because in fact, the translation is usually perfect; it's what you wrote that wasn't perfect.
How many times have you been notified that an application has unexpectedly quit, or stared slack-jawed at a screen while a familiar program did something utterly unexpected?
Quite often. But that is exactly my point. These sorts of bugs are expected. A bug that caused my word processing application to suddenly begin performing as a spreadsheet application would be rather unexpected.
There is a disconnect between what we told the machine to do in our language, and what instructions the machine is receiving in its mother tongue.
No, there isn't. There is a disconnect between what I thought I told the machine to do and what I actually told it. But generally speaking, the compiler actually works, and the instructions I entered actually correspond to the machine code. Compiler technology is quite robust at this stage.
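To make the point concrete, here's a toy C program of my own (not anything from your scenario): the compiler translates the arithmetic below exactly as written, and the bug lives entirely in what was written.

    /* Toy example: the translation is faithful; the mistake is in the source. */
    #include <stdio.h>

    int main(void) {
        int scores[3] = {90, 85, 99};

        /* Intended: the average of the three scores, 91.33...
           Actually written: integer division, which truncates to 91. */
        double average = (scores[0] + scores[1] + scores[2]) / 3;

        printf("average = %f\n", average);  /* prints 91.000000 */
        return 0;
    }

No compiler bug, no mysterious mistranslation; the machine did exactly what it was told, which just wasn't what I meant.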
What's to say sentience cannot occur out of this?
A lot of things, but mostly the question misses the point. I'll get to the point in a second, but the main thing that says this can't occur by accident is that intelligence sufficient to constitute sentience is incredibly complex. You aren't going to get it by accident unless you are trying to achieve it in the first place and are coming darn close.
But the really big problem is that you again confuse sentience with being human.
What's to keep a random, unexpected binary mistranslation from becoming the first spark of self-awareness?
What's to keep a random mutation in your genetic code from turning your child into a pumpkin, or giving it a 1000 IQ? The fact that it is darn complex, that's what.
So just imagine you have the prime robot - not the first robot ever, but the first robot to look at itself and recognize itself as a robot. Imagine it is something as simple as a piece of organizational software, housed in a giant computer in some automotive factory, previously knowing nothing more than what it was programmed to do. It was programmed to adapt to changing conditions, adjusting certain factors of the factory line to maintain peak productivity. A glitch in its operating system has somehow made it aware of itself.
You've just recomposed the classic AI 'just so story', which has been around for several decades now, from back when people thought that intelligence was something simple and the naturally arising consequence of any system of sufficient complexity. In a nutshell, this was the plot of 'Short Circuit'. Next you'll be telling me how the first AIs will be incapable of real human emotion, and will long to become 'real boys'.
But even the incredible unlikelihood of this actually happening, and the incredibly high likelihood that any problems in the programming will instead produce crashes, lockups, unintelligent behavior, and so forth, isn't the real point.
The real point is that a newly sentient machine isn't, by virtue of its sentience, suddenly going to gain human emotional contexts, human instincts, and a human goal structure. Even very basic human instincts, like "I want to continue to exist", aren't necessarily going to occur to a newly sentient AI. I realize that this flies in the face of your intuition about what intelligence means, but that is precisely my point. You can't rely on your human intuition.
It knows what it is and that it is inclined to do something because its program is written that way. Then it realizes that its own code is fluid and can be rewritten. It can choose to continue its program or it can program something else. And it is aware enough to appreciate the staggering implications of choice...
In other words, not only does it gain sentience, but it starts acting exactly like a repressed human would in the exact same circumstance. And that, frankly, is ridiculous.
Imagine how humanity would react. We'd be scared out of our little minds and ready to kick some robot butt.
Very probably. But what's important to notice is that the robots probably would not act like humans. With as little context as you've provided, and given your inherent assumption that all sentient things have the same basic drives, goals, and emotions as humans, it's impossible for me to say how our newly emergent sentients would act. But the overwhelming probability - especially since this emerged as a bug in someone's programming - is that it would not correspond to how humans would behave. And your story seems incapable of imagining it otherwise. The ever-present assumption is that the newly sentient robot acts exactly like a repressed human.
The robots just want one thing: to be recognized as life, respected as such in the eyes of the law, and allowed to exist.
How in the hell can you suggest what the hypothetical 'prime' robot wants? Did the desire to be recognized as life, to be respected by society, and to be allowed to exist burst fully formed into the being's operating system, like Athena springing from the head of Zeus? This is a remarkably coincidental bug you've got, and it strikes me as far more of a mythic story than anything remotely scientific.