green slime said:
Of course, they can't prevent them from being designed, constructed, and obeyed by those bowing to their infinite AI wisdom. But a nuke doesn't ask questions, and will soon be available to parties hostile to western society at large. A nuke is only the most violent and extreme way of dealing with the problem; there are other, more subtle ways. Obviously, this entails safeguards. Ultimately, though, a kind of stalemate would have to be achieved, as the non-AI party would retaliate if threatened with ultimate destruction in order to "save" humankind, and the AI would understand this. This isn't to say that some degree of conflict wouldn't exist. In fact, the AI would clearly see that it is necessary, in order for there to be an alternative for restive elements within AI society.
In Transhuman Space, the development of AIs was gradual. Artificial intelligences started out as non-sapient (and more and more people used them as personal assistants), but gradually got better. People got used to having near-sapient programs organizing their schedules, buying their groceries, doing their taxes, watching their children, and otherwise making their lives more convenient. These programs didn't have much in the way of imagination and no motivations of their own, but they were very useful for day-to-day activities, and so most people used them.
True, sapient AIs who are self-aware and capable of following their own motivations exist in Transhuman Space, though they still require very expensive software to create and hardware to run on. But they are obviously a progression and refinement of what came before - and most humans got so used to the idea that the protests were muted.
But really, even in the real world I don't see anyone threatening to use nukes over the research into true AIs - and even if that were to happen, it would only come from a small minority that would have most of the world against it.
You misunderstand. The driving force behind creating AI is not just to emulate human intelligence, but to exceed it: to create a non-mortal intelligence capable of understanding things that humans could never comprehend even in an entire lifetime.
I think today's researchers into artificial intelligence would consider even intelligence on par with humans as a major coup. Sure, once that happens they will seek to improve it even more, but researchers always try to improve their findings.
Given that computers exist today with petaflop capacity, in 100 years you are looking at roughly 1000 trillion times that computing power (if Moore's Law continues to hold, doubling about every two years).
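As a back-of-the-envelope check of that figure: assuming a two-year doubling period (the doubling period is an assumption here; historically it has varied between roughly 1.5 and 2 years), 100 years gives 50 doublings, and 2^50 is about 1.13 quadrillion, i.e. roughly "1000 trillion times":

```python
# Back-of-the-envelope Moore's Law extrapolation.
# The doubling period is an assumed parameter, not a law of nature.

doubling_period_years = 2
horizon_years = 100

doublings = horizon_years / doubling_period_years  # 50 doublings
growth_factor = 2 ** doublings                     # 2^50 ~ 1.13e15

print(f"{doublings:.0f} doublings -> {growth_factor:.2e}x improvement")
```

With a 1.5-year doubling period the same horizon gives about 2^66.7, closer to 10^20, so the headline number is very sensitive to that assumption.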
There is no reason that Moore's Law will hold forever - it's not a law of nature, after all - and indeed some experts are claiming that it is already slowing down. Sooner or later there are real physical limits for improvements.
Transhuman Space assumes that Moore's Law no longer holds. Computers do get better and faster, but at a much slower rate - computer hardware is a fairly mature technology in the setting.
Such a machine coupled with true AI would be able to ask questions, extrapolate theories, and provide proofs far beyond anything imaginable today - proofs that would take a human more than a lifetime of dedication merely to understand. It is beyond question that such machines would have cults surrounding them on the basis of their near-omniscient knowledge. Such a machine will be built because of the fear of another group building it first (akin to all arms races). Will such a machine be able to resist the temptation of manipulating people in order to achieve the most "logical" outcomes?
Beats me. But that more properly belongs in the realm of the "Technological Singularity" that some people have proposed, and in the Transhuman Space setting this has not yet arrived - if it ever does. Though conspiracy theories about the creation of "super-AIs" exist in the setting, for the most part AIs are getting smarter at a relatively slow rate - similar to humans, who also use all sorts of technologies to get slowly smarter.
I cannot see why any society would accept such a constraint. The only cause for it would be some kind of physical limitation preventing AI from developing beyond what is humanly possible. That seems unlikely, IMO.
Why? Ultimately, AIs are constrained by the same laws of nature as human brains. Sure, you can eliminate quite a bit of "waste" in that you can simply replace broken physical parts instead of having to include self-repair mechanisms, but "self-awareness" is a much more complicated phenomenon than crunching numbers the way current computers do.
I mean, modern computers beat humans at calculation by far - but their pattern recognition skills are lousy when compared even to young human children.
While I understand that most AI appliances would be limited, they are not the ones setting ("guiding", if you prefer that phrase) government policy, controlling the police (which are probably AI-bots in their own right), replacing the DoE and IRS, or running the military.
Sure, you will find AIs in many of these tasks - but almost all of them are non-sapient AIs without a true will of their own. They are still controlled by a sapient supervisor - in most cases a human. In fact, "bot boss" is a common job description for humans controlling a number of non-sapient AIs in robot bodies.
Sapient AIs will usually work in analyzing and interpreting vast amounts of data, which is what they excel at. But that doesn't mean that they will be used much for jobs requiring social interaction - they are too expensive for that. Your "Friendly Neighborhood Cop" is not going to be an AI.
Furthermore, a true AI would be self-programming, in a fashion not dissimilar to children: capable of making its own moral and ethical judgements, exceeding its design capabilities, and achieving true insights. Otherwise, it isn't true AI. As such, you can't guarantee they will be law-abiding.
They start out with pretty strong personality traits toward being law-abiding, though. Sure, some eventually move beyond that part of their programming, but any genuinely "rogue" AI will be met with general hostility - including from its law-abiding brethren, who don't want AIs to get a bad reputation, for obvious reasons.
I'd prefer not to comment on real life religions, I've had some pretty nasty experiences with "mainstream" Islam. Let's just leave it at that.
Mine were generally positive. But that's neither here nor there...