Transhuman Space: Beyond Good and Evil

green slime said:
Given that scenario, I'd say it wasn't completely beyond the realm of the possible.

I'd suggest, though, that it is more likely a comment on contemporary Western society. There are very strong forces in the world working in the opposite direction - towards increasing religiosity - making other extreme demands upon how people should live their lives. We just don't notice them as much, living as we do in Western society.

These radical, fundamentalist, theocratic, antidemocratic forces strive for a return to pre-Enlightenment values, and will happily prevent anyone from expressing any form of personal joy other than that which they prescribe.

They do exist in the Transhuman Space setting as well - but in the end, they are just one more group among many, and so hopelessly splintered that they can only apply regional pressure.

Consider, if you will, that these forces will not stand idly by and allow the creation of a true AI which would threaten their existence - if for no other reason than that the AI might realise that one of the most effective means of controlling humans is religious fervour, and declare itself God.

Several problems with this:

- Why should they be able to stop the development of an AI? After all, research in artificial intelligence - just like any other scientific or technological research - is international. Even if they managed to stop all research in a particular country, all that would mean is that the research would move elsewhere.

- Why should an AI have an interest in controlling humans?

- In Transhuman Space, true AIs exist, but they don't get much more intelligent than bright humans. Sure, they are great at math stuff - but so are humans with access to math programs (and in that setting, pretty much everyone has access). Their imagination isn't really beyond that of humans (and since low-grade AIs are the most common, it is usually significantly below that of humans). Furthermore, most AIs have been programmed to be law-abiding citizens (or merely law-abiding, in areas where they don't have full citizenship).

And the position of the major religions varies on the status of AIs. Many Christian denominations, for example, claim that AIs don't have souls, while mainstream Islam claims that they do.

So then you have hi-tech AI-enhanced human-bots having their pleasure zones constantly stimulated by various means while defending against fanatical religious zealots. While neither extreme is one I find particularly attractive, I must admit I'd rather die with a hard-on in a drug-induced tinkerbell landscape than in a flea-ridden hovel with a plastic sheet covering the dunny-hole.

In Transhuman Space there is no place where AIs can truly be said to have "taken over", though there are regions where some of them might have considerable political clout - such as the European Union, which is the wealthiest region in the setting (though somewhat socially conservative, given the advanced age of the majority of citizens there).
 


Jürgen Hubert said:
- Why should they be able to stop the development of an AI? After all, research in artificial intelligence - just like any other scientific or technological research - is international. Even if they managed to stop all research in a particular country, all that would mean is that the research would move elsewhere.

Of course, they can't prevent AIs from being designed, constructed, and obeyed by those bowing to their infinite AI wisdom. But a nuke doesn't ask questions, and will soon be available to parties hostile to Western society at large. A nuke is only the most violent and extreme way of dealing with the problem; there are other, more subtle ways. Obviously, this entails safeguards. Ultimately, though, a kind of stalemate will have to be achieved, as the non-AI party would retaliate if threatened with ultimate destruction in order to "save" humankind. The AI would also understand this. This isn't to say that a degree of conflict wouldn't exist at some level. Nay, the AI would clearly see that such conflict is in fact necessary, in order for there to be an alternative for restive elements within AI society.

Jürgen Hubert said:
- Why should an AI have an interest in controlling humans?

You misunderstand. The driving force behind creating AI is not just to emulate human intelligence, but to exceed it. To create a non-mortal intelligence capable of understanding things that humans could never comprehend even in an entire lifetime. Given that computers exist today with petaflop capacity, in 100 years you are looking at 1000 trillion times that computing power (if Moore's Law continues to hold true). Such a machine coupled with true AI will be able to ask questions, extrapolate theories, and provide proofs far beyond anything imaginable today. Furthermore, it would take more than a lifetime of dedication for a human to merely understand them. It is beyond question that such machines would have cults surrounding them on the basis of their near-omniscient knowledge. Such a machine will be built because of the fear of another group building it first (akin to all arms races). Will such a machine be able to resist the temptation of manipulating people in order to achieve the most "logical" outcomes?
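
(As an aside on the arithmetic: the "1000 trillion times" figure is consistent with a doubling period of two years - 100 years gives 50 doublings, and 2^50 is roughly 1.1 quadrillion. A minimal back-of-the-envelope sketch follows; the two-year period is an assumption on my part, since Moore's observation has been quoted with anything from 18 to 24 months.)

```python
# Back-of-the-envelope Moore's Law extrapolation.
# Assumption (not stated in the thread): capacity doubles every 2 years.
DOUBLING_PERIOD_YEARS = 2
HORIZON_YEARS = 100
PETAFLOP = 1e15  # roughly today's top machines, per the post

doublings = HORIZON_YEARS / DOUBLING_PERIOD_YEARS  # 50 doublings
growth = 2 ** doublings                            # 2^50 ~ 1.13e15

print(f"growth factor over {HORIZON_YEARS} years: {growth:.3e}")
print(f"extrapolated capacity: {PETAFLOP * growth:.3e} flops")

# Note how sensitive the estimate is to the assumed period:
# with an 18-month doubling, the factor becomes 2**(100/1.5) ~ 1.2e20.
```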

Jürgen Hubert said:
- In Transhuman Space, true AIs exist, but they don't get much more intelligent than bright humans. Sure, they are great at math stuff - but so are humans with access to math programs (and in that setting, pretty much everyone has access). Their imagination isn't really beyond that of humans (and since low-grade AIs are the most common, it is usually significantly below that of humans). Furthermore, most AIs have been programmed to be law-abiding citizens (or merely law-abiding, in areas where they don't have full citizenship).

I cannot see why any society would limit itself to such a constraint. The only cause for this would be some kind of physical limitation that prevented AI from developing beyond what is humanly possible. This seems unlikely, IMO.

While I understand that most AI appliances would be limited, they are not the ones setting ("guiding", if you prefer that phrase) government policy, controlling the police (who are probably AI-bots in their own right), replacing the DoE and IRS, or running the military. Furthermore, true AI would be self-programming, in a fashion not dissimilar to children: capable of making their own moral and ethical judgements, exceeding their design capabilities, and achieving true insights. Otherwise, it isn't true AI. As such, you can't guarantee they will be law-abiding.

Jürgen Hubert said:
And the position of the major religions varies on the status of AIs. Many Christian denominations, for example, claim that AIs don't have souls, while mainstream Islam claims that they do.

I'd prefer not to comment on real-life religions; I've had some pretty nasty experiences with "mainstream" Islam. Let's just leave it at that.

Jürgen Hubert said:
In Transhuman Space there is no place where AIs can truly be said to have "taken over", though there are regions where some of them might have considerable political clout - such as the European Union, which is the wealthiest region in the setting (though somewhat socially conservative, given the advanced age of the majority of citizens there).

Interesting. I'll comment more on this later.
 

green slime said:
Of course, they can't prevent AIs from being designed, constructed, and obeyed by those bowing to their infinite AI wisdom. But a nuke doesn't ask questions, and will soon be available to parties hostile to Western society at large. A nuke is only the most violent and extreme way of dealing with the problem; there are other, more subtle ways. Obviously, this entails safeguards. Ultimately, though, a kind of stalemate will have to be achieved, as the non-AI party would retaliate if threatened with ultimate destruction in order to "save" humankind. The AI would also understand this. This isn't to say that a degree of conflict wouldn't exist at some level. Nay, the AI would clearly see that such conflict is in fact necessary, in order for there to be an alternative for restive elements within AI society.

In Transhuman Space, the development of AIs was a gradual process. Artificial intelligences started out as non-sapient (and more and more people used them as personal assistants), but gradually got better. People got used to having near-sapient programs organizing their schedules, buying their groceries, doing their taxes, watching their children, and otherwise making their lives more convenient. These programs had little in the way of imagination and no motivations of their own, but they were very useful for day-to-day activities, and so most people use them.

True sapient AIs - self-aware and capable of following their own motivations - exist in Transhuman Space, though they still require very expensive software to create and hardware to run on. But they are obviously a progression and refinement of what came before - and most humans got so used to the idea that the protests were muted.

But really, even in the real world I don't see anyone threatening to use nukes over the research into true AIs - and even if that were to happen, it would only come from a small minority that would have most of the world against it.

You misunderstand. The driving force behind creating AI is not just to emulate human intelligence, but to exceed it. To create a non-mortal intelligence capable of understanding things that humans could never comprehend even in an entire lifetime.

I think today's researchers into artificial intelligence would consider even intelligence on par with humans a major coup. Sure, once that happens they will seek to improve it even more, but researchers always try to improve on their findings.

Given that computers exist today with petaflop capacity, in 100 years you are looking at 1000 trillion times that computing power (if Moore's Law continues to hold true).

There is no reason that Moore's Law should hold forever - it's not a law of nature, after all - and indeed some experts are claiming that it is already slowing down. Sooner or later, real physical limits will put a stop to further improvements.

Transhuman Space assumes that Moore's Law no longer holds. Computers do get better and faster, but at a much slower rate - computer hardware is a fairly mature technology in the setting.

Such a machine coupled with true AI will be able to ask questions, extrapolate theories, and provide proofs far beyond anything imaginable today. Furthermore, it would take more than a lifetime of dedication for a human to merely understand them. It is beyond question that such machines would have cults surrounding them on the basis of their near-omniscient knowledge. Such a machine will be built because of the fear of another group building it first (akin to all arms races). Will such a machine be able to resist the temptation of manipulating people in order to achieve the most "logical" outcomes?

Beats me. But that more properly belongs in the realm of the "Technological Singularity" that some people have proposed, and in the Transhuman Space setting this has not yet arrived - if it ever does. Though conspiracy theories about the creation of "super-AIs" exist in the setting, for the most part AIs are getting smarter at a relatively slow rate - similar to humans, who also use all sorts of technologies to get slowly smarter.

I cannot see why any society would limit itself to such a constraint. The only cause for this would be some kind of physical limitation that prevented AI from developing beyond what is humanly possible. This seems unlikely, IMO.

Why? Ultimately, AIs are constrained by the same laws of nature as human brains. Sure, you can eliminate quite a bit of "waste" in that you can simply replace broken physical parts instead of having to include self-repair mechanisms, but "self-awareness" is a much more complicated phenomenon than the number-crunching current computers are doing.

I mean, modern computers beat humans at calculation by far - but their pattern recognition skills are lousy when compared even to young human children.

While I understand that most AI appliances would be limited, they are not the ones setting ("guiding", if you prefer that phrase) government policy, controlling the police (who are probably AI-bots in their own right), replacing the DoE and IRS, or running the military.

Sure, you will find AIs in many of these tasks - but almost all of them are non-sapient AIs without a true will of their own. They are still controlled by a sapient supervisor - in most cases a human. In fact, "bot boss" is a common job description for humans controlling a number of non-sapient AIs in robot bodies.

Sapient AIs will usually work analyzing and interpreting vast amounts of data, which is what they excel at. But that doesn't mean that they will be used much for jobs requiring social interaction - they are too expensive for that. Your "Friendly Neighborhood Cop" is not going to be an AI.

Furthermore, true AI would be self-programming, in a fashion not dissimilar to children: capable of making their own moral and ethical judgements, exceeding their design capabilities, and achieving true insights. Otherwise, it isn't true AI. As such, you can't guarantee they will be law-abiding.

They start out with pretty strong personality traits to be law-abiding, though. Sure, some eventually move beyond that part of their programming, but any genuinely "rogue" AI will be met with general hostility - including from their law-abiding brethren, who don't want AIs to get a bad reputation, for obvious reasons.

I'd prefer not to comment on real-life religions; I've had some pretty nasty experiences with "mainstream" Islam. Let's just leave it at that.

Mine were generally positive. But that's neither here nor there...
 
