Transhuman Space - A Setting Defined By Its Freedoms

DarkKestral said:
Once those connections are worked out, it's mostly a matter of time before we approach the tipping point.

Only if we assume that computing power will continue increasing indefinitely. It's likely that at some point we will be able to simulate a human brain, but if it takes expensive enough equipment and high enough power requirements to provide a human-level intelligence, why not just use a human?
 


The AI could work safely in environments impossible for humans, prosfilaes, and would be easily capable of improving its own intelligence, as well as potentially working longer and more continuously, and would be able to control and monitor more variables than a human, and would likely be cheaper to train. That's part of why companies are already building neural nets: they are cheaper than paying humans to do repetitive, high-calculation, or high-risk jobs, are generally far better at meeting the job requirements anyway, and are far cheaper to improve. Need more reliability at a job? You can spend thousands of dollars training humans for uncertain speedups, spend tens of thousands more to hire additional people, or spend only $2-3k plus a minor extra electricity cost adding processors to the AI.
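To put that trade-off in rough numbers (every figure below is an illustrative round number in the same spirit as the ones above, not a real quote):

```python
# Back-of-envelope cost per extra unit of output for each option.
# All figures are illustrative round numbers, not real quotes.
options = {
    "retrain existing staff": {"cost": 5_000,  "extra_output": 0.5},  # uncertain speedup
    "hire more people":       {"cost": 30_000, "extra_output": 1.0},
    "add processors to AI":   {"cost": 2_500,  "extra_output": 1.0},  # plus minor power cost
}

for name, o in options.items():
    per_unit = o["cost"] / o["extra_output"]
    print(f"{name:24s} ~${per_unit:>9,.0f} per extra unit of output")
```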

To illustrate: the archetypal example of when to use an AI as opposed to a human is spaceflight. An AI in a high-gee rocket is going to be much less costly to train and equip for space operation than a human; the human explorer would have sentimental value, but the AI would train faster, be more efficient, be more difficult to destroy, and, given something of a probe and knowledge of its own architecture, could potentially repair itself or make copies once at the landing site, enabling faster and cheaper exploration. Another example is nuclear power. When dealing with nuclear power, an AI is more attentive to reactor conditions and will be more efficient at preventing alarms, so investment there means a longer reactor lifetime and safer operating conditions for the humans involved.

As far as worrying about Moore's Law not being valid over the next few years: given current transistor technology, we've got a lot of time before we hit the point at which the transistor reaches its physical minimum size, at least as far as manufacturing goes. So it's more a matter of figuring out how to manufacture ever-smaller transistors en masse. I believe the absolute minimum size was reached in a lab a few years ago, but it's several orders of magnitude smaller than what we use in mass manufacturing. Therefore, it could be assumed that we'd hit the practical tipping point well before we run out of our 'more transistors' space, as the tipping point is several orders of magnitude closer than the practical limit of our technology. A far more practical concern is heat, but even that is being worked on, in the form of smaller heat pipes and on-chip water cooling to keep cores from overheating.
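For concreteness, that "orders of magnitude" headroom can be turned into a rough count of remaining density doublings. A quick sketch, where both feature sizes are illustrative placeholders rather than figures from any real roadmap:

```python
import math

# Rough headroom estimate: how many more Moore's-Law-style density doublings
# fit between today's feature size and a physical lower bound.
# Both numbers below are illustrative assumptions, not roadmap data.
feature_now_nm = 45.0    # assumed current mass-production feature size
feature_min_nm = 1.0     # assumed rough atomic-scale floor

# Halving the linear feature size roughly quadruples transistor density,
# so each linear halving is ~2 density doublings.
linear_halvings = math.log2(feature_now_nm / feature_min_nm)
density_doublings = 2 * linear_halvings
print(f"~{linear_halvings:.1f} linear halvings, ~{density_doublings:.1f} density doublings left")
```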
 

DarkKestral said:
The AI could work safely in environments impossible for humans

In many cases, if an AI can control a drone, so can a human teleoperating from a safe location. But yes.

would be easily capable of improving its own intelligence,

I doubt it. There's no reason to think that a computer would be any better at improving its own intelligence than a human is.

as well as potentially working longer and more continuously,

Why? If you emulate a human, you're likely to get a human attention span and sleep cycle.

be able to control and monitor more variables than a human,

Why? A human-like intelligence may have all the same problems in this area that a human would.

a minor extra electricity cost adding processors to the AI.

Let's make clear the difference between neural networks/computers and full Turing-complete AI. There's no reason to assume that you can meaningfully increase an AI's power by simply adding more processors, any more than adding more brain tissue would have much effect on a human. At a certain point, adding more processors is going to run into serious scaling problems.
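To put a rough number on those scaling problems, Amdahl's Law is the standard way to show the diminishing returns of "just add more processors." A minimal sketch, where the parallel fraction is an illustrative assumption rather than a measurement of any real AI workload:

```python
# Amdahl's Law: speedup from n processors when only a fraction p of the
# work can actually be parallelized. Diminishing returns set in quickly.

def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup with n processors and parallel fraction p (0..1)."""
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative assumption: 90% of the workload parallelizes cleanly.
p = 0.90
for n in (1, 2, 4, 8, 64, 1024):
    print(f"{n:5d} processors -> {amdahl_speedup(p, n):5.2f}x speedup")
# Even with 1024 processors the speedup is capped near 1/(1-p) = 10x.
```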

To illustrate: the archetypal example of when to use an AI as opposed to a human is spaceflight.

And there are reasons I talked about stocking the shelves at a supermarket. Spaceflight is full of cool things that have no connection to real life. In TS, AIs are significant players in personal life; if they were just used in unmanned spacecraft and nuclear power plants, they wouldn't be all that interesting for a setting.

given something of a probe and knowledge of its own architecture, could potentially repair itself or make copies once at the landing site, enabling faster and cheaper exploration.

You know, humans do that real well. In fact, that's what a human is "designed" to do: keep itself alive long enough to make copies. A computer needs silicon processed in clean rooms to self-replicate; humans need food, water, air and a bed, and they'll go without the latter in a pinch.

Also, a self-replicating computer system is scary; once it makes a mistake and doesn't copy the "Mission Orders" correctly, you have a new species that's designed to take over outer space, and which probably won't look on humans real kindly. "Von Neumann machines" don't have a great history in science fiction.
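The failure mode being described is just copy errors compounding over generations. A toy estimate of that drift, with the payload size and per-bit error rate invented purely for illustration:

```python
# Toy estimate: probability the "Mission Orders" are still copied perfectly
# after G generations of self-replication. All numbers are illustrative.
ORDER_BITS = 1_000_000     # assumed size of the mission-orders payload
PER_BIT_ERROR = 1e-9       # assumed per-bit copy error rate (no error correction)

for generations in (1, 100, 10_000):
    p_intact = (1.0 - PER_BIT_ERROR) ** (ORDER_BITS * generations)
    print(f"{generations:6d} generations: {p_intact:.3f} chance the orders are still intact")
# Without error correction, a mis-copy eventually becomes near-certain.
```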

When dealing with nuclear power, an AI is more attentive to reactor conditions and will be more efficient at preventing alarms, so investment there means a longer reactor lifetime and safer operating conditions for the humans involved.

Really? I don't see any reason for the software that controls a nuclear power plant to be Turing-complete, and I would be surprised if any intelligent being could be attentive 24-7 waiting for an event that may never come.
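The point about not needing Turing-complete software can be made concrete: a dumb threshold loop is already "attentive" around the clock and never gets bored. A minimal sketch, with sensor names and limits invented for illustration:

```python
import random
import time

# A deliberately dumb monitoring loop: no intelligence is required to watch
# for an alarm condition 24-7. Sensor names and limits are illustrative.
LIMITS = {"core_temp_c": 350.0, "coolant_pressure_mpa": 15.5}

def read_sensor(name: str) -> float:
    """Stand-in for a real instrumentation read (random values for the demo)."""
    return random.uniform(0.9, 1.05) * LIMITS[name]

def check_once() -> list[str]:
    """Return an alarm message for every sensor outside its limit."""
    return [f"ALARM: {name} over limit {limit}"
            for name, limit in LIMITS.items()
            if read_sensor(name) > limit]

for _ in range(3):              # a real system would loop forever
    for alarm in check_once():
        print(alarm)            # or trip the safety system / page an operator
    time.sleep(0.1)
```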

Therefore, it could be assumed that we'd hit the practical tipping point well before we run out of our 'more transistors' space, as the tipping point is several orders of magnitude closer than the practical limit of our technology.

I doubt we know the real numbers right now. I apologize if I don't feel real trusting of your numbers, being vague as they are.
 


prosfilaes said:
I suspect that it will never be reasonable to have a digital computer simulate a human-like intelligence;

Why shouldn't it be? Just because something is digital, it doesn't follow that it can't model analog processes with sufficient accuracy. Unless you are assuming that there's something in the human mind that cannot be duplicated by a computer model, no matter what kind of computational resources you have available. But to my mind, that kind of assumption tends towards the religious rather than the scientific.
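One concrete illustration of a digital system modelling an analog process: numerically integrating the membrane equation of a leaky integrate-and-fire neuron, a standard textbook model. The parameter values here are illustrative, and the accuracy improves as the time step shrinks:

```python
# Digital approximation of an analog process: Euler integration of a
# leaky integrate-and-fire neuron, dV/dt = (-(V - V_rest) + R*I) / tau.
# Parameter values are illustrative; smaller dt gives higher accuracy.
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0   # mV
TAU, R, I = 10.0, 1.0, 20.0                        # ms, MOhm, nA (nominal)
dt = 0.1                                           # ms time step

v, spikes = V_REST, 0
for step in range(int(100.0 / dt)):                # simulate 100 ms
    v += dt * (-(v - V_REST) + R * I) / TAU
    if v >= V_THRESH:                              # threshold crossing = spike
        spikes += 1
        v = V_RESET
print(f"{spikes} spikes in 100 ms with dt={dt} ms")
```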
 

Jürgen Hubert said:
Why shouldn't it be?

My issue is not with whether it can be emulated. It's whether it can be emulated efficiently. Emulation is always expensive, especially when you're talking about systems that work in fundamentally different ways. Is the human brain so "inefficient" that it can be run on a fundamentally alien system that is smaller or cheaper than the brain itself? If it needs gigawatts of power to run, or can't run in real time, it's not nearly as interesting.
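A rough sense of why efficiency is the sticking point comes from the classic back-of-envelope brain-emulation estimate. The neuron and synapse counts and update rates below are commonly cited ballpark assumptions, not settled numbers:

```python
# Back-of-envelope cost of brute-force, real-time brain emulation.
# All figures are ballpark assumptions commonly used in such estimates.
NEURONS = 1e11            # ~100 billion neurons
SYNAPSES_PER = 1e4        # ~10,000 synapses per neuron
UPDATES_PER_SEC = 100     # assume each synapse is updated ~100 times/second
OPS_PER_UPDATE = 10       # assume ~10 arithmetic ops per synaptic update

ops_per_second = NEURONS * SYNAPSES_PER * UPDATES_PER_SEC * OPS_PER_UPDATE
print(f"~{ops_per_second:.1e} ops/s for real-time emulation")   # ~1e18

# The biological brain does its job on roughly 20 watts; the open question
# in this thread is whether digital hardware gets anywhere near that.
```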
 
