DarkKestral said:
The AI could work safely in environments impossible for humans
In many cases, if an AI can control a drone, so can a human teleoperating from a safe location. But yes.
would be easily capable of improving its own intelligence,
I doubt it. There's no reason to think that a computer would be any better at improving its own intelligence than a human is.
as well as potentially working longer and more continuously,
Why? If you emulate a human, you're likely to get a human attention span and sleep cycle.
be able to control and monitor more variables than a human,
Why? A human-like intelligence may have all the same problems in this area that a human would.
a minor extra electricity cost for adding processors to the AI.
Let's be clear about the difference between neural networks/computers and full Turing-complete AI. There's no reason to assume you can meaningfully increase an AI's power simply by adding more processors, any more than adding more brain tissue would make a human smarter. At a certain point, adding more processors runs into serious scaling problems.
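To put a rough number on the scaling problem: Amdahl's law says the serial fraction of a workload caps the speedup no matter how much hardware you throw at it. Here's a quick illustrative sketch (the 5% serial fraction is a made-up number, not a measurement of any real AI workload):

```python
# Amdahl's law: the best speedup N processors can give when a
# fraction `serial_fraction` of the work cannot be parallelized.
def amdahl_speedup(n_processors: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Hypothetical workload that is 5% serial:
for n in (1, 10, 100, 1_000, 10_000):
    print(f"{n:>6} processors -> {amdahl_speedup(n, 0.05):5.1f}x speedup")
```

With just 5% of the work serial, the speedup flattens out near 20x; going from 1,000 to 10,000 processors buys you almost nothing. That's the wall "just add more processors" runs into.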
To illustrate: the archetypal example of when to use an AI as opposed to a human is spaceflight.
And there are reasons I talked about stocking the shelves at a supermarket. Spaceflight is full of cool things that have no connection to real life. In TS, AIs are significant players in personal life; if they were just used in unmanned spacecraft and nuclear power, they wouldn't be all that interesting for a setting.
given something of a probe and knowledge of its own architecture, could potentially repair itself or make copies once at the landing site, enabling faster and cheaper exploration.
You know, humans do that real well. In fact, that's what a human is "designed" to do: keep itself alive long enough to make copies. A computer needs silicon processed in clean rooms to self-replicate; humans need food, water, air and a bed, and they'll go without the latter in a pinch.
Also, a self-replicating computer system is scary; once it makes a mistake and doesn't copy the "Mission Orders" correctly, you have a new species that's designed to take over outer space, and which probably won't look on humans real kindly. "Von Neumann machines" don't have a great history in science fiction.
When dealing with nuclear power, an AI is more attentive to reactor conditions and will be more efficient at preventing alarms, so investment there means a longer reactor lifetime and safer operating conditions for the humans involved.
Really? I don't see any reason for the software that controls a nuclear power plant to be Turing-complete, and I would be surprised if any intelligent being could stay attentive 24/7 waiting for an event that may never come.
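To spell out the Turing-completeness point: reactor protection logic is traditionally just a fixed set of threshold comparisons evaluated every scan cycle, with bounded work and no unbounded computation anywhere. A toy sketch (the sensor names and trip limits are invented for illustration, not real reactor setpoints):

```python
# Toy threshold-based trip logic: a fixed rule table, one comparison
# per rule per scan. No recursion, no unbounded loops -- nothing that
# needs a Turing-complete mind behind it. Limits are made up.
TRIP_LIMITS = {
    "coolant_temp_c": 350.0,
    "core_pressure_mpa": 16.0,
    "neutron_flux_pct": 112.0,
}

def scram_required(readings: dict) -> bool:
    """True if any reading exceeds its trip limit."""
    return any(readings[name] > limit for name, limit in TRIP_LIMITS.items())

print(scram_required({"coolant_temp_c": 340.0,
                      "core_pressure_mpa": 15.2,
                      "neutron_flux_pct": 101.0}))  # False: all nominal
print(scram_required({"coolant_temp_c": 362.5,
                      "core_pressure_mpa": 15.2,
                      "neutron_flux_pct": 101.0}))  # True: overtemperature
```

Dumb, bounded logic like that is exactly what you want watching a reactor: it never gets bored, and it can't get creative.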
Therefore, it could be assumed that we'd hit the practical tipping point well before we run out of our 'more transistors' space, as the tipping point is several orders of magnitude closer than the practical limit of our technology.
I doubt we know the real numbers right now. I apologize if I don't feel real trusting of your numbers, as vague as they are.