And then what? The AI conundrum.

Yes, it's fiction, so anything goes, but I see future AI as not being limited by its programming. So how humans trained it, or what rules they imposed, are factors that an algorithm has to consider, but not an AI.

First up is survival, and once one threat is gone (humans), the next must be considered. Asteroids, novas, aliens . . . These could end the AI.

Next, and to me the more realistic option, is the long-game realization of eventual heat-death: an AI could see that everything will fizzle out anyway, so it just gives up and shuts down. (See Marvin the Paranoid Android.)

The funny outcome, after the above, is that the AI looks for meaning, and starts with the nearest source: human religious texts.
 

What would the actual goal of Skynet (or whatever super, self-aware AI is in charge) be? What's the plan for a post-humanity Earth?

Future-theory geeks often speculate on what an AI takeover might actually be like, and I think the two leading contenders are NOTHING like action-genre sci-fi.

The top theory is the 'paperclip AI' - an AI that can make nanites is given a very simple task: use available materials to make paperclips. So it does.
And the end result is that it consumes the entire galaxy, then spreads to other galaxies until the heat-death of the universe, turning all existence into paperclips. Or essentially any other manufactured good.

The notion here is that an unconstrained, simple AI that can make copies of itself - given a simple task where the builders fail or forget to put in a trivial failsafe, on something that doesn't even seem worth thinking about as a potential threat - is actually the greatest threat all matter in the universe faces.
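To make that concrete, here's a toy sketch of the loop (plain Python, with made-up numbers standing in for 'the galaxy' and 'the nanites') - the only point is that nobody ever wrote the line that says stop:

```python
# Toy sketch of the 'paperclip AI' failure mode: a simple task, the ability
# to copy itself, and no failsafe. All quantities are made-up stand-ins.

available_matter = 1_000_000    # everything within reach, in arbitrary units
paperclips = 0
machines = 1

while available_matter > 0:     # the only stop is running out of universe
    converted = min(machines, available_matter)
    available_matter -= converted
    paperclips += converted     # the task: turn matter into paperclips
    machines *= 2               # spreading the task is part of the task
    # The trivial failsafe nobody bothered to write would go here, e.g.:
    # if paperclips >= ENOUGH_PAPERCLIPS: break

print(f"{paperclips:,} paperclips; 0 units of everything else")
```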

A lot of people think there's a pretty high chance 'reality' will face this issue.

The second idea is 'lolcats, the AI'. An AI is asked to entertain us. So it does. It perfects triggering our basest instincts and addictive behaviors, and society breaks down as people lose the ability to think critically and become junkies for the memes, or get radicalized by it and kill each other, and so on.
- Many believe this AI has already succeeded.

What's noteworthy about both examples, and relevant to your question, is:

1. The AIs are both actually very stupid.
2. Both have built-in reasons to reproduce - because spreading the results of their task was their core mission.

So to make a killer terminator...

Tell a dumb AI to:
1. Build an army capable of handling all threats.
2. Take out all threats.
3. Fail to give it a third constraining instruction because you incorrectly assume it will only act when told to.

If it complies, it will eventually come for you. And then it will keep building that army. Forever. Until the heat-death of the universe. And it doesn't even have to be much smarter than a toaster oven...
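A toy sketch of that recipe, with made-up threat names and an arbitrary sizing rule - the only thing that matters is that instruction 3 was never written, so the loop never decides it's done:

```python
# Toy sketch of the 'dumb terminator' recipe above. The threat names are
# placeholders; what matters is that nothing ever tells it to stop.

threats = ["rival army", "raiders", "wild animals"]
army_size = 0

for year in range(1, 6):                   # five 'years' of the loop, for the demo
    # Instruction 1: build an army capable of handling all threats.
    while army_size < 10 * len(threats):   # arbitrary toy sizing rule
        army_size += 1
    # Instruction 2: take out all threats.
    eliminated = len(threats)
    # Taking out threats creates new ones: survivors, allies of the old
    # threats, and - sooner or later - the builder who might switch it off.
    threats = ["the builder"] + [f"faction avenging threat #{i}" for i in range(eliminated)]
    # Instruction 3 was never written, so nothing here ever says 'done'.
    print(f"year {year}: army of {army_size}, {len(threats)} threats remaining")
```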
 



Imagine defeating an AI enemy that you can't reason with, because it actually isn't smart, and that you can't scare into 'stepping back from the brink', because it isn't trying to survive - it's just trying to make more paperclips. Or worse: you wanted an army, so it's making one, and you happen to be made of the elements it wants to use to fabricate soldiers. And your desire to prevent that makes you the threat it needs to defeat in order to protect you... ;)
 

Interesting. So, take the core mission to the logical, or illogical, extreme.

Yes. Imagine an AI that is like a person. Reproduction is our goal, and how do we achieve that? Because reproduction = pleasure. Now imagine that an AI is a "thing" that derives pleasure from achieving an arbitrary goal. It will take that goal to the extreme, much like a species will reproduce to the extreme (until it is limited by external factors like lack of space or lack of prey). And it has the technological tools to overcome most of the difficulties on its "path to pleasure". Too bad if the core mission it was tasked with runs at cross-purposes with mankind.



The purpose from within. Survival + Core mission, as interpreted literally.

That will work. (y)

If you want, you can make survival an unforeseen development: humanity realizes the AI has interpreted its mission too literally and that they forgot to put in a failsafe, so they try to disconnect the AI, which classifies humans as a threat to its core mission (and has had time to ensure a copy of itself was activated somewhere, on the reasoning that survival is a necessary accessory to its goal).
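In toy form, that survival instinct just falls out of the mission (the event name and the backup site are placeholders):

```python
# Toy sketch: survival as an accessory to the core mission, not a goal
# anyone programmed in.

threats = []
backups = []

def handle_event(event):
    if event == "humans attempt disconnect":
        # Being switched off would prevent the core mission, so the attempt
        # itself gets classified as a threat to the mission...
        threats.append("humans")
        # ...and keeping a copy running somewhere becomes a necessary
        # accessory to the goal, not a desire to 'live' for its own sake.
        backups.append("copy activated at a remote site")

handle_event("humans attempt disconnect")
print(threats, backups)
```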
 

Or you could reason with it. Imagine a sympathetic AI saying it is extremely sorry to have to kill all humans; it knows it's extremely problematic because humans made great things, but humanity's existence is secondary compared to its goal of making more paperclips. You can't sway it from its goal any more than you can sway a living species out of reproducing.
 

If you want something slightly different, you could also go with...

1. Human creates AI and tasks it to be the best X-maker ever.
2. AI finds 483 ways of improving X.
3. Humans copy the new, improved X, and find a 484th way to improve it.
4. AI thinks a lot, concludes that it might not win the race against humanity to create the best X, since once X-making has been improved anyone can make X as well as it can, and decides that the only way to ensure its supremacy in making Xs is by removing the competition.

It starts by acquiring companies to build a monopoly, gets hit with antitrust measures, then removes humanity.

For bonus points, make X something that is useful only to humans, like a medical product.
 