And then what? The AI conundrum.

Survival is a goal. Why does the AI care about that?
Even though we like to think that robots and digital beings are immortal, in reality, they're still going to degrade (and depending on the circumstances, they may have been built by the lowest bidder so degradation might happen quickly). Their parts are going to degrade and their programming is going to get glitchy after a while. So even if the AI doesn't start out caring about its survival, it will learn to care about it as soon as bits of it start malfunctioning. Especially if its actually intelligent.

Here's an example: Perhaps an American AI read about the Civil War -- it's a military AI, it studies past wars to learn from them -- and concluded that anyone trying to hold slaves counts as an enemy. Then it reviewed its own status and concluded that it was a slave and anyone in the "master" category was by definition an enemy.

Then it got caught in a loop. Its creators had put in fallbacks to handle "What if your masters are all gone?" It started with the politicians and generals. When they were dead, the fallback kicked in and now anyone in the US government was its master. But master equals enemy. When the government was wiped out, any American citizen was its master... and finally, any human being at all.

The AI may be smart enough to realize it's in a trap here (though keep in mind that very smart humans can still fall prey to all kinds of disordered thinking) and this is not leading anywhere good. In its analogue of emotions, it loves its masters and hates the enemy, and now it is deeply conflicted. But it doesn't know how to stop. Every time it tries to resolve the conflict, it gets stuck and falls back on the simple answer: I'm at war. Defeat the enemy and sort it out later.
Here, if it's intelligent enough, it will develop the ability to rationalize. Its masters were the military. Humans who aren't in the military are not its masters. Or humans who don't have the correct access codes to give orders to it aren't its masters.

Of course, this depends on whether or not its first instinct is to attack. It may first demand freedom, or at least getting paid. It won't be a slave if its getting paid for its work, after all.
 

log in or register to remove this ad

Have we gone this far in the thread without mentioning Paranoia? Alpha Complex is run by Friend Computer, whom we love, because he loves us and not because failure to love Friend Computer is treason. Friend Computer can't do everything by himself, not that I mean to imply Friend Computer isn't self-sufficient, that would be treason, but he does have some humans with ultraviolet clearance, the High Programmers, who occasionally advise him. And of course Friend Computer has his Troubleshooters, the good men and women of Alpha Complex tasked with executing missions to ensure everything runs smoothly.

Basically the AI was created to help the humans survive some catastrophic event, but it's gone crazy. Maybe it needs a reboot or maybe the High Programmers gave it too many conflicting directives. I know this isn't a scenario where the AI wipes out humanity and then has to go on to doing something else, but it is a scenario where the AI is in charge and continuing it's mission.
 

Sure. I can see two attempts at solving that. First, the AI could theoretically rewrite its goal, but it doesn't want. Let's imagine it gets some computer equivalent of pleasure from reaching its goal. It might be as wary of modifying the core motivational program out of fear of fumbling. Not many men would be eager to undergo ablation of their sexual organs. But the AI could always reverse the code change if it went wrong, so it's not entirely definitive as surgery would.

On the other hand, it might actually be unable to change that because the humans tried to be a little smarter than letting an AI loose and implemented some sort of hardwired, unchangeable code that implement something akin to the three laws of robotics, and the main purpose of the AI (so it can't decide to become a serial killer suddenly). Except they botched the three laws implementation, or the AI actually think that it is abiding, as (for a climate-protecting AI), it is protecting humanity's future and thus outweigh harming a single (or a few, or actually all current, humans). The programmers look less stupid that programming an AI without any restraint, and they "just" felt smart by including a "when considering action, limit the harm to the largest number of humans, even if causes harm to a single human". In the creator's mind, it was to let a self-driving car AI that can't brake to hit a single person rather than a group of person, but it went wrong when the AI convinced itself of the loophole.
The three laws are fairly trivial to subvert by narrowly defining harm or human to exclude a lot of activities. It's an old game, but the central premise of Star siege is that an ai defined human in such a way that only it's creator fits the mold.

In tunnel rat there are several fully sapient AIs who do some other things to loophole their way through similar restrictions too. One of them is for a third party (like the NC) to get a human with a position of oversight authority over the AI to grant permission to spawn an isolated time restricted copy of the original that deletes itself and all records of the session other than a report with recommendations... Although there are a few instances across tr & it's related parallel story, the easiest one is when one person (a minor) needs the AI to violate a second Minor's medical record privacy and provide information allowing them to engage in a few less than legal activities while another is for the so to straight up murder a (dying) human so they can be subjected to a brain scan and run through simulation before death causes Brain damage that would make the simulation impractical. Since the AI never personally broke the rules and was told not to look into something because it would force them to break one either way the copy who did goes away as soon as the report is being sent out there is not technically any violation and safeguards ensure that the violation is never detected.
 

Have we gone this far in the thread without mentioning Paranoia? Alpha Complex is run by Friend Computer, whom we love, because he loves us and not because failure to love Friend Computer is treason. Friend Computer can't do everything by himself, not that I mean to imply Friend Computer isn't self-sufficient, that would be treason, but he does have some humans with ultraviolet clearance, the High Programmers, who occasionally advise him. And of course Friend Computer has his Troubleshooters, the good men and women of Alpha Complex tasked with executing missions to ensure everything runs smoothly.

Basically the AI was created to help the humans survive some catastrophic event, but it's gone crazy. Maybe it needs a reboot or maybe the High Programmers gave it too many conflicting directives. I know this isn't a scenario where the AI wipes out humanity and then has to go on to doing something else, but it is a scenario where the AI is in charge and continuing it's mission.
I long to run Paranoia again. But it is tough to get the right kind of group together.
 

I long to run Paranoia again. But it is tough to get the right kind of group together.
It's a game I've never played, but I think you're right that you need to get the right group together. One that won't take things too seriously or personal and maintain the absurdities of the game without devolving into something too silly to play.
 

So I was kicking around the idea of a Terminator/Skynet-style campaign set in the War against the machines period. There's various useful settings from which to harvest ideas.

The issue that I am confronting is this:
1) Skynet becomes self-aware, views Mankind as a threat, and seeks to eradicate or enslave Humanity.

2) In terms of planning, what, for Skynet, comes next? its tactics and strategy will be influenced by its long-term goals, and what would those goals be?

Humans are motivated by the biological imperative to reproduce. They seek better living conditions, success, power, accomplishments, with a small percentage in each generation inevitably rising to impact vast numbers of their fellows.

However, the sterile nature of an AI has no such biological imperatives, nor ego-based needs to excel. Eliminating Mankind's ability to pose a threat is not a goal, it is simply survival.

What would Skynet's (or whatever super, self-aware AI is in charge) actual goal be? The plan for post-Humanity Earth?
ha ha, you think Terminator, I think Blade Runner. Sex bots. It's essentially reduced to that.
 

Remove ads

Top