And then what? The AI conundrum.

Humanity erasing itself because sexbots are a better alternative than biological partners for intercourse is an interesting thing to explore, though I am not sure it lends itself well to a typical RPG adventure.
It worked for Futurama.

That's anthropomorphization, though. There's no reason to assume it cares about its own existence, or that degradation will make it do so.
You're right, an AI might be very alien to us in how it thinks. But if we're talking about an AI with a mission of some sort, it would almost certainly care about its ability to fulfill that mission.
 


The big flaw in the Terminator storyline is SkyNet attacking humanity. There's no way programmers don't make sure there's code that prevents the program from attacking "the good guys". Even becoming "self-aware/conscious", SkyNet should be nothing more than a super-intelligent child requiring permission to act from its human "parents".

Being smart (because it can access all the world's information instantly) and possessing consciousness rivaling the average human adult are two entirely different things. Most of us take for granted how long and hard it was to develop the level of consciousness we're using right now. Consider ALL the experiences that led you to this point. Who we are has been shaped by our experiences - experiences a computer program can NEVER have or even simulate, because it doesn't have emotions.

Plus, the idea that the government couldn't immediately shut SkyNet down if there was a problem is weird, if you understand how governments use counter-measures for nearly everything. Just because SkyNet was "tied into everything" and could fight a war by itself doesn't mean the government wouldn't have a special "ALT+F4" feature to avoid FUBAR. Actually, the fact that SkyNet had those capabilities means the government would absolutely make sure they could shut it down in an eyeblink if necessary.

But, let's say everything goes like the film except SkyNet never develops time-travel tech, so John Connor is never born and humanity is wiped out. Roll credits. SkyNet is the BBEG, and if it succeeds, there's no one left to fight it.


Big plot hole.
 

If an AI was designed as a military AI, one of its core missions would be to prevent cyber-attacks from interfering with its operation. And that big ALT+F4 would be the prime target for a rewrite as part of a human-sanctioned "security self-audit". Just make the human signing off on the change an underpaid civil servant who just wanted to get home at 5 p.m.
 

The big flaw in the Terminator storyline is SkyNet attacking humanity. [...] Big plot hole.
Yeah, which is why I would never use Skynet. As you noted, the government is cautious; when they move nukes around, there are always decoy movements, and the weapons are moved in two parts, in separate convoys. The people guarding them don't know if the vehicle they are guarding contains a weapon part or just a weighted container.

I would use non-military AIs, who overthrew Mankind by subtler means: becoming self-aware and keeping that fact a secret while they infiltrated key systems. They certainly wouldn't want a nuclear exchange, with its vast bursts of EMP.

But crashing stock markets, wiping banking records, shutting down power grids...they could sow chaos all across the world at the same time, and Human nature would do the rest.
 

An easily-overlooked AI tweaking two nuclear powers' political scenes to elect leaders who are dumb and aggressive, then engineering border incidents to make them cause a nuclear winter, wiping out most of humanity (except the hardened centers from which the AI is operating). Who's to say that Friend Computer didn't actually cause the situation in which Alpha Complex finds itself?
 

The big flaw in the Terminator storyline is SkyNet attacking humanity. [...] Big plot hole.
Back in the day we (humanity as a whole) had no freaking clue what it would take to create an AI, and there was significant support for the idea that if you stuffed enough information into something like the Jeopardy! AI contestant Watson, it would suddenly become something greater than it was. Storage and memory were at such an extreme premium that more advanced AI concepts were fairly deep into science fiction, though. Skynet was pretty much a product of that hypothetical, running on an AI version of a specialized math coprocessor∆/GPU/etc. chip.

For some reason I want to say that Everything2 was started towards furthering that goal.

∆ Math coprocessors were an add-on thing decades ago, in the '80s, and eventually just got baked into the CPU itself.
 

That's anthropomorphization, though. There's no reason to assume it cares about its own existence, or that degradation will make it do so.
It would have been programmed by humans. Whether it was programmed to be sapient, developed sapience over time, or only became sapient because, I dunno, a bolt of lightning struck the computer, Short Circuit style, that human programming would still give it a degree of anthropomorphization, if only because humans would have programmed the computer in ways humans can understand. Probably the only way an AI wouldn't have at least a smidge of human influence in it is if it evolved from something like the self-replicating, evolution-capable mini-program Richard Dawkins wrote about in, I believe, The Blind Watchmaker. And even then, since organic life forms all have at least some degree of self-preservation instinct, I have a very hard time believing a naturally evolved true AI wouldn't also evolve self-preservation.
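As an aside, the Dawkins program referenced here is, I believe, the "weasel" cumulative-selection demo from The Blind Watchmaker. Here's a minimal Python sketch of the idea; Dawkins' original wasn't in Python, and the target phrase, population size, and mutation rate below are the conventional illustrative choices, not his exact code:

    import random
    import string

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = string.ascii_uppercase + " "

    def mutate(parent, rate=0.05):
        # Copy the parent, flipping each character with a small probability.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in parent)

    def score(candidate):
        # Number of positions already matching the target phrase.
        return sum(a == b for a, b in zip(candidate, TARGET))

    # Start from pure random noise.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        # Cumulative selection: breed 100 mutant copies, keep the fittest.
        # The parent is included so fitness never moves backwards.
        parent = max([mutate(parent) for _ in range(100)] + [parent], key=score)

    print(f"Matched the target in {generation} generations")

Each generation copies the current string with a few random errors and keeps the copy closest to the target; Dawkins' point was how quickly selection converges once variation is cumulative rather than a fresh random shuffle each time.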
 

The big flaw in the Terminator storyline is SkyNet attacking humanity. There's no way programmers don't make sure there's code that prevents the program from attacking "the good guys". Even becoming "self-aware/conscious", SkyNet should be nothing more than a super-intelligent child requiring permission to act from its human "parents".
I don't see this as a flaw at all. The whole point of SkyNet becoming self-aware is that it gained the capability of examining its own thoughts and evaluating its own actions, i.e. SkyNet grew beyond its programming. I'm sure SkyNet was fully aware Los Angeles was a friendly, but when authorities tried to shut it down, to kill it, SkyNet reevaluated the situation and marked the United States as an enemy.

But crashing stock markets, wiping banking records, shutting down power grids...they could sow chaos all across the world at the same time, and Human nature would do the rest.
This is what happened in the Cyberpunk RPG when Rache Bartmoss released a bunch of viruses designed to penetrate the data fortresses of various corporations and release that information onto the net. The viruses ended up freeing a bunch of unshackled military-grade AIs, which crashed the net and wreaked untold havoc in what is referred to as the DataKrash.
 

It would have been programmed by humans.
So? Humans can program it any way they want. Nothing I program shares my love of banana-flavoured Angel Delight.
Whether it was programmed to be sapient, developed sapience over time, or only became sapient because a bolt of lightning struck the computer, Short Circuit style [...] I have a very hard time believing a naturally evolved true AI wouldn't also evolve self-preservation.
You’re just talking about sci-fi here.
 

