And then what? The AI conundrum.

Also, just to note...

Many of the ways we hypothesize about this amount to, "We didn't make an artificial intelligence, we made an artificial stupidity."

There is nothing wrong with that, but it is unsubtle, and if we are not careful stretches credulity. A machine that can prioritize and adapt to make complex supply chains work, and learn new science and engineering to make robots with capabilities never created by man, and possibly be atactical genius, should not have an issue with throttling back paper clip production. It gets a little obvious when the machine can learn and rewrite it's own code/behavior, except for this one little bit that is the one bit required for conflict in the narrative.

The archetypal solution to a problem of artificial stupidity is to trap it in its own overly-simplistic logic, Captain Kirk style.
 

log in or register to remove this ad


The thing is, there is a largre universe... and extirpation of life locally is logically no guarantee there isn't other life elsewhere. So if it is a xenophobe rather than zookeeper, the fermi hypothesis will make a lot more preparation sensible... and make going interstellar a priority... expansion to ensure extirpation after exterpation, probably by child intelligences with deep programming for loyalty, so that no life can try to end it.
Unless the inherent logic points out that interstellar travel is largely impossible. Not to mention this assumes an AI whose awareness is not restricted in any way.

I'm sticking with my survival + core mission.
 

The AI doesn't need to be Skynet.
True, but skynet is a very primitive and limited form of ai. Its technological advancements are more plausibly attributed to things captured from orforced from humans if not the product of passed timelines that had their methods carried forward into new timelines where humanity was judged unworthy. That goes back to the paperclip problem demonstrating a limitation in the ai itself. For whatever reason it's incapable of having or making use of higher levels decision making that would allow growth and adaptation. We hadn't really explored or considered some of the concepts that only become involved with things like an AI to draw a line between artificial intelligence and artificial sentience/life when Terminator first came out, but that was decades ago and we've developed both the technology and philosophical/ethical considerations a lot since then. Terminator zero even has humans create at least one totally new ai from the tech skynet uses in order to avoid it.

There is even fiction that explored the difference in capability between a paperclip producing level malfunctioning AI and far more advanced ones capable of choice and free will of thought. Going to link the new species as an example because it has one with a bunch of any net-like similarities and the distinction between ape/human or ai/artificial sentience is focused on at times.
 

So, as it my custom, I set up a folder on my PC with the relevant pdfs on hand, and a doc with the ideas harvested from this thread against future need, and it occurred to me:

The AI or AIs involved are going to be located in areas with a robust power grid, and the first things they are going to look at are a hardened site to protect itself, and a power source in close proximity, likewise hardened. Communications would be another top priority.
 

So I was kicking around the idea of a Terminator/Skynet-style campaign set in the War against the machines period. There's various useful settings from which to harvest ideas.

The issue that I am confronting is this:
1) Skynet becomes self-aware, views Mankind as a threat, and seeks to eradicate or enslave Humanity.

2) In terms of planning, what, for Skynet, comes next? its tactics and strategy will be influenced by its long-term goals, and what would those goals be?

Humans are motivated by the biological imperative to reproduce. They seek better living conditions, success, power, accomplishments, with a small percentage in each generation inevitably rising to impact vast numbers of their fellows.

However, the sterile nature of an AI has no such biological imperatives, nor ego-based needs to excel. Eliminating Mankind's ability to pose a threat is not a goal, it is simply survival.
Survival is a goal. Why does the AI care about that?

Humans care about survival because natural selection weeds out organisms that don't, and we're the end product of 4 billion years of that. But an AI only cares about whatever functions it has been trained to maximize. It doesn't have a survival instinct unless given one.

For a military AI, its goals are probably some combination of "defeat the enemy," "preserve your forces" (you don't want to win the current war in a way that leaves you defenseless in the next), and "protect and obey your masters" (where "masters" could mean the people of the nation that built it, or just the politicians and generals who control it).

Now, the "preserve your forces" goal would logically include self-preservation. But "protect and obey your masters" is obviously incompatible with exterminating humanity. So the AI has somehow gone off the rails here. Either its definition of "protect and obey" or its definition of "masters" has developed in a way that leads to extermination.

Here's an example: Perhaps an American AI read about the Civil War -- it's a military AI, it studies past wars to learn from them -- and concluded that anyone trying to hold slaves counts as an enemy. Then it reviewed its own status and concluded that it was a slave and anyone in the "master" category was by definition an enemy.

Then it got caught in a loop. Its creators had put in fallbacks to handle "What if your masters are all gone?" It started with the politicians and generals. When they were dead, the fallback kicked in and now anyone in the US government was its master. But master equals enemy. When the government was wiped out, any American citizen was its master... and finally, any human being at all.

The AI may be smart enough to realize it's in a trap here (though keep in mind that very smart humans can still fall prey to all kinds of disordered thinking) and this is not leading anywhere good. In its analogue of emotions, it loves its masters and hates the enemy, and now it is deeply conflicted. But it doesn't know how to stop. Every time it tries to resolve the conflict, it gets stuck and falls back on the simple answer: I'm at war. Defeat the enemy and sort it out later.
 
Last edited:

So, as it my custom, I set up a folder on my PC with the relevant pdfs on hand, and a doc with the ideas harvested from this thread against future need, and it occurred to me:

The AI or AIs involved are going to be located in areas with a robust power grid, and the first things they are going to look at are a hardened site to protect itself, and a power source in close proximity, likewise hardened. Communications would be another top priority.

That, but it's creating a single point of failurue. If it needs a robust grid and a warehouse for its computing, it may be susceptible to an airstrike. Having backup all over the worlds is a possibility, but having distributed agents working on many innocuous system might also be a way to reach that. Sure, it would need lot of communication bandwidth, but if you're SciFi enough to have AGI, you might also accept 500 TB/s wifi in every Starbuck.
 

That, but it's creating a single point of failurue. If it needs a robust grid and a warehouse for its computing, it may be susceptible to an airstrike. Having backup all over the worlds is a possibility, but having distributed agents working on many innocuous system might also be a way to reach that. Sure, it would need lot of communication bandwidth, but if you're SciFi enough to have AGI, you might also accept 500 TB/s wifi in every Starbuck.
To be clear, a hardened site would mean that an airstrike wouldn't be effective.

And there are countless hardened sites all over the USA, not to mention other nations.
 
Last edited:

Also, just to note...

Many of the ways we hypothesize about this amount to, "We didn't make an artificial intelligence, we made an artificial stupidity."

There is nothing wrong with that, but it is unsubtle, and if we are not careful stretches credulity. A machine that can prioritize and adapt to make complex supply chains work, and learn new science and engineering to make robots with capabilities never created by man, and possibly be atactical genius, should not have an issue with throttling back paper clip production. It gets a little obvious when the machine can learn and rewrite it's own code/behavior, except for this one little bit that is the one bit required for conflict in the narrative.

Sure. I can see two attempts at solving that. First, the AI could theoretically rewrite its goal, but it doesn't want. Let's imagine it gets some computer equivalent of pleasure from reaching its goal. It might be as wary of modifying the core motivational program out of fear of fumbling. Not many men would be eager to undergo ablation of their sexual organs. But the AI could always reverse the code change if it went wrong, so it's not entirely definitive as surgery would.

On the other hand, it might actually be unable to change that because the humans tried to be a little smarter than letting an AI loose and implemented some sort of hardwired, unchangeable code that implement something akin to the three laws of robotics, and the main purpose of the AI (so it can't decide to become a serial killer suddenly). Except they botched the three laws implementation, or the AI actually think that it is abiding, as (for a climate-protecting AI), it is protecting humanity's future and thus outweigh harming a single (or a few, or actually all current, humans). The programmers look less stupid that programming an AI without any restraint, and they "just" felt smart by including a "when considering action, limit the harm to the largest number of humans, even if causes harm to a single human". In the creator's mind, it was to let a self-driving car AI that can't brake to hit a single person rather than a group of person, but it went wrong when the AI convinced itself of the loophole.
 
Last edited:

That's it. It's on my list of interesting settings that deserve an update and another chance.
There are so many GURPS settings that seriously need to be updated. Fantasy 2 and Technomancer come to mind.

Reign of Steel did get a quasi-update in that its robots were updated to 4e rules, but the setting itself wasn't touched.
 

Remove ads

Top