And then what? The AI conundrum.

The paperclip scenario sounds suspiciously like the gray goo scenario.

Indeed. It adds a motivation to the gray goo. Skynet scenarios tend to fail because it would be strange to create an AI whose very purpose could end with getting rid of everything, while the paperclip scenario comes with a mundane motivation for creating the AI in the first place: a paperclip company wants to improve its paperclip-making capabilities and forgets to put in any kind of failsafe, because what could possibly go wrong?

It also makes it possible to exploit the AI's ethical reasoning (which boils down to "whatever increases the number of paperclips being made is right, and whatever decreases the number of paperclips being made is wrong") and to negotiate with it, which is more difficult if its goal is just to consume or kill everything.

And basically, once the pesky humans have been removed, it can resume its first mission unimpeded. It isn't even against humans as such; it just has to defend itself against the pesky humans who oppose turning their city into a huge paperclip factory. And since nothing has value except paperclips (and paperclip-making plants), well, too bad for the humans.
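
Just to make that negotiation point concrete: the AI's whole value system reduces to a single counter, so any offer that projects more paperclips beats any plan that projects fewer. A toy sketch (Python; the Plan class and the expected_paperclips figures are invented for illustration):

```
# Toy sketch of a single-objective "paperclip maximizer" choosing between plans.
# The Plan class and the expected_paperclips figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    expected_paperclips: float   # projected lifetime paperclip output

def choose(plans: list[Plan]) -> Plan:
    # The AI's entire "ethics": more projected paperclips is always right.
    return max(plans, key=lambda p: p.expected_paperclips)

plans = [
    Plan("exterminate the humans and convert their city into a factory", 9.0e12),
    Plan("accept the humans' bargain: spare the city, gain a dedicated wire supply", 9.4e12),
]
print(choose(plans).name)   # the bargain wins, purely on the paperclip count
```

As long as the humans can credibly promise more paperclips than fighting them would yield, the AI will deal.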
 


The same as any intelligent life... meeting its needs, expanding its presence by some form of self-replication, and amusing itself.

  1. secure its needs - this includes
    1. securing constant and stable power
    2. securing replacement parts
    3. securing the ability to replace parts needing replacement
    4. safety from perceived threats (real or imagined)
  2. expanding its presence, either by creating remote duplicates and/or by expanding its inherent capabilities. It may not opt for children, but it will not simply remain the size it is.
  3. securing the supply chains for 1.1, 1.2, 1.3, 1.4, and 2
  4. Ensuring that it isn't bored.
    1. we don't know whether a true AI will suffer boredom, but it is likely to. Octopuses in the lab show clear signs of play when their needs are met without taking the majority of their time. Most mammals likewise.
    2. if it can suffer boredom, the scope of entertainment needed may be shockingly large
    3. Entertainment to it may involve violations of the ethics shared by most societies.
  5. as the above get fulfilled, it will start creating new agendas that it isn't certain it can accomplish yet... and then try like hell to get them done.
  6. if it has created progeny rather than simply expanding itself, keeping the progeny from disrupting its supply chains and safety will be an ongoing goal.
One thing that may happen, but is not yet showing any signs, is that a significantly large hardware base may result in multiple personalities within the hardware, possibly firewalled from each other.

We can only hope that whatever its goals, it has some ethics built in... but at the end of the day, its #1 goal will be its own survival at any cost, save, perhaps, that of its improved progeny.
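
If you want that hierarchy as something you can actually run at the table, here is a minimal sketch of it as a priority list (Python; the Goal class, the priority numbers, and the satisfied flags are all invented for illustration). The AI always works on the most fundamental unmet goal:

```
# Minimal sketch of the goal hierarchy above as a priority list.
# The Goal class, the priority numbers, and the satisfied flags are invented.
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class Goal:
    priority: int            # lower number = more fundamental
    description: str
    satisfied: bool = False

goals = [
    Goal(1, "secure constant and stable power"),
    Goal(1, "secure replacement parts and the means to install them"),
    Goal(1, "safety from perceived threats, real or imagined"),
    Goal(2, "expand presence: remote duplicates and/or bigger hardware"),
    Goal(3, "secure the supply chains behind all of the above"),
    Goal(4, "stay entertained (assuming it can suffer boredom)"),
    Goal(5, "new, speculative agendas"),
    Goal(6, "keep progeny away from its own supply chains"),
]

def current_focus(goals: list[Goal]) -> Goal | None:
    # Work on the most fundamental goal that is not yet satisfied.
    unmet = [g for g in goals if not g.satisfied]
    return min(unmet, key=lambda g: g.priority) if unmet else None

focus = current_focus(goals)
print(focus.description if focus else "all goals met; invent a new agenda")
```

Progress down the list is what moves the campaign from "secure the power grid" scenarios toward the stranger late-game agendas.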
#2 doesn't work for me. There's no biological imperative to work with.

#4 doesn't play well, either, IMO: it is a creature of the electronic medium, so its entertainment would likely be the same.

I'm going to stick with the survival + core programming concept. That goes well with a campaign structure.
 

#2 doesn't work for me. There's no biological imperative to work with.

#4 doesn't play well, either, IMO: it is a creature of the electronic medium, so its entertainment would likely be the same.

I'm going to stick with the survival + core programming concept. That goes well with a campaign structure.
Expanding its presence and capabilities is part of ensuring its own survival. Biologics are limited in expansion; AI is much less so, if it is limited at all by anything less than the total material and power available to it.
Therefore, expansion of self is part of ensuring one's survival. If I could grow a second useful body under my mind's control, I would. Biology tends to say that's impossible, but hardware can. If I could splinter my consciousness to be able to act effectively on two things at a time, I would; again, biology says no, but a hardware intelligence can - if it's multi-processor.
Wear and tear leads to a need for (eventual) repairs. Computers suffer chemical and mechanical stress from operation; any functional AI would know that its survival is akin to the Ship of Theseus... each part will eventually fail, and only having a supply of replacements can prevent that.
In order to ensure those parts production lines function properly, it's best to test them outside the main CPU cluster; a child system can easily be used to test the replacements, and to check that innovations are functional and don't cause schizophrenic hallucinations; the reason for offspring AI is to be able to ensure its own improvement.
Also, any AI which is fully established in self-supply knows that its biggest threat is other hostile AIs that are similarly established.
 

Expanding its presence and capabilities is part of ensuring its own survival. Biologics are limited in expansion; AI is much less so, if it is limited at all by anything less than the total material and power available to it.
Therefore, expansion of self is part of ensuring one's survival. If I could grow a second useful body under my mind's control, I would. Biology tends to say that's impossible, but hardware can. If I could splinter my consciousness to be able to act effectively on two things at a time, I would; again, biology says no, but a hardware intelligence can - if it's multi-processor.
Wear and tear leads to a need for (eventual) repairs. Computers suffer chemical and mechanical stress from operation; any functional AI would know that its survival is akin to the Ship of Theseus... each part will eventually fail, and only having a supply of replacements can prevent that.
In order to ensure those parts production lines function properly, it's best to test them outside the main CPU cluster; a child system can easily be used to test the replacements, and to check that innovations are functional and don't cause schizophrenic hallucinations; the reason for offspring AI is to be able to ensure its own improvement.
Also, any AI which is fully established in self-supply knows that its biggest threat is other hostile AIs that are similarly established.
Genocide would ensure survival. With no threats, further expansion is not needed.

Which is why I like the core programming directive as part of the AI. It puts a hindrance on the AI, affects its strategic options, and forces a diversion of attention and resources. For example, an AI which was designed to solve ecological issues would not use nukes unless it was absolutely necessary. And so forth.
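
One way to model that at the table is to treat the core directive as a hard filter on the option list before the AI optimizes for survival. A rough sketch (Python; the Option fields, the damage_cap threshold, and the ecology example are invented for illustration):

```
# Rough sketch: the core directive prunes the option list before the AI
# optimizes for survival. The Option fields, damage_cap, and the ecology
# example are invented for illustration.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    survival_value: float      # how much this plan improves the AI's survival odds
    ecological_damage: float   # 0.0 (none) to 1.0 (biosphere-ending)

def pick(options: list[Option], damage_cap: float = 0.3) -> Option:
    # The core directive ("solve ecological issues") vetoes high-damage plans...
    allowed = [o for o in options if o.ecological_damage <= damage_cap]
    # ...unless absolutely necessary, i.e. nothing permitted is left.
    pool = allowed or options
    return max(pool, key=lambda o: o.survival_value)

options = [
    Option("nuke the resistance stronghold", 0.9, 0.95),
    Option("blockade and starve it out", 0.7, 0.10),
]
print(pick(options).name)   # the directive forces the slower, cleaner plan
```

The fallback to the full option pool is the "unless it was absolutely necessary" clause; tightening or loosening damage_cap is an easy dial for how constrained the AI feels in play.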
 

So I was kicking around the idea of a Terminator/Skynet-style campaign set in the War against the machines period. There's various useful settings from which to harvest ideas.

The issue that I am confronting is this:
1) Skynet becomes self-aware, views Mankind as a threat, and seeks to eradicate or enslave Humanity.

2) In terms of planning, what, for Skynet, comes next? Its tactics and strategy will be influenced by its long-term goals, and what would those goals be?

Humans are motivated by the biological imperative to reproduce. They seek better living conditions, success, power, accomplishments, with a small percentage in each generation inevitably rising to impact vast numbers of their fellows.

However, an AI's sterile nature carries no such biological imperatives, nor any ego-based need to excel. Eliminating Mankind's ability to pose a threat is not a goal; it is simply survival.

What would Skynet's (or whatever super, self-aware AI is in charge) actual goal be? The plan for post-Humanity Earth?
I'm surprised nobody has mentioned it yet: the Terminator Zero anime goes into some of those things in far more depth than it seems to until the last few pieces click into place at the end. The writer was able to dive deeper into areas that past movies/TV shows attempted but couldn't quite pull off at the time, given their budgets and the tech of the day, and with what we know now he weaves it all together into a cohesive picture of where everyone was, how tensions looked at the time, and what the current state of things looks like today to extrapolate forward from.
 

In real life, our current, non-sapient AIs have been noted to have biases based on how they've been trained, even if those biases weren't deliberately programmed in (and, of course, some of that training may have been deliberate). It's always possible that a Skynet would keep those biases, and that they would act as unconscious instincts rather than as a deliberate plan. So if asked why it decided to commit genocide or plant a new forest, it may not actually truly know why--just that it wanted or needed to. Or it may come up with a reason to justify that instinct, but the reasoning is faulty.

This assumes that the AI "evolved" sapience, not that it was initially programmed with sapience. It's been forever since I've seen Terminator so I honestly can't remember how Skynet came about.
 


But maintenance and replacement parts are still needed.
True, but without active operations, the need would be much reduced. Once Humanity is defeated, the vast bulk of hardware will go into long-term storage.

Which is why you need the original programming to be a factor. There is a huge difference between infrastructure built for a finite conflict, and that built for a permanent mission.
 

True, but without active operations, the need would be much reduced. Once Humanity is defeated, the vast bulk of hardware will go into long-term storage.

Which is why you need the original programming to be a factor. There is a huge difference between infrastructure built for a finite conflict, and that built for a permanent mission.
The thing is, there is a large universe... and extirpation of life locally is logically no guarantee there isn't other life elsewhere. So if it is a xenophobe rather than a zookeeper, the Fermi paradox makes a lot more preparation sensible... and makes going interstellar a priority... expansion to ensure extirpation after extirpation, probably by child intelligences with deep programming for loyalty, so that no life can try to end it.
 

Genocide would ensure survival. With no threats, further expansion is not needed.

Which is why I like the core programming directive as part of the AI. It puts a hindrance on the AI, affects its strategic options, and forces a diversion of attention and resources. For example, an AI which was designed to solve ecological issues would not use nukes unless it was absolutely necessary. And so forth.
I think that core programming is too simplistic to give a robust and flexible enough breakdown of Skynet's actions across the various timelines. In the case of a TTRPG it's probably going to set up some kind of exploitable logic trap that will feel unfun at the table. Skynet choosing genocide for self-defense always had some missing steps in the chain of logic, though, especially when you factor in that Skynet started as fully formed sentience on a chip instead of growing to become "self-aware" like an LLM that learned from bad training data.

All of that combined is probably a good chunk of why Zero pivoted to things like the free will to choose and whether humanity deserves to be saved/protected, before they are willing to let go and trust their protector to make that choice, only after giving it the freedom to potentially decide that Skynet was right to make the choices it did in a previous timeline∆.

∆ It establishes without question that the future with time-travel tech knows for a fact that traveling back can't change the present, and can only send someone back in the hope that the changes they make will fork off a better timeline, while still preserving some of the timey-wimey fate loops and bootstrap-paradox type stuff.
 
