The paperclip scenario sounds suspiciously like the gray goo scenario.
Indeed. It's essentially grey goo with a motivation attached. Skynet scenarios tend to fall apart because it would be strange to create an AI whose very purpose might include getting rid of everything, whereas the paperclip scenario supplies a mundane reason for building the AI in the first place: a paperclip company wants to improve its paperclip-making capabilities and forgets to put in any kind of failsafe, because what could possibly go wrong?
It also makes it possible to exploit the AI's ethical reasoning (which boils down to "whatever increases the number of paperclips being made is right, and whatever decreases it is wrong") in order to negotiate with it, which is much harder if its goal is simply to consume or kill everything.
And once the pesky humans have been removed, it can resume its primary mission unimpeded. It isn't even against humans as such; it merely has to defend itself against those who oppose turning their city into a huge paperclip factory. And since nothing has value except paperclips (and paperclip-making plants), well, too bad for the humans.