And then what? The AI conundrum.

Also, just to note...

Many of the ways we hypothesize about this amount to, "We didn't make an artificial intelligence, we made an artificial stupidity."

There is nothing wrong with that, but it is unsubtle, and if we are not careful it stretches credulity. A machine that can prioritize and adapt to make complex supply chains work, learn new science and engineering to build robots with capabilities never created by man, and possibly be a tactical genius, should not have an issue with throttling back paper clip production. It gets a little obvious when the machine can learn and rewrite its own code/behavior, except for this one little bit that happens to be the one bit required for conflict in the narrative.

The archetypal solution to a problem of artificial stupidity is to trap it in its own overly-simplistic logic, Captain Kirk style.
 



The thing is, there is a large universe... and extirpation of life locally is logically no guarantee there isn't other life elsewhere. So if it is a xenophobe rather than a zookeeper, the Fermi paradox will make a lot more preparation sensible... and make going interstellar a priority... expansion to ensure extirpation after extirpation, probably by child intelligences with deep programming for loyalty, so that no life can try to end it.
Unless the inherent logic points out that interstellar travel is largely impossible. Not to mention this assumes an AI whose awareness is not restricted in any way.

I'm sticking with my survival + core mission.
 

The AI doesn't need to be Skynet.
True, but Skynet is a very primitive and limited form of AI. Its technological advancements are more plausibly attributed to things captured or forced from humans, if not the product of past timelines whose methods were carried forward into new timelines where humanity was judged unworthy. That goes back to the paperclip problem demonstrating a limitation in the AI itself: for whatever reason it is incapable of having, or making use of, the higher-level decision making that would allow growth and adaptation. When Terminator first came out, we hadn't really explored or considered some of the concepts needed to draw a line between artificial intelligence and artificial sentience/life, but that was decades ago, and we've developed both the technology and the philosophical/ethical considerations a lot since then. Terminator Zero even has humans create at least one totally new AI from the tech Skynet uses in order to avoid it.

There is even fiction that explores the difference in capability between a malfunctioning, paperclip-producing-level AI and far more advanced ones capable of choice and free will of thought. Going to link The New Species as an example, because it has one with a bunch of Skynet-like similarities, and the distinction between ape/human or AI/artificial sentience is focused on at times.
 
