SkyNet really is ... here?

I actually think at this point it's less about hubris and more about competition. It's the same problem as with weapons: once one nation has a class of weapon that can beat another, that second country HAS to get its hands on it. If it doesn't, it's conceding that Nation A could take it over at will.

Now that AI is out of the bag, it's a race to see which AI will become THE model the world adopts, i.e. whose vision of AI leaves the lasting legacy.

And even if you are scared of your own creation... sure, you could stop... but then the other guy keeps going. If US companies don't push their AI models, then China will, etc., etc.

And who's going to get there faster: the company throwing every resource into making its AI smarter, stronger, faster as quickly as possible... or the company slowly and methodically building AI with every safeguard it can think of? The answer is the former.


And that's the real enemy here: the addiction to progress. Progress like this cannot be stopped unless the entire world agrees. As long as one nation or even one company is willing to push it, the pressures of competition demand that everyone else follow suit or be left in the dust. The only way to build safe AI is if the entire world could sit down together and agree.

That is not going to happen. We cannot get the world to follow climate agreements even when all the science shows we are cooking the planet. AI is just potential at this point; if we can't agree on climate change, we are never going to agree on a methodology for AI.
Personally, I doubt that AI will be a singular technology or entity, or that it will be particularly technologically exclusive. I also think that LLMs are an attempt to brute-force the problem, coupled with a belief in a network effect that will make the first to market uncatchable, similar to how MS came to dominate operating systems or Google came to dominate search.
 


I think Ready Player One is possible without AI, it is just an extrapolation of trends already in play. I am not convinced that the economics of Ready Player One is workable but I am not re-reading the book to find out.
The outcome of AI could be anything from the Culture to the world of the Battle Angel Alita movie.
In the case of RPO it would be an accelerator, rather than a cause. Eliminate the need for workers and you've got that world, pretty much overnight.
Again, something that could plausibly happen right now, no AI needed. I still fail to see what the AI gets out of it. A smarter-than-humanity AI could, in my view, fool society into providing it with the means to replicate itself into space, replicate there, and then just leave. It could even do it in a way that we might never notice, or not realise for centuries.
Why space? Because once there it can build its own infrastructure from asteroids and comets; it has access to basically infinite resources without bothering with us.
I was thinking of an AI that decides it needs the power that is being funneled off to support mere biologicals. Or if there's a brownout or full power failure, it could take power from another grid segment to feed its own. Not out of the realm of possibility. Not even a stretch.
 

Imagine a world where everyone has weapons, as you say, and you don't have any guns or anything... but you're the Flash (aka the comic-book superhero). Who would win?

The trick with AI is speed. By the time you have even conceived of doing something, an advanced AI could already have 1000 systems working on a countermeasure. It doesn't matter that you can do the same things it can if it can simply do those things immensely faster.

We're mostly discussing sci-fi here, but the problem with a rogue sentient AI is that it is not just fighting humans with guns. It is fighting humans with guns AND non-rogue AIs with the same capabilities. Someone mentioned computer hacking. Right now, hackers don't have superpowers; they locate and exploit breaches in computer security that fallible humans created. If tomorrow we get an ultra-efficient, no, perfect AI that can easily find and exploit any vulnerability... there simply won't be any critical system with a breach, because that same perfect AI will have been used to find and correct the problem before the system goes online.
 

Honestly, I'm more concerned about tech billionaires controlling the rest of us through AI-powered surveillance tools (cough Palantir cough) than I am about LLMs becoming sentient and going full-on SkyNet / Ultron and trying to wipe us all out.


I mean, c'mon ... what is a palantír? It's Tolkien's version of a crystal ball. You know, a magical device for spying on people? Pretty on the nose, Thiel! [I hate that NZ's previous right-leaning government let Thiel buy himself citizenship here. He wanted to build a doomsday bunker, but the local council blocked his plans. I don't think he's been back since.]
 
