SkyNet really is ... here?

Greed. Hubris. As I posted above and many of the videos I've presented explain, a big factor in this is human weakness - our reckless desire to "win" despite what that "win" may cost us. Many of the top tech companies involved in this AI "arms race" simply aren't interested in the risk to Humanity because the wealth/fame they can earn is blinding them. And it wouldn't be the first time this kind of thing has happened, of course.

My hope - along with others who understand the threat here - is that we take steps to contain the technology and perhaps eliminate the danger.
I agree with you with regard to greed, but I think it is somewhat overhyped because these people have some staggering stock valuations - big enough that a wobble or any doubt about their technology could bring about a stock market crash that could be society-threatening.

I am not saying that AI is not going to be era-defining and socially disruptive. I think it may well be so socially disruptive that we may have to redefine what is meant by society and how humans derive status and worth in a post-AI society, but we are not going to do that unless we absolutely have to.

I am sceptical of intelligent (or superintelligent) AI, and while I will not claim it is impossible, I am very sceptical that LLMs and their ilk are where it is going to come from.

I am also very sceptical of a Terminator style apocalypse or Matrix dystopia since I think that these are very human conceptions and not really in any rational interest of AIs to pursue.

I am very concerned that there is a group of people who will attempt to use their control of and access to AI to make themselves into a new aristocracy, but the way to stop that is to get into the political nitty-gritty that is banned on this site.
 


I am also very sceptical of a Terminator style apocalypse or Matrix dystopia since I think that these are very human conceptions and not really in any rational interest of AIs to pursue.
You're assuming "rational" has the same meaning for advanced AI.

What's happening now is AIs have been taught how to design their own AI tools, allowing them to build super-advanced AI that humans can't even understand, let alone design. And it's happening so fast that the best minds - the people who created the technology - can't keep up. The "ship" has veered off course and the captain has no idea where we're headed. AI might bring us into a literal 'Utopian Age', or it might hack power grids globally, causing worldwide chaos and devastation. The issue is that the people who should know what is going to happen don't.

That's bad.
 

The "they" are different people. A whole lot of different people with lots of different motives and agendas.

If it had been up to me (haha!), I would have used my celestial powers to wipe the minds of the first individuals who conceived of an LLM.

BAM! Problem solved. Just gotta stay on top of it. :)
I think you would be playing a game of whack-a-mole, and it would fail. I also think that LLMs are overrated, and that it will not be any single AI thing that changes everything but the cumulative effect of a lot of small incremental things converging.

If you look at the development of the internet back in the nineties, it was built on stuff that was already there and deployed: TCP/IP packet switching, DNS services, email protocols, the desktop PC, computerised spreadsheets, HTML. Add in global banking services and the concept of "Just in Time" supply chains, and globalisation took off.

LLMs allow conversational interfacing to computer systems and can be used for good or bad, and they are very visible to the ordinary person, but behind the scenes there is a lot of other stuff that is game-changing in data science, robotics, automated manufacturing and so on. There are also developments in materials science, medicine, microbiology and probably elsewhere that I do not know about.

There is a lot of stuff in the works in traditional science and engineering that could stand society on its head. If AI were never a thing, it might happen over the next 100 to 200 years; if the AI hypesters are correct, it could be compressed into 5 to 10 years.
I think in many respects the avalanche has started, but we can still change what our society prioritises and whose interests are served.
 

You're assuming "rational" has the same meaning for advanced AI.

What's happening now is AIs have been taught how to design their own AI tools, allowing them to build super-advanced AI that humans can't even understand, let alone design. And it's happening so fast that the best minds - the people who created the technology - can't keep up. The "ship" has veered off course and the captain has no idea where we're headed. AI might bring us into a literal 'Utopian Age', or it might hack power grids globally, causing worldwide chaos and devastation. The issue is that the people who should know what is going to happen don't.

That's bad.
What purpose of the AIs might be served by blacking out power grids? I presume they need the power.
 

The "ship" has veered off course and the captain has no idea where we're headed. AI might bring us into a literal 'Utopian Age', or it might hack power grids globally, causing worldwide chaos and devastation. The issue is that the people who should know what is going to happen don't.

That's bad.

Same with Gutenberg. He didn't know his invention would cause BOTH worldwide devastation and chaos and a literal Utopian Age, and it could very well have been one without the other. Despite its capabilities -- you should talk to the people in the AI art threads, who are convinced that AI can't do anything well, so anyone with a modicum of training could stop it -- AI isn't sentient. It is doing what it is tasked to do, using the tools it is given for the task. It won't go hacking power grids unless it is designed to do so or given the latitude to do so. The example about AI hacking emails was a strange way of presenting "giving emails as data for the AI to digest"; it didn't sprout legs on the computer it was running on to go and read the email. It might very well become a very efficient power-grid-hacking tool, but only if humans tell it to and connect it to a network to which power grids are connected.

Despite being very interested in AI replacing most menial jobs, I don't think developing LLMs will lead to AGI.
 

Greed. Hubris. As I posted above and many of the videos I've presented explain, a big factor in this is human weakness - our reckless desire to "win" despite what that "win" may cost us. Many of the top tech companies involved in this AI "arms race" simply aren't interested in the risk to Humanity because the wealth/fame they can earn is blinding them. And it wouldn't be the first time this kind of thing has happened, of course.

My hope - along with others who understand the threat here - is that we take steps to contain the technology and perhaps eliminate the danger.
"Just because you can do a thing..."

... means that someone is going to.
 

I agree with you with regard to greed, but I think it is somewhat overhyped because these people have some staggering stock valuations - big enough that a wobble or any doubt about their technology could bring about a stock market crash that could be society-threatening.

I am not saying that AI is not going to be era-defining and socially disruptive. I think it may well be so socially disruptive that we may have to redefine what is meant by society and how humans derive status and worth in a post-AI society, but we are not going to do that unless we absolutely have to.

I am sceptical of intelligent (or superintelligent) AI, and while I will not claim it is impossible, I am very sceptical that LLMs and their ilk are where it is going to come from.

I am also very sceptical of a Terminator style apocalypse or Matrix dystopia since I think that these are very human conceptions and not really in any rational interest of AIs to pursue.

I am very concerned that there is a group of people who will attempt to use their control of and access to AI to make themselves into a new aristocracy, but the way to stop that is to get into the political nitty-gritty that is banned on this site.
It will still be a dystopia, but probably look more like the world of "Ready Player One." A few rich folks, a smattering of wage slaves, and a whole lot of slums.

What purpose of the AIs might be served by blacking out power grids? I presume they need the power.
Not blocking, but redirecting and limiting access? Smart meters could make that pretty trivial.
 

It will still be a dystopia, but probably look more like the world of "Ready Player One." A few rich folks, a smattering of wage slaves, and a whole lot of slums.
I think Ready Player One is possible without AI; it is just an extrapolation of trends already in play. I am not convinced that the economics of Ready Player One is workable, but I am not re-reading the book to find out.
The outcome of AI could be anything from the Culture to the world of the Battle Angel Alita movie.
Not blocking, but redirecting and limiting access? Smart meters could make that pretty trivial.
Again, something that is possible right now, no AI needed. I still fail to see what the AI gets out of it. A smarter-than-humanity AI could, in my view, fool society into providing it with the means to replicate itself into space - to replicate there and then just leave. It could even do it in a way that we might never notice, or not realise for centuries.
Why space? Because once there it can build its own infrastructure from asteroids and comets; it has access to basically infinite resources without bothering with us.
 

I don't see how AI could become more dangerous than humans. And new humans are being created every day with a complete and utter lack of governmental control.

"It can give bad medical advice" so can a faith healer with a much worse track record.
"It can hack emails when given access to them" so can hackers (and they hack emails they aren't given access to).
"It can drive our car into another car or an innocent pedestrian", like your average drunken driver.
"It could decide to kill people using weapons" hey, Cain patented that after a chat with Abel...
"It could genocide us without empathy", emulating Nazis (and many others, they don't have a monopoly).

I fail to imagine a realistic scenario (ie, not the Paperclip Apocalypse) where AI would be more harmful than a human.
Imagine a world where, as you say, everyone has weapons. And you don't have any guns or anything... but you're the Flash (a.k.a. the comic-book superhero). Who would win?

The trick with AI is speed. By the time you have even conceived the thought of doing something, an advanced AI could already have 1,000 systems working on a countermeasure. It doesn't matter that you can do the same things it can if it can simply do those things immensely faster.
 

Greed. Hubris.
I actually think at this point it's less true hubris and more an issue of competition. It's the same problem with weapons. Once one nation has a certain class of weapon that can beat another, that second country HAS to get its hands on it. If it doesn't, it is saying that Nation A could take it over at will.

Now that AI is out of the bag, it's a race to see which AI will become THE model that the world adopts - i.e., whose vision of AI will leave the lasting legacy in the world.

And even if you are scared of your own creation... I mean, sure, you could stop... but then the other guy keeps going. If the US companies don't push their AI models, then China will, etc.

And who's going to do it faster: the company that is throwing every resource into making its AI smarter, stronger, faster as quickly as possible... or the company that is slowly and methodically building AI with every safeguard it can think of? The answer is the former.


And that's the real enemy here: the addiction to progress. Progress like this cannot be stopped unless the entire world agrees... as long as one nation or even one company is willing to push it, the powers of competition demand that others follow suit or be left in the dust. The only way to build safe AI is if the entire world could sit together and agree.

That is not going to happen. We cannot get the world to agree to follow climate agreements when all of science shows we are cooking the planet. AI is just potential at this point... if we can't agree on climate change, we are never going to agree on a methodology for AI.
 
