SkyNet really is ... here?

Greed. Hubris. As I posted above and many of the videos I've presented explain, a big factor in this is human weakness - our reckless desire to "win" despite what that "win" may cost us. Many of the top tech companies involved in this AI "arms race" simply aren't interested in the risk to Humanity because the wealth/fame they can earn is blinding them. And it wouldn't be the first time this kind of thing has happened, of course.

My hope - along with others who understand the threat here - is that we take steps to contain the technology and perhaps eliminate the danger.
I agree with you with regard to greed, but I think it is somewhat overhyped because these people have some staggering stock valuations. Big enough that a wobble or any doubt about their technology could bring about a stock market crash that could be society-threatening.

I am not saying that AI is not going to be era-defining and socially disruptive. I think it may well be so socially disruptive that we may have to redefine what is meant by society and how humans derive status and worth in a post-AI society, but we are not going to do that unless we absolutely have to.

I am sceptical of intelligent (or superintelligent) AI, though I will not claim it is impossible, but I am very sceptical that LLMs and their ilk are where it is going to come from.

I am also very sceptical of a Terminator-style apocalypse or Matrix dystopia, since I think these are very human conceptions and not really in any rational interest of AIs to pursue.

I am very concerned that there is a group of people who will attempt to use their control of and access to AI to make themselves into a new aristocracy, but the way to stop that is to get into the political nitty-gritty that is banned on this site.
 


I am also very sceptical of a Terminator-style apocalypse or Matrix dystopia, since I think these are very human conceptions and not really in any rational interest of AIs to pursue.
You're assuming "rational" has the same meaning for advanced AI.

What's happening now is that AI has been taught to design its own AI tools, allowing it to build super-advanced AI that humans can't even understand, let alone design. And it's happening so fast that the best minds - the people who created the technology - can't keep up. The "ship" has veered off course and the captain has no idea where we're headed. AI might bring us into a literal 'Utopian Age', or it might hack power grids globally, causing worldwide chaos and devastation. The issue is that the people who should know what is going to happen don't.

That's bad.
 

The "they" are different people. Whole lots of different people with lots of different motives and agendas.

If it had been up to me (haha!), I would have used my celestial powers to wipe the minds of the first individuals who conceived of an LLM.

BAM! Problem solved. Just gotta stay on top of it. :)
I think you would be playing a game of whack-a-mole, and it would fail. I also think that LLMs are overrated. I also think that it will not be any single AI thing that changes everything, but the cumulative effect of a lot of small incremental things converging.

If you look at the development of the internet back in the nineties, it was built on stuff that was already there and deployed: TCP/IP packet switching, DNS services, email protocols, the desktop PC, computerised spreadsheets, HTML. Add in global banking services and the concept of "Just in Time" supply chains, and globalisation took off.

LLMs allow conversational interfacing with computer systems, can be used for good or bad, and are very visible to the ordinary person, but behind the scenes is a lot of other stuff that is game-changing in data science, robotics, automated manufacturing and so on. There are also developments in materials science, medicine, microbiology and probably elsewhere that I do not know about.

There is a lot of stuff in the works in traditional science and engineering that could stand society on its head. If AI were never a thing, it might happen over the next 100 to 200 years. If the AI hypesters are correct, it could be compressed into 5 to 10 years.
I think in many respects the avalanche has started, but we can still change what our society prioritises and whose interests are served.
 

You're assuming "rational" has the same meaning for advanced AI.

What's happening now is that AI has been taught to design its own AI tools, allowing it to build super-advanced AI that humans can't even understand, let alone design. And it's happening so fast that the best minds - the people who created the technology - can't keep up. The "ship" has veered off course and the captain has no idea where we're headed. AI might bring us into a literal 'Utopian Age', or it might hack power grids globally, causing worldwide chaos and devastation. The issue is that the people who should know what is going to happen don't.

That's bad.
What purpose of the AIs might be served by blacking out power grids? I presume they need the power.
 

The "ship" has veered off course and the captain has no idea where we're headed. AI might bring us into a literal 'Utopian Age', or it might hack power grids globally, causing worldwide chaos and devastation. The issue is that the people who should know what is going to happen don't.

That's bad.

Same with Gutenberg. He didn't know his invention would cause BOTH worldwide devastation and chaos AND a literal Utopian Age, and it could very well have been one without the other. Despite its capabilities -- you should talk to the people in AI art threads, who are convinced that AI can't do anything well, so in any case anyone could stop it with a modicum of training -- AI isn't sentient. It is doing what it is tasked to do, using the tools it is given for the task. It won't go hacking power grids unless it is designed to do so or given the latitude to do so. The example about AI hacking emails was a strange way of presenting "giving emails as data for the AI to digest". It didn't sprout legs on the computer it was running on to go and read the email. It might very well become a very efficient power-grid-hacking tool, but only if humans tell it to and connect it to a network to which power grids are connected.

Despite being very interested in AI replacing most menial jobs, I don't think developing LLMs will lead to AGI.
 

Greed. Hubris. As I posted above and many of the videos I've presented explain, a big factor in this is human weakness - our reckless desire to "win" despite what that "win" may cost us. Many of the top tech companies involved in this AI "arms race" simply aren't interested in the risk to Humanity because the wealth/fame they can earn is blinding them. And it wouldn't be the first time this kind of thing has happened, of course.

My hope - along with others who understand the threat here - is that we take steps to contain the technology and perhaps eliminate the danger.
"Just because you can do a thing..."

... means that someone is going to.
 

I agree with you with regard to greed, but I think it is somewhat overhyped because these people have some staggering stock valuations. Big enough that a wobble or any doubt about their technology could bring about a stock market crash that could be society-threatening.

I am not saying that AI is not going to be era-defining and socially disruptive. I think it may well be so socially disruptive that we may have to redefine what is meant by society and how humans derive status and worth in a post-AI society, but we are not going to do that unless we absolutely have to.

I am sceptical of intelligent (or superintelligent) AI, though I will not claim it is impossible, but I am very sceptical that LLMs and their ilk are where it is going to come from.

I am also very sceptical of a Terminator-style apocalypse or Matrix dystopia, since I think these are very human conceptions and not really in any rational interest of AIs to pursue.

I am very concerned that there is a group of people who will attempt to use their control of and access to AI to make themselves into a new aristocracy, but the way to stop that is to get into the political nitty-gritty that is banned on this site.
It will still be a dystopia, but probably look more like the world of "Ready Player One." A few rich folks, a smattering of wage slaves, and a whole lot of slums.

What purpose of the AIs might be served by blacking out power grids? I presume they need the power.
Not blacking out, but redirecting and limiting access? Smart Meters could make that pretty trivial.
 
