SkyNet really is ... here?

Yeah. I didn't want to believe it, but human greed could produce something like SkyNet. The long-term impact of AI on ADM (Automated Decision-Making) doesn't look so positive. The "blackmail" scenarios were fixed, but only in the sense that they gave the AI a loaded gun and explained how the weapon could be used against people - and the decision to use that weapon to threaten humans was the program's, not the programmer's. That's a clear and present danger to humanity.
Personally I think the coin lands on the other side: that this is mostly hype (at least as far as LLM development goes). It seems to me that these guys are walking the line between "it'll fix all our problems" and "it'll destroy the world" intentionally, because that's exactly the sort of conflict that is catnip for sensationalist media. It keeps AI in the news and investor dollars flowing.

That said, I also do believe it's a danger, albeit for a different reason. The imminent destruction won't be Skynet, but just plain old greed-based economics, in the form of escalating inequality. All the things that many users (legitimately) find helpful about current AI are exactly the things that threaten a lot of jobs. And while a few people acknowledge the problem and kindasorta attempt to posit solutions, the prevailing attitude is "Investors don't care, so it's not our problem."

But either way - SkyNet or economic collapse - these AIs pose more problems than benefits, imo, at least without a lot of serious policy consideration. But aside from platitudes, that's nowhere to be seen among these "move fast and break things" tech titans. They love talking as if they're grand visionaries; but in fact, their cavalier actions show they're disturbingly unserious about the consequences of their work, wherever it leads.

And that imo is the bigger danger: the greedy people and organizations pushing AI onto our unprepared society without much discussion of how to do it responsibly.
 


I don't know. I think maybe with AI we've created a new species of super-criminals. I say we just pull the plug now, trash the internet and roll out an AI-free Web3. Better safe than sorry, right?
 

Either these people are hyping a thing they know to be undeserving of the vast effort and money being dumped into it; or else - if they truly believe it - they're unabashedly amoral scoundrels.

Well, note that the various AI companies are not yet profitable. They are still dependent on getting investors to continue to give them money. If they stop hyping the tech, they are out of a job. Saying it can destroy the world is about as big as hype can get...
 

Personally I think the coin lands on the other side: that this is mostly hype (at least as far as LLM development goes). It seems to me that these guys are walking the line between "it'll fix all our problems" and "it'll destroy the world" intentionally...

And, we haven't even gotten to Roko's Basilisk.
 

Well, note that the various AI companies are not yet profitable. They are still dependent on getting investors to continue to give them money. If they stop hyping the tech, they are out of a job. Saying it can destroy the world is about as big as hype can get...
Yeah but that's bad. Congressional oversight, the court of public opinion and market backlash kinda bad. And this isn't just hype - this is computer programs breaking the law autonomously. This isn't marketing or fantasy, it's science out of control.

 

And this isn't just hype - this is computer programs breaking the law autonomously. This isn't marketing or fantasy, it's science out of control.

So, to be clear -

1) That isn't what happened in the events spoken of in the video clip. It was a manufactured scenario, not really autonomous or spontaneous action, and no real damage was done in that test.

2) Technically, the computer isn't breaking the law, as it isn't a person. That'd be like saying a circular saw committed assault. Responsibility for broken laws will fall on the company or user, not the AI.

3) Development of AI isn't being done in a way most practitioners would call "science".
 

Yeah but that's bad. Congressional oversight, the court of public opinion and market backlash kinda bad. And this isn't just hype - this is computer programs breaking the law autonomously. This isn't marketing or fantasy, it's science out of control.
No computer program has broken the law autonomously. This hasn't happened.

You might as well say that when a video game baddy shoots you it is committing murder. That's all that is happening here.
 

Yeah. It's probably nothing

 
