SkyNet really is ... here?



Whether or not one believes current AI is (or soon will be) a threat, it's been interesting to watch Sam Altman's interviews over the last couple of weeks, in which he's likened OpenAI's upcoming ChatGPT 5 to the Manhattan Project. He says he's worried about the profound ramifications of releasing a super-AI that could be used for malevolent purposes... but they'll go ahead and dish it out to the world soon enough.
ohnoanyway.jpg
Either these people are hyping something they know doesn't deserve the vast effort and money being poured into it, or - if they truly believe it - they're unabashedly amoral scoundrels.

"Yeah, it would be terrible if bad people get it, destroyer of worlds and all that, yadda, yadda. But we really, really need to hit our financial targets next quarter...."
 




Yeah. I didn't want to believe it, but human greed could produce something like SkyNet. The long-term impact of AI on automated decision-making (ADM) doesn't look positive. The "blackmail" scenarios were fixed, but only in the sense that the AI was handed a loaded gun along with an explanation of how the weapon could be used against people; the decision to threaten humans with it was the program's, not the programmer's. That's a clear and present danger to humanity.
 


The bad things that will happen as a result of AI are unlikely to come about through "self-awareness" or intent. Rather, through agentic AI, a small error or hallucination will go off like a runaway train, and we humans will miss it until it's too late. Agentic AI is speeding everything up, including complex workflows with interdependencies, at a crazy pace. It works really well most of the time, but all it takes is one error in a complex system to make a huge mess - like a mutation in genetic code.

Our jobs will go from writing documents and conducting analysis (replaced by AI) to cleaning up the messes AI makes, in short order. Good times.
 
