J.Quondam
Personally I think the coin lands on the other side: that this is mostly hype (at least as far as LLM development goes). It seems to me that these guys are intentionally walking the line between "it'll fix all our problems" and "it'll destroy the world," because that's exactly the sort of conflict that is catnip for sensationalist media. It keeps AI in the news and investor dollars flowing.

Yeah. I didn't want to believe it, but human greed could produce something like Skynet. The long-term impact of AI on ADM (automated decision-making) doesn't look so positive. The "blackmail" scenarios were fixed, but only in the sense that the researchers handed the AI a loaded gun and explained how the weapon could be used against people; the decision to actually use it to threaten humans was the program's, not the programmer's. That's a clear and present danger to humanity.
That said, I also do believe it's a danger, albeit for a different reason. The imminent destruction won't be Skynet, but just plain old greed-based economics, in the form of escalating inequality. All the things that many users (legitimately) find helpful about current AI are exactly the things that threaten a lot of jobs. And while a few people acknowledge the problem and kindasorta attempt to posit solutions, the prevailing attitude is "Investors don't care, so it's not our problem."
But either way - Skynet or economic collapse - these AIs pose more problems than benefits, imo, at least without a lot of serious policy consideration. And beyond platitudes, such consideration is nowhere to be seen among these "move fast and break things" tech titans. They love talking as if they're grand visionaries, but in fact their cavalier actions show they're disturbingly unserious about the consequences of their work, wherever it leads.
And that, imo, is the bigger danger: greedy people and organizations pushing AI onto an unprepared society without much discussion of how to do it responsibly.