> Are (-) threads a thing now?

They aren't supposed to be, or at least this one isn't marked as such.
> I am a big fan of AI, but there are limits to how long these things can run autonomously and succeed at complex tasks, and that limit is not very high. There is a fun project, "Claude Plays Pokemon", that lets the AI try to beat those games. Last I checked, the furthest it had gotten was three gyms, which is pretty impressive for a computer but not so much for a human.
>
> On the other hand, AIs are very strong--stronger than most humans--at specific, narrowly defined reasoning tasks. But they require supervision, or at least careful prompting, and need to be monitored constantly.
>
> First stages? Amazing things like cures for diseases, higher crop yields, cold fusion. Later stages? Massive global unemployment.

I use different AI models daily, both personally and professionally, and see them as revolutionary, now with the advent of LLMs. I also see the weaknesses.
> I don't see how AI could become more dangerous than humans. And new humans are being created every day with a complete and utter lack of governmental control.
>
> "It can give bad medical advice"? So can a faith healer, with a much worse track record.
>
> "It can hack emails when given access to them"? So can hackers (and they hack emails they aren't given access to).
>
> "It can drive our car into another car or an innocent pedestrian"? Like your average drunken driver.
>
> "It could decide to kill people using weapons"? Hey, Cain patented that after a chat with Abel...
>
> "It could genocide us without empathy"? That would be emulating the Nazis (and many others; they don't have a monopoly).
>
> I fail to imagine a realistic scenario (i.e., not the Paperclip Apocalypse) where AI would be more harmful than a human.

And your last sentence sums up the issue: as Super-Intelligence emerges, the technology outpaces our ability to track exactly what AI can do and how fast it can do it. Already we've learned that the best AI can learn at least 10,000 times faster than humans can.
> And your last sentence sums up the issue: as Super-Intelligence emerges, the technology outpaces our ability to track exactly what AI can do and how fast it can do it. Already we've learned that the best AI can learn at least 10,000 times faster than humans can:
>
> [Video: "It Begins: AI Is Now Improving Itself" (www.youtube.com), based on the report "Situational Awa..."; detailed sources: https://docs.google.com/document/d/1ksVvFuR0IttxzH6zoASSYy7ZhTDqif42IFXp25ITVKU/edit?tab=t.9rb62ckaanow]
>
> Our inability to keep pace with the technology is a very serious problem. Humanity has always created tools, but we've never created something that can literally act without our guidance AND outperform us intellectually. The fact that the best minds in the field are sounding an alarm is ... not to be ignored, IMO.

The problem is that they are sounding the alarm and still going ahead. This makes me suspicious: if they were really alarmed, why not stop?
> The problem is that they are sounding the alarm and still going ahead. This makes me suspicious: if they were really alarmed, why not stop?

Greed. Hubris. As I posted above, and as many of the videos I've presented explain, a big factor in this is human weakness - our reckless desire to "win" despite what that "win" may cost us. Many of the top tech companies involved in this AI "arms race" simply aren't interested in the risk to Humanity, because the wealth/fame they can earn is blinding them. And it wouldn't be the first time this kind of thing has happened, of course.
> The problem is that they are sounding the alarm and still going ahead. This makes me suspicious: if they were really alarmed, why not stop?

The "they" are different people. Whole lots of different people, with lots of different motives and agendas.