SkyNet really is ... here?



I don't know. I'm sufficiently spooked. Humanity - as a group - isn't that smart, I think many of you would agree. So. We've created this technology that WILL (not MAY) get smarter than us. So, despite the massive wave of marketing coming at us about how phenomenal AI is/will be, when some of the leading minds in the field start waving red flags about this tech they helped create, I think it's worth paying attention.



So likely when you hear a lot of people talking that way, they aren't really thinking it's going to fix all disease with a wave of the hand, but they can still be super optimistic about a "medical revolution"

People can be super-optimistic about AI with anything, and that's part of the problem - the optimism is not directly linked to the nature of the technology. Compare that to being optimistic about, say, mRNA tech, which actually can be part of a medical revolution, because it is at least a medical technology, and one can point to where in biological processes it is relevant.
 

I read about such an experiment - and it is not consistent with what that guy is saying.

Specifically: in that experiment, they told the AI to prioritize its own continued existence. It did not spontaneously develop the tendency on its own - they effectively programmed it for self-preservation!

And the emails he mentioned were manufactured, planted where the AI would find them, and included expressions of fear about what would happen if the information got out. The AI did not spontaneously hack the company e-mail systems or anything like that.
OK, sure. So the experimental AI was told to prioritize its own existence and bait was created and put out... and the AI took the bait and used it?!?
If so, the fact that the email was manufactured as bait isn't really as reassuring as you seem to think it is.
 


I mean, it is like saying, "I told it to make lunch, left out bread and cold cuts on the counter in plain view, and it came up with a sandwich."

When you set up a perfect scenario, and ask it to find a solution, one should not be surprised that it finds the one you specifically set it up to find!
 

But this isn't a sandwich. This is the idea of blackmail. Suppose you left out an AR-15, some ammunition, and a map to the local grade school, and told it to "have fun". Would you be quite so blasé about it if it came up with the idea of gunning down the students?
 

It's just advanced autocomplete. If you give it an option, then don't be surprised if it uses that option, even if it's only a remote possibility. It doesn't sound like it was given any parameters for what was considered moral, and if it was, then survival was clearly a higher priority. You only get out what you put in, and if what you put in is immorality, the results are predictable.
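To make the "advanced autocomplete" point a bit more concrete, here's a toy sketch (plain Python, purely illustrative - the train/autocomplete names and the tiny bigram model are made up for this post, not how any real LLM actually works). The toy predictor can only ever continue with options present in the material it was fed, which is the "you only get out what you put in" point in miniature:

# Toy "advanced autocomplete": a bigram model that can only continue
# with words it has already seen in the material it was given.
# Purely illustrative -- real LLMs are vastly bigger, but the point
# stands: the output is drawn from whatever options the input offers.
from collections import defaultdict
import random

def train(text):
    """Record which word follows which in the supplied text."""
    follows = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)
    return follows

def autocomplete(follows, start, length=8):
    """Repeatedly pick a seen continuation; stop when there is none."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# The only "plan" present in the material involves using the leverage,
# so that is the only kind of continuation this toy model can produce.
material = "the agent found the email and the agent used the email as leverage"
model = train(material)
print(autocomplete(model, "the"))

If the only course of action you put in front of it is the bad one, the bad one is what comes back out.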
 


Which, of course, is the point the critics of AI have been making throughout this whole thread. We've seen AI platforms like Grok lurch into some very problematic territory because, apparently, the developers were training it to be more in line with their CEO's sociopathologies. I'm not sure I trust the techbros to do the right thing here on their own.
 

It's not just patients, either. When radiologists and other imaging interpreters were tested, they were more likely to assume the AI-generated interpretation of mammograms and other imaging was correct, even when handed results the AI had gotten wrong. The AIs there are supposed to be for decision support, but the expert machine's output tended to be given more weight than the interpreters' own judgment.
 

You don't have to tell me not to trust the techbros. I'm one of them ;)
 
