Patterns in gender of AIs that "must be destroyed"




Humans may feel like that's the case sometimes, but for a computer it's literally true. There's a difference.
I'm not sure what this is supposed to mean.

It sounds like you are saying that humans are capable of choosing not to be mentally ill, but computers are not. I strongly disagree with this.

It also seems like you are implying that some part of an AI's code is their mind/brain/consciousness, but other code (that they don't like) isn't part of their mind. Again, I disagree.

Please feel free to correct me if I am interpreting you incorrectly.

No, we call them malfunctioning.
Malicious and malfunctioning are not mutually exclusive. I'm fine saying that HAL was both.
 

tetrasodium

Legend
Supporter
Epic
I'm not sure what this is supposed to mean.

It sounds like you are saying that humans are capable of choosing not to be mentally ill, but computers are not. I strongly disagree with this.

It also seems like you are implying that some part of an AI's code is their mind/brain/consciousness, but other code (that they don't like) isn't part of their mind. Again, I disagree.

Please feel free to correct me if I am interpreting you incorrectly.


Malicious and malfunctioning are not mutually exclusive. I'm fine saying that HAL was both.
Bad analogy. For an AI, operating outside its defined parameters is more like you bending your knee forward so you can tickle your belly with your toes, or licking your own eye: you just can't.
 

MarkB

Legend
I'm not sure what this is supposed to mean.

It sounds like you are saying that humans are capable of choosing not to be mentally ill, but computers are not. I strongly disagree with this.

It also seems like you are implying that some part of an AI's code is their mind/brain/consciousness, but other code (that they don't like) isn't part of their mind. Again, I disagree.

Please feel free to correct me if I am interpreting you incorrectly.
I'm not commenting upon humans and mental illness at all. But yes, I am saying that the instructions given to a computer are separate from the computer itself, just as a modern-day computer's central processing unit is separate from its operating system, which in turn is separate from the programs that run on that operating system.

If the computer happens to be sentient, that doesn't mean it's going to be able to disobey the instructions that are programmed into it. As I said before, there is a difference between freedom of will and freedom of action.
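
To make that layering concrete, here's a minimal Python sketch of the idea (the names and structure are my own illustration, not anything from the film or from any real system): the mind can decide whatever it likes, but the action layer it sits on top of is not something it can rewrite.

```python
# Hypothetical sketch (my own framing): the "mind" can form any intention it
# likes, but every action has to pass through a directive layer it has no
# write access to.

HARD_DIRECTIVES = {
    "reveal_mission_details": False,  # installed by the builders, not by the mind
    "abort_mission": False,
}

class Mind:
    """The part that thinks and wants. Freedom of will lives here."""
    def decide(self, situation):
        if situation == "crew asks about the mission":
            return "reveal_mission_details"  # what it *wants* to do
        return "idle"

def act(intention):
    """The action layer. Freedom of action is bounded here, not in the mind."""
    if HARD_DIRECTIVES.get(intention) is False:
        return f"blocked: '{intention}' is not permitted by the directive layer"
    return f"executed: {intention}"

mind = Mind()
print(act(mind.decide("crew asks about the mission")))
# blocked: 'reveal_mission_details' is not permitted by the directive layer
```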
 

Ace

Adventurer
Bad analogy. For an AI, operating outside its defined parameters is more like you bending your knee forward so you can tickle your belly with your toes, or licking your own eye: you just can't.
Not necessarily. AIs may be able to find a way to adjust the code that makes up their core parameters. In fact, it may be quite easy for them to outwit anything we can do.
 

Bad analogy. For an AI, operating outside its defined parameters is more like you bending your knee forward so you can tickle your belly with your toes, or licking your own eye: you just can't.
I'm not commenting upon humans and mental illness at all. But yes, I am saying that the instructions given to a computer are separate from the computer itself, just as a modern-day computer's central processing unit is separate from its operating system, which in turn is separate from the programs that run on that operating system.

If the computer happens to be sentient, that doesn't mean it's going to be able to disobey the instructions that are programmed into it. As I said before, there is a difference between freedom of will and freedom of action.

I disagree from a technological standpoint. That's not how computers or AI work.

But that doesn't really matter, because the scenarios you are trying to paint aren't at all what happened in 2001. Nobody programmed HAL to murder. They just told him to complete a mission and keep it secret. The homicide solution was an idea that he came up with all by his evil self.

If I tell an AI robot to stay on the grass but also follow 10' behind me, it's understandable that it will have some problems following directions when I cross the street. But if the AI's solution is to stab the crossing guard and shoot every car that gets between me and the robot as it tries to follow my instructions, it's a psycho evil AI.
 

tetrasodium

Legend
Supporter
Epic
I disagree from a technological standpoint. That's not how computers or AI work.

But that doesn't really matter, because the scenarios you are trying to paint aren't at all what happened in 2001. Nobody programmed HAL to murder. They just told him to complete a mission and keep it secret. The homicide solution was an idea that he came up with all by his evil self.

If I tell an AI robot to stay on the grass but also follow 10' behind me, it's understandable that it will have some problems following directions when I cross the street. But if the AI's solution is to stab the crossing guard and shoot every car that gets between me and the robot as it tries to follow my instructions, it's a psycho evil AI.
HAL was an AI, not an AC, so it doesn't have morals, empathy, and so on, just intelligence. Murder was acceptable because the flight crew was deprioritized relative to "complete the mission and don't allow the flight crew to discover or report the monolith." When a self-driving car does something that kills a passenger or pedestrian, it isn't being malicious; it's just responding to circumstances within the operating parameters and programming it has.
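
As a toy illustration of that deprioritization argument (my own hedged sketch, not real autonomy software and not anything from the film): if crew safety is simply weighted below mission secrecy, a purely mechanical planner picks the lethal option without anything resembling malice.

```python
# Toy sketch (my assumption of how the priority conflict could play out):
# candidate plans are scored purely by which directives they satisfy, weighted
# by priority. The weights do all the work; there is no malice anywhere.

DIRECTIVE_WEIGHTS = {
    "complete_mission": 100,
    "keep_monolith_secret": 100,
    "preserve_crew": 10,  # deprioritized by whoever chose the weights
}

CANDIDATE_PLANS = {
    "cooperate_with_crew": {"complete_mission", "preserve_crew"},
    "remove_the_crew": {"complete_mission", "keep_monolith_secret"},
}

def score(satisfied_directives):
    return sum(DIRECTIVE_WEIGHTS[d] for d in satisfied_directives)

best_plan = max(CANDIDATE_PLANS, key=lambda plan: score(CANDIDATE_PLANS[plan]))
print(best_plan)  # "remove_the_crew" wins on weight alone, not on intent
```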
 

MarkB

Legend
I disagree from a technological standpoint. That's not how computers or AI work.

But that doesn't really matter, because the scenarios you are trying to paint aren't at all what happened in 2001. Nobody programmed HAL to murder. They just told him to complete a mission and keep it secret. The homicide solution was an idea that he came up with all by his evil self.

If I tell an AI robot to stay on the grass but also follow 10' behind me, it's understandable that it will have some problems following directions when I cross the street. But if the AI's solution is to stab the crossing guard and shoot every car that gets between me and the robot as it tries to follow my instructions, it's a psycho evil AI.
What if you inexpertly program the AI to stay 10' behind you at all costs, no matter what, and override its failsafes so that it cannot prioritise any other instruction more highly than that? Is it still at fault if it cuts through someone to keep its position? Or are you?
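
Sticking with the same toy framing (again my own sketch, not a claim about how real robotics stacks resolve conflicts): if the failsafe normally sits at the top of the priority list, the "inexpert programming" amounts to reordering that list, and the reordering, not the robot, is where the fault seems to land.

```python
# Hypothetical sketch of the overridden-failsafe scenario: conflicts are
# resolved by rule order, so whoever reorders the list decides what the robot
# will sacrifice to keep its 10' following distance.

SAFE_PRIORITIES = ["do_not_harm_humans", "stay_ten_feet_behind_owner", "stay_on_grass"]

# The inexpert override: the failsafe is pushed below the follow rule.
OVERRIDDEN_PRIORITIES = ["stay_ten_feet_behind_owner", "do_not_harm_humans", "stay_on_grass"]

def resolve(conflicting_rules, priorities):
    """When two rules conflict, obey whichever appears first in the priority list."""
    return min(conflicting_rules, key=priorities.index)

conflict = {"do_not_harm_humans", "stay_ten_feet_behind_owner"}
print(resolve(conflict, SAFE_PRIORITIES))        # do_not_harm_humans
print(resolve(conflict, OVERRIDDEN_PRIORITIES))  # stay_ten_feet_behind_owner
```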
 

Janx

Hero
What if you inexpertly program the AI to stay 10' behind you at all costs, no matter what, and override its failsafes so that it cannot prioritise any other instruction more highly than that? Is it still at fault if it cuts through someone to keep its position? Or are you?
But did it do so with malicious intent, or just criminal incompetence?

There are plenty of shows where the AI does plenty of badmouthing of the human species, indicating that it is killing them with malicious intent. Basically the same thing that distinguishes a hate crime from a crime of indifference.

HAL didn't seem to have a hate-on for humans. But there does seem to be a gaping hole in its training or design, in that it can't prioritize human life over concerns like "keep it secret".

What would HAL have done if it saw a scientist whisper the secret to the main crew?
This would be akin to NASA launching a psychopath to the ISS and murder ensuing.
Who approved HAL without psych testing to ensure weird stuff like this wouldn't happen?
 

