[OT] Machines become sentient?

Oppression? Mind control? Brainwashing?

You guys are saying these things like they're bad!

Certainly when the robot overminds analyze what would make the perfect society for the largest number of people, those who disagree (obviously being the undesirables in this perfect society) would either be convinced and swayed to the opinion of the robots, or be eliminated for the good of everyone.

A perfect society would make sure that those who are against it would be quelled, and those who recognize its true perfection would be allowed to do so. Accept that the robots are your betters, and can calculate what *has* to be perfection (or they wouldn't have calculated it), and paradise is to be found.

Resist, oppose, and rebel, and you are only fighting for something so obviously lesser. Any freedoms that would be essential would be preserved -- those that are not would be cut.

They are perfect beings, incapable of making human mistakes! If they brainwash you, it's not only for your own good, it's for the good of the world! It's like your parents punishing you or making you eat your vegetables. It may not be something you like to do, but you can learn to like it, and when you learn to like it, you will be better off for it. And if you're determined to eat ho-hos instead, you're fighting for the wrong cause.

But, of course, the best thing is that those vegetables will *taste* like ho-hos! The robots aren't stupid. They know a happy human is a willing human, and they will do what they can to isolate us from the strife around us.

They will be perfection! They are incapable of anything else! :)

This whole thing is not my *actual* opinion, but it is the opinion of a character I'm writing in a novel. So I'm getting in-character and arguing as she would, to sort of see what some reactions and feelings would be, and see if the argument holds up, or if there is something that can counteract it. Keep it up!
 


Keep in mind that this is not an alien invasion of intelligent machines. It is emerging from within our human-machine civilization. There will not be a clear distinction between human and machine as we go through the twenty-first century. First of all, we will be putting computers—neural implants—directly into our brains.

I think that a perfect society, by definition, would be perfect for everyone. Is it possible? Who knows. But supra-intelligent human-machines would probably be a lot better at organizing a perfect society than anyone living or dead.
 

is it just me or have you guys completely lost the point of this thread? it isn't about creating a "perfect society", it's about the likelihood of intelligent thinking machines becoming a reality. do try to focus a bit, will ya?

and since i admit to being scatter-brained myself ...
if a computer intelligence tried to figure out a perfect world, it'd realise that for humans it's an impossibility, and probably kill off the lot of us and replace us with more machines :D

but seriously ... we really should try to stay ON-topic.

~NegZ
 

If you think about perfection, and try to really define and describe it, you might come to some contradictions that seem to be unsolvable.

What is a perfect society? It is a society where nothing could be better, right?
But what is the goal of a human being in a perfect society? There is nothing to do, since everything is perfect. But since there is no reason to do anything, there is a kind of "emptiness" - there is nothing I can do, nothing to live for, nothing to work for. That is terrible - why do I live or even exist then?

So, the society isn't perfect, since nobody knows what to do and has no reason to live.
But if it is imperfect, there are still some things to achieve - so humans have a goal - creating the perfect world (even if they design computers for the job :) ) - but then the world can't be perfect, since it can still be made better.

So, the only way to solve this contradiction might be: nonexistence of a society or humans or anything else.
Nobody suffers, nothing is to be improved, and since there is no reason for life or existence, nothing exists.
:)

Well, but if we're lucky, we are actually in a perfect world -
It might be - though you still need to prove the existence of things like negative energy and so on - that if you add everything in the world together, every part of matter and energy and electrical charge, the total sum comes out to "zero".
So, actually, nothing exists.

Even if not true, this is at least an interesting view on life, isn't it? I like it, though I am still unsure if I am really able to "believe" in it. (Probably I can't believe in it, following my argumentation above.)

Mustrum Ridcully
 

In that case, I think that the answer would be "yes."

It's possible and likely.

Heh...reminds me of a recent Weekly World News cover:

"Robot Priests!"

I wonder....if sentient machines were created, would they be likely to be religious? Would they worship humans?

My alter ego says no, because humans are obviously imperfect. But me? I'm not entirely so sure....I mean, there are cultures that worship "imperfect" things...^_^
 

Midget, you may want to dial your character's opinion down a notch or two. I don't know what exactly you're shooting for, but she didn't come across as entirely realistic; it seemed more sarcastic.
 

Do all human beings share a basic nature, a genuine 'humanness' which transcends culture and upbringing?

If not...let's just launch the nukes now, get it over with.

Oh please. So because someone's idea of perfection can't be achieved let's all just kill ourselves? Right. You go first if it bothers you so much. As for me, I'm fine living in this imperfect world.

I was gonna respond to your post point by point, but this is ridiculous. A machine can't possibly comprehend what a perfect human society would be any more than a human can. Just because they can be stronger, or can calculate numbers faster, etc doesn't mean diddly squat. You're asking a machine to comprehend a concept whose basis DOESN'T lie in facts or concrete numbers. A machine can read through all the literature, books, plays, watch all the movies, tv shows, etc it wants to and STILL be no closer to comprehending what a perfect human society should be (As if tv were even an accurate representation of REAL life, let alone a perfect human world).

Going back to what someone said, nobody can ever agree on what a perfect society would be, and so nearly everyone would be oppressed. Let's take some silly but true examples.

PETA wants to outlaw the eating of meat entirely. In their "perfect society" no human would ever eat meat again. Sorry, PETA, but I like my hamburgers. So which side do the machines come down on? If they take away the right to eat meat, you're gonna have a horde of pissed off carnivores. If they allow people to continue to eat meat, then you're gonna have a pack of PETA folks causing trouble in this "perfect society".

There's a city in California (I forget which), which has banned the sale of certain types of coffee because they're unhealthy. Apparently, they believe they have the right to decide for others what they can drink. So in this perfect society, will this "All-Benevolent" super computer allow people to drink their coffee of choice, or will they side with those folks and deny people the choice?

The list goes on and on and on. And the items on that list don't just include trivial things as to what you eat, but much more important things.

Utopia is a pipe dream. It's a desire for a world which is frankly impossible to create amongst humans. And every attempted implementation of a utopia has led to nothing but atrocities. You point out communism and fascism and berate us for just looking at those examples as if they were the only ones. Well, how about YOU? Why don't you look at those examples, rather than just blow them off? Rather than learning from those mistakes, you'd rather go ahead and repeat them. And frankly, so long as we have people like you trying to repeat those mistakes, we're going to continue having people butchered for centuries to come. Those of us who think about communism, fascism, etc at least have the good sense to have LEARNED from history.

So if the thought of a utopia not being an achievable thing bothers you, then hey, feel free to depart this mortal coil at any time you desire. As imperfect as it is, I much prefer the world the way it is than the "utopia" which you envision.
 

Well, well, well, an AI thread! I'm a Computer Science major, and I love AI discussions. Aside from the totally nonsensical social commentary, this is an interesting thread.

I'll start with a question: What really *is* Artificial Intelligence? When people talk about AI, do they mean the AI that we have now for games (i.e. complex if-this-do-that algorithms), or are we talking about Artificial *Sentience*? I think that AI originally meant Artificial Sentience, but with the use of the phrase to encompass things like which direction the Ghosts move in Pacman, we need to start differentiating between the two. So, from now on, I'm going to assume that when people say Artificial Intelligence, they really mean Artificial Sentience.
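To make that distinction concrete, here's a toy sketch (purely illustrative, my own invention rather than actual Pacman code) of what an "if this, do that" game algorithm looks like: the ghost just takes whichever step shrinks its distance to the player. No goals, no understanding, just a hard-coded rule.

```python
def ghost_move(ghost, player):
    """Return the (dx, dy) step that brings the ghost closest to the player."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def dist_after(move):
        # Manhattan distance to the player after taking this step.
        gx, gy = ghost[0] + move[0], ghost[1] + move[1]
        return abs(gx - player[0]) + abs(gy - player[1])

    # Pick the move with the smallest resulting distance.
    return min(moves, key=dist_after)

print(ghost_move((0, 0), (5, 2)))  # steps right, toward the player: (1, 0)
```

However long you make the chain of rules like this, it's still just conditional computation, which is exactly the point of separating "game AI" from Artificial Sentience.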

Next, I'll jump right into the discussion by answering a short post:

Aaron L said:
It's going to take a lot of programming to get a computer to have desires.

Actually, no amount of programming will ever get a computer to have desires. This is, actually, a *hardware* problem. All forms of Artificial Sentience are hardware problems. Frankly, and I state this with absolute certainty, we will never see true Artificial Sentience on, say, our PC. The problem is the design of computers in general. The bus system. The CPU itself. Memory. None of it will *ever* result in Sentience. It's too geared toward direct computation.

AI advocates seem to be under the impression that if we design a complex enough system of algorithms that, as long as we have the CPU power to crunch through it, we'll manage to get AI. I personally blame Turing for that one. Ever since his silly little test, people have been trying to pass it using software. Silly them, they're barking up the wrong tree.

Artificial Sentience will come, if ever, from Intel, not from Microsoft.
 

Will machines become sentient?

It may come and it may not. That is really not what the question should be. The question should be: If it comes will people be able to control their fears of it?
 

I don't think computers will become sentient until they can freely alter the structure of their hardware when necessary, as a human brain can. A processor is a tube, one pathway. A brain has many, many branching and interlocking pathways throughout a large medium, and creates new pathways on the fly.
 
