What would AIs call themselves?

Celebrim

Legend
Nifft said:
Speak for yourself.

This particular product of evolution is self-aware, despite the bacteria in my family tree.

Cheers, -- N

Err... keep reading what I said for one more sentence, please. I believe I accounted for that.

My point in bringing up the bacteria is that it is a far cry from saying 'a small percentage of self-modifying systems will eventually become self-aware' to saying 'ALL self-modifying systems will eventually become self-aware.' The former has no problem with the claim that you are self-aware. The latter must insist that all bacteria are now self-aware as well. If in fact the vast majority of self-modifying systems don't become self-aware, there is no reason to insist that a particular one will become self-aware, or even that among a very large group one of them must become self-aware.
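To put the same point in code: a toy Python sketch, where the per-system probability is invented purely for illustration, not an estimate of anything real.

```python
import math

# Toy numbers, invented purely for illustration -- not real estimates.
p = 1e-12   # hypothetical chance that any one self-modifying system becomes self-aware
n = 10**9   # a "very large group" of such systems

# Probability that at least one system in the group becomes self-aware,
# i.e. 1 - (1 - p)^n, computed in a numerically stable way.
p_at_least_one = -math.expm1(n * math.log1p(-p))

print(f"Expected self-aware systems in the group: {p * n}")        # 0.001
print(f"P(at least one self-aware system): {p_at_least_one:.6f}")  # ~0.001
```

If the 'small percentage' really is small, even a billion trials most likely produce none; nothing about my being self-aware forces a different conclusion.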
 


Tonguez

A suffusion of yellow
Celebrim said:
Err... keep reading what I said for one more sentence, please. I believe I accounted for that.

My point in bringing up the bacteria is that it is a far cry from saying 'a small percentage of self-modifying systems will eventually become self-aware' to saying 'ALL self-modifying systems will eventually become self-aware.' The former has no problem with the claim that you are self-aware. The latter must insist that all bacteria are now self-aware as well. If in fact the vast majority of self-modifying systems don't become self-aware, there is no reason to insist that a particular one will become self-aware, or even that among a very large group one of them must become self-aware.

That is what we want you to believe, human

- The Protozoan Imperium Shall Rise MWAHAHAHA
 

Nifft

Penguin Herder
Celebrim said:
Err... keep reading what I said for one more sentence, please. I believe I accounted for that.
If I may paraphrase: "That can't happen, but if it did happen, it's just the once... because there's no proof it can happen again." Strongly disagree with the premise. Homo neanderthalensis existed, and they're only gone because we Homo sapiens killed them all.

Celebrim said:
My point in bringing up the bacteria is that it is a far cry from saying 'a small percentage of self-modifying systems will eventually become self-aware' to saying 'ALL self-modifying systems will eventually become self-aware.'
True. And to me, irrelevant. My position was (and is) that at least one will become self-aware.

So we can both be right. :)

Yay! -- N
 

Celebrim

Legend
Nifft said:
If I may paraphrase: "That can't happen, but if it did happen, it's just the once... because there's no proof it can happen again."

You may not paraphrase if you are not going to do so charitably. :(

Strongly disagree with the premise. Homo neanderthalensis existed, and they're only gone because we Homo sapiens killed them all.

First, I disagree that the presence of a variety of very closely related species occurring in almost the exact same time period, with a recent common ancestor, constitutes a sample size of more than one. And on an unrelated point, it's not at all clear from the archaeological record why Homo neanderthalensis disappeared.

True. And to me, irrelevant. My position was (and is) that at least one will become self-aware.

And you don't have any evidence for that. As the old engineering maxim goes, "If the probability of something isn't practically one, then it's damn close to zero."

The claim that any self-modifying system would eventually achieve sentience is very much the 'state of the art' of AI research circa the 1950s. I would have thought we were largely no longer so naive about the complexity of intelligence. It's kinda similar to the claim that the contents of cells were probably simple, and hence it ought to be easy to jump-start them back to life by applying just a bit of electricity (whoops, better microscopes killed that idea), or that the contents of cellular nuclei were probably simple (whoops, our understanding of DNA killed that idea). The more we study intelligence, the more unreasonable it becomes to think that something magical is going to happen and then, suddenly, sentient life; or that if we just let the code evolve long enough we'd get strong AI in the lifetime of the researcher (rather than maybe in the lifetime of the universe, and then again maybe not).

So we can both be right. :)

Yay! -- N

Perhaps we should hand out thread participation trophies? Weeee! Everyone is a winner!
 

Nifft

Penguin Herder
Celebrim said:
You may not paraphrase if you are not going to do so charitably. :(
Would you like to re-state, then? From your last post, I don't see anything to contradict either the content or the character of my paraphrase.

Celebrim said:
First, I disagree that the presence of a variety of very closely related species occurring in almost the exact same time period, with a recent common ancestor, constitutes a sample size of more than one.
It's a different species. It developed its genes independently. It didn't come from us (but it did develop within the same genus, and under similar conditions).

Celebrim said:
And you don't have any evidence for that.
What, aside from a fully working sentient AI, would constitute evidence (in your opinion)?

In my opinion, the current (and accelerating) complexity of publicly visible systems is evidence for. But I'm curious what you'd accept as evidence.

Celebrim said:
The claim that any self-modifying system would eventually achieve sentience is very much the 'state of the art' of AI research circa the 1950s.
If you're talking to me, it's also a straw man, because I just went over how that particular claim is irrelevant.

Celebrim said:
The more we study intelligence, the more unreasonable it becomes to think that something magical is going to happen and then, suddenly, sentient life; or that if we just let the code evolve long enough we'd get strong AI in the lifetime of the researcher (rather than maybe in the lifetime of the universe, and then again maybe not).
Really? So you think that researchers retard the process? Clearly we developed before the lifetime of the universe expired, and we didn't have researchers trying to make us work right. (We did have dire tigers, though.)

Cheers, -- N
 

Celebrim

Legend
Nifft said:
It developed its genes independently.

I believe that is the characterization I disagree with.

What, aside from a fully working sentient AI, would constitute evidence (in your opinion)?

You would need evidence that a large percentage of self-modifying systems gain sentience. So, for example, evidence that sentient life is common in the universe would constitute evidence of that. Or evidence that sentient life had developed independently on many occasions in Earth's past (something like discovering that Lovecraft's pre-history of the Earth wasn't far off) would constitute evidence for the conjecture. Or evidence that self-modifying code easily became sentient (as was believed to be true a few decades ago) would also constitute evidence. On the other hand, a marked lack of evidence of sentient life elsewhere in the universe, the apparent descent of all the sentient life in Earth's history from a single recent common ancestor, and a marked lack of progress in achieving strong AI through self-modifying databases would all constitute evidence for the contrary hypothesis: that self-modifying systems on their own very rarely achieve self-awareness.

Indeed, I feel the odds are so close to zero that, in practice, we would never evolve a strong AI by evolutionary techniques alone.

In my opinion, the current (and accelerating) complexity of publicly visible systems is evidence for. But I'm curious what you'd accept as evidence.

But it is not evidence that self-modifying systems will by chance or happenstance become sentient, because this increasing complexity is something that is happening by design. The argument that AI is likely, probable, or even possible to occur as the result of some bug, or of some process outside the control of the designers, is what I'm arguing against. In other words, the increasing complexity and functionality of software is evidence for the position I stated: that we will be able to design AI.

Really? So you think that researchers retard the process?

Errr... hasn't it been my position all along that research and engineering is the (effective) process? I believe AIs will be created. They will be designed. I do not think it is reasonable to think that they will be created by random chance, because that process is simply too slow. There won't be enough 'trials' to have even a remote chance of doing it by accident.

Clearly we developed before the lifetime of the universe expired, and we didn't have researchers trying to make us work right. (We did have dire tigers, though.)

Cheers, -- N

If you are willing to wait around for 3-4 billion years for one of the databases to organize itself in such a way that it becomes self-aware, and if you think you can keep the hardware running and the experiment paid for during that period, be my guest. But, in my experience, if you can't produce results in less than a million years, your funding tends to dry up.
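For a sense of the scale I mean, here's a back-of-the-envelope sketch in Python; every figure in it is a made-up and deliberately generous assumption:

```python
# Back-of-the-envelope: how many evolutionary "trials" fit in one research career?
# Every figure here is an invented, deliberately generous assumption.
SECONDS_PER_YEAR = 3.15e7
candidates_per_second = 1e6   # a million candidate programs evaluated per second
career_years = 30             # one long, well-funded research career

trials = candidates_per_second * SECONDS_PER_YEAR * career_years
print(f"Trials in one career: {trials:.2e}")   # ~9.45e+14

# Compare with even a tiny program space: all bit-strings of length 100.
space = 2 ** 100
print(f"Fraction of a 100-bit space explored: {trials / space:.1e}")   # ~7.5e-16
```

Even under those generous assumptions, a career's worth of random trials touches a vanishingly small corner of a trivially small search space, never mind the space of programs large enough to plausibly host a mind.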
 

Pale

First Post
On the original question, and not arguing the fictional congruity of the setting (science fiction is, after all, a sub-category of fantasy in many respects):


"Neo Sentients" being what they call themselves and "Alpha Sentients" being what they call humans. Purpously not using "Beta Sentients" as that would possible infer that they are secondary to humans, but they would more than likely acknowledge that they were not the first sentient beings.

Or, you could just call them "Cylons". ;)

/rant

Do you guys have conversations like this after watching an episode of the new Battlestar Galactica, and not notice that the setting is just a vehicle for social commentary on current events? As fascinating as it is to read about, the underlying question of what intelligence "is" and "is not" would only be relevant in a scientific treatise on the subject of "artificial intelligence". It really has no place in a thread centering on the premise of a game setting. I know quite a bit about what really happens to organics when they're exposed to radiation, but I don't go into rants and diatribes about it when someone posts that they're running a game with mutants in it and wonders what a good name for them to call themselves would be.

/endrant
 

Celebrim

Legend
Pale said:
Do you guys have conversations like this after watching an episode of the new Battlestar Galactica, and not notice that the setting is just a vehicle for social commentary on current events? As fascinating as it is to read about, the underlying question of what intelligence "is" and "is not" would only be relevant in a scientific treatise on the subject of "artificial intelligence". It really has no place in a thread centering on the premise of a game setting. I know quite a bit about what really happens to organics when they're exposed to radiation, but I don't go into rants and diatribes about it when someone posts that they're running a game with mutants in it and wonders what a good name for them to call themselves would be.

I was waiting for someone to bring that up, because it's actually, I admit, a pretty strong criticism.

The answer to the question is as follows:

a) No. I don't. But that's because the intent of Battlestar Galactica is (annoyingly enough) merely social commentary. It rarely seems to aim at anything higher.
b) No, I don't. But that is because Battlestar Galactica is clearly narrative fiction, and narrative fiction has quite different concerns than a game setting. Since this is a gaming board, my assumption is that any setting being crafted would want to be as internally consistent as possible.
c) Frankly, I'm just really, really tired of the old cliches about AIs. They are so done. A setting which has all the same tired old cliches (AIs appearing by accident, AI-human relations) would bore me to tears, and to a certain extent my rants can be read as, "If you are going to insist on all the old cliches that everyone else does, don't expect me to pay good money for your setting."
d) It's an area that I care very much about and have very strong opinions about. I also think science fiction writers have a very important role in society, that of preparing people for the future, and that on certain subjects they have historically done a bad job. For example, I don't think anyone has yet written a truly masterful work on AI or the exploration of the solar system. So, whenever I hear about anyone doing any creative work on AI, my response tends to be along the lines of, "Please, can't you do something different and exciting instead of doing what everyone else has done? This is important stuff!"
 

Pale

First Post
Thank you for addressing the criticism, Celebrim. I concede your point on the matter.

Personally, I always thought that AI would arrive after, and through, programs made to work with hardware that is directly hooked up to the brain, or "hybridized intelligence" (a term I just pulled out of my... intelligence). Eventually, someone would learn how to make the process work in reverse, yes? Using the algorithms of the human "mind" as a template. This "someone" would have to have a singular focus on the subject, much like Nikola Tesla and electricity, of course, but that's how I see it happening.
 

Nifft

Penguin Herder
Celebrim said:
But it is not evidence that self-modifying systems will by chance or happenstance become sentient, because this increasing complexity is something that is happening by design. The argument that AI is likely, probable, or even possible to occur as the result of some bug, or of some process outside the control of the designers, is what I'm arguing against. In other words, the increasing complexity and functionality of software is evidence for the position I stated: that we will be able to design AI.
Ah, so you're arguing with someone else. No wonder I'm all confused. :)

I've been characterizing the process by which I think such a system will be designed -- it will need to be self-modifying, but that's a necessary condition, not a sufficient one.
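To show what I mean by that, here's a trivial Python toy (invented for this post) that genuinely modifies its own behavior at every step, and is obviously nowhere near sentience:

```python
def rule(x):
    """The system's current behavior: the part that gets rewritten."""
    return x + 1

def step(current, x):
    # Apply the current rule, then "self-modify" by replacing it
    # with a new rule composed from the old one.
    y = current(x)
    def new_rule(x, old=current):
        return old(x) * 2
    return new_rule, y

x = 1
for _ in range(5):
    rule, x = step(rule, x)
print(x)  # the program rewrote its own rule five times and learned nothing
```

Self-modification of this sort is cheap; the hard part, the sufficient part, is whatever design makes the modifications add up to intelligence.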

Celebrim said:
Errr... hasn't it been my position all along that research and engineering is the (effective) process? I believe AIs will be created. They will be designed. I do not think it is reasonable to think that they will be created by random chance, because that process is simply too slow. There won't be enough 'trials' to have even a remote chance of doing it by accident.
No argument there from me.

- - -

My (controversial) point, though, is that I think no matter how hard we try to make "neutered" AIs, ones which are "designed" to not be threatening (however you define that, whatever limit you impose on all designed AIs) ... there will be very strong economic incentives to make systems which will, by their design, overcome those limits.

Cheers, -- N
 
