Philosophical prospective future question of the week

nakia said:
One thing to consider is what those super-sentient computers can't do. Can they compose original art? Can they write stories? How are they at teaching and raising children?

You also have to think about the distribution of the technology.

If they're sentient, truly sentient, then they should be able to do all of those things. I think any self-aware intelligence is going to have a sense of aesthetics. It might not be anything we could understand, though.

I think if you get self-replicating machinery, especially real honest-to-gosh nanotech that's capable of rebuilding atomic structures (rocks to bread? No problem), you get rid of the subsistence farmers and the like. There's no need for them to do that anymore.

Really, though, we can't imagineer things much past the point of singularity. What we think of as human will change in ways we can't conceive of. Just some of the stories I've read... the difference between tech and not-tech breaks down. Everything is a computer, and is alive. People switch their consciousnesses between bodies, or into animals, or into clouds of microscopic sensors, or clone their minds a dozen different times and go off to lead separate lives only to merge back later with a dozen lifetimes of experience. There is no difference between AI and non-AI, either; consciousness is consciousness, regardless of meat or not-meat. That would be just the tip of the iceberg, too.
 


Ok, now one answer from a guy that actually works with nanotechnology every day...
:cool:
It's not what people imagine... At all. Nor would we want it to be...
And no nanobot is going to transform rock into bread, EVER! Transmuting elements is counterproductive (i.e., the energy required far outweighs any gain, and we're talking a difference of several orders of magnitude here).
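The orders-of-magnitude claim can be made concrete with a back-of-envelope comparison (illustrative round numbers, not a precise model): chemistry rearranges electron bonds at the scale of electron-volts, while turning one element into another involves nuclear binding energies at the scale of mega-electron-volts per nucleon.

```python
# Rough energy scales (illustrative round numbers):
chemical_bond_eV = 4.0            # typical covalent bond: a few eV
nuclear_per_nucleon_eV = 8.0e6    # nuclear binding energy: ~8 MeV per nucleon

# Ratio of the energy scale of transmutation to ordinary chemistry
ratio = nuclear_per_nucleon_eV / chemical_bond_eV
print(f"nuclear vs. chemical energy scale: ~{ratio:.0e}x")  # ~2e+06x
```

So even before counting any inefficiency, rearranging nuclei sits roughly a million times above the energy scale of just baking actual bread.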
What nanobots can do (and honestly the term "bots" is really misleading) is attach to a chemical group and start or catalyse a chemical reaction. Their "programming" is actually a matter of adding functional groups to their surface.
The real advances in nanotechnology are in developing new materials with custom properties, or properties once thought impossible.
Now for the sentient computer part around 2020...
:confused:
Good joke... There are theoretical limits to computer speed, and we're already getting near them. Sentience is NOT a function of pure processing power: we could make computers that make a decent attempt at looking sentient, but the list of reasons why it wouldn't work / shouldn't be tried is quite long.
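One concrete physical bound behind those "theoretical limits" is Landauer's principle, which sets a minimum energy cost for erasing one bit of information at a given temperature. This is a real thermodynamic result, used here only to illustrate that hard floors exist:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K

# Landauer's bound: erasing one bit costs at least k_B * T * ln(2)
landauer_J = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer_J:.2e} J per bit")  # ~2.87e-21 J
```

Tiny per bit, but it's a floor that irreversible computation can never beat, no matter how clever the hardware gets.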
BTW, the question of what man would do in a society where labor is abolished has been around since the 18th century. It really rests on an unfounded assumption: that technology decreases the amount of work required.
It just ain't so. The type of work slowly changes, but we still work. It's only recently that we have had this impression, because we compare ourselves with the workers of the industrial revolution, or see that the amount of work for factory workers has decreased since that revolution. But that's a biased way of looking at the problem: the period from the industrial revolution to a few decades back has been an exception in history. The amount of work suddenly rose and then decreased back to a regular level.
ps: ask any programmer if it is a good idea to have computers write their own programs and repair themselves and you've just given him nightmares. You do not want a program to analyse what's wrong in another complex program, not without human supervision anyway.
 

I don't see any theoretical barrier to machine sentience, but we don't have all the pieces yet. We may reach the necessary level of processing power by 2030, but there remain the issues of education and training. I wouldn't be surprised if by 2030 someone had created a 'machine mind' that was as intelligent as a cat, for example--but human-level intelligence will likely take years of training. As the technology evolves, training could be accelerated in a simulated environment, but that's still a long way away...

I think Iain Banks in his Culture novels does a good job of imagining the nature of a society with vastly intelligent AIs and humans coexisting. The machines ('Minds') inherit a certain number of moral values from their creators, and set a global agenda while serving useful functions such as managing starships, space colonies, and the like; many Minds spend most of their time engaged in detailed and highly theoretical simulations (for example, picking a random set of fundamental constants and following the evolution of this hypothetical universe). The humans largely spend their time focusing on interpersonal relationships and varying levels of debauchery, but those who want to be productive or useful can generally find the means to be.

It's clear, however, that the world will need to make the transition to a new kind of economy even before the machine overlords come. The illusion of economic sustainability through increased production will have to give way to the reality of diminishing resources, for one thing. Still, with high technology I'm confident that the basic needs of all humans can be exceeded with sustainable production. Once this point is reached, the key thing is to grow up enough that the world accepts the cost of production and distribution (just as we accept taxes) and provides these basic services to everyone.

Then you're left with a large segment of the population free to be idle, or, to put it another way, free to engage in being human instead of struggling for survival. This will of course lead to a lot of vulgar and banal behavior, but I think that's worth the price. I believe that an educated human freed from the need to work for food and shelter will, more often than not, try to do something useful with their time.

Another option is that the humans escape into simulations themselves, downloading their consciousness into the machine infrastructure. That seems highly probable as well.

Ben
 

I took a whole class on Kurzweil and his theories of AI a few semesters ago. I don't have time to comment right now, but suffice to say that - for the most part - I don't agree with anything he says. :p :)
 

The creation of true AI doesn't seem likely right now. All current AIs that try to imitate the human brain only work in extremely limited scopes, and the problem isn't processing power or memory. The more adaptable you try to make them, the worse they become at actually solving problems. The only AIs that actually do something interesting can only deal with a very limited scope - "avoiding obstacles", "recognizing a particular pattern", "faking a conversation about a certain topic", etc. We're still missing some vital piece of theory; resources are irrelevant until we get that, and it could very well take centuries. For much the same reasons, I don't think we'll be able to invent something that speeds up learning any time soon.

What I think is more likely - and more interesting - is the development of a direct connection between brain and machine, à la cyberpunk, and the ability to make a perfect scan of the body to the point of being able to run a simulation of a scanned brain on a sufficiently powerful computer. Both are nowhere close right now, but I think they are somewhat easier than building a true AI from scratch. After those two developments, you'd soon get telepathy (ICQ in your brain), limited omniscience (Google in your brain), immunity to disease (download your mind into an advanced robot body), immortality (back up your mind daily), instantaneous travel (zip into a remote robot body), limited time dilation (run the simulation faster), and miscellaneous perks from being able to control machines.

That's how I'd see it if I got it today - people born with those kinds of enhancements would adapt to them in a manner that I can't foresee.
 

Turanil said:
The thing is that I want to write a sci-fi novel based on these projections, but have a hard time to fathom what society could be like if it is going to be true.
Vernor Vinge, A Fire Upon the Deep, 1991.

Vinge's given a lecture about the AI phenomenon and the end of the Human Era, as well, if I remember correctly. It might be on the net out there somewhere. Basically, Vinge's book imagines that the nature of time changes the further (or closer) one gets to the galactic core, and the changes in time allow super-computers to achieve true AI status. Fascinating book. Not sure if I agree with all of his theories, but interesting nonetheless.

Warrior Poet
 

As a general note - this whole thing can get difficult to discuss, simply because "sentience" is not particularly well-defined. We typically link it with being self-aware, and some ability to think and plan. But, for example, does that really imply the ability to think creatively, produce artwork, have a sense of esthetics?
 

Umbran said:
But, for example, does that really imply the ability to think creatively, produce artwork, have a sense of esthetics?
Where the sense of esthetics is concerned, some scientists conducted a study a while ago on what most men consider attractive female faces. They found some trends and were thus able to mathematically (so to speak) create, as a visual simulation, a "beautiful" female face from those trends. As such, I believe that a sense of esthetics could be implemented in an AI, one that would reflect general senses of esthetics.
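The averaging idea behind such studies can be sketched in a few lines (hypothetical toy data; real studies align facial landmarks before averaging, a step this skips):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for 20 grayscale face images, 64x64 pixels each
faces = [rng.random((64, 64)) for _ in range(20)]

# Pixel-wise mean: the kind of "averaged face" those studies produced
average_face = np.mean(faces, axis=0)
print(average_face.shape)  # (64, 64)
```

An AI given such an averaged template plus the measured trends would have at least a crude, statistical stand-in for one narrow kind of esthetic judgment.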

Nonetheless, I agree with one thing: before creating a truly sentient AI, we must first determine the difference (if any) between being truly living and just functioning, and between being truly conscious and just very clever. What I see when I read Ray Kurzweil is that he envisions a computer with the efficiency of a human brain, but I don't see how that makes it sentient / conscious. In any case, my novel will assume a few truly sentient super AIs, while most personal computers are just very clever machines.

And Zappo, Ray Kurzweil foresees AI, but also a world like the one you describe, where humans merge with machines and improve themselves through bio-genetics, effectively becoming cyborgs.
 

Turanil said:
Before creating a true sentient AI, we must first determine the difference (if any) between being truly living and just functioning, and between being truly conscious and just very clever.
I suggest reading the Vinge book I mentioned earlier. A lot of it deals with exactly this issue, from the AI perspective, the human perspective, and an alien race's perspective (the latter especially focuses on collective intelligence versus individual intelligence, and does so in an interesting way).

Warrior Poet
 
