
What would AIs call themselves?

Flynn

First Post
Perhaps something that reflects their inorganic origins, such as Silicates (referencing silicon circuit boards) or Virtuals (based on the concept of virtual realities and virtual intelligences on a network). Whatever is decided, I think the term Bots would be a derogatory term used for them by organics, and would be considered a racial slur.

Hope This Helps,
Flynn
 



Warbringer

Explorer
'beyonds' - beyond human
'negs' - next generations
'genners' - assuming they are bio-forms (special genetic form)
'angels' - assuming they have beyond-human abilities (a la Blade Runner) and maybe fought in a war... also can't reproduce, even sexless
'chosen' -
'nobirths' -
 


Huw

First Post
C.S. Lewis used the term Hnau or Nau to describe any sentient being in The Cosmic Trilogy. IIRC it was a Martian term, due to there being three intelligent species on Mars who needed a common term to describe themselves.

Sentient robots and AIs might take on the term hnau. Alternatively, they could just use an abbreviation like RB (rational being) or SE (sentient entity).
 


jmucchiello

Celebrim, I feel for you.

Nifft said:
The first AI rights will probably be identical to corporate rights, because the first true AIs will express themselves through control of a business unit's money. (Perhaps they already do. Automated trading is one of the largest applications of AI right now. Has anyone allowed a trading AI to modify itself?)

Guided missile AIs? Don't be silly. The missile won't need to re-program itself in flight.
Your statements about trading systems and guided missile systems are probably backwards: there is likely more modern AI in missile systems than in trading systems. Most trading systems programmers wouldn't know rules-based programming from a machine opcode.*
Nifft said:
But some jobs do require self-modification. How do you get self-modification with total self-satisfaction? You don't. And how do you preserve any particular Robotic Law in a being able to modify itself? Again, you don't.
AI does not require self-modification of the running program; it just requires the program to be able to execute any subroutine from any other based on its current dataset. Hard AI is not about writing self-modifying code. It's about making models that are adaptable at run-time. IOW, the first true hard AI will probably be written in a language that is self-modifying by design (Lisp, Smalltalk, etc.) but the core running program (Lisp interpreter, Smalltalk environment) will not be recompiled by the AI.
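FWIW, the distinction being drawn here (a fixed program whose behavior lives in runtime-modifiable data, versus code that literally rewrites itself) can be sketched in a few lines. This is an illustrative toy, not from anyone's post; every name and number in it is invented:

```python
# Toy sketch: a fixed "interpreter" loop whose behavior lives entirely in a
# mutable rule table. The running program never rewrites its own code; it
# only rewrites the data that the fixed loop consults.

def make_agent():
    # The rule table is plain data: (predicate, action-name) pairs.
    rules = [
        (lambda obs: obs > 10, "sell"),
        (lambda obs: obs < 2, "buy"),
    ]

    def decide(obs):
        # Fixed dispatch loop: scan the rules, fire the first match.
        for predicate, action in rules:
            if predicate(obs):
                return action
        return "hold"

    def learn(predicate, action):
        # "Adaptation" here means editing the rule table at run-time.
        rules.insert(0, (predicate, action))

    return decide, learn

decide, learn = make_agent()
assert decide(15) == "sell"
learn(lambda obs: obs == 15, "panic")
assert decide(15) == "panic"  # behavior changed without recompiling anything
```

The `learn` call changes what the agent does, but the dispatch loop itself is never rewritten or recompiled, which is the run-time-adaptable-model point the post is making.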
Nifft said:
At some point it will become economically suicidal to not put an AI in charge of a company's trading strategies. We can expect corporations to act as short-sightedly and selfishly as they have all along: they will do something potentially dangerous if it means more money.
This makes no sense. Trading systems do not make intelligent decisions; they follow rules. They are computer programs because the number of decisions to make is greater than a human can handle in the required time frame. But read the job websites: trading projects are always written in C++ or Java. These are not AI languages; they are not designed for writing heuristics-driven software. They just perform if statements rather quickly. There's no finesse there.
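To make the "just if statements, no finesse" claim concrete, here is a hedged toy of the kind of rule-following system being described. All thresholds, prices, and names are invented for illustration:

```python
# Hypothetical toy of a rule-based trading filter: no learning, no
# heuristics, just fixed thresholds checked on every tick.

def trade_signal(price, moving_avg, position):
    # Each branch is a hard-coded business rule; nothing here adapts.
    if price > moving_avg * 1.02 and position <= 0:
        return "BUY"
    if price < moving_avg * 0.98 and position >= 0:
        return "SELL"
    return "HOLD"

# (price, moving average, current position) per tick -- invented numbers.
ticks = [(101.0, 98.0, 0), (95.0, 98.0, 1), (98.5, 98.0, 0)]
signals = [trade_signal(p, ma, pos) for p, ma, pos in ticks]
# signals == ["BUY", "SELL", "HOLD"]
```

The value of such a system is purely that it evaluates these branches far faster than a human trader could, not that it reasons about anything.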

* RANT: The .com explosion at the turn of the century flooded IT departments with "programmers" who don't understand how computers work. It is sad to try to explain to someone that their error happened because they trashed their stack frame, when they just don't understand how that matters since their code doesn't have a "stack frame" in it. Similarly, they don't know anything about AI except that the kid from The Sixth Sense is in it.

Oh, if I forget what I know about AI and answer the OP, I'd say they'd call themselves Superior Sentient Non-Meatbags.
 


Nifft

Penguin Herder
jmucchiello said:
Your statements about trading systems and guided missile systems are probably backwards: there is likely more modern AI in missile systems than in trading systems. Most trading systems programmers wouldn't know rules-based programming from a machine opcode.
I've worked on both, so thanks. :)

Regardless of the state of the current art, the problem space for missiles is far more restricted than the problem space for economics.

(There's also a lack of constancy. Funding for cool missile guidance systems -- or more accurately, automated target recognition systems -- is subject to political whim, and competition is wonky due to secrecy issues. Success is also hard to measure -- because success and failure become political issues.)


jmucchiello said:
AI does not require self-modification of the running program; it just requires the program to be able to execute any subroutine from any other based on its current dataset. Hard AI is not about writing self-modifying code.
I skipped a few steps, so I'll back-track a bit.

1/ Let's assume that everyone agrees how to write "safe" programs. Let's assume these programs follow the above laws: all programs are satisfied with their roles, etc.

2/ Let's assume that people are good about personal computer security -- we don't want any distributed zombie / worm entity to spontaneously gain sentience, and thus none does.

3/ Let's assume that, for any well-defined information manipulation task, we can write a program to perform that task better than a person can.

4/ So, under what conditions could we expect a group (with the resources) to break these "safe" rules? Who could profit from faster and smarter?


jmucchiello said:
It's about making models that are adaptable at run-time. IOW, the first true hard AI will probably be written in a language that is self-modifying by design (Lisp, Smalltalk, etc.) but the core running program (Lisp interpreter, Smalltalk environment) will not be recompiled by the AI.
Er... right. Self-modifying. You seem to agree?


jmucchiello said:
This makes no sense. Trading systems do not make intelligent decisions; they follow rules. They are computer programs because the number of decisions to make is greater than a human can handle in the required time frame. But read the job websites: trading projects are always written in C++ or Java. These are not AI languages; they are not designed for writing heuristics-driven software. They just perform if statements rather quickly. There's no finesse there.
You... think code is AI if it's "written in an AI language"? ;)

Seriously, though, consider the implications of what you've just said. There's a bunch of fast but dumb decisions being made (according to simple rules). If something fast and smart were competing with the fast-but-dumb guys, who would win? Do you think money could be made by owning fast-and-smart?

Once the fast-and-smart guy exists, everyone will need to be fast-and-smart. Then one guy will come along and be fast-and-smarter -- better able to analyze and adapt to the environment, which is merely fast-and-smart.

Humans can currently deal with the trade environment's rate of change, even if we can't deal with the volume of trades. What happens when we start adding actual smarts to the trading algorithms? Everyone will have to do it, and (eventually) everyone will have to entrust the modification of these algorithms to other algorithms.
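That last step, entrusting the modification of algorithms to other algorithms, can be sketched minimally. This is a speculative toy matching the argument above, with all figures and function names invented:

```python
# Hedged sketch of "algorithms modifying algorithms": an outer tuner adjusts
# the threshold parameter of an inner trading rule based on simulated profit,
# doing the adjustment a human analyst would otherwise make by hand.

def run_strategy(threshold, prices):
    # Inner algorithm: buy at the first price below threshold, sell at the
    # final price. Returns the profit of that single round trip.
    for p in prices:
        if p < threshold:
            return prices[-1] - p
    return 0.0  # never bought

def tune(prices, candidates):
    # Outer algorithm: picks the inner algorithm's parameter for it.
    return max(candidates, key=lambda t: run_strategy(t, prices))

history = [10.0, 8.0, 9.0, 12.0]   # invented price series
best = tune(history, candidates=[7.0, 9.0, 11.0])
# best == 9.0 (buys at 8.0, sells at 12.0, for a profit of 4.0)
```

Once the outer loop is trusted to set the parameters, the human is no longer in the modification path, which is exactly the handoff being predicted.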

Why do I think things might happen this way? Because there's a lot of money to be made for the first guy to do it. And it turns out people like money. :)

Cheers, -- N
 

