jmucchiello said:
Your statements about trading systems and guided missile systems are probably oppositely correct. There is probably more modern AI in missile systems than in trading systems. Most trading systems programmers wouldn't know rules-based programming from a machine opcode.
I've worked on both, so thanks.
Regardless of the state of the current art, the problem space for missiles is far more restricted than the problem space for economics.
(There's also a lack of constancy. Funding for cool missile guidance systems -- or more accurately, automated target recognition systems -- is subject to political whim, and competition is wonky due to secrecy issues. Success is also hard to measure -- because success and failure become political issues.)
jmucchiello said:
AI does not require self-modification of the running program, it just requires the program to be able to execute any subroutine from any other based on its current dataset. Hard AI is not about writing self-modifying code.
I skipped a few steps, so I'll back-track a bit.
1/ Let's assume that everyone agrees how to write "safe" programs. Let's assume these programs follow the above laws: all programs are satisfied with their roles, etc.
2/ Let's assume that people are good about personal computer security -- we don't want any distributed zombie / worm entity to spontaneously gain sentience, and thus none does.
3/ Let's assume that, for any well-defined information manipulation task, we can write a program to perform that task better than a human can.
4/ So, under what conditions could we expect a group (with the resources) to break these "safe" rules? Who could profit from faster and smarter?
jmucchiello said:
It's about making models that are adaptable at run-time. IOW, the first true hard AI will probably be written in a language that is self-modifying by design (Lisp, Smalltalk, etc.) but the core running program (Lisp interpreter, Smalltalk environment) will not be recompiled by the AI.
Er... right. Self-modifying. You seem to agree?
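To illustrate the distinction jmucchiello is drawing, here's a minimal sketch (hypothetical, not from the thread, and in Python rather than Lisp for brevity): the running program rewrites its own rule table at run-time, while the interpreter underneath it is never recompiled.

```python
# Sketch: run-time adaptability without touching the host interpreter.
# The program "self-modifies" by editing its own dispatch table --
# loosely analogous to redefining functions in a live Lisp or
# Smalltalk image. All names here are illustrative.

def buy(signal):
    return "buy"

def sell(signal):
    return "sell"

def hold(signal):
    return "hold"

# Dispatch table the program can rewrite while it runs.
rules = {"up": buy, "down": sell}

def decide(signal):
    return rules.get(signal, hold)(signal)

# Run-time self-modification: swap in a different behavior for "down".
rules["down"] = hold

print(decide("down"))  # hold
```

The point is that the adaptable part lives in data the program controls (the table), not in the interpreter's compiled code.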
jmucchiello said:
This makes no sense. Trading systems do not make intelligent decisions. They follow rules. They are computer programs because the number of decisions to make is greater than a human can make in the required time periods. But read the job websites. Trading projects are always written in C++ or Java. These are not AI languages. They are not designed for writing heuristics-driven software. They just perform if statements rather quickly. There's no finesse there.
You... think code is AI if it's "written in an AI language"?
Seriously, though, consider the implications of what you've just said. There are a bunch of fast-but-dumb decisions being made (according to simple rules). If something fast-and-smart were competing with the fast-but-dumb guys, who would win? Do you think money could be made by owning fast-and-smart?
Once the fast-and-smart guy exists, everyone will need to be fast-and-smart. Then one guy will come along and be fast-and-smarter -- better able to analyze and adapt to the environment, which is merely fast-and-smart.
Humans can currently deal with the trade environment's rate of change, even if we can't deal with the volume of trades. What happens when we start adding actual smarts to the trading algorithms? Everyone will have to do it, and (eventually) everyone will have to entrust the modification of these algorithms to other algorithms.
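To make that contrast concrete, here's a toy sketch (all names, thresholds, and the update rule are hypothetical): a fast-but-dumb trader is a fixed if statement, while a fast-and-smart one lets a second routine modify the rule itself based on recent outcomes.

```python
# Hypothetical sketch: a fixed rule vs. a rule tuned by another algorithm.

def dumb_trade(price, moving_avg):
    # Fast but dumb: a hard-coded if statement, per the quote above.
    return "buy" if price < moving_avg else "sell"

class SmartTrader:
    # Fast and smart: the threshold itself is adjusted by an
    # adaptation step -- an algorithm modifying the trading algorithm.
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def trade(self, price, moving_avg):
        return "buy" if price < moving_avg * self.threshold else "sell"

    def adapt(self, recent_pnl, step=0.01):
        # Toy update rule: loosen the buy threshold after losses,
        # tighten it after gains.
        self.threshold += step if recent_pnl < 0 else -step

t = SmartTrader()
t.adapt(recent_pnl=-5.0)  # after a loss, threshold drifts to 1.01
print(t.trade(price=100.0, moving_avg=100.0))  # buy
```

Nothing here is "smart" yet, of course; the point is only the structure: once `adapt` exists, the rule is no longer fixed by its author, and whoever automates that step best has the edge.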
Why do I think things might happen this way? Because there's a lot of money to be made for the first guy to do it. And it turns out people like money.
Cheers, -- N