The Dichotomy of Design and Evolution
The two processes for building complex systems present a fundamental fork in the path to the future.
I just published an article in Technology Review that was constrained by a word-count limit. Here is a longer version, and a forum for discussion.
Many of the most interesting problems in computer science, nanotechnology, and synthetic biology require the construction of complex systems. But how would we build a really complex system – such as a general artificial intelligence (AI) that exceeded human intelligence?
Some technologists advocate design; others prefer evolutionary search algorithms. Still others would selectively blend the two, hoping to capture the best of both paradigms while avoiding their limitations. But while both processes are powerful, they are very different, and they are not easily combined. Rather, they present divergent paths.
Designed systems have predictability, efficiency, and control. Their subsystems are easily understood, which allows their reuse in different contexts. But designed systems also tend to break easily, and, so far at least, they have conquered only simple problems. Compare, for example, Microsoft code to biological code: Office 2004 is larger than the human genome.
By contrast, evolved systems are inspiring because they demonstrate that simple, iterative algorithms, distributed over time and space, can accumulate design and create complexity that is robust, resilient, and adaptive within its accustomed environment. In fact, biological evolution provides the only “existence proof” that an algorithm can produce complexity that transcends its antecedents. Biological evolution is so inspiring that engineers have mimicked its operations in areas such as artificial evolution, genetic programming, artificial life, and the iterative training of neural networks.
But evolved systems have their disadvantages. For one, they suffer from “subsystem inscrutability”, especially within their information networks. That is, when we direct the evolution of a system or train a neural network, we may know how the evolutionary process works, but we will not necessarily understand how the resulting system works internally. For example, when Danny Hillis evolved a simple sort algorithm, the process produced inscrutable and mysterious code that did a good job at sorting numbers. But had he taken the time to reverse-engineer his evolved system, the effort would not have provided much generalized insight into evolved artifacts.
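Hillis evolved sorting networks, and the flavor of his experiment is easy to reproduce in miniature. The sketch below is not his setup (he used host-parasite coevolution on a Connection Machine); it is a hypothetical toy in which an individual is a list of compare-exchange gates, fitness is the number of random inputs sorted correctly, and every parameter is an illustrative assumption:

    import random

    WIRES, NET_LEN = 6, 30     # network width and gate count (illustrative)
    POP, GENS, TESTS = 100, 100, 32

    def random_gate():
        i, j = sorted(random.sample(range(WIRES), 2))
        return (i, j)                      # standard form: low wire, high wire

    def apply_network(net, values):
        v = list(values)
        for i, j in net:                   # each gate is a compare-exchange
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        return v

    # A fixed test set invites overfitting; Hillis countered that by
    # co-evolving the test cases themselves ("parasites"). This sketch
    # skips that refinement.
    tests = [[random.randrange(100) for _ in range(WIRES)] for _ in range(TESTS)]
    targets = [sorted(t) for t in tests]

    def fitness(net):
        return sum(apply_network(net, t) == s for t, s in zip(tests, targets))

    def mutate(net):
        child = list(net)
        child[random.randrange(NET_LEN)] = random_gate()
        return child

    def crossover(a, b):
        cut = random.randrange(1, NET_LEN)
        return a[:cut] + b[cut:]

    pop = [[random_gate() for _ in range(NET_LEN)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:POP // 5]             # truncation selection: keep top 20%
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(POP - len(elite))]

    best = max(pop, key=fitness)
    print("tests passed:", fitness(best), "of", TESTS)
    print("evolved gates:", best)          # works, but reads as a jumble

Run it and the winning gate sequence usually sorts the test set, yet it reads as an arbitrary shuffle of swaps rather than a recognizable algorithm, which is exactly the inscrutability the paragraph above describes.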
Why is this? Stephen Wolfram’s notion of computational irreducibility (a consequence of his principle of computational equivalence) suggests that simple, formulaic shortcuts for understanding evolution may never be discovered. We can only run the iterative algorithm forward to see the results; the intermediate computational steps cannot be skipped.
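Wolfram’s standard illustration is an elementary cellular automaton such as Rule 30, for which no closed-form shortcut to the state at step n is known; you simply iterate. A minimal sketch (the width, step count, and single-cell seed are arbitrary choices):

    RULE = 30                      # Rule 30: a simple rule with complex output
    WIDTH, STEPS = 63, 30          # arbitrary canvas size

    row = [0] * WIDTH
    row[WIDTH // 2] = 1            # a single live cell in the center

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in row))
        # The only known way to reach step n is to compute steps 1..n-1 first:
        # each cell is the rule-table lookup of its three-cell neighborhood.
        row = [(RULE >> (4 * row[(i - 1) % WIDTH]
                         + 2 * row[i]
                         + row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]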
Thus, if we evolve a complex system, it is a black box defined by its interfaces. We cannot easily apply our design intuition to improve upon its inner workings. We can’t even partition its subsystems without a serious effort at reverse engineering. And until we can understand the interfaces between partitions, we can’t hope to transfer a subsystem from one evolved complex system to another (unless they have co-evolved).
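In that sense, validating an evolved artifact means exercising its interface rather than reading its internals. A hedged sketch of that stance, where sort_fn stands in for any opaque evolved component (the helper name and parameters are mine, not from the article):

    import random

    def black_box_check(sort_fn, trials=1000, size=6, hi=99):
        """Probe an opaque component purely through its I/O interface."""
        for _ in range(trials):
            data = [random.randint(0, hi) for _ in range(size)]
            if sort_fn(data) != sorted(data):
                return False       # a counterexample, but no insight into why
        return True                # growing confidence, not proof, and still
                                   # no understanding of the inner workings

    # Usage: drop any evolved sorter behind a plain function boundary.
    print(black_box_check(sorted))  # built-in stands in for an evolved artifact

Note that passing such a check tells us the interface behaves as specified on the inputs we tried, and nothing more; the internals remain as opaque as before.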
A grand engineering challenge therefore remains: can we integrate the evolutionary and design paths to exploit the best of both? Can we transcend human intelligence with an evolutionary algorithm yet maintain an element of control, or even a bias toward friendliness?
The answer is not yet clear. If we artificially evolve a smart AI, it will be an alien intelligence defined by its sensory interfaces, and understanding its inner workings may require as much effort as we are now expending to explain the human brain. Assuming that computer code can evolve much faster than biological reproduction rates, it is unlikely that we would take the time to reverse engineer the intermediate generations, given how little we could do with the knowledge. We would let the process of improvement continue.
Humans are not the end point of evolution. We are inserting ourselves into the evolutionary process. The next step in the evolutionary hierarchy of abstractions will accelerate the evolution of evolvability itself.
(Precursor threads from the photoblog: Stanford Singularity Summit, IBM Institute on Cognitive Computing, Santa Fe Institute’s evolution & scaling laws, Cornell’s replicating robots, and a TED Party)