The J Curve

Thursday, July 13, 2006

The Dichotomy of Design and Evolution

The two processes for building complex systems present a fundamental fork in the path to the future.

I just published an article in Technology Review that was constrained by a word limit. Here is a longer version, and a forum for discussion.

Many of the most interesting problems in computer science, nanotechnology, and synthetic biology require the construction of complex systems. But how would we build a really complex system – such as a general artificial intelligence (AI) that exceeded human intelligence?

Some technologists advocate design; others prefer evolutionary search algorithms. Still others would selectively conflate the two, hoping to capture the best of both paradigms while avoiding their limitations. But while both processes are powerful, they are very different, and they are not easily combined. Rather, they present divergent paths.

Designed systems have predictability, efficiency, and control. Their subsystems are easily understood, which allows their reuse in different contexts. But designed systems also tend to break easily, and, so far at least, they have conquered only simple problems. Compare, for example, Microsoft code to biological code: Office 2004 is larger than the human genome.

By contrast, evolved systems are inspiring because they demonstrate that simple, iterative algorithms, distributed over time and space, can accumulate design and create complexity that is robust, resilient, and adaptive within its accustomed environment. In fact, biological evolution provides the only “existence proof” that an algorithm can produce complexity that transcends its antecedents. Biological evolution is so inspiring that engineers have mimicked its operations in areas such as artificial evolution, genetic programming, artificial life, and the iterative training of neural networks.

But evolved systems have their disadvantages. For one, they suffer from “subsystem inscrutability”, especially within their information networks. That is, when we direct the evolution of a system or train a neural network, we may know how the evolutionary process works, but we will not necessarily understand how the resulting system works internally. For example, when Danny Hillis evolved a simple sorting algorithm, the process produced inscrutable and mysterious code that did a good job of sorting numbers. But had he taken the time to reverse-engineer his evolved system, the effort would not have provided much generalized insight into evolved artifacts.
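To make the Hillis example concrete, here is a minimal toy sketch (not Hillis's actual setup, which famously co-evolved sorting networks against adversarial test lists on a Connection Machine): a genetic algorithm that evolves a fixed-length sequence of compare-exchange operations toward a working sorter. All parameters here are arbitrary illustrative choices.

```python
import random

N = 6            # length of the lists the network must sort
GENOME_LEN = 24  # number of compare-exchange operations per genome
POP, GENS = 40, 120

def random_gene():
    return tuple(sorted(random.sample(range(N), 2)))

def random_genome():
    # a genome is just a list of (i, j) compare-exchange operations
    return [random_gene() for _ in range(GENOME_LEN)]

def apply_network(genome, xs):
    xs = list(xs)
    for i, j in genome:
        if xs[i] > xs[j]:
            xs[i], xs[j] = xs[j], xs[i]
    return xs

def fitness(genome, tests):
    # fraction of test inputs the network sorts perfectly
    return sum(apply_network(genome, t) == sorted(t) for t in tests) / len(tests)

def mutate(genome):
    g = list(genome)
    g[random.randrange(len(g))] = random_gene()
    return g

random.seed(0)
tests = [[random.randrange(100) for _ in range(N)] for _ in range(40)]
pop = [random_genome() for _ in range(POP)]
initial_best = max(fitness(g, tests) for g in pop)
for _ in range(GENS):
    pop.sort(key=lambda g: fitness(g, tests), reverse=True)
    survivors = pop[: POP // 2]            # truncation selection (elitist)
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]
best = max(pop, key=lambda g: fitness(g, tests))
```

Note that reading the evolved genome tells you little about how it works: it is an opaque bag of comparator operations, which is precisely the subsystem inscrutability at issue.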

Why is this? Stephen Wolfram’s principle of computational equivalence suggests that simple, formulaic shortcuts for understanding evolution may never be discovered; the process is computationally irreducible. We can only run the iterative algorithm forward to see the results, and the intermediate computational steps cannot be skipped.
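Wolfram's canonical illustration of this is the elementary cellular automaton. The sketch below runs Rule 30 forward from a single live cell; as far as anyone knows, there is no closed-form shortcut to the state at step t, so every intermediate step must be computed.

```python
def step(cells, rule=30):
    # one step of an elementary cellular automaton on a ring;
    # each new cell is a lookup into the 8-bit rule table
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# start from a single live cell and iterate forward
cells = [0] * 31
cells[15] = 1
for _ in range(12):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Despite the one-line update rule, the pattern that unfolds is complex enough that Rule 30 has been used as a pseudorandom generator.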

Thus, if we evolve a complex system, it is a black box defined by its interfaces. We cannot easily apply our design intuition to improve upon its inner workings. We can’t even partition its subsystems without a serious effort at reverse engineering. And until we can understand the interfaces between partitions, we can’t hope to transfer a subsystem from one evolved complex system to another (unless they have co-evolved).

A grand engineering challenge therefore remains: can we integrate the evolutionary and design paths to exploit the best of both? Can we transcend human intelligence with an evolutionary algorithm yet maintain an element of control, or even a bias toward friendliness?

The answer is not yet clear. If we artificially evolve a smart AI, it will be an alien intelligence defined by its sensory interfaces, and understanding its inner workings may require as much effort as we are now expending to explain the human brain. Assuming that computer code can evolve much faster than biological reproduction rates, it is unlikely that we would take the time to reverse-engineer these intermediate points, given how little we could do with the knowledge. We would let the process of improvement continue.

Humans are not the end point of evolution. We are inserting ourselves into the evolutionary process. The next step in the evolutionary hierarchy of abstractions will accelerate the evolution of evolvability itself.

(Precursor threads from the photoblog: Stanford Singularity Summit, IBM Institute on Cognitive Computing, Santa Fe Institute’s evolution & scaling laws, Cornell’s replicating robots, and a TED Party)


  • Fascinating post. I think there are real concerns about successfully validating evolved systems. How would the FAA regard an evolved machine as a pilot for a passenger aircraft? I think verification is going to be a big hurdle, but there are precedents we can use.

    By Blogger Matt, at 1:15 AM  

  • I think we are already seeing an integration of design and complex systems, without really knowing it, or at least without thinking about it in terms of complex systems.

    If Office 2004 is the analogy for monolithic design then Google APIs are perhaps an apt analogy for a complex system.

    While Microsoft knows (or can know) every possible feature and interaction of Office 2004, the same is not true of Google with their web services. They cannot know all of the possible interactions that will occur with their interfaces.

    This is the core of the complexity problem: interaction. It is in the interactions between many small, simple systems that complexity explodes. This is where we hit the limits of our own cognitive abilities; we simply cannot (easily) manage or internally process the often staggering number of possible interaction outcomes.
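    The explosion the commenter describes is easy to quantify in a back-of-envelope sketch (a generic combinatorial count, not a claim about any particular system): pairwise interactions among n components grow quadratically, while the possible interacting subsets grow exponentially.

```python
from math import comb

def pairwise(n):
    # number of distinct pairs of components: n choose 2
    return comb(n, 2)

def interacting_subsets(n):
    # groups of two or more components that could jointly interact
    return 2 ** n - n - 1

for n in (10, 100, 1000):
    print(f"{n:5d} components: {pairwise(n):8d} pairs, "
          f"~10^{len(str(interacting_subsets(n))) - 1} subsets")
```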

    Office 2004, on the other hand, is like a large book: read it from cover to cover and you can know exactly how the system works, because it fundamentally does not have interactions with other systems that must be taken into account and cognitively processed.

    However, this process can be managed and controlled to a degree by limiting interactions and hence limiting complexity; call it managed complexity.

    For many, developers especially, this is a very difficult process: letting go of complete control.

    It is also an inherently parallel process: many autonomous processes running together, occasionally interacting, passing information but not reliant on each other. Robust, redundant, and adaptive. The technology world is still in a serial frame of mind to a great extent, I believe, but this shift to parallelism will be a necessity in a complex-systems world.

    As for being "friendly", that depends on how we manufacture our next step in evolution. Friendliness is a matter of competition for limited resources. If human v1.0 (I think we may be v0.5 beta) requires the same resources we do, then we will be in competition, and they will not be "friendly"; the degree of friendship will be proportional to the scarcity of the shared, required resource.

    That means we should "design" our new species to require resources we ourselves do not. Better still if they require resources we actually want to rid ourselves of. Better still (for us) if they rely on us to provide those resources, with us in control of production. Better still if the relationship can be made symbiotic (and we are not viewed as a parasite).

    By Anonymous Anonymous, at 2:17 AM  

  • Ian: yes... the network is the locus of complexity and the primary vector by which evolution proceeds today.

    “Evolution did not change the building blocks, only the size of the networks.” Geoffrey West, SFI (flickr discussion)

    Your comment on web services reminds me of a passage by Albert-Laszlo Barabasi in Linked (which discusses power laws and scale free networks):

    “While entirely of human design, the Internet now lives a life of its own. It has all the characteristics of a complex evolving system, making it more similar to a cell than a computer chip. Many diverse components, developed separately, contribute to the functioning of a system that is far more than the sum of its parts. Therefore Internet researchers are increasingly morphing from designers into explorers. They are like biologists or ecologists who are faced with an incredibly complex system that, for all practical purposes, exists independently of them.” (149-50).

    “It is impossible to predict when the Internet will become self-aware, but clearly it already lives a life of its own. It grows and evolves at an unparalleled rate while following the same laws that nature uses to spin its own webs. Indeed it shows many similarities to real organisms.” (158).

    Matt: depending on your point of view, we are already relying on an evolved system to fly the planes – the pilots themselves. =)

    The FAA saw the need to build in redundancy with the co-pilot.

    By Blogger Steve Jurvetson, at 9:12 AM  



  • Hi Steve,

    An idea on evolutionary search...

    Perhaps we should constrain it in such a way that it would yield clues about what the system is doing at different scales. Define artificial (not necessarily arbitrary) interfaces. Surely evolution is smart enough to solve the problem despite the extra hoops we make it jump through.

    In other words: inject some clarity into the evolutionary process. The more clarity we inject, the more informationally constrained the process would be.

    I suppose this goes back to the usual... purpose. If we want to stay in control, it comes at a cost in terms of design space.

    What do ya think?


    By Anonymous Anonymous, at 7:31 PM  

  • Interesting… I want to think about that a bit. Moving from directed search to structural bounds makes me wonder whether you’d get segmented local optimization, and whether this would be powerful enough if the designer has to pre-specify the overall architecture and subsystem blocks.

    If you relax the rigid separation and allow a little cross-over, “life will find a way” like in Jurassic Park. =)

    I am reminded of the folks evolving a set of algorithms on Xilinx FPGAs, each in its own digital sandbox. They found that one of the circuits on the table outperformed all the rest. But when they took it out of the room to demo it, it stopped working. After a major reverse-engineering effort, they discovered that the circuit had done something very strange: it operated in the analog domain and RF-coupled to the neighboring cells to borrow compute resources! This was completely unexpected, and not a documented feature…

    Oh, I also meant to share a couple of interesting quotes from Danny Hillis in The Pattern on the Stone:

    “We will not engineer an artificial intelligence; rather we will set up the right conditions under which an intelligence can emerge. The greatest achievement of our technology may well be creation of tools that allow us to go beyond engineering – that allow us to create more than we can understand.” (138)

    “we cannot expect to understand an intelligence by taking it apart and analyzing it as if it were a hierarchically designed machine.” (141)

    By Blogger Steve Jurvetson, at 10:50 PM  

  • re: "Assuming that computer code can evolve much faster than biological reproduction rates"

    You need "drive" and "selection". It is easier to design variation of a genome (drive) than selective pressure... computers can generate enormous variation on a code base... we also need to confront the design challenge of creating artificial selective pressure that moves a "genome" in a specific direction. You could create algorithms for "evolving niches"... maybe that's where the selective conflation of design and evolution will be strongest...

    RE: "Humans are not the end point of evolution."

    Well, depending on how you interpret the ending of "2001: A Space Odyssey", apparently we are not the agent of the next big jump. Need to find that monolith... or bring back Stanley ;-)

    RE: "accelerate the evolution of evolvability itself."

    Gabriel Dover has done some truly original work on the sources and maximum rates of drive and selection in evolutionary systems. It appears these too are governed by laws... see his "Genome Evolution" for a mindful...

    RE: Xilinx FPGAs:

    great example of evolution into a niche not specified by the designers!

    RE: Hillis: "we will set up the right conditions under which an intelligence can emerge"...

    Exactly! Niche design is as important as "drive" design... algorithmic/fractal niche design is a killer emerging field. Imagine a fractal niche-generating algorithm interacting with an intelligently guided "drive" system -- that might get you evolution at the speed of computation...

    By Blogger Stephen, at 8:12 AM  

  • It seems that a good way to go is to compartmentalize evolved sub-systems and connect them using traditional APIs.

    For example, we could have a visual pattern matcher, a natural language parser, a trend analyzer, and other subsystems, each good in its problem domain. Each subsystem would be evolved against a well-defined API (e.g., visual input goes in there, classifier output comes out of there). Connect them at a higher level.

    The advantages are:

    - subsystems would be reusable

    - maintain visibility at the API layers

    - subsystems can be trained for narrow expertise, giving faster training and better results
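    The proposal above can be sketched in a few lines: each subsystem is a black box behind a fixed, designed interface, and only the interfaces are composed by hand. All the names below (Classifier, Parser, Pipeline, etc.) are hypothetical placeholders, and the "evolved" internals are stubbed out.

```python
from typing import List, Protocol

class Classifier(Protocol):
    # designed, stable API: the only thing the rest of the system sees
    def classify(self, image: bytes) -> str: ...

class Parser(Protocol):
    def parse(self, sentence: str) -> List[str]: ...

class EvolvedClassifier:
    """Stand-in for an evolved black box that honors the Classifier API."""
    def classify(self, image: bytes) -> str:
        # placeholder internals; a real version would be evolved or trained
        return "cat" if len(image) % 2 else "dog"

class SplitParser:
    def parse(self, sentence: str) -> List[str]:
        return sentence.split()

class Pipeline:
    # designed glue code: visibility is maintained at the API layer,
    # even though each subsystem's internals may be inscrutable
    def __init__(self, classifier: Classifier, parser: Parser):
        self.classifier = classifier
        self.parser = parser

    def describe(self, image: bytes) -> List[str]:
        label = self.classifier.classify(image)
        return self.parser.parse(f"a photo of a {label}")

pipeline = Pipeline(EvolvedClassifier(), SplitParser())
```

    The design choice is that any subsystem honoring the Protocol can be swapped in without touching the glue, which is exactly the reusability advantage listed above.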

    By Anonymous Anonymous, at 11:33 AM  

  • An excellent post. You explain quite a tricky problem in a very clear way. Neat. I have owed it a read since last week; my apologies.

    I have found that, as I suspected, the post connects with what was being discussed in Troika, Luminaries, Biomimicry... in fact with those and many other magnificent posts of yours (Symbolic Immortality, etc.) that I can recall from your flickr album.

    [off topic] Readers, go check Steve's flickr album, not only for the amazing conversations that arise there, but also for the magnificent photos he takes. :)
    [/off topic]

    From all the things this post awakened in my mind, I keep one as the main concept, which the Danny Hillis quotes you added later in the comments summarize perfectly: "The greatest achievement of our technology may well be creation of tools that allow us to go beyond engineering – that allow us to create more than we can understand."

    This is what I fully agree with. In this regard, designed and evolved systems interact. If the general question here is not exactly the dichotomy between design and evolution, it may be expressed differently: will we ever be able to "design" systems to evolve?

    I think, like Danny, that the limitations on our ability to comprehend how these systems work are not an impairment, but just a brain setting. To produce its special kind of "intelligence", our brain has to overlook many of the component parts of the phenomena it witnesses, in order to grasp them as a whole and invest them with meaning. The ability of our brains to handle hundreds of variables at a time (working in parallel) and to produce a unified thought, or more than one, as an outcome is what AI can't achieve yet. This intelligence of ours is not about computation, but about pattern recognition. And from my own perception of my thinking processes, I believe pattern recognition works best on bulk information, rather than on an analysis of each individual piece. The pieces by themselves make no sense at all.

    So this is not an incapability of the brain, something we lack, but a use of neuronal technology that prefers one method over another. (If our brains tried to show off their "smartness" by working like a microprocessor, they would be an embarrassment.) However, this preference may (or may not; we don't know) prevent us from comprehending the intricate events happening inside a complex system as it evolves. Perhaps we will remain blind forever, but this doesn't mean we won't be able to get past this "obstacle" and reach where we want to go anyway.

    I have in mind the idea of the eyes, of how we see. If there were no harmonious mechanism by which the eyes hold each captured image just long enough, and refresh it with the next one in time, we would see life frame by frame. That would let us analyze each frame, and our intelligence would have had to evolve in such a way, but we would not be able to perceive movement, to think it and understand it, with all that implies. To see movement, we cannot see still images.

    Perhaps we are able to see the bulk, evident behavior and outcomes of evolved systems and take meaning from them, but we will never be able to deconstruct them in a way that makes sense to our brains. Will we have to learn to construct things we don't understand? Danny challenges us. I think: possibly. We already do, when we have our offspring. ;-)

    In this regard, and as a final thought: humans may not be the endpoint of evolution, but perhaps we will be the endpoint of intelligence as we know it. Perhaps tomorrow's AIs will achieve the Singularity and self-adjust, but that won't necessarily make them smarter than the most brilliant humans.


    Aside note: couldn't you link to this blog post (with its discussion) from the Tech. Review article? Perhaps that's not too politically correct, right? A pity. I would recommend that you add those Hillis quotes, with your comments on them, as an appendix in the thread below the article there (or, if you want, I can do it on your behalf, no problem). They are worth a lot; they add a whole set of thoughts for a more comprehensive approach to the subject. Great post, Steve.

    By Blogger Gisela Giardino, at 12:27 AM  

  • Off topic but triggered by this piece.
    I find that the inclusion of synthetic biology, alongside the references to human intelligence and 'building', brings us to an interesting junction. Most of us of a certain age can build humans; I've built 3 so far. They work pretty well, have great I/O, can process huge amounts of data, and can be fed on organic fuel with little maintenance. Moving from biology to synthetic biology turns us all into Darwinian creationists, and I suspect it will become fraught with legal controls before long, but it is by far the most interesting area, as it is the nearest MEMS will get to Si digital electronics, where the trick is to use software to determine functionality and mass-production techniques for standard devices. This mimics biology, where the DNA code provides the functionality to a standard, resource-minimizing hardware-building mechanism. I see this leading to the creation of new life forms for, e.g., coating glass, rather than using a vacuum coater or sol-gel process.
    So... complex systems? I have an 'ant theory' about this. Seemingly complex behaviour can be reduced to very simple code if it is carried out at a massively parallel scale. Biological evolution follows another interesting simple rule that gets forgotten: inferior designs should fail to reproduce, and hence die out (so we've already stopped the evolution of all that is important to us, while assisting the evolution of everything we attempt to destroy).
    And on the size of the Office code vs. DNA: the reason for this is that DNA is a recipe, not a blueprint, the same as the difference between a recipe for a fruit cake and a specification for a particular fruit cake (thank you, Douglas Adams, RIP). So the lesson is that complex systems can be built with small code if you can tolerate wide variance in the output specification: go for massive parallelism and select the best (Darwinism) by killing off the inferior versions.

    By Anonymous Anonymous, at 6:30 AM  

  • Design and evolution...what a dichotomy. I think this is really interesting stuff. The first time I'd really ever heard about it was at a talk given by Kurzweil at Stanford last semester (I'm an undergrad). Anyway, I found your blog through your company website; I'm in Shanghai right now with the Stanford ATI summer program, and we had thought you were going to give a keynote at a conference we're holding in a few weeks. Either way, even though you can't come, it's been quite enjoyable reading through these (old) entries! I was looking forward to meeting you, but hopefully there will be more chances in the future.


    ps. If Office 2004 has more code than the human genome, I'd like to meet the superbeing that could be created from Office 2007's code...

    By Anonymous Anonymous, at 9:24 AM  

  • Evolution is both fact and theory. Creationism is neither.

    By Blogger beepbeepitsme, at 6:26 PM  

  • "… a system cannot be analyzed into parts. This leads to the radically new notion of unbroken wholeness of the entire universe. You cannot take it apart. For if you do, what you end up with is not contained within the original whole. It is created by the act of analysis."
    David Bohm - Wholeness and the Implicate Order

    “A human being is part of the Whole. He experiences himself, his thoughts and feelings,
    as something separated from the rest... a kind of optical delusion of his consciousness.”
    Albert Einstein

    See also Stephen Wolfram's "A New Kind of Science" and many more recent thinkers on complexity, self-organizing systems, Chaos Theory, etc.

    The point is that human intelligence has framed the problem of self-organizing systems and the wholeness of everything, but doesn't get that we, the observers, are part of that system. Now it is time to review what we know and, more importantly, what we don't know about human cognition. Einstein’s Special Relativity theory is a hundred years old, yet we still don’t get his basic perspective on wholeness.

    Basically, our industrial-age propensity for mechanistic thinking (modular code for software? duh!) keeps our perception and thinking in a two-dimensional (plus time) "box". None of the media we use to communicate with each other, or for self-reflection, is dimensional enough to grasp or share much about the real universe. For example, fuzzy logic is a self-limiting, disqualifying definition. We know more than we "think" we do. We can know more.

    Let's use both sides of our brain, at least, and see if we can self-evolve our reasoning beyond industrial-age limitations. Yes, we learned a lot during the Age of Reason, but I suspect we have forgotten more than we realize.

    By Anonymous Anonymous, at 2:02 PM  

  • What indicators will one use to discern whether blending or bifurcating is happening? Imagine a person five, fifty, or perhaps five hundred years from now who is reflecting on your speculation and wants to examine the intervening time to see what has happened. If blending or bifurcating will happen between now and then, how will that person be able to tell? How can we *measure* our progress in achieving the "grand engineering challenge" of integrating the two paths?

    By Anonymous Anonymous, at 11:49 AM  
