The J Curve

Thursday, March 06, 2008

The Joy of Rockets

A short talk that I gave at TED, under the apt mavericks conference theme:

So many people have contacted me since this video went up to relay how rocketry inspired them in their childhood. Rocket science is tangible.

Here are some recent rocket photos and videos:

Icarus
Rocket’s Red Glare
Rocket-eye’s View
Mile High View
Go Canada
Space, the Final Frontier
L3 Bird
Walking on the Moon

Silicon Valley boasts the largest rocketry club in the world. Yet, there is no "legal" launch site anywhere in the Bay Area, a situation that has become endemic across America.

There have been over 500 million Estes rocket launches in the U.S. alone. It's not for safety that rocketry has been pushed out of suburban areas; it's fear of the unknown. Local communities would rather forbid launches in their backyard than think about the systemic effect once all communities do so. This recently happened here, when Livermore shut down the last Silicon Valley site for launches. We are on the search for a new site (DeAnza college used to host sites, and we are currently pitching NASA Ames). If you have a large plot of land and would welcome some excited kids of all ages, please contact us at LUNAR. UPDATE: we succeeded in getting NASA Ames as our low-power launch site. Thanks!!!

Saturday, May 05, 2007


The words just warm the heart. WIRED recently launched the GeekDad blog with multiple contributors.

Parenthood is an atavistic adventure, especially for geeks who rediscover their child-like wonder and awe… and find that they can relate better to kids than many adults. The little people really appreciate arrested development in adults. =)

Another cause for celebration is the rediscovery of toys, but as an adult with a bigger allowance. Chris Anderson, editor in chief of Wired, put it well in one of his GeekDad posts: “Get Lego Mindstorms NXT. Permission to build and program cool toy robots is not the only reason to have children, but it's up there.”

Here are my contributions so far:

Beginner Ants with the NASA gel ant farm

Beginner’s Video Rocketry to capture video feeds from a soaring rocket

Peering into the Black Box: Household appliances become less mysterious when you take them apart

Cheap Laser Art: amazing emergent images with just a laser pointer and a camera

Slot Cars Revisited: modern cars with modern materials

Rocket Science Redux : Trying to build the smallest possible rocket is a great way for children to learn rocket science

Easter Egg Deployment by Rocket with a hundred little parachutes

Celebrate the Child-Like Mind, a topical repost from the J-Curve

From what I can see, the best scientists and engineers nurture a child-like mind. They are playful, open minded and unrestrained by the inner voice of reason, collective cynicism, or fear of failure.

Children remind us of how to be creative, and they foster an existential appreciation of the present. Our perception of the passage of time clocks with salient events. The sheer activity level of children and their rapid transformation accelerates the metronome of life.

Monday, January 01, 2007

Happy New Year

We broke out a wonderful bottle of bubbly with some friends last night, and discovered the official drink of The J Curve....

Starting in mid 2004, I blogged on a weekly basis, then bimonthly in 2005, and just twice in 2006. My creativity here has withered, supplanted by a daily photoblog on flickr.

I have wondered why I find it so much easier to post a daily photo than to sculpt prose on any kind of regular basis. For me, the mental hurdle for a daily photo post is so much lower than for text. A photo can be a quick snapshot, without much care for quality, and this is immediately apparent to the viewer. You don't have to waste much time with uninteresting images. With text, if I dash off a few sloppy and poorly thought out paragraphs (like these ones =), the reader has to waste some time to realize that this is a throw-away post, or maybe meant to be tongue-in-cheek. I hold myself to a much higher quality hurdle for linear media — something thoughtful and provocative — and so I procrastinate. Many of my text posts are repurposed material that I wrote for external deadlines (magazines, conferences, congressional testimony), without which I may never have crystallized my disparate thoughts into something coherent.

Anyway, here are my 30 favorite photos and my best shot of 2006. Cheers!

Thursday, July 13, 2006

The Dichotomy of Design and Evolution

The two processes for building complex systems present a fundamental fork in the path to the future.

I just published an article in Technology Review which was constrained on word count. Here is a longer version and forum for discussion.

Many of the most interesting problems in computer science, nanotechnology, and synthetic biology require the construction of complex systems. But how would we build a really complex system – such as a general artificial intelligence (AI) that exceeded human intelligence?

Some technologists advocate design; others prefer evolutionary search algorithms. Still others would selectively conflate the two, hoping to incorporate the best of both paradigms while avoiding their limitations. But while both processes are powerful, they are very different, and they are not easily combined. Rather, they present divergent paths.

Designed systems have predictability, efficiency, and control. Their subsystems are easily understood, which allows their reuse in different contexts. But designed systems also tend to break easily, and, so far at least, they have conquered only simple problems. Compare, for example, Microsoft code to biological code: Office 2004 is larger than the human genome.

By contrast, evolved systems are inspiring because they demonstrate that simple, iterative algorithms, distributed over time and space, can accumulate design and create complexity that is robust, resilient, and adaptive within its accustomed environment. In fact, biological evolution provides the only “existence proof” that an algorithm can produce complexity that transcends its antecedents. Biological evolution is so inspiring that engineers have mimicked its operations in areas such as artificial evolution, genetic programming, artificial life, and the iterative training of neural networks.

But evolved systems have their disadvantages. For one, they suffer from “subsystem inscrutability”, especially within their information networks. That is, when we direct the evolution of a system or train a neural network, we may know how the evolutionary process works, but we will not necessarily understand how the resulting system works internally. For example, when Danny Hillis evolved a simple sort algorithm, the process produced inscrutable and mysterious code that did a good job at sorting numbers. But had he taken the time to reverse-engineer his evolved system, the effort would not have provided much generalized insight into evolved artifacts.
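Hillis's actual experiment ran on a Connection Machine with co-evolving parasites; a toy sketch of the same idea, evolving a comparator network that sorts without anyone designing it, fits in a few lines of Python (the population size, mutation scheme, and other parameters here are illustrative, not his):

```python
import itertools
import random

N = 4            # wires in the sorting network
MAX_SWAPS = 8    # genome length: a fixed list of compare-exchange pairs

def apply_network(network, values):
    v = list(values)
    for i, j in network:       # compare-exchange: ensure v[i] <= v[j]
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

# Fitness: fraction of all 0/1 inputs the network sorts correctly.
# (By the 0-1 principle, a comparator network that sorts every binary
# input sorts every input.)
TESTS = list(itertools.product([0, 1], repeat=N))

def fitness(network):
    return sum(apply_network(network, t) == sorted(t) for t in TESTS) / len(TESTS)

def random_gene():
    i, j = random.sample(range(N), 2)
    return (min(i, j), max(i, j))

def evolve(pop_size=100, generations=200):
    pop = [[random_gene() for _ in range(MAX_SWAPS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # elitist selection
        if fitness(pop[0]) == 1.0:
            return pop[0]
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = list(parent)
            child[random.randrange(MAX_SWAPS)] = random_gene()  # point mutation
            children.append(child)
        pop = survivors + children
    return pop[0]

random.seed(0)
net = evolve()
print(fitness(net))   # typically 1.0: a working, but unexplained, list of swaps
```

The result is exactly the inscrutability described above: a flat list of compare-exchange pairs that demonstrably sorts, with no design rationale to reverse-engineer.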

Why is this? Stephen Wolfram’s theory of computational equivalence suggests that simple, formulaic shortcuts for understanding evolution may never be discovered. We can only run the iterative algorithm forward to see the results, and the various computational steps cannot be skipped.
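Wolfram's canonical example is a one-dimensional cellular automaton such as Rule 30: despite trivially simple update rules, no closed-form shortcut is known for its state after t steps, so the only way to know is to iterate. A minimal sketch:

```python
def rule30_step(cells):
    # Rule 30: new cell = left XOR (center OR right), with wraparound edges.
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

def run(cells, steps):
    # Computational irreducibility in miniature: no known formula predicts
    # the outcome -- every intermediate step must be computed.
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

width = 101
row = [0] * width
row[width // 2] = 1          # start from a single black cell
final = run(row, 50)
print(sum(final))            # live-cell count after 50 steps; only way to know: run it
```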

Thus, if we evolve a complex system, it is a black box defined by its interfaces. We cannot easily apply our design intuition to improve upon its inner workings. We can’t even partition its subsystems without a serious effort at reverse engineering. And until we can understand the interfaces between partitions, we can’t hope to transfer a subsystem from one evolved complex system to another (unless they have co-evolved).

A grand engineering challenge therefore remains: can we integrate the evolutionary and design paths to exploit the best of both? Can we transcend human intelligence with an evolutionary algorithm yet maintain an element of control, or even a bias toward friendliness?

The answer is not yet clear. If we artificially evolve a smart AI, it will be an alien intelligence defined by its sensory interfaces, and understanding its inner workings may require as much effort as we are now expending to explain the human brain. Assuming that computer code can evolve much faster than biological reproduction rates, it is unlikely that we would take the time to reverse engineer these intermediate points given that there is so little that we could do with the knowledge. We would let the process of improvement continue.

Humans are not the end point of evolution. We are inserting ourselves into the evolutionary process. The next step in the evolutionary hierarchy of abstractions will accelerate the evolution of evolvability itself.

(Precursor threads from the photoblog: Stanford Singularity Summit, IBM Institute on Cognitive Computing, Santa Fe Institute’s evolution & scaling laws, Cornell’s replicating robots, and a TED Party)

Wednesday, June 28, 2006

Brainstorm Questions

The editors of FORTUNE magazine asked four questions of the attendees of Brainstorm 2006. Ross Mayfield is blogging the replies and the ongoing conference. Here are my answers to two of the questions:


None of the individuals named today.

I would bet that in 2016, when we look back on who has had the greatest impact in the prior 10 years, it will be an entrepreneur, someone new, someone unknown to us at this time.

Looking forward from the present, we tend to amplify the leaders of the past. But in retrospect, it’s always clear that the future belongs to a new generation. A new generation of leaders will transcend political systems that cater to the past. I would bet more on a process of empowerment than any particular person.


I tend to be out of touch with fear as an emotion, and so I find myself rationally processing the question and thinking of the worst near-term catastrophe that could affect all of us.

At perhaps no time in recorded history has humanity been as vulnerable to viruses and biological pathogens as we are today. We are entering the golden age of natural viruses, and genetically modified and engineered pathogens dramatically compound the near term threat.

Bill Joy summarizes that “The risk of our extinction as we pass through this time of danger has been estimated to be anywhere from 30% to 50%.”

Why are we so vulnerable now?

The delicate "virus-host balance" observed in nature (whereby viruses tend not to be overly lethal to their hosts) is a byproduct of biological co-evolution on a geographically segregated planet. And now, both of those limitations have changed. Organisms can be re-engineered in ways that biological evolution would not have explored, or allowed to spread widely, and modern transportation undermines natural quarantine formation.

One example: According to Preston in The Demon in the Freezer, a single person in a typical university bio-lab can splice the IL-4 gene from the host into the corresponding pox virus. The techniques and effects are public information. The gene is available mail order.

The IL-4 splice into mousepox made the virus 100% lethal to its host, and 60% lethal to mice who had been vaccinated (more than 2 weeks prior). Even with a vaccine, the IL-4 mousepox is twice as lethal as natural smallpox (which killed ~30% of unvaccinated people).

The last wave of “natural” human smallpox killed over one billion people. Even if we vaccinated everyone, the next wave could be twice as lethal. And, of course, we won’t have time to vaccinate everyone nor can we contain outbreaks with vaccinations. 

Imagine the human dynamic and policy implications if we have a purposeful IL-4 outbreak before we are better prepared…. Here is a series of implications that I fear:

1) Ring vaccinations and mass vaccinations would not work, so
2) Health care workers could not come near these people, so
3) Victims could not be relocated (with current people and infrastructure) without spreading the virus to the people involved.
4) Quarantine would be essential, but it would be in-situ. Wherever there is an outbreak, there would need to be a hair-trigger quarantine.
5) Unlike prior quarantines, where people could hope for the best, and most would survive, this is very different: everyone in the quarantine area dies.
6) Where do you draw the boundary? The neighborhood? The entire city? With 100% lethality, the risk-reward calculus shifts toward conservatism.
7) How do you enforce the quarantine? Everyone who thinks they are not yet infected will try to escape, with all of the fear and cunning of someone facing certain death if they stay. It would require an armed military response with immediate deployment capabilities.
8) The ratio of those available to enforce quarantine to those contained makes this seem completely infeasible. With unplanned quarantine locations, there is no physical infrastructure to assist in the containment.
9) Once word about a lost city spreads, how long would it take for ad-hoc or planned “accelerated quarantine” to emerge?
10) Once rumor of the quarantine policy spreads, doctors would have a strong perverse incentive not to report cases until they made it out of town…

Sunday, December 11, 2005

Books I am Enjoying Now

…and the library they came from (each image links to comments):

Symbolic Immortality
Bookshelf@work

Saturday, October 29, 2005

Keep On Booming

(I thought I’d post an excerpt from testimony I gave to the WHCoA: the White House Conference on Aging. I tried to use language that might appeal to the current political regime. =)

Every 60 seconds, a baby boomer turns 60. In thinking about the aging demographic in America, let me approach the issue as a capitalist. Rather than regarding the burgeoning ranks of “retirees” as an economic sink of subsidies, I see an enormous market and an untapped opportunity. Many marketers are realizing the power of the boom, and some of our largest investors have made their fortune attending to the shifting needs of the boomers.

Aging boomers are numerous and qualitatively different. Compared to an older generational cohort, the average boomer is twice as likely to have a college degree and 3x as likely to have Internet experience.

Envision a future where many aging boomers are happily and productively working, flex-time, from home, on tasks that require human judgment and can be abstracted out of work flows.

Fortunately, we are clearly entering an information age for the economy. The basis of competition for most companies, and all real GNP growth, will come from improvements in information processing. Even in medicine and agriculture, the advances of the future will derive from better understanding and manipulation of the information systems of biology.

In short, the boomers could be America’s outsourcing alternative to off-shoring. The Internet’s latest developments in web services and digital communications (VOIP and videoconferencing) lower the transaction costs of segmenting information work across distributed work organizations.

There is a wonderful economic asymmetry between those who have money and those who have time, between those who need an answer and those with information. This is a boomer opportunity. Imagine a modern-day Web librarian. Think of professional services, like translation, consulting or graphic arts. The majority of economic activity is in services, much of which is an information service, freely tradable on a global basis. Imagine an eBay for information. Boomers may be the beneficiaries.

The free market will naturally exploit opportunities in secondary education and retraining, telecommuting technologies for rich communication over the Internet, web services to segment and abstract workflow processes and ship them over the network to aging boomers, and technology to help all of us retain our mental acuity and neural plasticity as we age. Lifelong learning is not just about enlightenment; it’s an economic imperative.

Where can the government help? Primarily in areas already entrenched in regulation. I will point out two areas that need attention:

1) Broadband Access. Broadband is the lifeline to the economy of the future. It is a prerequisite to the vision I just described. But America trails behind twelve other countries in broadband adoption. For example, our per-capita broadband adoption is less than half that of Korea. The Pew Internet Project reports that “only 15% of Americans over the age of 65 have access to the Internet.”

Broadband is infrastructure, like the highways. The roads have to be free for innovation in the vehicles, or software, that run on them. Would we have permitted GM to build the highways in exchange for the right to make them work exclusively with GM cars? Would we forbid the building of new roads because they compete with older paths? Yet that is what we are doing with current broadband regulation.

2) Reengineering the FDA and Medicare. No small feat, but this should be a joint optimization. Medicare has the de facto role to establish reimbursement policy, and it often takes several years after FDA approval for guidelines to be set. This could be streamlined, and shifted to a parallel track to the FDA approval process so that these delays are not additive.

Why is this important? We are entering an intellectual Renaissance in medicine, but the pace of progress is limited by a bureaucracy that evolves at a glacial pace, relative to the technological opportunities that it regulates.

The FDA processes and policies will need to undergo profound transitions to a future of personalized and regenerative medicine. The frustration and tension with the FDA will grow with the mismatch between a static status quo and an exponential pace of technological progress. Exponential? Consider that 80% of all known gene data was discovered in the past 12 months. In the next 20 years, we will learn more about genetics, systems biology and the origin of disease than we have in all of human history.

The fate of nations depends on their unfettered leadership in the frontier of scientific exploration. We need to explore all promising possibilities of research, from nanotechnology to neural plasticity to reengineering the information systems of biology. We are entering a period of exponential growth in technological learning, where the power of biotech, infotech, and nanotech compounds the advances in each formerly discrete domain. In exploring these frontiers, nations are buying options for the future. And as Black-Scholes option pricing reveals, the value of an option grows with the range of uncertainty in its outcome.

These are heady times. Historians will look back on the upcoming nano-bio epoch with no less portent than the Industrial Revolution. If we give our aging boomers free and unfettered broadband access, and our scientists free and unfettered access to the frontiers of the unknown, then our greatest generation, when they look to the next, can take pride in knowing that the best is yet to come.

Saturday, October 01, 2005

XPRS: Big Rockets in the Black Rock Desert

"In terms of sheer coolness, few things beat rocketry."
— Paul Allen, Microsoft co-founder

I just had the most exciting weekend of my life.

For those who are not subscribers to my unified Feedburner RSS feed, the links here are to the relevant photos and commentary.

There was a steady stream of high power rockets, all day and into the night. Their roar quickens the pulse. Especially when they fall from the sky as supersonic lawn darts, shred fins at Mach 2, or go unstable and become landsharks. I had been warned about what happens when a supersonic rocket meets a Chevy Suburban.

The Hybrid Nitrous Oxide rockets and Mercury Joe scale model had glorious launches. To get my L1 Certification for high power rocketry, I had to build a rocket and H-size motor, and then successfully recover them after launch. I also tested my rocket videocam and GPS and altimeter systems.

Black Rock Desert in Nevada is the only place in the country with an FAA waiver to shoot up to 100,000 feet, high into the stratosphere.

I was camping with a member of the 100K team. It is a beautiful rocket, but this weekend a software bug brought the upper stage back to earth as a supersonic ground-penetrating “bunker buster” that tunneled and blasted a cave 14 feet under ground.

My inner child can’t wait for the next one…

Saturday, July 23, 2005

Reverberations of Friendship

On my flight to Estonia for a Skype board meeting, I was reading my usual geek fare, such as Matt Ridley’s Nature Via Nurture, a wonderful synthesis of phylogenetic inertia, nested genetic promoter feedback loops, bisexual bonobo sisterhoods, and the arrested development of domesticated animals.

While reading various interviews of Craig Venter, I stumbled across a nugget of sculptured prose from Patti Smith, which eloquently captures the resonant emotional filtration of a newfound friend and, in a more abstract way, the curious cultural immersion I felt in my Estonian homeland:

“There are those whom we seek and there are those whom we find. Occasionally we find – however fractured the relativity – one we recognize as kin. In doing so, certain curious aspects of character recede and we happily magnify the common ground.”

Friday, March 25, 2005

Ode to Carbon

I took a close look at the benzene molecular model on my desk, and visions of nested snake loops danced in my head…

Is there something unique about the carbon in carbon-based life forms?

Carbon can form strong bonds with a variety of materials, whereas the silicon of electronics is more finicky. Some elements of the periodic table are quite special. Herein may lie a molecular neo-vitalism, not for the discredited metaphysics of life, but for scalable computational architectures that exploit three dimensions.

Why is the difference in bonding variety between carbon and silicon important? The computational power of nature relies on a multitude of shapes (in the context of Wolfram’s principle of computational equivalence whereby any natural process of interest can be viewed as a comparably complex computation).

“Shape based computing is at the heart of hormone-receptor hookups, antigen-antibody matchups, genetic information transfer and cell differentiation. Life uses the shape of chemicals to identify, to categorize, to deduce and to decide what to do.” (Biomimicry, p.194)

Jaron Lanier abstracts the computation of molecular shapes to phenotropic computation along conformational and interacting surfaces, rather than linear strings like a Turing Machine or a data link. Some of these abstractions already apply to biomimetic robots that “treat the pliability of their own building materials as an aspect of computation.” (Lanier)

When I visited Nobel Laureate Smalley at Rice, he argued that the future of nanotech would be carbon based, due to its uniquely strong covalent bond potential, and carbon’s ability to bridge the world of electronics to the world of aqueous and organic chemistries, a world that is quite oxidative to traditional electronic elements.

At ACC2003, I moderated a debate with Kurzweil, Tuomi and Prof. Michael Denton from New Zealand. While I strongly disagreed with Denton's speculations on vitalism, he started with the interesting proposition that "self-replication arises from unique types of matter and can not be instantiated in different materials... The key to self-replication is self-assembly by energy minimization, relieving the cell of the informational burden of specifying its 3D complexity... Self-replication is not a substrate independent phenomenon." (Of course, self-replication is not impossible in other physical systems, for that would violate quantum mechanics, but it might be infeasible to design and build within a reasonable period of time.)

Natural systems exploit the rich dynamics of weak bonds (in protein folding, DNA hybridization, etc.) and perhaps the power of quantum scanning of all possible orbitals (there is a probability for the wave function of each bond). Molecules snap together faster than predicted by normal Brownian interaction rates, and perhaps this is fundamental to their computational power.

For example, consider the chemical reaction of a caffeine molecule binding to a receptor (something which is top of mind =). These two molecules are performing a quantum mechanical computation to solve the Schrödinger equation for all of their particles. This simple system is finding the simultaneous solution for about 2^1000 equations. That is a task of such immense complexity that if all of the matter of the universe was recast into BlueGene supercomputers, they could not find the solution even if they crunched away for the entire history of the universe. And that’s for one of the molecules in your coffee cup. The Matrix would require a different approach. =)
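A back-of-envelope check of this claim, using commonly cited order-of-magnitude constants (all of the figures below are rough assumptions, not measurements):

```python
import math

state_space = 2**1000                  # ~10^301 coupled amplitudes for ~1000 particles
atoms_in_universe = 10**80             # standard rough estimate
ops_per_second = 10**15                # a petaflop-class BlueGene machine
age_of_universe_s = 4 * 10**17         # ~13.8 billion years, in seconds

# One supercomputer per atom in the universe, crunching since the Big Bang:
total_ops = atoms_in_universe * ops_per_second * age_of_universe_s

print(total_ops < state_space)                     # True: the classical budget falls short
print(round(math.log10(state_space / total_ops)))  # ~188 orders of magnitude short
```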

A simultaneous 3D exploration of all possible bonds warps Wolfram’s classical computational equivalence into a neo-vitalist quantum equivalence argument for the particular elements and material sets that can best exploit these dynamics. A quantum computer with 1000 logical qubits could perfectly simulate the caffeine molecule by solving its Schrödinger equations in polynomial time.

Of course this begs the question of how we would design and program these conformational quantum computers. Again, nature provides an existence proof – with the simple process of evolutionary search surpassing intelligent design of complex systems. Which brings us back to the earlier blog prediction, that biology will drive the future of information technology – inspirationally, metaphorically, and perhaps, elementally.

Traditional electronics design, on the other hand, has the advantages of exquisite speed and efficiency. The biggest challenge may prove to be the hybridization of these domains and design processes.

Sunday, March 06, 2005

TED Reflections

TED is a wonderfully refreshing brain spa, an eclectic ensemble of mental exercise that helps rekindle the childlike mind of creativity.

This year’s theme was “Inspired by Nature”, which I believe has broad and interdisciplinary relevance, especially to the future of intelligence and information technology. By the end of the conference, there was a common thread running throughout the myriad talks, a leitmotif along the frontiers of the unknown. I felt as if I had been immersed in a fugue of biomimicry.

I am still trying to synthesize the discussions I had with Kurzweil, Venter and Hillis about subsystem complexity in evolved systems, but until then, I thought I’d share some of my favorite quotes and photos.

• Rodney Brooks, MIT roboticist:
“Within 2-3 weeks, freshmen are adding BioBricks to the E.Coli bacteria chassis. They make oscillators that flash slowly and digital computation agents. But the digital abstraction may not be the right metaphor for programming biology.”

“Polyclad flatworms have about 2000 neurons. You can take their brain out and put it back in backwards. The worm moves backwards at first, but adapts over time back to normal. You can rotate its brain 180 degrees and put it in upside down, and it still works. Biology is changing our understanding of complexity and computation.”

• Craig Venter, when asked about the risks of ‘playing God’ in the creation of a new form of microbial life: “My colleague Hammie Smith likes to answer: ‘We don’t play.’”

“With Synthetic Genomics, genes are the design components for the future of biology. We hope to replace the petrochemical industry, most food, clean energy and bioremediation.”

“The sea is very heterogeneous. We sampled seawater microbes every 200 miles and 85% of the gene sequences in each sample were unique... 80% of all known gene data is new in the last year.”

“There are about 5*10^30 microbes on Earth. The Archaea alone outweigh all plants and animals... One milliliter of sea water has 1 million bacteria and 10 million viruses.”

• Graham Hawkes, radical submarine inventor, would agree:
“94% of life on Earth is aquatic. I am embarrassed to call our planet ‘Earth’. It’s an ocean planet.”

• Janine Benyus, author of Biomimicry (discussion):
“Our heat, beat and treat approach to manufacturing is 96% waste... Life adds information to matter. Life creates conditions conducive to life.”

• Kevin Kelly, a brilliant author and synthesizer:
“Organisms hack the rules of life. Every rule has an exception in nature.”

“Life and technology tend toward ubiquity, diversity, specialization, complexity and sociability…. What does technology want? Technology wants a zillion species of one. Technology is the evolution of evolution itself, exploring the ways to explore, a game to play all the games.”

• James Watson, on finding DNA's helix: “It all happened in about two hours. We went from nothing to thing.” (Photo and discussion)

• The Bill Joy nightmare ensemble: GNR epitomized in Venter (Genetics), Kurzweil (Nanotech) and Brooks (Robotics).

• The Feynman Fan club: particle diagrams take on human form =)
• GM’s VP of R&D on the importance of hydrogen to the auto industry.
• Amory Lovins on the inefficiency of current autos

And, for entertainment, a Grateful Dead drum circle, Pilobolus, and polypedal studies.

• Bono, Streaming video of his TED Prize acceptance speech:
“A head of state admitted this to me: There’s no chance this kind of hemorrhaging of human life would be accepted anywhere else other than Africa. Africa is a continent in flames.”

Sunday, January 09, 2005

Thanks for the Memory

While reading Jeff Hawkins’ book On Intelligence, I was struck by the resonant coherence of his memory-prediction framework for how the cortex works. It was like my first exposure to complexity theory at the Santa Fe Institute – providing a new perceptual prism for the world. So, I had to visit him at the Redwood Neuroscience Institute.

As a former chip designer, I kept thinking of comparisons between the different “memories” – those in our head and those in our computers. It seems that the developmental trajectory of electronics is recapitulating the evolutionary history of the brain. Specifically, both are saturating with a memory-centric architecture. Is this a fundamental attractor in computation and cognition? Might a conceptual focus on speedy computation be blinding us to a memory-centric approach to artificial intelligence?

• First, the brain:
“The brain does not ‘compute’ the answers to problems; it retrieves the answers from memory… The entire cortex is a memory system. It isn’t a computer at all.”

Rather than a behavioral or computation-centric model, Hawkins presents a memory-prediction framework for intelligence. The 30 billion neurons in the neocortex provide a vast amount of memory that learns a model of the world. These memory-based models continuously make low-level predictions in parallel across all of our senses. We only notice them when a prediction is incorrect. Higher in the hierarchy, we make predictions at higher levels of abstraction (the crux of intelligence, creativity and all that we consider being human), but the structures are fundamentally the same.

More specifically, Hawkins argues that the cortex stores a temporal sequence of patterns in a repeating hierarchy of invariant forms and recalls them auto-associatively. The framework elegantly explains the importance of the broad synaptic connectivity and nested feedback loops seen in the cortex.
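Auto-associative recall can be illustrated with a classic Hopfield-style network (a textbook toy, not Hawkins' hierarchical-temporal cortical model): store patterns with a Hebbian rule, then recover a complete memory from a corrupted cue.

```python
# Two 8-element patterns to memorize, in +1/-1 coding.
P1 = [1, -1, 1, -1, 1, -1, 1, -1]
P2 = [1, 1, 1, 1, -1, -1, -1, -1]
patterns = [P1, P2]
n = len(P1)

# Hebbian outer-product weights; zero diagonal means no self-connections.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns)
      for j in range(n)] for i in range(n)]

def recall(cue, steps=10):
    s = list(cue)
    for _ in range(steps):   # synchronous threshold updates until a fixed point
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

noisy = list(P1)
noisy[0] = -noisy[0]                 # corrupt one bit of the stored memory
print(recall(noisy) == P1)           # True: the full pattern is completed
```

The network retrieves the answer from its weights rather than computing it, which is the spirit of the memory-centric view above.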

The cortex is relatively new development by evolutionary time scales. After a long period of simple reflexes and reptilian instincts, only mammals evolved a neocortex, and in humans it usurped some functionality (e.g., motor control) from older regions of the brain. Thinking of the reptilian brain as a “logic”-centric era in our development that then migrated to a memory-centric model serves as a good segue to electronics.

• And now, electronics:
The mention of Moore’s Law conjures up images of speedy microprocessors. Logic chips used to be mostly made of logic gates, but today’s microprocessors, network processors, FPGAs, DSPs and other “systems on a chip” are mostly memory. And they are still built in fabs that were optimized for logic, not memory.

The IC market can be broadly segmented into memory and logic chips. The ITRS estimates that in the next six years, 90% of all logic chip area will actually be memory. Coupled with the standalone memory business, we are entering an era for complex chips where almost all transistors manufactured are memory, not logic.

At the presciently named HotChips conference, AMD, Intel, Sony and Sun showed their latest PC, server, and PlayStation processors. They are mostly memory. In moving from the Itanium to the Montecito processor, Intel saturated the design with memory, moving from three megabytes to 26.5MB of cache memory. From a quick calculation (assuming 6 transistors per SRAM bit and error correction code overhead), the Montecito processor has ~1.5 billion transistors of memory, and 0.2 billion of logic. And Intel thought it had exited the memory business in the 80’s. |-)
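That quick calculation is simple to reproduce. The ~12.5% ECC overhead is my assumption; Intel’s actual figure may differ:

```python
# Estimate the memory transistors in 26.5 MB of on-chip SRAM cache.
cache_bytes = 26.5 * 2**20             # 26.5 MB of cache
bits = cache_bytes * 8
transistors_per_bit = 6                # standard 6T SRAM cell
ecc_overhead = 1.125                   # assume ~12.5% extra bits for error correction
memory_transistors = bits * transistors_per_bit * ecc_overhead
print(f"~{memory_transistors / 1e9:.1f} billion transistors of memory")  # ~1.5 billion
```

Against roughly 1.5 billion memory transistors, the 0.2 billion of logic is almost a footnote.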

Why the trend? The primary design enhancement from the prior generation is “relieving the memory bottleneck.” Intel explains the problem with their current processor: "For enterprise work loads, Itanium executes 15% of the time and stalls 85% of the time waiting for main memory.” When the processor lacks the needed data in the on-chip cache, it has to take a long time penalty to access the off-chip DRAM. Power and cost are also improved to the extent that more can be integrated on chip.

Given the importance of memory advances and the relative ease of applying molecular electronics to memory, we may see a bifurcation in Moore’s Law, where technical advances in memory precede logic by several years. This is because molecular self-assembly approaches apply easily to regular 2D structures, like a memory array, and not to the heterogeneous interconnect of logic gates. Self-assembly of simple components does not lend itself to complex designs. (There are many more analogies to the brain that can be made here, but I will save comments about interconnect, learning and plasticity for a future post).

Weaving these brain and semi industry threads together, the potential for intelligence in artificial systems is ripe for a Renaissance. Hawkins ends his book with a call to action: “now is the time to start building cortex-like memory systems... The human brain is not even close to the limit” of possibility.

Hawkins estimates that the memory size of the human brain is 8 terabytes, which is no longer beyond the reach of commercial technology. The issue, though, is not the amount of memory, but the need for massive and dynamic interconnect. I would be interested to hear from anyone with solutions to the interconnect scaling problem. Biomimicry of the synapse, from sprouting to pruning, may be the missing link for the Renaissance.

P.S. On a lighter note, here is a photo of a cortex under construction. ;-)

Thursday, November 25, 2004

Giving Thanks to our Libraries & Bio-Hackers

As I eat a large meal today, I am reminded of so much that we should be thankful for. Most evidently, we should give thanks to the epiglottis, the little valve that flaps with every swallow to keep food and drink out of our windpipe. Unlike other mammals, we can’t drink and breathe at the same time, and we are prone to choking, but hey, our larynx location makes complex speech a lot easier.

Much of our biology is more sublime. With the digitization of myriad genomes, we are learning to decode and reprogram the information systems of biology. Like computer hackers, we can leverage a prior library of evolved code, assemblers and subsystems. Many of the radical applications lie outside of medicine.

For example, a Danish group is testing a genetically-modified plant in the war-torn lands of Bosnia and Africa. Instead of turning red in autumn, this plant changes color in the presence of land mines or unexploded ordnance. Red marks the spot for land mine removal.

At MIT, researchers are using accelerated artificial evolution to rapidly breed M13 viruses to infect bacteria in such a way that they bind and organize semiconductor materials with molecular precision.

At IBEA, Craig Venter and Hamilton Smith are leading the Minimal Genome Project. They take Mycoplasma genitalium from the human urogenital tract and strip out 200 unnecessary genes, thereby creating the simplest synthetic organism that can self-replicate (at about 300 genes). They plan to layer new functionality onto this artificial genome, to make a solar cell or to generate hydrogen from water using the sun’s energy for photonic hydrolysis (perhaps by splicing in novel genes discovered in the Sargasso Sea for energy conversion from sunlight).

Venter explains: “Creating a new life form is a means of understanding the genome and understanding the gene sets. We don’t have enough scientists on the planet, enough money, and enough time using traditional methods to understand the millions of genes we are uncovering. So we have to develop new approaches… to understand empirically what the different genes do in developing living systems.”

Thankfully, these researchers can leverage a powerful nanoscale molecular assembly machine. It is 20nm on a side and consists of only 99 thousand atoms. It reads a tape of digital instructions to concatenate molecules into polymer chains.

I am referring to the ribosome. It reads mRNA code to assemble proteins from amino acids, thereby manufacturing most of what you care about in your body. And it serves as a wonderful existence proof for the imagination.

So let’s raise a glass to the lowly ribosome and the library of code it can interpret. Much of our future context will be defined by the accelerating proliferation of information technology, as it innervates society and begins to subsume matter into code.

(These themes relate to the earlier posts on the human genome being smaller than Microsoft Office and on the power of biological metaphors for the future of information technology.)

P.S. Happy Thanksgiving, even to the bears… =)

Sunday, November 21, 2004

Nanotech is the Nexus of the Sciences

Disruptive innovation, the driver of growth and renewal, occurs at the edge. In startups, innovation occurs out of the mainstream, away from the warmth of the herd. In biological evolution, innovative mutations take hold at the physical edge of the population, at the edge of survival. In complexity theory, structure and complexity emerge at the edge of chaos – the dividing line between predictable regularity and chaotic indeterminacy. And in science, meaningful disruptive innovation occurs in the inter-disciplinary interstices between formal academic disciplines.

Herein lies much of the excitement about nanotechnology. Quite simply, it is in the richness of human communication about science. Nanotech exposes the core areas of overlap in the fundamental sciences, the place where quantum physics and quantum chemistry can cross-pollinate with ideas from the life sciences.

Over time, each of the academic disciplines develops its own proprietary systems vernacular that isolates it from neighboring disciplines. Nanoscale science requires scientists to cut across scientific languages to unite the isolated islands of innovation.

In academic centers and government labs, nanotech is fostering new conversations. At Stanford, Duke and many other schools, the new nanotech buildings are physically located at the symbolic hub of the schools of engineering, computer science and medicine.

(Keep in mind though, that outside of the science and research itself, the "nanotech" moniker conveys no business synergy whatsoever. The marketing, distribution and sales of a nanotech solar cell, memory chip or drug delivery capsule will be completely different from each other, and will present few opportunities for common learning or synergy.)

Nanotech is the nexus of the sciences. The history of humanity is that we use our tools and our knowledge to build better tools and expand the bounds of our learning. Empowered by the digitization of the information systems of biology, the nanotech nexus is catalyzing an innovation Renaissance, a period of exponential growth in learning, where the power of biotech, infotech and nanotech compounds the advances in each formerly discrete domain. This should be a very exciting epoch, one that historians may look back on with no less portent than the Industrial Revolution.

Sunday, November 14, 2004

Clones and Mutants

“Life is the imperfect transmission of code.” At our life sciences conference in Half Moon Bay, Juan Enriquez shared some of his adventures around the biosphere, from an Argentinean clone farm to shotgun sequencing the Sargasso Sea with Craig Venter. From the first five ocean samples, they grew the number of known genes on the planet by 10x and the number of genes involved in solar energy conversion by 100x. The ocean microbes have evolved over a longer period of time and have pathways that are more efficient than photosynthesis.

Clone Farms
Juan showed a series of photos from his October trip to a farm in Argentina. With simple equipment that fits on a desk, the farmer cloned and implanted 60 embryos that morning. All of the cows in his field came from a cell sample from the ear of one cow.

Some of the cows are genetically modified to produce pharmaceutical proteins in their milk (human EPO). These animal bioreactors are very efficient and could replace large buildings of traditional manufacturing capacity.

Whether stem cell research and treatment for ALS, or cloning cows, Argentina is one of the countries boldly going where the U.S. Federal government fears to tread.

Three Wing Chickens
Juan also showed a genetically engineered three wing chicken. The homeobox gene that has been modified is affectionately called “Sonic Hedgehog” (his son really likes SEGA!)

The homeobox genes are my favorites. They are like powerful subroutine calls that have structural phenotypic effects.

I recommend Juan’s book As the Future Catches You for an exploration of the economic imperative of technology education, especially literacy in the modern languages of digital code and genetic code. And for a populist description of the homeobox genes, I recommend Matt Ridley’s Genome, a very fun primer on genetics. Here is a selection:

“Hedgehog has its equivalents in people and in birds. Three very similar genes do much the same thing in chicks and people… The hedgehog genes define the front and rear of the wing, and it is Hox genes that then divide it up into digits. The transformation of a simple limb bud into a five-fingered hand happens in every one of us, but it also happened, on a different timescale, when the first tetrapods developed hands from fish fins some time after 400 million years ago.”

"So simple is embryonic development that it is tempting to wonder if human engineers should not try to copy it, and invent self-assembling machines.”

One of Juan’s slides was the first hand drawn map of the Internet, circa 1969. Larry Roberts had drawn that map, and happened to be in the audience to brainstorm after the talk.

P.S. The most popular phone at our conference was the Moto RAZR, Chinese edition.

P.P.S. The most popular blog photo so far (with over 12,000 visitors) is a simple message…

Sunday, October 31, 2004

Spooks and Goblins

As it’s Halloween here, I got to thinking about strange beliefs and their origins. Do you think that the generation of myths and folkloric false beliefs has declined over time?

In addition to the popularization of the scientific method, I wonder if photography lessened the promulgation of tall tales. Before photography, if someone told you a story about ghosts in the haunted house or the beast on the hill, you could choose to believe them or check for yourself. There was no way to say, “show me a picture of that Yeti or Loch Ness Monster, and then I’ll believe you.”

And, if so, will we regress as we have developed the ability to modify and fabricate photos and video?

For our class on genetic free speech, Lessig used a pre-print of Posner’s new book, Catastrophe: Risk and Response. Posner relates the following statistics on American adults:
• 39% believe astrology is scientific (astrology, not astronomy).
• 33% believe in ghosts and communication with the dead.

Ponder that for a moment. One out of every three U.S. adults believes in ghosts. Who knows what their kids think.

People’s willingness to believe untruths relates to the ability of the average person to reason critically about reality. Here are some less amusing statistics on American adults:
• 46% deny that human beings evolved from earlier animal species.
• 49% don’t know that it takes a year for the earth to revolve around the sun.
• 67% don't know what a molecule is.
• 80% can't understand the NY Times Tuesday science section.

Posner concludes: “It is possible that science is valued by most Americans as another form of magic.” This is a wonderful substrate for false memes and a new generation of bogeymen.

Gotta go… It’s time to trick-or-treat… =)

Wednesday, October 27, 2004

The Photo Blog

For those of you who are not receiving the Feedburner RSS Feed of this blog, you are missing the whimsical and visual postings. So, for Halloween, I thought I’d post links to some of the interesting photos and commentary:

• Fun with: Bush, Kennedy, Gates, Jobs, Moore, and Jamis in Japanese.

• Observations from the first screening of Pixar’s new film, The Incredibles.

• Beautiful Scenes from: Estonia, the Canadian Rockies, Singapore, Montage (Beach), and The Internet.

• Odd Photos: Halloween Horses, Climbing the Dish at Stanford, Extreme Macro Zoom, Elephants, Aquasaurs and Ecospheres, the Technorati Bobsled Team, and the NanoCar spoof (which continues to fool people even this week).

• It came from TED: Visual Material Puzzles (another) and the DeepFlight submarine.

• And, of course, Rockets, Detached Heads, Funky Pink Divas and Robot Women.

An eclectic mix…. Happy Halloween!

Monday, October 18, 2004

Defining “Don’t be Evil”

Back in 1995, it was easy to rig search engine results. Some search engines would actually tell you how they parsed just the first 100 words on the page. And they would let you submit pages to be crawled for fast feedback on how page content modifications affected search results. Stacking white keywords on a white background at the top of the page did the trick for a couple of years.

Then Overture invented the pay for placement model, which Google disdained as “evil” and then adopted as its primary revenue model. Google got around their own evil epithet by clearly delineating paid search results from unpaid. This has been their holy line in the sand. From the Business Journal: "'Don't be evil' is the corporate mantra around Google…. When their competitors began mixing paid placement listings with actual search results, Google stayed pure, drawing a clear line between search results and advertising.”

So Overture and Google have made search engine results a BIG business, and several “consultants” sell advice on how to spike results, but their tricks are short lived.

So it was with some amusement that I found a way to easily spike certain Google search results. This has worked for a few months now, and it will be interesting to see how long it lasts after this post… ;-)

A reader of this blog pointed out to me that my Blogger Profile gets the top two Google search results for IL-4 smallpox, a genetically modified bioweapon, at a time when my blog had no content whatsoever in this area (it now does). My profile is also number one for genetically modified pathogen policy, over thousands of more relevant pages.

And my profile is number one for several areas of whimsy: Techno downbeat music, and Nanotech core memory boards, and Artificial life with female moths, and Viral marketing with Technorati, among others. (disclosure: we invested in Technorati and Overture). Of course, longer phrases are easier to spike, and not everything works for a top placement, but this still seems way too easy.

Why is this interesting? Well, Google owns Blogger, and they get to decide how to fold blog pages into search results. It’s not obvious how to rank a vapid Blogger profile page versus real content… or a competing blog service for that matter. And as Google offers more services like Blogger and Orkut, it will be interesting to see how they promote them in their own search results.

Every person I have met from Google is fantastic, and I don’t think this quirk is an overt strategy passed down from management (and I presume it will disappear as more people exploit it). On the other hand, this is the kind of product tying you would expect from Microsoft. And it raises the question: can a mantra to not do evil infuse into the corporate DNA and continue to drive culture as a company scales?

There's also the question of internal consistency. Thinking back to the holy line in the sand about disclosing advertising in search results, does it somehow not count if you own it?

Google has taken on the challenge of defining evil, which begs for an operational constitution. Neal Stephenson proposes one meta rule: in a climate of moral relativism the only sin is hypocrisy.

Friday, October 15, 2004

Childish Scientists

In the comments to the Celebrate the Child-Like Mind posting, a wonderful quote came from Argentina:

"I know not what I appear to the world, but to myself I seem to have been only like a boy playing on the sea-shore” – Sir Isaac Newton

Of course, this observation does not apply just to the Newtonian physicists. The September issue of Discover Magazine observes: “Einstein had the genius to view space and time like a child,” as with his thought experiments of riding a light-beam. "His breakthrough realization of the relativity of time turned on a series of mental cartoons featuring trains and clocks. General relativity, his theory of gravity, started off as a meditation on what happens when a man falls off a roof."

And the fantastic physicist Feynman (the first person to propose nanotechnology in his 1960 lecture “There's Plenty of Room at the Bottom”) is especially child-like: "When Richard Feynman faced a problem he was unusually good at going back to being like a child, ignoring what everyone else thinks and saying, 'Now, what have we got here?'" – The Science of Creativity, p.102.

For a humorous aside, the T.H.O.N.G. protesters remixed Feynman as "Plenty of Room at This Bottom."

Lest we think that childishness is reserved for physicists, I am reminded of my meeting with James Watson, co-discoverer of the DNA double helix. His breakthrough technique: fiddling with metal models and doodling the fused rings of adenine on paper. I like this summary: “Watson can himself be quite the double helix – a sharp scientific mind intertwined with a child-like innocence.”

How far can this generalize? In Creating Minds: An Anatomy of Creativity Seen Through the Lives of Freud, Einstein, Picasso, Stravinsky, Eliot, Graham, and Gandhi, the author “finds a childlike component in each of their creative breakthroughs.”

This final quote reminds me of a wonderful echo of Michael Schrage’s claim that reality is the opposite of play:

“One thing I have learned in a long life: that all our science, measured against reality, is primitive and childlike – and yet it is the most precious thing we have.”
– Albert Einstein

Monday, October 11, 2004

Notes from EDAY 2004

On Saturday, IDEO mixed some fun and play with some great lectures:

• Stanford Prof. Bob Sutton: “Sometimes the best management is no management at all. Managers consistently overestimate their impact on performance. And once you manage someone, you immediately think more highly of them.” When Chuck House wanted to develop the oscilloscope for HP, David Packard told him to abandon the project. Chuck went “on vacation” and came back with $2MM in orders. Packard later gave him an award inscribed with an accolade for “extraordinary contempt and defiance beyond the normal call of engineering.” When Leakey chose Jane Goodall, he “wanted someone with a mind uncluttered and unbiased by theory.” Sutton’s conclusion for innovative work: “Hire slow learners of the organizational code, people who are oblivious to social cues and have very high self-esteem. They will draw on past individual experience or invent new methods.”

• Dr. Stuart Brown, founder of the Institute for Play, showed a fascinating series of photos of animals playing (ravens sliding on their backs down an icy slope, monkeys rolling snowballs and playing leapfrog, and various inter-species games). “Warm-blooded animals play; fish and reptiles do not. Warm blood stores energy, and a cortex allows for choice and REM sleep.”

Brown has also studied the history of mass murderers, and found “normal play behavior was virtually absent throughout the lives of highly violent, anti-social men. The opposite of ‘play’ is not ‘work’. It’s depression.”

“We are designed to play. We need 3D motion. The smarter the creature the more they play. The sea squirt auto-digests its brain when it becomes sessile.”

• Michael Schrage, MIT Media Lab Fellow, defined play as “the riskless competition between speculative choices. If it’s predictable, it’s not play. The opposite of play is not what is serious, but what is real. The paradox is that you can’t be serious if you don’t play.”

“We need to treat our tools as toys and our toys as tools. Our simulations, models and prototypes need to play.”

Friday, October 08, 2004

More Things Change

I am at the World Technology Summit today. Just finished a panel on accelerating change, where John Smart made the following provocative points:

• Technology learns 100 million times faster than you do.
• Humans are selective catalysts, not controllers, of technological evolutionary development.
• 80-90% of your paycheck comes from automation.
• Catastrophes accelerate societal immunity. The network always wins.

If you want to take a deep dive into these topics with him, John is hosting Accelerating Change 2004 at Stanford, Nov 6-7. He is offering a $50 discount to readers of this blog (discount code "AC2004-J" with all caps).

Update: For those not subscribing to the Feedburner RSS feed, here are some new photos from WTS 2004 and the Awards Dinner.

Sunday, October 03, 2004

Celebrate the Child-Like Mind

Celebrate immaturity. Play every day. Fail early and often.

From what I can see, the best scientists and engineers nurture a child-like mind. They are playful, open minded and unrestrained by the inner voice of reason, collective cynicism, or fear of failure.

On Thursday, I went to a self-described "play-date" at David Kelley's house. The founder of IDEO is setting up an interdisciplinary "D-School" for design and creativity at Stanford. David and Don Norman noted that creativity is killed by fear, referencing experiments that contrast people’s approach to walking along a balance beam flat on the ground (playful and expressive) and then suspended in the air (fearful and rigid). They are hosting an open conference on Saturday, appropriately entitled The Power of Play.

In science, meaningful disruptive innovation occurs at the inter-disciplinary interstices between formal academic disciplines. Perhaps the D-school will go further, to “non-disciplined studies” – stripped of systems vernacular, stricture, and the constraints of discipline.

What is so great about the “child-like” mind? Looking across the Bay to Berkeley, I highly recommend Alison Gopnik’s Scientist in the Crib to any geek about to have a child. Here is one of her key conclusions: "Babies are just plain smarter than we are, at least if being smart means being able to learn something new.... They think, draw conclusions, make predictions, look for explanations and even do experiments…. In fact, scientists are successful precisely because they emulate what children do naturally."

Much of the human brain’s power derives from its massive synaptic interconnectivity. I spoke with Geoffrey West from the Santa Fe Institute last night. He observed that across species, synapses/neuron fan-out grows as a power law with brain mass.
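A power law is a straight line on log-log axes, so estimating one is just linear regression on logarithms. The (brain mass, fan-out) pairs below are invented for illustration; West’s actual cross-species data will differ:

```python
import math

# Fit a power law y = a * x^b by least squares on log-log data.
# The data points are hypothetical, for illustration only.
masses = [0.4, 1.3, 75, 420, 1350]        # brain mass in grams (made up)
fanout = [2e3, 4e3, 7e3, 9e3, 1e4]        # synapses per neuron (made up)

xs = [math.log(m) for m in masses]
ys = [math.log(f) for f in fanout]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)
print(f"fan-out ≈ {a:.0f} * mass^{b:.2f}")
```

The exponent b is the slope of the log-log line; a sub-linear exponent (between 0 and 1) would mean fan-out grows with brain mass, but more slowly than the mass itself.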

At 2 to 3 years of age, children hit their peak with 10x the synapses and 2x the energy burn of an adult brain. And it’s all downhill from there.

Cognitive Decline by Age

This UCSF Memory and Aging Center graph shows that the pace of cognitive decline is the same in the 40’s as in the 80’s. We just notice more accumulated decline as we get older, especially when we cross the threshold of forgetting most of what we try to remember.

But we can affect this progression. Prof. Merzenich at UCSF has found that neural plasticity does not disappear in adults. It just requires mental exercise. Use it or lose it. We have to get out of the mental ruts that career tracks and academic “disciplines” can foster. Blogging is a form of mental exercise. I try to let this one take a random walk of curiosities and child-like exploration.

Bottom line: Embrace lifelong learning. Do something new. Physical exercise is repetitive; mental exercise is eclectic.

Friday, October 01, 2004

Quote of the Day

"Microsoft has had clear competitors in the past.
It's good that we have museums to document them."
- Bill Gates, today at the Computer History Museum (former SGI HQ)

At the reception, Gates mingled in front of the wooden Apple 1, with a banner over his head: “The Two Steves.”

Sunday, September 26, 2004

Transcending Moore’s Law with Molecular Electronics

The future of Moore’s Law is not CMOS transistors on silicon. Within 25 years, they will be as obsolete as the vacuum tube.

While this will be a massive disruption to the semiconductor industry, a larger set of industries depends on continued exponential cost declines in computational power and storage density. Moore’s Law drives electronics, communications and computers and has become a primary driver in drug discovery and bioinformatics, medical imaging and diagnostics. Over time, the lab sciences become information sciences, and then the speed of iterative simulations accelerates the pace of progress.

There are several reasons why molecular electronics is the next paradigm for Moore’s Law:

• Size: Molecular electronics has the potential to dramatically extend the miniaturization that has driven the density and speed advantages of the integrated circuit (IC) phase of Moore’s Law. For a memorable sense of the massive difference in scale, consider a single drop of water. There are more molecules in a single drop of water than all transistors ever built. Think of the transistors in every memory chip and every processor ever built, worldwide. Sure, water molecules are small, but an important part of the comparison depends on the 3D volume of a drop. Every IC, in contrast, is a thin veneer of computation on a thick and inert substrate.
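The comparison is easy to sanity-check. Both numbers below are my rough assumptions: a ~0.05 mL drop, and a deliberately generous guess at cumulative transistor production circa 2004:

```python
# Molecules in one drop of water vs. a rough upper bound on all transistors ever built.
AVOGADRO = 6.022e23
drop_grams = 0.05                  # one small drop (~0.05 mL)
molar_mass_h2o = 18.0              # g/mol
molecules = drop_grams / molar_mass_h2o * AVOGADRO

transistors_ever = 1e19            # generous guess at the cumulative total, circa 2004
print(f"{molecules:.1e} molecules vs {transistors_ever:.0e} transistors")
```

Even with a guess that flatters the semiconductor industry, the drop wins by a couple of orders of magnitude.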

• Power: One of the reasons that transistors are not stacked into 3D volumes today is that the silicon would melt. Power per calculation will dominate clock speed as the metric of merit for the future of computation. The inefficiency of the modern transistor is staggering. The human brain is ~100 million times more power efficient than our modern microprocessors. Sure, the brain is slow (under a kHz), but it is massively parallel (with 100 trillion synapses between 60 billion neurons), and interconnected in a 3D volume. Stan Williams, the director of HP’s quantum science research labs, concludes: “it should be physically possible to do the work of all the computers on Earth today using a single watt of power.”
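The ratio depends entirely on what counts as an “operation,” but even conservative numbers (all of these are my assumptions) leave a gulf of many orders of magnitude:

```python
# Energy per primitive operation: synaptic events vs. CPU instructions.
brain_watts = 20.0
synaptic_events_per_sec = 100e12 * 10      # 100 trillion synapses, assume ~10 events/s each
joules_per_synaptic_event = brain_watts / synaptic_events_per_sec

cpu_watts = 100.0
cpu_ops_per_sec = 1e9                      # order-of-magnitude instruction throughput
joules_per_cpu_op = cpu_watts / cpu_ops_per_sec

ratio = joules_per_cpu_op / joules_per_synaptic_event
print(f"CPU spends ~{ratio:.0e}x more energy per operation")
```

With these particular numbers the gap is about five-million-fold; looser accounting of what a synaptic event accomplishes pushes it toward the 100-million figure.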

• Manufacturing Cost: Many of the molecular electronics designs use simple spin coating or molecular self-assembly of organic compounds. The process complexity is embodied in the inexpensive synthesized molecular structures, and so they can literally be splashed on to a prepared silicon wafer. The complexity is not in the deposition or the manufacturing process or the systems engineering.

Biology does not tend to assemble complexity at 1000 degrees in a high vacuum. It tends to be room temperature or body temperature. In a manufacturing domain, this opens the possibility of cheap plastic substrates instead of expensive silicon ingots.

• Elegance: In addition to these advantages, some of the molecular electronics approaches offer elegant solutions to non-volatile and inherently digital storage. We go through unnatural acts with CMOS silicon to get an inherently analog and leaky medium to approximate a digital and non-volatile abstraction that we depend on for our design methodology. Many of the molecular electronic approaches are inherently digital and immune to soft errors, and some are inherently non-volatile.

For more details, I recently wrote a 20 page article expanding on these ideas and nanotech in general (PDF download). And if anyone is interested in the references and calculations for the water drop and brain power comparisons, I can provide the details in the Comments.

Friday, September 17, 2004

Recapitulation in Nested Evolutionary Dynamics

I noticed the following table of interval time compression midway down the home page of

“3–4 million years ago: collective rock throwing…
500,000 years ago: control of fire
50,000 years ago: bow and arrow; fine tools
5,000 years ago: wheel and axle; sail
500 years ago: printing press with movable type; rifle
50 years ago: the transistor; digital computers”

Then I burst out laughing with a maturationist epiphany: this is exactly the same sequence of development I went through as a young boy! It started with collective rock throwing (I still have a scar inside my lip)..... then FIRE IS COOL!.... then slingshots…. and the wheels of my bike…. then writing and my pellet gun.... and by 7th grade, programming the Apple ][. Spooky.

It reminded me of the catchy aphorism: “ontogeny recapitulates phylogeny” (the overgeneralization that fetal embryonic development replays ancestral evolutionary stages) and recapitulation theories in general.

I’m thinking of Dawkins’s description of memes (elements of ideas and culture) as fundamental mindless replicators, like genes, for which animals are merely vectors for replication (like a host to the virus). In The Meme Machine, Susan Blackmore explores the meme-gene parallels and derives an interesting framework for explaining the unusual size of the human brain and the origins of consciousness, language, altruism, religion, and orkut.

Discussions of the cultural and technological extensions of our biological evolution evoke notions of recapitulation – to reestablish the foundation for compounding progress across generations. But perhaps it is something more fundamental, a “basic conserved and resonant developmental homology” as John Smart would describe it. A theme of evolutionary dynamics operating across different substrates and time scales leads to inevitable parallels in developmental sequences.

For example, Gardner’s Selfish Biocosm hypothesis extends evolution across successive universes. His premise is that the anthropic qualities (life and intelligence-friendly) of our universe derive from “an enormously lengthy cosmic replication cycle in which… our cosmos duplicates itself and propagates one or more "baby universes." The hypothesis suggests that the cosmos is "selfish" in the same metaphorical sense that evolutionary theorist and ultra-Darwinist Richard Dawkins proposed that genes are "selfish." …The cosmos is "selfishly" focused upon the overarching objective of achieving its own replication.”

Gardner concludes with another nested spiral of recapitulation:
“An implication of the Selfish Biocosm hypothesis is that the emergence of life and ever more accomplished forms of intelligence is inextricably linked to the physical birth, evolution, and reproduction of the cosmos.”

Friday, September 10, 2004

Whither Windows?

From the local demos of Longhorn, it seems to me that OS X is the Longhorn preview. As far as I can tell, Microsoft is hoping to do a subset of OS X and bundle applications like iPhoto. Am I missing something?

It seems that the need to use a Microsoft operating system will decline with the improvement in open source device drivers and web services for applications.

Why worry about Microsoft operating systems as a non-user? Well, the spam viruses on Windows affect all of us. I have not had a Mac virus for at least 10 years (sure, you could joke that nobody writes apps for the Mac any more =), but my email inbox has seen the effects of the Windows worms.

And of course, I am an indirect user of Microsoft servers. And that can be another source of concern. Microsoft is a global monoculture and is therefore subject to catastrophic collapse. The resiliency of critical computer networks might suffer if they migrate to a common architecture. Like a monoculture of corn, a common architecture can be more efficient, but it is also more vulnerable to pathogens - especially in a globally networked world.

When will the desktop Linux swap out occur, as it did seamlessly at Apple with the XNU kernel in OS X?

Saturday, September 04, 2004

Accelerating Change and Societal Shock

Despite a natural human tendency to presume linearity, accelerating change from positive feedback is a common pattern in technology and evolution. We are now crossing a threshold where the pace of disruptive shifts is no longer inter-generational and begins to have a meaningful impact over the span of careers and eventually product cycles.

The history of technology is one of disruption and exponential growth, epitomized in Moore’s law, and generalized to many basic technological capabilities that are compounding independently from the economy.

For example, for the past 40 years in the semiconductor industry, Moore’s Law has not wavered in the face of dramatic economic cycles. Ray Kurzweil’s abstraction of Moore’s Law (from transistor-centricity to computational capability and storage capacity) shows an uninterrupted exponential curve for over 100 years, again without perturbation during the Great Depression or the World Wars. Similar exponentials can be seen in Internet connectivity, medical imaging resolution, genes mapped and solved 3D protein structures. In each case, the level of analysis is not products or companies, but basic technological capabilities.

In his forthcoming book, Kurzweil summarizes the exponentiation of our technological capabilities, and our evolution, with the near-term shorthand: the next 20 years of technological progress will be equivalent to the entire 20th century.
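One way to get a feel for that shorthand is a toy calculation. Assuming, purely for illustration, that the rate of progress doubles every 20 years (a hypothetical figure chosen to make the arithmetic work, not Kurzweil’s own), the integrated progress over 2000–2020 roughly matches that of the whole 20th century:

```python
import math

# Toy model: if the rate of progress doubles every D years, how does
# progress over 2000-2020 compare with progress over 1900-2000?
# D = 20 is a hypothetical figure chosen for illustration.
D = 20.0

def progress(start, end, d=D):
    """Integral of a rate that doubles every d years (rate = 1.0 in 2000)."""
    k = math.log(2) / d
    return (math.exp(k * (end - 2000)) - math.exp(k * (start - 2000))) / k

century = progress(1900, 2000)   # the entire 20th century
next20 = progress(2000, 2020)    # one human generation
print(round(next20 / century, 2))  # → 1.03: roughly a century of progress
```

The exact ratio is sensitive to the assumed doubling period, but the qualitative point survives: under any exponential, the most recent sliver of time carries a disproportionate share of the total.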

For most of us, who do not recall what life was like one hundred years ago, the metaphor is a bit abstract. So I did a little research. In 1900, in the U.S., there were only 144 miles of paved road, and most Americans (94%+) were born at home, without a telephone, and never graduated high school. Most (86%+) did not have a bathtub at home or reliable access to electricity. Consider how much technology-driven change has compounded over the past century, and consider that an equivalent amount of progress will occur in one human generation, by 2020. It boggles the mind, until one dwells on genetics, nanotechnology, and their intersection.

Exponential progress perpetually pierces the linear presumptions of our intuition. “Future Shock” is no longer on an inter-generational time-scale. How will society absorb an accelerating pace of externalized change? What does it mean for our education systems, career paths, and forecast horizons?

Friday, September 03, 2004

Joke of the day

You know, there are actually 10 types of people...

Those who think in binary and those who don't.

Sunday, August 29, 2004

Can friendly AI evolve?

Humans seem to presume an "us vs. them" mentality when it comes to machine intelligence (certainly in the movies =).

But is the desire for self-preservation coupled to intelligence or to evolutionary dynamics?… or to biological evolution per se? Self-preservation may be some low-level reflex that emerges in the evolutionary environment of biological reproduction. It may be uncoupled from intelligence. But, will it emerge in any intelligence that we grow through evolutionary algorithms?

If intelligence is an accumulation of order (in a systematic series of small-scale reversals of entropy), would a non-biological intelligence have an inherent desire for self-preservation (like HAL), or a fundamental desire to strive for increased order (being willing, for example, to recompose its constituent parts into a new machine of higher-order)? Might the machines be selfless?

And is this path dependent? Given the iterated selection tests of any evolutionary process, is it possible to evolve an intelligence without an embedded survival instinct?

Thursday, August 26, 2004

FCC Indecency & Howard Stern

I forgot to comment on the remarkably candid interview that FCC Chairman Michael Powell gave last month on the topics of broadband policy, industry transitions, regulatory philosophy, Skype and VOIP, censorship and Howard Stern. While the streaming video has been available, the transcript proliferated in the blogosphere:
Denise Howell captured the most salient parts of the broad discussion.
Marc Canter covers Powell’s further ruminations on indecency.

At minute 25:08 (and into the Q&A), I ask about the recent FCC crackdown on indecency. I had two pages of questions from Howard Stern (who has no great love for the FCC), and in a burst of recursive irony, I self-censored the indecent ones (like PBS recently). Here are some of the questions from Howard, and I only got to the first one in the interview:

“Aside from Oprah, who else will you NOT fine?”
“What makes the FCC qualified to determine what is indecent?”
“What role should religion play in determining indecency standards?”

The FCC answer points to the number of complaints as the motivation for the crackdown. This sounds like a voting system of “majority rules”…. which seems to run counter to the spirit of the First Amendment and the protection of minority voices.

Monday, August 23, 2004

The coolest thing you learned this year?

In the spirit of lifelong learning, what is the coolest new thing you learned this year?

Last year, I think it was at a dinner with Matt Ridley talking about the inter-gene warfare going on within our bodies, especially between the X and Y sex chromosomes.

For this year, I can’t seem to pick one thing. Conversations with the eponymous Mr. Smart come to mind. Here is an example of his thinking about the limitations of biology as a substrate for developing computational complexity.

Jaron Lanier is also a wonderful thinker, and when he writes for my favorite “interesting ideas” site, it’s a potent combination. He makes an interesting counterpoint: “We're so used to thinking about computers in the same light as was available at the inception of computer science that it's hard to imagine an alternative, but an alternative is available to us all the time in our own bodies.”

Reconciling the two, perhaps biology will drive the future of intelligence and information technology – not literally, but figuratively and metaphorically and primarily through powerful abstractions.

Many of the interesting software challenges relate to growing resilient complex systems or they are inspired by other biological metaphors (e.g., artificial evolution, biomimetics, neural networks for pattern recognition, artificial immunology for virus and spam detection, genetic algorithms, A-life, emergence, IBM’s Autonomic Computing initiative, meshes and sensor nets, hives, and the subsumption architecture in robotics). Tackling the big unsolved problems in info tech will likely turn us to biology – as our muse, and for an existence proof that solutions are possible.
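Several of the techniques listed above can be sketched in a few lines. Here is a minimal genetic algorithm - a toy under invented parameters, not any particular library - evolving a bit string toward an all-ones target, with selection, crossover, and mutation as the only machinery:

```python
# A minimal genetic algorithm: truncation selection, one-point crossover,
# per-bit mutation. All constants are illustrative choices.
import random

random.seed(42)
TARGET_LEN = 20   # genome length
POP_SIZE = 30

def fitness(genome):
    return sum(genome)  # count of 1-bits; the maximum is TARGET_LEN

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))  # one-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[:10]  # keep the fittest third (elitism)
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(POP_SIZE - 10)]

print(generation, fitness(population[0]))
```

Note the pattern the GMP post below also turns on: every intermediate genome must survive selection for the next improvement to accumulate.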

Friday, August 20, 2004

Quantum Computational Equivalence

An interesting comment on "Your Genome is Smaller than Microsoft Office" referenced quantum effects to explain the power of the interpreters of biological code.

I recently heard Wolfram present his notion of "computational equivalence", and I asked about quantum computers because it seemed like a wormhole through his logic… but he seemed to dismiss the possibility of QCs instead.

The abstract summary of my understanding of computational equivalence is that many activities, from thinking to evolution to cellular signaling, can be represented as a computation. A physical experiment and a computation are equivalent. For an iterative system, like a cellular automaton, there is no formulaic shortcut for the interesting cases. The simulation is as complex as “running the experiment” and will consume similar computational resources.
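The cellular automaton case is easy to see concretely. The sketch below runs an elementary CA (Rule 30, one of Wolfram’s standard examples); as far as anyone knows, there is no closed-form shortcut to the state at step n - you have to run all n steps:

```python
# Elementary cellular automaton, Rule 30: each cell's next state is a
# function of its 3-cell neighborhood, read off the bits of the rule number.
RULE = 30
WIDTH = 31

def step(cells, rule=RULE):
    """Apply an elementary CA rule to one row (fixed zero boundaries)."""
    out = [0] * len(cells)
    for i in range(len(cells)):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < len(cells) - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right  # 0..7
        out[i] = (rule >> neighborhood) & 1
    return out

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The update rule is three lines; the output is the familiar chaotic triangle. That gap between rule and behavior is the whole point of computational equivalence.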

Quantum computers can perform accurate simulations of any physical system of comparable complexity. The type of simulation that a quantum computer does results in an exact prediction of how a system will behave in nature — something that is literally impossible for any traditional computer, no matter how powerful. Professor David Deutsch of Oxford summarizes: “Quantum computers have the potential to solve problems that would take a classical computer longer than the age of the universe.”

So I wonder what the existence of quantum computers would say about computational equivalence? How might this “shortcut through time” be employed in the simulation of molecular systems? Does it prove the existence of parallel universes (as Deutsch concludes in Fabric of Reality) that entangle to solve computationally intractable problems? Is there a “quantum computational equivalence” whereby a physical experiment could be a co-processor for a quantum simulation? Is it a New New Kind of Science?

Thursday, August 19, 2004

Morpheus beats the RIAA

A new development: Morpheus just unanimously won their 9th Circuit case. The entertainment industry lawyers were so confident that they would prevail in the case that they did not have a statement ready for this scenario.

The justices actually addressed Congress and urged them not to pass anti P2P legislation so quickly. They added:

“we live in a quicksilver technological environment with courts ill-suited to fix the flow of internet innovation… The introduction of new technology is always disruptive to old markets, and particularly to those copyright owners whose works are sold through well established distribution mechanisms. Yet, history has shown that time and market forces often provide equilibrium in balancing interests, whether the new technology be a player piano, a copier, a tape recorder, a video recorder, a personal computer, a karaoke machine, or an MP3 player. Thus, it is prudent for courts to exercise caution before restructuring liability theories for the purpose of addressing specific market abuses, despite their apparent present magnitude.”


iTunes Licensing Model

I just received a call from one of my favorite musicians. He told me that when Apple sells one of his songs for 99 cents, EMI gets 66 cents and he gets 5 cents.

EMI just ported the business contract of physical distribution (which presumes manufacturing costs, breakage, inventory and other real costs). So the music label unilaterally captured 100% of the upside from moving the business online and shared none of it with the artist.

Having just finished reading Free Culture, I guess I should not be surprised by this habitual behavior. But it seems so old school.

My channel and fulfillment relationship is now with Apple. EMI provides no value to me in this modern context. Yet they take more than 10x what they share with the artist.

Tuesday, August 17, 2004

Your Genome is Smaller than Microsoft Office

How inspirational are the information systems of biology?

If we took your entire genetic code -- the entire biological program that resulted in your cells, organs, body and mind -- and burned it into a CD, it would be smaller than Microsoft Office. Two digital bits can encode for the four DNA bases (A, T, C and G), resulting in a 750MB file that can be compressed for the preponderance of structural filler in the DNA chain. Even with simple Huffman encoding, we should get below the 486MB of my minimal Office 2004 install.
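The back-of-the-envelope arithmetic behind that figure:

```python
# ~3 billion base pairs, 2 bits per base (A, T, C, G), in decimal megabytes.
base_pairs = 3_000_000_000   # approximate length of the human genome
bits = base_pairs * 2        # 2 bits select one of the 4 bases
megabytes = bits / 8 / 1_000_000
print(megabytes)  # → 750.0, the uncompressed figure cited above
```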

If much of the human genome consists of vestigial evolutionary and parasitic remnants that serve no useful purpose, then we could compress it to 60MB of concentrated information.

What does this tell us about Microsoft? About software development? About complex systems development in general?

Sunday, August 08, 2004

Genetic Free Speech

Following the J-Curve from the downer of the prior post, there is much to be excited about.

Earlier this year, I had the wonderful opportunity to co-teach a new interdisciplinary class at Stanford with Prof. Larry Lessig. It was called “Ideas vs. Matter: the Code in Tiny Spaces” and we discussed genetics, nanotechnology and the regulatory ecosystem.

We went in with the presumption that society will likely try to curtail “genetic free speech” as it applies to human germ line engineering, and thereby curtail the evolution of evolvability. Lessig predicts that we will recapitulate the 200-year debate about the First Amendment to the Constitution. Pressures to curtail free genetic expression will focus on the dangers of “bad speech”, and others will argue that good genetic expression will crowd out the bad. Artificial chromosomes (whereby children can decide whether to accept genetic enhancements when they become adults) can decouple the debate about parental control. And, with a touch of irony, China may lead the charge.

Many of us subconsciously cling to the selfish notion that humanity is the endpoint of evolution. In the debates about machine intelligence and genetic enhancements, there is a common and deeply rooted fear about being surpassed – in our lifetime. But, when framed as a question of parenthood (would you want your great grandchild to be smarter and healthier than you?), the emotion often shifts from a selfish sense of supremacy to a universal human search for symbolic immortality.

Tuesday, August 03, 2004

Genetically Modified Pathogen (GMP) Policy

In response to my first post requesting topics of interest, “anonymous” noted that this blog is the top result on a Google search for “IL-4 Smallpox”... a dubious and disturbing honor for what I was hoping to be a content-free blog.

Anon also asked “what do you think of DHS efforts for a realtime bio-sensor network?”

It is possible that with the mobilization of massive logistical resources around the planet, we will prevail over genetically modified and engineered pathogens (GMPs). But I would not bet on it. It would be great to have a sensor network, but with most Health and Human Services offices lacking a basic Internet connection, we have a way to go.

From what I can tell, a crash-program in antiviral development may provide a ray of hope (e.g., HDP-cidofovir and some more evolutionarily robust and broad-spectrum host-based strategies).

Most importantly, from my random walk through government labs, talks with policy planners, CDC folk and DOD Red Team members, I haven’t seen any policy bifurcation for GMPs (for detection and response). I think there should be distinct policy consideration given to GMPs vs. natural pathogens.

The threat from GMPs is much greater, and the strategic response would need special planning. For example, the vaccinations that eradicated smallpox last time around may not be effective for IL-4 modified smallpox, and in-situ quarantine may be needed. “Telecommuting” for many forms of work will need to be pre-enabled, especially remote operation of the public utilities and MAE-East & West and other critical NAP nodes of the Internet.

The delicate "virus-host balance" observed in nature (whereby viruses tend not to be overly lethal to their hosts) is a byproduct of biological co-evolution on a geographically segregated planet. And now, both of those limitations have changed. Organisms can be re-engineered in ways that biological evolution would not have explored, nor allowed to spread widely, and modern transportation undermines natural quarantine formation.

In evolution, pathogens do not become overly lethal to their host, for that limits their own propagation to a geographically-bound quarantine zone. Evolution may have created 100% lethal pathogens in the past, but those pathogens are now extinct because they killed all of their locally available hosts.

A custom-engineered or modified pathogen may not observe that delicate virus-host balance, nor the slow pace of evolutionary time scales, and could engender extinction level events with a rapidity never before seen on Earth. Given early truncation of the lethality branch (truncating a local maximum), evolution has not experimented with a multivariate global maximum of lethality. The pattern of evolution is small and slow incremental changes where each intermediate genetic state needs to survive for the next improvement to accumulate. Engineered and modified pathogens do not need to follow that pattern.
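The local-maximum argument above can be made concrete with a toy fitness landscape (the landscape itself is invented for illustration): greedy one-step search, where every intermediate state must be an improvement, stalls at a local peak, while an unconstrained "engineered" search is free to jump to the global one.

```python
# Toy fitness landscape: a local peak at x=2 (height 5) and a global
# peak at x=8 (height 9), with a valley between them.
def landscape(x):
    peaks = {2: 5, 8: 9}
    return max(height - abs(x - p) for p, height in peaks.items())

def hill_climb(x):
    """Greedy search: move only to a strictly better neighbor."""
    while True:
        best = max([x - 1, x, x + 1], key=landscape)
        if best == x:
            return x  # no improving neighbor: stuck on a peak
        x = best

evolved = hill_climb(0)                         # stalls at the local peak, x=2
engineered = max(range(0, 11), key=landscape)   # global peak, x=8
print(evolved, engineered)
```

Stepwise evolution truncates the search at x=2; an engineer simply scans the whole range. That asymmetry is why an engineered pathogen can occupy regions of the fitness landscape that evolution never reached.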

Sunday, June 13, 2004

Will we comprehend supra-human emergence?

Thinking about complexity, emergence and ants, I went to a lecture by Deborah Gordon, and remain fascinated by the different time scales of learning at each layer of abstraction. For example, the hive will learn lessons (e.g., don’t attack the termites) over long periods of time – longer than the life span of the ants themselves. The hive itself is a locus of learning, not just individual ants.

Can an analogy be drawn to societal memes? Human communication sets the clock rate for the human hive (and the Internet expands the fanout and clock rate). Norms, beliefs, philosophy and various societal behaviors seem to change at a glacial pace, so that we don’t notice them day-to-day (slow clock rate). But when we look back, we think and act very differently as a society than we did in the 50’s.

As I look at the progression of:

Groups : Humans
Flocks : Birds
Hive : Ants
Brain : Neurons

I notice that as the number of nodes grows (as you go down the list), the “intelligence” and hierarchical complexity of the nodes drops, and the “emergent gap” between the node and the collective grows. There’s more value to the network with more nodes (grows ~ as n^2), so it makes sense that the gap is greater. At one end, humans have some understanding of emergent group phenomena and organizational value, and on the other end, a neuron has no model for brain activity.

One question I am wrestling with: does the minimally-sufficient critical mass of nodes needed to generate emergent behavior necessitate a certain incomprehensibility of the emergent properties by the nodal members? Does it follow that the more powerful the emergent properties, the more incomprehensible they must be to their members? So, I guess I am wondering about the "emergent gap" between layers of abstraction, and whether the incomprehensibility across layers is based on complexity (numbers of nodes and connections) AND/OR time scales of operation?