The J Curve

Sunday, June 13, 2004

Will we comprehend supra-human emergence?

Thinking about complexity, emergence and ants, I went to a lecture by Deborah Gordon, and remain fascinated by the different time scales of learning at each layer of abstraction. For example, the hive will learn lessons (e.g., don’t attack the termites) over long periods of time – longer than the life span of the ants themselves. The hive itself is a locus of learning, not just individual ants.

Can an analogy be drawn to societal memes? Human communication sets the clock rate for the human hive (and the Internet expands the fanout and clock rate). Norms, beliefs, philosophy and various societal behaviors seem to change at a glacial pace, so that we don’t notice them day-to-day (slow clock rate). But when we look back, we think and act very differently as a society than we did in the ’50s.

As I look at the progression of:

Groups : Humans
Flocks : Birds
Hive : Ants
Brain : Neurons

I notice that as the number of nodes grows (as you go down the list), the “intelligence” and hierarchical complexity of the nodes drop, and the “emergent gap” between the node and the collective grows. There’s more value to the network with more nodes (grows ~ as n^2), so it makes sense that the gap is greater. At one end, humans have some understanding of emergent group phenomena and organizational value; at the other end, a neuron has no model for brain activity.
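
To get a feel for how quickly the number of potential pairwise connections outruns the number of nodes, here is a rough Python sketch; the per-layer node counts are illustrative placeholders, not measurements:

    # Metcalfe-style count of potential pairwise links, n*(n-1)/2,
    # for illustrative node counts at each layer (placeholder figures).
    layers = {
        "group of humans": 150,
        "flock of birds": 10_000,
        "ant colony": 500_000,
        "brain (neurons)": 60_000_000_000,
    }

    for name, n in layers.items():
        links = n * (n - 1) // 2
        print(f"{name:>16}  n = {n:>14,}  potential links ~ {links:.2e}")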

One question I am wrestling with: does the minimally-sufficient critical mass of nodes needed to generate emergent behavior necessitate a certain incomprehensibility of the emergent properties by the nodal members? Does it follow that the more powerful the emergent properties, the more incomprehensible they must be to their members? So, I guess I am wondering about the "emergent gap" between layers of abstraction, and whether the incomprehensibility across layers is based on complexity (numbers of nodes and connections) AND/OR time scales of operation?

15 Comments:

  • “There’s more value to the network with more nodes (grows ~ as n^2)...”

    According to Reed's Law, the value of a network sometimes grows exponentially with the number of nodes. See also Wikipedia.

    Paul B.

    By Blogger Paul B., at 1:31 PM  

  • Very interesting pointers. Thanks!

    I was working off the simple topology basis for Metcalfe's law, where the number of connections between n nodes is n*(n-1)/2 = O(n^2).

    So Reed looks at group-forming networks like Orkut and finds O(2^n) potential sub-groups of interest. This contribution to network value dominates all others for large values of n.
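
    A tiny sketch of the two growth terms, just to see where the group-forming term takes over (it treats every link and every possible sub-group as equally valuable, which is of course a simplification):

        # Metcalfe term: pairwise connections, O(n^2).
        # Reed term: possible sub-groups of two or more members, O(2^n).
        def metcalfe(n):
            return n * (n - 1) // 2

        def reed(n):
            return 2**n - n - 1

        for n in (5, 10, 20, 40):
            print(f"n={n:>2}  links={metcalfe(n):>4}  sub-groups={reed(n):,}")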

    So this raises the question: does group formation drive the properties of emergence in a system? For the brain, does this presume neural plasticity? Will groups of humans exhibit emergence of a fundamentally different nature - with groups and their memeplexes as the vectors?

    By Blogger Steve Jurvetson, at 5:15 PM  

  • "So Reed [...] finds O(2^n) potential sub-groups of interest. This contribution to network value dominates all others for large values of n."

    My understanding of this comes down to how one *subjectively* defines the "value" of the network; that definition then translates into the weights of the N, N^2, 2^N, etc. terms (hidden by the O() notation).

    One can argue that at the earliest stages of ARPANET "value" per node added was indeed growing exponentially, and for quite small N -- each site provided a unique capability which could be mixed-and-matched into 2^N possible subgraphs. (An interesting quote: "The first sites of the ARPANET were picked to provide either network support services or unique resources.")

    On the other hand, if all nodes are interchangeable (think processors in a massively parallel computer) there is no need to form subgraphs; the value is defined only by the number of processors and how well they are connected together (and in this case there is nothing better than an N^2 crossbar, of course, but for practical reasons a hypercube with log(N) physical connections per node, approximating the N^2 virtual connections, can suffice). There is no weight to the 2^N term. (Of course this example is not entirely correct; for real installations the ability to partition the processors to work on separate tasks while cutting off all communications between partitions IS of great value!)
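
    A small sketch of that wiring trade-off, comparing a full crossbar (a direct link between every pair of processors) with a hypercube that gives each of N = 2^d processors d = log2(N) physical links and a worst-case path of d hops; the sizes are arbitrary examples:

        # Crossbar: N*(N-1)/2 physical links (every pair directly connected).
        # Hypercube: N*d/2 physical links, d = log2(N) links per node,
        # and any message needs at most d hops.
        for d in (4, 8, 12, 16):
            N = 2**d
            crossbar = N * (N - 1) // 2
            hypercube = N * d // 2
            print(f"N={N:>6}  crossbar={crossbar:>13,}  hypercube={hypercube:>7,}  max hops={d}")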

    Now, if we look at real people on the modern Internet (think Yahoo Groups) the exponential term seems to show up for two reasons: the ability to create a (non-null) group to discuss any topic, however obscure, AND the fact that each new member of that group might bring in his/her own unique characteristics complementing the experience of the rest of the group.

    How to extrapolate all this to how the brain works -- I have no idea...

    [Hmm, it looks like a typical blogger's rant; maybe I should've actually started posting on my own blog rather than polluting other people's... ;-) ]

    Paul B.

    By Blogger Paul B., at 3:14 PM  

  • a nice paper to read about that is this one -->

    http://web.media.mit.edu/~minsky/papers/sciam.inherit.html

    Minsky is an advisor to Foresight; he is big.

    By Anonymous Anonymous, at 4:06 AM  

  • Back to the big picture question: for the past 1000 years the pace of change has been glacial. The "clock rate" of the emergent intelligence would be the pace of inter-human communication. That was scattered, intermittent, and topologically clustered. We look back to the 50s and see plenty of change in norms, beliefs and culture, but it's harder to see the changes on the time scale of the nodes' activity (our mental activity).

    With modern telecommunication networks, the fanout broadened and connectivity started to span geography. With the Internet, the fanout widened considerably (with multipoint communications like this post) and the clock rate increased with IM and email. With agent-based systems, the clock rate would increase further still.

    Fanout is important. The adult brain has an average synaptic fanout of 1000. Two-year-olds (aka “learning machines”) have a fanout of 10,000. Neural network simulations with low fanouts achieve very little.
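
    A crude way to see why: the number of nodes reachable within a few hops grows roughly like fanout^hops. A rough sketch, ignoring overlap between neighborhoods (so these are loose upper bounds):

        # Upper bound on nodes reachable within k hops at a given fanout,
        # ignoring overlap between neighborhoods.
        for fanout in (10, 1_000, 10_000):
            for hops in (1, 2, 3):
                print(f"fanout {fanout:>6}, {hops} hops: up to {fanout**hops:,} nodes")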

    So I wonder if the glacial collective mind is transforming into a more powerful intelligence in the modern era, and if the pace of its operations is moving from decades to months?

    By Blogger Steve Jurvetson, at 8:26 PM  

  • Something that often gets overlooked in writing about a global mind is that there's no selection for global minds. There's selection on the hive level for bee species, there's selection on the nest level for ant species, and that's precisely why the organization of beehives and ant nests is vastly more complex, efficient, and purposeful than the flocking behavior of birds. Sure, the flocking behavior of birds is interestingly complex - but you can express it with a lot fewer lines of code than it would take to describe an ant's nest. Likewise with trying to compare the Earth and its humans, to a human and his/her cells. The cells in a human are selected to contribute to the complex and directed behavior of the human. In contrast, humans are selected to win the genetic contest as humans, maybe with a small field of overlap for genetic material shared in relatives or tribes. But now I'm just repeating the last thirty years of sermons in evolutionary biology... my point is:

    (a) Six billion humans on Earth, one hundred trillion synapses in a human brain. Winner: Brain.

    (b) Those hundred trillion synapses have been shaped by millions of years of natural selection into complex machinery that operates specifically for the good of the host human('s genes), and they've got support from the heart, the lungs, all the other cells in the body, etc. Anthills work the same way - the individual ants serve with utter faithfulness for as long as they have a commonality of genetic interest with the queen - as soon as this breaks down, so does the perfect cooperation. Humans all have different genes and bicker constantly. Winner: Brain.

    So natural selection did not shape humans to be smoothly functioning cogs in a global mind, and a global mind would have a pretty paltry collection of cogs by mammalian standards.

    Since Earth-level social phenomena of technological civilization are not longtime objects of natural selection, I would expect them to have far less complexity than a human. Why shouldn't a human understand them? For that matter, I think it's perfectly reasonable for a human to try and understand the high-level behaviors governing the brain - you don't have to store a description of a hundred trillion neurons, just the description of the behaviors governing those neurons. The brain has interestingly compressed explanations that make useful predictions; I would call this understanding. And since Earth-level social phenomena are smaller (in both raw size, and complexity) than a human brain, why shouldn't we understand them?

    By Blogger EliezerYudkowsky, at 1:53 PM  

  • EY: Very interesting observations. I should clarify my question; I am not trying to focus on an out-of-system analysis of the high-level emergent systems (the social sciences do just fine there). I am interested in the in-system “emergent gap.” I would agree that the flocking patterns of birds and schools of fish are understandable from the lower-level simple rules. But what about the more complex systems?

    I realize that there can be confusion whether we are approaching this as an in-system “node” or as an out-of-system human observer. As an observer, we can try to analyze a system of comparable complexity to the hive mind as you point out (I don’t think we can understand the emergent gaps yet, but I’ll get to that later).

    But if we want to understand our individual role in the system (our contribution to culture, norms, etc.) and the architecture of mind, will we be baffled, and notice shimmering patterns of beauty at best?

    And we might want to understand how to bridge this emergent gap more urgently than when analyzing other systems. We are the nodes. We care about the nodes. And we care about the societal constructs since they can affect all of us. For other systems, the nodes are not the focus of agency. Philosopher-poet Hunter S. Thompson wrote: “Kill the body and the head will die.” The converse is also true.

    I am puzzled about the claim that there are no selection pressures for the global mind. 1) It is possible that we have competing memeplexes or emergent minds at work today. Must there be one mind? 2) Perhaps the glacial timescale comments from before suggest that our global selection test has just not occurred yet. Will our society survive the selection test presented by modern terrorism and annual decreases in the “barriers to entry” for WMD? Will we develop a societal immune system? Do we start to recognize our interdependence and “commonality of genetic interest”?

    And now, some quibbles about the relative complexity arguments of the brain. The ~6B population of human nodes should be compared to the ~60B neurons, not the ~100T synaptic connections. The synapses are analogous to the myriad communication networks between humans. (As an aside, a two-year-old has ~1 quadrillion synapses, but that does not enable the infant to deconstruct and understand the emergent gap of the relatively simpler “adult” with 1/10th the synapses. I think you would agree that the brain, as a system of complexity N, does not yet understand all of the phenomena and the emergent gaps in systems of complexity N (e.g., our own brains) or even 0.1N (simpler animals)).

    What is the fanout of human social networks? We have broadcast mechanisms and multi-modal communications. We have the power of dynamic group-forming networks (discussed above with the Reed links).

    And compared to the brain, “humans++” are powerful nodes. We have large local and near-line memory stores via our technologies and databases. (How many of the brain’s neurons function as local memory vs. computational capability?) We also have local agency - perhaps entering the domain of recursion and reentrant memetic code.

    What is the sensory I/O of a hive mind? The neurons of the brain are not the I/O; they do not interface with the world directly. As nodes in a hive mind, we may be contributing to activities entirely beyond our senses. As Ember, emWare and others hook billions of embedded sensors and machines to the network, the potential array of I/O interfaces starts to compound and extend the metaphor beyond human nodes to include other symbionts – much like the body is a federation of cells and cells are a historical endosymbiosis of simpler organisms.

    By Blogger Steve Jurvetson, at 2:09 PM  

  • "I should clarify my question; I am not trying to focus on an out-of-system analysis of the high-level emergent systems (the social sciences do just fine there). I am interested in the in-system “emergent gap.”"What is this "emergent gap", and why do you expect it to be there? In your original post you spoke of a necessary incomprehensibility of emergent properties by nodal members. I don't see why this is so, especially if the nodes are great big humans, while the emergent systems are mere human societies. As a human, shouldn't I find it easier to understand a system composed of humans, than a system composed of birds? (Even if the birds are less complex.) Should I not find it easier to understand the system of the Singularity Institute, of which I am a part, than to understand the system of Draper Fisher Jurvetson?

    Can the node understand the system? Sure, if the node is an intelligent being capable of Solomonoff induction or an approximation thereof, i.e., capable of reasoning from evidence and finding compressed descriptions of phenomena. Why would the system-level properties prove insusceptible to the inductive reasoning of an intelligent node, so long as the system is reasonably sized (smaller and less complex than the intelligent node)?

    The same observation goes for time-scales; even if global political changes were occurring pretty fast during the disintegration of the Soviet Union, it was still slow compared to the timescale of human neurons. They weren't actually redrawing the borders faster than the human eye could track - just every month or so, and that's glacial compared to neurons. Even if the future's massive global political changes occur over a timescale of hours, I can still, as an individual human, reason and react. I'm faster than the system. I can think on a timescale of seconds. Only if we had a global system not composed of humans, like AIs thinking on 2GHz processors, could I find myself completely out of my league - social changes occurring over milliseconds, faster than my ability to process.

    There's something of the same flavor as my response to the question, "But how can a human possibly build something more intelligent than us?" and my response is, "This is just not something ruled out by the laws of nature, any more than it's impossible for humans to build things bigger or stronger than us. A human can design computers faster than neurons, and write better software than evolution."

    "But if we want to understand our individual role in the system (our contribution to culture, norms, etc.) and the architecture of mind, will we be baffled, and notice shimmering patterns of beauty at best?"And my response is, "I don't think so. Why would that be a law of nature? Where would it come from?"

    (Nitpick: If you can see "shimmering patterns of beauty", you must understand it at least a little, in the information-compression sense of "understanding". If you don't understand something at all - if you can't compress it down to any simpler description than raw data - it means you see static, snow on a television screen, not shimmering patterns of beauty.)

    By Blogger EliezerYudkowsky, at 11:22 PM  

  • "I am puzzled about the claim that there are no selection pressures for the global mind."To answer this one I shall delve a bit into evolutionary biology.

    People often reason about selection pressures as if they were binary, on-or-off characteristics - as if selection pressure is something that's either present or absent, depending on whether a list of preconditions are present or absent. Qualitative understanding is better than no understanding, but to really understand evolutionary biology you want to take the next step up, and see selection pressure as something quantitative - a pressure, like a wind blowing, that can be strong or weak. I want to run through the numbers and show that any selection pressure for a global mind is either extremely weak, or nonexistent.

    What units measure a selection pressure? In that meeting at the Foresight Gathering, I used the Shannon information, which we can think of as a measure of how tiny an improbability a selection pressure can select from a fitness space. Shannon information is measured in bits - not like bits on a hard drive; one DNA base does not always carry two bits of Shannon information. Rather, two bits on a hard drive can carry at most two bits of Shannon information. If we have a byte that is always the ASCII letter "A", "X", "?", or "!", then we only need four bits of Shannon information to select which letter it is - even if the byte itself is eight bits on the hard drive. To select one possibility from 1024 requires 10 bits of Shannon information. To select a 104-amino-acid gene that is "one in a billion", that is, fitter than all but one in a billion possible 104-amino-acid sequences, requires at least 30 bits of selection pressure. Not 624 bits on a hard drive, which is what it would take to write out the codons as a literal string.

    There's a speed limit on evolution and it's an astonishingly slow speed limit. When a species has existed for a while, every two parents must have had, on average, two surviving children - otherwise the population size goes to zero or infinity in fairly short order. If the average child belongs to a brood of sixteen siblings, then natural selection can exert at most three bits of selection pressure per generation. Sixteen are born, two survive, a selection factor of 1/8, and that's three bits of Shannon information. Now, this is not three bits of selection pressure per gene or per individual; it is a speed limit that applies to the entire genetic pool. You can only have three bits of selection pressure per generation, which must be shared out among all the alleles and characteristics selected on at that time.
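
    The arithmetic behind those figures, as a quick sketch:

        import math

        # ~30 bits of selection pressure to pick a one-in-a-billion gene...
        bits_one_in_a_billion = math.log2(1e9)    # ~29.9 bits
        # ...versus 624 bits to literally spell out 104 codons at 3 bases
        # of 2 bits each.
        bits_literal_string = 104 * 3 * 2         # 624 bits
        # Brood of 16, two survivors: a selection factor of 1/8,
        # i.e. 3 bits per generation for the entire gene pool.
        bits_per_generation = math.log2(16 / 2)   # 3.0 bits
        print(bits_one_in_a_billion, bits_literal_string, bits_per_generation)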

    So right away we have a good reason to doubt the existence of any Global Mind that depends on, say, humans passing instant messages or telegrams, or millions of component nodes. Such Global Minds could not have come into existence more than a handful of centuries ago. There simply has not been enough time - enough generations of living and dying and competing and reproducing Global Minds - to get any decent amount of cumulative selection pressure. Compared to the course of real natural selection on Earth, any Global Mind would be at most three generations of replicators into the RNA World. Evolution can select complex machinery but the selection is slooooow. There's a reason why evolution takes tens of thousands of generations to get anywhere interesting, and why evolution spun its wheels for billions of years before accumulating enough complexity to start constructing multicellular organisms.

    However, this is only the beginning of the reasons to be skeptical of the Global Mind.

    In genetics Price's Equation says that the covariance of fitness with a characteristic determines the rate at which that characteristic increases. This point about covariance is what I want to emphasize, because to have covariance you need variance and correlation. Furthermore, you need persistent covariance, over many iterated generations, because otherwise you don't have cumulative selection pressure. Furthermore, you need hereditary characteristics - the children (and grandchildren!) need to reliably vary in the same way as the parents - or even if there's "selection" in one generation, it will not pass on to the next generation.

    I sometimes hear people speculating that corporations evolve. They live, die, compete, and even have offspring, right? This is what I mean by "qualitative" thinking about evolution. Evolution is not a yes-or-no test you apply by looking for the presence of things like survival and reproduction. It's a quantitative pressure that you measure. In the case of corporations, it's not enough that they sometimes bud off spinoff companies. You have to ask whether the characteristics of parents are reliably present in the children and the grandchildren and the great-grandchildren. You need to ask about the fidelity of transmission. If the characteristic is reliably present in the children but not the great-great-great grandchildren, then selection pressure can't accumulate over many generations.

    How would corporations transmit characteristics at all? A good employee can only go to at most one spinoff. With what degree of fidelity is a corporate philosophy transmitted? DNA is digital. You can transmit literally megabytes in DNA, with nearly perfect fidelity. You can construct big complex protein machines in the phenotype and they'll be duplicated in the offspring. Without digital fidelity of transmission, it's extremely difficult to evolve complex adaptations, adaptations with multiple interacting parts, because a blur in any of ten interacting pieces of machinery can destroy the entire machine.

    Corporations live, die, and have offspring. But they don't have heritable characteristics, with sizeable persistent covariance with the number of offspring, that are transmitted with digital fidelity to offspring. Not to mention the tiny number of generations since the invention of the corporation. So the cumulative selection pressure on corporations is an infinitesimal fraction of the cumulative selection pressures that gave birth to mammals and humans, and corporations in their present form are barred from evolving complex machinery.

    It seems to me that the objections I raised to corporations apply with equal force to the other alternatives I have commonly heard proposed for Global Minds. There haven't been enough generations - not by a factor of millions. There isn't reliable, digital transmission of megabytes of heritable characteristics to offspring.

    The stringency of the conditions for natural selection is not widely appreciated. You need: Limited resources, frequent death to free up resources, multiple phenotypes with heritable characteristics, extremely good fidelity in transmission of heritable characteristics, enough transmitted genetic information to support complex machinery, substantial variation in characteristics, substantial variation in number of successful offspring (fitness), persistent correlation between those two variances, iteration over many many generations, and persistence of the correlation and transmission fidelity over those many many generations. Then, and only then, do you have a noticeable amount of cumulative selection pressure that can create complex mechanisms.

    "And now, some quibbles about the relative complexity arguments of the brain. The ~6B population of human nodes should be compared to the ~60B neurons, not the ~100T synaptic connections. The synapses are analogous to the myriad communication networks between humans."(Counterquibble: Computing in single neurons is pretty complex and a lot goes on in individual synapses.)

    I can see the appeal of that analogy. However, for the 100T synapses, natural selection could actually shape the 100T synapses into complex machinery. The 6B population of human nodes are the largest number of potential nodes in the Global Brain that are even individually shaped into complex machines. Natural selection has no opportunity to shape the flow of messages on AIM or the pinging of emails, for all the reasons already mentioned. So if we're looking at complex elements, then a human brain has 100T complex elements, and a Global Brain has at most 6B complex elements. But actually the Global Brain is a lot worse off than this, because human brains have 100T complex elements shaped by huge cumulative selection pressures into the complex machinery of a human mind, while the Global Brain has 0 elements that are actually shaped to be elements of a Global Mind. Evolution, as such, had no chance to shape humans into a Global Brain, the way that natural selection shaped the synapses into a human.

    That's why I'm so much less impressed with the Global Brain than with one lone human. I don't expect to see any design complexity there, of the impressive sort that goes into biological organisms such as humans. Emergent dynamics, maybe, but no complex machinery. The twinkling messages of an IM network are analogous to raindrops in thunderstorms, not ants in nests.

    "1) It is possible that we have competing memeplexes or emergent minds at work today. Must there be one mind?"For there to be selection pressure, there must be competition.

    "2) Perhaps the glacial timescale comments from before suggest that our global selection test has just not occurred yet."A selection pressure that has not yet occurred (and just one test would be a tiny pressure), cannot yet have created complex machinery. Evolution doesn't anticipate.

    "Will our society survive the selection test presented by modern terrorism and annual decreases in the “barriers to entry” for WMD? Will we develop a societal immune system?"Now there's a whole new can of worms! You should ask that as a separate blog post, IMHO. But if there's a moral I'd take from this whole long comment, it would be that neither emergence nor evolution will solve this problem for us. The only thing that could possibly develop the complex machinery needed for the species to survive, would be intelligence - human intelligence deliberately trying to solve that particular problem. And it is not a trivial problem. And the clock is ticking.

    By Blogger EliezerYudkowsky, at 2:02 AM  

    Thanks for your comments on selective pressure gradients and the speed limit of evolution. Fascinating stuff! If the global brain is analogous to competing memeplexes rather than one Gaia-inspired sentience, then what is the reproduction rate? The competition is among memes. It is no longer coupled to our biological clocks. Selection pressure applies to the memes, as it does with genes, and the emergent intelligences are indirect effects. Perhaps I am misreading your comment, but it seems tightly coupled to physical biology. And I realize that the lack of rigor around meme definitions leaves this a bit fuzzy. And I have been mixing scenarios, I now realize. The “maybe the selection pressure hasn’t happened yet” comment was a random thought relating to the Gaia/Selfish Biocosm scenario, and the rest of my comments relate more to the memeplex scenario. All of the parameters differ between these two scenarios. Sorry for that muddle!

    Regarding your earlier comment about the ease of understanding societal emergence, I think the disconnects between our comments might trace from two root sources:

    1) Emergence. From your other post, I think we have a different perspective on the emergent complexity and computational irreducibility across different layers of abstraction (by irreducible, I just mean that there are no major computational shortcuts to reduce the complexity of a system simulation). I don’t think we have figured out how to analytically bridge or “reduce” that gap for large systems of interest. (Or perhaps we do agree; I’m not sure. I don’t think any of this will “save us,” but I do think it’s an interesting area of speculation, and the unresolved core of complexity theory.)

    2) In-system vs. out-of-system perspective. I realize that maybe my assumptions are influenced by my wife’s career. She is a psychiatrist. In her work experience, something may be perfectly clear to an outside observer that is opaque to an equally smart in-system participant (e.g., understanding oneself or relationship dynamics with a spouse). This relates to the notion of humans as the nodes and their ability to understand and take in-system actions.

    By Blogger Steve Jurvetson, at 6:49 PM  

    In my book, The Structure of Existence, I note how the abstract structure and process of the brain, consciousness, body and existence itself are all the same. Free on the web. Lots of ideas.
    structureofexistence.com

    By Blogger Dan, at 9:29 PM  

  • I am certainly being brave to write some words here, where more than 50% of the words I read are completely outside my own use. The form in which this subject is treated might be out of my reach, but not its substance. And actually, posting despite these obstacles and the distance in our comprehension is itself a reaffirmation of the points I want to make with my comment. =)

    I believe that the glacial pace of evolution within the last 1000 years is inversely proportional to the accelerating growth of formal education and of the amount of information and storage available to more and more people. I mean: the population has not become more intelligent because it has access to education and to all these technological changes. Quite the opposite. Since we are not obliged to fight for knowledge to survive (it is always there, even free, at a click), why should we make the effort of "thinking"? Of building a personal new core of understanding of the world, or of how to make it a better place, for example?

    Information and its logistics and instruments have become a prosthesis. You know what real prostheses do to people, right? A knee replacement might let you walk, but you will never run again, and you will probably start to suffer other related problems caused by the use of the prosthesis (at the hip bones, for example). With glasses you might be able to see, but your next pair will need more magnification... endless examples. To sum up: everything tends to get stuck in a comfort zone when it is not necessary to go forward. When survival is assured from the day you are born, chances are that you will never make the kind of effort that might end in an optimization or enhancement of your nature.

    Take this thread as an example. I think your questions on emergence are unanswerable from these big theories and formulas. It is funny that the things that seem impossible for an ordinary mortal to understand may be a comfort zone for you. Here you need free thinking and creative activities; some physical exercise may also help to open the channels of understanding that are not the usual intellectual ones. This urges you to leave behind your preconceptions about the world. You are trying to visualize change in the human race, its turning into something different from what it is now, but you are using thinking tools that belong to the current here-and-now. I believe this requires more play, the "lego" kind of thinking, rather than pre-thought theories. Also, you are trying to predict evolution by reducing its nature to a system of complex equations. But since the variables "emotions", "hope", "faith" and "belief" are not included in the scope of causes that may alter the course of human evolution, I wonder how you will ever come to an accurate answer.

    Everything about us might be reduced to our genome, even our conception of the soul and God. I am sure of this. Everything starts and ends with our body-mind system. But the keys to understanding our behaviors, and thus predicting them and their evolution, remain undiscovered.

    My last comment: I think the in-system or out-of-system view has nothing to do with the human capacity for self-awareness or the possibility of examining oneself. What makes that self-consciousness and recognition opaque most of the time is a survival defense mechanism of the mind that puts out of reach (in the dark) perceptions about ourselves that it (the mind) judges to threaten the whole system. It will even put the system at mortal risk in order to keep defending it from those perceptions. The clearest example: someone who takes drugs without understanding why (which means that the addiction is truly effective). The addict is keeping in the dark things from his past (...childhood, let's say) that his mind perceives as a big danger or as unbearable suffering to live with consciously. Even more terrible than intoxicating the whole system with substances. Maybe because our experience and absorbing capacity (learning) when we are children make a greater impact on our mind, the memory of some traumatic thing at that age is recalled in the adult's present as a big drama, when it is no longer so. Seems crazy? Illogical? It is not. It sounds very natural to me. I prefer the idea that our models and theories about what we are and how we work are wrong and illogical, rather than saying those addicts are mentally ill or insane.

    But this requires an open mind. That's why I recommend boundless thinking... letting the deepest side of yourself, your purest mind, speak up... trying to feel (decode) the answers it gives. Like some kind of meditation. At least it can be a good experience. It sounds odd that I am suggesting a way of thinking to one of the greatest minds ever, no? =)

    I know I have not answered whether we will comprehend supra-human emergence. I am focusing on whether that is even possible from these foundations... My deepest respect, mon ami. I am truly blessed to have met you. Au revoir.

    By Blogger Gisela Giardino, at 1:16 AM  

  • Well, I finally got a picture of a Hive Mind in formation, so I think this settles the issue. ;-)

    By Blogger Steve Jurvetson, at 6:27 PM  

  • In a recent retort to Metcalfe’s Law and Reed’s Law, Odlyzko and Tilly argue that variation in link strength means the value of a network grows as n*log(n) instead of n^2 or 2^n (a quick check of the arithmetic follows the quote below).

    “That means, for example, that the total value of two networks with 1,048,576 members each is only 5 percent more valuable together compared to separate. Metcalfe's Law predicts a 100 percent increase in value by merging the networks.

    It's not a merely academic issue. "Historically there have been many cases of networks that resisted interconnection for a long time," the researchers say, pointing to incompatible telephone, e-mail and text messaging standards. Their network effect law, in contrast to Metcalfe's, shows that incumbent powers have a reason to shut out smaller new arrivals.

    When two networks merge, "the smaller network gains considerably more than the larger one. This produces an incentive for larger networks to refuse to interconnect without payment, a very common phenomenon in the real economy," the researchers conclude.”
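
    A rough check of the 5 percent and 100 percent figures, taking value ~ n*log2(n) for Odlyzko-Tilly and value ~ n^2 for Metcalfe (the base of the logarithm barely changes the result):

        import math

        n = 1_048_576  # members in each of the two networks

        def value_odlyzko(n): return n * math.log2(n)   # n*log(n) law
        def value_metcalfe(n): return n * n             # n^2 law

        for name, value in (("n*log(n)", value_odlyzko), ("n^2", value_metcalfe)):
            separate = 2 * value(n)     # two networks kept apart
            merged = value(2 * n)       # one combined network
            gain = (merged / separate - 1) * 100
            print(f"{name}: merging gains {gain:.0f}%")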

    By Blogger Steve Jurvetson, at 11:29 AM  

  • Cool new research in SEED on this topic

    By Blogger Steve Jurvetson, at 7:29 PM  
