The J Curve

Sunday, January 09, 2005

Thanks for the Memory

While reading Jeff Hawkins’ book On Intelligence, I was struck by the resonant coherence of his memory-prediction framework for how the cortex works. It was like my first exposure to complexity theory at the Santa Fe Institute – providing a perceptual prism for seeing the consilience across various scientific conundrums. So, I had to visit him at the Redwood Neuroscience Institute.

As a former chip designer, I kept thinking of comparisons between the different “memories” – those in our head and those in our computers. It seems that the developmental trajectory of electronics is recapitulating the evolutionary history of the brain. Specifically, both are saturating with a memory-centric architecture. Is this a fundamental attractor in computation and cognition? Might a conceptual focus on speedy computation be blinding us to a memory-centric approach to artificial intelligence?

• First, the brain:
“The brain does not ‘compute’ the answers to problems; it retrieves the answers from memory… The entire cortex is a memory system. It isn’t a computer at all.”

Rather than a behavioral or computation-centric model, Hawkins presents a memory-prediction framework for intelligence. The 30 billion neurons in the neocortex provide a vast amount of memory that learns a model of the world. These memory-based models continuously make low-level predictions in parallel across all of our senses. We only notice them when a prediction is incorrect. Higher in the hierarchy, we make predictions at higher levels of abstraction (the crux of intelligence, creativity, and all that we consider human), but the structures are fundamentally the same.

More specifically, Hawkins argues that the cortex stores a temporal sequence of patterns in a repeating hierarchy of invariant forms and recalls them auto-associatively. The framework elegantly explains the importance of the broad synaptic connectivity and nested feedback loops seen in the cortex.

The cortex is a relatively new development on evolutionary time scales. After a long period of simple reflexes and reptilian instincts, only mammals evolved a neocortex, and in humans it usurped some functionality (e.g., motor control) from older regions of the brain. Thinking of the reptilian brain as a “logic”-centric era in our development that then migrated to a memory-centric model serves as a good segue to electronics.

• And now, electronics:
The mention of Moore’s Law conjures up images of speedy microprocessors. Logic chips used to be mostly made of logic gates, but today’s microprocessors, network processors, FPGAs, DSPs and other “systems on a chip” are mostly memory. And they are still built in fabs that were optimized for logic, not memory.

The IC market can be broadly segmented into memory and logic chips. The ITRS estimates that in the next six years, 90% of all logic chip area will actually be memory. Coupled with the standalone memory business, we are entering an era for complex chips where almost all transistors manufactured are memory, not logic.

At the presciently named HotChips conference, AMD, Intel, Sony and Sun showed their latest PC, server, and PlayStation processors. They are mostly memory. In moving from the Itanium to the Montecito processor, Intel saturated the design with memory, moving from 3MB to 26.5MB of cache memory. From a quick calculation (assuming 6 transistors per SRAM bit and error correction code overhead), the Montecito processor has ~1.5 billion transistors of memory, and 0.2 billion of logic. And Intel thought it had exited the memory business in the ’80s. |-)
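
That quick calculation can be spelled out. Only the 26.5MB cache figure comes from Intel; the 6-transistor SRAM cell and the SECDED-style ECC overhead (8 check bits per 64 data bits) are assumptions for illustration:

```python
# Rough transistor count for Montecito's 26.5MB cache.
# Assumptions (illustrative, not Intel's disclosed design): 6-transistor
# SRAM cells and SECDED-style ECC at 8 check bits per 64 data bits.

CACHE_BYTES = 26.5 * 2**20
TRANSISTORS_PER_BIT = 6
ECC_OVERHEAD = 8 / 64            # 12.5% extra bits for error correction

data_bits = CACHE_BYTES * 8
total_bits = data_bits * (1 + ECC_OVERHEAD)
memory_transistors = total_bits * TRANSISTORS_PER_BIT

print(f"~{memory_transistors / 1e9:.1f} billion memory transistors")
# ~1.5 billion memory transistors
```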

Why the trend? The primary design enhancement over the prior generation is “relieving the memory bottleneck.” Intel explains the problem with their current processor: “For enterprise work loads, Itanium executes 15% of the time and stalls 85% of the time waiting for main memory.” When the processor lacks the needed data in the on-chip cache, it pays a long time penalty to access the off-chip DRAM. Power and cost are also improved to the extent that more can be integrated on chip.
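
The arithmetic behind such stall figures is worth making explicit. The latencies and miss rate below are illustrative assumptions, not Intel's numbers:

```python
# Why a small cache-miss rate dominates execution time.
# Hit latency, miss penalty, and miss rate are illustrative assumptions.

hit_cycles = 1          # on-chip cache hit
miss_penalty = 300      # extra cycles to reach off-chip DRAM
miss_rate = 0.02        # 2% of accesses miss

avg_cycles = hit_cycles + miss_rate * miss_penalty   # 7 cycles per access
stall_fraction = (avg_cycles - hit_cycles) / avg_cycles

print(f"{stall_fraction:.0%} of memory-access time spent stalled")  # ~86%
```

Even a 2% miss rate leaves the pipeline stalled most of the time, which is why piling on cache beats adding logic.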

Given the importance of memory advances and the relative ease of applying molecular electronics to memory, we may see a bifurcation in Moore’s Law, where technical advances in memory precede logic by several years. This is because molecular self-assembly approaches apply easily to regular 2D structures, like a memory array, and not to the heterogeneous interconnect of logic gates. Self-assembly of simple components does not lend itself to complex designs. (There are many more analogies to the brain that can be made here, but I will save comments about interconnect, learning and plasticity for a future post).

Weaving these brain and semi industry threads together, the potential for intelligence in artificial systems is ripe for a Renaissance. Hawkins ends his book with a call to action: “now is the time to start building cortex-like memory systems... The human brain is not even close to the limit” of possibility.

Hawkins estimates that the memory size of the human brain is 8 terabytes, which is no longer beyond the reach of commercial technology. The issue though, is not the amount of memory, but the need for massive and dynamic interconnect. I would be interested to hear from anyone with solutions to the interconnect scaling problem. Biomimicry of the synapse, from sprouting to pruning, may be the missing link for the Renaissance.

P.S. On a lighter note, here is a photo of a cortex under construction. ;-)

29 Comments:

  • I'd add that it's 4000+ year old Vedic knowledge (and by direct extension Buddhist) that the brain is only a memory unit. Awareness is greatly increased when the brain ceases to interfere and ceases to obscure perception with its predictions and recollections. In meditation the sensations in the brain become still and your perception becomes much clearer. But logic is also known as a hindrance, only a tool used to sort perceptions. The Mind (in Buddhist thought residing in the heart space, though not physically), in its form of pure awareness is aware of reality, but is not really situated inside reality. Which may sound far-fetched, but so are wave-particle duality and non-locality and virtual states. Perception is also ultimately a hindrance, as is Consciousness (which is defined as a series of impressions under the false assumption that there is a stream of them, and that time "exists"). Awareness is beyond a concept of individual self and causal flow.

    So if you model a computer after this, it probably will accomplish less than the average monk. I'm sorry Dave, I can't do that. You see the door doesn't inherently "exist" in the way you assume it does...

    Fascinating blog, thanks for writing!

    -felix
    http://crucial-systems.com

    By Anonymous Anonymous, at 10:45 AM  

  • very interesting.... gives new meaning to Ray Kurzweil's book Age of Spiritual Machines...

    By Blogger Steve Jurvetson, at 8:49 PM  

  • You raise many interesting issues--- and interconnectivity is certainly one of the most daunting.

    I like to illustrate my impression of the size of the brain-technology divide by comparing a human brain to the Los Alamos ASCI Q supercomputer --- currently number 6 in the Nov 2004 ranking of the world's fastest computers. ASCI Q has 1.5e11 transistors and ~12,000 CPUs running at 1.25 GHz, 33TB of RAM and 664TB of total storage; it consumes 3MW of power and has a volume of about 1.2e7 cc; performance is in the 30 tera-operations (3e13) range.

    The brain, on the other hand, has 3e10 neurons (3e14 synapses) with a neural firing rate ~1 KHz which equates to something like 300 peta-synaptic “operations” (3e17) per second (and does it at 20W in 1400cc of space).
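
Spelling out that arithmetic (every figure is the order-of-magnitude estimate quoted above, nothing more precise):

```python
# Order-of-magnitude brain vs. ASCI Q comparison, using only the
# rough estimates quoted in this comment.

synapses = 3e14
firing_rate_hz = 1e3

brain_ops = synapses * firing_rate_hz     # 3e17 "synaptic ops"/sec
asci_q_ops = 3e13                         # ~30 tera-operations/sec

print(f"throughput ratio, brain / ASCI Q: {brain_ops / asci_q_ops:.0e}")
print(f"brain ops per watt:  {brain_ops / 20:.1e}")    # 20 W
print(f"ASCI Q ops per watt: {asci_q_ops / 3e6:.1e}")  # 3 MW
```

On these crude numbers the brain delivers ~10,000x the raw throughput at roughly a billion times better energy efficiency.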

    I, too, try to avoid the brain-computer metaphor, but it is good for establishing context---so given the complexity of a neuron, I think the best part-to-part comparison is between the number of neurons (3e10) and the numbers of CPUs (1.2e4). As you allude to, once bio-plasticity is introduced, it makes the comparison a real apples-to-oranges affair.

    So what’s a cogno-neuro-technologist to do? I think some of the often-overlooked differences between neuronal synapses and digital computing (e.g., that synapses are nonlinear, plastic, analog systems that encode information in their spatial-temporal distributions using non-binary logic) could certainly be useful for future technological design inspirations.

    Yet, I think low-level neuro-inspiration will only get people so far in the quest for intelligent machines. In fact, I’m not even convinced a strictly bottom-up approach (i.e., understanding the neural implementation of the brain) is a necessary prerequisite for creating intelligent machines; understanding what is encoded is at least equally as important as how the information is encoded (for some good reading see David Marr).

    If you haven’t come across the site before, there are some interesting government programs related to bio-computing that might pique your interest: http://www.darpa.mil/ipto/Programs/programs.htm

    Please feel free to drop me a line.

    Chris Furmanski
    chris@furmanski.net
    http://furmanski.net/

    By Blogger c. furmanski, at 3:24 PM  

  • Steve,
    Enjoy your Blog... I was wondering, what are your thoughts on the existence of God? Do you believe in God, or chance, or both?
    -- WM

    By Anonymous Anonymous, at 5:07 PM  

  • Try MacArthur Fellow neuroscientist Paul Adams (http://www.hsc.stonybrook.edu/som/neurobiology/adams.cfm).

    His research statement:

    It is thought that learning occurs by activity-dependent adjustments of synaptic strength. It is also thought that learning plays an important role in the formation and refinement of elaborate neural networks such as the neocortex. We are interested in the possibility that the basic microcircuit of neocortex has evolved to minimize errors in synaptic learning, allowing the self-organisation of complex networks. This view of neocortex would become particularly compelling if (1) errors (arising from inevitable molecular noise) could make learning by large networks impossible and (2) enigmatic features of neocortical structure and function received a natural interpretation within this framework. Our main effort has been in area (1), though we have suggestions for (2) also. We explore simple formal models of learning by single neurons using simulation and mathematical techniques. The neuron receives time-varying inputs drawn from a defined statistical distribution, via activity-dependent synaptic weights. We use a Hebb rule for the weight updates, but modify the rule to allow for “learning errors”, either by making the updates a stochastic function of input and output activities, or by allowing cross-talk between neighboring synaptic updates. We explore the relationship between the evolution of the weights and model parameters such as error rates, numbers of input neurons, postsynaptic nonlinearities, input statistics and normalization constraints.
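
A minimal toy version of such an error-corrupted Hebb rule (an illustrative sketch only, not Adams' actual model):

```python
import math
import random

# Toy Hebbian learner with synaptic cross-talk: each intended weight
# update partially leaks onto its neighboring synapses (the "learning
# error"). Illustrative formulation, not Adams' published model.

random.seed(0)
n, eta, crosstalk = 50, 0.01, 0.1

w = [random.gauss(0.0, 0.1) for _ in range(n)]
for _ in range(500):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]       # input pattern
    y = math.tanh(sum(wi * xi for wi, xi in zip(w, x)))  # postsynaptic output
    ideal = [eta * y * xi for xi in x]                   # error-free Hebb update
    # cross-talk: blur each update across its two neighbors (ring topology)
    dw = [(1 - crosstalk) * ideal[i]
          + crosstalk * (ideal[i - 1] + ideal[(i + 1) % n]) / 2
          for i in range(n)]
    w = [wi + di for wi, di in zip(w, dw)]
    norm = math.sqrt(sum(wi * wi for wi in w))           # normalization constraint
    w = [wi / norm for wi in w]
```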

    By Blogger Frank Ruscica, at 6:39 PM  

  • Hi Steve,
    I played the traditional "predictions for the next year" game with friends and a discussion yesterday gave me the opportunity to publish a part of my contribution.

    On first sight there is no connection between the two subjects, but there is one in my mind, and I would like to share it with you.

    A short version of my post :
    --
    Semantic Internet (and not only Web), allowing access to Shared Documents all over the Net, will happen in 2005, with two major consequences: a completely new way to distribute one's work, and the building over the next few years of the Interpedia, composed of articles written and shared by experts.
    My guess (not posted) is that Gxxxxx is heading in that direction, using distributed generation of indexes (via desktop Gxxxxx Search) and planning to compile them into a central index to facilitate access to shared resources.
    Technology is available, people are used to sharing documents, ways of claiming paternity exist, aggregating feeds and alerts is already common.
    [full text in French here FR]
    --

    This could be the first build of a Net-wide common memory stock, each document described by keywords added automatically, by content analysis, or manually.
    It would be accessible by the "connected ones," who would have the possibility to create "personally" filtered packs fitting their needs, and eventually share them back.

    While reading your post, the parallel between what I call the Semantic Internet and the "memory-prediction framework for intelligence" popped up.
    Despite the fact that this would be based on "classic" memory blocks, it looks much like Hawkins' model, based on invariant forms [documents], broad connectivity [the Net] and nested feedback loops [aggregation].
    Seems that evolution is once more repeating itself ;-)

    If my guesses [Google/Apple and maybe Yahoo! now working on that] and prediction [the building of Semantic Internet] are false, someone should take care to build that.

    What do you think about it?

    By Blogger OldCola, at 10:18 PM  

  • Bonjour! The Google founders would agree with you. Every time I talk about Google's future with Larry Page, he argues that it will become an artificial intelligence.

    I have a hard time factoring in the asynchronous aspect of data structure updates such as this. It seems relatively static and decoupled to me, but I have not been able to grasp it clearly. Maybe it will operate at a peculiar clock rate, where aggregation occurs over several seconds…. And so perhaps we would have a hard time recognizing it.

    By Blogger Steve Jurvetson, at 10:30 PM  

  • I'm actually trying to figure that out. Read eclectic thinkers http://eclectict.blogspot.com/ and check back. I'll publish the sections as I finish them. I am working on the theoretical side, the epistemological implications of all this. One result is a theory of how we love, and how a computer may also do the same.

    By Blogger czarandom, at 8:31 AM  

  • Steve,
    it is quite easy to do if you split the job. I posted at the Eclectic Thinkers Blog a more extended and probably more explicit description.
    Czarandom, I'm looking forward to reading your posts.

    By Blogger OldCola, at 8:52 AM  

  • When I write a book, it's like I write a chip, or maybe an architecture. When it's done, it's done. When we turn to the brain's organization, it seems that all is done but at the same time all is under way.

    I'm always in trouble here. The brain's organization doesn't "store" something in some place, but is able to "restore" everything when I call for the thing.

    How does the brain do it? One of the suggestions (not solutions, please) lies in the dynamic structure of our brain. Our brain - supposedly - doesn't store anything, but dynamically adds paths over paths like a grid, modified by experience day by day.

    The grid of neurons somehow "marked" as carriers keeps a record of the importance of some simple message that may activate a neuronal connection that then reproduces what we call "memory".

    Grids of greater importance get more attention than grids of lesser. Neurons react not to a single event, but only when a pool of events proves similar to what the grid's information stores as an event.

    Not a single fact or thought, but a chain of facts and thoughts that as a whole proves meaningful.

    Not one key, so to speak, but almost seven (just to pick a number) keys together move the whole.

    But, Steve, this is a very tangled field, indeed. A walk in a jungle.

    By Blogger Franco Cumpeta, at 1:22 PM  

  • Let me drop two flying thoughts. My apologies first if this is too basic, or simply nonsense. I am intuitively trying to find an answer; I am not into this subject, not even near. Thank you.

    1) When Medicine began to realize that its patients had something called a psychology (psyche), and that the latter was of tremendous importance to their physical condition (soma)... it was a HUGE step for the discipline.

    The connection I make here is: where, in all this brain study, is the body that holds the brain acknowledged? I do believe that our intelligence - though centered in the brain - is not fostered only by its activity. The easiest example is to see how a person with nutritional problems has severe problems in thinking and sleeping.

    It would be like saying: our body is a whole intelligent organism; we are not just brains with legs.

    This thought also leads me to ask whether, in this study of brain function and its comparison with possible future AIs, the substrate (the "hardware") is acknowledged, and how. I guess that as long as we don't fully understand the exchanges of substances - what they are, at which temperatures and frequencies they happen, how electrically charged the process is (in the brain and in the whole body) - btw, all of this a study that leads us into the nanotech realms - it will be hard to find a way to a possible and useful AI as we dream it today. If the protein's structure already does its job with such grace, we should try to emulate that, take it as a model, shouldn't we?

    2) My second thought, not related to the first one, has to do with the structure of the brain. I will speak from my own awareness of my thinking process. We can't stop thinking. We even dream in our sleep. There is no "off" in the brain. The day it turns off, you are dead. This leads me to think that brain function relies on, is founded in, its constant dynamics, its constant activity, which may vary in speed or intensity but is always present. This may make possible the lack of a "storage" (ROM) place in the brain, because everything is happening - is present - at different layers of consciousness, but simultaneously (RAM).

    I tend to believe that people who are more intelligent have a better connection between these layers in surface and in depth, so they are able to connect more ideas and perceptions and memories in the same moment, or in less time, than ordinary people.

    The other thing I thought regarding these dynamics is about the circuits - electrical impulses? - that our brain makes, which we experience as thoughts, feelings, impulses, fears, direct action, etc. I think that in this circuit formation of our mental programming reside the pros and cons of our intelligence. There you have our capacity to learn, that is: to make habits, the good and the bad ones. This explains why someone can be excellent or brilliant at something and a disaster in another activity, or it may simply explain the inherent contradictions we can live with every day, e.g. a biologist believing in the Bible. Because in the end, in its very nature, there is no moral, nor logic or ethics, in our brains. There are only more or less pronounced circuits that allow more or less flow through themselves. This should also explain why there is pleasure - and the seeking of it - in repetition, and exhaustion while learning something new; and the older you are, the worse. The necessary repetition in music and the visual arts that we find pleasant may also have to do with this. Our brain is comfortable running its common routine => that makes it so hard to break circular habits. Or makes us suffer withdrawal symptoms.

    Ok, that's it for now... Did I put you to sleep? Hope I didn't! Again, apologies if these thoughts are too basic. Hope they are somehow useful. Adieu. |-)

    By Blogger Gisela Giardino, at 3:12 AM  

  • If memory serves me right, pretty much everyone in the field of Artificial General Intelligence (and it's a damned small club) agrees that memory, memory content and memory bandwidth are more important than CPU cycles. I'm probably the odd duck out, as I think that neither memory nor computing cycles nor any type of hardware resource are a limiting resource - I think that at this point it's a software problem; all the necessary crunch exists, the hard part is shaping it. But, if crunch does turn out to be important, then I'd bet on memory and memory bandwidth being more precious than CPU cycles.

    If your computer had a CPU that ran at 200Hz, like a biological neuron, then you'd also need a hundred trillion processors and massive caching of intermediate results just to compute anything in realtime. If I was stuck with 200Hz processors, but I had a hundred trillion of them, then I would need to implement an intelligence that was virtually all cache lookups.

    An AGI need not be so poorly designed. We can replace massive parallelism with massive serialism.

    By Blogger EliezerYudkowsky, at 8:21 AM  

  • OldCola et al.: Thanks for the rich discussion and eclecticism. This is what makes blogging worthwhile.

    Eliezer: Good to hear from you. Please tell me more about the massive serialism topology to overcome the connectivity bottleneck… If it's a bus/mux scheme, roughly how much logic overhead is there per bit of memory? If I imagine an IPv6 node per bit, then it seems high. =)

    The logic/memory tradeoff also relates to memory design (e.g., the value of multi-port memories, multi-level memories, interconnect topologies).

    Logic gates can be implemented in memory (as a look up table, as Xilinx does it) as can switch fabrics, but I am very curious about the fine-grained details of synaptic fan-out implementation or simulation (especially with self-assembling molecular electronics that are easier with simple 2D wiring grids).
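
For concreteness, a 4-input LUT reduces any 4-input gate to a 16-bit truth table read out of memory (a sketch; the helper names are made up):

```python
# A logic gate implemented as memory, FPGA LUT-style: the "gate" is
# just a 16-bit truth table addressed by the four input bits.
# make_lut4 is an illustrative helper, not any vendor's API.

def make_lut4(truth_table: int):
    def gate(a: int, b: int, c: int, d: int) -> int:
        index = (a << 3) | (b << 2) | (c << 1) | d
        return (truth_table >> index) & 1
    return gate

and4 = make_lut4(1 << 15)             # only input 1111 reads out a 1
nand4 = make_lut4(0xFFFF ^ (1 << 15)) # complement of the AND table

print(and4(1, 1, 1, 1), and4(1, 0, 1, 1))  # 1 0
```

The same bits of SRAM implement any of the 65,536 possible 4-input functions, which is what makes logic-in-memory so flexible.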

    Again, the motivation for this line of questions is the presumption that in the near future, the abstract capabilities embodied in Moore’s Law will continue for memory and stagnate for logic. (Computational power relies on a balanced mix of memory and logic, so the Kurzweil curves may continue, but with a reallocation of transistor functionality under the covers.)

    Gisela: Much of the back half of your comment resonates with Hawkins’ framework. I think you would really like the book.

    By Blogger Steve Jurvetson, at 12:36 PM  

  • Just thought that I'd add a couple of comments to the interesting points that have already been made. Yes, there is a processor/memory tradeoff to discuss when looking at neuro-inspired computation, but a more fundamental tradeoff was originally presented by Minsky and Papert back when they wrote Perceptrons - and to the best of my knowledge the limitation still stands. They called it the 'Time Versus Memory For Best Match Problem'.

    If we go right back to the basic 'pattern matching' neural networks, they can be characterised as a device which, when presented with a novel pattern, will find the best match to that pattern from within a set of previously trained patterns. What Minsky and Papert did was look at the exact same problem from the perspective of conventional computing, which given that the current approach to building neural-network based systems relies on emulating parallel analog computers on serial digital computers is even more significant than it was at the time.

    The two corner cases for implementing any matching algorithm are using a direct lookup memory and an exhaustive search. The direct lookup memory essentially uses the search inputs as address lines into a huge RAM (maximum memory) while the exhaustive search goes through every trained pattern in sequence comparing it against the novel input and then picks the one that matches (maximum time). Minsky and Papert observed that when looking for an exact match, there are a number of algorithms that can dramatically reduce the time memory product (eg hash tables, etc). However, they postulated that no such algorithms were possible for solving the best match problem - an observation which to the best of my knowledge still stands for conventional computers (let's reserve judgement on quantum computation for now!).
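
The exhaustive-search corner case is easy to sketch over binary patterns, using Hamming distance as an (illustrative) match metric:

```python
# The "maximum time" corner case of best-match: scan every stored
# pattern and keep the one with the smallest Hamming distance to
# the novel input. No sub-linear shortcut is assumed to exist.

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def best_match(novel: int, stored: list[int]) -> int:
    return min(stored, key=lambda p: hamming(novel, p))

patterns = [0b10101010, 0b11110000, 0b00001111]
print(bin(best_match(0b10111010, patterns)))  # 0b10101010 (distance 1)
```

The direct-lookup corner case would instead use the novel input itself as an address into a table of size 2^(pattern width), which is exactly the horrendous memory scaling described above.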

    If we assume that the time versus memory for best match problem still stands, the only practicable solution is exhaustive search, since the scaling properties of direct memory lookup are horrendous. I actually spent 36 pages trying to convince my sceptical PhD supervisor of this 10 years ago but to no avail! See the following link if you don't believe me...

    http://www.zynaptic.com/pdfs/Paper_03.pdf

    So, my take is that if you want to build neuro-inspired computer systems using current technology, the best way forward is to bolt a 'calculate hamming distance' instruction on the side of a conventional processor and then plonk a whole load of them on a chip with vast quantities of memory bandwidth. The only things you need then in order to achieve the kind of functionality you mentioned in the original article is the right macro level structure and a good unsupervised, distributed learning algorithm... but I've decided to keep that bit of the secret sauce to myself. At least until I can work out how to make money out of this stuff!

    Chris (chris@zynaptic.com)

    By Anonymous Anonymous, at 10:39 AM  

  • Call me a pseudo-intellectual, or even uninformed, but the only way "memory-based models" can "continuously make low-level predictions in parallel across all of our senses", is if there's some sort of control and post-processing mechanism that directs the "simulation" at the "memory" level. Isn't that by definition a higher function -- i.e. an operation that would have to be overseen by non-memory neurons? I guess the answer depends on how loosely you define "memory"...

    I'm no neuro-surgeon, or for that matter a circuit designer, but my guess would be that in nature there's no clear division b/w memory and CPU [for the lack of a better term] cells. Depending on the particular network/processing activity, a bunch of neurons can perform memory, or higher control functions (depending on the pathway invoked?).

    Cheers,
    -Georgi
    ge0rgi@msn.com

    By Anonymous Anonymous, at 9:33 PM  

  • The BIG thing no one sees is that the brain is an INTEGRATED system - all the false starts and most of the bugs have been filtered out. There is an enormous wastebasket filled with all the things that did not work or that were suboptimal for the current design.

    AI science has to walk all those trees and discover the false leads just as the evolution of mind had to travel those paths. There is an enormous quantity of design cycles behind our minds.

    Like any Very Large System, the brain is a complex mix of compromises and architecture decisions, as well as shortcuts that make sense in special circumstances. The integration is the hard part, and we just see the end result.

    AI researchers need to look at Very Large Systems like communication systems, revenue collection systems, market systems, military command and control, etc. for hints. These systems are aware in a limited sense and are constantly evolving as humans make them better and more useful. These systems are often very different under the hood although they may do the same thing. We cannot assume that everyone's brains work the same way or that people think the same way.

    I am very skeptical of the academic approach (as Hawkins typifies) because every academic I talk to, in most fields, has no clue as to the resources in the hands of top-notch IS professionals, nor have they observed these people at work on these systems. (Nor have they had to build and maintain a Very Large System.) Building an AI is building a Very Large System.

    For example, a brand new Dell 2850 with a tuned Oracle DB and a good design (benefiting from all the past lessons and mistakes) can out-analyze the current systems that economists use to study economic data. I was able to do in hours what took a PhD years to do for his thesis.

    GOOGLE already is an AI. Each human is a synapse, and the memory is all the data on the web, with the neurons being the links from Google. The googlesphere runs at 0.1 Hz or even slower, but the overall cycle time to get reliable information decreases significantly. A person today with access to Google is much smarter and better informed than someone from ten years ago. It still takes a special sense of creativity to develop key insights, but the information is SO much more accessible and useful.

    The role of the Web and Google are pretty much done. The basic model is deeply rooted now and just needs tweaking.

    The real focus now should be on making the interface smoother and seamless. Rather than search and filter, I should just KNOW the information as I know my father's name or the face of my wife. The same goes for expressing myself - the words should just flow into the medium rather than having to type.

    For this reason I am a little disappointed with Google's approach. They are still focused on cataloging, not on usefulness. I think that if they focused on usefulness, then the cataloging would follow.

    The memory-prediction framework is part of the process, but memory has to be persistent. Google is a giant set of pointers to memory. And it's mostly persistent.

    The way Very Large Systems evolve is by humans cooperating to this end. A key aspect of AI is task-related, goal-focused activities that require many actors. I can take a group of four-year-old kids and have them move a shelf of books from one place to another. Can AI design a group of robots to do this, and can they spontaneously cooperate and find better and better ways to do it? The stunning part is the practice effect - even little kids invent and use better ways to do things, often in humorous ways.

    For AI to take off, this playful cooperation has to occur among the AI systems. Human cooperation is the result of minds integrated together, just as the brain itself is integrated within itself. How to make AI systems cooperate?

    By Blogger PureData, at 9:44 PM  

  • This topic fascinates me as well. Have you read Society of Mind? Dated, but it crinkled my toes in a way that only maybe the 1970's flowing-hair Farrah Fawcett poster has. Except, I never took "Society of Mind" alone with me into the bathroom.

    By Blogger Mr. Sun, at 1:42 PM  

  • This comment has been removed by a blog administrator.

    By Blogger Ben Of Denver, at 2:10 PM  

  • (oops, sorry for the comment substitution, too many little things wrong in the previous incarnation to ignore...)

    Phew, I finally got through the post in my 'spare' time. Lots of really good thoughts.

    Logic VS Memory VS Bandwidth
    AI vs Brain Power
    Searching
    Patterns
    Organization
    The Nature of Intelligence

    Great stuff. I think I need to find time to read some of the recommended readings! Really interesting topic. I was disappointed in the actual state of AI when I took it in college. It seemed too rote, too much human intelligence getting codified into the programs we created. Since then I've thought about AI in my spare time, even though my job doesn't require it.

    Two replies on the nature of the beast especially caught my eye. Gisela, your post is very interesting. Our brain indeed would not be very capable without the rest of our bodies. The senses (allowing that touch includes all feedback from physical happenings that aren't auditory, visual, smell, or taste) give us the ability to form impressions of a given situation. Our 'mechanical' body itself allows us to act to change the situation. How would you say that sense and action contribute to actual intelligence, besides providing the purpose for intelligence?

    The second topic was that of Pattern Matching. Improving pattern matching would yield huge returns. I think there are actually strategies that we could teach computers to minimize the search space. The autonomous navigation grand challenge that Darpa hosted last spring has provided me a lot of food for thought. None of the vehicles went more than 7 of the course's 142 total miles, but it's a great task to think about a computer tackling. It seems to me that most of the teams went after the brute force method, increasing the amount of data, and trying to make up for perception failures by adding data and complexity. Add sensors, more sensors, more sensors than any human would ever need to navigate the course. Thinking about how we are able to succeed at the same task, I come to the conclusion that our brains are highly capable of shortcuts. We are expert pattern matchers. We are expert simplifiers. We are experts at measuring change over small periods of time. We know what to ignore by doing a quick categorization of the patterns we see in our field of view.

    If we are roaming our office looking for our misplaced keys, we quickly limit our search scope. We know they are not on the white board, the door, or the walls. We may look to see if they're hanging on something, but we usually remember these places, or come back to do a second search after the obvious search has failed. In terms of memory, we usually only remember the details of the situation we consider 'relevant' to the task we're engaging in. Things like what features are in the room, or what the name of the street was that we turned on. In emotional moments we can be 'hyper aware' and are able to remember many more details, but usually that is when we slow down and "smell the roses." We would totally fail to function in life if we studied everything with that level of detail.

    It seems to me that we could make computers seem a lot smarter by teaching them to be aware of some of these shortcuts for pattern matching. Our brains have many advantages over computers, so why do we demand that computers be more precise than we are to start out with? In terms of the navigation example, the computer only needs to worry about obstacles: things that are attached to the ground, or contours of the ground, that could interfere with a planned path. Using a variation on the fast Fourier transform of a series of images and converting them into a collection of 'objects' or dangerous contours seems like the way to process input from the video stream. Now, that's no small feat, but there are tricks we should be able to use to simplify things. For instance, our eyes see maximum detail at the center of our field of view; maybe we could do something like that for computer vision so it wouldn't get bogged down in unnecessary detail.
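    A minimal sketch of that foveation idea in Python with NumPy (the function and its parameters are my own invention): keep full detail only in a small window of attention and block-average everything else.

```python
import numpy as np

def foveate(image, center, full_res_radius=32, coarse_factor=8):
    """Keep a small window around `center` at full resolution and
    reduce the rest of the frame to a coarse grid, mimicking the
    eye's high-acuity fovea and low-acuity periphery."""
    h, w = image.shape
    cy, cx = center
    # Full-detail patch around the point of attention.
    y0, y1 = max(0, cy - full_res_radius), min(h, cy + full_res_radius)
    x0, x1 = max(0, cx - full_res_radius), min(w, cx + full_res_radius)
    fovea = image[y0:y1, x0:x1].copy()
    # Coarse periphery: block-average the whole frame.
    hc, wc = h // coarse_factor, w // coarse_factor
    periphery = image[:hc * coarse_factor, :wc * coarse_factor] \
        .reshape(hc, coarse_factor, wc, coarse_factor).mean(axis=(1, 3))
    return fovea, periphery
```

    Downstream processing would then spend its effort on the small fovea array instead of the full frame.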

    Breaking down 'life' into streams of objects would seem to be useful for a computer in general. For general intelligence, there are a lot of individual facts, but they fall into a limited categorization. Shapes, known objects, colors, functions, etc. A box is this shape, it sits on things, it contains things, it hides things, it transports things. Walls are usually solid and block movement, doors are mounted in walls, etc. Any of you have recommendations for books that approach vision or machine intelligence in this way?

    The ultimately interesting AI would be one that could learn from its creators from the ground up in a stream-of-consciousness manner. Instead of us teaching a computer facts, we would teach it how to learn facts and recall them. Pursuit of this would really be cool. No hill climbing, expert systems, or neural nets that have to have their noses smooshed in the facts before they perform at all... If computers could break up information into useful chunks and organize them in a hierarchical manner by themselves, it'd be very powerful. A computer could be better than humans at managing shifting perceptions, too, because it could note which facts were inferred and keep its eyes open for facts that validate a particular categorization. A computer could even index the source of each inference so it could go back and re-analyze conclusions at its leisure. Eventually computers should be able to be at least as accurate at inference as intelligent humans. With the right framework, computers should be able to notice patterns, remember things, and make correct (possibly new) conclusions by using trustworthy correlation methods between the pieces of knowledge they have, and to me, that is intelligence.
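    A toy sketch of such inference-provenance tracking in Python (all names are invented for illustration): each fact is tagged as observed or inferred, and inferred facts remember their premises, so conclusions resting on a retracted premise can be flagged for re-analysis.

```python
class FactBase:
    """Toy knowledge store that tags each fact as observed or inferred
    and records which premises an inference was derived from."""

    def __init__(self):
        # fact -> {"source": "observed" | "inferred", "premises": [...]}
        self.facts = {}

    def observe(self, fact):
        self.facts[fact] = {"source": "observed", "premises": []}

    def infer(self, fact, premises):
        self.facts[fact] = {"source": "inferred", "premises": list(premises)}

    def suspect(self, retracted):
        """Return every inferred fact that rests, directly or indirectly,
        on a retracted premise -- the candidates for re-analysis."""
        out = set()
        changed = True
        while changed:
            changed = False
            for fact, meta in self.facts.items():
                if fact in out or meta["source"] != "inferred":
                    continue
                if any(p == retracted or p in out for p in meta["premises"]):
                    out.add(fact)
                    changed = True
        return out
```

    Retracting one observation then cheaply identifies exactly which downstream conclusions need a second look.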

    Oook, I apologize for the long comment. I hope it's interesting and relevant enough to warrant the length.

    By Blogger Ben Of Denver, at 7:52 PM  

  • Oh, and yes. For this pattern matching, we need to come up with methods to access a lot of memory quickly, but this seems like it'd be conducive to processing on large clusters. Store parts of the data on 100 or 1,000 or even 10,000 independent computers and set them all to the task of finding the right pattern match on their personal slice of data. These would have to be moderately fast computers, but 10,000 computers sucking hard on PC3200 memory would be on the order of 10^13 bytes of cumulative throughput per second, right? Major heat and power issues, of course, but enh, it's a supercomputer. Figuring out how to keep the processors busy as the results are tabulated would be interesting, but the parallel-processing people love that sort of challenge.
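    A minimal sketch of that scatter/gather idea in Python (simulated with threads on one machine; all names are mine): each "machine" scans only its own shard for the nearest stored pattern, and the local winners are reduced to a global best match.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def best_match_in_shard(shard, query):
    """Each worker scans only its own slice of the stored patterns."""
    dists = np.linalg.norm(shard - query, axis=1)
    i = int(np.argmin(dists))
    return float(dists[i]), shard[i]

def parallel_best_match(shards, query, workers=4):
    """Scatter the query to every shard in parallel, then reduce the
    per-shard winners to the single closest stored pattern."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        local = list(pool.map(lambda s: best_match_in_shard(s, query), shards))
    return min(local, key=lambda t: t[0])[1]
```

    On a real cluster the shards would live on separate machines, but the scatter/gather shape is the same.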

    By Blogger Ben Of Denver, at 1:04 PM  

  • A short note: First, congratulations! Please, look at the number of comments -whose personal blog gets this much interaction?- the excellence of the posters (don't mind me, seriously) and the length, diversity and thoughtfulness of each contribution. Second, I'd like to welcome you, Ben; I will be back with an answer to your question ASAP.

    Lastly, I had to make this entry, given that it will be comment #23. It is a must. You understand... Thank you. =D

    By Blogger Gisela Giardino, at 5:01 AM  

  • So many fascinating contributions! I had to collect a lot of thoughts for this one…

    Chris: In 1969, Minsky and Papert generalized the limitations of single-layer “Perceptrons” to multilayered systems: "...our intuitive judgment that the extension (to multilayer systems) is sterile". That chilled the field for a while, but progress was then made in multilayer systems.

    But most importantly, a multi-layer neural network is a very poor model of the cortex. It is not biomimicry in any real sense. It lacks run-time feedback loops, which are essential for auto-associative recall and for the very formation of memories from temporal input streams. (Back propagation of errors during training is different from nested loops in the topology of the network).
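    As a toy illustration of why those loops matter, here is a classic Hopfield-style auto-associative memory (a textbook model, not Hawkins' architecture; the names are illustrative). Recall literally consists of feeding the network's output back in as its input until it settles on a stored pattern:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule over rows of +/-1 pattern vectors."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, probe, steps=10):
    """The run-time feedback loop: output becomes the next input,
    so a corrupted probe relaxes toward the nearest stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s
```

    A feed-forward multilayer network has no such loop, which is the contrast being drawn above.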

    Do you recall the assumptions behind the time vs. memory tradeoff? Von Neumann architecture? Traditional single-port memories? At first glance, it seems that the architecture of the cortex lies along a spectrum between the extremes of direct lookup and exhaustive search. Perhaps the degree of synaptic fanout is a hint as to where the balance was struck.
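    As a toy illustration of that spectrum (bit-counting is my example, not anything from the book): the same function written as pure computation, as a precomputed direct-lookup table, and as a memoized middle ground.

```python
from functools import lru_cache

# Pure computation: no memory spent, all the work redone on every call.
def popcount_compute(x):
    return bin(x).count("1")

# Direct lookup: every answer precomputed -- maximal memory, no work at call time.
POPCOUNT_TABLE = [bin(i).count("1") for i in range(256)]
def popcount_lookup(x):
    return POPCOUNT_TABLE[x]

# Middle ground: compute on first use, remember thereafter.
@lru_cache(maxsize=None)
def popcount_memoized(x):
    return bin(x).count("1")
```

    The cortex, on this view, sits somewhere between the second and third extremes rather than at either end.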

    Georgi: I think you can make a compelling computational equivalence argument that the network topology and morphology are loci of computation. So I suspect you are right that "there’s no clear division between memory and CPU.” The CPU is not central; it is the matrix for the memories.

    PureData: Yes! These complex systems will need to be evolved and grown, not designed and installed like Microsoft Office. This applies across evolutionary and developmental vectors.

    The Google founders would agree with you. Every time I talk about Google's future with Larry Page, he argues that it will become an artificial intelligence.

    I have a hard time factoring in the asynchronous aspect of data structure updates such as this. It seems relatively static and decoupled to me, but I have not been able to grasp it clearly. Maybe it will operate at a peculiar clock rate, where aggregation occurs over several seconds…. And so perhaps we would have a hard time recognizing it.

    You might like the discussion here on the emergence of a hive mind.

    Mr. Sun: I read that book many years ago, around the same time as Gödel, Escher, Bach and Rumelhart’s PDP.

    Bennett: To your request for a book that discusses vision and the brain’s processing of “streams of objects”, I’d suggest Hawkins’ book itself. =)

    Hawkins notes that the brain is the only part of the body that has no senses itself. Everything we know and remember comes from the spatial and temporal stream of patterns from input axons to the brain. All of the senses, vision, touch, hearing, etc. are converted into similarly structured streams of pulses before they get to the neocortex.

    So all of the inputs to the cortex are fundamentally similar, and all regions of the cortex perform fundamentally similar, but hierarchically structured functions. This is why people can be trained to see with a camera wired to electrodes on their tongue, and ferrets can develop just fine with their auditory and visual centers of the brain surgically swapped (so the vision center processes sound and vice versa). “In fact, your brain can’t directly know where your body ends and the world begins” (p.60).

    And there is massive internal feedback to the sense-making process. “Your brain constantly makes predictions about the very fabric of the world we live in, and it does so in a parallel fashion… What we perceive – that is, how the world appears to us – does not come solely from our senses. What we perceive is a combination of what we sense and our brain’s memory-derived predictions.” (p.87) When the pattern stream is consistent with our memory-prediction engines, we don’t even notice it.

    So all representations of reality, memory, and “self” are interwoven and internal abstractions, derived from a stream of patterns. I would imagine that some people incorporate more external feedback, and adjust internal representations more frequently, than others. Children do this naturally….

    Ben & Alien23: the I/O can contextually define the intelligence. Several robotics researchers fundamentally believe that we have to build a robot with a humanoid physical interface to the world if we ever want to create a human-like AI.

    For example, a dog can detect identical twin humans by smell alone. Much of a dog's brain is devoted to nasal memory (related to their sensory input - a nose close to the ground), so even their “memories" would be difficult for us to relate to, not to mention any higher-order constructs.

    Philosopher-poet Hunter S. Thompson had this all figured out. Watching a boxing match on TV, he blurted from an ether binge: “Kill the body and the head will die.”

    But Morpheus reminds us of the converse: "the body can't live without the mind." ;-)

    Update On Feb 8, Intel revealed the details of the Montecito processor at the ISSCC conference. They have more error-correction-code memory bits than my conservative assumption derived from an earlier SPARC chip (The ECC overhead is now over 2 bits per byte).

    According to Intel, of the 1.72 billion transistors on the chip, 1.66 billion are memory and 0.06 billion are logic.
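    Working those figures through (using only the numbers quoted above):

```python
total = 1.72e9    # transistors on Montecito, per Intel
memory = 1.66e9   # transistors devoted to memory
logic = 0.06e9    # transistors devoted to logic

memory_fraction = memory / total
# Memory accounts for roughly 96.5% of the chip's transistors.
```

    So even on a flagship "processor", logic is a rounding error next to memory.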

    By Blogger Steve Jurvetson, at 2:56 PM  

  • As far as Google is concerned, I think that in its current form no amount of data, number of network nodes and/or processing speed would ever enable it to become an AI system. This is simply because the traditional server/massive database technology is only as powerful as the underlying principles that govern the relational database that powers Google. Furthermore, the client-server architecture in itself is a limitation, because it ensures no interaction between the "spokes of the wheel".

    We've probably already managed to build a system with greater memory and raw processing capability than, let's say, a cat. Still, without an equally complex "pipeline control mechanism", a cat remains vastly more intelligent. We can see the neural pathway that's invoked when a person's thinking about the color blue, but do we really understand how it all works together? Do we really understand the principles that govern the development and complexity of interconnectivity?

    Steve, fascinating read on "Quantum Computational Equivalence". It's interesting that the brain is not a quantum computer, and yet, in its relatively small size, it manages to achieve what only a quantum calculator can do and what "is literally impossible for any traditional computer, no matter how powerful"... Makes me realize how much of molecular biology, physiology and quantum/string theory I personally know little to nothing about.

    -Georgi

    By Anonymous Georgi, at 7:49 PM  

  • You might look at what Lightfleet Corp is trying to do.

    By Anonymous Anonymous, at 7:00 AM  

  • Hear this one:

    “If the human mind were simple enough to understand, we'd be too simple to understand it.”

    - Emerson Pugh

    How neat. How well put. Loved it.

    Must be shared. =D

    By Blogger Gisela Giardino, at 6:45 PM  

  • Really interesting discussion. A couple more-or-less random thoughts:

    1) A lot of the posts have the implicit assumption that memories are stored somewhere (maybe lots of somewheres, if memory is distributed) and retrieved as they are stored. But one of the most fascinating things about memory is that it's reconstructive -- there's a lot of hypothesis testing and detail-filling that goes on by logic, not by retrieving what happened. For example, Elizabeth Loftus' classic studies of eyewitness testimony: Whether or not people report seeing glass after witnessing a car accident has a lot to do with the question that is posed. If you ask, "how fast were the two cars going when they smashed into one another?" you get higher speed estimates than if you ask, "how fast were the two cars going when they collided?"

    2) Speed of processing (CPU speed?) has been known for a long time to matter quite a bit. For instance, one of the hallmarks of cognitive aging is that people slow down. It's thought (by some) that decreases in processing speed can then lead to problems with high-level processes, like reasoning skill. The way I think of it is that if you have one process trying to retrieve information because it's needed to integrate with some other information held in memory, and if the retrieval process takes longer, the information being held actively in memory has more time to decay. Once the already-held information is degraded, your ability to apply it to the information you've just retrieved will probably be less efficient.

    3) I've been struggling with the question of what constitutes a memory for a long time, and the previous posts about the mind-body connection have reminded me of that. For instance, if you exercise your biceps, later you can more efficiently lift (retrieve?) the weight. Does that mean that your muscles have developed a memory? And if it's not a memory, what definition of memory can we come up with that will exclude weight-lifting while allowing for the processes that our common sense tells us are really memory?
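    The retrieval-speed interaction in point 2 can be caricatured with an exponential-decay toy model (the time constant is invented, purely for illustration):

```python
import math

def residual_activation(retrieval_ms, tau_ms=500.0):
    """Toy model: the longer retrieval takes, the less of the
    actively-held trace survives to be integrated with what
    was retrieved (exponential decay with made-up constant tau)."""
    return math.exp(-retrieval_ms / tau_ms)
```

    Under this caricature, doubling retrieval latency more than squares the loss of the held trace, which is why modest slowing could degrade high-level reasoning disproportionately.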

    -Christy

    By Anonymous Anonymous, at 6:41 AM  

  • Hope to see you use Flickr on this blog also.

    By Blogger jeffysspot, at 3:40 PM  

  • The comments are interesting. The study of eyewitness memory is my graduate study field. The question in point #3 of the 'anonymous' comment, as to whether a muscle stores a memory, is one that I wrote about in a graduate paper in philosophy of mind - one that was not well received.
    I took the position that the very crux or definition of existence is memory in the broad sense implied by 'anonymous'. First we have to distinguish between at least perception, encoding, storing and retrieving as aspects of the broad term "memory". It is recall, and only recall, that we can experience and test. The other features must be assessed by reasoning from measures and observations of recall. Once we take this approach, we can see that recall means the persistence into the present of an occurrence from the past. Is that not all that identifies existence? Is that not exactly what 'anonymous' is considering with respect to imprinting muscles with a pattern of responses?

    By Anonymous Anonymous, at 9:02 AM  

Post a Comment

<< Home