Blue Brain Project

  • Light Eye
    Oct 1, 2006
      Dear Friends,

      Click the link if you can't access the links.


      Love and Light.


      Overview of the Brain
      The Numbers and more numbers
      Q: The level of supercomputing power is not new--it's been around for a decade--so what are the developments that have made this project possible? Advances in our understanding of neuro-architecture that have come from wet labs? New computational tools for handling a model with so many parameters? etc. Or was it rather a project just waiting for someone to be bold enough to do it?
      HM: What really makes it now possible is the convergence of multiple factors.
      1. The data
      The neocortical microstructure has been studied for over 100 years, starting from the seminal work of the Spanish neuroanatomist Ramón y Cajal. An immense amount of information was collected, but it was still not possible to piece all the fragments together and build the complete puzzle. The reason for this is that pure anatomy was not enough; it required combined anatomy and electrical recordings of the neurons and the synaptic pathways that make up the microcircuit. Ten years ago, starting at the Max Planck Institute and continuing at the Weizmann Institute of Science, I began using the new approach of infrared-DIC microscopy to do multi-neuron patch-clamp recordings. This allowed my lab to map out, in a highly quantitative manner, the main elements and synaptic pathways making up the neocortical column (NCC). We are now at a crucial moment in the history of neuroscience, where we can begin to bring together 100 years of research into a single model of the NCC. I had already opened discussions with IBM in 1999 (five years ago) in preparation for this day.
      2. Blue Gene/L supercomputer
      While powerful supercomputers have been around for some time, their use has largely been dedicated to other projects. This is the first time that such supercomputing power has been dedicated primarily to brain research. The Blue Gene/L architecture is now so efficient, compact, easy to run, and scalable to the needs of virtually any project that this is the right moment to use supercomputers to simulate the brain.
      3. The software for large scale neural simulations
      There have been no optimized software programs that can carry out very large-scale (tens of thousands of neurons) simulations of morphologically complex neurons (there are many for simple/point neurons). The software being finalized for such simulations is a hybrid of two powerful approaches: one for large-scale neural network simulations, the Neocortical Simulator (NCS) developed by Phil Goodman at the University of Nevada, Reno, and the other a well-established program called NEURON developed by Michael Hines at Yale. We are in the last stages of finalizing this software. NCS has been benchmarked on Blue Gene/L and it is the ideal large-scale simulator for Blue Gene. (A minimal sketch of this style of detailed single-neuron simulation follows this list.)
      4. Database, Visualization, Analysis and Simulation Expertise
      Expertise in these areas, including at IBM, is now at a very high level and readily accessible.
      Probably more than half the neuroscience community will be very skeptical about this project, but if one listens to the skeptics, nothing will get done. It is the perfect time in history to start this. It will not be easy, but so many researchers are willing to help that I believe we will have an accurate replica of the Neocortical Column within 2-3 years. We will know it is accurate when it behaves electrically like the real one in as many ways as we can test experimentally.
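      To make point 3 concrete, here is a minimal sketch of the kind of morphologically detailed, conductance-based single-neuron simulation that NEURON performs, written against NEURON's Python interface. The two-compartment morphology and every parameter value are illustrative placeholders, not models from the project.

```python
# Minimal sketch: a two-compartment conductance-based neuron in NEURON.
# All morphology and parameter values are illustrative placeholders.
from neuron import h
h.load_file("stdrun.hoc")       # standard run system (continuerun, tstop)

soma = h.Section(name="soma")
dend = h.Section(name="dend")
dend.connect(soma(1))           # attach dendrite to the distal end of the soma

soma.L = soma.diam = 20                       # microns
dend.L, dend.diam, dend.nseg = 200, 2, 11     # a thin, spatially discretized cable

soma.insert("hh")               # Hodgkin-Huxley Na+/K+ channels in the soma
dend.insert("pas")              # passive leak in the dendrite

# Inject a depolarizing current step into the dendrite.
stim = h.IClamp(dend(0.5))
stim.delay, stim.dur, stim.amp = 5, 50, 0.3   # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)       # somatic membrane potential
t = h.Vector().record(h._ref_t)

h.finitialize(-65)              # resting potential, mV
h.continuerun(60)               # simulate 60 ms
print(f"peak somatic Vm: {max(v):.1f} mV")
```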
      Q: With the Blue Gene/L, you'll be able to simulate a single NCC with 10,000 neurons (connected by 10^8 synapses) at the cellular (but not molecular) level?
      HM: Yes
      Q: The human brain has on the order of 1 million NCCs. And because of the nature of its interconnectivity (I presume), you won't be able to simulate that with the equivalent of 1 million Blue Gene/L machines, but would in fact need something even more powerful.
      HM: The human neocortex has many millions of NCCs. For this reason we would need first an accurate replica of the NCC, and then we will simplify the NCC before we begin duplications. The other approach is to convert the software NCC into a hardware version - a chip, a Blue Gene on a chip - and then make as many copies as one wants.
      The number of neurons in the neocortex varies markedly, from 10-100 billion in the human brain down to millions in small animals. At this stage the important issue is how to build one column. This column has 10'000-100'000 neurons depending on the species and the particular neocortical region, and there are millions of columns.
      Q: What species will be modeled first?
      HM: We are building a cellular-level model of the rat Neocortical Column at the age of 2 weeks. This region has the most quantitative data and can also be tested rigorously in parallel with the simulations. Once we have built an accurate copy, we can use this column as a template to include neurons and connections from other brain regions, other ages of the animal, and other species. In principle the template will allow building a neocortical column from any species.
      Q: And this is all still at the cellular level of simulation. Simulating the brain down to the molecular level is inconceivable with the kind of computers we have today, no matter how much faster they get. Correct?

      HM: Yes, it is very unlikely that we will be able to simulate the human brain at the molecular level of detail with even the most advanced form of the current technology. However, there are other directions to solve this problem. We are going to move to molecular-level modeling of a NCC. This software version could in principle be converted into a hardware version - a molecular-level NCC chip - and then we can duplicate as many of them as we want.

      Q: And what is the implication of cellular vs. molecular simulation? By cellular level, do you mean that the Blue Gene/L will be simplifying an NCC to a network of interacting nodes without considering the molecular details? Will that not capture the essential information? What is lost by not going to the 'molecular' level? Are those molecular details really important? (I guess this might be one of the bones of contention among neuroscientists, no?)

      HM: The cellular level is a form of phenomenological model of the underlying molecular processes - a simplification - so it does capture many key processes. But molecular interactions are of course very complex, and they keep neurons on a growth trajectory (real neurons are never biochemically the same), whereas in the simulations neurons will tend to return to a resting state when not activated. A very important reason for going to the molecular level is to link gene activity with electrical activity. Ultimately, that is what makes neurons become and work as neurons - an interaction between nature and nurture.
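      As an illustration of what such a phenomenological model looks like in practice, here is a minimal implementation of the Tsodyks-Markram dynamic synapse, a published phenomenological model of short-term synaptic plasticity in which two state variables stand in for the molecular machinery of vesicle release. The interview does not say this exact model is used in the project, and the parameter values below are illustrative only.

```python
import math

def tm_synapse(spike_times, U=0.5, tau_rec=800.0, tau_facil=1000.0):
    """Event-driven Tsodyks-Markram dynamic synapse (times in ms).

    Phenomenological: resource depletion and facilitation summarize the
    underlying molecular machinery. Returns the relative efficacy of
    each presynaptic spike. Parameter values are illustrative only.
    """
    u, x = 0.0, 1.0          # utilization and available resources
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u *= math.exp(-dt / tau_facil)                  # facilitation decays
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)   # resources recover
        u += U * (1.0 - u)   # each spike boosts utilization
        release = u * x      # fraction of resources released
        x -= release         # depletion (synaptic depression)
        efficacies.append(release)
        last_t = t
    return efficacies

# A 20 Hz spike train shows depression: successive responses shrink.
print(tm_synapse([0, 50, 100, 150, 200]))
```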
      Q: Computerworld Singapore states that you are starting up a model of a rat neocortical column on July 1st. The IBM press release talks about the human brain. Some articles mention simulation on the cellular level, others mention simulations on the molecular level. I suppose you will be modeling different things at different stages, can you give me an outline?

      HM: In the first phase, we will begin by building a cellular-level replica of the neocortical column of 10’000 neurons. This will take about 2 years.

      In the second phase, we will go in two directions simultaneously.
      a. simplify the column and duplicate it to build larger parts of the neocortex and eventually the entire neocortex.
      b. stay with a single column, but move down to the molecular level description and simulation. This step is aimed at moving towards genetic level simulations of the Neocortical Column (NCC).

      Q: Also, how do you go from modeling a rat's neocortex to modeling a human neocortex?

      HM: The NCC is very stereotypical from mouse to man and across brain regions. The rat template that we are building will provide a starting point for incorporating the small variations in different brain regions and different species to allow us to create columns from different brain regions and species.

      Q: In an interview in BusinessWeek you express great expectations for brain research in the wake of this project. Can you say something about the implications it might have?

      HM: Some potential benefits are outlined here.

      Q: In the same article you talk about being able to do work that has taken days before, in seconds with the new computer. Can you explain some more? To what extent will the computer give the same response as actual living brain tissue?

      HM: It is not our goal to build an intelligent neural network. It is to replicate the NCC in digital form as accurately as possible. We will perform on the virtual NCC the same experiments as on the actual NCC, and keep doing this until the virtual NCC behaves precisely the same, in as many ways as we can measure, as the actual NCC. Once this replica is built, we will be able to do experiments that normally take years, are prohibitively expensive, or are too difficult to perform. This will greatly accelerate the pace of research.

      Q: Would you agree that you are actually taking part in building a new area of research, a bit like the field of genome research and molecular biology grew out of the combination of advances in biology and computing?

      HM: Of course, I am biased, but I believe that this is a landmark moment for brain research, and I consider building the cortical column (which marks the leap from reptiles to mammals and all the intelligence that emerges) to be on the same level as mapping the genome. There are many obstacles, but if we succeed, this will indeed usher in a different era of neuroscience research.

      Q: How do you relate your research to the field of artificial intelligence?

      HM: We are not trying to create a specific form of intelligence, but rather trying to understand the emergence of mammalian intelligence. We will of course be examining the computational power of the NCC. In particular, we will explore the ability of the NCC to act as a Liquid Computer (a form of analog computer that handles continuous data streams). This could be used for dynamic vision and scene segmentation, real-time auditory processing, as well as sensory-motor integration for robotics. Another special ability of the neocortex is to predict into the future based on current data (the birth of cognition), and so we will examine the ability of the NCC to make intelligent predictions on complex data. We will also examine other forms of computing that can be used – perhaps hybrid digital-analog computing, but this is quite far in the future.
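      For readers unfamiliar with the term, "Liquid Computer" refers to the liquid state machine idea: a fixed, randomly connected recurrent network (the "liquid") transforms an input stream into a rich internal state, and only a simple readout is trained. The toy below echoes that idea with a rate-based random network and a linear readout trained to predict the input a few steps ahead; it is a conceptual sketch, not the project's actual setup, and all sizes and constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                       # size of the random "liquid"
W = rng.normal(0.0, 1.0, (N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep dynamics stable (fading memory)
W_in = rng.normal(0.0, 1.0, N)

def liquid_states(u):
    """Drive the recurrent network with the input stream u; collect states."""
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)       # nonlinear, history-dependent state
        states.append(x.copy())
    return np.array(states)

# Task: from the current liquid state, predict the input 5 steps ahead.
t = np.arange(1200)
u = np.sin(2 * np.pi * t / 40) + 0.5 * np.sin(2 * np.pi * t / 57)
S = liquid_states(u)
X, y = S[200:-5], u[205:]                     # skip the initial transient
w = np.linalg.lstsq(X, y, rcond=None)[0]      # train only the linear readout
print("mean squared prediction error:", np.mean((X @ w - y) ** 2))
```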

      Q: How much has the scientific community actually uncovered about how individual neurons and brain molecules behave – do you think it is possible to uncover everything?

      HM: The past 50 years have yielded an immense amount of information about the brain, the neurons it contains, the molecules that make up the neurons, and the genes that produce the molecules. There is still a tremendous amount to find out, but we now need a platform where all this information can be integrated and that is the main purpose of building the NCC. “Everything” is a lot, but we have definitely entered a phase of brain research where the brain’s secrets are being revealed at an extremely rapid pace. I call this phase the Synthesis Phase where the fragments of knowledge are being collected and assembled to reconstruct the manner in which the brain works – the Blue Brain Project is just one part of this process.

      Q: Do you believe a computer can ever be an exact simulation of the human brain?

      HM: This is neither likely nor necessary. It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today. Mammals can make very good copies of each other; we do not need to make computer copies of mammals. That is not our goal. We want to try to understand how the biological system functions and malfunctions so that this knowledge can benefit mankind.
      Q: And thinking into the future... how much computing power do you think it might take to do a complete simulation of the brain? Is that even possible?
      HM: We have estimated that we may approach real-time simulations of a NCC with 10'000 morphologically complex neurons interconnected by 10^8 synapses on an 8-12'000-processor Blue Gene/L machine. To simulate a human brain, with its millions of NCCs, will probably require more than proportionally more processing power. That should give an idea of how much computing power will need to increase before we can simulate the human brain at the cellular level in real-time. Simulating the human brain at the molecular level is unlikely with current computing systems.
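      A back-of-envelope restatement of those figures, taking naive linear scaling as a floor (the answer above stresses that the true cost grows more than proportionally, and the NCC count used here is only an assumption at the "many millions" order of magnitude quoted earlier):

```python
# Rough arithmetic from the figures quoted in the answer above.
neurons_per_ncc = 10_000
synapses_per_ncc = 10**8
procs_per_ncc = 10_000          # midpoint of the quoted 8-12'000 range
nccs_human = 2_000_000          # assumption: "many millions" of NCCs

print("synapses per neuron:", synapses_per_ncc // neurons_per_ncc)  # 10'000
linear_floor = procs_per_ncc * nccs_human
print(f"processor floor, linear scaling only: {linear_floor:.1e}")  # ~2e10
```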

      Q: Is the brain like a computer?

      HM: In some ways yes, but in most ways it is not at all like a computer. The brain performs many analog operations which cannot be performed by computers, and in many cases it achieves hybrid digital-analog computing. The most important feature of the brain that makes it different from computers is that it is always changing. If the resistors and capacitors in a computer started changing, it would immediately malfunction, whereas in the brain the equivalent properties change constantly, on time scales of milliseconds to years. The brain is more like a dynamically morphing computer. Unlike the heart or lungs, the brain is always changing. We are still far from understanding the rules that govern such genetically and environmentally driven self-organization of the brain as we experience the world.

      Q: I sense from the interviews a great excitement about this project. Can you tell me something about your feelings, why you are so excited? What do you think you and your colleagues can accomplish?

      HM: The neocortical microcircuit is evolutionarily the most advanced circuit in the brain and marks the emergence of human intelligence. Understanding this microcircuit is the holy grail for neuroscientists. The quest to unravel the NCC started over 100 years ago with the work of the Spanish anatomist Ramón y Cajal. We are now at a crucial stage in history where enough data and enough computing power have come together for us to put a century's worth of knowledge together and build the neocortical column. I equate this to man's quest to land on the moon or to map the genome.

      Q: Can I use the pictures from http://domino.research.ibm.com/comm/pr.nsf/pages/rsc.bluegene_cognitive.html in my article? Do you have any pictures of yourself and other people behind this project?

      HM: We have more pictures here. Please feel free to use any of them. The copyright stays with us so that others can also use them. See also the meetings, since they have pictures of some of the people behind the project.
      Q. Will Blue Brain be fully deterministic?
      Stochasticity of communication will be included in the model, as well as general background noise from a simulated neural environment, so the responses will not be perfectly deterministic. In the long term the software will allow for self-organization of its own structure in order to meet the demands of a goal.
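      One concrete form such stochasticity can take is probabilistic vesicle release: each synaptic contact releases independently with some probability, so identical presynaptic spikes produce variable postsynaptic responses. The sketch below is a toy illustration of that idea; the contact count, release probability, and quantal size are invented for the example, not Blue Brain parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_epsp(n_contacts=5, p_release=0.4, q=0.2, trials=3):
    """Toy probabilistic synapse: each contact releases independently.

    n_contacts, p_release, and q (mV of response per released vesicle)
    are illustrative placeholders only.
    """
    releases = rng.random((trials, n_contacts)) < p_release  # Bernoulli trials
    return releases.sum(axis=1) * q   # EPSP amplitude per trial, mV

# Identical presynaptic spikes, variable postsynaptic responses:
print(stochastic_epsp())   # e.g. [0.4 0.8 0.2]
```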
      Q. How will you be able to replicate the complexity of neurons and neurotransmitter actions?
      We have built 3D computer models of all the main types of neurons and can simulate their individual behaviors in great detail and very accurately. At this stage we can also capture the complexity of the fast neurotransmitters very accurately, with phenomenological models that we have built. A more difficult issue is the slow neurotransmitters and the neuromodulators, as well as hormonal effects. These will take a while longer to model, but there is no obstacle in principle.
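      A common shape for such phenomenological fits of fast transmitters (AMPA-like synapses, for instance) is a double-exponential conductance waveform: one time constant for the rise, one for the decay. The answer does not name the exact model used, so the function below, with its illustrative time constants, is only a representative example of the genre.

```python
import numpy as np

def dual_exp_g(t, g_max=1.0, tau_rise=0.2, tau_decay=2.0):
    """Double-exponential conductance for a fast (AMPA-like) synapse.

    Phenomenological: two time constants summarize the receptor kinetics.
    Values (ms for time, arbitrary units for g_max) are illustrative only.
    """
    # Time of the waveform's peak, used to normalize the amplitude to g_max.
    t_peak = (tau_rise * tau_decay / (tau_decay - tau_rise)
              * np.log(tau_decay / tau_rise))
    norm = np.exp(-t_peak / tau_decay) - np.exp(-t_peak / tau_rise)
    return g_max * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise)) / norm

t = np.linspace(0, 10, 6)   # ms after the presynaptic spike
print(dual_exp_g(t).round(3))
```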
      Q. The BG is one of the fastest supercomputers, but is it enough?
      The BG that we are getting is only just enough to launch this project. That is, it is enough to simulate about 50'000 fully complex neurons close to real-time. Much more power will be needed to go beyond this. We can also simulate about 100 million simple neurons with the current power. In short, it is the computing power and not the neurophysiological data that is the limiting factor.
      Q. You are going to use 8'000 processors to simulate 10'000 neurons. Is there a one-neuron-per-processor equation involved?
      There is no software in the world currently that can run such simulations properly. We have just finished the first prototype software, and the first version will place about one neuron per processor - some processors will hold more because those neurons are less demanding. We can in principle simulate about 50'000 neurons, so we can place many neurons on a processor. The first version of BG cannot hold more than a few neurons on each processor. Later versions will probably hold hundreds.
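      The placement problem described here - neurons of unequal computational cost distributed so that no processor becomes a bottleneck - is a classic load-balancing problem. The sketch below uses a simple greedy longest-processing-time heuristic as a hypothetical illustration; it is not the actual placement scheme of NCS or NEURON.

```python
import heapq

def balance(neuron_costs, n_procs):
    """Greedy load balancing: give the next-costliest neuron to the
    currently least-loaded processor (the LPT heuristic).

    A hypothetical illustration of neuron-to-processor mapping.
    """
    heap = [(0.0, p, []) for p in range(n_procs)]   # (load, proc id, neurons)
    heapq.heapify(heap)
    for nid, cost in sorted(enumerate(neuron_costs), key=lambda c: -c[1]):
        load, p, members = heapq.heappop(heap)      # least-loaded processor
        members.append(nid)
        heapq.heappush(heap, (load + cost, p, members))
    return heap

# 12 neurons of unequal cost onto 4 processors: cheap neurons share a processor.
costs = [9, 7, 6, 5, 1, 1, 1, 1, 1, 1, 1, 1]
for load, p, members in sorted(balance(costs, 4), key=lambda x: x[1]):
    print(f"proc {p}: load={load} neurons={members}")
```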
      Q. Blue Brain as a whole could replace in vitro and in vivo experiments with in silico experiments. Do you expect the same level of confidence as in in vitro experiments?
      The model will not replace experiments, but it will allow experiments to be much more focused and better optimized. In a sense, the model can be used to design the ideal experiment and then perform it in silico before moving to animal experiments. So it will definitely shorten the slow road of trial-and-error exploration in animal experiments.
      Q. How will replicating the columns help to understand the brain?
      Replicating the column was a key and "secret" evolutionary step, and if we can capture how this is done we will be able to explain how the brain changed from one species to another over evolution. Mimicking the expansion of the neocortex will also allow us to examine how the computing power changes as the brain expands. It will also allow us to begin understanding EEG recordings in the clinic as well as fMRI studies. That requires connecting all parts of the brain together, and this by itself will reveal many important principles of how the brain as a whole is connected - at the cellular level.
      Q. Will consciousness emerge?
      We really do not know. If consciousness arises because of some critical mass of interactions, then it is possible I suppose. But we really do not understand what consciousness actually is, so it is difficult to say.
      Please send any questions or comments to Henry Markram, BMI or Charles Peck, IBM
