
How the Human Brain Organizes the Universe We See

  • derhexer@aol.com
    Message 1 of 1, Dec 20, 2012
      URL to an interesting post from The Daily Galaxy
      http://tinyurl.com/bowqvx7

      Chris

      "
      Our eyes may be our window to the world, but how do we make sense of the
      thousands of images that flood our retinas each day? Scientists at the
University of California, Berkeley, have found that the brain is wired to
organize all the
      categories of objects and actions that we see. They have created the first
      interactive map of how the brain organizes these groupings. The result — achieved
      through computational models of _brain imaging_
      (http://en.wikipedia.org/wiki/Neuroimaging) data collected while the subjects watched hours of movie
      clips — is what researchers call “a continuous _semantic_
      (http://en.wikipedia.org/wiki/Semantics) space.”
      Some relationships between categories make sense (humans and animals share
      the same “semantic neighborhood”) while others (hallways and buckets) are
      less obvious. The researchers found that different people share a similar
      semantic layout.
      “Our methods open a door that will quickly lead to a more complete and
      detailed understanding of how the brain is organized. Already, our online brain
      viewer appears to provide the most detailed look ever at the visual
      function and organization of a single human brain,” said Alexander Huth, a
      doctoral student in neuroscience at UC Berkeley and lead author of the study
      published in the journal Neuron.
      A clearer understanding of how the brain organizes visual input can help
      with the medical diagnosis and treatment of brain disorders. These findings
      may also be used to create _brain-machine interfaces_
      (http://en.wikipedia.org/wiki/Brain–computer_interface) , particularly for facial and other image
      recognition systems. Among other things, they could improve a grocery
      store self-checkout system’s ability to recognize different kinds of
      merchandise.
“Our discovery suggests that brain scans could soon be used to label an
      image that someone is seeing, and may also help teach computers how to better
      recognize images,” said Huth, who has produced a video and interactive
      website to explain the science of what the researchers found.
      It has long been thought that each category of object or action humans see —
      people, animals, vehicles, household appliances and movements — is
      represented in a separate region of the _visual cortex_
      (http://en.wikipedia.org/wiki/Visual_cortex) . In this latest study, UC Berkeley researchers found
      that these categories are actually represented in highly organized,
      overlapping maps that cover as much as 20 percent of the brain, including the
      somatosensory and frontal cortices.
[Figure caption: Maps show how different categories of living and non-living
objects that we see are related to one another in the brain’s “semantic
space.”]
To conduct
      the experiment, the brain activity of five researchers was recorded via
      functional Magnetic Resonance Imaging (_fMRI_
      (http://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging) ) as they each watched two hours of
      movie clips. The brain scans simultaneously measured blood flow in thousands
      of locations across the brain.
      Researchers then used regularized linear regression analysis, which finds
      correlations in data, to build a model showing how each of the roughly
      30,000 locations in the cortex responded to each of the 1,700 categories of
      objects and actions seen in the movie clips. Next, they used _principal
      components analysis_ (http://en.wikipedia.org/wiki/Principal_component_analysis) ,
      a statistical method that can summarize large data sets, to find the “
      semantic space” that was common to all the study subjects.
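
As a rough illustration of that pipeline, here is a minimal Python sketch
run on synthetic data. The array sizes, the ridge penalty, and every
variable name are illustrative assumptions, not the study’s actual code or
parameters:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Real scale per the article: ~30,000 cortical locations and ~1,700
# categories; smaller synthetic sizes keep this demo fast.
n_timepoints, n_voxels, n_categories = 3600, 2000, 300

# X: 0/1 indicators of which categories are on screen at each time point.
X = rng.integers(0, 2, size=(n_timepoints, n_categories)).astype(float)
# Y: blood-flow (BOLD) signal measured at each brain location over time.
Y = rng.standard_normal((n_timepoints, n_voxels))

# Regularized linear regression: one weight per (location, category) pair,
# describing how strongly that category drives activity at that location.
weights = Ridge(alpha=10.0).fit(X, Y).coef_   # shape (n_voxels, n_categories)

# Principal components analysis then summarizes the weight matrix into a
# few shared dimensions -- a low-dimensional "semantic space".
pca = PCA(n_components=4)
voxel_coords = pca.fit_transform(weights)     # locations in that space
category_coords = pca.components_.T           # categories in that space
print(pca.explained_variance_ratio_)

On random data the components are meaningless, of course; the point is only
the shape of the computation the paragraph above describes.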
      The results are presented in multicolored, multidimensional maps showing
      the more than 1,700 visual categories and their relationships to one another.
      Categories that activate the same brain areas have similar colors. For
      example, humans are green, animals are yellow, vehicles are pink and violet
and buildings are blue. For more details about the experiment, see the
video and interactive website the researchers produced.
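
The coloring the article describes lends itself to a similar sketch: one way
to give categories that sit near each other in semantic space similar colors
is to map their coordinates on the first few principal components to RGB
channels. This continues the synthetic variables from the sketch above and
is, again, purely illustrative:

import matplotlib.pyplot as plt

# Map each category's position on the first three principal components to
# red, green, and blue, so nearby categories receive similar colors.
coords = category_coords[:, :3]
rgb = (coords - coords.min(axis=0)) / (coords.max(axis=0) - coords.min(axis=0))

plt.figure(figsize=(6, 6))
plt.scatter(category_coords[:, 0], category_coords[:, 1], c=rgb, s=8)
plt.xlabel("semantic dimension 1")
plt.ylabel("semantic dimension 2")
plt.title("Categories colored by semantic-space position (synthetic)")
plt.show()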
      “Using the semantic space as a visualization tool, we immediately saw that
      categories are represented in these incredibly intricate maps that cover
much more of the brain than we expected,” Huth said. Other co-authors of the
      study are UC Berkeley neuroscientists Shinji Nishimoto, An T. Vu and Jack
      Gallant.
The Daily Galaxy via http://newscenter.berkeley.edu/
”

