
Re: [existlist] FW: Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition

  • Exist List Moderator
    Message 1 of 2, May 31, 2009
      The article itself, in the text, points to the following: (281) Thus,
      in half of the studies we surveyed, the reported correlation
      coefficients mean almost nothing, because they are systematically
      inflated by the biased analysis.

      Not all studies had this problem, nor, likely, the majority. A
      sample size of 53 (two of the "surveyed" teams refused to provide
      their methodology) is significant, but I am also not surprised
      that 20 to 22 of these studies used poor fMRI sampling methods.

      As the authors argue, sampling anything over a week can show a
      correlation. I can show that as May warmed in New York by 14 degrees,
      the stock market rose by 14 percent. That's the problem with
      correlations, rightly addressed in the article.

      The only fMRI studies I consult and generally trust are done over many
      years with the same pool of subjects. Also, the results *should never*
      be generalized to the entire population. I tell people that what an
      autistic brain reveals cannot and should not be applied to other
      conditions for a variety of reasons, not the least of which is neural

      People need to learn that science is about skepticism. You publish
      your findings *and methods* so they can be replicated by other teams.
      If your results defy replication, they are likely erroneous.
      Multiple observations, and multiple challenges to your findings,
      are necessary before anyone should accept the results as "more
      likely than not, all things being equal, correct for this study
      and this methodology."

      People make the mistake of generalizing. We test against the null
      hypothesis -- we prove only that certain things did not happen. We
      cannot, no matter what, prove beyond all doubt that one thing is the
      cause of a situation or condition.

      Science is very difficult to explain to the general public. How
      can I explain, "I proved that X is not true, so there is that much
      more likelihood that my theory is possibly correct, maybe"? We
      don't claim certainty. We claim that the opposite of what we
      expected did not, in fact, occur. We cannot, in most sciences,
      claim that what we did expect happened because of X. (Rudimentary
      scientific statistical analysis: the null hypothesis is all we can
      analyze.)
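      That null-hypothesis logic can be sketched as a simple permutation
      test. The measurements below are invented; the point is only that
      a small p-value lets us reject the null (that the groups do not
      differ), and says nothing about *why* they differ.

```python
# A minimal sketch of null-hypothesis testing via permutation.
# We ask: how often does shuffled (null) data produce a group
# difference at least as large as the one we observed?
import random

random.seed(0)
group_a = [5.1, 5.4, 5.0, 5.6, 5.2]   # made-up measurements
group_b = [4.1, 4.3, 4.0, 4.4, 4.2]   # made-up measurements

observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

pooled = group_a + group_b
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    # small tolerance so float summation order can't hide a tie
    if diff >= observed - 1e-9:
        extreme += 1

p = extreme / trials
print(f"p = {p:.4f}")  # small p: reject the null, and nothing more
```

      Even a tiny p here licenses only "chance alone rarely produces
      this"; it does not license naming the cause.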

      Try that with philosophy: "I can tell you that what I don't believe is
      definitely not worth believing. What I do believe... I can't really
      tell you that is worth much at all. But, darn it, I know beyond all
      doubt that those other guys are wrong!!"

      I gave up trying to explain scientific statistics to most audiences
      long ago. They want certainty, and that's not science -- unless you
      count being certain of what is not true. Ironic. We can't claim what
      is, only what isn't. No wonder the philosophy of science confounds so
      many of my students.

      - C. S. Wyatt
      I am what I am at this moment, not what I was and certainly not all
      that I shall be.
      http://www.tameri.com - Tameri Guide for Writers
      http://www.tameri.com/csw/exist - The Existential Primer