The problem is especially acute with the construct of "memory", but it
applies to all such constructs: emotion, reason, imagination, will, and so
on. It is worth opening out the discussion to find where things actually go
wrong.
Standard scientific modelling is based on a particular idea of causality -
the mechanical or atomistic. Complex wholes are constructions from simple
parts. This is the belief that justifies reductionism. But where it
eventually leads is to a fundamental division between "sameness" and
"difference". Something must ultimately be either a this or a that as - by
definition it seems - a fundamental stuff, or component, or process, cannot
be both this and that. Sameness and difference don't fit into the same
causal box (or functional module in evopsych terms).
Yet clearly when it comes to the brain and its abilities, you have both
sameness and difference to explain. You have a global cohesiveness and
unity. You also have localisation and seriality. So when we come to try to
fit memory into a causal box, we find it everywhere. Every synapse, every
molecular cascade, let alone every lobe. But it is also located in certain
ways. So memory (or recognition capability and object perception biasing)
can seem more located to temporal lobe, and snapshot memory fixing to
What makes the same~different issue even more intractable is that the
organisation of the brain is dynamic not static. The "memory system" of a
child will show different patterns of apparent localisation. The memory
response of an adult will change as they move from novel to habitual (and
back again). Reconsolidation is just the latest data bearing on a generally
dynamic picture.
So there is a pervasive problem in mind science. The standard reductionist
approach to doing science is good for separating sameness from difference.
But it does not naturally deal in models that have same~different logically
related within the same causal box.
What are the various responses from theorists?
1) Believe that reductionism will work (the strong ontological position).
Argue that the brain really is constructed from fundamental atoms of some
kind (be it processes, computational modules, QM fields, whatever) and that
the answer to the problem is to keep reducing until the stuff that
represents the same is well and truly differentiated from the stuff that
represents the different.
2) Believe that reductionism is effective and its excesses can be worked
around with heuristics (the pragmatic epistemological position). So the
argument is that the best route is still to push for strong same~different
distinctions in both data and theory. The hippocampus, or a little more
generally, the medial temporal lobe, can be dubbed the brain's memory
system. Outsiders may be a bit confused by this simplistic phrenology. But
researchers in the field will know enough about the complexity of
neuroanatomy to avoid making serious conceptual mistakes when working with
such constructs as the MTL. Or with other divisions such as explicit vs
implicit memory streams.
3) Follow route 2 above but try to talk in more appropriate metaphors that
keep the global~local issue at the forefront of people's minds. So talk
about dynamism and plasticity. Or soft, or weak, modularity. Or talk about
the holographic brain - where every bit is both a whole and a location
within a whole.
4) The fourth route is to seek a proper model of the causality of the brain.
To ask what kind of causal model can actually capture same~different in the
same box? We now want to move beyond informal heuristics, beyond mere
metaphor and analogy. We want an actual mathematics to structure our
understanding in a formal way.
There are many people working at this level. I will mention some of my own
favourites - Salthe, Kelso, Friston, Grossberg, Prigogine, Maturana,
Freeman. Links to their work are on my web site. So I will just mention a
few essential points.
A final mathematical model, when it is forthcoming, would seem almost
certainly to be some form of hierarchy theory. But an organic,
bootstrapping, kind of hierarchy (see Salthe especially) with downward
causation (see Pattee) and employing anticipatory processing (see
Grossberg). Not a mechanical hierarchy - a pyramid constructed of atomistic
parts with "emergent" global properties.
A variety of causal topologies will be necessary. So while an organic
hierarchy is the most general model, other mathematical shapes will have
their place. You have the familiar causality of lines - event A is necessary
and sufficient for event B. This is the standard mechanical model of course.
Then you can have loops or causal circles - the line is closed on itself, as
in cybernetics or autopoiesis, so that feedback makes life more complex. You
can then proceed to network causality - where many nodes are connected and
there are the beginnings of a part~whole relationship. You have the
beginnings of same and different in the same causal box. Once you start to
stack up layers of networks, and couple the layers with feedback loops, then
you are moving towards organic hierarchies. Many people can see this story
of course. But putting it on a mathematical footing, showing how one
topology relates formally to the others, is another matter.
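As a purely illustrative sketch (my own toy code, not any of these theorists' actual models), the progression of topologies - line, loop, network, and layered networks with top-down feedback - can be written as a series of tiny update rules. The weight matrices and gains here are arbitrary stand-ins:

```python
# Toy illustration of four causal topologies as update rules.
# All parameters are arbitrary; this is a sketch of the idea only.

import numpy as np

rng = np.random.default_rng(0)

# 1. Line: event A determines event B, one-way, no return influence.
def line_step(a):
    return 0.9 * a

# 2. Loop: the output feeds back on the input (the cybernetic circle).
def loop_step(x, gain=0.5, target=1.0):
    return x + gain * (target - x)   # negative feedback toward a set point

# 3. Network: many nodes, each shaped by all the others, so part (node)
#    and whole (global pattern) begin to co-determine each other.
W = rng.normal(0, 0.3, size=(5, 5))
def network_step(x):
    return np.tanh(W @ x)

# 4. Layers of networks coupled by feedback: a crude gesture at an
#    "organic hierarchy" - the upper layer summarises the lower layer,
#    and that summary re-enters the lower layer's own dynamics
#    (downward causation, loosely speaking).
W_up = rng.normal(0, 0.3, size=(2, 5))
W_down = rng.normal(0, 0.3, size=(5, 2))
def hierarchy_step(lower, upper):
    upper_new = np.tanh(W_up @ lower)                     # whole summarises parts
    lower_new = np.tanh(W @ lower + W_down @ upper_new)   # whole constrains parts
    return lower_new, upper_new

x = 0.0
for _ in range(10):
    x = loop_step(x)          # the loop settles near its target

lower, upper = rng.normal(size=5), np.zeros(2)
for _ in range(20):
    lower, upper = hierarchy_step(lower, upper)
print(x, lower.round(3), upper.round(3))
```

The point of the sketch is only that each step adds structure the previous one lacked: the line has no return path, the loop closes it, the network multiplies it, and the coupled layers let the global level act back on the local one.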
A final point worth mentioning is that a forgotten aspect of ontological
modelling will have to be brought back into the limelight. The only
practical way to get sameness and difference back in a shared causal box is
to have some mathematical notion of vagueness and crispness as an
ontological dichotomy. We are familiar with other ancient philosophical
dichotomies, such as generals and particulars, form and substance, or
determined and random. In the modern world, we even have good mathematical
models. But Anaximander, Aristotle and other Greek philosophers found
the vague~crisp (or potential~actual) dichotomy to be about the most basic.
And in the modern world, this dichotomy is much neglected and barely
modelled. In maths, you have fuzzy sets and other devices that don't really
"get" what is at stake. People who model self-organisation and dissipative
structure (such as Prigogine and Kelso) are of course concerned with the
vague, but in a more task-specific than mathematically general fashion
(though that is changing it seems).
Anyway, vagueness is the claim that both sameness and difference exist as
potentials within the same causal "stuff". To offer an analogy, grey has
both blackness and whiteness within it. An actual state of crisp order then
has to be manufactured by an act of separation. This is a holistic kind of
causal picture as you can make a crisp black "splat" in the middle of the
field by pushing the whiteness to the edges. And it is a dynamic picture
because if you relax the effort, the process, of separation, then blackness
and whiteness (the crisp figure and the crisp ground) will flow back into
each other to make a grey vague state of potential again.
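The grey analogy can be put in toy numeric form (my own illustration of the idea, not a serious physical model): start with a uniformly grey field, apply an "act of separation" that pushes the centre toward black and the edges toward white, then switch the effort off and watch everything relax back toward grey. The field size, effort, and rate are arbitrary:

```python
# Toy numeric version of the grey -> black-and-white analogy.
# An illustration of the idea only; all parameters are arbitrary.

import numpy as np

N = 21
field = np.full(N, 0.5)                     # a vague, uniformly grey field
centre = np.abs(np.arange(N) - N // 2) < 4  # where the black "splat" forms

def separate(f, effort=0.2):
    """An act of separation: centre toward black (0), edges toward white (1)."""
    f = f.copy()
    f[centre] -= effort * f[centre]
    f[~centre] += effort * (1 - f[~centre])
    return f

def relax(f, rate=0.3):
    """Effort switched off: everything diffuses back toward a uniform grey."""
    return f + rate * (f.mean() - f)

for _ in range(10):
    field = separate(field)
crisp = field.copy()
print(crisp.round(2))    # near-0 in the centre, near-1 at the edges

for _ in range(30):
    field = relax(field)
print(field.round(2))    # vague again: a uniform grey everywhere
```

Crispness here is manufactured and maintained only by the ongoing separating process; stop the process and the figure and ground flow back into an undifferentiated state of potential.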
To sum up, the reductionist approach is, of course, based on the most
reduced possible mathematical conception of causality. That is, atomism or
mechanism. In topological terms, causality is seen as a line connecting two
points - event A and event B. Seeing the world with this linear vision of
causality in mind has worked pretty well. But in the end, it cannot do the
whole job (see Rosen, Gödel, etc., for the "in principle" arguments about
this). In physics, the failure of simple linear causality is even
catastrophic - the implosions of logic once you get down to QM level or up
to cosmological questions about Big Bangs out of nothingness. In mind
science, it eventually leads to the equivalent catastrophe of dualism and
"the Hard Problem".
But even way before we reach the "whole shebang" level of explanation, even
when we just humbly want to map the functional neuroanatomy of the brain, we
have to recognise that there is a basic issue of how to model causality.
Reductionism, with its linear logic, is just one particular topology. Other
topologies exist and, indeed, are likely to be more general (and so more
fundamental). Hierarchy theory looks to sit somewhere in the middle of things -
though that waits to be proved.
And as you get into a discussion of the variety of possible causal
topologies, then the missing dimension of vagueness~crispness must enter the
picture. For most people, the only familiar dimensions to causal modelling
are those of random~determined, substance~form, and possibly
general~particular (or laws~initial conditions in physics-speak). But
vagueness is necessary if you are going to get both sameness and difference
in the same causal box - black/white and grey all together but capable of
being separated.
Memory researchers have been trying to construct atomistic theories - the
fiction of the static memory trace, the fiction of isolate memory organs
like the hippocampus, the fiction of dedicated memory processing paths like
the MTL. This sort of grows into a hierarchical tale, but a mechanical one
that does not really hang together in a way that is organically realistic.
So time to accept the answer lies not in a need for yet further reductionist
detail about the neuroanatomy of brains but in a shake-up of basic causal
concepts of how anything actually works.
from John McCrone
check out my consciousness web site
neuroscience, human evolution, Libet's half second, Vygotsky and more...