## Causalities and Epiphenomena, Laws and Noticings Part III

Copyright 2008 David Dodds. Posted Aug 10, 2008.


I was at back-to-back week-long computer conferences the previous
couple of weeks, where additional material for this Yahoo group was found.

In the previous two episodes we saw discussion of BACON. In the era
when it was hyped, the authors of the BACON series claimed that it
(re)discovered a number of scientific laws, such as Boyle's Gas Law,
Ohm's Law (electrical resistance), and so on. We also saw that what
the program's authors claimed the program did was a bit of a stretch
compared to what the code actually did.

For example, the program did not literally print out or otherwise
announce "I've (re)discovered Boyle's Gas Law". The program would need
to have in some way known what Boyle's Gas Law was ahead of time in
order to know that it had found it.

Typically computers detect or recognize things when there is some
conditional statement where A is equal to B. A in this case would be
the equation the system had "found" and B would be Boyle's Gas Law. If
somethingA is *equal* to somethingB then a computer (program) can say
it has found or detected somethingB. The nub of this is that B has a
data type, which means it can only be something legitimately
representable in a von Neumann computer and not something like the
beauty of a sunset. It also means that the *equal* part must be a
hardware-performable matching process; notably, "near" and
"similar (to)" are not included. I am not aware of any computer
hardware that "has" a "similar (to)" functionality. Certainly there is
less-than and greater-than, but neither of these is "similar (to)", and
calling 'value +- x' "similar (to)" is doing violence to the meaning
of "similar (to)". With a bit of a stretch, 'value +- x' might be
force-fitted into the representation for 'within', but it doesn't
really correctly / completely capture even the meaning of that
expression ('within').
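To make the distinction concrete, here is a minimal Python sketch (my own illustration, nothing to do with BACON's code) contrasting hardware-style exact equality with a tolerance-based 'within' test. The `within` helper is a hypothetical name:

```python
import math

# Exact equality: only bit-for-bit identical values match.
print(0.1 + 0.2 == 0.3)  # False, due to floating-point rounding

# The closest a program gets to "similar (to)" is a tolerance test,
# i.e. 'value +- x', which is really a 'within' check, not similarity.
def within(a, b, tol=1e-9):
    """True if a lies within tol of b: a crude stand-in for 'near'."""
    return abs(a - b) <= tol

print(within(0.1 + 0.2, 0.3))        # True: equal to within tolerance
print(math.isclose(0.1 + 0.2, 0.3))  # stdlib equivalent of the same idea
```

Note that even this 'within' test is built out of subtraction and less-than, the only comparisons the hardware actually provides.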

Of course B does not have to be only a simple variable; it could be an
entire computable expression, such as a set of logical statements or
equations, like e = m*c**2
(or e = mc^2, or e = (times(m, power(c, 2))), or ...). All of the things
that may go on the right-hand side of the equals sign must be
computable in a von Neumann computer.
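As a hypothetical illustration (the function names here are mine, not BACON's), the same equation can be written in infix or prefix form. Both compute the same values, yet the machine has no native sense in which the two representations are 'the same equation':

```python
# Two surface forms of e = m*c**2; all names are invented for illustration.
def e_infix(m, c):
    return m * c ** 2

def power(base, exp):
    return base ** exp

def times(a, b):
    return a * b

def e_prefix(m, c):
    return times(m, power(c, 2))

# The forms compute identical values...
print(e_infix(2.0, 3.0) == e_prefix(2.0, 3.0))  # True: both give 18.0

# ...but compared as objects they are not "equal": the machine compares
# representations, not meanings.
print(e_infix == e_prefix)  # False
```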

So, since what BACON was doing was determining which of its predefined
equation patterns best (curve-)fit the data the program was given, this
suggests that either a) a representation of the equation itself, or
b) a name or label representing that (equational) representation,
constituted the left side of the equals sign (somethingA, as it were),
and that somethingB on the other (right-hand) side of the equals sign
has to be of a type and content for which the computer hardware can
detect "equal" (equality). If the equals sign means nothing more than
that somethingA evaluates to TRUE, we have not gained much. If the
equals sign means that somethingB is an exact duplicate of the binary
string somethingA, it is not clear that that is a powerful conditional.
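A small sketch of how weak that conditional is (the strings are invented for illustration): two representations of the very same law fail an exact-duplicate test simply because the symbols are ordered differently:

```python
# somethingB: a stored representation of Boyle's law, P*V = k
somethingB = "P*V == k"

# somethingA: the 'found' equation, algebraically identical
# but written with the factors swapped
somethingA = "V*P == k"

# An exact-duplicate comparison, the only 'equal' the hardware offers,
# misses the identity of meaning entirely.
print(somethingA == somethingB)  # False, despite identical meaning
```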

In the case of 'a) a representation of the equation itself', this
suggests that somethingB be a representation of an equation, and that
it represent exactly the same equation as occurs in somethingA. This
amounts to playing a recording of the opera The Barber of Seville to
the computer, which, after it has done an end-to-end waveform
comparison, announces "I recognize that sound (opera)".

Case a) also means that, by some unstated means, the computer must
have 'a representation of the equation itself' stored and associated
with the term somethingB, in order to 'recognize it' when somethingA
is the same equation.

Assuming that there is a set of 'a representation of the equation
itself' for each of the 'laws' that BACON 'discovers', to be used as
the term somethingB; assuming the input data was clean enough (i.e. of
low enough variance from the ideal values) to be used in the curve-
fitting process; and assuming that superfluous variable data sequences
were not included in the input data set; then it is likely that BACON
would indeed identify (ahem, 'discover') the 'laws' (but only as
represented by their somethingB equations).

To my knowledge BACON did not in fact have these somethingB equations
(representing the 'laws') predefined, and did not attempt to match
against them the best-fit equation it 'discovered' for each data set
that was input to it. The best-match curve-fit was the last step of
the processing sequence which BACON performed. It was the hyping
humans who then made the claim that the program, having 'discovered'
the equation which best fits the (carefully groomed) data input to it,
had identified ('discovered') Boyle's Law, Ohm's Law, Burke's Law,
etc. What the program did was find the equation, in a prespecified set
of equations, which produced the best curve-fit of the (carefully
groomed) data input to it. The hyping humans then made the claim that
the program had discovered a law of science describing the numerical
characteristics of the actors (gases, liquids, solids) in that realm
of the world.
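What such a best-fit selection amounts to can be sketched in a few lines of Python. This is my illustration, not BACON's actual code: the candidate equation forms and the (carefully groomed, noise-free) data are assumptions, and note that the program that picks the Boyle-shaped form has no notion that any such thing as Boyle's Law exists:

```python
# Prespecified candidate equation forms (hypothetical, for illustration).
candidates = {
    "P = k/V":    lambda V, k: k / V,       # Boyle-shaped
    "P = k*V":    lambda V, k: k * V,
    "P = k*V**2": lambda V, k: k * V ** 2,
}

# Clean (V, P) data generated from P*V = 8: low variance, and only
# the 'relevant' variables were included.
data = [(1.0, 8.0), (2.0, 4.0), (4.0, 2.0), (8.0, 1.0)]

def fit_error(form):
    """Least-squares estimate of k for this form, then the residual error."""
    num = sum(form(V, 1.0) * P for V, P in data)
    den = sum(form(V, 1.0) ** 2 for V, P in data)
    k = num / den
    return sum((P - form(V, k)) ** 2 for V, P in data)

# Pick the form with the smallest residual: that is the whole 'discovery'.
best = min(candidates, key=lambda name: fit_error(candidates[name]))
print(best)  # "P = k/V" wins, but nothing here 'knows' it is Boyle's Law
```

The output is only the label of whichever prespecified form fit best; attaching the name 'Boyle's Law' to it is something the humans do afterward.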

Having thoroughly beaten that horse, I simply point out that the
(identity or 'name of' the) 'law' that was said to have been
discovered by the program was in fact not discovered or identified at
all. It was the humans who, knowing the equation for each such 'law',
claimed that the BACON program, having 'discovered' the (same)
equation, had thereby discovered the 'law' [its existence, and its
name / identity]. The program never had any awareness of any kind of
a) such an existence, nor b) the name / identity of a).

Perhaps you missed the point of this discussion. What is to be made
clear is that the BACON programs are algorithms which perform
predefined functions; in BACON's case this is mostly curve-fitting (of
a) relevant, and b) not unruly, data). None of the architecture nor
coding of BACON was intended to provide any representation or
modelling of 'awareness'. This is to say that there was no capability
present in the BACON programs to 'know' in any way 'what the program
was doing', so it didn't 'know' that it was (trying to be)
'discovering', and it didn't know about the 'laws' (of Boyle, Ohm,
etc.) by their names nor by their representative equations. It didn't
know that it was curve-fitting, nor that the data was artificially
constrained to contain only relevant variables ((conveniently)
relevant to the 'law' it was currently 'discovering'), nor that no
values were input that strayed too far from the 'correct' or 'ideal'
value. (This latter distinction is called 'noisy data' versus 'clean
data'.) It did not know that it had 'discovered' anything, let alone
something that might / should be labelled a 'law'. One does not give a
named law to just any old curve-fit data; context and methodology are
required to do that correctly, else one ends up with the 'logic'
exercised in the sketch in Monty Python and the Holy Grail where the
good townspeople are discussing the 'logic' of determining whether or
not the woman with the carrot attached to her nose is a witch.

This point is one aspect of a phenomenon which I call cognitive
finessing, which I will be talking about across a number of episodes.
Without the ability to do this finessing, adult humans would not be
able to function singly, or collectively in / as societies. By seeing
the emergence of this capability in the development of children's
cognitive abilities, we can come to appreciate this facility in the
typical adult. No, we will not be discussing child development in
these episodes; you are spared that. (Here is a hint: 'Commander Data'
is often portrayed as 'not getting' humans. He was once shown sitting
intently watching a pot boil. The (supposed) humour in that is about
the facility (or lack of it) of cognitive finessing (versus literal
equalities and micro-deliberation).)