
Verification is Imperfect and Complicated

  • Jim Bromer
    Message 1 of 1 , Jan 18, 2004
      Suppose someone, secure in the magnificence of his own expertise,
      comes up with the theory that anyone who disagrees with him must be
      stupid. Furthermore, suppose that he assumes that everyone who
      participates in these discussion groups is stupid until proven
      otherwise. Then he could use his "whoever-argues-with-me-is-
      stupid" theory as a means to confirm the stupidity of the people who
      get into discussions with him. When someone does disagree with him,
      then, in his mind, the predictive element of the theory jumps up and
      confirms his assignment of the prejudice to that person. That
      person is arguing with me; therefore, he must be stupid - just as I
      predicted.
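      The self-confirming structure of such a theory can be sketched in a
      few lines of Python (the names here are mine, chosen only for
      illustration): the "observation" that confirms the prediction is the
      very event that produced it, so no disagreement can ever refute the
      theory.

      ```python
      def is_stupid(person_disagrees: bool) -> bool:
          """The 'expert's' theory: whoever argues with me is stupid."""
          return person_disagrees


      def theory_confirmed(person_disagrees: bool) -> bool:
          """'Test' the theory by comparing prediction to observation."""
          predicted_stupid = is_stupid(person_disagrees)
          observed_disagreement = person_disagrees
          # The confirming observation is the same event that generated
          # the prediction, so the test succeeds no matter who is
          # actually stupid.
          return predicted_stupid == observed_disagreement


      # Every possible outcome "confirms" the theory; nothing refutes it.
      assert theory_confirmed(True) and theory_confirmed(False)
      ```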

      Obviously, you can find logical errors in this argument, but even a
      valid logical predictive theory can be misapplied. Because a true
      implication can lead to a false conclusion when its premise is
      false, a predicted event can appear to verify a false conclusion
      even when it is used within a valid inference. You could say that
      if the premises are true, then the predictive element of a valid
      theory might constitute a valid test of the theory, but that
      argument is premised on the assumption that you could absolutely
      detect the truth of all premises beforehand, and that the predicted
      event actually had some appropriate relation to the theory. If you
      could arrange knowledge so that only valid and appropriate arguments
      were used, then the predictive powers of your theory might be
      stronger, but the belief that learning can take place just by using
      a prediction to confirm or test a theory does not seem very
      reasonable. Without an adequate means to verify a theory, the
      attempt to restrict theories so that they use only valid arguments
      is not a reasonable approach to the problem of implementing an
      artificial intelligence capable of true learning.
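      The point about false premises can be made concrete with a small
      sketch (the rain-and-sprinkler example is mine, not from the
      original argument): the implication "if it rained, the ground is
      wet" is perfectly valid, yet observing the predicted wet ground
      appears to confirm a premise that is in fact false.

      ```python
      def predict_wet_ground(rained: bool) -> bool:
          """A valid implication: if it rained, the ground is wet."""
          return rained


      believed_it_rained = True    # the premise, believed true ...
      actually_rained = False      # ... but in fact false
      sprinkler_ran = True         # the real cause of the wet ground

      prediction = predict_wet_ground(believed_it_rained)
      ground_is_wet = actually_rained or sprinkler_ran

      # The predicted event occurs, so it *appears* to verify the
      # premise, even though the premise (that it rained) is false.
      assert prediction and ground_is_wet and not actually_rained
      ```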

      If you don't want to use the predictive test as an absolute
      validation, but only as a measure of the utility of an action, you
      cannot call the action understanding. For example, the belief
      that "I am an expert and anyone who argues with me is a fool" can be
      used as a utility that allows the prejudiced person a means of
      saving face. As long as he doesn't need any kind of confirmation
      from the people he argues with, as long as he does not explore
      thoughts that lead to alternatives he hasn't already accepted,
      and as long as he ignores or is careless about the contradictions
      that arise in his own thinking, he might be able to derive a great
      deal of satisfaction from the use of his theory. But even though
      the theory might be useful to him, it still would not constitute
      an "understanding" of the situation as he imagines it to be. His
      theory is not actually detecting stupidity.

      I realize that a variety of approaches to a problem like this can be
      considered. But there is no way you can find an absolute
      confirmation for most thoughts or theories. The only way you learn
      is by exploring and interacting with the subject matter from
      different perspectives using various "tools" appropriate to that
      study. I do not believe that prediction can be used as a
      consistently reliable instrument to validate theories or to detect
      understanding. Predictive techniques are useful in situations
      where measurements and their implications are extremely reliable,
      and where the relationships between the predicted events and the
      theories they are thought to confirm are reliable as well. But
      understanding is an active dynamic that must include the active
      exploration of new ideas and situations. Sure, there are moments
      when we can rely on some kind of static knowledge to judge a
      situation. But in general, knowledge, understanding, and learning
      are so interdependent that they can neither exist nor be adequately
      comprehended under simplistic methods of separation. We can
      certainly study or talk about understanding with techniques of
      simplification (we must use techniques of simplification), but that
      doesn't mean we can casually ignore the inexorable interdependence
      that understanding shares with learning.

      Prediction may be the product of most theorization. But because
      false conclusions can be derived logically from false presumptions,
      there are no simple validating principles that can be used to
      bootstrap knowledge. The same problem occurs with the attempt to
      use probability, logic, fuzzy logic, other decision processes,
      semantic primitives, theoretical primitives, isolated conjectural
      reasoning, and other simplistic instrumental methods. We must
      instead rely on the weight of the evidence, and that evidence can
      only be acquired through active exploration of the theory, of its
      complex relations with other theories, and through the use of
      relevant empirical tests that are created with reasoning.

      I believe that anytime we use reason, or for that matter, anytime we
      react in any way, we are learning. The only way we can test the
      reliability of a theory, or of a complex reaction that acts in ways
      similar to reasoning, is through the examination of the network of
      ideas that can be found to be relevant to the theory and through
      the discovery of its relationships to other theories that are
      considered more reliable. Some of these related theories must
      provide some means of obtaining empirical evidence for the theory.

      The axiomatic approach to constructing a theory of intelligence has
      not been adequate to explain intelligence. Alternatives that
      offered obscure methods of combining information, like
      connectionist theory, have not proven adequate to produce
      intelligence either. Connectionist methods do not use symbolic
      information effectively, and the connectionists were not able to use
      their theories to explain the conceptual instruments of
      intelligence. Somewhat like the behaviorists of psychology, many
      connectionists tended to treat the concept of "ideas" with disdain
      and dismissal. Genetic algorithms likewise do not use symbolic
      information effectively.

      Strangely, not one of the major paradigms of artificial intelligence
      has been able to adequately explain explanation. We feel
      comfortable with simplification, correlation, expectation,
      definition, logic, associative networks, conjecture, numerical
      methods, and weak validation procedures. But we also need to use
      the full spectrum of reasoning, explanation, behavior, and
      communication in our theories of intelligence. We need to
      figure out how a computer program can make models of ideas. But
      most of all, I believe we need to explore the nature of the varied
      interrelationships between ideas and to explain explanation.

      Jim Bromer