
Re: Rationalizing is something we *all* do

  • Bruno Marchal
    Message 1 of 20, Nov 1, 2012
      On 31 Oct 2012, at 16:07, Rami Rustom wrote:

      > On Oct 28, 2012 2:43 PM, "Bruno Marchal" <marchal@...> wrote:
      > >
      > >
      > > On 27 Oct 2012, at 18:38, Rami Rustom wrote:
      > >
      > > > On Sat, Oct 27, 2012 at 9:46 AM, Bruno Marchal <marchal@...>
      > > > wrote:
      > > > >
      > > > > On 26 Oct 2012, at 20:04, Rami Rustom wrote:
      > > > >
      > > > >> On Fri, Oct 26, 2012 at 10:59 AM, Bruno Marchal <marchal@...
      > >
      > > > >> wrote:
      > > > >>>
      > > > >>> On 25 Oct 2012, at 20:32, Rami Rustom wrote:
      > > > >>>
      > > > >>> Like: most heroin user have begun with cannabis, so cannabis
      > leads to
      > > > >>> heroin. The gateway theory.
      > > > >>>
      > > > >>> You can explain that the number of cannabis user among
      > heroin user is
      > > > >>> irrelevant for judging a relation of causality rather
      > easily: all
      > > > >>> heroin user have begun with water, yet nobody would say that
      > *this*
      > > > >>> means that water leads to heroin. The correct statistics
      > consists in
      > > > >>> looking and comparing the number of heroin users in the (good
      > > > >>> sampling
      > > > >>> of) population of cannabis users, and compare it with a (good
      > > > >>> sampling
      > > > >>> of) population not using cannabis. This would also prove
      > nothing, but
      > > > >>> would give an evidence. Of course when done, there are zero
      > evidence
      > > > >>> that cannabis leads more to heroin than water, and even less
      > than
      > > > >>> alcohol.
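      The comparison described in the quoted passage, heroin-use rates in a
      sampled cannabis-using population versus a sampled non-using
      population, can be sketched with invented counts (all numbers below
      are hypothetical, for illustration only):

```python
# Hypothetical, invented counts -- NOT real epidemiological data.
# The point is the shape of the comparison: rate among an exposed
# sample versus rate among a comparable unexposed sample.

def later_use_rate(later_users, sample_size):
    """Fraction of a sample that later used the second drug."""
    return later_users / sample_size

cannabis_sample = {"size": 1000, "later_heroin_users": 4}
control_sample = {"size": 1000, "later_heroin_users": 3}

risk_exposed = later_use_rate(cannabis_sample["later_heroin_users"],
                              cannabis_sample["size"])
risk_unexposed = later_use_rate(control_sample["later_heroin_users"],
                                control_sample["size"])

# A relative risk near 1 would mean the samples give no evidence that
# cannabis "leads to" heroin any more than abstaining from it does.
relative_risk = risk_exposed / risk_unexposed
print(relative_risk)
```

      Counting cannabis users among heroin users, by contrast, fixes the
      wrong denominator, which is exactly the error of the water example.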
      > > > >>
      > > > >> There are many very successful people that use cannabis and not
      > > > >> heroin.
      > > > >
      > > > > This does not say much, but I agree.
      > > >
      > > > It refutes the theory that cannibas is a gateway drug, and the
      > theory
      > > > that cannibas ruins a person's life.
      > > >
      > > OK. ("any" is a bit fuzzy, but I can accept this).
      >
      > What do you mean? Who said "any"?
      >

      Oops, sorry. I meant "many". It is a typo.



      >
      > >
      > >
      > > >
      > > > >
      > > > >>
      > > > >>
      > > > >>> That error is not only done often, if not systematically at
      > the
      > > > >>> political level, in the domain of Health, it is done in
      > other parts
      > > > >>> of
      > > > >>> politics, and frequently in racist discourses, defamation,
      > and fake
      > > > >>> sciences.
      > > > >>>
      > > > >>> That error has even a Darwinian explanation, as simple neural
      > > > >>> associative nets do that error, and yet can solve problem
      > and needs
      > > > >>> very few K. Some 'mistaken theory' are efficacious in the
      > short term
      > > > >>> (like robbing a bank, to solve the money problem).
      > > > >>>
      > > > >>> Stupidity is not in the mistake, it is in the doing of the
      > same
      > > > >>> mistake again and again and again and again (usually for
      > problem of
      > > > >>> image of oneself by some people indeed, or just for
      > perpetuating a
      > > > >>> fear selling technic to steal your money, something sad,
      > bad, but
      > > > >>> 'natural' like robbing a bank).
      > > > >>
      > > > >> Sometimes people make the same mistakes repeated because of
      > anti-
      > > > >> rational memes.
      > > > >
      > > > > Yes. The problem is that something anti-rational for the long
      > term can
      > > > > be completely rational in the short term. Typically "stealing
      > money".
      > > > > It works well in the short term, but only because it is bad
      > and it is
      > > > > done by a minority. If stealing money was encouraged and
      > taught in
      > > > > high school, the society would quickly degenerate in many
      > mafia type
      > > > > of wars. You need enough honest people doing "real money" to
      > be stolen
      > > > > by others, which acts rationally for their limited personal
      > purpose.
      > > > >
      > > > > There are often conflicts between the short term, the middle
      > term and
      > > > > the long term, which are conflicts between different type of
      > rational
      > > > > reason. The frontier between rational and non rational is like
      > the
      > > > > Mandelbrot set: very complex and intricate. That is why, in
      > actual
      > > > > context, usually with only very partial information, we have
      > to trust
      > > > > our guts. I think.
      > > >
      > > > That implies that our gut feelings are not discoverable. But
      > that is
      > > > wrong.
      > > >
      > > > A gut feeling is the feeling one gets when there is a conflict of
      > > > ideas in his mind, where one of those ideas is conscious and
      > explicit,
      > > > and the other idea is subconscious and inexplicit. But there is
      > > > nothing permanent about the status of that subconscious and
      > inexplicit
      > > > idea. One can put forth effort to discover it, thus making it
      > > > conscious and explicit. This allows one to criticize it.
      > > >
      > > Hmm... What makes you sure we can know all the roots of our guts
      > > feeling?
      >
      > A subconscious idea *is* instantiated in the brain. It physically
      > exists there.
      >
      Hmm.... OK. Can be quite implicit though, and not recognizable as such.




      > Why would it be off limits from being discovered?
      >
      There is no algorithm to decide, from its code, what a program can do
      (Rice's theorem). The set of programs performing a particular task is
      not recursive. With neural nets it is worse, as the information can
      be distributed in tiny differences of excitation level across
      billions of neurons.
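      One easy-direction illustration of this (a standard observation, not
      Bruno's own example): two syntactically unrelated programs can have
      identical behavior, so membership in "the set of programs doing a
      particular task" cannot be read off the code; Rice's theorem says no
      algorithm decides such non-trivial semantic properties in general.

```python
# Two syntactically unrelated programs with identical behavior.
# Deciding, from code alone, whether an arbitrary program belongs to
# "the sorters" is a non-trivial semantic property, hence undecidable
# in general (Rice's theorem); here we can only sample their behavior.

def sort_a(xs):
    return sorted(xs)

def sort_b(xs):
    # Insertion sort, written quite differently from sort_a.
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

for sample in ([], [3, 1, 2], [5, 5, 1], list(range(10, 0, -1))):
    assert sort_a(sample) == sort_b(sample)
print("behaviorally identical on every sample tested")
```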




      >
      > > This can be contradicted in the mechanist theory + classical
      > > theory of knowledge: it shows that the ideally arithmetically
      > correct
      > > machine can never know "who she is", and in that theory the guts
      > > feeling originates from the "who" we are.
      >
      > I don't know what the "who somebody is" means nor how that is relevant
      > to this discussion.
      >
      If we are machines, we cannot know which machine we are. We can only
      bet on some level of description, in case we accept a digital brain
      prosthesis, for example. Even if the guess is correct, we still cannot
      recognize the genuine program which constitutes our identity. The
      relevance of this for gut feelings is that we might be unable to
      decide whether some idea is a prejudice of our parents, or of the
      mammals, or of all universal machines. In the latter case, we can't
      change it without losing consistency or soundness.




      >
      > > Without going that far in
      > > the theory, I doubt we can be conscious of all the subconscious
      > > processing. A machine can refer to itself integrally, but not to its
      > > integral behavior.
      >
      > I don't understand your idea that some subconscious ideas are off
      > limits.
      >
      Intuitively it seems obvious, if only because our brains plausibly
      have a very long, complex history, and nature does not write explicit
      programs with readable comments. Then, although a machine can
      represent itself entirely, it cannot represent its behavior entirely,
      for reasons akin to the reason nobody can see his or her own back
      directly. There are blind spots in self-reference, for logical
      reasons.
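      The self-representation half of that claim is witnessed by quines,
      programs that reproduce their own text; a minimal Python sketch (a
      standard construction, offered here only as an illustration):

```python
# A quine: the template below, applied to itself, reproduces the
# program's own three-line source. Self-representation is mechanical;
# predicting a program's full *behavior* from such a description is not.
s = 's = %r\nquine_output = s %% s\nprint(quine_output)'
quine_output = s % s
print(quine_output)
```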





      >
      > Consider this. Decades from now we create the first AGI. It programs
      > its own code. No?
      >
      This is ambiguous. Programs can easily modify, even completely, their
      own code, but they cannot create one ex nihilo. There is always
      another universal program or reality needed, even if it is the
      arithmetical reality, or physical reality, which are highly
      undecidable sets (provably in the first case, plausibly in the
      second).
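      The self-modification point can be sketched in Python (toy code with
      invented names; the Python interpreter plays the role of the outer
      universal program that is always needed):

```python
# A toy program that rewrites its own source and re-runs it. Note that
# both versions only exist as behavior because the Python interpreter
# -- another, outer universal program -- executes them.

source = "def step(n):\n    return n + 1\n"

namespace = {}
exec(source, namespace)            # first version: successor function
first = namespace["step"](3)       # -> 4

# "Self-modification": produce new source text and execute it.
new_source = source.replace("n + 1", "n * 2")
exec(new_source, namespace)        # second version: doubling function
second = namespace["step"](3)      # -> 6

print(first, second)
```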
      It is like our wanting. We have little control over what we want, nor
      do we have control over the way events can hurt us. In fact we
      control very few things, and yet we do have a non-negligible partial
      control, but it is only a window, so to speak.

      Bruno


      http://iridia.ulb.ac.be/~marchal/





      [Non-text portions of this message have been removed]
    • a b
      Message 2 of 20, Nov 2, 2012
        On Wed, Oct 31, 2012 at 3:16 PM, Rami Rustom <rombomb@...> wrote:
        >
        >
        >
        > On Oct 28, 2012 2:43 PM, "a b" <asbbih@...> wrote:
        > >
        > > On Sun, Oct 28, 2012 at 12:15 AM, Rami Rustom <rombomb@...> wrote:
        > > >
        > > >
        > > >
        > > > On Sat, Oct 27, 2012 at 2:50 PM, a b <asbbih@...> wrote:
        > > > > On Sat, Oct 27, 2012 at 5:38 PM, Rami Rustom <rombomb@...>
        > > > > wrote:
        > > > >>
        > > > >>
        > > > >>
        > > > >> On Sat, Oct 27, 2012 at 9:46 AM, Bruno Marchal <marchal@...>
        > > > >> wrote:
        > > > >> >
        > > > >> > On 26 Oct 2012, at 20:04, Rami Rustom wrote:
        > > > >> >
        > > > >> >> On Fri, Oct 26, 2012 at 10:59 AM, Bruno Marchal
        > > > >> >> <marchal@...>
        > > > >> >> wrote:
        > > > >> >>>
        > > > >> >>> On 25 Oct 2012, at 20:32, Rami Rustom wrote:
        > > > >> >>>
        > > > >> >>> Like: most heroin user have begun with cannabis, so cannabis
        > > > >> >>> leads
        > > > >> >>> to
        > > > >> >>> heroin. The gateway theory.
        > > > >> >>>
        > > > >> >>> You can explain that the number of cannabis user among heroin
        > > > >> >>> user
        > > > >> >>> is
        > > > >> >>> irrelevant for judging a relation of causality rather easily:
        > > > >> >>> all
        > > > >> >>> heroin user have begun with water, yet nobody would say that
        > > > >> >>> *this*
        > > > >> >>> means that water leads to heroin. The correct statistics
        > > > >> >>> consists
        > > > >> >>> in
        > > > >> >>> looking and comparing the number of heroin users in the (good
        > > > >> >>> sampling
        > > > >> >>> of) population of cannabis users, and compare it with a (good
        > > > >> >>> sampling
        > > > >> >>> of) population not using cannabis. This would also prove
        > > > >> >>> nothing,
        > > > >> >>> but
        > > > >> >>> would give an evidence. Of course when done, there are zero
        > > > >> >>> evidence
        > > > >> >>> that cannabis leads more to heroin than water, and even less
        > > > >> >>> than
        > > > >> >>> alcohol.
        > > > >> >>
        > > > >> >> There are many very successful people that use cannabis and not
        > > > >> >> heroin.
        > > > >> >
        > > > >> > This does not say much, but I agree.
        > > > >>
        > > > >> It refutes the theory that cannibas is a gateway drug, and the
        > > > >> theory
        > > > >> that cannibas ruins a person's life
        > > > >
        > > > > In your own words can you summarize what the argument actually was
        > > > > that cannibis is a gateway drug?
        > > >
        > > > Using cannibis causes people to then use heroin or other worse drugs.
        > >
        > > What is meant by 'causes'?
        >
        > A causes B. B happens because A happened.

        So this is the conjecture - researched, reflected on, and ultimately
        defined by you yourself - that you have then refuted. The active ingredient
        of cannabis causes the individual to try heroin. Not statistical, not
        correlation, no identified reasoning, no review of the evidence.

        I don't really see the point of what you've done here, Rami. You've
        defined for yourself a nonsensical - actually silly - conjecture
        about how the link between cannabis and heroin use has been defined
        (by those who say there is one). Which you then refute.

        What have you learned or taught by this process? This is a lot like
        the Popperian refutation of the idea that getting drunk can change
        your personality, maybe produce a Mr Hyde. The conjecture is defined
        as: alcohol causes or creates a different personality. It's a stupid
        way to define the link in the first place.






        > > >
        > > >
        > > > > Also the theory that cannibis ruins lives.
        > > >
        > > > Using cannibis causes one to be lazy, unproductive, not keep a job,
        > > > etc.
        > >
        > > >
        > > >
        > > > > What is the reasoning and
        > > > > evidence for these positions that your point above refutes them?
        > > >
        > > > If one case that is known to be inconsistent with the theory, then the
        > > > theory is falsified.
        > > >
        > >
        > > Doesn't that depend on whether the theory is statistical? The 2nd law
        > > of thermodynamics is statistical....if you find a single molecule
        > > going the other way you don't falsify the theory, or do you?
        >
        > The 2nd law of thermodynamics says nothing about what each molecule
        > does. It talks about the average. And the objects that the law talks
        > about are indistinguishable (fungible). So the objects all act the
        > same way in response to the laws of physics.
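        The statistical character at issue can be illustrated with a seeded
        toy simulation (an Ehrenfest-urn sketch with invented parameters):
        single molecules constantly move "the wrong way", while the
        population average still relaxes toward equilibrium.

```python
import random

# Ehrenfest-urn toy model with invented parameters: at each step one
# randomly chosen molecule switches sides of the box. Individual
# molecules move "backward" all the time; the *average* occupation
# still relaxes toward one half per side.
random.seed(0)
n_molecules, n_steps = 1000, 5000
in_left_half = [True] * n_molecules          # all start in the left half

backward_moves = 0
for _ in range(n_steps):
    i = random.randrange(n_molecules)
    if not in_left_half[i]:
        backward_moves += 1                  # a molecule returning left
    in_left_half[i] = not in_left_half[i]

fraction_left = sum(in_left_half) / n_molecules
print(fraction_left, backward_moves)
```

        No single backward move falsifies anything; only a persistent,
        large deviation of the average would.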
        >
        >
        > > So is what has been said about cannabis use, statistical?
        >
        > Humans are not fungible. They do not all act the same. They make
        > choices using their ideas. They all have different ideas, which
        > means they make different choices when presented with similar
        > situations.
        >
        > -- Rami
        >
        >


      • Rami Rustom
        Message 3 of 20, Nov 2, 2012
          On Nov 1, 2012 1:10 PM, "Bruno Marchal" <marchal@...> wrote:
          >
          >
          > On 31 Oct 2012, at 16:07, Rami Rustom wrote:
          >
          > > On Oct 28, 2012 2:43 PM, "Bruno Marchal" <marchal@...> wrote:
          > > >
          > > >
          > > > On 27 Oct 2012, at 18:38, Rami Rustom wrote:
          > > >
          > > > > On Sat, Oct 27, 2012 at 9:46 AM, Bruno Marchal <marchal@...>
          > > > > wrote:
          > > > > >
          > > > > > On 26 Oct 2012, at 20:04, Rami Rustom wrote:
          > > > > >
          > > > > >> On Fri, Oct 26, 2012 at 10:59 AM, Bruno Marchal <marchal@...
          > > >
          > > > > >> wrote:
          > > > > >>>
          > > > > >>> On 25 Oct 2012, at 20:32, Rami Rustom wrote:
          >
          > >
          > > >
          > > >
          > > > >
          > > > > >
          > > > > >>
          > > > > >>
          > > > > >>> That error is not only done often, if not systematically at the
          > > > > >>> political level, in the domain of Health, it is done in other parts
          > > > > >>> of
          > > > > >>> politics, and frequently in racist discourses, defamation, and fake
          > > > > >>> sciences.
          > > > > >>>
          > > > > >>> That error has even a Darwinian explanation, as simple neural
          > > > > >>> associative nets do that error, and yet can solve problem and needs
          > > > > >>> very few K. Some 'mistaken theory' are efficacious in the short term
          > > > > >>> (like robbing a bank, to solve the money problem).
          > > > > >>>
          > > > > >>> Stupidity is not in the mistake, it is in the doing of the same
          > > > > >>> mistake again and again and again and again (usually for problem of
          > > > > >>> image of oneself by some people indeed, or just for perpetuating a
          > > > > >>> fear selling technic to steal your money, something sad, bad, but
          > > > > >>> 'natural' like robbing a bank).
          > > > > >>
          > > > > >> Sometimes people make the same mistakes repeated because of anti-
          > > > > >> rational memes.
          > > > > >
          > > > > > Yes. The problem is that something anti-rational for the long term can
          > > > > > be completely rational in the short term. Typically "stealing money".
          > > > > > It works well in the short term, but only because it is bad and it is
          > > > > > done by a minority. If stealing money was encouraged and taught in
          > > > > > high school, the society would quickly degenerate in many mafia type
          > > > > > of wars. You need enough honest people doing "real money" to be stolen
          > > > > > by others, which acts rationally for their limited personal purpose.
          > > > > >
          > > > > > There are often conflicts between the short term, the middle term and
          > > > > > the long term, which are conflicts between different type of rational
          > > > > > reason. The frontier between rational and non rational is like the
          > > > > > Mandelbrot set: very complex and intricate. That is why, in actual
          > > > > > context, usually with only very partial information, we have to trust
          > > > > > our guts. I think.
          > > > >
          > > > > That implies that our gut feelings are not discoverable. But that is
          > > > > wrong.
          > > > >
          > > > > A gut feeling is the feeling one gets when there is a conflict of
          > > > > ideas in his mind, where one of those ideas is conscious and explicit,
          > > > > and the other idea is subconscious and inexplicit. But there is
          > > > > nothing permanent about the status of that subconscious and inexplicit
          > > > > idea. One can put forth effort to discover it, thus making it
          > > > > conscious and explicit. This allows one to criticize it.
          > > > >
          > > > Hmm... What makes you sure we can know all the roots of our guts
          > > > feeling?
          > >
          > > A subconscious idea *is* instantiated in the brain. It physically
          > > exists there.
          > >
          > Hmm.... OK. Can be quite implicit though, and not recognizable as such.

          What does that mean?


          >
          >
          >
          > > Why would it be off limits from being discovered?
          > >
          > There is no algorithm to decide, from its code, what a program
          > can do (Rice's theorem). The set of programs performing a
          > particular task is not recursive. With neural nets it is worse,
          > as the information can be distributed in tiny differences of
          > excitation level across billions of neurons.

          When speaking of software, one does not need to speak of hardware. The
          hardware doesn't affect the software, unless the hardware gets
          damaged. Agreed?


          >
          >
          >
          >
          > >
          > > > This can be contradicted in the mechanist theory + classical
          > > > theory of knowledge: it shows that the ideally arithmetically correct
          > > > machine can never know "who she is", and in that theory the guts
          > > > feeling originates from the "who" we are.
          > >
          > > I don't know what the "who somebody is" means nor how that is relevant
          > > to this discussion.
          > >
          > If we are machines, we cannot know which machine we are. We can
          > only bet on some level of description, in case we accept a
          > digital brain prosthesis, for example. Even if the guess is
          > correct, we still cannot recognize the genuine program which
          > constitutes our identity. The relevance of this for gut feelings
          > is that we might be unable to decide whether some idea is a
          > prejudice of our parents, or of the mammals, or of all universal
          > machines. In the latter case, we can't change it without losing
          > consistency or soundness.

          I still don't know what that means nor why it's relevant to our
          discussion. Consistency of what? Soundness of what?


          >
          >
          >
          > >
          > > > Without going that far in
          > > > the theory, I doubt we can be conscious of all the subconscious
          > > > processing. A machine can refer to itself integrally, but not to its
          > > > integral behavior.
          > >
          > > I don't understand your idea that some subconscious ideas are off
          > > limits.
          > >
          > Intuitively it seems obvious, if only because our brains
          > plausibly have a very long, complex history, and nature does not
          > write explicit programs with readable comments.

          You are using the theory of mind that says that certain brain
          parts cause certain mind parts. But that theory has been refuted
          by DD. That refutes evolutionary biology (which is the field
          you're referring to).

          What you're saying is that some of our software is hardcoded, and
          that hardcoding evolved over millions of years. But humans don't
          have any hardcoding. It's all softcoded. All of it changes.


          > Then, although a machine can represent itself entirely, it
          > cannot represent its behavior entirely, for reasons akin to the
          > reason nobody can see his or her own back directly. There are
          > blind spots in self-reference, for logical reasons.

          Do you agree that we can guess about them?


          >
          >
          >
          >
          > >
          > > Consider this. Decades from now we create the first AGI. It programs
          > > its own code. No?
          > >
          > This is ambiguous. Programs can easily modify, even completely,
          > their own code, but they cannot create one ex nihilo.

          What's "nihilo"?


          > There is always another universal program or reality needed,
          > even if it is the arithmetical reality, or physical reality,
          > which are highly undecidable sets (provably in the first case,
          > plausibly in the second).
          > It is like our wanting. We have little control over what we
          > want,

          False. Our wants depend on our values, and we have control over
          our values. Our values are ideas. We guess ideas and criticize
          them. Sometimes we do this with our value-type ideas. And when we
          make changes to our values, we've changed our wants.


          > nor do we have control over the way events can hurt us.

          Physical hurt or mental hurt? We have a lot of control over mental
          hurt. Mental hurt is TCS-coercion, which occurs when someone is
          acting on one theory while another, conflicting theory is active
          in his mind. An example of TCS-coercion: someone gets offended
          when someone else directs a racial slur at him. The one who gets
          hurt doesn't want to be called by that slur, but it happened. Do
          you think he has no control over getting hurt by this?


          > In fact we control very few things, and yet we do have a
          > non-negligible partial control, but it is only a window, so to
          > speak.

          It is a window. But, that window can be made bigger without limit. Or,
          is the limit 100%? Can someone be absolutely conscious of *all* his
          ideas such that none of them are subconscious/inexplicit? Doesn't that
          imply perfection?

          No it doesn't. Because he could be wrong about any of them.

          -- Rami
        • Rami Rustom
          Message 4 of 20, Nov 2, 2012
            On Fri, Nov 2, 2012 at 9:48 AM, a b <asbbih@...> wrote:
            > On Wed, Oct 31, 2012 at 3:16 PM, Rami Rustom <rombomb@...> wrote:
            >>
            >>
            >>
            >> On Oct 28, 2012 2:43 PM, "a b" <asbbih@...> wrote:
            >> >
            >> > On Sun, Oct 28, 2012 at 12:15 AM, Rami Rustom <rombomb@...> wrote:
            >> > >
            >> > >
            >> > >
            >> > > On Sat, Oct 27, 2012 at 2:50 PM, a b <asbbih@...> wrote:
            >> > > > On Sat, Oct 27, 2012 at 5:38 PM, Rami Rustom <rombomb@...>
            >> > > > wrote:
            >> > > >>
            >> > > >>
            >> > > >>
            >> > > >> On Sat, Oct 27, 2012 at 9:46 AM, Bruno Marchal <marchal@...>
            >> > > >> wrote:
            >> > > >> >
            >> > > >> > On 26 Oct 2012, at 20:04, Rami Rustom wrote:
            >> > > >> >
            >> > > >> >> On Fri, Oct 26, 2012 at 10:59 AM, Bruno Marchal
            >> > > >> >> <marchal@...>
            >> > > >> >> wrote:
            >> > > >> >>>
            >> > > >> >>> On 25 Oct 2012, at 20:32, Rami Rustom wrote:
            >> > > >> >>>
            >> > > >> >>> Like: most heroin user have begun with cannabis, so cannabis
            >> > > >> >>> leads
            >> > > >> >>> to
            >> > > >> >>> heroin. The gateway theory.
            >> > > >> >>>
            >> > > >> >>> You can explain that the number of cannabis user among heroin
            >> > > >> >>> user
            >> > > >> >>> is
            >> > > >> >>> irrelevant for judging a relation of causality rather easily:
            >> > > >> >>> all
            >> > > >> >>> heroin user have begun with water, yet nobody would say that
            >> > > >> >>> *this*
            >> > > >> >>> means that water leads to heroin. The correct statistics
            >> > > >> >>> consists
            >> > > >> >>> in
            >> > > >> >>> looking and comparing the number of heroin users in the (good
            >> > > >> >>> sampling
            >> > > >> >>> of) population of cannabis users, and compare it with a (good
            >> > > >> >>> sampling
            >> > > >> >>> of) population not using cannabis. This would also prove
            >> > > >> >>> nothing,
            >> > > >> >>> but
            >> > > >> >>> would give an evidence. Of course when done, there are zero
            >> > > >> >>> evidence
            >> > > >> >>> that cannabis leads more to heroin than water, and even less
            >> > > >> >>> than
            >> > > >> >>> alcohol.
            >> > > >> >>
            >> > > >> >> There are many very successful people that use cannabis and not
            >> > > >> >> heroin.
            >> > > >> >
            >> > > >> > This does not say much, but I agree.
            >> > > >>
            >> > > >> It refutes the theory that cannibas is a gateway drug, and the
            >> > > >> theory
            >> > > >> that cannibas ruins a person's life
            >> > > >
            >> > > > In your own words can you summarize what the argument actually was
            >> > > > that cannibis is a gateway drug?
            >> > >
            >> > > Using cannibis causes people to then use heroin or other worse drugs.
            >> >
            >> > What is meant by 'causes'?
            >>
            >> A causes B. B happens because A happened.
            >
            > So this is the conjecture - researched, reflected on, and ultimately
            > defined by you yourself

            No. Bruno explained the gateway theory.


            > - that you have then refuted. The active ingredient
            > of cannabis causes the individual to try heroin.

            That's not what I said. I said that A is *using cannabis*.
            You're saying that A is *THC*. They aren't the same.


            > Not statistical, not
            > correlation, no identified reasoning, no review of the evidence.

            *The evidence*. There is no possibility that evidence can say
            anything about how people make choices. Evidence is used in
            scientific knowledge creation. Choices are part of moral
            knowledge, not scientific knowledge. Science cannot say anything
            about choices or how people make choices. Only philosophy can do
            that.

            When a person makes a choice, what he's doing is considering his
            options and then making a value judgement choosing the best option.
            His value judgement depends on the context of the choice. Part of that
            context is his values (and epistemic ideas), and part of it is the
            details of the situation.


            > I don't really see the point of what you've done here Rami. You've defined
            > yourself a nonsensical - actually silly - conjecture about how the link
            > between cannabis and heroin use has been defined (by those who say there is
            > one) .

            Actually Bruno explained the gateway theory, not me. Read his posts
            that I replied to.


            > Which you then refute.

            Well, if you have a criticism, then tell me; then maybe I
            haven't refuted the gateway theory.


            >
            > What have you learned or taught by this process?

            Nothing. What do you think I should learn?


            > This is a lot like the
            > popperian refutation of the idea getting drunk can change your personality,
            > maybe produce a Mr Hyde. The conjecture is defined as alcohol causes or
            > creates a different personality. It's a stupid way to define the link in
            > the first place.

            So how do you think it should work?

            -- Rami Rustom
            http://ramirustom.blogspot.com
          • Bruno Marchal
            Message 5 of 20, Nov 3, 2012
              On 02 Nov 2012, at 23:02, Rami Rustom wrote:

              > On Nov 1, 2012 1:10 PM, "Bruno Marchal" <marchal@...> wrote:
              > >
              > >
              > > On 31 Oct 2012, at 16:07, Rami Rustom wrote:
              > >
              > > > On Oct 28, 2012 2:43 PM, "Bruno Marchal" <marchal@...>
              > wrote:
              > > > >
              > > > >
              > > > > On 27 Oct 2012, at 18:38, Rami Rustom wrote:
              > > > >
              > > > > > On Sat, Oct 27, 2012 at 9:46 AM, Bruno Marchal <marchal@...
              > >
              > > > > > wrote:
              > > > > > >
              > > > > > > On 26 Oct 2012, at 20:04, Rami Rustom wrote:
              > > > > > >
              > > > > > >> On Fri, Oct 26, 2012 at 10:59 AM, Bruno Marchal <marchal@...
              > > > >
              > > > > > >> wrote:
              > > > > > >>>
              > > > > > >>> On 25 Oct 2012, at 20:32, Rami Rustom wrote:
              > >
              > > >
              > > > >
              > > > >
              > > > > >
              > > > > > >
              > > > > > >>
              > > > > > >>
              > > > > > >>> That error is not only done often, if not systematically
              > at the
              > > > > > >>> political level, in the domain of Health, it is done in
              > other parts
              > > > > > >>> of
              > > > > > >>> politics, and frequently in racist discourses,
              > defamation, and fake
              > > > > > >>> sciences.
              > > > > > >>>
              > > > > > >>> That error has even a Darwinian explanation, as simple
              > neural
              > > > > > >>> associative nets do that error, and yet can solve
              > problem and needs
              > > > > > >>> very few K. Some 'mistaken theory' are efficacious in
              > the short term
              > > > > > >>> (like robbing a bank, to solve the money problem).
              > > > > > >>>
              > > > > > >>> Stupidity is not in the mistake, it is in the doing of
              > the same
              > > > > > >>> mistake again and again and again and again (usually for
              > problem of
              > > > > > >>> image of oneself by some people indeed, or just for
              > perpetuating a
              > > > > > >>> fear selling technic to steal your money, something sad,
              > bad, but
              > > > > > >>> 'natural' like robbing a bank).
              > > > > > >>
              > > > > > >> Sometimes people make the same mistakes repeated because
              > of anti-
              > > > > > >> rational memes.
              > > > > > >
              > > > > > > Yes. The problem is that something anti-rational for the
              > long term can
              > > > > > > be completely rational in the short term. Typically
              > "stealing money".
              > > > > > > It works well in the short term, but only because it is
              > bad and it is
              > > > > > > done by a minority. If stealing money was encouraged and
              > taught in
              > > > > > > high school, the society would quickly degenerate in many
              > mafia type
              > > > > > > of wars. You need enough honest people doing "real money"
              > to be stolen
              > > > > > > by others, which acts rationally for their limited
              > personal purpose.
              > > > > > >
              > > > > > > There are often conflicts between the short term, the
              > middle term and
              > > > > > > the long term, which are conflicts between different type
              > of rational
              > > > > > > reason. The frontier between rational and non rational is
              > like the
              > > > > > > Mandelbrot set: very complex and intricate. That is why,
              > in actual
              > > > > > > context, usually with only very partial information, we
              > have to trust
              > > > > > > our guts. I think.
              > > > > >
              > > > > > That implies that our gut feelings are not discoverable. But
              > that is
              > > > > > wrong.
              > > > > >
              > > > > > A gut feeling is the feeling one gets when there is a
              > conflict of
              > > > > > ideas in his mind, where one of those ideas is conscious and
              > explicit,
              > > > > > and the other idea is subconscious and inexplicit. But there
              > is
              > > > > > nothing permanent about the status of that subconscious and
              > inexplicit
              > > > > > idea. One can put forth effort to discover it, thus making it
              > > > > > conscious and explicit. This allows one to criticize it.
              > > > > >
              > > > > Hmm... What makes you sure we can know all the roots of our guts
              > > > > feeling?
              > > >
              > > > A subconscious idea *is* instantiated in the brain. It physically
              > > > exists there.
              > > >
              > > Hmm.... OK. Can be quite implicit though, and not recognizable as
              > such.
              >
              > What does that mean?
              >

              That there is no algorithm to decide what a piece of code or matter
              can compute or do.
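[Editor's sketch of the diagonal argument behind Rice's theorem, which Bruno is alluding to here. This is my illustration, not from the thread; the names `make_contrarian`, `says_yes`, and `says_no` are invented for the example. Any claimed total decider for "what does this code do" can be contradicted by a program built from the decider itself.]

```python
# Sketch of the diagonal argument behind Rice's theorem
# (illustration only; all names are invented for the example).

def make_contrarian(decides_returns_zero):
    """Given a claimed total decider for 'f() returns 0', build a
    function that does the opposite of whatever the decider predicts."""
    def contrarian():
        if decides_returns_zero(contrarian):
            return 1  # predicted to return 0, so return 1
        return 0      # predicted not to return 0, so return 0
    return contrarian

# Any concrete decider is refuted by construction:
says_yes = lambda f: True    # claims every f returns 0
says_no = lambda f: False    # claims no f returns 0

assert make_contrarian(says_yes)() == 1  # verdict "returns 0" was wrong
assert make_contrarian(says_no)() == 0   # verdict "doesn't" was wrong
print("no total decider survives the construction")
```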



              >
              > >
              > >
              > >
              > > > Why would it be off limits from being discovered?
              > > >
              > > There is no algorithm to decode what a program can do, from its code
              > > (Rice theorem). The set of programs doing a particular task is not
              > > recursive. With neural nets, it is worst, as the information can be
              > > distributed in tiny difference of excitation level of billions of
              > > neurons.
              >
              > When speaking of software, one does not need to speak of hardware. The
              > hardware doesn't affect the software, unless the hardware gets
              > damaged. Agreed?
              >
              Yes. But Rice's theorem concerns both software and possible hardware.



              >
              > >
              > >
              > >
              > >
              > > >
              > > > > This can be contradicted in the mechanist theory + classical
              > > > > theory of knowledge: it shows that the ideally arithmetically
              > correct
              > > > > machine can never know "who she is", and in that theory the guts
              > > > > feeling originates from the "who" we are.
              > > >
              > > > I don't know what the "who somebody is" means nor how that is
              > relevant
              > > > to this discussion.
              > > >
              > > If we are machine, we cannot know which machine we are. We can only
              > > bet on some level of description, in case we accept a digital brain
              > > prosthesis, for example. Even if the guess is correct, we still
              > cannot
              > > recognize the genuine program which constitutes our identity. The
              > > relevance of this is that for the gut feelings, we might be unable
              > to
              > > decide if some idea is a prejudice of the parents, or of the
              > mammals,
              > > or of all universal machines. In the mater case, we can't change it
              > > without loosing consistency or soundness.
              >
              > I still don't know what that means nor why its relevant to our
              > discussion. Consistency of what? Soundness of what?
              >
              Of us, assuming we are machines. All machines looking inward discover
              that they have an irrational part, in the sense that their knowledge
              grows bigger than what they can prove. That is also why they will
              change, in a way that they cannot predict.




              >
              > >
              > >
              > >
              > > >
              > > > > Without going that far in
              > > > > the theory, I doubt we can be conscious of all the subconscious
              > > > > processing. A machine can refer to itself integrally, but not
              > to its
              > > > > integral behavior.
              > > >
              > > > I don't understand your idea that some subconscious ideas are off
              > > > limits.
              > > >
              > > Intuitively it seems obvious, if only because our brain have
              > plausibly
              > > a very long complex history, and nature does not make explicit
              > > programs with readable comment.
              >
              > You are using the theory of the mind that says that certain brain
              > parts cause certain mind parts. But that theory has been refuted by
              > DD. That refutes evolutionary biology (which is the field you're
              > referring to).
              >
              Where did I talk about brain parts? The conscious mind is not even
              related to the brain, which in fine is a construct of the mind. Now if
              you use "mind" in a large sense, then yes, the part of the brain or of
              the computer which cleans up this or that memory can be attributed to
              some part of the computer. If not, your point above, that damaged
              hardware has consequences on the mind, would not follow.



              >
              > What you're saying is that some of our software is hardcoded. And that
              > hardcoding evolved over millions of years. But, humans don't have any
              > hardcoding. Its all softcoded. All of it changes.
              >
              This does not make sense to me (even though comp implies
              non-materialism, but only globally). At some level the brain changes
              itself, but through the material behavior of neurons. Some parts of
              the brain are more hardcoded than others, like the brain stem and the
              limbic system. But even the cortical system has a hardcoded part, and
              its changes of itself will be hard changes.




              >
              > > Then, although machine can represent
              > > itself entirely, it can't represent its behavior entirely, almost
              > for
              > > reason akin to the reason that nobody can see directlmy his/her own
              > > back. there are blind spot in self-reference, for logical reasons.
              >
              > Do you agree that we can guess about them?
              >
              About some of them, yes. I doubt we can guess all of them.



              >
              > >
              > >
              > >
              > >
              > > >
              > > > Consider this. Decades from now we create the first AGI. It
              > programs
              > > > its own code. No?
              > > >
              > > This is ambiguous. Programs can easily modify, even completely their
              > > own code, but it cannot create one from nihilo.
              >
              > Whats nihilo?
              >
              Nothing. You need the program to start with, and some contexts.
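[Editor's toy sketch of the point that a program can rewrite its own code but cannot create itself from nothing; my illustration, not from the thread, and the tiny one-line "program" format is invented. Generation 0 must be given, and an outer interpreter is always needed to run the result.]

```python
# Toy self-modifying "program" (illustration only): each generation
# rewrites its own source, but generation 0 had to be given, and an
# outer interpreter (here, Python's exec) is always needed.

gen0 = "counter = 0"  # the program we must start with, not created ex nihilo

def evolve(code):
    """Produce the next version of the program from the current source."""
    n = int(code.split("=")[1]) + 1
    return f"counter = {n}"

program = gen0
for _ in range(2):
    program = evolve(program)  # the code is rewritten completely

env = {}
exec(program, env)             # the surrounding "reality" runs it
print(program, "->", env["counter"])  # counter = 2 -> 2
```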




              >
              > > There is always
              > > another universal program or reality needed, even if it is the
              > > arithmetical reality, or physical reality, which are highly
              > > undecidable set (probably in the first case, plausibly in the second
              > > case).
              > > It is like our wanting. We have few control about what we want,
              >
              > False. Our wants depend on our values. We have control over our
              > values. Our values are ideas. We guess ideas and criticism them.
              > Sometimes we're doing this with our value-type ideas. And when we make
              > changes to our values, we've changed our wants.
              >
              I have to pee now. Oh, I will change that, so I can continue the talk.
              I want to be happy? Oh, I will change that, life will be so much
              simpler.
              I am afraid to die. Oh I will change that, so that I can kill myself
              in peace.
              ...
              I am not sure that our values are only ideas. That seems a bit too
              constructivist for my appreciation of the comp hypothesis.



              >
              > > nor do
              > > we have control on the way events can hurt us.
              >
              > Physical hurt or mental hurt? We have a lot of control over mental
              > hurt.
              >
              Hurt is always mental. "Physical hurt" is a manner of speaking: mental
              hurt related to low-level sensations coming from the peripheral
              (hardcoded too) nervous system (including glial cells).



              > Mental hurt is TCS-coercion. TCS-coercion occurs when someone is
              > acting on a theory while another conflicting theory is active in his
              > mind. An example of TCS-coercion is someone gets offended when someone
              > else says a racial slur to him. So the one who gets hurt doesn't want
              > to be called by that racial slur but it happened. Do you think he has
              > no control over getting hurt by this?
              >
              Here he has control. But that's a special case. And then he has
              control but he needs some education, so even if that is in principle
              controllable, most of the time in "real life" he will not.



              >
              > > In fact we control very
              > > few things, but yet, we do have a non negligible partial control,
              > but
              > > it is only a window, so to speak.
              >
              > It is a window. But, that window can be made bigger without limit. Or,
              > is the limit 100%? Can someone be absolutely conscious of *all* his
              > ideas such that none of them are subconscious/inexplicit?
              >
              No, it cannot. In fact consciousness needs some unconsciousness. You
              can make a machine refer to 100% of its body (this is not trivial
              to prove), but you can't make a program refer to all its possible
              behavior.
              This was intuited by Hofstadter, and proved by Solovay and others.
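[Editor's note: the first half of that claim, a program referring to 100% of its own code, is witnessed by quines. A classic two-line Python quine, added as my illustration, not from the thread:]

```python
# A classic Python quine: the two lines below print exactly their own
# source code, i.e. the program refers to 100% of its "body".
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The asymmetry in the claim is that while this program reproduces its entire code, no analogous construction enumerates all of its possible behavior.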



              > Doesn't that
              > imply perfection?
              >
              > No it doesn't. Because he could be wrong about any of them.
              >
              Not on all of them. There is a fixed point which cannot be doubted,
              like consciousness itself.

              Bruno

              >

              http://iridia.ulb.ac.be/~marchal/





              [Non-text portions of this message have been removed]
              • Rami Rustom
                ... Are you saying someone can t *guess* what one s subconscious ideas are? ... That interesting. I never thought of that. As the mind is discovering its
                Message 7 of 20 , Nov 4, 2012
                • 0 Attachment
                  On Sat, Nov 3, 2012 at 6:46 AM, Bruno Marchal <marchal@...> wrote:
                  >
                  > On 02 Nov 2012, at 23:02, Rami Rustom wrote:
                  >
                  >> On Nov 1, 2012 1:10 PM, "Bruno Marchal" <marchal@...> wrote:
                  >> >
                  >> >
                  >> > On 31 Oct 2012, at 16:07, Rami Rustom wrote:
                  >> >
                  >> > > On Oct 28, 2012 2:43 PM, "Bruno Marchal" <marchal@...> wrote:
                  >
                  >>
                  >> >
                  >> >
                  >> >
                  >> > > Why would it be off limits from being discovered?
                  >> > >
                  >> > There is no algorithm to decode what a program can do, from its code
                  >> > (Rice theorem). The set of programs doing a particular task is not
                  >> > recursive. With neural nets, it is worst, as the information can be
                  >> > distributed in tiny difference of excitation level of billions of
                  >> > neurons.
                  >>
                  >> When speaking of software, one does not need to speak of hardware. The
                  >> hardware doesn't affect the software, unless the hardware gets
                  >> damaged. Agreed?
                  >>
                  > Yes. But Rice theorem concerns both software, and possible hardware.

                  Are you saying someone can't *guess* what one's subconscious ideas are?


                  >
                  >
                  >>
                  >> >
                  >> >
                  >> >
                  >> >
                  >> > >
                  >> > > > This can be contradicted in the mechanist theory + classical
                  >> > > > theory of knowledge: it shows that the ideally arithmetically correct
                  >> > > > machine can never know "who she is", and in that theory the guts
                  >> > > > feeling originates from the "who" we are.
                  >> > >
                  >> > > I don't know what the "who somebody is" means nor how that is relevant
                  >> > > to this discussion.
                  >> > >
                  >> > If we are machine, we cannot know which machine we are. We can only
                  >> > bet on some level of description, in case we accept a digital brain
                  >> > prosthesis, for example. Even if the guess is correct, we still cannot
                  >> > recognize the genuine program which constitutes our identity. The
                  >> > relevance of this is that for the gut feelings, we might be unable to
                  >> > decide if some idea is a prejudice of the parents, or of the mammals,
                  >> > or of all universal machines. In the mater case, we can't change it
                  >> > without loosing consistency or soundness.
                  >>
                  >> I still don't know what that means nor why its relevant to our
                  >> discussion. Consistency of what? Soundness of what?
                  >>
                  > Of us, assuming we are machine. All machine looking inward discover
                  > that they have an irrational part, in the sense that their knowledge
                  > grows bigger than what they can prove.

                  That's interesting. I never thought of that. As the mind is
                  discovering its subconscious ideas (making them conscious), its own
                  subconscious ideas are expanding (in number) faster than it can
                  discover them. In which case one cannot discover *all* of his
                  subconscious ideas.


                  > That is also why the will
                  > change, in a way that they cannot predict.

                  By will, I guess you mean wants. Wants are like psychological forces.
                  They result from one's values. One's values *cause* one's wants. So I
                  can predict that if a person changes what he values, then his wants
                  will change. I can predict that if a person values X, then he wants
                  X. I can predict that if a person currently values X, and if I
                  persuade him that X is bad, then he will no longer value X, and he
                  won't want X anymore.

                  Now this doesn't mean that a person who wants X, will do X. He could
                  have conflicting wants. He could want X and Y, while X and Y are
                  conflicting. So he might choose to do nothing for now. Or he might
                  choose X and thus coerce himself, or choose Y and thus coerce himself.


                  >
                  >
                  >
                  >>
                  >> >
                  >> >
                  >> >
                  >> > >
                  >> > > > Without going that far in
                  >> > > > the theory, I doubt we can be conscious of all the subconscious
                  >> > > > processing. A machine can refer to itself integrally, but not to its
                  >> > > > integral behavior.
                  >> > >
                  >> > > I don't understand your idea that some subconscious ideas are off
                  >> > > limits.
                  >> > >
                  >> > Intuitively it seems obvious, if only because our brain have plausibly
                  >> > a very long complex history, and nature does not make explicit
                  >> > programs with readable comment.
                  >>
                  >> You are using the theory of the mind that says that certain brain
                  >> parts cause certain mind parts. But that theory has been refuted by
                  >> DD. That refutes evolutionary biology (which is the field you're
                  >> referring to).
                  >>
                  > Where did I talk on brain parts? The conscious mind is not even
                  > related to the brain, which in fine is a construct of the mind.

                  So then you mean that we have hardcoded software and that the
                  hardcoding evolved over millions of years.


                  >
                  >>
                  >> What you're saying is that some of our software is hardcoded. And that
                  >> hardcoding evolved over millions of years. But, humans don't have any
                  >> hardcoding. Its all softcoded. All of it changes.
                  >>
                  > This does not make sense for me (despite comp implies non materialism,
                  > but only globally). At some level the brain changes itself, but
                  > through the material behavior of neurons. Then some part of the brain
                  > are more hardcoded than others, like the cerebral stem, the limbic
                  > system. But even the cortical system has an hard coded part, and the
                  > change of itself will be hard changes.

                  Why do you think any of the human brain parts from which the mind
                  emerges are hardcoded?


                  >
                  >
                  >
                  >
                  >>
                  >> > Then, although a machine can represent
                  >> > itself entirely, it can't represent its behavior entirely, for
                  >> > reasons akin to the reason that nobody can directly see his/her own
                  >> > back. There are blind spots in self-reference, for logical reasons.
                  >>
                  >> Do you agree that we can guess about them?
                  >>
                  > About some of them, yes. I doubt we can guess all of them.

                  We can guess all of them. It doesn't mean we'll guess right.
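The "blind spots" Bruno mentions have a precise counterpart in computability theory: no program can totally decide its own behavior. A minimal Python sketch of the classic halting-problem diagonal argument (the function names `halts` and `diagonal` are hypothetical, for illustration only; no total `halts` can actually exist):

```python
# Sketch of the diagonal argument behind "blind spots" in self-reference.
# Suppose halts(f) could decide, for any zero-argument function f,
# whether f() eventually returns. No such total decider can exist,
# so here it is only a placeholder.

def halts(f):
    # Placeholder: a real, always-correct version is impossible.
    raise NotImplementedError("no total halting decider exists")

def diagonal():
    # If halts were correct, diagonal() would halt exactly when it
    # doesn't halt: a contradiction. This is the machine's "back"
    # that it cannot see directly.
    if halts(diagonal):
        while True:
            pass
    return None
```

The contradiction in `diagonal` is why a machine can carry a complete description of its own code yet still not predict its own behavior in full.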


                  >
                  >
                  >>
                  >> >
                  >> >
                  >> >
                  >> >
                  >> > >
                  >> > > Consider this. Decades from now we create the first AGI. It programs
                  >> > > its own code. No?
                  >> > >
                  >> > This is ambiguous. Programs can easily modify, even completely,
                  >> > their own code, but they cannot create one from nihilo.
                  >>
                  >> What's nihilo?
                  >>
                  > Nothing. You need the program to start with, and some contexts.

                  I'm not saying that the human mind creates its code from nothing. I'm
                  saying that the human mind is inborn with some softcode, and zero
                  hardcoding.
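Bruno's point that self-modification still needs an initial program and a context can be made concrete. A hypothetical Python sketch: a program rewrites its own "softcode" at run time, but the rewriting runs on a fixed interpreter and starts from a given initial program; it does not bootstrap itself from nothing:

```python
# A program that modifies its own softcode at run time. It still needs
# an initial program and a fixed interpreter (a universal machine) to
# run on: it cannot create itself from "nihilo".

code = "def step(x):\n    return x + 1\n"  # the initial program

env = {}
exec(code, env)        # the fixed interpreter runs the softcode
print(env["step"](1))  # -> 2

# The program rewrites its own source...
code = code.replace("x + 1", "x * 2")
exec(code, env)        # ...and the same interpreter runs the new version
print(env["step"](3))  # -> 6: the behavior changed, the interpreter did not
```

The interpreter (`exec`) and the initial string play the role of the "universal program or reality" that Bruno says is always needed underneath.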


                  >
                  >
                  >
                  >>
                  >> > There is always
                  >> > another universal program or reality needed, even if it is the
                  >> > arithmetical reality, or physical reality, which are highly
                  >> > undecidable sets (probably in the first case, plausibly in the
                  >> > second case).
                  >> > It is like our wanting. We have little control over what we want,
                  >>
                  >> False. Our wants depend on our values. We have control over our
                  >> values. Our values are ideas. We guess ideas and criticize them.
                  >> Sometimes we do this with our value-type ideas. And when we make
                  >> changes to our values, we've changed our wants.
                  >>
                  > I have to pee now. Oh, I will change that, so I can continue the talk.

                  Or you could pee now and continue later.


                  > I want to be happy? Oh, I will change that, life will be so much
                  > simpler.

                  That is so vague that I think no one could change a value using that idea.


                  > I am afraid to die. Oh I will change that, so that I can kill myself
                  > in peace.

                  You'd have to address the reasons for your fear. That means
                  discovering the subconscious ideas that are causing the emotion of
                  fear. Once those are discovered and refuted, the fear will cease.


                  > ...
                  > I am not sure that our values are only ideas. That seems a bit too
                  > constructivist for my appreciation of the comp hypothesis.

                  What else could they be?


                  >
                  >
                  >>
                  >> > nor do
                  >> > we have control on the way events can hurt us.
                  >>
                  >> Physical hurt or mental hurt? We have a lot of control over mental
                  >> hurt.
                  >>
                  > Hurt is always mental. "Physical hurt" is a manner of speaking: mental
                  > hurt related to low-level sensations coming from the peripheral
                  > (hardcoded too) nervous system (including glial cells).
                  >
                  >
                  >
                  >> Mental hurt is TCS-coercion. TCS-coercion occurs when someone is
                  >> acting on a theory while another, conflicting theory is active in his
                  >> mind. An example of TCS-coercion is when someone gets offended because
                  >> someone else says a racial slur to him. So the one who gets hurt
                  >> doesn't want to be called by that racial slur, but it happened. Do
                  >> you think he has no control over getting hurt by this?
                  >>
                  > Here he has control. But that's a special case. And then he has
                  > control, but he needs some education, so even if that is in principle
                  > controllable, most of the time in "real life" he will not have it.

                  No. You're saying that he needs *ideas* in order to not feel bad. But
                  you're not talking about what caused the bad feeling.

                  I'm saying that he first learned bad ideas, which are causing him to
                  feel bad AND that he needs to refute those ideas in order to not feel
                  bad.

                  -- Rami Rustom
                  http://ramirustom.blogspot.com