
Falsifiable Moral Theories

  • brhallway
    Message 1 of 7, Dec 5, 2010
      A subset of the theories which explain how best to improve the well being of humans on Earth is the "Taking Children Seriously" theory, which in its briefest articulation might be described as being about "non-coercion". http://www.takingchildrenseriously.com/

      At that website David Deutsch has written an article explaining the relationship between fallibility and TCS.
      http://www.takingchildrenseriously.com/node/16

      David writes:

      "Experiment could not refute the theory of TCS, but argument and criticism might."

      I agree with all that David has written in his article but - and I think it's a point of fundamental significance - it's incorrect to claim that "experiment could not refute the theory of TCS."

      What is good in the moral sense - what is the right thing to do - is that which promotes well being. Sam Harris' most recent book "The Moral Landscape" goes into great detail about how science can determine human values. http://www.amazon.com/Moral-Landscape-Science-Determine-Values/dp/1439171211

      Harris goes to great pains to demolish Hume's is-ought distinction, pointing out that we can't get "is" without "ought": if you want to know what is, then you ought to value good explanations, parsimony, reliable experimental tests, logical coherence and so forth. But just as significantly, Harris points out that values are simply a certain type of fact about the world and as such fall within the scope of science to answer. I clearly cannot summarise his entire book here, so permit me to elaborate a little before returning to Fabric of Reality matters.

      Once you accept the premise that the only thing we need to consider in our moral judgements are the experiences of conscious creatures and that what is worth maximising is the well being of those creatures then everything else quickly follows. Harris considers a thought experiment where you imagine a world which is "the worst possible suffering for everyone all the time." By definition this is the worst state of affairs physically possible. Now any state of affairs is clearly better than this. Thus movements either away from this worst-possible state of suffering for everyone or towards it are movements which amount to more or less right or wrong answers to moral questions. Harris admits there may be many such solutions and ways to flourish - but the point is that there are scientific truths to be discovered here. Admitting that...

      Although we do not actually need to do the experiment to show that beating a child (or otherwise being coercive) is wrong - we could do such an experiment. Does beating a child improve their well being? Is their health - mental, emotional, physical, spiritual - improved by the beating? Do beaten children go out into the world more confident, happy and resilient people, or are they more likely to be withdrawn, angry and visit similar violence upon others? To take it another step, despite the infancy of neuroscience at this stage: does their brain - if we had the appropriate scanning technology - show objective signs of the correlates of pleasure, pain, reward, confidence, satisfaction, contentment, fear, shame, and so on? Would not all of these measures constitute an experimental falsification of the theory that "beating a child improves their well being"? After all, isn't improving the child the original intent of such "discipline"?

      Equally then, could experiment refute the theory of TCS? It seems a similar battery of experiments can be imagined in just the same way. Does a TCS approach produce children with greater well being? Are they happier, better adjusted, more confident, creative and happy people...or are they not? It seems to me that if they are not then we have a refutation. At the limit we can imagine not only studies where participants simply report the contents of their consciousness but also have their brain scanned as they undergo a TCS exchange about whether they can try MDMA this weekend with some friends compared to some other group where the approach is a "just say no" coercive one.

      I think then that TCS is falsifiable in the usual sense of the word. One need only grant that what you are actually seeking is the greatest possible well-being of the child as the outcome of putting some parenting theory into practice. If the theory does improve the well-being of the child then it is the best moral - and hence scientific - theory currently available. Unless, that is, another theory can be shown to be more capable of improving well being. In practice of course, no such experiments need to be done, as those alternatives can be rejected on other grounds that David alludes to.

      And is well-being too imprecise a term to be using here? Consider, as Sam Harris does, by way of example - the notion of physical health. It defies precise definition and is rather elastic to changes in medicine. Yet we can still talk meaningfully about being healthy or not. Likewise, our notion of what it means for a theory like TCS to improve the "well-being" can be "perpetually open to revision as we make progress in science." (Harris, p 12).

      So TCS does not need to be tested for its capacity to improve well being in the ways I suggest for us to embrace it - but I observe that because TCS is ultimately a theory about how best to achieve well being - and well being is as scientific a concept as "physical health" - we can say that TCS is a falsifiable, scientific theory. So too is the broader idea that maximising the well being of conscious creatures is what morality is about. This is clearly something that depends on the state of the world. It thus forms part of the Fabric of Reality.

      Brett.
    • Elliot Temple
      Message 2 of 7, Dec 5, 2010
        On Dec 5, 2010, at 3:42 AM, brhallway wrote:

        > A subset of the theories which explain how best to improve the well being of humans on Earth is the "Taking Children Seriously" theory, which in its briefest articulation might be described as being about "non-coercion". http://www.takingchildrenseriously.com/
        >
        > At that website David Deutsch has written an article explaining the relationship between fallibility and TCS.
        > http://www.takingchildrenseriously.com/node/16
        >
        > David writes:
        >
        > "Experiment could not refute the theory of TCS, but argument and criticism might."
        >
        > I agree with all that David has written in his article but - and I think it's a point of fundamental significance - it's incorrect to claim that "experiment could not refute the theory of TCS."
        >
        > What is good in the moral sense - what is the right thing to do - is that which promotes well being.

        That is a vague statement. What is well being? What is the purpose of using this formulation instead of a more traditional one like, "Morality is about how to live a *good* life" and then defining "good"? Is Harris objecting to a "good" life in some way, or is his view equivalent?

        > Sam Harris' most recent book "The Moral Landscape" goes into great detail about how science can determine human values. http://www.amazon.com/Moral-Landscape-Science-Determine-Values/dp/1439171211
        >
        > Harris goes to great pains to demolish Hume's is-ought distinction, pointing out that we can't get "is" without "ought": if you want to know what is, then you ought to value good explanations, parsimony, reliable experimental tests, logical coherence and so forth. But just as significantly, Harris points out that values are simply a certain type of fact about the world and as such fall within the scope of science to answer. I clearly cannot summarise his entire book here, so permit me to elaborate a little before returning to Fabric of Reality matters.

        I do not agree with the is/ought distinction, but science cannot replace moral argument.

        >
        > Once you accept the premise that the only thing we need to consider in our moral judgements are the experiences of conscious creatures

        No way! We also need to consider the laws of physics, for example -- they are relevant to moral judgments. For example, the morality of adding some atoms to my friend's suitcase depends on whether they are a bomb or not, which is a matter of the laws of physics.

        And *experiences*? Is this intentionally subjectivist? Wanting something, and being the kind of person who will feel good if he gets it, makes it right?

        > and that what is worth maximising is the well being of those creatures then everything else quickly follows. Harris considers a thought experiment where you imagine a world which is "the worst possible suffering for everyone all the time." By definition this is the worst state of affairs physically possible.

        Sounds more like, by definition, the worst state of affairs *psychologically* possible. Wouldn't you agree?

        But as for the physical situation, it's vague. The moon might have super advanced philosophy and technology libraries just waiting for us, so the instant we get space travel we'll access this knowledge and fix everything. Or there might be benevolent aliens seconds from intervening and making everyone happy.

        > Now any state of affairs is clearly better than this.

        No (even accepting most of your premises), because it might be about to end, and we might compare to it another situation, almost as bad, but which will last much longer.

        There's also, to consider, the situation of everyone being happy but the multiverse moments from exploding or otherwise being destroyed.

        > Thus movements either away from this worst-possible state of suffering for everyone or towards it are movements which amount to more or less right or wrong answers to moral questions. Harris admits there may be many such solutions and ways to flourish - but the point is that there are scientific truths to be discovered here. Admitting that...

        That argument is incomplete. It doesn't show that science is the key to improving that situation. Suppose people are sad b/c their food cooks slow. Microwaves can make them happy. OK. Science is surely relevant b/c science can invent microwaves. But it's not science that told me people will like microwaves. Science is good for accomplishing many goals, but it does not tell us the complete picture about which goals are good to have.

        > Although we do not actually need to do the experiment to show that beating a child (or otherwise being coercive) is wrong - we could do such an experiment. Does beating a child improve their well being? Is their health - mental, emotional, physical, spiritual - improved by the beating? Do beaten children go out into the world more confident, happy and resilient people, or are they more likely to be withdrawn, angry and visit similar violence upon others? To take it another step, despite the infancy of neuroscience at this stage: does their brain - if we had the appropriate scanning technology - show objective signs of the correlates of pleasure, pain, reward, confidence, satisfaction, contentment, fear, shame, and so on? Would not all of these measures constitute an experimental falsification of the theory that "beating a child improves their well being"? After all, isn't improving the child the original intent of such "discipline"?

        You can only do an experiment to see if beating children improves their well being if:

        A) you specify beating children in certain types of environments (your experiment can't tell you about other types you don't try it out in)

        B) you pre-define what counts as benefit or harm to the child, so you can decide what is a good result or not. But this is one of the main things people disagree about.

        C) you'd better deal with the fact that beaten children often, 20 years later, report that it was good for them, that they liked it, and that it made them the awesome person they are today. That's hard for science b/c when you measure straightforward things like whether the person is happy (now), he does OK; but it's not so hard for moral *argument* to say that he's cruelly disregarding the preferences of a person (from the past) to excuse cruelty (from the past).

        >
        > Equally then, could experiment refute the theory of TCS?

        No because TCS does not have pre-defined outcomes about what is a good result. There's no way to measure success. TCS aims to help children *by the children's own standards*, not by the standards of any experiment.

        > It seems a similar battery of experiments can be imagined in just the same way. Does a TCS approach produce children with greater well being?

        No it does not, for *your* definition of well being, b/c that would be imposing your values on my child, when he should live by his own values. Clear?

        > Are they happier, better adjusted,

        But "better adjusted" is usually a euphemism for having a broken spirit and giving in to convention.

        Science can measure that if you define it precisely enough, but science cannot tell us if being "better adjusted" is a good thing or not.

        > more confident,

        But is confidence right for everyone? I think not.

        > creative and happy people...or are they not?

        Take even happiness. It may seem a bit obvious how great that is. But some people prefer to be calm, or perhaps mildly happy, rather than to have stronger emotions, even strong happiness. Some people think emotions are dangerous and don't want so much of them.

        Finding something science can measure, even happiness, then assuming that thing is moral, is begging the question of what is moral.

        > It seems to me that if they are not then we have a refutation. At the limit we can imagine not only studies where participants simply report the contents of their consciousness but also have their brain scanned as they undergo a TCS exchange about whether they can try MDMA this weekend with some friends compared to some other group where the approach is a "just say no" coercive one.

        This brings up another issue which is that it's very hard to judge someone from a brain scan. Even if we imagine hypothetical improvements in science so the brain scans get us complete information about the entire brain, still how do you work out the *meaning* of this information?

        Let's apply Popperian epistemology to the case. Don't you need to make guesses at the meaning, and improve them by criticism (conjectures & refutations) -- exactly as you can do without a brain scan? The role of the brain scan, it seems to me, is that you can use it to criticize any theories that it contradicts, but no more (evidence can help criticize but never confirm or tell us what is true directly).

        >
        > I think then that TCS is falsifiable in the usual sense of the word. One need only grant that what you are actually seeking is the greatest possible well-being of the child

        Still waiting on the definition of "well-being". But whatever it is I won't force it on my child.

        > as the outcome of putting some parenting theory into practice. If the theory does improve the well-being of the child then it is the best moral - and hence scientific - theory currently available. Unless, that is, another theory can be shown to be more capable of improving well being. In practice of course, no such experiments need to be done, as those alternatives can be rejected on other grounds that David alludes to.
        >
        > And is well-being too imprecise a term to be using here? Consider, as Sam Harris does, by way of example - the notion of physical health. It defies precise definition and is rather elastic to changes in medicine. Yet we can still talk meaningfully about being healthy or not. Likewise, our notion of what it means for a theory like TCS to improve the "well-being" can be "perpetually open to revision as we make progress in science." (Harris, p 12).
        >
        > So TCS does not need to be tested for its capacity to improve well being in the ways I suggest for us to embrace it - but I observe that because TCS is ultimately a theory about how best to achieve well being - and well being is as scientific a concept as "physical health" - we can say that TCS is a falsifiable, scientific theory. So too is the broader idea that maximising the well being of conscious creatures is what morality is about. This is clearly something that depends on the state of the world. It thus forms part of the Fabric of Reality.


        For Deutsch-compatible views on morality, you might be interested in reading Karl Popper (esp _The World of Parmenides_, ch 2, addendum 2), Ayn Rand (_Atlas Shrugged_, etc), and William Godwin (_Enquiry Concerning Political Justice_, etc).

        Also see here: http://www.curi.us/1169-morality




        One more thing. Consider this chart of traditions and their relevance to parenting:

        http://fallibleideas.com/parenting-and-tradition

        Suppose we wanted to measure these things with science. So we would take one, say Human Rights, and then we'd come up with some criteria for measuring it, and then we'd measure it. Right? The thing is, the part where we come up with empirical criteria for measuring the abstract thing -- that is not science but philosophy, and the entire project depends on doing that step correctly. Is that clear? The Harris project has the same sort of issue.

        -- Elliot Temple
        http://fallibleideas.com/
      • silky
        Message 3 of 7, Dec 5, 2010
          Whether or not I agree with the original comments, there are some
          flaws in your responses, which I'm highlighting below:

          On Mon, Dec 6, 2010 at 4:00 AM, Elliot Temple <curi@...> wrote:
          > > Once you accept the premise that the only thing we need to consider in our moral judgements are the experiences of conscious creatures
          >
          > No way! We also need to consider the laws of physics, for example -- they are relevant to moral judgments. For example, the morality of adding some
          > atoms to my friend's suitcase depends on whether they are a bomb or not, which is a matter of the laws of physics.

          Well, it doesn't matter if it's a bomb if it doesn't affect a creature
          you care about morally. If you agree that those are conscious
          creatures, the above statement seems accurate (if not perhaps a bit of
          a tautology - you need to consider what you need to consider).


          > And *experiences*? Is this intentionally subjectivist? Wanting something, and being the kind of person who will feel good if he gets it, makes it right?
          >
          > > and that what is worth maximising is the well being of those creatures then everything else quickly follows. Harris considers a thought experiment where
          > > you imagine a world which is "the worst possible suffering for everyone all the time." By definition this is the worst state of affairs physically possible.
          >
          > Sounds more like, by definition, the worst state of affairs *psychologically* possible. Wouldn't you agree?
          >
          > But as for the physical situation, it's vague. The moon might have super advanced philosophy and technology libraries just waiting for us, so the instant
          > we get space travel we'll access this knowledge and fix everything. Or there might be benevolent aliens seconds from intervening and making everyone
          > happy.
          >
          > > Now any state of affairs is clearly better than this.
          >
          > No (even accepting most of your premises), because it might be about to end, and we might compare to it another situation, almost as bad, but which will
          > last much longer.

          I think purely by definition if you've agreed that you are thinking of
          "the worst possible suffering", you must agree that any state of
          affairs is better than it, otherwise you are not truly thinking of the
          worst possible.

          Exactly what the worst possible situation is; or if it is even
          *possible* to have a worst possible state, is arguable (and probably,
          I don't think it is possible to decide on a "worst" without being able
          to make it slightly more uncomfortable: "everything that is is, +
          itself again").


          [...]

          > Science can measure that if you define it precisely enough, but science cannot tell us if being "better adjusted" is a good thing or not.

          Why not? What about that, specifically, escapes Scientific review?
          Does anything escape it? I don't see that it does.

          [...]


          > -- Elliot Temple
          > http://fallibleideas.com/

          --
          silky

          http://dnoondt.wordpress.com/

          "Every morning when I wake up, I experience an exquisite joy — the joy
          of being this signature."
        • Elliot Temple
          Message 4 of 7, Dec 5, 2010
            On Dec 5, 2010, at 10:16 PM, silky wrote:

            > Whether or not I agree with the original comments, there are some
            > flaws in your responses, which I'm highlighting below:
            >
            > On Mon, Dec 6, 2010 at 4:00 AM, Elliot Temple <curi@...> wrote:
            >>> Once you accept the premise that the only thing we need to consider in our moral judgements are the experiences of conscious creatures
            >>
            >> No way! We also need to consider the laws of physics, for example -- they are relevant to moral judgments. For example, the morality of adding some
            >> atoms to my friend's suitcase depends on whether they are a bomb or not, which is a matter of the laws of physics.
            >
            > Well, it doesn't matter if it's a bomb if it doesn't affect a creature
            > you care about morally. If you agree that those are conscious
            > creatures, the above statement seems accurate (if not perhaps a bit of
            > a tautology - you need to consider what you need to consider).

            So suppose we're trying to figure out, if we do X will there be good experiences for conscious creatures in the future? Will X be a good idea to do? (Not saying I agree to that, just trying to play along and see how it goes.)

            To figure that out, you have to know about the laws of physics, so you can know if X is going to blow up a building or build a skyscraper. See? You can't predict if people are going to feel good or bad about it unless you know if it's a bomb or not. So you have to consider lots of things, even if, when you finally evaluate some future scenario you've imagined X will lead to, *in that evaluation itself* all you care about is the experiences of consciousnesses.

            >
            >> And *experiences*? Is this intentionally subjectivist? Wanting something, and being the kind of person who will feel good if he gets it, makes it right?
            >>
            >>> and that what is worth maximising is the well being of those creatures then everything else quickly follows. Harris considers a thought experiment where
            >>> you imagine a world which is "the worst possible suffering for everyone all the time." By definition this is the worst state of affairs physically possible.
            >>
            >> Sounds more like, by definition, the worst state of affairs *psychologically* possible. Wouldn't you agree?
            >>
            >> But as for the physical situation, it's vague. The moon might have super advanced philosophy and technology libraries just waiting for us, so the instant
            >> we get space travel we'll access this knowledge and fix everything. Or there might be benevolent aliens seconds from intervening and making everyone
            >> happy.
            >>
            >>> Now any state of affairs is clearly better than this.
            >>
            >> No (even accepting most of your premises), because it might be about to end, and we might compare to it another situation, almost as bad, but which will
            >> last much longer.
            >
            > I think purely by definition if you've agreed that you are thinking of
            > "the worst possible suffering", you must agree that any state of
            > affairs is better than it, otherwise you are not truly thinking of the
            > worst possible.
            >
            > Exactly what the worst possible situation is; or if it is even
            > *possible* to have a worst possible state, is arguable (and probably,
            > I don't think it is possible to decide on a "worst" without being able
            > to make it slightly more uncomfortable: "everything that is is, +
            > itself again").

            Brain scans don't measure the future, they measure bad psychological states *now* (at best, if they work).

            Think of a graph. Happiness vs time. V shape. It's got a minimum at -5. You measure at that time, you get -5. Say that's the worst. But most of the time with the V it's pretty good.

            Now think of another graph. Straight line. Always -2. If you sum the area under the graph (to the X axis), you get much, much less over a long period of time, compared to the V, which has an average (mean) of 728934234 over a few billion years - way more than an average of -2. But the V one has the lower minimum. See?

            So that minimum on the V graph, it has the worst suffering for people. The straight line graph just has mild suffering forever. But the V gets better.

            That's what I'm saying. Take the worst thing -- as judged by how consciousnesses feel at some time -- and that doesn't tell you if the future will more than make up for it. It's short-sighted to focus on a minimum.

            Now maybe there is some philosophical reason to be wary of minima, but that isn't a matter of the experiences of conscious creatures so who cares, right?
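            The minimum-versus-total distinction above can be made concrete with a small sketch. (This is purely illustrative: the trajectories, values and time range are invented, not drawn from any measurement or from Harris's book.)

```python
# Two invented happiness-vs-time trajectories, illustrating that the worst
# single moment and the long-run total of well-being can disagree.

def v_shape(t, minimum=-5.0, slope=3.0):
    """V-shaped trajectory: dips to `minimum` at t = 0, recovers on both sides."""
    return minimum + slope * abs(t)

def flat(t, level=-2.0):
    """Flat trajectory: mild suffering forever."""
    return level

def worst_moment(traj, times):
    """The single worst instant on the trajectory."""
    return min(traj(t) for t in times)

def total_wellbeing(traj, times, dt):
    """Crude Riemann sum of the 'area under the happiness curve'."""
    return sum(traj(t) * dt for t in times)

dt = 0.01
times = [i * dt for i in range(-1000, 1001)]  # t in [-10, 10]

# The V has the worse single moment...
assert worst_moment(v_shape, times) < worst_moment(flat, times)
# ...but the flat line accumulates less well-being overall.
assert total_wellbeing(flat, times, dt) < total_wellbeing(v_shape, times, dt)
```

            Judging by `worst_moment` alone prefers the flat line; judging by `total_wellbeing` prefers the V - which is the sense in which focusing on a minimum is short-sighted.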

            > [...]
            >
            >> Science can measure that if you define it precisely enough, but science cannot tell us if being "better adjusted" is a good thing or not.
            >
            > Why not? What about that, specifically, escapes Scientific review?
            > Does anything escape it? I don't see that it does.

            Well, OK, you tell me: how do you measure whether a child being "better adjusted" is a good thing or a bad thing?

            -- Elliot Temple
            http://elliottemple.com/
          • brhallway@yahoo.com.au
            Message 5 of 7, Oct 7, 2011
              On the same topic (hope my last posting got through - it's been a while since I've posted at all!) - I should respond to this. Again, apologies for the almost one-year delay.

              --- In Fabric-of-Reality@yahoogroups.com, Elliot Temple <curi@...> wrote:
              >
              >
              > On Dec 5, 2010, at 10:16 PM, silky wrote:
              >
              <snip>
              > >> No (even accepting most of your premises), because it might be about to end, and we might compare to it another situation, almost as bad, but which will
              > >> last much longer.
              > >
              > > I think purely by definition if you've agreed that you are thinking of
              > > "the worst possible suffering", you must agree that any state of
              > > affairs is better than it, otherwise you are not truly thinking of the
              > > worst possible.
              > >
              > > Exactly what the worst possible situation is; or if it is even
              > > *possible* to have a worst possible state, is arguable (and probably,
              > > I don't think it is possible to decide on a "worst" without being able
              > > to make it slightly more uncomfortable: "everything that is is, +
              > > itself again").
              >
              > Brain scans don't measure the future, they measure bad psychological states *now* (at best, if they work).
              >
              > Think of a graph. Happiness vs time. V shape. It's got a minimum at -5. You measure at that time, you get -5. Say that's the worst. But most of the time with the V it's pretty good.
              >
              > Now think of another graph. Straight line. Always -2. If you sum the area under the graph (to the X axis), you get much, much less over a long period of time, compared to the V, which has an average (mean) of 728934234 over a few billion years - way more than an average of -2. But the V one has the lower minimum. See?

              I'm not sure what you're getting at here. You seem to have a deeper skepticism about whether happiness/well-being is relevant at all to any discussion of morality.

              >
              > So that minimum on the V graph, it has the worst suffering for people. The straight line graph just has mild suffering forever. But the V gets better.
              >
              > That's what I'm saying. Take the worst thing -- as judged by how consciousnesses feel at some time -- and that doesn't tell you if the future will more than make up for it. It's short-sighted to focus on a minimum.
              >
              > Now maybe there is some philosophical reason to be wary of minima, but that isn't a matter of the experiences of conscious creatures so who cares, right?

              Considering the worst possible misery for everyone is simply a thought experiment. Again, if you grant that it's bad then movements away from it are good. We can talk about the details of calculations - but I don't think that's useful unless you concede that, all else being equal, a happier world is ethically more desirable than a miserable one.

              >
              > > [...]
              > >
              > >> Science can measure that if you define it precisely enough, but science cannot tell us if being "better adjusted" is a good thing or not.
              > >
              > > Why not? What about that, specifically, escapes Scientific review?
              > > Does anything escape it? I don't see that it does.
              >
              > Well, OK, you tell me: how do you measure whether a child being "better adjusted" is a good thing or a bad thing?

              This concern - that because we cannot measure something it is not a scientific claim - is at odds with BoI, FoR and Sam Harris's work. Many things can't be measured but are nonetheless scientific claims. Harris's own favourite example: how many birds are in flight right now? The question is well posed and has an answer. But no one knows the answer, and it just changed anyway. If someone says the answer is exactly 20432 then this is a scientific claim. But we can reject it even though we don't have the right answer. We need only ask how this person knows.

              A better adjusted child is a good thing. We cannot measure such things in the lab easily - but, like the birds-in-flight example, if someone says that beating a child for learning to read is a good thing, they are making an ethical claim which is quite scientific. This is science considered as good explanations; it need not have anything to do with immediate access to an experimental test. One day we might have such a test: we might be able to scan the brains of children who learn to read and find that this increases their well-being, and that violence lowers it, along with the well-being of all involved.

              Can I ask a question: Is there anything wrong with asserting that ethics is the science of the well-being of conscious creatures?

              Brett.


              >
              > -- Elliot Temple
              > http://elliottemple.com/
              >
            • Elliot Temple
              Message 6 of 7 , Oct 7, 2011
                On Oct 7, 2011, at 1:47 AM, brhallway@... wrote:

                > On the same topic (hope my last posting got through - it's been a while since I've posted at all!) - I should respond to this. Again, apologies for the almost-1-years delay.

                No problem.

                >
                > --- In Fabric-of-Reality@yahoogroups.com, Elliot Temple <curi@...> wrote:
                >>
                >>
                >> On Dec 5, 2010, at 10:16 PM, silky wrote:
                >>
                > <snip>
                >>>> No (even accepting most of your premises), because it might be about to end, and we might compare to it another situation, almost as bad, but which will
                >>>> last much longer.
                >>>
                >>> I think purely by definition if you've agreed that you are thinking of
                >>> "the world possible suffering", you must agree that any state of
                >>> affairs is better than it, otherwise you are not truly thinking of the
                >>> worst possible.
                >>>
                >>> Exactly what the worst possible situation is; or if it is even
                >>> *possible* to have a worst possible state, is arguable (and probably,
                >>> I don't think it is possible to decide on a "worst" without being able
                >>> to make it slightly more uncomfortable: "everything that is is, +
                >>> itself again").
                >>
                >> Brain scans don't measure the future, they measure bad psychological states *now* (at best, if they work).
                >>
                >> Think of a graph. Happiness vs time. V shape. It's got a minimum at -5. You measure at that time, you get -5. Say that's the worst. But most of the time with the V it's pretty good.
                >>
                >> Now think of another graphic. Straight line. Always 2. If you sum the area under the graph (to the X axis), you get much much less over a long period of time, compared to the V which has an average (mean) of 728934234 over a few billion years which is way more than an average of 2. But the V one has the lower minimum. See?
                >
                > I'm not sure what you're getting at here. You seem to have a deeper skepticism about happiness/well-being as being relevant at all to any discussion of morality.

                What I was getting at, specifically, is that the "worst possible state" that happens (minimum for one instant) is quite different than the total amount of happiness or unhappiness.

                So one scenario might involve the worst possible state for a moment and then get much better quickly.

                A different one might never get that bad, but be mildly bad forever.

                The first one, despite having the worst possible state (temporarily) could be better overall.
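                A quick numerical sketch of the two kinds of trajectory (toy numbers invented purely for illustration, not anyone's actual data):

```python
# Two hypothetical happiness-over-time trajectories (arbitrary units).
# v_shape dips to the worst single moment but recovers strongly;
# flat never gets that bad, but is mildly bad forever.
v_shape = [5, 0, -5, 0, 5, 10, 15]
flat = [-2] * 7

print(min(v_shape), min(flat))  # -5 -2: the V has the worse single moment
print(sum(v_shape), sum(flat))  # 30 -14: yet the V is far better in total
```

                A criterion that looks only at the minimum and one that looks only at the total rank these two scenarios oppositely, which is exactly the ambiguity at issue.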


                >> So that minimum on the V graph, it has the worst suffering for people. The straight line graph just has mild suffering forever. But the V gets better.
                >>
                >> That's what I'm saying. Take the worst thing -- as judged by how consciousnesses feel at some time -- and that don't tell you if the future will more than make up for it. It's short sighted to focus on a minimum.
                >>
                >> Now maybe there is some philosophical reason to be wary of minima, but that isn't a matter of the experiences of conscious creatures so who cares, right?
                >
                > Considering the worst possible misery for everyone is simply a thought experiment. Again, if you grant that it's bad then movements away from it are good. We can talk about the details of calculations - but I don't think that's useful unless you concede that, all else being equal, a happier world is ethically more desirable than a miserable one.

                I'm willing to grant that as a general principle which I generally agree with, but not as a foundation from which to derive conclusions.

                And I'm skeptical that it's very useful because it cannot tell us whether it's better to have a lower worse point or a better average. It can't tell us how to compare issues like that.

                I noticed DD made the same kind of point as me today:

                > The second is that even if we grant that premise, it gives us no clue how we are going to judge

                So he's saying it's not too useful, even if we accept it, because it doesn't help us judge lots of comparisons. Maybe his version of the explanation will be helpful too.


                >>> [...]
                >>>
                >>>> Science can measure that if you define it precisely enough, but science cannot tell us if being "better adjusted" is a good thing or not.
                >>>
                >>> Why not? What about that, specifically, escapes Scientific review?
                >>> Does anything escape it? I don't see that it does.
                >>
                >> Well, OK, you tell me: how do you measure whether a child being "better adjusted" is a good thing or a bad thing?
                >
                > This concern that because we cannot measure something it is not a scientific claim is at odds with both BoI, FoR and Sam Harris' work.

                No, the Popperian *definition* of science -- advocated in FoR and BoI -- is that science is the stuff you can *empirically test* (i.e. measurements are relevant). Anything you can't test (measure), we regard as not being science (for better or worse).

                > Many things can't be measured but are scientific claims. Harris own favourite example is: How many birds are in flight right now? The question is well posed and has an answer. But no one knows the answer and it just changed anyway. If someone says the answer is exactly 20432 then this is a scientific claim. But we can reject it even though we don't have the right answer. We need only ask how this person knows.

                That is measurable. Even using current technology (take simultaneous high res photos from several angles and count, or maybe make a 3d computer model and put birds into a simulated sky then check if the model matches the pictures from all the angles, and keep moving/adding birds until it does).

                Even if it wasn't measurable today, the issue we're concerned with is measurable *in principle*. We don't regard what is scientific as changing as new technology is invented, but being an in principle matter.


                > A better adjusted child is a good thing.

                Why?

                Personally I'm not really a big fan of conformity and I'm skeptical how good it is.

                > We cannot measure such things in the lab easily

                How could you measure *in principle* what is good or bad (morally)?

                > - but like the birds in flight example - if someone says that beating a child for learning to read is a good thing - they are making an ethical claim which is quite scientific. This is science considered as good explanations. It need not have anything to do with immediate access to an experimental test. One day we might have such a test: we might be able to scan the brains of children who learn to read and gather that this increases their well being and that violence lowers it and the well being of all involved.
                >
                > Can I ask a question: Is there anything wrong with asserting that ethics is the science of the well-being of conscious creatures?

                Because you cannot measure the answers to philosophical questions (including all ethical and epistemological issues) and any claims that you can are going to end up advocating false stuff.

                -- Elliot Temple
                http://beginningofinfinity.com/interview
              • brhalluk@hotmail.com
                Message 7 of 7 , Oct 8, 2011
                  Hi Elliot,

                  First, thanks for taking the time to reply.

                  I do tend to agree with you about what you say with respect to that single minimum. I also agree it cannot be measured. The claim that "the worst possible misery for everyone is bad" is made simply to get the discussion about the objectivity of ethics going. If you grant that 'misery' is a state of the human brain, then it seems to me you agree that this becomes something science is not silent upon. Science is relevant. I'm not saying that philosophy is irrelevant. It's worth recognising that most people believe both science and philosophy are irrelevant here: most think that culture and religion are the most important considerations when it comes to deciding between right and wrong.

                  I think in the end it does not matter what we call `science' or `philosophy', of course. Sam Harris just talks about `rationality' generally. I think there's nothing to object to in the idea that, whatever sphere we are talking about, it's simply good explanations that we are after.

                  The project here is, at root, a challenge to the idea that one dogma or another can somehow provide us with the best way to make ethical progress. I think we can agree that religion does not really provide a good framework. It is also a challenge to the notion that ethical relativism is somehow preferable. Again, I think we agree that relativism does not permit an open-ended search for objectively better explanations - ethical or otherwise.

                  So all this is is a basis, or framework, or ethical stance (whatever term is least objectionable to you) that recognises the importance - the centrality, indeed - of the experience of conscious creatures in ethics. As Sam Harris has pointed out, even religious people tacitly admit this much because of their conception of heaven and hell: they're just most concerned about the conscious experience of creatures after death.

                  Let's consider some specific ethical questions and see how this might work. Sam Harris uses the example of forced veiling in Islamic societies (say, under the Taliban in Afghanistan). Is this right or wrong? Under Islam, it's right. Under certain `left leaning' relativist ideologies, we cannot pass judgement on another society. Under other religious ideologies it's wrong - but wrong because of some dogma. All Sam is saying is that we need to consider the well-being of those involved. It's certainly bad for all the girls and women involved, and on closer inspection it's bad for everyone in society. And we do not need to run tests to establish this - but (and here is one of his important points) it's not unscientific to say that it's wrong. This is because well-being depends on brains operating in particular states.

                  Why should we want to foster well being? Because that which does, in general, tends to move us away from maximum misery for all. Why is this desirable? "Our spade is turned by the shovel of a stupid question".

                  Consider something else: is it good/ethically correct/right to eat meat? It seems to me that this is a question many consider to be worth pondering. I think with Harris' morality - it has a nuanced and rational answer. On what would you base your answer?

                  I contend it depends upon the degree to which the animal can suffer and this depends on the complexity of the animal. We are right to be mostly concerned about our fellow `universal explainers' as they have the potential for the widest range of suffering and well-being. We should not be farming human beings and eating them.

                  We need to be less concerned - but still concerned nonetheless - about chimpanzees. They might not be universal explainers, but we have ethical obligations towards them because they have the capacity to suffer. We know this because we know something about biology and nervous systems and the potential to have experiences.

                  Cows? They have the potential to suffer. They are, indeed, stupid animals, but stupidity is not the main factor here; it's the potential to experience. A cow - on our best theory of what it means to have a subjectivity - has the potential to experience suffering, and we should aim to minimise suffering. This might not rule out eating the cow, but it does begin to give a flavour of how far down the evolutionary chain our ethical concerns run and how we might go about eating cows. It may well be that cows don't mentally model the future very well. They might not have "hopes". They might not have much of a rich internal subjectivity at all. So we might not be *that* concerned about what they might feel about being eaten; they probably wouldn't appreciate the concept at all. But it's pretty likely they can experience suffering, and so if we are to farm them we should do so with concern for their well-being and slaughter them painlessly.

                  We are right to be less concerned about fish and insects.

                  We are right to be unconcerned about rocks. We do not think there is anything it is like to be a rock.

                  Until we have a more complete science of the mind, we have to judge these issues scientifically: we need to consider the evidence. So when it comes to eating fish we have to wonder: what is it like to be a fish? Does a fish experience pain to any great extent when caught? We know that the ability to experience a sensation depends upon having a nervous system, and we know that richness of experience has something to do with the complexity of nervous systems for life on Earth. I think I've made this point enough for now. But essentially, it seems to me that Harris' idea has practical implications, and I've just given you a couple of examples.

                  I've made a few more replies in the body of your response below.

                  --- In Fabric-of-Reality@yahoogroups.com, Elliot Temple <curi@...> wrote:
                  >
                  >
                  > On Oct 7, 2011, at 1:47 AM, brhallway@... wrote:
                  >
                  <snip>

                  > > I'm not sure what you're getting at here. You seem to have a deeper skepticism about happiness/well-being as being relevant at all to any discussion of morality.
                  >
                  > What I was getting at, specifically, is that the "worst possible state" that happens (minimum for one instant) is quite different than the total amount of happiness or unhappiness.
                  >
                  > So one scenario might involve the worst possible state for a moment and then get much better quickly.
                  >
                  > A different one might never get that bad, but be mildly bad forever.
                  >
                  > The first one, despite having the worst possible state (temporarily) could be better overall.

                  Yes, agreed. But this is a concern about the ability to measure things imprecisely - or perhaps too precisely, as the case may be. The Moral Landscape is a device for picturing the space of possibilities: peaks are the heights of human happiness and valleys are the deepest depths of misery. There can be multiple, equally high peaks. There are also going to be multiple, equally bad valleys (and many more of these). Which is worse in your scenario? They're both bad, and we both agree they should be avoided. Why? Because happiness is preferable to misery. It seems you are tacitly agreeing, since you agree there is such a thing as bad. It's an abstract concept, sure. But it's not merely that.
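                  The peaks-and-valleys picture can be sketched with a toy grid of well-being values (numbers invented purely for illustration):

```python
# A toy "moral landscape": each cell holds a made-up well-being value.
# The point is only that a landscape can contain several equally high peaks.
landscape = [
    [1, 3, 1, 0],
    [2, 9, 2, 1],
    [1, 2, 9, 3],
    [0, 1, 3, 1],
]
peak = max(v for row in landscape for v in row)
peaks = [(i, j) for i, row in enumerate(landscape)
         for j, v in enumerate(row) if v == peak]
print(peak, peaks)  # 9 [(1, 1), (2, 2)]: two equally high peaks
```

                  Nothing here settles which peak to aim for; it only shows that "multiple equally good maxima" is a perfectly coherent shape for the landscape to have.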

                  >
                  >
                  > >> So that minimum on the V graph, it has the worst suffering for people. The straight line graph just has mild suffering forever. But the V gets better.
                  > >>
                  > >> That's what I'm saying. Take the worst thing -- as judged by how consciousnesses feel at some time -- and that don't tell you if the future will more than make up for it. It's short sighted to focus on a minimum.
                  > >>
                  > >> Now maybe there is some philosophical reason to be wary of minima, but that isn't a matter of the experiences of conscious creatures so who cares, right?
                  > >
                  > > Considering the worst possible misery for everyone is simply a thought experiment. Again, if you grant that it's bad then movements away from it are good. We can talk about the details of calculations - but I don't think that's useful unless you concede that, all else being equal, a happier world is ethically more desirable than a miserable one.
                  >
                  > I'm willing to grant that as a general principle which I generally agree with, but not as a foundation from which to derive conclusions.
                  >
                  > And I'm skeptical that it's very useful because it cannot tell us whether it's better to have a lower worse point or a better average. It can't tell us how to compare issues like that.

                  I believe Harris considers this. He believes it's possible for us to have to pass down into a valley before we can begin to climb another peak, and he thinks there are probably times in our own history like this. I think this answers your concern. I'm not sure your dichotomy here is a valid one because - with infinite moral progress - it's going to be hard to assume some sort of eternal average. Instead we just continue to increase well-being without limit. And we may have to pass through a `lower worse point' (to use your terminology) to get there. We might *have* to fight a war (for example) to ensure that we can ultimately live in a free and open society.

                  >
                  > I noticed DD made the same kind of point as me today:
                  >
                  > > The second is that even if we grant that premise, it gives us no clue how we are going to judge
                  >
                  > So he's saying it's not too useful, even if we accept it, because it doesn't help us judge lots of comparisons. Maybe his version of the explanation will be helpful too.

                  In and of itself, the "worst possible misery for everyone" argument isn't meant to provide detailed guidance about how to make ethical decisions. It's meant to provide a basis for the position that there is an objective foundation for morality. Once you grant that the worst possible misery for everyone is bad, the landscape opens up before you, even if you don't currently have all the answers in hand or know how to get them.

                  It's important for us rationalists to at least agree, in my opinion, that most people really don't agree with us on this point. Sam Harris has been at pains to point out that many people - in academia - seem incapable of conceding even that much: that misery is `bad', or that the word `bad' can mean anything at all. We should be able to concede these people are wrong. We should also agree that the reason something is bad is not because some authority (usually God or some religious leader) says it's bad.

                  I get the sense that you and David agree with that last part. I also get the sense that DD might believe that good and bad are abstract ideas, like values generally. Do you think this is correct? I don't know that I agree, because I agree with Sam that values are simply a certain kind of fact: facts about states of the human mind. A thing is good or bad because of the effect it has upon the states of brains.

                  >
                  >
                  > >>> [...]
                  > >>>
                  > >>>> Science can measure that if you define it precisely enough, but science cannot tell us if being "better adjusted" is a good thing or not.
                  > >>>
                  > >>> Why not? What about that, specifically, escapes Scientific review?
                  > >>> Does anything escape it? I don't see that it does.
                  > >>
                  > >> Well, OK, you tell me: how do you measure whether a child being "better adjusted" is a good thing or a bad thing?
                  > >
                  > > This concern that because we cannot measure something it is not a scientific claim is at odds with both BoI, FoR and Sam Harris' work.
                  >
                  > No, the Popperian *definition* of science -- advocated in FoR and BoI -- is science is stuff that you can *empirically test* (i.e. measurements are relevant). Anything you can't test (measure) we regard as not being science (for better or worse).

                  I think things are a little more nuanced than this: there are answers in practice and answers in principle. This is perhaps where we might find some agreement. The bird example below is what I am talking about. I think science depends upon something being testable *in principle* - but not all scientific claims are testable *in practice*.

                  >
                  > > Many things can't be measured but are scientific claims. Harris own favourite example is: How many birds are in flight right now? The question is well posed and has an answer. But no one knows the answer and it just changed anyway. If someone says the answer is exactly 20432 then this is a scientific claim. But we can reject it even though we don't have the right answer. We need only ask how this person knows.
                  >
                  > That is measurable. Even using current technology (take simultaneous high res photos from several angles and count, or maybe make a 3d computer model and put birds into a simulated sky then check if the model matches the pictures from all the angles, and keep moving/adding birds until it does).

                  It's measurable *in principle* - just like states of the brain. There is, however, no possible way to answer this question *in practice*. In principle, you are correct. Of course we can imagine a world filled with instrumentation - but we do not live in that world. Using current technology we could *estimate* - but do you really mean to say we could find the *exact* number *now*? No - we can't. *Now* is an instant in time. I mean *now*, at 7:47pm October 8th, 2011, Sydney - how many birds are in flight above the surface of the Earth? By the time you read this email, *now* is over. This is what I mean about it being `untestable' in practice - but still scientific. In principle, of course, it is testable. And if I claim that the number is 'exactly 20167', you can *know* I am wrong - and the claim that I am wrong would be a valid scientific claim *without doing the test*. Just as "It's wrong to beat an innocent child" is a scientific claim without us having access to an experiment *right now*. In principle we could scan the brains of all involved - but we do not need to, just as we do not need to monitor all birds in flight to dismiss certain answers as wrong.

                  If you are willing to grant the existence of a world where there are high resolution cameras up to the task of actually answering this question exactly, then surely you are willing to grant a world where neuroimaging technology could scan brains with sufficient fidelity to determine which person is happier, and by how much. We can't - and perhaps never will be able to - but this doesn't mean that there is not an `in principle' answer to questions about what creates more well-being and which person is suffering more than another. This makes ethical questions a class of scientific questions. In principle.

                  >
                  > Even if it wasn't measurable today, the issue we're concerned with is measurable *in principle*. We don't regard what is scientific as changing as new technology is invented, but being an in principle matter.
                  >
                  >
                  > > A better adjusted child is a good thing.
                  >
                  > Why?
                  >
                  > Personally I'm not really a big fan of conformity and I'm skeptical how good it is.

                  I agree, but I don't see the relevance. You brought up `better adjusted'. I think that is far more poorly defined than `well-being'. The well-being of children is more important than whatever might be meant by some sort of better-adjusted child.

                  >
                  > > We cannot measure such things in the lab easily
                  >
                  > How could you measure *in principle* what is good or bad (morally)?

                  With a brain scanner. We don't know enough yet about brains to do this. But we do know that certain neurotransmitters and regions of the brain are more active in happy people than in those suffering. Once you grant that there is a causal connection between states of the mind and states of the brain like this, and you also admit - as I keep saying - that the worst possible misery for all is bad, then brain scans *in principle* provide a test for what is good (higher well-being) and what is bad (misery). Once more, we don't actually need to do such tests to admit that some ethical questions already have answers: we do not need to scan brains to know that torturing innocent children for fun is wrong. Torturing causes suffering, and suffering is bad. In principle we could scan the brains of all involved and observe the regions of the brain associated with suffering being active.

                  We should also note that anyone who did derive 'fun' from such a thing would be wrong to do so. They would be a psychopath. We know enough about neuroscience and ethics to know that psychopaths who enjoy torture are not useful for maximising well-being in an open society. In principle one day we might be able to - but wouldn't need to - scan the brains of all involved to get a precise quantitative answer to just how bad it is to torture an innocent child.

                  >
                  > > - but like the birds in flight example - if someone says that beating a child for learning to read is a good thing - they are making an ethical claim which is quite scientific. This is science considered as good explanations. It need not have anything to do with immediate access to an experimental test. One day we might have such a test: we might be able to scan the brains of children who learn to read and gather that this increases their well being and that violence lowers it and the well being of all involved.
                  > >
                  > > Can I ask a question: Is there anything wrong with asserting that ethics is the science of the well-being of conscious creatures?
                  >
                  > Because you cannot measure the answers to philosophical questions (including all ethical and epistemological issues) and any claims that you can are going to end up advocating false stuff.


                  I disagree. But I think that's good. I think Sam Harris' view, as advocated in The Moral Landscape, stands up to criticism of the sort you have provided. This seems to me to be a true instance of the `law of small differences' - making a big deal of the smallest philosophical issue. We would probably both agree on each and every specific instance of which decision to take given some moral dilemma. We would probably even agree on why it's the right thing to do. We just wouldn't agree upon the extent to which our decision was `philosophical' as opposed to `scientific'. And I think in the end that would turn out to be unimportant. But I could be convinced otherwise.

                  Thanks again for your reply,

                  Brett Hall.

                  >
                  > -- Elliot Temple
                  > http://beginningofinfinity.com/interview
                  >