Cost-effectiveness

  • Holden Karnofsky
    Message 1 of 8, Jun 14, 2010

      Hello all, I'd like your thoughts on how important the following issue is to you.

      GiveWell has consistently taken the position that cost-effectiveness estimates are "too rough to take literally."  We therefore use them in a very non-literal way.  Specifically, any organization that comes in under $1000/death averted is considered by us to be "highly cost-effective" and we don't distinguish between them (instead we rate/rank organizations on "confidence in their effectiveness" factors).  By contrast, we do put weight on observations like "ART is several times as costly as TB control," where we feel the estimates are directly comparable and we have more confidence in the source of the large difference between them.

      We have never made the effort to fully spell out the reasons we feel this approach is appropriate.  When we stick to language like "This is too rough to be useful," it probably sounds to some people (well, it definitely sounds to at least one person) as though we don't understand basic concepts like "expected value."

      I believe we could mount a strong and handwaving-free defense of our approach, but that it would be quite a bit of work.

      Currently, I have the sense that only 1-2 of our current followers disagree with us on (and care about) this issue.  However, I'd like to check that.  So, if the way we deal with cost-effectiveness bothers you (specifically, if you feel that we don't take cost-effectiveness estimates literally enough, and that we should for example be willing to let high relative theoretical cost-effectiveness outweigh serious questions about effectiveness), please let me know.
    • Holden Karnofsky
      Message 2 of 8, Jun 15, 2010
        Just a quick clarification on this: I didn't mean to ask about "cost-effectiveness estimates in general," but rather, specifically about the global-health-related estimates that we use, whose issues we discussed recently at http://blog.givewell.org/2010/03/19/cost-effectiveness-estimates-inside-the-sausage-factory/


      • rnoble@sas.upenn.edu
        Message 3 of 8, Jun 16, 2010
          I haven't followed your approach to cost-effectiveness closely lately,
          but I have gotten an uneasy feeling that you are quickly dismissive of
          attempts to determine cost-effectiveness mathematically. This is a
          very rough impression, but I think others who think as I do may get
          the same impression.

          For example, in the post on clean water from yesterday, you state that
          your own DALY calculations differ from those of the authors. From
          your wording, I get the impression that you think there's a good
          chance the authors made some basic math error in their calculation,
          and that therefore no one should take their estimate too seriously. I
          glanced at the paper briefly. Two things I notice are that the
          authors have many pages of explanation of complex mathematical
          calculations, and that their credentials appear to be pretty
          solid. I think it far more likely that the basis for the calculation
          is more complicated than anything you could do in a simple
          spreadsheet.

          Another example is that a recent blog post is entitled "Futility of
          standardized metrics." I have only glanced at it but the title itself
          suggests a rather extreme view.

          Finally, I've noticed several times that you put terms from the
          literature on DALYs and standardized metrics in quotation marks; you
          even do so with expected value below. To me that comes off as
          somewhat disparaging of attempts to deal with cost-effectiveness
          mathematically.

          In reality I know you understand the concepts better than this, but I
          can see how someone could get the impression that you don't. I think
          this could be an impediment to being taken seriously by some. And
          these might be people it would be really good to be taken seriously
          by, in terms of what they might ultimately say about GiveWell.





        • Natalie Stone
            Message 4 of 8, Jun 17, 2010
            Ron,

            As I wrote Tuesday's blog post about spring protection for clean water, I would like to take a moment to respond to your comments. While I never intended to imply that the authors had made a mathematical mistake, I can see how the wording came across that way. My intention was only to say that I was not able to follow the steps they took to arrive at their estimate of the cost per DALY averted by spring protection.

            Upon going back to the study, however, I realized that I had made an error. It was in fact possible to follow their steps, and it became clear that the assumptions that the authors made were more specific to the intervention, and therefore more reasonable, than the assumptions I made in my own estimation. I have edited the blog post to reflect my error: http://blog.givewell.org/2010/06/15/new-evidence-that-cleaner-water-less-diarrhea/.

            Natalie




          • Holden Karnofsky
              Message 5 of 8, Jun 18, 2010
              Just wanted to add that we do take cost-effectiveness estimates seriously and consider them a major part of how we choose between causes and between charities.  None of the things Ron pointed out had occurred to me as potentially giving a different impression; I appreciate the heads up.




            • Holden Karnofsky
                Message 6 of 8, Jun 24, 2010
                Below is an exchange I had with Nick Beckstead on the subject of cost-effectiveness estimates (which I requested feedback on previously), forwarded with permission.

                (Read from the bottom if interested)

                ---------- Forwarded message ----------

                Hi Nick, thanks for the thoughts.  I think these are good questions, and would like to forward this exchange to the list with your permission.

                Re: the $1000 threshold.  It would take serious effort to flesh out and defend our full picture of what differences are and aren't meaningful.  The short answer is that there are a substantial number of interventions that are (1) under this threshold and (2) "best in class" in the sense that there's no highly comparable intervention that seems to perform better.  The differences between estimates for these "best in class" interventions seem highly fragile to us (there are sometimes even conflicting estimates given on different pages of the DCP report that are off by a factor of several, with no clear explanation).  However, for most of the things we've seen that fail the threshold, there are fairly concrete explanations for why they are less cost-effective than the "best-in-class" interventions (for example, the higher cost-effectiveness of TB treatment relative to ART can clearly be seen in differences in how much the drugs cost and in how long they need to be taken).

                Re: weighing confidence in the organization vs. cost-effectiveness of the estimate.  I think a lot of this comes down to intuitions, especially re: the proper prior for a charity's efficacy.

                Our prior is pretty low for a variety of reasons, including:
                • We just think the whole endeavor of helping people in the developing world seems inherently challenging, especially for organizations and people that we don't believe have functioning/healthy incentives.  
                • When we look at the history of aid, we don't see anything to undermine this view; there are some striking successes, but no macro picture that suggests aid is usually or even often effective.  (To be clear, we also don't think the macro picture suggests aid is *ineffective* - it's mostly just unclear.)
                • And when we investigate the very best charities (in terms of giving us confidence that their activities work) we can find, we end up constantly coming up with new things to be worried about, not all of which are satisfyingly addressed even by these outstanding and highly transparent organizations.  From 10,000 feet these organizations' activities seem at least as simple and likely to succeed as any others'.  We keep this observation in mind when thinking about other organizations that also seem to have relatively straightforward activities, but haven't shared information that would even let us begin to think about potential problems.
                There are a couple of other reasons that we generally put "confidence that it works" at the center of our rankings, rather than formally multiplying P(it works) by (cost-effectiveness if it does work):
                • Incentives.  Thought experiment: imagine that organizations A, B, and C are all health organizations facing similar challenges, but that C's program has 3x the theoretical cost-effectiveness of A's and B's.  Imagine that A discloses substantive information showing that its programs largely work; B discloses substantive information showing that its programs largely fail; C discloses nothing.  One might find it appropriate to expect C to perform about as well as the average of A and B (I wouldn't, since I think willingness to disclose is correlated with quality) ... but even if so, I think it would be a mistake to fund C in this case.  Doing so would create incentives for poor organizations to disclose less and therefore benefit from the fact that donors base their estimates on strong organizations.  As an evaluator, we are particularly mindful of what behavior we are rewarding, but I think a donor ought to pay some attention to this as well.
                • The question of how to weigh confidence in activities vs. theoretical cost-effectiveness of activities seems somewhat moot to us, since our top charities perform well on both.  Our top-rated organizations have very high cost-effectiveness in the scheme of things, and we don't think there's good reason to expect higher cost-effectiveness from anyone else just based on what program they're running.
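                The formal weighting described above - multiplying P(it works) by (cost-effectiveness if it does work) - can be applied numerically to the A/B/C thought experiment. Below is a minimal sketch; all probabilities and dollar figures are purely hypothetical illustrations, not GiveWell estimates:

```python
# Hypothetical sketch of the expected-value weighting discussed above.
# All numbers are made up for illustration; none are GiveWell estimates.

def expected_deaths_averted_per_1000(p_works: float, cost_per_death: float) -> float:
    """Multiply P(it works) by (deaths averted per $1000 if it does work)."""
    return p_works * (1000.0 / cost_per_death)

# Org A: discloses substantive information showing its programs largely work.
org_a = expected_deaths_averted_per_1000(p_works=0.9, cost_per_death=900)

# Org C: 3x the theoretical cost-effectiveness of A, but discloses nothing,
# so it gets a much lower (and much murkier) probability of working.
org_c = expected_deaths_averted_per_1000(p_works=0.5, cost_per_death=300)

print(f"A: {org_a:.2f}, C: {org_c:.2f}")  # A: 1.00, C: 1.67
```

                On these made-up numbers the naive calculation favors C (1.67 vs. 1.00 expected deaths averted per $1000), which is precisely the case where the incentive argument above says funding C would still be a mistake.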
                Hopefully this helps clarify how we are thinking, even if I haven't given the specifics/sources that would let you evaluate our thinking (as I said, that would be a significant enough undertaking that I want a good sense of how important it is).

                On Tue, Jun 15, 2010 at 8:07 AM, Nick Beckstead wrote:
                Hi Holden, (cc: Elie)

                1.  I would like to know more about why you do not attempt to distinguish between organizations whose cost-effectiveness falls below the $1000/death range.  Prima facie, it sounds like a strange policy.  The only justification I can think of is of the form: we just can't tell the difference between organizations that work at $250/death and $500/death with any confidence to speak of.  This answer is hard to understand from the outside, since you seem to be able to distinguish between organizations that are at $750/death and those over $1500/death.

                2.  You write: "So, if the way we deal with cost-effectiveness bothers you (specifically, if you feel that we don't take cost-effectiveness estimates literally enough, and that we should for example be willing to let high relative theoretical cost-effectiveness outweigh serious questions about effectiveness), please let me know." I'm worried about this (but I don't really know what to think).  I tend to agree that many of the theoretical cost-effectiveness estimates are too optimistic, but I wouldn't want to overcorrect for this.  Some of the GWWC folks, Toby Ord in particular, make it sound like you guys go too far in "assuming the worst" when it comes to unknowns in a charity.  (I think Toby said this about SCI in particular.)  I need to understand the situation more before I agree or disagree, but I'd like to know more about your policy here.

                If I were in your shoes, I would work with a risk-over-uncertainty premium and a good bit of risk aversion, since that is important for your long-term credibility.  But as an individual donor, I am less concerned about such things.  I just want to give where it maximizes expected value, so I might be willing to tolerate many unanswered questions with murky probabilities, provided the expected value calculation works out right.

                Not sure if you wanted me to post this to the listserv, or just send to you directly.

                Best,

                Nick


                On Mon, Jun 14, 2010 at 10:41 PM, Holden Karnofsky <Holden@...> wrote:
                 

                Hello all, I'd like your thoughts on how important the following issue is to you.

                GiveWell has consistently taken the position that cost-effectiveness estimates are "too rough to take literally."  We therefore use them in a very non-literal way.  Specifically, any organization that comes in under $1000/death averted is considered by us to be "highly cost-effective" and we don't distinguish between them (instead we rate/rank organizations on "confidence in their effectiveness" factors).  By contrast, we do put weight on observations like "ART is several times as costly as TB control," where we feel the estimates are directly comparable and we have more confidence in the source of the large difference between them.

                We have never taken the effort to fully spell out the reasons we feel this approach is appropriate.  When we stick to language like "This is too rough to be useful," it probably sounds to some people (well, it definitely sounds to at least one person) that we don't understand basic concepts like "expected value."

                I believe we could mount a strong and handwaving-free defense of our approach, but that it would be quite a bit of work.

                Currently, I have the sense that only 1-2 of our current followers disagree with us on (and care about) this issue.  However, I'd like to check that.  So, if the way we deal with cost-effectiveness bothers you (specifically, if you feel that we don't take cost-effectiveness estimates literally enough, and that we should for example be willing to let high relative theoretical cost-effectiveness outweigh serious questions about effectiveness), please let me know.



                --
                Nick Beckstead
                Ph.D. Student
                Department of Philosophy
                Rutgers University


              • Wai-Kwong Sam Lee
                  Message 7 of 8, Jun 26, 2010
                  Here are my two cents:

                  1. I am sympathetic to GiveWell's position on cost-effectiveness - my intuition is that not only is an accurate cost estimate realistically difficult to produce, but the estimate might also not apply as the program gets scaled up or replicated.

                  2. Having said that, if a cost estimate comes in an order of magnitude below the $1000/death threshold, I do think it would warrant being called out.

                  3. I do have a side question, though: what is the policy toward interventions that mainly avert DALYs (and less so prevent deaths)? E.g., my impression of the school-based deworming program profiled by the Poverty Action Lab is that it mostly averts DALYs rather than preventing deaths (intestinal worms are not as fatal as diseases like malaria or TB, as far as I know).


                  - sam
                  "Joy comes not to him who seeks it for himself, but to him who seeks it for other people."


                  On Thu, Jun 24, 2010 at 8:39 PM, Holden Karnofsky <Holden@...> wrote:
                   

                  Below is an exchange I had with Nick Beckstead on the subject of cost-effectiveness estimates (which I requested feedback on previously), forwarded with permission.


                  (Read from the bottom if interested)

                  ---------- Forwarded message ----------

                  Hi Nick, thanks for the thoughts.  I think these are good questions, and would like to forward this exchange to the list with your permission.

                  Re: the $1000 threshold.  It would take serious effort to flesh out and defend our full picture of what differences are and aren't meaningful.  The short answer is that there are a substantial number of interventions that are 1. Under this threshold and 2. "Best in class" in the sense that there's no highly comparable intervention that seems to perform better.  The differences between estimates for these "best in class" interventions seem highly fragile to us (there are sometimes even conflicting estimates given on different pages of the DCP report that are off by a factor of several, with no clear explanation).  However, for most of the things we've seen that fail the threshold, there are fairly concrete explanations for why they are less cost-effective than the "best-in-class" interventions (for example, the higher cost-effectiveness of TB treatment relative to ART can clearly be seen in differences in how much the drugs cost and in how long they need to be taken).

                  Re: weighing confidence in the organization vs. cost-effectiveness of the estimate.  I think a lot of this comes down to intuitions, especially re: the proper prior for a charity's efficacy.

                  Our prior is pretty low for a variety of reasons, including:
                  • We just think the whole endeavor of helping people in the developing world seems inherently challenging, especially for organizations and people that we don't believe have functioning/healthy incentives.  
                  • When we look at the history of aid, we don't see anything to undermine this view; there are some striking successes, but no macro picture that suggests aid is usually or even often effective.  (To be clear, we also don't think the macro picture suggests aid is *ineffective* - it's mostly just unclear.)
                  • And when we investigate the very best charities (in terms of giving us confidence that their activities work) we can find, we end up constantly coming up with new things to be worried about, not all of which are satisfactorily addressed even by these outstanding and highly transparent organizations.  From 10,000 feet these organizations' activities seem at least as simple and likely to succeed as any others'.  We keep this observation in mind when thinking about other organizations that also seem to have relatively straightforward activities, but haven't shared information that would even let us begin to think about potential problems.
                  There are a couple of other reasons that we generally put "confidence that it works" at the center of our rankings, rather than formally multiplying P(it works) by (cost-effectiveness if it does work):
                  • Incentives.  Thought experiment: imagine that organizations A, B, and C are all health organizations facing similar challenges, but that C's program has 3x the theoretical cost-effectiveness of A's and B's.  Imagine that A discloses substantive information showing that its programs largely work; B discloses substantive information showing that its programs largely fail; C discloses nothing.  One might find it appropriate to expect C to perform about as well as the average of A and B (I wouldn't, since I think willingness to disclose is correlated with quality) ... but even if so, I think it would be a mistake to fund C in this case.  Doing so would create incentives for poor organizations to disclose less and therefore benefit from the fact that donors base their estimates on strong organizations.  As an evaluator, we are particularly mindful of what behavior we are rewarding, but I think a donor ought to pay some attention to this as well.
                  • The question of how to weigh confidence in activities vs. theoretical cost-effectiveness of activities seems somewhat moot to us since our top charities perform well on both.  Our top-rated organizations have very high cost-effectiveness in the scheme of things, and we don't think there's good reason to expect higher cost-effectiveness from anyone else just based on what program they're running.
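                  The alternative mentioned above, formally multiplying P(it works) by cost-effectiveness-if-it-works, can be sketched as follows.  The probabilities and costs here are hypothetical, chosen only to show the mechanics of how a low prior for an undisclosing organization can outweigh a 3x theoretical edge:

```python
# Hypothetical expected-value comparison for the A/C thought experiment.
# Numbers are invented; they illustrate the mechanics, not anyone's actual estimates.

def expected_cost_per_death(cost_if_works, p_works):
    """Expected cost per death averted = cost-if-it-works / P(it works)."""
    return cost_if_works / p_works

# A disclosed evidence its programs work; C disclosed nothing but has
# 3x the theoretical cost-effectiveness of A.
a = expected_cost_per_death(cost_if_works=900, p_works=0.8)
c = expected_cost_per_death(cost_if_works=300, p_works=0.2)

# Under a sufficiently low prior for the undisclosing organization,
# A's expected cost per death averted beats C's despite C's 3x edge:
print(a, c, a < c)
```

                  Even when the calculation comes out this way, the incentives argument above is a separate reason not to fund C.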
                  Hopefully this helps clarify how we are thinking, even if I haven't given the specifics/sources that would let you evaluate our thinking (as I said, that would be a significant enough undertaking that I want a good sense of how important it is).

                  On Tue, Jun 15, 2010 at 8:07 AM, Nick Beckstead wrote:
                  Hi Holden, (cc: Elie)

                  1.  I would like to know more about why you do not attempt to distinguish between organizations whose cost-effectiveness falls below the $1000/death range.  Prima facie, it sounds like a strange policy.  The only justification I can think of is of the form: we just can't tell the difference between organizations that work at $250/death and $500/death with any confidence to speak of.  This answer is hard to understand from the outside, since you seem to be able to distinguish between organizations that are at $750/death and those over $1500/death.

                  2.  You write: "So, if the way we deal with cost-effectiveness bothers you (specifically, if you feel that we don't take cost-effectiveness estimates literally enough, and that we should for example be willing to let high relative theoretical cost-effectiveness outweigh serious questions about effectiveness), please let me know." I'm worried about this (but I don't really know what to think).  I tend to agree that many of the theoretical cost-effectiveness estimates are too optimistic, but I wouldn't want to overcorrect for this.  Some of the GWWC folks, Toby Ord in particular, make it sound like you guys go too far in "assuming the worst" when it comes to unknowns in a charity.  (I think Toby said this about SCI in particular.)  I need to understand the situation more before I agree or disagree, but I'd like to know more about your policy here.

                  If I were in your shoes, I would work with a risk-over-uncertainty premium and a good bit of risk aversion, since that is important for your long-term credibility.  But as an individual donor, I am less concerned about such things.  I just want to give where it maximizes expected value, so I might be willing to tolerate many unanswered questions with murky probabilities, provided the expected value calculation works out right.

                  Not sure if you wanted me to post this to the listserv, or just send to you directly.

                  Best,

                  Nick



                  On Mon, Jun 14, 2010 at 10:41 PM, Holden Karnofsky <Holden@...> wrote:
                   

                  Hello all, I'd like your thoughts on how important the following issue is to you.

                  GiveWell has consistently taken the position that cost-effectiveness estimates are "too rough to take literally."  We therefore use them in a very non-literal way.  Specifically, any organization that comes in under $1000/death averted is considered by us to be "highly cost-effective" and we don't distinguish between them (instead we rate/rank organizations on "confidence in their effectiveness" factors).  By contrast, we do put weight on observations like "ART is several times as costly as TB control," where we feel the estimates are directly comparable and we have more confidence in the source of the large difference between them.

                  We have never made the effort to fully spell out the reasons we feel this approach is appropriate.  When we stick to language like "This is too rough to be useful," it probably sounds to some people (well, it definitely sounds to at least one person) that we don't understand basic concepts like "expected value."

                  I believe we could mount a strong and handwaving-free defense of our approach, but that it would be quite a bit of work.

                  Currently, I have the sense that only 1-2 of our current followers disagree with us on (and care about) this issue.  However, I'd like to check that.  So, if the way we deal with cost-effectiveness bothers you (specifically, if you feel that we don't take cost-effectiveness estimates literally enough, and that we should for example be willing to let high relative theoretical cost-effectiveness outweigh serious questions about effectiveness), please let me know.



                  --
                  Nick Beckstead
                  Ph.D. Student
                  Department of Philosophy
                  Rutgers University



                • Holden Karnofsky
                  Hi Wai-Kwong, Our cost per life saved terminology is only short-hand. We look at cost-effectiveness in whatever terms are available, and also consider the
                  Message 8 of 8 , Jun 29, 2010
                    Hi Wai-Kwong,

                    Our "cost per life saved" terminology is only short-hand.  We look at cost-effectiveness in whatever terms are available, and also consider the cost per DALY.  For a sense of this, see http://www.givewell.org/international/technical/programs#Priorityprograms

                    Best,
                    Holden


                    On Sat, Jun 26, 2010 at 6:43 PM, Wai-Kwong Sam Lee <orionlee@...> wrote:
                     

                    Here are my two cents:


                    1. I am sympathetic to GiveWell's position on cost-effectiveness - my intuition is that not only is an accurate cost estimate realistically difficult, but the estimate might not necessarily apply as the program gets scaled up or replicated. 

                    2. Having said that, in the event that a cost estimate is an order of magnitude lower ($1000/death), I do think it would warrant being called out. 

