
Vetting "Millions Saved"

  Holden Karnofsky
  May 18 9:19 AM

      I previously discussed the "Millions Saved" set of "success stories in global health" - see http://groups.yahoo.com/group/givewell/message/25

      We've seen this work cited repeatedly in reference to "success stories in international aid," and have factored it into our process for choosing priority interventions (more on this later).  Since its importance to us has been increasing, we decided that I should "vet" it to some degree - i.e., subject the claims of "success" to the same scrutiny we give to charities' claims, and get an idea of how convincing these stories are from our perspective.  So I did three things:

      1. Went through all the case studies to see how they address two key questions: (a) what sort of data supports the claim of "success" and how reliable is it?  (b) what analysis implies that the "success" can be attributed to the project in question, as opposed to other factors such as a general/unrelated improvement in living standards?  My notes on this are below.

      2. Picked one to look at more closely - following its references to see whether the picture from primary sources matches the picture given by the case study.  I chose the one on tuberculosis control since one of our top charities (Stop TB Partnership) works in this area.  My notes on this are at http://givewell.net/node/371

      3. After all this was done, I spoke with Jessica Gottlieb, who worked directly on the revised edition of Millions Saved.  Audio recording forthcoming.

      My conclusion is that this is a fairly strong set of case studies.  None have the sort of rigor that can be had at the micro level with randomized controlled trials, but most (not all) have what I consider reasonably convincing answers to the two key questions above.

      --

      I asked 2 major questions of each of the 20 stories:

      1. What data is the claim of impact based on?  Were the data collected through direct observation or through estimation/projection?  Should they be considered reliable?

      17 of the 20 answered this in a way that I found reasonably (if not overwhelmingly) convincing.
      • 6 of the studies were on projects targeting elimination or near-elimination of a particular disease.  They refer to data collection by "surveillance," sometimes giving details and sometimes not.  Generally it seems to refer to requiring medical care centers to report directly observed cases (see http://globalhealth.change.org/blog/view/what_is_surveillance_anyway).  With elimination programs, the incentive (unless there's a highly explicit and organized attempt to falsify success) is not to underreport but rather to make sure as many cases as possible are found (this is an integral part of the control strategy).
      • 1 study (Chagas in South America) was control rather than elimination but used the same "surveillance" terminology.
      • 3 studies (caries in Jamaica, tuberculosis in China, HIV/AIDS in Thailand) explicitly discussed sampling and directly testing the population.  The HIV/AIDS survey was done by external evaluators; the China survey was a government survey; the Jamaica survey was performed by doctors involved in the project.
      • Maternal mortality in Sri Lanka and diarrhea in Egypt both relied on death registers.  Vitamin A in Nepal was a demographic and health survey including mortality.  
      • Surgery in India relied on local reporting of vision conditions.  Fertility in Bangladesh was periodic national surveys.
      • Conditional cash transfers in Mexico were evaluated through an intensive study (randomized controlled trial) in a sample of districts.
      • 3 case studies (onchocerciasis control in Africa; salt iodization in China; tobacco regulation in Poland) were not clear on this point.
      2. How was the possible counterfactual addressed?

      I felt reasonably persuaded by 11 of the 20; 5 were more iffy but at least addressed the question.
      • 6 of the studies were elimination or near-elimination of a disease; they did not address the counterfactual question, but presumably the idea that the diseases "went away by themselves" (or due to changes in standard of living) was fairly straightforward to dismiss in these cases.   Jessica Gottlieb confirmed this reasoning.
      • Conditional cash transfers in Mexico were evaluated through an intensive study (randomized controlled trial) in a sample of districts.
      • Tuberculosis in China compared districts that got extra funding to districts that didn't - reasons to feel fairly (not totally) confident are spelled out at http://givewell.net/node/371
      • 3 others used what I would consider common-sense persuasion.  The discussion of HIV/AIDS in Thailand was based mostly on timing, as well as the correspondence between reported condom use and measured HIV/AIDS prevalence (and the sheer magnitude of the changes in the numbers).  The discussion of maternal mortality in Sri Lanka focused on a study claiming that the types of death targeted for reduction had fallen more than other types of death (examined in a variety of ways).  Neural tube defects in Chile used a combination of methods I found fairly convincing.
      • 2 others used regression analysis controlling for observable data such as changes in income.  This sort of analysis is fairly common but fairly controversial; I personally fall on the skeptical side.
      • 2 others explicitly addressed the counterfactual issue and said "studies" had addressed it, but didn't elaborate.  Tobacco regulation in Poland used Hungary as a "comparison group."
      • The remaining 4 did not address this at all.
      The weakest on these two questions were ORS in Egypt and Chagas disease in South America (neither of which addressed the counterfactual at all), and IDD in China and oncho in Africa (neither of which was clear about how data was collected or how the counterfactual was addressed).
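
      To make concrete what "regression analysis controlling for observable data" means in the bullet above, here is a minimal sketch using entirely made-up, simulated numbers (nothing here comes from the actual case studies): estimate a program's effect on disease prevalence while controlling for an observable confounder such as change in income.

```python
import numpy as np

# Hypothetical illustration of regression controlling for an observable
# confounder. All numbers are simulated - not from Millions Saved.
rng = np.random.default_rng(0)
n = 500

# Observable covariate: change in income. Program uptake is correlated
# with it, so a naive comparison would be biased.
income_change = rng.normal(0.0, 1.0, n)
program = (income_change + rng.normal(0.0, 1.0, n) > 0).astype(float)

# True data-generating process: the program lowers prevalence by 5 points,
# and rising income independently lowers it by 2 points per unit.
true_effect = -5.0
prevalence = (30.0 + true_effect * program
              - 2.0 * income_change + rng.normal(0.0, 1.0, n))

# OLS with an intercept, the program indicator, and the control variable.
X = np.column_stack([np.ones(n), program, income_change])
beta, *_ = np.linalg.lstsq(X, prevalence, rcond=None)

print(round(beta[1], 1))  # estimated program effect, close to -5
```

      The estimate is only unbiased if every relevant confounder is observed and included - which is exactly why this sort of analysis is controversial and why I fall on the skeptical side.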

      Jessica Gottlieb told me that the counterfactual question had been explicitly brought up and at least discussed by the working group for each of these cases - and that it had been the main reason for rejection of many other possible "success stories" - though she didn't provide specifics.

      She also stated that:
      • The case studies were intended to be "representative" and that if there were several success stories for a single program type (for example, tuberculosis control), only one was used.  This works well for us since our main aim with these was to identify priority programs.
      • The most common reason for dismissing a case study was that it hadn't had clear and demonstrable impact.  I had been worried about missing success stories with clear impact but failure to meet one of the other various criteria; she said there were very few of these.