Proudly toxic

Edward Broughton

Director, Research and Evaluation, USAID ASSIST Project/URC

In January, the Institute of Medicine’s “Committee to Support USAID’s Engagement in Health Systems Strengthening” convened an open meeting to discuss methods for improving health care in low- and middle-income countries supported by USAID. I was asked to address the issue of cost-effectiveness analysis (CEA) in improvement. My basic message was that it’s not easy, but that’s no excuse not to try to do CEAs as well as we can.

Coming after presentations by other researchers advocating randomized controlled trials (RCTs) as one method for evaluating the effectiveness of improvement interventions, I seconded their call to do them.

Yet others argued that such research is not possible.  One speaker even went so far as to say that RCTs are “toxic” in improvement evaluation, arguing that methods to improve health care are “complex social interventions” and simply can’t be tested by RCTs.

I pointed out that there are literally thousands of published examples of RCTs involving complex interventions with critical social interactions. A good example is this recent study published in BMC Pregnancy and Childbirth: “A complex intervention to improve pregnancy outcome in obese women; the UPBEAT randomized controlled trial.” And I would argue that such research is not toxic to the larger purpose of making health care interventions more effective. The fact is that RCTs on complex social interventions are done often.

Every scientist I know agrees that RCTs answer only the basic question of whether something works; they tell us nothing about how or why it does or does not. Therefore, well-designed and well-executed qualitative and implementation research is also needed to tell us how and under which conditions such interventions can be effective in the real world.

In ASSIST’s improvement work, I believe we should always strive for a “valid proxy for the counterfactual” of having an improvement program. Control groups often fit this purpose best. Randomization can help balance out the complex social factors that can confound the effect of complex social interventions. Tight budgets, bad timing, and local circumstances mean we often can’t do RCTs on our interventions. But we should, and do, always try, at least to use control groups, even if we can’t randomize. USAID has recognized the importance of seeking such rigor in our evaluation of improvement interventions by requiring ASSIST to build in comparison groups, validation of data, and CEAs.
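The value of a control group as a proxy for the counterfactual can be sketched in a few lines of simulation. The scenario below is entirely hypothetical (facility names, effect sizes, and the quality score are invented for illustration): a background secular trend improves outcomes everywhere, so an uncontrolled run chart at the intervention site looks like success even though the true intervention effect is zero; comparing against a control site with the same trend recovers the truth.

```python
import random

random.seed(0)

months = list(range(12))

def quality_score(month, intervention_effect=0.0):
    # Hypothetical quality score: baseline of 60, rising ~1 point/month
    # everywhere (a secular trend), plus any true intervention effect
    # and a little measurement noise.
    return 60 + 1.0 * month + intervention_effect + random.gauss(0, 0.5)

# Intervention facility: the true effect of the program is ZERO here,
# yet its run chart still rises because of the secular trend.
intervention = [quality_score(m, intervention_effect=0.0) for m in months]
# Control facility: same secular trend, no program at all.
control = [quality_score(m) for m in months]

# Run chart alone: change from first to last month at the intervention site.
uncontrolled_change = intervention[-1] - intervention[0]
# With a control: difference-in-differences estimate of the program effect.
controlled_change = (intervention[-1] - intervention[0]) - (
    control[-1] - control[0])

print(f"Run chart alone suggests an improvement of {uncontrolled_change:.1f} points")
print(f"Against a control, the estimated effect is {controlled_change:.1f} points")
```

The uncontrolled change comes out at roughly 11 points of apparent "improvement," while the controlled estimate sits near zero, which is the point of the argument: without a comparison group, the run chart attributes the secular trend to the intervention.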

Those outside the health care improvement field can’t be expected to believe improvement works without strong evidence. Controlled trials and RCTs provide it. Run charts without controls don’t. We’ve got to stop using the excuse of “but it’s a complex social intervention.” So I hereby declare: I’m proudly toxic, I fully support toxicity, and I’ll try to be more toxic whenever I can in the future.



As R&E head in India, I often find it difficult to conclude anything concrete from run charts. I’m always struggling with the counterfactual, which in this sense is the baseline period, but a counterfactual one knows nothing about. Run charts are great for measuring performance and time trends, but to actually know whether improvement works, one needs more rigorous methods.

I completely agree. Run charts are a very good way to suggest what’s going on, and they may be all we can pragmatically expect in data-poor settings. However, ignoring potential confounders, which examining the counterfactual can remedy, can lead to errors in judging whether a change is really an improvement.
Ignorance is bliss? Maybe in some cases. But in improvement work, it might not be blissful for the patients we are trying to help.
