Helping project implementers understand evaluation design
Around this time last year, the USAID ASSIST Project joined 60 improvement and research experts, policymakers, and funders from 10 organizations and 28 countries at a 5-day meeting in Salzburg on “Better Health Care: How Do We Learn About Improvement?”
The objective of the Salzburg Global Seminar Session 565 was to discuss the best ways to design, implement, and evaluate improvement interventions in order to optimize learning, attribution, and generalizability.
To guide discussions, ASSIST developed, in collaboration with the Institute of Healthcare Improvement (IHI), an evaluation design booklet providing one-page descriptions of various rigorous evaluation designs used to assess improvement interventions, and discussing the strengths and weaknesses associated with each approach.
AcademyHealth is a leading professional organization for health service researchers, policymakers, and health practitioners, and several of its members attended the meeting. AcademyHealth had been exploring opportunities to strengthen rigor in evaluating interventions across the health spectrum, including health care, public health, and social services. Prior to the seminar, the organization had convened a series of meetings to address this issue. Following the seminar, AcademyHealth expressed interest in developing an evaluation guide for health interventions based on the evaluation booklet distributed during the Salzburg Seminar.
In response to this request, ASSIST and IHI again collaborated to produce a new guide, published by AcademyHealth, that provides a framework to guide decision-making around appropriate designs for evaluating public health and other service interventions, including quality improvement.
“Evaluating Complex Health Interventions: A Guide to Rigorous Research Designs” is aimed at those implementing interventions in public health and community settings, including quality improvement, who are involved in evaluation but may not themselves be evaluators. The guide aims to help stakeholders make informed decisions by arming them with the information necessary to understand and assess the strengths, weaknesses, and tradeoffs involved in selecting an evaluation design. It also helps the reader assess the validity of an evaluation that has already been conducted.
The six evaluation designs presented in the guide represent a mix of experimental, quasi-experimental, and observational designs. The guide includes a flow chart to inform the selection among them.
The entry for each design includes:
- a general description with a diagram to illustrate the design;
- two examples from the peer-reviewed literature of how the design was used to evaluate a specific health or social service intervention;
- key strengths and weaknesses of the study design;
- timeline and budget considerations;
- policy implications; and
- considerations for future use.
As the number, variety, and complexity of interventions increase, so does the need to understand which ones are working, for whom, and under what circumstances. One point we want to emphasize:
There is no single “correct” evaluation design.
Rather, there is a range of approaches that can be used to enhance the rigor of evaluations, depending on the question of interest, the type of intervention, the context in which it is implemented, the needs of the audience for the evaluation, and the availability of data. The better project managers, health practitioners, and others involved in implementation understand this, the better equipped they will be to advocate for rigorous evaluation and to interpret evaluation results appropriately, thereby strengthening programs. The higher the quality of the evidence upon which decisions are made, the more likely we are to take the appropriate actions to improve health outcomes.