The marriage between improvement and evaluation

Anjali Chowfla

Improvement Advisor, USAID ASSIST Project

(Photo Credit: Salzburg Global Seminar | 565 | Better Health Care: How can we learn about improvement?)

What do you get when you put 61 of the world’s most respected health care improvers and research scientists together in a room that once played host to the von Trapp family? Chances are, your answer is not “a marriage.” Yet, after a five-day interactive session entitled “Better Health Care: How Do We Learn about Improvement?” organized by the Salzburg Global Seminar at the Schloss Leopoldskron in Salzburg, Austria—recognizable for its role in “The Sound of Music”—that is precisely what we got.

As the field of health care improvement evolves away from its genesis tackling routine processes of care, such as waiting times, and moves towards a focus on improving health outcomes on a large scale, the improvement community is increasingly being asked to confront the question:

“Are the results being obtained attributable to the changes being made?”

With this in mind, the organizers of Salzburg Global Seminar Session 565 engaged improvers and researchers from 25 countries in a weeklong discussion, from July 10th to 15th, 2016, on tangible and practical ways to increase the rigor, attribution, and generalizability of interventions designed to improve the quality of health care without compromising the iterative, adaptive nature of improvement.

Participants grappled with questions such as “What are the characteristics of useful generalizable knowledge?” and “What level of evidence is good enough to establish attribution?” and debated ways of closing the chasm between those focused on improving health care delivery and those tasked with evaluating the success or failure of improvement interventions.

Discussions ranged from “cracking open the black box of improvement”—understanding not only whether an intervention worked but how and why—to the suitability (or lack thereof) of traditional, fixed-protocol evaluation designs, such as randomized controlled trials (RCTs), for capturing the dynamic nature of working within complex adaptive systems.

The role of evaluation in improving the improvement process itself generated heated debate. Some participants argued against opening the door to evaluators advising improvers on how to improve, while others contended that the improvement community stood to gain from the insight and skill evaluators could bring to tasks such as choosing an appropriate aim and measuring data accurately.

While the discussions at Salzburg were initially framed by the “improver/evaluator” dichotomy, by the end of the session participants had gathered to synthesize lessons learned and moved towards a consensus, as a framework for learning about improvement—centered on practical guidance for improvers on how to better integrate evaluation into improvement interventions—began to emerge. It was agreed that there is no one right model for the relationship between evaluator and implementer, nor one right design for evaluating improvement; rather, the relationship and the model will ultimately depend on the given context, stakeholder priorities, and funding.

As the “marriage” between improvement and evaluation was cemented, participants began working on a series of papers, to be published in a special supplement of the International Journal for Quality in Health Care (the journal of ISQua, the International Society for Quality in Health Care), distilling the concepts discussed at Salzburg.

More about the discussions and outcomes of the Salzburg Global Seminar can be found in the Session 565 Report: “Better Health Care: How Do We Learn about Improvement?”