Live Blog: Salzburg Global Seminar

Pre-Conference Blog

By: Andrew Murphy | June 9, 2016

The purpose of Salzburg Global Seminar Session 565, entitled "Better Health Care: How do we learn about improvement?", is to examine in depth how we evaluate improvement interventions and whether there are better ways we could be evaluating them.

There is a consensus among many leading implementers and researchers in the field of healthcare improvement that the adaptive nature inherent in improvement requires a different way of thinking about evaluation. As practitioners and researchers in healthcare, most of us have taken Epidemiology 101 or a similar course in which we learned about experimental approaches for demonstrating that something has happened. In such a class, we usually move through different study designs that support different levels of causal inference. Students start by learning about cohort studies or something similarly broad and work up to the "gold standard" of experimental approaches, the Randomized Controlled Trial (RCT)...


Day 1 - Improving Improvement

By: Louise Hallman | July 10, 2016

"How do we know that the results that we are seeing are actually because of our interventions and the changes that we are making?" This was the key question facing Fellows as they arrived in Austria for the Salzburg Global program Better Health Care: How do we learn about improvement?

The program, in partnership with the USAID ASSIST Project and with support from the New Venture Fund, is bringing many returning Fellows to Salzburg to build on work begun in 2012 at the session Making Health Care Better in Low and Middle Income Economies: What are the next steps and how do we get there?

Since that program four years ago, the field of health care quality improvement science has evolved, prompting the return to Salzburg. 

Again chaired by M. Rashad Massoud, Director of the USAID Health Care Improvement Project and Senior Vice President of University Research Co., LLC/Quality & Performance Institute, this year's cohort of 60 Fellows from 25 countries will consider how we know whether the results achieved in improving health and health care can be attributed to the interventions conducted.

Improved process-level and outcome-level data have made it possible to demonstrate improvements in health care, such as reductions in complications of care and in mortality, for example in perinatal mortality.

As Massoud explains: "[In India], over an 18-month period, they had worked on 270,000 deliveries and were able to reduce perinatal mortality by 12.7%... In view of such accomplishments, we are being asked a lot of questions... how do you know that the results that you are getting are truly because of what you are doing, and that the changes you are making are causing these results? The answer is that we do have our time series charts to show that there has been a change, but we do not know whether that change is due to what we have done, or whether it is because of something else that is [also] going on."
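The attribution problem Massoud describes can be made concrete with a toy example. The sketch below (not from the seminar, and with entirely made-up numbers) shows a minimal before-and-after comparison on a monthly time series, the kind of first step that moves a run chart beyond "something changed" toward asking whether the change coincided with the intervention. It deliberately illustrates the limitation too: a level change at the intervention point still cannot rule out a co-occurring external cause.

```python
# Illustrative sketch only: a minimal interrupted time-series comparison
# on hypothetical monthly mortality rates (per 1,000 deliveries).
# The intervention is assumed to start at month 10.
from statistics import mean

rates = [28, 27, 29, 28, 30, 27, 28, 29, 27, 28,   # pre-intervention
         26, 25, 24, 25, 23, 24, 23, 22, 23, 22]   # post-intervention
intervention_month = 10

pre = rates[:intervention_month]
post = rates[intervention_month:]
level_change = mean(post) - mean(pre)

print(f"pre-intervention mean:  {mean(pre):.1f}")   # 28.1
print(f"post-intervention mean: {mean(post):.1f}")  # 23.7
print(f"estimated level change: {level_change:+.1f}")  # -4.4
```

A drop of 4.4 points after month 10 is consistent with the intervention working, but, as the quote stresses, the same pattern could be produced by anything else that changed at the same time; that is precisely the gap the evaluation designs discussed this week are meant to close.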

These questions now pose the next challenge in the evolution of the science of improvement, and while no single, simple answer will be found in Salzburg, the global gathering seeks to help in the design, implementation and evaluation of improvement to uncover which interventions are most effective at achieving sustained results in health outcomes.

While no one Fellow in Salzburg is expected to have the one, simple answer, it is hoped that the collection of expertise gathered for the program this week will help compile the various parts of the solution.

Day 3 - Case Studies: Intensive Home-Visitation Program

Quality improvement experts examine case studies from around the world 

On the third day of the Salzburg Global Seminar session Better Health Care: How do we learn about improvement?, following input from expert faculty, Fellows expanded work on their case studies and considered how they could improve the rigor, attribution and generalizability of their evaluations.

Day 4 - Synthesis of Approaches for Evaluation

By: Mary Kawonga and Nancy Zionts | July 13, 2016

On the fourth day of the session Better Health Care: How do we learn about improvement? Fellows considered the synthesis of approaches for evaluating quality improvement.

Purpose of the Session:

To synthesize take-aways from all prior sessions in order to formulate key points that participants would like to see in the guidance document.

The session included three presentations followed by facilitator-guided discussions in three groups, based on the three case studies from day 3.

The first presentation, entitled "Towards guidance for evaluating improvements – for investigators, improvers and funders", provided key definitions, tools and ideas for participants to think about when developing guidance. The presentation explored the question: what guidance do we think would be useful to give to people in order to make a faster and better difference?

Some key take-away messages:

  • Careful attention to language (“the tools to see”): working definitions of key concepts enhance common understanding. Improvement was defined as “the new better way” of doing things. Other key concepts discussed include: implementation (the actions to enable people to perform the “better way”), intervention, outcome, and context.
  • Context is an active player rather than background. It is the conditions in which interventions “grow” / affect how to get from the current state / practice to the new better way. Tools and approaches for documenting context are available. Readiness assessments are important to assess the suitability of a given context.
  • Clearly describe the intervention, and ask whether the right data were gathered.
  • Be clear about what we are evaluating: the new better way of doing things, or how we enable people to adopt the better way? There are multiple ways of helping people change practice; a key QI question is: which is most effective?
  • Match investigation methods to needs. Designs form a continuum ranging from the RCT (stronger) to one person's opinion (weaker). A key question: is the evidence strong enough for generalization? Methods should be matched to needs and resource availability.

The “painter’s palette” was presented, depicting ten accountability questions that can help people think about the kinds of issues they should consider for undertaking improvement initiatives in a results-oriented manner. This “outcomes-thinking” aims for results of an improvement initiative designed on the basis of needs and resources. The questions can be applied for accountability to stakeholders at different levels (from national level to the client).

In break-away groups, participants worked on developing guidance statements so that those involved in research and in improvement initiatives can do this work better. The possible domains for which guidance statements may be generated were identified as:

  • Context: understanding and measuring it
  • Adaptation: how to anticipate, plan for, measure and analyse adaptation
  • Causal mechanisms: explicit a priori causal models / mechanisms
  • Formative evaluation
  • Distinguishing clinical research from implementation. 

