Freudian Slip

Edward Broughton

Director, Research and Evaluation, USAID ASSIST Project/URC

What do Sigmund Freud and some in the field of improvement have in common? No, this is not the beginning of a Woody Allen joke and I hope the answer is a simple “nothing”. But sometimes when I hear those immersed in improvement say that our work is too complex and nuanced for randomized trials or other rigorous research methodologies, it reminds me of the famous cigar-smoking psychoanalyst.

Freud claimed that the psychoanalytic theories he invented could not be scientifically evaluated because they were simply too complicated to be suited for such study. However, in the more than 100 years that have followed, many researchers have done the difficult work in the field of psychology and found that many of Freud’s theories, and the therapies generated by them, are simply wrong and unhelpful.

Of course, this is not to dismiss the whole field of psychoanalysis, but the Freudian form of it has taken a hammering from those using science to see how well it works. I think this shows that we can never rely on the “it’s too hard” excuse to avoid providing robust studies that support (or challenge) the work we do.

Doing research on improving health services and systems is indeed very difficult and complicated, and many of the settings we work in do not lend themselves to conducting randomized controlled trials. But we must do the best science that the circumstances allow to guide us as we try to make health care better. Without it, we risk irrelevancy.

And if you don’t like what I’m saying, blame my mother…


Comments

Disappointing Freud, I will not blame your mother, Edward, because there is an age limit beyond which you can no longer rid yourself of responsibility! So, your conclusion that we risk irrelevancy if we do not use randomized controlled trials is a major exaggeration. RCTs are overrated as a “golden” methodology for assessing impact, when they are, in fact, only one of many. As a methodology, the RCT has significant limitations, including “contamination from real-life events” that often make it inappropriate, and it is also very expensive. Let us not have “favorite” methods; rather, let the purpose of the evaluation, our budget, and other circumstances dictate our methods, and not the other way around. We don’t want the tail wagging the dog, and we certainly do not want to upset your dear mother.

Thank you, Tessie, for your comment. My view of RCTs is similar to Winston Churchill’s view of democracy - the worst form of research except for all the others that have been tried - at least for answering the question of whether or not an intervention works. However, what I wrote was not to defend the RCT but to note that we should never make excuses, like expense or contamination from real-life events, for not doing the best science we can. And I stand behind my warning of the risk of irrelevancy (if we don’t use the best science we can, not necessarily RCTs) because I’ve witnessed it myself. But I admit this is a sample size of one!

Cost constraints are not an excuse but a critical consideration. As the Cost-Effectiveness Guru of ASSIST, you must be open to not spending a lot of money on an RCT that may not give us reliable and useful answers. You say that you are not “defending” a method, but you imply it is the “best possible science.” I am surprised that you compare it to democracy, which is the regime most open to grassroots participation (not so for RCTs, where experts decide on the questions and the interpretation of results, thus alienating the grassroots). The RCT is just a method, best suited to medical research, and an expensive one at that. We cannot have the method drive our evaluation design; we need to be driven by purpose.

I would agree that rigorous scientific methods should be applied as much as we can - especially in evaluating improvement and attributing the improvement to some intervention. Rigorous scientific methods, however, are not limited to RCTs; there is much to be learned from other study designs: from qualitative case studies to observational studies to natural experiments.

One very cute article about RCTs that has always made me laugh:
http://www.bmj.com/content/327/7429/1459

Yes, you laugh, but how much did the parachute study cost? Chris, one of the smart Harvard School of Public Health students who was at the QRM last week, wrote on Twitter, and I agree, that RCTs are good for producing new evidence, like research, and once we know that a part of a program theory works a certain way, we do not need to spend the money all over again. There is so much else that we need to learn that cannot be known through RCTs, or is actually obscured by them.

This week, I met up with the previous director of evaluation at UNAIDS, who comes from a clinical trials background and was also at CDC for a while. She lamented the mistakes we made using RCTs in the early days of the HIV/AIDS epidemic, where we missed important impact because we collected data three months too early, or failed to see a differentiation in the epidemiology of the disease because we did not recognize that a part of the population behaved differently.

RCTs are like having a submachine gun at home to use in case a burglar comes in, when you should not even have a pistol. Just to get Freudian all over again. And if you have a problem with this analogy, blame my father, who was an air force pilot.

There was a very revealing exposé on RCTs on the On The Media radio broadcast today, “Portraying Medicine: The Perils of Painting by the Numbers”. It begins with a recording of epidemiologist Ben Goldacre’s 2011 TED Talk describing the “minefield of medical numbers.” Listen to the podcast at https://www.wnyc.org/radio/#/ondemand/367591.

Thanks, Tessie. I’ll resist the temptation to discuss the recent Princeton/Northwestern political science study concluding that US “democracy” is NOT driven from the grassroots, but that would be a major digression. We hold, beyond reasonable doubt, the belief that smoking increases cancer risk without a single RCT to back it up. But we believe it because other rigorous methods were used many times in massive, high-quality studies linking cigarettes to cancer incidence. Yes, we should always consider the cost and feasibility of RCTs, and when they are not possible, we should still do the best science we can to evaluate improvement programs.

Thanks, Jim. Yes, I heard this podcast (OTM is a favorite). They bring up some very good points.
