I agree with almost everything McCartney and Finnikin say,1 and am delighted that they have said it so clearly and succinctly. But I am concerned that they risk making ‘systematic interrogation of all new healthcare policies for evidence and cost-effectiveness’ sound easier than it is.
The traditional hierarchy of types of evidence places randomised controlled trials (RCTs) at the top, and this may well still be reasonable for evidence about a new drug or surgical procedure. But applying the same approach to evaluating complex interventions is increasingly acknowledged to be a mistake, because trial-based evidence cannot be assumed to be transferable to other settings. In a recent special issue of Social Science and Medicine focused on RCTs and evidence-based policy, Deaton and Cartwright discuss the limitations of RCTs as a method of establishing ‘why things work’.2 Without a credible account of causation we cannot begin to work out whether a complex intervention that has certain effects in one setting will have those same effects somewhere else.
As well as leading to rapid abandonment of ineffective or harmful new policies, it would be good if ‘real-life testing’ were used in a more nuanced and constructive way, helping us understand how elements of the intervention and elements of the context interact to produce both good and bad effects. The alternative is to continue to handle new ideas about healthcare delivery the way we do now: jumping onto each bandwagon that rolls past, only to jump rapidly off again when it turns out not to be useful in our particular setting.
© British Journal of General Practice 2019