Analysis

Designing and evaluating complex interventions to improve health care

BMJ 2007; 334 doi: https://doi.org/10.1136/bmj.39108.379965.BE (Published 01 March 2007) Cite this as: BMJ 2007;334:455
  1. Neil C Campbell, reader1,
  2. Elizabeth Murray, director, e-health unit2,
  3. Janet Darbyshire, director3,
  4. Jon Emery, professor of general practice4,
  5. Andrew Farmer, university lecturer5,
  6. Frances Griffiths, associate professor6,
  7. Bruce Guthrie, professor of primary care7,
  8. Helen Lester, professor of primary care8,
  9. Phil Wilson, senior clinical research fellow9,
  10. Ann Louise Kinmonth, professor of general practice10
  1. 1Department of General Practice and Primary Care, University of Aberdeen, Foresterhill Health Centre, Aberdeen AB25 2AY
  2. 2Department of Primary Care and Population Sciences, University College London, London N19 5LW
  3. 3MRC Clinical Trials Unit, London NW1 2DA
  4. 4School of Primary, Aboriginal and Rural Health Care, University of Western Australia, Claremont, Australia
  5. 5Department of Primary Health Care, University of Oxford, Oxford OX3 7LF
  6. 6Health Sciences Research Institute, University of Warwick, Coventry CV4 7AL
  7. 7Tayside Centre for General Practice, University of Dundee, Dundee DD2 4BF
  8. 8National Primary Care Research and Development Centre, Manchester M13 9PL
  9. 9Section of General Practice and Primary Care, University of Glasgow, Glasgow G12 9LX
  10. 10General Practice and Primary Care Research Unit, University of Cambridge, Cambridge CB2 2SR
  1. Correspondence to: N Campbell n.campbell@abdn.ac.uk
  • Accepted 9 December 2006

Determining the effectiveness of complex interventions can be difficult and time consuming. Neil C Campbell and colleagues explain the importance of groundwork in getting usable results

Complex interventions are “built up from a number of components, which may act both independently and interdependently.”1 2 Many health service activities should be considered complex. Evaluating complex interventions poses a considerable challenge and requires a substantial investment of time. Unless trials illuminate processes and mechanisms, they often fail to provide useful information. If the result is negative, we are left wondering whether the intervention is inherently ineffective (either because it was inadequately developed or because all similar interventions are ineffective), whether it was inadequately applied or applied in an inappropriate context, or whether the trial used an inappropriate design, comparison groups, or outcomes. If there is a positive effect, it can be hard to judge how the results of the trial might be applied to a different context (box 1).

Box 1: Illustration of problems of interpreting randomised controlled trials of complex interventions

Primary care mental health workers

The NHS Plan in 2000 suggested that by 2004, primary care trusts in England should employ 1000 new primary care mental health workers to help deliver better quality mental health care.3 There was little evidence underpinning the value of the role, and little time to evaluate whether it would be effective before nationwide implementation.

In 2002, one trust decided to pilot the role. It employed and trained five psychology graduates and assigned them to one or two practices each.4 Their role included direct work with clients, supporting practice teamwork, and work in the wider community. The trust used a pragmatic, inexpensive cluster randomised controlled trial to explore the effect of these workers on patient satisfaction and mental health symptoms, and the cost effectiveness of care. Sixteen practices and 368 patients participated.

At three months, patient satisfaction (the primary outcome) was higher among patients in intervention than in control practices (P=0.023).5 However, lack of information about the active ingredient of the intervention (what the workers actually did) made this finding difficult to interpret and potentially less generalisable to other areas.

The trialists' efforts to illuminate this “black box” included:

  • Workers being asked to keep work diaries

  • A parallel qualitative study exploring the experiences and views of trust commissioners, practice teams, workers, and patients.

Unfortunately, few workers managed to complete diaries in any detail. The qualitative study suggested that a key role of the workers was befriending patients,6 but it was not possible to isolate the influence of this on the trial findings. Further work on the meaning and value of befriending is now required.

What could have been done before the trial if time had permitted?
  • Exploration of the potential effect of different facets of the mental health workers' role

  • Consideration of the local context (eg, ethnic characteristics of the population)

  • Modelling of mechanisms by which care and patient health might be improved

  • Design of a trial, using appropriate outcomes, to evaluate effects on these mechanisms.

The Medical Research Council framework for the development and evaluation of randomised controlled trials for complex interventions to improve health was designed to tackle these problems.1 2 It proposed a stepwise approach, parallel to that used in evaluating new drugs (box 2). This approach has been hugely influential internationally, but the MRC now recognises that it needs further development. We make suggestions for flexible use of the framework by providing a series of examples with lessons learnt. We focus on preliminary work before a definitive randomised controlled trial. Examples are taken from primary care, but the principles are applicable to all healthcare settings.

Box 2: MRC framework for design and evaluation of complex interventions

Stepwise approach (on paper)

Phase 0—Preclinical or theoretical (why should this intervention work?)

Phase 1—Modelling (how does it work?)

Phase 2—Exploratory or pilot trial (optimising trial measures)

Phase 3—Definitive randomised controlled trial

Phase 4—Implementation

Parallel approach (in practice)

Combine phases 0-2 into one larger activity to develop understanding of the problem, the intervention, and the evaluation

Overview

We found it helpful to consider phases 0, 1, and 2 of the stepwise approach as part of one larger iterative activity rather than as sequential stages (box 2). We found we needed data to clarify our understanding of the context of the research, the problem we sought to tackle, the intervention, and the evaluation (figure).1 Research on all these areas can be conducted simultaneously. In the following sections, we outline the important contextual considerations and then, for each of the three main components (problem definition, intervention, and evaluation), describe its aim, the key tasks necessary to meet that aim, and the conceptual and research approaches that help achieve those tasks.

Figure 1: Relation between context, problem definition, intervention, and evaluation for complex interventions

Context

Context is all important. It includes the wider socioeconomic background (including underlying cultural assumptions), the health service systems, the characteristics of the population, the prevalence or severity of the condition studied, and how these factors change over time. How a problem is caused and sustained, whether it is susceptible to intervention, and how any intervention could work may all depend on the context. This means that understanding context is crucial not only when designing interventions but also when assessing whether an intervention that was effective in one setting might work in others (box 3). Contexts differ between locations and change over time—for example, the introduction of financial incentives in 2004 for general practitioners to achieve targets in the management of chronic diseases changed the context of UK primary care.7

Box 3: Importance of context in complex interventions

The Evercare programme of case management for elderly people has been shown to reduce hospital admissions among US nursing home residents, reducing overall costs by about $88 000 (£45 000; €68 000) per nurse practitioner.8 The NHS in England piloted a UK version of Evercare and has since implemented community matron management for older people at high risk of emergency hospital admission. Differences in context raise uncertainties about effectiveness, however, particularly since the broader evidence that case management is effective is weak and inconsistent.9

Is the problem the same?

The US and UK share the wider context of rising healthcare costs for expanding elderly populations, one component of which is rising rates of emergency admissions. However, the problem most amenable to intervention differs in the two countries. Poor coordination of care is relatively more important in the US, and the lack of financial incentives to keep patients in the community is relatively more important in the UK.

Is the intervention the same?

UK implementation of Evercare case management differs from the US trial in several respects:

  • The target population is different (all those at high risk of emergency admission in the UK v nursing home residents in the US). Effectiveness of the UK implementation therefore depends on accurately identifying patients at high risk of emergency admission, which was not possible in the pilots or initial implementation10

  • In the UK, nursing home and NHS funding remains separate, so community matrons' effectiveness largely relies on better review and coordination of existing services, which are already less fragmented than in the US.

What are the appropriate outcomes?

Case management may reduce emergency hospital admission, but it might also improve patient care in terms of other important outcomes including functional status, patient and carer quality of life, and satisfaction with services. There is also potential for adverse effects on the overall quality of care for elderly people since recruitment of community matrons from existing district nursing services may exacerbate nurse shortages. The policy focus on emergency admission may therefore be too narrow.

The implications for researchers are twofold. Firstly, they need to understand the context when designing a theoretically based intervention whose mechanism of action can be clearly described and whose validity is supported by empirical data. Secondly, when reporting trials, researchers should describe the context in which the intervention was developed, applied, and evaluated, so that readers can determine the relevance of the results to their own situation.

Defining and understanding the problem

The next step is to develop a sufficient understanding of the problem to identify opportunities for intervention that could result in meaningful improvements in health or healthcare systems. Table 1 gives the key components of this task, along with a worked example from our experience.

Table 1 Defining and understanding the problem for intervention: example of online behavioural intervention for people with cardiovascular disease

Conceptualising the problem

Different health problems have different levels of complexity. Some can be conceptualised in relatively simple ways, but others occur at multiple levels. In the example in table 1, high death rates in people with cardiovascular disease are affected by:

  • Disease—Atherosclerosis, risk factors (cholesterol, blood pressure, smoking), comorbidity

  • Patient—Beliefs about lifestyle, adherence to treatment, and symptoms

  • Practitioner—Accessibility, prescribing practices, practices in health promotion

  • Health service—Availability of effective preventive and therapeutic care

  • Policy—Policies on preventive services (tobacco control, diet, exercise, etc)

  • Social context—Socioeconomic status, social support.

This matters because a decision to intervene at one level could be cancelled out, or amplified, by actions at other levels. For example, improving practitioners' health promotion practices may have no effect on patients' health behaviour if social and environmental factors obstruct response.16

Drawing on theories can help to conceptualise a problem, but a problem with more than one level challenges us to use more than one theoretical approach. In the above example, if the problem to be tackled is individuals' health behaviour, it may be best explained using theories from health psychology.13 The conceptualisation could also, however, draw on social theory to understand interactions with the social environment and on organisational theory to understand health service and practitioner factors.16

Collecting evidence

A range of research methods can be used to collect evidence. In the example in table 1 researchers used systematic literature reviews, epidemiological research, and expert opinion to quantify the extent of the problem and identify the groups most at risk and the key modifiable risks. Had the factors causing and sustaining the problem been less well understood, the researchers may have had to do some primary research. For example, reasons for delayed presentation by patients with symptoms of lung cancer are poorly understood, so epidemiological and qualitative research is being undertaken to identify and quantify determinants and targets that may be amenable to intervention (international cancer research portfolio study CRU1278). Qualitative research can explore opportunities for, and barriers to, change. The findings, and extrapolations from other related research, can inform an initial assessment of how much improvement the intervention might achieve.

Developing an optimal intervention

For an intervention to have a credible chance of improving health or health care, there must be a clear description of the problem and a clear understanding of how the intervention is likely to work. The original MRC framework identified designing, describing, and implementing a well defined intervention as: “the most challenging part of evaluating a complex intervention—and the most frequent weakness in such trials.”2 Table 2 summarises the key tasks for achieving this understanding and gives an example.

Table 2 Key tasks for optimising an intervention: example of computer support for assessment of familial risk of cancer in primary care

Conceptual approaches

Conceptual modelling or mapping can clarify the mechanisms by which an intervention might achieve its aims. The essential process involves mapping out the mechanisms and pathways proposed to lead from the intervention to the desired outcomes, then adding evidence and data to this map. Modelling of the intervention both depends on, and informs, understanding of the underlying problem. The intervention must engage the target group and affect pathways amenable to change that are identified as important to the problem. In the example in table 2 the intervention engages the general practitioner (providing tailored advice and training), the primary care team (organising referral around a single trained general practitioner), and the patient (facilitating their provision of information).
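
One way to make such a map explicit is to record it as a simple directed graph in which each proposed step carries a note of its supporting evidence. The short Python sketch below is a hypothetical illustration only, loosely modelled on the example in table 2; the step names and evidence labels are our assumptions, not the study's actual model.

    # A minimal sketch of a conceptual map as a directed graph. Steps and
    # evidence labels are hypothetical, loosely based on table 2.
    logic_model = {
        ("computer support", "risk assessment by GP"):
            "literature review: decision support can improve guideline use",
        ("risk assessment by GP", "decision to refer appropriately"):
            "early randomised study using an intermediate outcome",
        ("decision to refer appropriately", "appropriate referral made"):
            "baseline observational data on referrals",
        ("appropriate referral made", "better management of familial risk"):
            "extrapolation; not directly tested",
    }

    # Listing the map exposes links that still rest on extrapolation
    for (cause, effect), evidence in logic_model.items():
        print(f"{cause} -> {effect}\n    evidence: {evidence}")

Written down in this form, the map makes it obvious which links are supported by data and which still need evidence.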

Collecting evidence

We found evidence useful in optimising four aspects of the intervention:

Refining the conceptual models by identifying important influences, relations between components, and consequences not previously considered. For example, in table 2, literature reviews of related interventions provided evidence on how computer decision support was received by practitioners, affected consultations with patients, and could improve implementation of guidelines. These reviews also provided evidence on different ways of expressing risk to patients. Qualitative research helped to place the intervention in the context of primary care and consultations with patients.

Generating (tentative) estimates of effect size by populating conceptual models with data from observational studies or systematic reviews (a worked sketch follows this list). In table 2, the initial data were numbers of appropriate referrals at baseline and findings from related interventions. Further data were provided by carefully controlled intervention studies.

Identifying barriers or rate limiting steps in intervention pathways—Complex interventions can fail because of unforeseen barriers.21 Barriers can be cognitive, behavioural, organisational, sociocultural, or financial. They may occur early in the intervention process or during steps not previously considered or thought important.22 In the computer support example (table 2) some rate limiting steps were identified early, when populating the intervention model with data on uptake of computer support in general practice, but others emerged during subsequent qualitative research. Early identification provides opportunities for resolution (which in this case included redesigning the software and training general practitioners on how to consult while using the software).

Optimising combinations of components in the intervention—There is no consensus on how to achieve this. Once a conceptual model has been formed, some complex interventions may be amenable to simulations23 or carefully controlled experimental studies outside the normal clinical setting. In our example, simulated patients were used to test the intervention with general practitioners. This identified the likely outcomes for a range of patients and allowed general practitioners to comment on how the intervention could be improved. Simulation can also be used to explore the effects of changes in dose on response and of changes in contextual influences. Early randomised studies also have a place. In the example, a randomised study was used to quantify the potential for benefit by using an intermediate outcome (decisions to refer) known to be tightly linked to final outcomes (referrals). Later, in another randomised trial, the researchers attempted to optimise the intervention by including an adaptive arm, in which the intervention could be modified according to practitioner feedback when use of the software during consultations fell below predetermined criteria.
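
To illustrate the second of these tasks, the arithmetic of populating a conceptual model can be sketched in a few lines of Python. All figures below are illustrative assumptions, not data from the computer support example; the point is that the expected improvement is the product of uptake and effect among users, so a weak link anywhere in the pathway caps the achievable benefit.

    # A minimal sketch of populating a conceptual model to generate a
    # tentative effect estimate. All figures are illustrative assumptions.
    baseline_rate = 0.40     # assumed baseline proportion of appropriate referrals
    uptake = 0.70            # assumed proportion of practitioners using the software
    effect_when_used = 0.30  # assumed absolute improvement among users

    # Only practitioners who actually use the intervention can benefit,
    # so imperfect uptake dilutes the achievable improvement
    expected_improvement = uptake * effect_when_used
    expected_rate = baseline_rate + expected_improvement
    print(f"Tentative post-intervention rate: {expected_rate:.2f} "
          f"(absolute improvement {expected_improvement:.2f})")

Halving uptake halves the expected improvement however effective the software is when used, which is why rate limiting steps deserve attention early.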

Developing and optimising trial parameters

The ideal evaluation provides convincing evidence of effectiveness or otherwise, without wasting resources. Table 3 lists the key tasks in designing such an evaluation.

Table 3 Optimising the evaluation: example of community based screening for genital infection

Conceptual methods

The development of research protocols for randomised trials is detailed elsewhere.28 We found three considerations particularly important for robust evaluations of complex interventions. Firstly, outcomes must link plausibly with the intervention's proposed mechanisms of action and include its potential adverse effects and other costs. Secondly, realistic estimates of recruitment and retention are essential before moving to a definitive trial. Thirdly, if randomisation is to be clustered, good (or at least plausible) estimates of intraclass correlation are needed.
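
The third consideration can be made concrete with the standard design effect for cluster randomisation, 1 + (m − 1) × ICC, where m is the average cluster size. The Python sketch below uses illustrative numbers of our own choosing to show how sensitive the size of a cluster randomised trial is to the estimate of intraclass correlation.

    import math

    # A minimal sketch of the standard design effect for cluster randomisation:
    # design effect = 1 + (cluster_size - 1) * ICC. All inputs are illustrative
    # assumptions, not figures from the examples in this article.
    def inflate_for_clustering(n_individual, cluster_size, icc):
        """Inflate an individually randomised sample size for clustering."""
        design_effect = 1 + (cluster_size - 1) * icc
        n_clustered = math.ceil(n_individual * design_effect)
        n_clusters = math.ceil(n_clustered / cluster_size)
        return n_clustered, n_clusters

    # Even a modest ICC inflates the required sample size substantially
    for icc in (0.01, 0.05, 0.10):
        n, k = inflate_for_clustering(n_individual=300, cluster_size=20, icc=icc)
        print(f"ICC={icc:.2f}: {n} participants across {k} clusters")

With 20 patients per cluster, moving the assumed ICC from 0.01 to 0.10 more than doubles the required sample size, which is why a plausible prior estimate matters so much.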

Collecting data

The conceptual model of the intervention can provide a rational guide to both intermediate and final outcome measures.29 Sensitive intermediate outcomes can enable small trials to provide meaningful findings during the development of the intervention (table 3). In definitive trials with negative results, they can help identify the point along the causal pathway where the intervention failed. If estimates of recruitment, retention, and intraclass correlation have not been obtained during prior research with the target group, a feasibility study may be needed to model patient flow. Such studies also enable assessment of the feasibility of the randomisation methods, including their acceptability to participants and the appropriate level of randomisation to avoid contamination effects. They provide data to inform sample size calculations for the final evaluation and descriptive statistics on the baseline performance of the final outcome measures.
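
Flow data from a feasibility study can then be turned into recruitment projections by working backwards from the analysable sample the trial needs. The sketch below uses hypothetical eligibility, consent, and retention fractions; in practice each would come from the feasibility study itself.

    # A minimal sketch of modelling patient flow backwards from the target
    # analysable sample. All fractions are hypothetical placeholders for the
    # estimates a feasibility study would supply.
    target_analysable = 600    # assumed sample needed at final follow-up
    eligible_fraction = 0.60   # assumed proportion of screened patients eligible
    consent_fraction = 0.50    # assumed consent rate among eligible patients
    retention_fraction = 0.80  # assumed retention at final follow-up

    # Work backwards along the flow: approached -> eligible -> consented -> retained
    must_consent = target_analysable / retention_fraction
    must_be_eligible = must_consent / consent_fraction
    must_approach = must_be_eligible / eligible_fraction
    print(f"Approach {must_approach:.0f} patients to retain {target_analysable} "
          f"({must_be_eligible:.0f} eligible, {must_consent:.0f} consented)")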

Conclusion

We have described how the design of an intervention depends on understanding the underlying problem and its context, the difficult processes involved in optimising the intervention, and why the evaluation needs outcomes appropriate to the intervention's mechanism. Defining and understanding the problem and its context, developing and understanding the intervention, and developing and optimising the evaluation are three substantial tasks, but they can be conducted simultaneously. The process of development ends with one of three scenarios. Firstly, it may become clear that the intervention is unlikely to be cost effective in the current environment and does not warrant the cost of a large randomised trial. Secondly, the evidence supporting the intervention may become so strong that there is no doubt that it will be beneficial—in which case it should be implemented. Finally, doubt may remain about the effects of the intervention, but it may be sufficiently promising to warrant the costs of a definitive evaluation. In that case, the researcher who understands the underlying problem, has developed a credible intervention, and has considered the key points in evaluation will be in a strong position to conduct a worthwhile, rigorous, and achievable definitive trial.

Summary points

  • Good design is essential to get meaningful information from randomised controlled trials of complex interventions

  • The MRC framework was developed to improve such trials

  • The first three phases of the framework can be conducted simultaneously in an iterative process to better understand the problem, the intervention, and the evaluation

Footnotes

  • We thank St John's College, Cambridge, for hosting the group and the MRC cooperative group for their support.

  • Contributors and sources: This article was prepared by a working group, comprising postdoctoral general practice researchers with experience of using the MRC framework to develop complex interventions. The group was convened by the MRC cooperative group for the development and evaluation of innovative strategies for the prevention of chronic disease in primary care. Group members were selected from different institutions and were widely geographically dispersed within the United Kingdom. The working group met annually for three years (a total of 5 days), during which time we reviewed the practical experience of the members of the group, using examples, to identify the important tasks and processes that formed part of developing and defining complex interventions for evaluation. In a separate exercise, examples of practice were collated and analysed inductively to look for common themes and examples of divergence. The article was conceived by the working group as a whole. NCC, JE, and ALK wrote the first draft. EM and NCC wrote the final draft. All authors contributed to the concepts in the paper and to redrafting the paper. NCC is guarantor.

  • Competing interests: None declared.

References
