
Papers

Impact of published clinical outcomes data: case study in NHS hospital trusts

BMJ 2001; 323 doi: https://doi.org/10.1136/bmj.323.7307.260 (Published 04 August 2001) Cite this as: BMJ 2001;323:260
Russell Mannion (rm15{at}york.ac.uk), senior research fellow
Maria Goddard, assistant director
Centre for Health Economics, University of York, Heslington, York YO10 5DD
Correspondence to: R Mannion
Accepted 13 June 2001

Abstract

Objective: To examine the impact of the publication of clinical outcomes data on NHS trusts in Scotland, to inform the development of similar schemes elsewhere.

Design: Case studies including semistructured interviews and a review of background statistics.

Setting: Eight Scottish NHS acute trusts.

Participants: 48 trust staff comprising chief executives, medical directors, stroke consultants, breast cancer consultants, nurse managers, and junior doctors.

Main outcome measures: Staff views on the benefits and drawbacks of clinical outcome indicators provided by the clinical resource and audit group (CRAG) and perceptions of the impact of these data on clinical practice and continuous improvement of quality.

Results: The CRAG indicators had a low profile in the trusts and were rarely cited as informing internal quality improvement or used externally to identify best practice. The indicators were mainly used to support applications for further funding and service development. This limited impact was attributable to a lack of professional belief in the indicators, arising from perceived problems with data quality and the time lag between collection and presentation of data; limited dissemination; weak incentives to take action; a preference for process rather than outcome indicators; and a belief that informal information is often more useful than quantitative data in the assessment of clinical performance.

Conclusions: Those responsible for developing clinical indicator programmes should develop robust datasets. They should also foster a working environment and incentives that encourage the use of these data for continuous quality improvement.

What is already known on this topic

Current policy on performance assessment in England and Wales places a great deal of emphasis on the collection and dissemination of clinical information

Dissemination of clinical outcome data has had limited impact on the behaviour of provider organisations in the United States

What this paper adds

Research in Scottish trusts suggests that clinical indicators are rarely used to stimulate quality improvement or share good practice

The reasons for low impact include internal factors relating to the properties of the indicators and external factors within the organisational environment in which the data are used

Introduction

The public dissemination of standardised data on clinical outcomes is now established practice in many health systems. In the United States, where public reporting is most advanced, comparative information on performance in the form of report cards, provider profiles, and consumer reports has been released for over a decade.1-3 In Europe, Scotland has been at the forefront of public disclosure. Since 1994 the Scottish Executive has published clinical outcome indicators collected by the clinical resource and audit group (CRAG) for all Scottish NHS acute trusts and health boards. More recently, clinical performance data have been published for trusts in England and Wales as part of the NHS performance assessment framework.

A postal questionnaire survey conducted by the Scottish Executive in 1997 indicated that the CRAG indicators published in Scotland were of some practical value to health professionals and, in a few instances, had helped to bring about a change in clinical practice. On the whole, however, the survey found that the indicators had little effect on behaviour.4 We present the key findings of a study designed to examine the impact of the publication of these data on provider organisations. The CRAG indicators are similar to those now published more widely within the rest of the United Kingdom, and therefore an analysis of the Scottish experience may help with the implementation of such programmes elsewhere.

The CRAG indicators are compiled and disseminated by the Scottish Executive. Seven reports have been published detailing 38 clinical indicators for named trusts and health boards in Scotland. It is important to note that the CRAG indicators are not part of a formal framework of performance assessment. Since the indicators were first published the Scottish Executive has emphasised that they should not be used to make definitive judgments on the performance or quality of services.

The indicators vary by specialty but have some common features:

  • They are based on a linked dataset comprising inpatient hospital episode statistics, the 1991 census small area statistics, and recorded deaths

  • To minimise the role of random year on year variation each indicator spans a period of at least three years

  • The indicators are standardised to control for aspects of case mix that can be identified on the basis of existing data. They are also standardised, when appropriate, for deprivation5 or the principal diagnosis of any hospital admissions in the previous five years, or both.
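
The paper does not give the standardisation formula itself; as an illustration only, an indirectly standardised outcome ratio of the kind described is conventionally computed as below, with strata combining the adjustment factors listed above (the notation is assumed, not CRAG's own):

% Illustrative indirect standardisation (notation assumed, not CRAG's own)
% n_s = trust patients in stratum s (defined by age, sex, deprivation, etc)
% r_s = pooled national event rate (eg death within 30 days) in stratum s
% O   = events observed in the trust; E = events expected given its case mix
\[ E = \sum_{s} n_{s}\, r_{s}, \qquad R = \frac{O}{E} \]
% R > 1 indicates more events than expected for the trust's case mix; R < 1, fewer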

Methods

Selection and interview strategy

We considered that the most appropriate method for explaining in detail the use of the CRAG indicators would be case studies of trusts, comprising semistructured interviews with key staff and a review of background documentation. We selected eight trusts, reflecting a range of sizes (income and number of beds), geographical areas (city, town, rural), and previous “performance” on the CRAG indicators (table). Although the case studies covered the full range of published indicators, for more in depth analysis we focused on two clinical specialties: one published at health board level (five year survival in women with breast cancer) and one published for each trust (30 day survival after emergency admission for stroke).

Impact of CRAG data on eight NHS hospital trusts (breast cancer and stroke services)

Interviews were undertaken with chief executives, medical directors, consultants with responsibility for breast cancer services, consultants with responsibility for stroke services, nurse managers, and junior doctors (eight of each). The interviews were semistructured, tape recorded, and transcribed before analysis.

Analysis

We hypothesised that clinical indicators would be more likely to generate action when the data were perceived to be credible and up to date and when staff believed them to be meaningful and important. Similarly, we assumed that action would be more likely when the external environment facilitated and supported change through strategies such as targeted dissemination, staff training in the use of data, and a framework of incentives.

We analysed the interview transcripts using the qualitative methods of content analysis.6 After an initial scrutiny of the transcripts we identified several preliminary themes under broad headings reflecting our prior hypotheses. We then grouped passages under each identified theme and cross referenced them according to site and the type and grade of staff. We also analysed the data to assess whether our findings were related to characteristics of the trusts (for example, size, geographical location, and performance on the chosen indicators).
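
As a minimal illustration of this cross referencing step (not the authors' actual workflow, which was carried out by hand from the transcripts; the field names and example passages below are hypothetical), coded passages can be grouped by theme and then broken down by site and by type and grade of staff:

from collections import defaultdict

# Each coded passage records which theme it was filed under and where it
# came from. Field names and example text are hypothetical.
passages = [
    {"theme": "credibility", "site": "trust A", "staff": "stroke consultant",
     "text": "a degree of suspicion over some of the CRAG data"},
    {"theme": "timeliness", "site": "trust B", "staff": "medical director",
     "text": "it comes out several years after it is taken"},
]

# Collect passages together under each identified theme ...
by_theme = defaultdict(list)
for passage in passages:
    by_theme[passage["theme"]].append(passage)

# ... then cross reference each theme by site and type/grade of staff.
for theme, items in sorted(by_theme.items()):
    print(theme)
    for p in items:
        print(f"  [{p['site']} / {p['staff']}] {p['text']}")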

Results

Impact on behaviour and practice

The CRAG indicators had a “low profile” in all trusts and were rarely cited by staff as primary drivers of quality improvement or of the sharing of best practice between organisations. In six trusts the CRAG indicators were reported to have stimulated some action in relation to breast cancer or stroke services, but such action was restricted to checking and auditing the quality of the data rather than to direct action to improve service delivery (table).

The indicators were seldom used in isolation when service changes were being considered and were typically augmented with more detailed local data. The main use to which the CRAG data were put was to add further weight and background evidence to applications for additional funding, either within the trust or to the health board. In four trusts the CRAG data were cited as useful background information to support the case for a stroke unit. Our findings do not seem to be related to the characteristics of trusts in terms of size, geographical location, or performance on the indicators.

Exploring the reasons for low impact

There are several possible explanations for the limited impact of the indicators (box).

Reasons for low impact of CRAG indicators

Credibility

“One concern is how valid the data are. It is important if you are going to use data that you have clinical people on board and that they are happy the data collection is correct. We have a degree of suspicion over some of the CRAG data that are coming out” (stroke consultant)

Timeliness

“It's pretty basic information and it comes out several years after it is taken. Things have changed over that period of time. So, in relation to say treatment of cervical cancer, the whole way of cancer management has changed. The change had already occurred by the time the data were issued” (medical director)

Awareness

“There should be more widespread dissemination of this information [the CRAG reports]. It would certainly be useful to push it down to my level of service manager … Clinical outcomes don't just apply to doctors” (nurse manager)

Training and facilitation

“I don't think there is sufficient knowledge about CRAG data. It is not taught in medical schools” (breast surgeon)

Incentives

“The reward [for performing well on the indicators] is for those services looking for development. It [the CRAG data] is used to help strengthen the case for change. If you are bidding for capital equipment you can use it to persuade your case. In terms of sanctions it is a peer one—not letting your peers down” (chief executive)

Supplementary information

“I don't think at the current level of accuracy you can pull out that sort of information [individual clinical performance] from these figures. Poor performance with doctors tends to be [transmitted] from word of mouth and other soft information” (stroke consultant)

Process versus outcome indicators

“It is easier to measure the process, it is quicker and more responsive than outcome data” (breast surgeon)

External accountability

“We are not really held to account at all on this set of indicators” (clinical director)

Credibility—Many staff, in particular the consultants, had serious concerns about the quality of the data used to compile the CRAG indicators, which undermined the indicators' credibility. Problems centred on incomplete and inconsistent coding and inadequate adjustment for variation in case mix.

Timeliness—The elapsed time between collection and publication of the data was a major barrier to the indicators being used meaningfully for continuous quality improvement. In many cases the CRAG indicators are at least a year out of date, and considerably more for some, such as the breast cancer indicator.

Awareness—Recent studies have suggested that if clinical indicators are to be useful in supporting quality improvement they need to be communicated to staff at all levels of the organisation.7 However, we found that although consultants and chief executives were aware of the data, most nurse managers and junior doctors reported little or no knowledge of the indicators. Only one trust disseminated these data to nurse managers and junior doctors.

Training and facilitation—None of the trusts ran specific training or education programmes on the appropriate use and interpretation of clinical indicators, and no single person within each trust was identified as being responsible for supporting their use throughout the organisation.

Incentives—The indicators are not part of a formal system of performance assessment with an incentive mechanism attached to performance. However, some staff acknowledged that informal incentives affecting status and professional reputation were sometimes associated with relative performance on the indicators.

Supplementary information—Some staff thought that informal information transmitted through informal channels and professional networks was more useful than formal indicators in capturing important aspects of performance that defy simple codification. In particular, the clinical indicators were often viewed as too unwieldy and out of date to identify poor clinical performance in staff, and “whistle blowing” and “word of mouth” were reported to be the most common channels for raising initial concerns.

Process or outcome indicators—Many staff preferred process to outcome indicators because process measures were thought to be more reliable, more up to date, and easier to measure, and to provide better guidance on the specific actions needed to improve the quality of care.

External accountability—Outside bodies exerted little pressure on trusts to “perform” well on the CRAG indicators. On the whole, health boards did not hold trusts accountable for their performance, and patients or their representatives reportedly seldom consulted or acted on the indicators.

Discussion

Our findings suggest that the CRAG indicators have helped to raise awareness of quality issues among trust staff and in some instances may have alerted providers to specific issues requiring further investigation. However, we also found that the indicators were rarely used directly to stimulate continuous quality improvement or to identify and share best practice between organisations.

Our findings are based on case studies of only eight Scottish trusts and, although the sample was selected to be broadly representative of trusts in Scotland, may not be applicable to trusts elsewhere in the United Kingdom. Similarly, our study focused on the impact of only two clinical indicators, and our findings might have differed had we focused on others. Nevertheless, as the clinical indicators published under the performance assessment framework in England and Wales are similar to those published in Scotland, many of our findings may be directly transferable to similar programmes currently being developed elsewhere in the United Kingdom. We believe that this is the first in depth evaluation of the impact of a clinical indicator programme in a UK context.

A limited number of studies in the United States have assessed the impact of the publication of clinical data on provider organisations. On the whole, these studies have found that published clinical indicators rarely stimulate quality improvement.1-3 7 Our findings indicate several reasons why published clinical indicators often have little or no effect in provider organisations. A key lesson of the Scottish experience is that those responsible for designing clinical indicator systems should not only concentrate on developing robust datasets but should also encourage an organisational environment and incentive context that foster the use of these data for continuous quality improvement.

Acknowledgments

We thank the interviewees, who without exception readily gave up their valuable time to share with us their knowledge of the CRAG indicators. We are grateful to Peter Smith and Huw Davies for their comments on an earlier draft of this paper. We also thank David Clyne and Steve Kendrick for their assistance with this project.

Contributors: RM was principal investigator and is guarantor of the paper. RM and MG designed the study. RM conducted the interviews and performed the preliminary qualitative analysis. MG read a sample of interview transcripts and helped to interpret the data. Both authors contributed towards the final paper.

Footnotes

  • Funding: Department of Health, as part of a core programme of work on performance management at the Centre for Health Economics, University of York. The views expressed in this paper are those of the authors and not necessarily those of the Department of Health.

  • Competing interests: None declared.

References

  1.
  2.
  3.
  4.
  5.
  6.
  7.