Original papers

Are NHS primary care performance indicator scores acceptable as markers of general practitioner quality?

Guy Houghton and Andrew Rouse
British Journal of General Practice 2004; 54 (502): 341-344.

Abstract

Background: In 2002 the Department of Health published a list of 20 indicators to judge the performance of the 302 primary care organisations (PCOs) in England during 2001–2002. General practitioners (GPs) have expressed doubts about the relevance, applicability and evidence base of these indicators for actual practice.

Aims: To fashion NHS performance indicators that are acceptable and relevant to practising GPs.

Design of study: A Delphi technique followed by simple mathematical modelling.

Methods: We asked a group of 24 senior GP educators to place the Department of Health performance indicators in rank order as markers of quality in general practice. We found that just seven indicators accounted for 73% of the markers chosen, and all seven were chosen by over three-quarters of the responders. Using a simple ‘sign test’ system, we then calculated a composite points score for all 302 PCOs.

Results: We found that there were almost twice as many PCOs at the upper and lower ends of performance and fewer in the middle than we predicted theoretically. The results suggest that pan-PCO or practice factors account for the low performance scores of 16 of 35 PCOs with extremely poor performance and for the high scores of 17 of the 36 PCOs with extremely high performance.

Conclusion: We have developed a method that shows how numerous Department of Health performance indicators can be merged into a single composite performance score. We show that this composite performance score is easy to derive, simple to interpret, acceptable to GPs, and has face validity.

  • performance indicators
  • primary health care
  • quality markers

Introduction

In 1999, the Department of Health for England set up a system to monitor the performance of primary care organisations (PCOs). The system comprised 20 measures intended to reflect the progress of primary care in implementing the government's National Health Service (NHS) modernisation agenda. The 20 measures are shown in Box 1.

Box 1. NHS performance indicators for 2001–2002 for primary care organisations.

Access to quality services

  • The number of patients waiting over 18 months for inpatient treatment

  • The number of patients waiting over 15 months for inpatient treatment

  • The number of patients waiting more than 26 weeks for an outpatient appointment

  • Surgery to remove cataracts

  • Surgery for coronary heart disease

  • Surgery to replace joints including hips and knees

Service provision

  • Percentage of patients able to see a GP within 48 hours

  • Patient satisfaction with the services available, as measured by the GP survey

  • The number of GPs who have access to the Internet

  • Deaths caused by an accident

  • Emergency admissions for people suffering from chronic conditions such as asthma and diabetes

  • Prescribing levels of generic drugs

  • Prescribing levels of antibacterial drugs

  • Prescribing levels of ulcer healing drugs

Improving health

  • Deaths from circulatory diseases

  • People who have quit smoking after 4 weeks

  • Teenage pregnancy measured as conceptions under the age of 18 years

  • People screened for cervical cancer

  • Children immunised against measles, mumps and rubella (MMR) and against diphtheria

  • People vaccinated against influenza

In August 2002, the results of the first annual assessment of PCO performance using these indicators were published.1 Because these measures are intended to reflect the progress of primary care in implementing the government's NHS modernisation agenda, we discussed the performance indicator scores obtained by Birmingham PCOs with local health professionals and general practitioners (GPs).

The overwhelming response was that there were far too many indicators, and that many were clearly of no relevance to general practice. We therefore decided the following:

  • To ascertain whether we could identify a subgroup out of these 20 indicators that GPs would consider valid indicators of their performance within the PCO.

  • To develop a single composite score from these ‘GP validated’ indicators.

  • To establish whether this composite indicator was likely to be a valid tool for measuring and comparing the combined performance of GPs in PCOs.

Methods and results

We listed the 20 NHS indicators in random order and approached 25 GPs who were both in active practice and recognised leaders of general practice education across the West Midlands Deanery. These doctors included the Deputy Postgraduate Dean, the Regional Director of GP Education, area directors, educational advisers, course organisers, GP tutors and GP trainers. They came from a region with a larger population than Scotland, encompassing the rurality of Herefordshire, the urban deprivation of the Black Country, and the affluence of Sutton Coldfield and Solihull. We asked them individually to identify all those indicators that, in their opinion, were most likely to reflect high-quality general practice, particularly those for which GPs could have personal control of the service that the indicator sought to measure. We then asked them, again without any conferring, to place eight of these indicators in rank order and return their results to us anonymously. A pilot of this methodology had previously ascertained that participants were unable to place more than eight choices in rank order.

HOW THIS FITS IN

What do we know?

Governments like to use performance indicators. General practitioners (GPs) and other professionals tend to find externally imposed measurements irrelevant and threatening.

What does this paper add?

We have identified a subgroup of National Health Service performance indicators that practising GPs consider acceptable and relevant. Using a simple scoring system, we were able to merge the scores from these indicators into a single composite performance score and compare the performance of all 302 primary care organisations in England in 2001–2002.

We found that just seven indicators accounted for 73% of the indicator choices selected, and these were chosen by three-quarters of all responders (Table 1). Five indicators were not selected by any of the 24 participating GPs: 26-week outpatient waits, 18-month and 15-month inpatient waits, surgery rates for cataract removal, and accidental deaths.
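As an illustration of this selection step, the following Python sketch tallies how often each indicator was chosen across responders; the responder choices and indicator names are hypothetical, not the study data.

```python
from collections import Counter

# Hypothetical responses: each responder returned up to eight indicator names.
responses = [
    ["48h_access", "flu_vaccination", "cervical_screening"],
    ["48h_access", "generic_prescribing", "flu_vaccination"],
    ["flu_vaccination", "48h_access", "cervical_screening"],
]

# Count how many times each indicator was selected.
selection_counts = Counter(
    indicator for choices in responses for indicator in choices
)

# List indicators by frequency of selection (cf. Table 1).
for indicator, count in selection_counts.most_common():
    print(indicator, count)
```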

Table 1. Indicators listed according to frequency of selection by responders.

Using these seven indicators, we calculated a ‘composite GP performance indicator score’ for each PCO. The composite GP performance indicator score was derived in the following way. We used the Department of Health 2001–2002 data to obtain the median score of the 302 PCOs in England for each of these seven indicators. If a PCO's score was better than the national median value for an indicator, we assigned the PCO one point. We repeated this point allocation process for each of the seven GP performance indicators. (See Table 2 for national median scores and examples of high and low scoring PCOs.) For each PCO, we added these individual scores to produce a composite score, which must lie between 0 and 7. We then plotted a graph showing the number of PCOs observed at each composite indicator score (Figure 1). Thirty-six PCOs achieved high scores of either 6 or 7, 35 had low scores of 0 or 1, and 231 had middling scores of 2, 3, 4, or 5.
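The scoring step can be expressed compactly in code. The Python sketch below assumes each PCO's values for the selected indicators are held in a dictionary and that higher values always mean better performance; the PCO names, indicator names, and figures are invented for illustration only.

```python
from statistics import median

# Hypothetical indicator values for three PCOs (the real data cover 302 PCOs
# and the seven selected indicators; higher is assumed to be better here).
pco_values = {
    "PCO A": {"48h_access": 92, "flu_vaccination": 71, "cervical_screening": 84},
    "PCO B": {"48h_access": 78, "flu_vaccination": 65, "cervical_screening": 79},
    "PCO C": {"48h_access": 85, "flu_vaccination": 69, "cervical_screening": 81},
}
indicators = ["48h_access", "flu_vaccination", "cervical_screening"]

# National median for each indicator, taken across all PCOs.
medians = {ind: median(v[ind] for v in pco_values.values()) for ind in indicators}

# One point per indicator on which the PCO beats the national median;
# with the full seven indicators the composite score lies between 0 and 7.
composite = {
    name: sum(1 for ind in indicators if values[ind] > medians[ind])
    for name, values in pco_values.items()
}
print(composite)  # {'PCO A': 3, 'PCO B': 0, 'PCO C': 0}
```

In the real calculation, indicators for which a lower value is better (for example, emergency admissions) would need their comparison reversed before a point is awarded.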

Figure 1. Expected and observed composite indicator scores for 302 primary care organisations.

Table 2. Composite indicator scores calculated for anonymised actual examples of high and low scoring PCOs.

We also plotted the graph that we would expect if the performance of all PCOs were equal and any differences in their scores were due solely to chance or sampling variation. Put simply, common sense (and binomial theory) suggests that (a short check of these expected counts follows the list):

  • The average composite score for any PCO must be 3.5.

  • The majority (165) of PCOs should have composite scores of 3 or 4.

  • Fewer PCOs (about 49 each) should have a composite score of either 2 or 5.

  • Even fewer (19) PCOs should have low composite scores of 0 or 1, and a similar number should have high composite scores of 6 or 7.
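These expected counts follow directly from the binomial distribution: under pure chance, each PCO has a probability of one-half of beating the median on each indicator, so the expected number of PCOs with composite score k is 302 × C(7, k) / 2^7. A minimal Python check, using only the constants quoted above:

```python
from math import comb

N_PCOS = 302        # primary care organisations in England, 2001-2002
N_INDICATORS = 7    # GP-selected indicators in the composite score

# Expected number of PCOs at each composite score k if performance differences
# were due to chance alone (binomial with n = 7 and p = 0.5).
expected = {
    k: N_PCOS * comb(N_INDICATORS, k) / 2**N_INDICATORS
    for k in range(N_INDICATORS + 1)
}

print({k: round(v, 1) for k, v in expected.items()})
# {0: 2.4, 1: 16.5, 2: 49.5, 3: 82.6, 4: 82.6, 5: 49.5, 6: 16.5, 7: 2.4}
# Scores of 3 or 4 together: about 165 PCOs; scores of 2 and 5: about 49.5 each;
# scores of 0 or 1 together (and likewise 6 or 7): about 19.
```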

Figure 1 shows that only 126 PCOs (39 fewer than expected) had scores of 3 or 4. More interestingly, about twice as many PCOs as expected theoretically had scores of 0 or 1 (35) or of 6 or 7 (36). These results suggest that:

  • The performance of about half of the 71 PCOs with very high or low scores can be attributed to chance and should therefore not attract criticism or praise.

  • The performance of the remaining (approximately 35) PCOs is likely to be related to specific PCO-related factors. The implication must be that the performance of these PCOs can justifiably be criticised or praised.

  • Unfortunately, using these data alone we have no way of deciding which PCOs owe their low or high scores to ‘luck’ and which owe them to practice-related factors.

Discussion

Nobody likes composite performance scores (‘It is like mixing apples and oranges’). Nevertheless, governments have shown an increasing desire to collect indices of performance to justify organisational and funding changes. These are of little value if they are not easily understood and credible to those whose performance they purport to measure. We believe that our method, a seven-indicator composite score, meets these criteria. Furthermore, these scores are:

  • Simple. Calculating a composite score for a PCO by using the criteria of whether each indicator value is above the median is an easy statistical ‘sign test’ technique.2

  • Easily interpreted. We found that GPs and other health professionals could follow the simple steps of our method without feeling that they had been blinded by abstruse statistical or mathematical theory. We presented these results to the West Midlands area directors of general practice and there was unanimous agreement on the simplicity of the method.

  • Relevant. Practitioners have doubted the value, relevance and evidence base of government-inspired performance indicators.3-6 First sight of the 2001–2002 list, with its emphasis on hospital waits as markers of primary care quality, did little to dispel this attitude. We believe that, by selecting the indicators that they feel able to control, GPs have given professional credibility to the composite indicator score. Even the few GPs evaluating the study who were not totally convinced of the fairness of the indicators requested that we use the methodology to calculate the composite scores of the practices selected to train GP registrars in the Deanery. It is important to remember that these composite scores are not based on arbitrary ‘thresholds’ or ‘targets’, but are real scores derived from the median performance of all PCOs. They could be used to place PCOs in bands, comparable to the government's star rating system, but they are internal comparisons rather than the basis of league tables, which practitioners find neither helpful nor acceptable.

  • Face validity. In its drive to improve recruitment into general practice, the government has introduced a ‘golden hello’ inducement payment for new GP principals.7 This is paid at either a ‘normal’ or an ‘enhanced’ rate, depending on the recruitment need of each PCO as judged by the Department of Health. Only eight of the 36 (22%) PCOs with high composite scores offer the enhanced payment, compared with 33 of the 35 (94%) low-scoring PCOs. Furthermore, several of the indicators chosen by our group of clinicians have been used in the formulation of the General Medical Services contract, in the determination of the global sum, additional services and quality payments.8 The Department of Health has announced a new set of indicators for primary care and removed two from the previous list (Internet access and the results of the National Patient Survey), neither of which reached the shortlist chosen by our Delphi group of GP educators.

We noted that none of the low-scoring PCOs registered a score for cervical cytology, in contrast with all but three of the high scorers. All but one of the high scorers were above the median for influenza immunisation, whereas only one of the low scorers was. Initial analysis suggests that low-scoring PCOs tend to be in conurbations and to reflect socioeconomic deprivation; this is in line with previous research into family health services authority performance indicators, in which morbidity, socioeconomic characteristics, and secondary care supply were identified as confounding factors explaining between a third and a half of the variation in admission rates across health authority areas.9 Although Roland and his colleagues at the National Primary Care Research and Development Centre, University of Manchester, have been researching quality markers in individual practices,10 we feel that, especially with increasing central emphasis on managed primary care, scores like ours can be used to encourage joint PCO and practice endeavours to demonstrate improving quality of care.

Acknowledgments

We would like to thank the GP educators who participated as our expert Delphi group.

  • Received April 23, 2003.
  • Revision received July 9, 2003.
  • Accepted November 26, 2003.
  • © British Journal of General Practice, 2004.

References

  1. Department of Health (2002) NHS performance indicators: primary care organisations 2001/02 (Department of Health, London) http://www.performance.doh.gov.uk/performanceratings/2002/national_pco.html (accessed 24 Mar 2004).
  2. Machin D, Campbell MJ (1987) Statistical tables for design of clinical trials (Blackwell Science, Oxford).
  3. Houghton G (1995) General practitioner reaccreditation: use of performance indicators. Br J Gen Pract 45:677–681.
  4. Birch K, Scrivens E, Blaylock P, Field SJ (1999) Performance indicators: international perspectives and the development of indicators for general practice in the UK (Keele University Press, Keele).
  5. Majeed FA, Voss S (1995) Performance indicators for general practice. BMJ 311:209–210.
  6. McColl A, Roderick P, Gabbay J, et al. (1998) Performance indicators for primary care groups: an evidence based approach. BMJ 317:1354–1360.
  7. Department of Health (3 Sep 2002) NHS GP ‘golden hello’ scheme, Annex A: amendments to the GMS statement of fees and allowances. http://www.dh.gov.uk/PolicyAndGuidance/OrganisationPolicy/PrimaryCare/NHSGoldenHelloScheme/fs/en (accessed 6 Apr 2004).
  8. British Medical Association (2003) New GMS contract. Investing in general practice (British Medical Association, London).
  9. Giuffrida A, Gravelle H, Roland M (1999) Measuring quality of care with routine data: avoiding confusion between performance indicators and health outcomes. BMJ 319:94–98.
  10. Campbell S, Hann M, Hacker J, et al. (2001) Identifying predictors of high quality care in English general practice: observational study. BMJ 323:784.