
Errors in general practice: development of an error classification and pilot study of a method for detecting errors
G Rubin¹, A George¹, D J Chinn¹, C Richardson²

¹Centre for Primary and Community Care, University of Sunderland, Sunderland, UK
²Sunderland Teaching Primary Care Trust, Sunderland, UK

Correspondence to: Professor G Rubin, Centre for Primary and Community Care, University of Sunderland, Sunderland SR2 7BW, UK; greg.rubin@sunderland.ac.uk

Abstract

Objective: To describe a classification of errors and to assess the feasibility and acceptability of a method for recording staff reported errors in general practice.

Design: An iterative process in a pilot practice was used to develop a classification of errors. This was incorporated in an anonymous self-report form which was then used to collect information on errors during June 2002. The acceptability of the reporting process was assessed using a self-completion questionnaire.

Setting: UK general practice.

Participants: Ten general practices in the North East of England.

Main outcome measures: Classification of errors, frequency of errors, error rates per 1000 appointments, acceptability of the process to participants.

Results: 101 events were used to create an initial error classification. This contained six categories: prescriptions, communication, appointments, equipment, clinical care, and “other” errors. Subsequently, 940 errors were recorded in a single 2 week period from 10 practices, providing additional information. 42% (397/940) were related to prescriptions, although only 6% (22/397) of these were medication errors. Communication errors accounted for 30% (282/940) of errors and clinical errors 3% (24/940). The overall error rate was 75.6/1000 appointments (95% CI 71 to 80). The method of error reporting was considered acceptable by 68% (36/53) of respondents, with only 8% (4/53) finding the process threatening.

Conclusion: We have developed a classification of errors and described a practical and acceptable method for reporting them that can be used as part of the process of risk management. Errors are common and, although all have the potential to lead to an adverse event, most are administrative.

  • errors
  • general practice
  • clinical governance


Health care is characterised by a reliance on human operators who work with increasingly complex technology and variable levels of uncertainty. Errors are inevitable and may have life-threatening consequences. Industries with similar characteristics, notably aviation, have developed methods of documenting and investigating risk that allow systematic efforts to reduce the frequency and severity of adverse events.1,2 Risk management is less developed in health care, although it is now explicitly identified as a professional responsibility intended to ensure patient safety and improve clinical quality (box 1).3–5 Good risk management practices also have important financial implications for healthcare providers. In 1998/9 medical litigation in the UK cost the NHS £400 million ($620 million; €580 million), with an estimated further £2.4 billion ($3.7 billion; €3.5 billion) in potential liability.3

Box 1 Primary care in the UK

  • In England, Primary Care Trusts (PCTs) are National Health Service (NHS) organisations, each serving populations of about 100 000, responsible for improving health in their local areas.

  • PCTs have responsibility for securing the provision of a wide range of services, including the commissioning of acute and specialised services. They have responsibility for the administration, management, and development of all family health services, including the implementation of clinical governance.

  • Clinical governance was introduced into the NHS in 1997 as a framework through which NHS organisations would be accountable for continuously improving the quality of services and safeguarding high standards of care.

  • Two of the key concepts of clinical governance are risk management and identifying unacceptable variations in care.

  • Methods used to monitor quality of care include routine performance monitoring, significant event audit and patient surveys, among others.

The language of risk management can be confusing with terms such as error, adverse event, and significant event being used interchangeably.6 Errors—which are the failure, for reasons that are preventable, of a planned action to achieve its intended outcome7—may result in an adverse event, an injury caused by medical management rather than the underlying condition of the patient.8 By definition, errors are more common than adverse events but there have been no attempts to estimate their prevalence and the case for research in this area has been argued.8–11

Different methods have been used to detect adverse events and errors. In industry, multiple sources of data are used such as confidential surveys, non-punitive incident reporting, and observational safety audits.1,2 Within secondary health care, adverse events have been detected by retrospective chart review.12–14 In primary care, prescription monitoring,15 anonymous incident reporting,16,17 significant event audit, and analysis of complaints and litigation have been used.18

Those responsible for clinical governance in primary care have focused until now on incidents with the greatest potential for leading directly to an adverse event,18 even though these are rare. Errors, which are more common, may lead to adverse outcomes, sometimes in subtle ways, being compounded by circumstances or further errors.19 We aim to describe a classification of errors in general practice and to use it to assess the feasibility and acceptability of a method for recording staff reported errors.

METHODS

Developing a methodology, classification, and an error recording form

A pilot study was conducted in a five-doctor urban training practice in July 2001. Clinical and administrative staff were briefed at a practice meeting about errors and the nature of the study. Participants were advised that an error was “an event that was not completed as intended and/or meant that work was disrupted in some way”, a definition based upon that provided by Reason.7 They were asked to record free text descriptions of events in blank notebooks for a period of 2 weeks. They were asked to record all errors, no matter how seemingly trivial, as they occurred and at every occurrence, and were assured that all information recorded was confidential and non-attributable. To promote anonymity of reporting, notebooks were placed in every consulting room and in the reception area and administrative offices rather than being given to individuals. Ten notebooks were used in total.

After 2 weeks all notebooks were returned to the authors and all events were transcribed verbatim into a single document. Comments on the process were collected at a meeting with practice staff. From the data, a classification of errors was then developed by two of the authors (GR and AG). Disagreements were resolved by reference to the remaining authors. An error recording form based on the classification was then piloted in the same practice for another 2 week period in November 2001. Participants were asked to record all occurrences of events and forms were again linked to work areas and not to individuals. At the end of this period the content and internal validity of the classification and the utility of the recording form were established through further meetings with the practice and with the Primary Care Group (PCG) clinical governance subgroup.

Feasibility of methodology

In June 2002 all 19 practices in the former Sunderland South PCG, which covered inner city and urban areas, were invited to participate in a 2 week survey of errors using the error recording form. The background to the study, the definition of error,7 and instructions for data collection were discussed in detail with all practice managers. Copies of the error recording forms were despatched 4 weeks before data collection. Each practice received a reminder telephone call in the week before the 2 week data collection period. At the end of the recording period all error recording forms were returned to the authors for analysis. In addition, practices were asked to record the number of appointment slots (booked and urgent) available during the 2 week period. They were subsequently provided with an analysis of their own results, as well as comparative anonymised data for the other participating practices.

Acceptability of methodology

Two months after the end of the study we asked participating practices to comment on the acceptability of the process. An anonymous self-completion questionnaire containing seven statements (table 1) was developed from themes relating to barriers to error reporting identified in the literature.8,9,20 Practice managers distributed these to all clinical and non-clinical staff in the practices, who were asked to respond to each statement using a 5 point Likert scale ranging from “strongly agree” to “strongly disagree”. Space was available for additional comments.

Table 1

Responses to acceptability questionnaire

Data analysis

Data received were used to produce the error classification. Quantitative data were analysed using SPSS version 10 for Windows.21 The Mann-Whitney U test (Z) and Pearson’s χ2 test were used to compare practice profiles and responses to the acceptability questionnaire.
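As a minimal illustration of the two tests named above, the same comparisons can be run in Python with SciPy rather than SPSS. The practice-level figures below are hypothetical placeholders, since raw data are not published here.

    # Illustrative only: the study used SPSS version 10; this sketch shows the
    # equivalent tests in Python/SciPy. All input figures are hypothetical.
    from scipy import stats

    # Hypothetical list sizes for participating and non-participating practices
    participating = [4976, 5500, 5943, 6800, 9200, 10400]
    non_participating = [2638, 2800, 3000, 4100, 5078]

    # Mann-Whitney U test comparing the two distributions of list sizes
    u_stat, p_value = stats.mannwhitneyu(participating, non_participating,
                                         alternative="two-sided")
    print(f"Mann-Whitney U = {u_stat}, p = {p_value:.3f}")

    # Pearson's chi-squared test on a 2x2 table (e.g. training status by
    # participation); counts are again hypothetical
    observed = [[6, 4], [3, 6]]
    chi2, p, dof, _ = stats.chi2_contingency(observed, correction=False)
    print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p:.3f}")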

RESULTS

Classification, methodology, and development of error recording form

Five doctors, one nurse, one pharmacist, and 11 administrative staff participated, and 65 events were recorded in eight of the 10 notebooks. Five broad themes were identified: prescriptions, communication, appointments, equipment, and clinical errors. Feedback from practice members was generally positive. It was noted that some errors occurred frequently and that some events were difficult to describe accurately. There was agreement that a simple error form listing the most common errors would be a better method of recording and would ensure that most events were captured, with space left to describe events that did not fit these categories. It was felt that 2 weeks was long enough for data collection while still maintaining interest and enthusiasm.

With the addition of an “other errors” category, the classification was used to create an error recording form which was tested in the same practice. Ten forms were used and 36 events were recorded on eight of these. The forms were felt to be more straightforward to use. Fewer errors were recorded in this 2 week period. This was attributed to a reduction in staff numbers because of holidays and a lack of enthusiasm from some of the doctors. Suggestions were made about the wording of the categories and a revised version of the classification and error recording form was produced.
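The resulting six-category form translates naturally into a simple record structure. The sketch below is hypothetical (the study used paper forms linked to work areas, not software) and shows one way the classification could be represented in Python.

    # Hypothetical representation of the error recording form; the study itself
    # used paper forms placed in work areas rather than given to individuals.
    from dataclasses import dataclass
    from enum import Enum

    class ErrorCategory(Enum):
        PRESCRIPTIONS = "prescriptions"
        COMMUNICATION = "communication"
        APPOINTMENTS = "appointments"
        EQUIPMENT = "equipment"
        CLINICAL_CARE = "clinical care"
        OTHER = "other"          # free text category added after the first pilot round

    @dataclass
    class ErrorReport:
        category: ErrorCategory
        occurrences: int = 1     # every occurrence was recorded, however trivial
        free_text: str = ""      # description for "other" errors

    # Example: a missing case note logged as a communication error
    report = ErrorReport(ErrorCategory.COMMUNICATION)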

Feasibility of methodology and frequencies of errors

Ten of the 19 practices agreed to take part in the study; of the nine that declined, five were “too busy” and four were “not interested”. Participating practices had a median list size of 5943 (interquartile range (IQR) 4976–10 400) compared with 3000 (IQR 2638–5078) for non-participating practices (Z = 2.53, p = 0.01). There were no significant differences in the number of whole time equivalent GPs, training status, or membership of the Royal College of General Practitioners between participating and non-participating practices.

A total of 163 people worked in these 10 practices: 39 (24%) GPs, 20 (12%) practice nurses, 81 (50%) reception staff, and 10 (6%) practice managers. The remaining 13 (8%) comprised a mixture of other professionals including health visitors, midwives, community psychiatric nurses, and pharmacists.

940 errors were recorded over the 2 week period (table 2). A total of 136 error recording forms were returned from the 10 practices, with a median of 15 (IQR 11–17) forms per practice; no errors were recorded on 17/136 (13%) of the forms. 42% (397/940) of errors were related to prescriptions, of which 6% (22/397) were attributed to medication errors. Communication errors comprised 30% (282/940) of the total, with missing case notes the most common source. Most equipment errors were computer related, and printers accounted for most of the remainder. 3% (24/940) of the errors detected were clinical; inaccurate note keeping (for example, incorrect patient demographic details) was the main source of error in this category. Analysis of errors recorded in the “other” category led to an additional subcategory of prescription errors (“inaccurate computer prescribing records”).

Table 2

Error classification and summary results

A total of 12 431 appointments were available during the 2 weeks, giving a crude error rate of 75.6/1000 appointments (95% CI 71 to 80). The median error rate for each practice was 72.1/1000 appointments (IQR 35–101).
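These headline figures can be reproduced directly from the published counts. The sketch below assumes a normal approximation to the Poisson count for the confidence interval, since the paper does not state which interval method was used; it recovers the reported 75.6/1000 (95% CI 71 to 80).

    # Reproducing the crude error rate and 95% CI from the published counts.
    # The interval method is an assumption (normal approximation to Poisson);
    # the paper does not say how its CI was computed.
    import math

    errors = 940           # errors recorded over the 2 week period
    appointments = 12431   # appointment slots available in the same period

    rate = errors / appointments * 1000            # per 1000 appointments
    se = math.sqrt(errors) / appointments * 1000   # SE of the Poisson count, scaled
    lower, upper = rate - 1.96 * se, rate + 1.96 * se

    print(f"Error rate: {rate:.1f}/1000 (95% CI {lower:.0f} to {upper:.0f})")
    # -> Error rate: 75.6/1000 (95% CI 71 to 80)

    # Category shares reported in the text
    for label, n in [("prescriptions", 397), ("communication", 282),
                     ("clinical", 24)]:
        print(f"{label}: {n/errors:.0%}")   # 42%, 30%, 3%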

Acceptability of methodology

One hundred and sixty three questionnaires were distributed among the 10 participating practices. Responses were received from 33% (54/163) of participants, representing nine practices; 37% (34/91) of responses were from administrative staff and 28% (20/72) from clinical staff (table 1). There were no significant differences in responses between administrative and clinical staff, although clinical staff tended to find the method used to detect errors less easy to understand. Responses were generally positive towards the method used in the study; 9% (5/54) felt it was disruptive and 8% (4/53) felt threatened by being asked to report errors.

DISCUSSION

We have developed a classification of errors containing six categories (prescriptions, communication, appointments, equipment, clinical care and “other” errors) and used it to test the feasibility of a method for recording staff reported errors in general practice. The method proved to be generally acceptable. An overall error rate of 75.6/1000 appointments was found, most of which were administrative, relating to prescriptions or communication. 13% (122/940) related to computers and this is important as more practices become computerised and less dependent on paper records. 5% (46/940) were clinical or medication errors. There have been very few studies describing errors in primary care despite the high volume of patient contacts. This study highlights the nature and relative frequencies of these errors.

Strengths

In a locality with no strong tradition of participating in research, over half the practices approached took part in the study. The method for recording errors was not disruptive to everyday work, and errors were recorded contemporaneously, reducing the problem of memory recall.22 Anonymous reporting, as used here, is known to reduce fear of reprimand8,9,20 and should improve reporting levels. Under-reporting of events is a problem for all methods of detecting errors and adverse events and may range in extent from 50% to 96%,2 suggesting that the true error rate is higher than we describe. Conversely, the same error may have been reported by more than one member of the practice team.

A broad definition of error was adopted in order to capture as many events as possible. This reduced the need for participants to analyse events closely before recording them, making the process quicker and less complicated. Reason’s definition7 rests on the failure of an action to achieve its intended outcome. Generally, the parties involved agree on what that intended outcome is, although there will be circumstances in which this is not the case.

Limitations

This was a pilot study conducted in a single city and so may not be generalisable, although we have no evidence to suggest that the participating practices are unrepresentative of inner city and urban practices in other parts of the UK. Smaller practices were less likely to participate, reflecting concerns about the additional work generated by the study. Smaller practices may be organised differently from larger ones, although there is no evidence to suggest that they offer poorer care,23 and these differences could have affected the overall results. There was a low response rate to the acceptability questionnaire. We have no information on non-responders, and the response might have been better if the questionnaire had been administered sooner after the end of the data collection period. The generally positive responses could reflect responder bias rather than a genuine reflection of attitudes, but this seems doubtful given the mixture of positive and negative statements in the questionnaire, its anonymity, and the fact that opponents of the study might have been more likely to express an opinion in order to prevent the process from being repeated.

Most of the errors recorded were administrative, which may reflect the higher proportion of participants with non-clinical roles. Administrative errors may also be perceived as less hazardous and therefore less threatening to report. One recent study, however, found similar levels of administrative errors, with the only fatal error being the failure to pass on a message.24 Because of the anonymous nature of the recording process we were unable to collect data on the comparative reporting rates of clinical and non-clinical staff, or on the consequences of reported errors. Information on the nature and outcomes of errors would enable the use of techniques such as significant event audit25 or root cause analysis9 to investigate causes and identify potential solutions. The observation period lasted only 2 weeks because we wished to maintain enthusiasm for the process. Although we have no reason to believe that the period chosen was unrepresentative, it is short in comparison with other studies in the field.

Other studies

In secondary care, a computer analysis of 2000 incidents and 800 adverse events captured in a large incident monitoring study produced a complex classification with 90 categories.26 In primary care, a recent US study developed a taxonomy of medical errors.24 Forty-two physicians reported 330 free text descriptions of events over a period of 20 weeks, which were classified into two broad categories: “process errors” and “knowledge and skills” errors. Process errors included administration, investigations, treatments, communication, and payments. Our classification was initially based on 101 events from the pilot study collected over a shorter time period, with additional information provided by the larger study and with a range of primary care workers collecting information. The two classifications are broadly comparable, except that ours adds a category for equipment errors. Our classification is unlikely to have reached saturation, and repeated use will allow additional information and modifications to be incorporated.

Key messages

  • Errors in general practice can be categorised into those involving prescriptions, communication, appointments, equipment, clinical care, and “other” errors.

  • Most errors occurring in general practice are administrative.

  • Using this system, the overall error rate among 10 general practices was 75.6/1000 appointments.

  • An anonymous self-report system for identifying errors is acceptable to general practitioners and their practice staff.

  • The value of detecting errors in general practice as a means of stimulating change and improving systems has yet to be determined.

An adverse event rate in primary care of 3.7/100 000 clinic visits has been reported in one US study.16 Anonymous reports were filed with a risk management centre from eight primary care clinics over a 5.5 year period, and only incidents resulting in, or having the potential for, physical, emotional, or financial liability were included. It is difficult to determine which of the errors in our study would have led to comparable events.

Implications

Risk management is an essential element of clinical governance. The importance of detecting and recording errors as a key step in learning from experience is emphasised in “An organisation with a memory” published by the Department of Health in 2000.3 Our study was conducted in partnership with the PCT clinical governance team as part of their risk management strategy. The method is acceptable and can be used to identify practice specific problems that require action. It can also highlight common areas for concern which the PCT can consider and act upon, disseminating the information and supporting lessons learned through a “no blame culture”.9 It can be used to measure the effect of changes introduced in response to initial findings or as a consequence of organisational change such as new staff members or new IT equipment.

There remains the need to evaluate the sensitivity and specificity of this method of error reporting and its stability with repeated use. Accuracy of reporting could be tested by confirmatory techniques such as direct observation.27 We did not try to quantify the patient experience of errors in general practice which could provide a valuable alternative perspective. Lastly, the usefulness of this process as an agent for stimulating change within general practices needs to be established.
