INTRODUCTION
Relational continuity in general practice is associated with positive outcomes for patients, doctors, and health systems, including some of the most important outcomes in medical care, such as reduced hospital admissions and reduced mortality.1–4 In 2022, a key question became how to measure continuity of GP care. The Conference of Local Medical Committees (2022), the policymaking body for NHS GPs, passed a resolution that continuity should be included in a future NHS contract for GPs.5 The Select Committee on Health and Social Care (2022),6 reporting on The Future of General Practice, recommended that GP continuity be improved by measuring it in all practices by 2024.
Continuous measurement is important in quality improvement programmes, and achieving improvements in continuity requires effective measurement. If all practices reported a standardised measure of continuity, this could identify practices needing continuity support, as well as high-performing practices providing good models.
Different measures exist in continuity research. The calculation methods, advantages, and disadvantages of these for research have been described.7–9 Alternative measures have been promulgated by practices or NHS organisations. The Select Committee6 proposed that continuity be measured and reported quarterly in all general practices, using the Usual Provider of Care (UPC)10 or the St Leonard’s Index of Continuity of Care (SLICC).11
For a continuity measurement method to be useful, it needs to be simple for practices to use, to be easily understood by GPs and managers, and to capture meaningful continuity, ideally within a reasonably short timescale. Plans are already being made to measure continuity in English general practices but there are important differences between the various methods.
We compare the two methods recommended by the Select Committee and also consider the Bice–Boxerman COC Index,12 which is often used in research, and the Own Patient Ratio (OPR),11 currently used in some general practices.
RECOMMENDED MEASUREMENT METHODS
UPC
One of the measures recommended by the Select Committee6 is the UPC,10 a commonly used quantitative measure in continuity research. It is relatively simple to calculate and interpret. For each patient, the UPC score is the proportion of appointments or contacts with the most frequently seen GP.
The UPC requires a minimum number of consultations per patient, so it cannot be calculated for all consultations and patients. Including enough patients requires a sufficient timescale, which depends on attendance rates; for a general practice population, this is usually at least a year. For comparisons between groups or time periods, the mean of patient scores is often used as a summary statistic.
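As a minimal sketch of the calculation (assuming consultation records are available as a list of GP identifiers per patient, and using an illustrative minimum of three consultations, since the exact threshold varies between studies):

```python
from collections import Counter

def upc(gp_ids, min_consultations=3):
    """Usual Provider of Care score for one patient: the proportion of
    consultations with the most frequently seen GP.

    Returns None if the patient has too few consultations to be
    included. The threshold of three is illustrative only.
    """
    n = len(gp_ids)
    if n < min_consultations:
        return None
    most_seen_count = Counter(gp_ids).most_common(1)[0][1]
    return most_seen_count / n

def mean_upc(patients):
    """Mean of patient-level UPC scores, the usual summary statistic;
    patients with too few consultations are excluded."""
    scores = [s for s in (upc(g) for g in patients) if s is not None]
    return sum(scores) / len(scores) if scores else None

# Example: 4 of 5 consultations with GP 'A'
print(upc(["A", "A", "B", "A", "A"]))  # 0.8
```

Note how the exclusion of patients below the consultation threshold shapes both the timescale needed and the summary statistic.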
SLICC
The SLICC is the percentage of all patient GP consultations that are with their named/registered/personal/list-holding GP.11 It is a simple percentage and is quickly understood by GPs and staff. It is inclusive, making use of every appointment/contact, for every patient consulting, with every GP in the practice. It can be applied to short timescales, usually 1 month, which allows regular monitoring of continuity, and it can include face-to-face consultations, telephone consultations, or both combined. Table 1 compares the SLICC and UPC. Table 2 shows example results for patients and patient groups.
Table 1. A comparison of four of the methods for measuring continuity in general practices
Table 2. Worked examples of patient/patient group scores using the continuity measuresa
Despite being regarded as ‘similar’ to the UPC, the SLICC is not calculated at the patient level but at the level of a group of patients, usually the list of patients for each named GP. This measure can also be applied to entire practices or to specific groups of interest.
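Because the SLICC is calculated over a group of patients rather than per patient, a sketch of it looks quite different from the UPC. The following assumes consultations are recorded as (patient, GP) pairs and that each patient's named GP is known; the data shapes are illustrative:

```python
def slicc(consultations, named_gp):
    """St Leonard's Index of Continuity of Care: the percentage of all
    consultations in the group that are with the consulting patient's
    named/list-holding GP.

    consultations: iterable of (patient_id, gp_id) pairs.
    named_gp: mapping of patient_id -> named GP identifier.
    """
    consultations = list(consultations)
    if not consultations:
        return None
    with_named = sum(1 for patient, gp in consultations
                     if gp == named_gp.get(patient))
    return 100 * with_named / len(consultations)

# Example: 3 of 4 consultations are with the patient's own named GP
named = {"p1": "A", "p2": "B"}
visits = [("p1", "A"), ("p1", "A"), ("p2", "B"), ("p2", "A")]
print(slicc(visits, named))  # 75.0
```

Every consultation contributes, including single consultations, which is why the SLICC can be applied to a single month; restricting the (patient, GP) pairs to one named GP's list gives that GP's list-level SLICC.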
ALTERNATIVE MEASURES OF CONTINUITY
Continuity of Care index (COC, Bice–Boxerman)
The Continuity of Care (COC, Bice–Boxerman) index12 is often used in continuity research. This measure (included in Tables 1 and 2) incorporates the dispersion of consultations, with higher scores for patients who see fewer GPs. The COC score falls more steeply than the UPC when continuity is less than perfect, so GPs used to UPC measurements may find the lower values surprising.
Like the UPC, the COC requires a minimum number of consultations (usually three) for a patient to be included, which excludes many consultations. It therefore has similar limitations regarding timeframes and consultation rates. The mean is, again, often used as a summary statistic.
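The standard Bice–Boxerman formula, sketched below under the same illustrative data assumptions as before, also shows why the COC is lower than the UPC for the same patient: it squares the per-GP consultation counts, so dispersion is penalised more heavily.

```python
from collections import Counter

def coc(gp_ids, min_consultations=3):
    """Bice-Boxerman Continuity of Care index for one patient:

        COC = (sum(n_j^2) - N) / (N * (N - 1))

    where n_j is the number of consultations with GP j and N is the
    total number of consultations. A score of 1 means all consultations
    were with a single GP; the minimum-consultation rule mirrors the
    usual threshold of three.
    """
    n = len(gp_ids)
    if n < min_consultations:
        return None
    sum_sq = sum(c * c for c in Counter(gp_ids).values())
    return (sum_sq - n) / (n * (n - 1))

# 4 of 5 consultations with one GP: UPC would be 0.8, but COC is lower
print(coc(["A", "A", "B", "A", "A"]))  # 0.6
```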
Own Patient Ratio (OPR)
The Own Patient Ratio11 (included in Tables 1 and 2) is the percentage of a GP's consultations that are with patients on their own list. This can be useful for individual GPs, as it may be easier for them to make changes so that they see more of their own patients. However, it may not accurately reflect the patient experience, particularly if the GP's list size is too large for the number of sessions worked. Used in conjunction with the SLICC, the OPR enables GPs and managers to understand how the practice works.
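The OPR inverts the perspective of the SLICC: it is computed per GP rather than per patient group. A minimal sketch, again assuming illustrative data shapes (a list of patients seen by one GP, and the set of patients on that GP's list):

```python
def opr(gp_consultations, own_list):
    """Own Patient Ratio for one GP: the percentage of that GP's
    consultations that are with patients on their own list.

    gp_consultations: patient_ids of consultations done by this GP.
    own_list: set of patient_ids registered to this GP's list.
    """
    gp_consultations = list(gp_consultations)
    if not gp_consultations:
        return None
    own = sum(1 for p in gp_consultations if p in own_list)
    return 100 * own / len(gp_consultations)

# Example: 3 of the GP's 4 consultations were with their own patients
print(opr(["p1", "p1", "p2", "p3"], {"p1", "p3"}))  # 75.0
```

A GP with a high OPR but a low list-level SLICC is seeing mostly their own patients, while their patients are still often seen by others, which is the pattern one would expect when a list is too large for the sessions worked.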
Other measures
The Herfindahl–Hirschman index7,13 is similar to the COC, being a measure of the concentration of consultations among a group of providers, although it is calculated differently and is less used in healthcare research. The SECON,9 also used in research, incorporates the sequence of appointments, with higher scores for consecutive consultations with the same GP. This might interest practices studying their episodic continuity.
There are also some additional, largely unresearched, measures used by NHS organisations or software providers. These are often simple measures and sometimes focus on particular groups of patients such as frequent attenders. The OPR has sometimes been independently developed and used, but, without the SLICC, it lacks the patient perspective.
Another measure is the percentage of patients who reach a threshold percentage of consultations with one GP (either the most seen or their personal GP). This is, essentially, a different way of creating a UPC summary statistic. A similar measure has been proposed using the COC/Bice–Boxerman index, taking the percentage of patients who score 0.7 or higher.14
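Such a threshold summary is a one-line calculation on top of any patient-level score; the sketch below is illustrative, with the 0.7 default following the proposal for the COC:

```python
def pct_above_threshold(scores, threshold=0.7):
    """Percentage of included patients whose continuity score (for
    example, COC or UPC) reaches the threshold. Patients excluded by
    a minimum-consultation rule (score None) are ignored.
    """
    scores = [s for s in scores if s is not None]
    if not scores:
        return None
    return 100 * sum(1 for s in scores if s >= threshold) / len(scores)

# Two of four included patients score 0.7 or higher
print(pct_above_threshold([0.9, 0.6, 1.0, 0.5]))  # 50.0
```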
Some practices count the total number of GPs a patient has seen, which is more useful for studying continuity for frequent attenders. It can show practices how patient consultations are spread between doctors.
Patient surveys have been used to measure continuity in research and within practices. These include the Nijmegen Continuity Questionnaire and disease-specific continuity surveys.15 These have the potential to capture the patient perspective of relationship continuity, potentially more meaningful than quantitative measures based on appointment data. However, surveys are time consuming and costly. In England, the annual General Practice Patient Survey includes two questions that have been used as measures of continuity. These results correlated with the UPC in one study.16
MEANINGFUL MEASUREMENT OF CONTINUITY
Continuity is a proxy for the doctor–patient relationship and the associated benefits accrue when patients and GPs have repeated consultations together over time. Ideally, a measure of continuity should capture that long-term relationship. A meaningful measure therefore needs to either measure consultations over a long period of time or measure consultations where there is a reasonable expectation of a continuing clinical relationship.
Most measures require a minimum number of consultations, so that a high score implies the patient has seen their most-seen GP several times. The SLICC and OPR are different in that they assume the patient has, or will have, a clinical relationship with their named/list-holding GP. A single patient appointment included in 1 month is considered to be one of a series.
In English practices with personal lists,17,18 the contractually required named accountable GP19 is the GP who takes long-term clinical responsibility, making the SLICC and OPR straightforward and meaningful. The Select Committee6 has recommended that 80% of practices have personal lists by 2027. Currently, in some practices, the requirement for a named GP is seen as an administrative formality and patients are not encouraged to see their named GP,20 nor does the GP take long-term responsibility for the patient.
If the named GP is not the GP with whom the patient has (or is expected to have) a continuing therapeutic relationship (for example, if practices do not keep this field up to date after a change in GP), the SLICC and OPR are not very meaningful. Likewise, if there is high patient or GP turnover at the practice, a SLICC may not be meaningful, as the single consultations are less likely to build up to long-term continuity. Here, a measure that investigates continuity for patients with the most-seen GP (such as the UPC) may be more useful, particularly as this would also restrict the measurement to patients with a minimum number of consultations.
If the practice prioritises measuring the dispersion of appointments across GPs, the COC/Bice–Boxerman or total number of GPs seen may be more helpful. However, if these measures are used over too short a timescale, they are no longer meaningful, as only a very small number of patients will reach the appointment number threshold for inclusion.
If a practice is using a micro-team system in which continuity is expected to be with more than one GP, the SLICC and UPC are not likely to capture this well. The COC might then be a more meaningful index to use. However, it is possible to adapt the SLICC or UPC to calculate the proportion of appointments with either/any member of the micro-team.
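The micro-team adaptation mentioned above amounts to counting a consultation as continuous when it is with any member of the patient's team rather than one named GP. A hedged sketch, with illustrative data shapes (a mapping from patient to a set of team GP identifiers):

```python
def team_slicc(consultations, team_of):
    """Micro-team variant of a SLICC-style percentage: a consultation
    counts as continuous if it is with ANY GP in the patient's
    micro-team, rather than one named GP.

    consultations: iterable of (patient_id, gp_id) pairs.
    team_of: mapping of patient_id -> set of GP ids in their team.
    """
    consultations = list(consultations)
    if not consultations:
        return None
    with_team = sum(1 for patient, gp in consultations
                    if gp in team_of.get(patient, set()))
    return 100 * with_team / len(consultations)

# 2 of 3 consultations were with a member of the patient's micro-team
teams = {"p1": {"A", "B"}}
visits = [("p1", "A"), ("p1", "B"), ("p1", "C")]
print(round(team_slicc(visits, teams), 1))  # 66.7
```

The same either/any substitution applies to a UPC-style calculation by treating all team members as a single provider before finding the most-seen "GP".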
TIMESCALE
To be useful in general practice and for healthcare improvement, a measure should track changes in continuity over relatively short timescales. The Select Committee recommends quarterly reporting of continuity measurement.6 The SLICC and OPR can produce meaningful results for a calendar month, which makes them useful for regular monitoring of continuity and for statistical process control charts (Supplementary Figure S1), which are used in quality improvement to distinguish normal variation from significant changes.
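The supplementary chart is not reproduced here, but one common way to build a statistical process control chart for proportion data such as monthly SLICC values is a p-chart with three-sigma limits. The sketch below is illustrative of that general technique, not of the authors' specific chart; it assumes per-month counts of named-GP consultations and per-month totals.

```python
import math

def p_chart_limits(successes, totals):
    """Centre line and per-month three-sigma control limits for a
    p-chart of monthly proportions (e.g. SLICC as a fraction).

    successes: per-month counts of consultations with the named GP.
    totals: per-month total consultation counts (same length).
    Returns (centre line p-bar, list of (lower, upper) per month).
    """
    p_bar = sum(successes) / sum(totals)
    limits = []
    for n in totals:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma),
                       min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

# Three months of illustrative counts; points outside their limits
# would signal a significant change rather than normal variation
centre, limits = p_chart_limits([480, 450, 500], [800, 750, 820])
print(round(centre, 3))
```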
For other measures, up-to-date monitoring is more difficult. It is possible, using measures such as the UPC, to take the year to date as the timescale, then update this each month or quarter. This gives a cumulative measure and can determine improvement or deterioration in continuity levels. However, it does not have the immediacy of single-month measurement and practices may become discouraged that continuity does not appear to improve more rapidly. Statistical process control charts cannot be used with cumulative measures.
EASE OF USE
Many of the measures used in research may be too complex to calculate and understand in busy practices. The UPC has the advantage of being a straightforward measure. However, unless it is attached to a pre-specified GP for each patient, there is a statistical problem for patients with low numbers of consultations. With only two consultations (a common case), the score can only be 1 or 0.5, so the UPC is artificially inflated for patient populations with fewer consultations, particularly when a mean is used as the summary statistic. The UPC may be useful for identifying the most-seen GP, which might then aid practices in establishing personal lists.
The SLICC and OPR are easy to calculate and understand. Once a named GP is identified and recorded, these measures allow monthly measurement of continuity in a way that GPs and practice managers understand. The SLICC and sometimes the OPR are already used in several practices around the country,21 allowing for benchmarking. The Select Committee published results showing both good (>50% SLICC) and excellent GP continuity (>75% SLICC).21
CONCLUSION
Measuring GP continuity in all English general practices is now proposed. GP software clinical systems may soon be required to provide all English practices with the capability to calculate one or more continuity measures.
The principles of the different methods in use need to be clearly understood as different methods prioritise different features and can generate different figures for the same group of patients. There is a risk that software developers will produce measures that do not measure continuity in a way that is meaningful or statistically reliable.
When considering ease of understanding, and the capacity for use in quality improvement (statistical process control charts), the SLICC (possibly combined with the OPR) is likely to be the best measure to incorporate into clinical systems. However, for practices looking to establish personal lists, it may also be useful to have the ability to calculate a UPC at the individual patient level to identify the most-seen GP.
The Select Committee identified two methods of measuring GP continuity. As they differ, the optimum arrangement would be to use both.
Notes
Provenance
Freely submitted; externally peer reviewed.
Competing interests
The St Leonard’s Index of Continuity of Care (SLICC) was constructed by Denis Pereira Gray in 1973 and introduced in the St Leonard’s Practice in 1974. He coined the term ‘personal lists’ in 1979 and they were used by him and Philip Evans subsequently and to date. Kate Sidaway-Lee named the SLICC in 2019.
© British Journal of General Practice 2023