Summary of main findings
Despite a lack of clear evidence, doubts about the quality of care delivered by small, and particularly solo, practices have frequently been raised in the past. This study shows that, under the QOF, differences in performance between practices of different sizes depend on the measure used. When quality is measured in terms of points scored, practices with fewer patients performed worse, on average, than those with more. However, when quality is measured in terms of achievement rates, the average performance of the smallest practices was only marginally worse than that of larger practices in year 1 (2004–2005), and the gap had closed by year 3. This discrepancy arises from the maximum achievement thresholds: when points scored is used as the measure of quality, the thresholds make it impossible to discriminate between higher-performing practices, whose actual rates of achievement may vary by up to 50%.
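The ceiling effect described above can be illustrated with a minimal sketch. The threshold values and maximum points below are illustrative assumptions, not the actual QOF indicator parameters: points rise linearly with the achievement rate between a lower and an upper threshold, and are capped once the upper threshold is reached.

```python
def points_scored(achievement_rate, lower=0.25, upper=0.50, max_points=10):
    """Map an achievement rate (0-1) to QOF-style points: zero at or below
    the lower threshold, linear between the thresholds, and capped at
    max_points at or above the upper threshold. Parameter values are
    illustrative only."""
    if achievement_rate <= lower:
        return 0.0
    if achievement_rate >= upper:
        return float(max_points)
    return max_points * (achievement_rate - lower) / (upper - lower)

# Two practices with very different achievement rates earn identical points:
print(points_scored(0.50))  # 10.0
print(points_scored(1.00))  # 10.0 - the 50% gap in achievement is invisible
```

Under this payment rule, any two practices above the upper threshold are indistinguishable on points, which is why points scored cannot discriminate between higher-performing practices.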
Smaller practices face several disadvantages under the pay-for-performance scheme. They tend to have relatively more patients with chronic disease, and until 2009 received less remuneration per patient under the scheme because of the formula used to adjust payments for disease prevalence.11 Given that practices are remunerated on the basis of points scored, with achievement beyond the maximum thresholds not rewarded, the payment system does not adequately recognise the achievements of high-performing practices, many of which are small. Smaller practices therefore had to work harder for relatively less financial reward under the scheme, and yet collectively their performance improved at the fastest rate.
A particular concern with physicians working alone is that they have greater opportunity to defraud the system, for example by inappropriately exception reporting patients or falsely claiming a target has been achieved, as they have no colleagues to monitor their behaviour directly. Fraud is difficult to monitor, but this study found that the smallest practices (predominantly single handed) had the lowest average rates of exception reporting. Patterns of population achievement (which includes exception-reported patients) were also similar to those for reported achievement, suggesting that single-handed practitioners were no more likely than their peers in larger practices to have inflated their achievement scores through inappropriate exception reporting. In addition, patterns of achievement for activities measured externally, such as control of glycated haemoglobin (HbA1c) levels in patients with diabetes, were similar to those for indicators measured and reported internally.
Despite the generally high performance of small practices, many had very low achievement rates. Practices with fewer than 3000 patients represented one-fifth of all practices but nearly half of the worst-performing 5% in terms of reported achievement. Although these poorly performing practices improved at the fastest rate, in year 3 it remained the case that a significant minority of small practices were apparently providing substantially poorer care than the national average. Small practices are more likely to be located in deprived areas and to be poorly organised,12 but these factors are only weakly associated with performance under the scheme, and practices with these characteristics are capable of high levels of achievement.13,14 Other factors must, therefore, be involved.
In 2004, one solution to the problem of poor quality of primary care would have been to close small practices. However, in addition to removing many of the worst-performing practices, it would have removed many of the best: over 45% of the practices in the top 5% for reported achievement had fewer than 3000 patients. A more logical approach would be to reduce variation, by bringing the worst-performing practices towards the level of the best. This appears to have occurred under the pay-for-performance scheme, through incentivising a systematic approach to a limited range of chronic diseases and publicly reporting performance. However, variation in achievement, and in exception reporting rates, remains strongly associated with practice size. This is partly a mathematical relationship: variation in all patient and practice characteristics, and in patient outcomes, will be inversely related to list size regardless of the actual quality of care provided. This has implications for how pay-for-performance schemes measure performance: each additional patient for whom a target is achieved or missed, whether because of the quality of clinical care provided or because of factors beyond the practice's control, has greater consequences for practices with fewer patients.
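The mathematical point above can be sketched with a simple simulation. The parameters (an assumed underlying achievement probability of 0.8, hypothetical list sizes of 50 and 500 eligible patients, and 2000 simulated practices) are illustrative assumptions: even when every practice delivers identical underlying quality, the spread of observed achievement rates is wider for smaller lists.

```python
import random

def simulated_rate_spread(list_size, true_rate=0.8, n_practices=2000, seed=1):
    """Simulate practices whose patients each meet a target with the same
    underlying probability, and return the standard deviation of the
    observed achievement rates across practices. Purely illustrative;
    all parameter values are assumptions."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n_practices):
        hits = sum(rng.random() < true_rate for _ in range(list_size))
        rates.append(hits / list_size)
    mean = sum(rates) / len(rates)
    var = sum((r - mean) ** 2 for r in rates) / len(rates)
    return var ** 0.5

# Identical underlying quality, but smaller lists show wider variation:
assert simulated_rate_spread(50) > simulated_rate_spread(500)
```

The spread shrinks roughly as one over the square root of list size, so small practices are over-represented at both extremes of any league table even before real differences in care are considered.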
Comparison with existing literature
Small practices in general, and single-handed practitioners in particular, come under regular pressure to join their colleagues in larger practices, and that pressure has intensified with the developments in general practice over the last decade, culminating in the 2008 Next Stage Review.4 Between 2004–2005 and 2006–2007, the number of single-handed practices in England decreased, particularly in more deprived areas,15 as over 2500 additional full-time equivalent physicians entered general practice. Despite this, previous research suggests that there is little relationship between the size of a practice and its ability to provide high quality care.16 Overall, some aspects of quality are associated with smaller practices, such as patient ratings of access or continuity of care, and others with larger practices, such as data recording or organisation of services.16–18 There is also no consistent association between practice size and differences in outcome, for example number of hospital admissions for asthma, epilepsy, or diabetes; avoidable admissions;19 or quality of care for patients with ischaemic heart disease.20
Implications for future research
The particular problems associated with single-handed practice — lack of peer review, and the risks of clinical isolation and of abuse of trust2 — need to be addressed. The principal question is whether they can be solved without abolishing single-handed status. Single-handed practitioners are subject to the same clinical governance and appraisal arrangements as those in group practices, receive the same monitoring from primary care trusts, and, since 2004, have been measured against the same clinical quality targets under the QOF. The present results suggest that small practices, most of which are single handed, achieve, on average, similar levels of performance to larger group practices, despite an arrangement that systematically disadvantages them, but a significant minority do have low rates of achievement, and the reasons for this require more attention. However, if we ask why the smallest practices often appear to be the worst, we should also ask why they often appear to be the best.