‘... the idea was to prove at every foot of the way up that pyramid that you were one of the elected and anointed ones who had the right stuff and could move higher and higher ...’.
Tom Wolfe, in The Right Stuff (1979)
Almost every patient in the UK is affected by the capacity of the medical workforce to contribute to the delivery of primary care, and by the personal qualities, skills, and training of the GPs who make up that workforce. Little wonder, then, that the general practice workforce features so prominently on the political agenda.
In terms of workforce capacity, concerns are mounting in view of the demographics that predict unfilled retirement vacancies.1 The target set by the Department of Health that 50% of new medical graduates should be recruited to general practice each year2,3 is still recognised as important, but is proving difficult to achieve when the percentage of recent graduates naming general practice as their first choice remains constant at little over 20%.4 Health Education England (HEE) have reported an increase of only 95 recruits to GP training this year, and they have demanding targets to meet in terms of increasing numbers in training as well as enhancing training.5 Although the government’s mandate to HEE endorses extending GP training to 4 years, this has not yet become part of the business plan. With the quantity and quality of recruits a matter of considerable public and professional concern, it is worth unpicking four issues that affect the quality of the patient experience.
Firstly, there is the question of whether the ‘right’ people choose medicine as a career, and whether the ‘right’ medical graduates choose general practice. Secondly, which of the aspirant students and doctors should the medical schools and postgraduate training schools select for training?
Thirdly, as ‘the anointed ones’ complete stages in their training, how fairly and effectively are the judgements made on their competence and suitability? Finally, how well do these choices and judgements stack up in terms of how the chosen ones perform in their careers? There are two significant articles in this issue which contribute to our knowledge on the second6 and third7 of these issues.
INFLUENCING CHOICE AND ENHANCING RECRUITMENT
Addressing the failure to recruit more newly qualified doctors into GP specialist training requires a number of approaches. The parallel emphasis by HEE on supply and demand5 is intended to presage engineering whereby local education and training boards (LETBs) ensure that there is more capacity in terms of training placements for GP trainees, and this means reducing specialist training numbers. Career guidance has been recognised as an important but overlooked factor, and common sense (if not evidence) would suggest that professional morale and professional image are important if undergraduate and foundation placements in practice are to encourage rather than discourage recruitment. Medical schools can do more to ensure that students are positively orientated to general practice, something which newer medical schools appear to do better than some of the more established ones, and high-quality careers advice is often overlooked.8
THE NATIONAL SYSTEM FOR SELECTION FOR GP TRAINING
The collective effort on the part of the UK deaneries to address the lack of standardisation resulted in a national system for selection with machine-marked components, including situational judgement tests and clinical problem-solving tests, which contribute to shortlisting. The final stage involves attendance at a selection centre where situations pertinent to practice are simulated. Candidate reactions to this form of testing have been favourable.9 By showing in this issue6 that the predictive validity of the MRCGP shortlisting tests extended up to the end of training, and that the use of the selection centre offered incremental improvement, Patterson and colleagues have laid the foundations for research to test the impact of these new skills on real patients in practice. It is a pointer to the success of this selection system that it is now to be piloted in specialties outwith general practice.10
DEBATES AROUND THE MRCGP CLINICAL SKILLS ASSESSMENT
Just as it seemed that, slowly, we were getting some partial answers to important questions, we have been brought up short by the realisation that more information does not mean more consensus. Even as the eagerly awaited information emerged on the MRCGP clinical skills assessment (CSA), the divergent interpretations (by the same authors) of data on assessment performance in this cornerstone test of GP competence11,12 have excited much soul-searching inside and outside the Royal College of General Practitioners.
The contribution in this journal by Denney and colleagues7 is a welcome addition to the debate. On the one hand, a level of paradox similar to that evident in the reports of Esmail and Roberts11,12 is reported. Those who wish to engage with this complex topic have to be prepared to move from univariate to multivariate analysis. On the other hand, Denney et al were able to reach a point where they were confident enough to report that ‘examiners show no general tendency to favour their own kind.’
We are left in an uncomfortable position: if bias is not the explanation for such a strong disparity in assessment outcomes, then is the reason for this disparity desirable or undesirable in educational terms? Clearly it is undesirable that so many diligent and talented black and minority ethnic doctors from non-UK backgrounds are encountering heart-rending difficulties with this compulsory hurdle to accreditation. Their experience is disproportionate to that of white and UK-trained colleagues. But is the assessment the problem, or is it the training for, and the approach to, the assessment that merits our most stringent attention? It is desirable in educational terms to have assessments that detect doctor performance which, if undetected, would have an adverse impact on patient experience. I do not think we are yet in a position to be sure, and until we are, from an educational point of view, we should not rush to set aside such a well-researched and well-developed assessment as the CSA. The rapid responses to Esmail and Roberts’ BMJ article11 illustrate that this assessment is delivered by many caring, fair-minded, and highly trained examiners. For now, we have nothing better to replace the test, which is not to say that it should be immune to rigorous further development.
THE OUTCOMES THAT MATTER
The literature is sparse on how our best attempts to recruit, select, train, and assess GPs measure up in terms of beneficial impact on the patient experience, and we need to look to proxy measures. The postgraduate assessments in other specialties are one such measure, and McManus and colleagues report exciting work on physician training,13 whereby they developed the theoretical basis for a concept of the ‘academic backbone’. The evidence that earlier attainment at secondary school, and in undergraduate and postgraduate training, predicts performance up to and including entry to the specialist register is a demonstration of what can be shown using sophisticated analysis of large cohort data. This group conceive that performance in assessment is achieved by the development of structured and applied knowledge. This ‘cognitive capital’ or ‘medical capital’ now needs to be related to doctors’ performance in the care of patients.
CONCLUSION
With a hefty research agenda ahead, those who are interested in GP education can now get to work in teasing out ways to relate process to outcome in respect of recruitment and career advice; selection into medical school and onto postgraduate training schemes; and assessments in postgraduate training. It is becoming more possible to look for improvements in patient care that relate to all stages in the process of primary care workforce development.
Notes
Provenance
Commissioned; not externally peer reviewed.
© British Journal of General Practice 2013