Editorials

Summative assessment in general practice

BMJ 1996; 313 doi: https://doi.org/10.1136/bmj.313.7058.638 (Published 14 September 1996) Cite this as: BMJ 1996;313:638
  Douglas Carnall, General practitioner
  206 Queensbridge Road, London E8 3NB

    Much to learn from new scheme

    This month sees the introduction of a new system of summative assessment of training for general practitioners in Britain. Under the old system registrars in general practice collected the signatures of hospital consultants and general practitioner trainers certifying that they had satisfactorily completed a minimum of three years of experience in approved training posts.1 3 Now, doctors who want to be eligible to become principals in general practice must satisfy the Joint Committee on Postgraduate Training for General Practice (JCPTGP) that they have adequate knowledge, consulting skills, and clinical competence for the role.

    Such knowledge and competence will be assessed by multiple choice questions and a videotaped assessment of consulting skills, together with a written report of practical work in general practice and a structured report of performance in practice from the registrar's trainer.2 The assessments are being organised at regional level, with the joint committee confining its role to inspection and standardisation.3

    Concern that the old system of assessment in Britain may have allowed registrars of poor quality to slip through the net drove the introduction of the new scheme. Pressure for a new system came from reports of individual patient tragedies at the hands of doctors who had only recently received their certificate from the Joint Committee on Postgraduate Training for General Practice,4 5 from the low rate of refusal of certification (only 16 of 6200 registrars were denied their trainer's signature on completion of the vocational year in practice during 1990-5 (T S Murray, unpublished data)), and from a survey showing that many general practice trainers regarded the certificate as insufficient proof of competence to practise.6

    The examination for Membership of the Royal College of General Practitioners (MRCGP) has been considered as the instrument of assessment, but its standard is one of excellence rather than minimum competence. The college believes that, while “in time it will be usual for new principals to hold the MRCGP, it should not be mandatory for entry to NHS general practice” (W M Styles, statement of chairman of council 1996). However, the college has modified its examination so that its new part A (clinical assessment and multiple choice questions) will enable successful candidates to gain exemption from similar components of their regional summative assessment (W M Styles).

    The region with most experience of summative assessment is west Scotland, where, for the past three years, all registrars have had to complete a pilot assessment package on which most new regional schemes will be based.5 Of the 359 registrars who completed this package, 17 were judged to have fallen below the level of minimum acceptable competence, but 10 of the 17 were signed up by their trainers as competent to practise. The authors of the west Scotland study suggest that this is further evidence that the close relationship between trainer and registrar militates against objective assessment, necessitating external quality control.

    Despite the perceived need for enhanced accreditation before registrars can start independent practice, many problems remain. Although basic competence is to be assessed against standard criteria, potentially enabling a 100% pass rate, the credibility of the exercise will rest on the willingness of the assessors to fail some registrars. If experience in west Scotland is broadly applicable to the rest of Britain, about 5% of registrars will fail the assessment each year and will have to be offered an extension of training.5

    Although all those concerned with the process want to ensure that failed registrars receive further training, in the absence of any regulatory changes the only way that a registrar could obtain funding for further training is to appeal to the secretary of state for health, supported by a recommendation from the joint committee. This situation is unlikely to change in the near future, as the Department of Health has yet to find parliamentary time to amend either the vocational training regulations or the “Red Book,” which governs payments for training in general practice.7 Because of this, the subcommittee that represents registrars nationally has opposed the implementation of summative assessment, while also expressing concern at its lack of validation and the poor communication over its implementation.8

    Although summative assessment has brought beneficial changes—far more registrars in west Scotland now have regular training in consultation skills using videotapes, for example—there are reasons to doubt whether its implementation will truly enhance education in general practice overall. By definition, doctors are good at passing exams. Batteries of tests are unlikely to worry most registrars but may postpone the moment at which “the mask of relaxed brilliance”9 is dropped to enable the development of patterns of adult learning.10 The registrar year is likely to be disrupted as examination based assessments command the energies of both registrars and those who educate them at the expense of other activities. Those who compete for funding from the regional budgets which bear the costs of assessment—estimated at £165 for each registrar (T S Murray, unpublished data)—may also feel that the resources could be better used on improving hospital training posts and formative assessment (regular structured feedback throughout training). An economic analysis of the pilot scheme has yet to be published.5

    General practice in Britain already had a system of summative assessment in the old trainer's report. Enhancing the objectivity of this report by setting explicit standards that the registrar must attain is a welcome development, and would encourage the use of formative assessments in all training posts. Like the other components of the assessment, its effects on patient care are unproved, but it is at least not potentially educationally deleterious. Until there is decent evidence that the new assessment predicts performance in practice, it should be optional. Such evidence is available for holders of the MRCGP, who are, for example, less likely to have complaints upheld against them.4 This may reflect the excellence of its syllabus, but also the fact that higher performing individuals enter for it in the first place. Trying to cut off the tail at the other end of this performance distribution is a futile exercise. General practice educators would do better to concentrate their efforts on shifting the entire distribution upwards by encouraging individuals to review their performance in relation to their peers in unthreatening educational settings, and to use existing professional performance review procedures to deal with extreme outliers.

    Elsewhere in the world, accreditation is followed by reaccreditation. As the leaders of the profession in Britain consider this logical step,11 they should consider the lessons learnt from the implementation of summative assessment. Perhaps the most important is that educational innovation is best informed by the experience of the assessed as well as the theories of the assessors.

    References
