Evaluation of the effectiveness of an educational intervention for general practitioners in adolescent health care: randomised controlled trial

Commentary: Applying the BMJ's guidelines on educational interventions

BMJ 2000; 320 doi: https://doi.org/10.1136/bmj.320.7229.224 (Published 22 January 2000) Cite this as: BMJ 2000;320:224

Evaluation of the effectiveness of an educational intervention for general practitioners in adolescent health care: randomised controlled trial

  1. L A Sanci, fellow in adolescent health (sancil{at}cryptic.rch.unimelb.edu.au)a,
  2. C M M Coffey, epidemiologista,
  3. F C M Veit, adolescent physiciana,
  4. M Carr-Gregg, director of training and educationa,
  5. G C Patton, directora,
  6. N Day, principal research fellowb,
  7. G Bowes, professorial fellowa
  1. a Centre for Adolescent Health, Department of Paediatrics, University of Melbourne, Parkville, Victoria 3052, Australia
  2. b Centre for Health Program Evaluation, Faculty of Business Economics, Monash University, Clayton, Victoria 3168, Australia
  1. Correspondence to: L A Sanci
  • Accepted 7 October 1999

Abstract

Objective: To evaluate the effectiveness of an educational intervention in adolescent health designed for general practitioners in accordance with evidence based practice in continuing medical education.

Design: Randomised controlled trial with baseline testing and follow up at seven and 13 months.

Setting: Local communities in metropolitan Melbourne, Australia.

Participants: 108 self selected general practitioners.

Intervention: A multifaceted educational programme for 2.5 hours a week over six weeks on the principles of adolescent health care, followed six weeks later by a two hour session of case discussion and debriefing.

Outcome measures: Objective ratings of consultations with standardised adolescent patients recorded on videotape. Questionnaires completed by the general practitioners were used to measure their knowledge, skill, and self perceived competency, satisfaction with the programme, and self reported change in practice.

Results: 103 of 108 (95%) doctors completed all phases of the intervention and evaluation protocol. The intervention group showed significantly greater improvements in all outcomes than the control group at the seven month follow up except for the rapport and satisfaction rating by the standardised patients. 104 (96%) participants found the programme appropriate and relevant. At the 13 month follow up most improvements were sustained, the confidentiality rating by the standardised patients decreased slightly, and the objective assessment of competence further improved. 106 (98%) participants reported a change in practice attributable to the intervention.

Conclusions: General practitioners were willing to complete continuing medical education in adolescent health care and its evaluation. The design of the intervention using evidence based educational strategies proved an effective and quick way to achieve sustainable and large improvements in knowledge, skill, and self perceived competency.

Key messages

  • Firm evidence shows that deficits in the confidence, knowledge, and skills of doctors in adolescent health contribute to barriers in delivering health care to youth

  • Evidence based strategies in continuing medical education were used in the design of a training programme to address the needs of doctors and youth

  • The programme covered adolescent development, consultation and communication skills, health risk screening, health promotion, risk assessment of depression and suicide, and issues in management of psychosocial health risk including interdisciplinary approaches to care

  • Most interested doctors attended and completed the 15 hour training programme over six weeks and the evaluation protocol covering 13 months

  • Doctors completing the training showed substantially greater gains in knowledge, clinical skills, and self perceived competency than the controls; these gains were sustained at 12 months, and the objective measure of clinical competence in conducting a psychosocial interview improved further

Introduction

The patterns of health need in youth have changed noticeably over the past three decades. Studies in the United Kingdom, North America, and Australia have shown that young people experience barriers to health services.1-5 With the increase in a range of youth health problems, such as depression, eating disorders, drug and alcohol use, unplanned pregnancy, chronic illness, and suicide, there is a need to improve the accessibility and quality of health services for youth.3 6

In the Australian healthcare system general practitioners provide the most accessible primary health care for adolescents.7 Yet Veit et al surveyed 1000 Victorian general practitioners and found that 80% reported inadequate undergraduate training in consultation skills and psychosocial diseases in adolescents and that 87% wanted continuing medical education in these areas.4 8 These findings agreed with comparable overseas studies.9-11

Evidence based strategies for helping doctors learn and change practice are at the forefront of the design of continuing medical education.12 In response to the identified gap in training, an evidence based educational intervention was designed to improve the knowledge, skill, and self perceived competency of general practitioners in adolescent health. We conducted a randomised controlled trial to evaluate the intervention, with follow up at seven and 13 months after the baseline assessment.

Participants and methods

The divisions of general practice are regional organisations that survey the needs of, and provide education for, general practitioners in their zone. There are 15 divisions in metropolitan Melbourne. Advertisements inviting participation in our trial were placed in 14 of the 15 divisional and state college newsletters and mailed individually to all division members. The course was free, and continuing medical education points were available. Respondents were sent details of the intervention and the evaluation protocol and asked to return a signed consent form. Divisions and doctors were excluded if they had previously received a course in adolescent health from this institution.

Randomisation

Consenting doctors were grouped into eight geographical clusters by practice location to minimise contamination and to maximise efficiency of the delivery of the intervention. Clusters (classes) of similar size were randomised to intervention or control by an independent researcher.
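
As an illustration of this allocation step, a minimal Python sketch follows. It is not the study's actual procedure: the cluster identifiers and seed are invented for the example, and the independent researcher's exact method is not described in the paper.

import random

def randomise_clusters(cluster_ids, seed=None):
    # Shuffle the geographical clusters and assign half to intervention and
    # half to control; whole classes, not individual doctors, are the unit
    # of randomisation.
    rng = random.Random(seed)
    shuffled = list(cluster_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

# Example with eight hypothetical cluster identifiers:
allocation = randomise_clusters(range(1, 9), seed=2000)
print(allocation)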

Intervention

The box details the objectives, content, and instructional design of the multifaceted intervention. A panel comprising young people, general practitioners, college education and quality assurance staff, adolescent health experts, and a state youth and family government officer gave advice on the design.15 The curriculum included evidence based primary and secondary educational strategies such as role play with feedback, modelling of practice by opinion leaders, and the use of checklists.12 16 The six week programme was delivered concurrently to each intervention class by LS, starting one month after baseline testing (see figure on website).

Goals, content, and instructional design of intervention in principles of adolescent health care for general practitioners

Intervention goals
  • To improve general practitioners' knowledge, skill, and attitudes in the generic concepts of adolescent health to effectively gain rapport with young people, screen them for health risk, and provide health promotion and appropriate management plans

  • To increase awareness of the barriers their practices may pose for youth access and how these may be overcome

  • To understand how other services can contribute to the management of young people and how to access these in their locality

Intervention content (weekly topics)
  • Understanding adolescent development, concerns, and current morbidities, the nature of general practice, and yourself

  • Locating other youth health services and understanding how they work, and medicolegal and ethical issues in dealing with minors

  • Communication and consultation skills and health risk screening

  • Risk assessment of depression and suicide

  • Detection and initial management of eating disorders

Instructional design
Needs analysis
  • From previous surveys and informally at start of workshops

Primary educational strategy

Workshops for 2.5 hours weekly for six weeks

  • Debriefing from previous session

  • Brief didactic overviews

  • Group problem based activities and discussion

  • Modelling of interview skills by opinion leaders on instructional video

  • Role play and feedback practice sessions with adolescent actors

  • Activities set to practise in intervening week

  • Individual feedback on precourse evaluation video

Course book

  • Goals, objectives, course requirements, and notes

  • Suggested further reading

  • Class or home activities with rationale for each

Resource book

  • Reading material expanding on workshop sessions

Practice reinforcing and enabling strategies
  • Adolescent assessment chart for patient audit

  • Logbook for reflection on experience with the patients audited

  • Self assembled list of adolescent health services in local community

  • Availability of tutor (LS) by phone for professional support between workshops

  • Refresher session for group discussion of experiences in practice (six weeks after course)

Measures

Table 1 summarises the instruments used in the evaluation. Parallel strategies of objective and self reported ratings of knowledge, skill, and competency were used to ensure that findings were consistent.17 18 Participants' satisfaction with the course and their self reported change in practice were evaluated at 13 months. Any other training or education obtained in adolescent health or related areas was noted.

Table 1

Evaluation measures, their content, inter item reliability, and intraclass correlation within randomisation groups estimated at baseline


Clinical skills

Seven female drama students were trained to simulate a depressed 15 year old exhibiting health risk behaviour. Case details and performances were standardised according to published protocols19-21 and varied for each testing period. Doctors were given 30 minutes to interview the patient in a consulting room at this institution. An unattended camera recorded the consultation on videotape.

The standardised patients were trained in the use of a validated rating chart21 assessing their own rapport and satisfaction and discussion about confidentiality. These were completed after the interview while still in role. They were blind to the intervention status of the doctors, and no doctor had the same patient for successive interviews.

Two independent observers, blind to participants' status, assessed the taped consultations in the three testing periods. A doctor in adolescent health coded three items in the scale relating to medical decision making. A trained non-medical researcher assessed all other items. The chart was developed from two validated instruments for assessment of adolescent consultations21 and general practice consultations.22 23 Marks for both competency and content of the health risk assessment were summarised into a percentage score. The same observers were used in all three testing periods.

Self perceived competency

Two questionnaires were developed for the doctors to rate their comfort and their knowledge or skill with both process issues, including the clinical approach to adolescents and their families, and substantive issues of depression, suicide risk assessment, alcohol and drug issues, eating disorders, sexual history taking, and sexual abuse. Doctors also rated their consultation with the standardised patient on a validated chart,21 itemising their self perceived knowledge and skill.

Knowledge

Knowledge was assessed with short answer and multiple choice items developed to reflect the workshop topics. The items were pretested and refined for contextual and content validity. The course tutor, blind to group status, awarded a summary score.

Analysis

Statistical analysis was performed with Stata (Stata, Texas), with the individual as the unit of analysis. Factor analysis with varimax rotation was used to identify two domains within the comfort and self perceived knowledge or skill items: process issues and substantive issues. The internal consistency of all scales was estimated with Cronbach's α. Reproducibility within and between raters was estimated with one way analysis of variance, as was the intraclass correlation of baseline scores within each teaching group.
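
For readers who want the computational detail, the Python sketch below implements the two reliability statistics named above under stated assumptions: item scores arranged as a respondents-by-items array for Cronbach's α, and one array of baseline scores per teaching class for the intraclass correlation from one way analysis of variance. The analysis itself was performed in Stata; this sketch uses the standard textbook formulas, and the average class size used for unbalanced classes is a simplification.

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) array of scale item scores.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def icc_oneway(groups):
    # groups: one array of baseline scores per teaching class; returns ICC(1)
    # from one way ANOVA, using the average class size as a simplification
    # for unbalanced classes.
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = np.mean([len(g) for g in groups])
    grand = np.concatenate(groups).mean()
    msb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (len(groups) - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / sum(len(g) - 1 for g in groups)
    return (msb - msw) / (msb + (k - 1) * msw)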

The effect of the intervention was evaluated by regression of gain scores (score at seven month follow up minus baseline score) on intervention status, with adjustment for baseline and potential confounding variables. Robust standard errors were used to allow for randomisation by cluster. The sustainability of outcome changes in the intervention group between the assessments at seven and 13 months was evaluated with paired t tests.
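
A hedged sketch of this model, using Python and statsmodels in place of the Stata commands (which the paper does not reproduce), is shown below. The column names gain, intervention, baseline, and cluster are invented for illustration; the real models also adjusted for the demographic confounders listed in table 3.

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

def gain_score_model(df: pd.DataFrame):
    # Regress the gain score (seven month score minus baseline) on
    # intervention status, adjusting for baseline; robust standard errors
    # are clustered on class to allow for randomisation by cluster.
    model = smf.ols("gain ~ intervention + baseline", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["cluster"]})

def sustainability_test(score_7m, score_13m):
    # Paired t test of the intervention group's scores at the seven month
    # and 13 month follow ups.
    return stats.ttest_rel(score_13m, score_7m)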

Table 2

Demographic characteristics of general practitioners by intervention group. Numbers are percentages


Results

Participants

Newsletters and mailed advertisements to 2415 general practitioners resulted in 264 expressions of interest. Overall, 139 doctors gave written consent to be randomised. Attrition after notification of study status left 55 (73%) doctors in the intervention group and 53 (83%) in the control group, with an average of 13.5 (range 12 to 15) doctors in each class.

The age and country of graduation of the doctors in this study were similar to those of the national workforce of general practitioners.24 25 Female doctors were overrepresented (50% in this study versus 19% and 33% in the other reports). Table 2 describes the randomisation groups. There was imbalance between the groups in age, gender, languages other than English spoken, average weekly hours of consulting, types of practice, and college examinations.

Table 3

Multiple regression analyses of baseline and difference in scores on continuous outcome measures evaluating success of educational intervention at seven month follow up. Models include gender, age group, language other than English, type of practice, average hours worked per week, and college examinations taken. Difference scores are also adjusted for baseline score and training obtained elsewhere over the seven month period. Robust standard errors allowed for cluster randomisation. All scores out of 100

Table 4

Evaluation of change in unadjusted percentage scores for intervention group (n=54) from baseline to seven month follow up and from seven month to 13 month follow up using paired t tests. Values are mean (95% CI) unless stated otherwise


Compliance

One doctor dropped out of the intervention group. Overall, 44 doctors attended all six tutorials, eight missed one, and two missed three. In total, 103 of the 108 (95%) participants at baseline completed the entire evaluation protocol (see website).

Measures

The evaluation scales showed satisfactory internal consistency and low association with class membership (table 1). Satisfactory interrater agreement was achieved on the competency scale (n=70, r=0.70). The intrarater consistency for both medical and non-medical raters was also satisfactory (n=20, r=0.80 and 0.91 respectively).

Effect of the intervention

Table 3 describes the baseline measures and the effect of the intervention at the seven month follow up. All analyses were adjusted for age, gender, languages other than English, average weekly hours of consulting, practice type, and college examinations. Education in related areas during follow up was reported by 67% (34 of 51) of the control group and 41% (22 of 54) of the intervention group. The difference analysis was adjusted for this extraneous training and for baseline score, although the extraneous training did not affect any outcomes. The study groups were similar on all measures at baseline. The intervention group showed significantly greater improvements than the control group at the seven month follow up in all outcomes except the rapport rating by the standardised patients.

The contextual validity and applicability of the course were assessed by 48 of 53 doctors and rated positively by 46 (96%).

Follow up of the intervention group at 13 months

The intervention effect was sustained on most measures and improved further on the independent rater's assessment of competence (table 4). The crude rating of the confidentiality discussion by the standardised patients deteriorated at the 13 month assessment but remained significantly greater than at baseline. Overall, 98% of the participants reported a change in practice, which they attributed to the intervention.

Discussion

A six session course in adolescent health designed with evidence based strategies in doctor education brought substantial gains in the knowledge, skills, and self perceived competency of the intervention group compared with the control group, except for the rapport and satisfaction rating by the standardised patients. The changes were generally sustained over 12 months and improved further on the independent observer's rating of competence. Almost all participants reported a change in actual practice since the intervention.

These results are better than those reported in a review of 99 randomised controlled trials evaluating continuing medical education published between 1974 and 1995.12 Although over 60% of those trials had positive outcomes, the effects were small to moderate and usually confined to one or two outcome measures. In keeping with the recommendations of this review we adopted a rigorous design, clearly defined our target population, and used multiple methods for evaluating competence. Perhaps more importantly, the intervention design incorporated three further elements: the use of evidence based educational strategies, a comprehensive preliminary needs analysis, and the content validity of the curriculum, which was ensured by the involvement of both young people and doctors.

The participants clearly represented a highly motivated group of doctors. This self selection bias was unavoidable but reflected the reality that only interested doctors would seek special skills in this domain, and it conforms to the adult learning principle of providing education where there is a self perceived need and desire for training.12 26 27 We therefore established that the intervention is effective with motivated doctors. It is generally accepted that doctors with an interest in a topic will already have high levels of knowledge and skill, with little scope for improvement. This was not the case in our study: baseline measures were often low and improvements were large, confirming the need for professional development in adolescent health. The retention rate was excellent, possibly due in part to the role of a doctor in the design of the programme, in recruitment, and in tutoring.

Doubt remains as to whether improved competency in a controlled test setting translates to improved performance in clinical practice.28 High competency ratings are not necessarily associated with high performance, but low competency is usually associated with low performance.16 29 30

The rapport and satisfaction rating by the standardised patients was the only outcome measure apparently unresponsive to the intervention. The actors' ratings and character portrayals were standardised, and gender bias was controlled by using only actresses. Even with these precautions, three actresses scored differently from the rest, one had fewer encounters with doctors, and the subjective nature of the rating scale probably contributed to large individual variation. A trend towards improvement in the intervention group was noted, but our study lacked sufficient power to detect a difference. In other settings, the validity and reliability of competency assessments with standardised patients have been shown to increase with the number of consultations examined.31 32 Pragmatically, it was not feasible to measure multiple consultations in our study.

Errors in interrater measurement were minimised by using the same raters for all three periods of testing. The independent observer and patient were blind to study status but may have recognised the intervention group at the seven month follow up because of the learnt consultation styles. Other measures of competency were included to accommodate this unavoidable source of error.

Our study shows the potential of doctors to respond to the changing health needs of youth after brief training based on a needs analysis and best evidence based educational practice. Further study should address the extent to which these changes in doctors' competence translate to health gain for their young patients.

Acknowledgments

We thank the participating doctors, Helen Cahill (Youth Research Centre, Melbourne University), Dr David Rosen (University of Michigan), and Sarah Croucher (Centre for Adolescent Health).

Contributors: LAS, the principal investigator, initiated and conducted the intervention and wrote the paper. CC advised on recruitment, randomisation, pilot testing of instruments, and data analysis and helped write and edit the paper. FV provided advice, participated in pilot testing of instruments, provided the medical rater's assessment of doctors' skill, and helped edit the paper. MC-G helped to design and deliver the intervention and to edit the paper. GP advised on the study design and helped write and edit the paper. ND was a supervisor to LAS, advised on the evaluation methodology, and helped edit the paper. GB was the chief supervisor to LAS, advised on the intervention design and evaluation, and helped edit the paper. LAS will act as guarantor for the paper.

Footnotes

  • Funding The Royal Australian College of General Practitioners Trainee Scholarship and Research Fund and the National Health and Medical Research Council.

  • Competing interests None declared.


Commentary: Applying the BMJ's guidelines on educational interventions

  1. Jean Ker, lecturer in medical education (jsker{at}dundee.ac.uk)
  1. Clinical Skills Centre, University of Dundee, Ninewells Hospital and Medical School, Dundee DD1 9SY

    In the western world, healthcare systems are facing enormous changes driven both by political and economic forces and by increasing consumer expectations of competent and consistent quality health care. In response to these changes, medical education has become an increasingly important aspect of every doctor's professional life. Publishers have responded by including papers on medical educational issues with increasing frequency. This move has, however, required the development of guidelines for evaluating papers on educational interventions.

    This critique applies guidelines developed by the BMJ's education group, which were published in the BMJ on 8 May 1999.

    Guideline 1: General overview

    The commitment of the BMJ to publish more educational research makes the paper by Sanci et al an eminently suitable one for practising doctors interested in medical education.

    Adolescent health care is challenging not only for general practitioners but for healthcare professionals involved in service delivery at all levels. This paper shows how successfully continuing medical education can be incorporated into changes in service delivery.

    The principal steps of the educational intervention process are clearly outlined and can be generalised to other clinical settings, making the paper of interest to a wide readership. It contributes to the growing literature on the evaluation of educational interventions in the general practice setting by attempting to show sustained changes in practice performance after a brief programme of continuing medical education.

    The paper also follows the general style and guidelines for publication in the BMJ.

    Guideline 2: Theoretical considerations

    One of the purposes of the guidelines on evaluating educational interventions is to facilitate, through papers, readers' understanding of the teaching and learning process so that they can apply any relevant aspects to their own practice.

    In relation to this, the goals of this educational intervention are well described in the context of Australian general practice. The educational rationale was, however, rather brief in its explanation. An expanded discussion of the strategies used could have covered their advantages and disadvantages. Readers may be able to use some of the learning opportunities given, but their links to the goals were not explicit.

    Guideline 3: Study presentation and design

    A panel of stakeholders, including patients, was used to identify the content and design of the multifaceted intervention, which ensured the relevance of the intervention to healthcare practice, and this was described in detail. The study design, which ensured that standardised patients and observers were blind to the intervention status of the doctors, is commendable.

    In answering the questions posed in the guidelines, some concerns with the design are raised.

    The study is described as a randomised controlled study. A better and less misleading description would have been simply a randomised study, as it is often difficult to eliminate contaminants in an educational intervention. In fact, the bias described in the type of practice, the language spoken, and the age differences, as well as the college exams taken, calls into question the positive outcomes reported in the study.

    The lack of a pretest to identify whether the two groups were comparable in terms of knowledge also brings into question the final interpretation of the intervention. Purposive sampling based on a pretest and the variables described above would have been more appropriate and would have lent more meaning to the outcome.

    The statistical analysis is clearly shared with the reader and well described. The use of a multifaceted evaluation system based on recognised validated instruments reflects the guidelines for evaluating papers on educational interventions.

    Guideline 4: Discussion

    The discussion was structured in accordance with the guidelines, with a clear statement of the principal findings. The sustainability of the intervention could, however, have been highlighted, as it was a significant finding. The strengths and weaknesses of the study in relation to selection bias were well debated and justified.

    The discussion in relation to other studies was, however, only briefly addressed, referring to only one systematic review of strategies for continuing medical education. This could have been expanded to support some of the findings, particularly in relation to the rapport and satisfaction of the standardised patients as a measurement of outcome.

    The discussion did not begin to explore the implications for clinicians, other than to indicate a need to assess the health gain for patients from such interventions, and it did not discuss the difficulties of cost benefit analysis.

    The guidelines on evaluating educational interventions as applied to this paper enabled the reviewer to systematically address all relevant aspects of the intervention. What is not clear is how much weight should be placed on each guideline when deciding whether an article should be published.

    Footnotes

    • Website extra: The sample size calculation and a chart showing the flow of participants through the trial appear on the BMJ's website www.bmj.com
