
Exploring reasons for differences in performance between UK and international medical graduates in the Membership of the Royal College of General Practitioners Applied Knowledge Test: a cognitive interview study
  1. Julie Pattinson1,
  2. Carol Blow2,
  3. Bijoy Sinha3,
  4. Aloysius Siriwardena4
  1. Community and Health Research Unit, School of Health and Social Care, University of Lincoln, Lincoln, UK
  2. AKT examination core group member, MRCGP examination, Royal College of General Practitioners, London, UK
  3. Lincolnshire GP Speciality Training Programme, Health Education England East Midlands, Nottingham, UK
  4. School of Health and Social Care, University of Lincoln, Lincoln, UK
  Correspondence to Professor Aloysius Siriwardena; nsiriwardena@lincoln.ac.uk

Abstract

Objectives International medical graduates (IMGs) perform less well in national postgraduate licensing examinations than UK graduates, even in computer-marked multiple-choice licensing examinations. We aimed to investigate the thought processes of candidates answering multiple-choice questions, considering possible reasons for differential attainment between IMGs and UK graduates.

Design We employed a semistructured qualitative design using cognitive interviews. Systematic grounded theory was used to analyse data from ‘think aloud’ interviews of general practitioner specialty trainees (GPSTs) while answering up to 15 live questions from the UK Membership of the Royal College of General Practitioners Applied Knowledge Test (AKT).

Setting East Midlands, UK.

Participants 21 GPSTs including 13 IMGs and 8 UK-trained doctors.

Outcomes Perceptions of participants on how they answered AKT questions together with strategies used or difficulties experienced.

Results We interviewed 21 GPSTs (8 female, 13 male, 13 IMGs, 14 from black and minority ethnic groups, age 24–64 years) in years 1–3 of training between January and April 2017. Four themes were identified. ‘Theoretical versus real-life clinical experience’: participants reported difficulties recalling information and responding to questions from theoretical learning compared with clinical exposure; rote learning helped some IMGs recall rare disease patterns. ‘Recency, frequency, opportunity and relevance’: participants reported greater difficulty answering questions not recently studied, less frequently encountered or perceived as less relevant. ‘Competence versus insight’: some participants were overoptimistic about their performance despite answering incorrectly. ‘Cultural barriers’: for IMGs these included differences in undergraduate experience, lack of familiarity with UK guidelines and language barriers, which overlapped with the other themes.

Conclusions The difficulties we identified in candidates when answering AKT questions may be addressed through training. IMGs face additional difficulties which impede examination success due to differences in educational experience, content familiarity and language, which are also potentially amenable to additional training support.

  • ethnicity
  • fairness
  • differential attainment
  • licensing examinations
  • Membership Royal College Of General Practitioners (MRCGP)
  • Applied Knowledge Test (AKT)

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • This is the first study exploring reasons for differences in performance between UK graduates (UKGs) and international medical graduates (IMGs) in a licensing examination using in-depth cognitive (think aloud) interviews.

  • We interviewed doctors during specialty training for general practice and used an inductive grounded theory approach.

  • The multiple-choice questions used during the interviews were selected from the UK Membership of the Royal College of General Practitioners Applied Knowledge Test on the basis that they showed differences or similarities in performance when comparing UKGs and IMGs based on previous test results.

  • The think aloud interviews enabled us to explore the thought processes of trainee doctors while answering these ‘live’ questions from the UK applied knowledge test in general practice.

Introduction

An increasing number and proportion of international medical graduates (IMGs), the term for doctors who gained their primary medical qualification (PMQ) overseas, most of whom are from low-income countries,1 are working in the health services of high-income countries such as the UK,2 North America3 and Australia.4 This group of doctors, which constitutes around a quarter of the medical workforce,1 faces particular challenges in training, support and assessment.5 An area of concern is the marked disparity in success rates of IMGs in medical licensing examinations which allow them to practise in their chosen specialty, often after lengthy programmes of specialty training.6–8

In the UK, for example, where general practitioners (GPs) form the largest single group of doctors working in the National Health Service (NHS), recent statistics show that while 77.6% of GPs gained their PMQ in the UK, 16.3% of doctors gained their PMQ abroad, including 6.1% from the European Economic Area.9 Differences have been highlighted between IMGs and UK graduates (UKGs) in their performance in the UK licensing examination for general practice, the Membership of the Royal College of General Practitioners (MRCGP),10 where statistics have shown differential pass rates favouring white British UKGs compared with IMGs and black and minority ethnic (BME) doctors.8 Success in the MRCGP is important because it is an endpoint examination where passing provides entry into general practice, and failure means abandonment of that career.11

Quantitative studies dominate the literature, examining potential causal relationships explaining differential attainment between demographic groups in different high-stakes examinations.12 Recent studies exploring potential causes of differential attainment have expanded the range of methods used to include qualitative designs.13–19

A range of factors related to experience, culture and ethnicity is thought to affect examination performance in IMGs. The concept of ethnicity is complex, implying shared origins or social background, and shared and distinctive cultures and traditions maintained between generations, leading to a sense of identity and group, often with a common language or religious tradition.20 Reported factors include limited knowledge of NHS systems, low attendance during training and lack of participation with peers.21 Other possible reasons include a doctor-centred approach to consulting,22 poor grasp of English language23 or lack of clinical knowledge and skills.24 Knowledge of failure rates among IMGs was also a concern for ethnic minority doctors.22

Both educational and social factors may be potential contributors to differential attainment. A major study, ‘Fair Training Pathways for all’,25 observed that IMGs’ inexperience with UK systems and cultural norms, and cultural differences impeding relationships at work, were significant risks for hindering progression. Interventions suggested included those addressing risks relating to: unconscious bias in trainers; adjustment to UK culture and systems; doctors’ integration in the workplace; bias in recruitment and assessment; and trainee anxiety about potential bias.25

There is general agreement that examiner bias or overt discrimination is unlikely to be the sole cause of differential performance in medical licensing examinations,12 and this is particularly the case in computer-based, machine-marked tests of knowledge. Therefore, further research exploring causes of differences in examination outcomes between IMGs and UKGs in knowledge tests is needed.26

We aimed to investigate how doctors in training answered knowledge test questions for a general practice licensing examination using cognitive (‘think aloud’) interviews to explore differences between UK and non-UK-trained doctors in their approach.

Research questions

What are the thought processes of doctors training in UK general practice when attempting to answer multiple-choice questions on applied knowledge from the national licensing examination? What are the differences in approach between UKGs and IMGs when answering test questions, and to what extent might these relate to differences in performance?

Methods

Design

We used a qualitative design employing cognitive (‘think aloud’) interviews27 to explore the thought processes of doctors in general practice training while answering a selection of ‘live’ knowledge test (single-best answer (SBA)) questions from the applied knowledge test (AKT), part of the MRCGP licensing examination for general practice.

The researcher followed an interview sequence in which they asked a target question and used verbal probing to obtain more specific information (eg, ‘tell me a little bit more about why you think that is easy/difficult’) before moving on to the next question. Target questions included, ‘Could you please talk me through in your own words how you perceive the standard introduction statement to the test and what it may or may not be telling you?’, seeking to establish comprehension of the question and of any complex instructions.

Context

The AKT is one of three components of the UK MRCGP licensing examination certifying UK family doctors’ fitness for independent practice.28 Other components of the MRCGP examination include a clinical skills assessment and workplace-based assessment, which together assess the curriculum for specialty GP training. The AKT is a 190 min, 200-item computer-delivered test, assessing knowledge of clinical medicine (80%), evidence-based medicine (10%) and administration (10%) relevant to UK general practice using largely SBA or best of five, but also extended matching questions, algorithm or data table completion, multiple best answer and free-text formats.8

We used 15 AKT SBA questions, accessed via the clinical lead for the MRCGP AKT examination, selecting questions which had been used at least once in recent tests. These included questions from all three subsections of the test: clinical medicine, evidence-based medicine and administration. Questions were also selected on the basis of known differences in performance between IMGs and UKGs, using the facility (the proportion of candidates answering correctly); in some questions the facility for IMGs was higher than for UKGs, and vice versa.
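To make the selection criterion concrete, the sketch below shows one way a question’s facility could be calculated separately for IMG and UKG candidates from item-level response data. It is a minimal illustration in Python: the record structure, field names and example values are hypothetical and are not drawn from the examination data or the methods used by the examination board.

```python
# Minimal sketch (hypothetical data): facility = proportion of candidates
# in a group who answered a given question correctly.
from collections import defaultdict

# Each record: (question_id, candidate_group, answered_correctly)
responses = [
    ("Q01", "IMG", True), ("Q01", "IMG", False), ("Q01", "UKG", True),
    ("Q02", "IMG", False), ("Q02", "UKG", True), ("Q02", "UKG", True),
]

totals = defaultdict(int)
correct = defaultdict(int)
for question_id, group, answered_correctly in responses:
    totals[(question_id, group)] += 1
    correct[(question_id, group)] += int(answered_correctly)

# Report facility per question and group, so questions where the two groups'
# facilities diverge (or coincide) can be identified for selection.
for key in sorted(totals):
    facility = correct[key] / totals[key]
    print(f"{key[0]} {key[1]}: facility = {facility:.2f}")
```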

Setting and participants

General practitioner specialty trainees (GPSTs) were invited to take part in the study via General Practice Training Scheme programme directors across the East Midlands region of the UK. The study was introduced to GPSTs during their weekly vocational training half-day educational programme and all those who volunteered to participate were offered an interview.

One researcher (JP) conducted interviews at postgraduate vocational training centres across the East Midlands between January and April 2017. Participants were asked to sign an informed consent form for the interview and a non-disclosure agreement relating to the questions seen. We sampled GPSTs purposively to obtain a balance of IMGs and UKGs, aiming for variation in participating doctors’ experience, ethnicity, age, sex and language proficiency.

Data collection

We used a systematic grounded theory29 approach to collect and analyse interview data inductively. Interviews lasting up to 1 hour were digitally recorded and transcribed verbatim. The first three interviews enabled us to pilot the interview schedule; no amendments were required for subsequent interviews. Participants were asked to think aloud30 while reading and answering the AKT questions provided on paper. The objective was to reveal participants’ thought processes involved in interpreting a question and arriving at an answer. We followed a verbal interview sequence, asking a target question and probing until the participant moved forward to the next question. Probes followed the Question Appraisal System,31 which consists of seven main categories designed to identify problems in the development of questionnaire items and which we adapted for the test questions: (1) problems with reading, (2) problems with instructions, (3) problems with item clarity, (4) problems with assumptions, (5) problems with knowledge/memory, (6) problems with sensitivity/bias and (7) problems with response categories. Although interviews were conducted individually, early interviews informed probing in specific areas during later interviews.

Analysis

We undertook thematic analysis supported by NVivo V.10.32 We conducted open coding, reading through transcripts several times before analysing each line by line, allowing grounded codes to emerge from the data while attempting to put aside any presuppositions.29 We created initial tentative labels for segments of text, not based on existing theory but guided by the meaning that emerged from the data. Identifying patterns grounded in the data generated a multitude of categories, which aided the identification of important concepts requiring further investigation. From the initial in vivo coding, just over 100 codes were identified.

The second stage of analysis, axial coding, allowed us to link codes together to form categories, with identification of categories informed by frequency of codes. The final stage was initiation of selective coding, when we coded data in relation to identified core variables. Constant comparison during data analysis helped identify new data coming through. We also compared and contrasted codes from IMGs and UKGs to look for similarities and areas of difference between these groups.27 This study was exploratory in nature and we continued interviewing participants until no new themes came through, confirming data saturation.

Initial in vivo coding,33 where investigators used participants’ individual wording and language to code a fragment of data, was performed independently by two researchers (JP and AS), and codes were subsequently compared to reach consensus. The interviewer (JP) also wrote reflective memos that helped with interpretation during data analysis.29 The interviewer (JP) was not a medical doctor and did not have access to the AKT answers until after the interviews were completed, to minimise bias.

Informed consent was obtained from all participants, who were informed that their data would be anonymised to prevent their identity being revealed.

Patient and public involvement

Members of the healthier ageing patient and public involvement group at the University of Lincoln were involved in initial discussions about the design and conduct of this study.

Results

We interviewed 21 GPSTs (8 female, 13 male), aged from 24 to 64 years, with two-thirds in their first year of specialty training and the other third in years 2 or 3, who agreed to participate from a total cohort of 72 trainees (29%). Of these, 13 participants were IMGs and 8 were UKGs. All IMGs and one UK-trained doctor were from a BME group (table 1).

Table 1

Participant characteristics

The main themes from the data were organised as follows: theoretical versus real-life clinical experience; recency, frequency, opportunity and relevance; competence and insight; and cultural barriers. These are described below and summarised in table 2.

Table 2

Grounded theoretical framework: categories from selective coding identifying reasons for differential attainment comparing UK and international medical graduates

Theme 1: theoretical versus real-life clinical experience

Classroom versus clinical experience

For all participants, a greater exposure to patients in clinical practice made it easier to recall information when answering AKT questions.

‘You need theory obviously but the practical exposure makes you remember because there are so many things to remember in medicine.’ [Female, IMG]

Clinical exposure to specialties or specialty topics

Limited clinical exposure to a range of specialty topics, including rheumatology, ophthalmology, paediatrics (managing medical conditions affecting infants, children and young people), gynaecology, sexual health and radiology, created difficulties in answering questions for all participants.

‘To be honest with this particular question, I haven’t seen a vulva inflamed with ulcers. It’s more my, the clinical approach that medical training has given me to say a patient presenting with these symptoms what is the likely cause’. [Female, BME UKG].

‘I haven’t actually come across this before in practice [osteoporosis assessment]. For me probably more theoretical. I’m not 100% sure of the answer to this.’ [Male, UKG].

IMG participants reported feeling at a greater disadvantage, because of less exposure to certain specialty topics at undergraduate level, and for some also during UK training.

‘Because for some people if you are coming from outside Europe, they don’t see any CT [Computed Tomography] in real life. The knowledge about it is just like my teacher taught me while standing in front of the board, in front of the classroom.’ [Male, IMG].

Textbook learning and memorisation

IMG participants demonstrated how ‘rote’ learning, defined as repeated rehearsal of verbal material, helped them to answer specific AKT questions compared with UKGs.

‘We are taught to memorise things. Even if they don’t make sense. The more you memorise, the better. So we used to memorise all the doses and names.’ [Female, IMG].

Some IMGs felt disadvantaged by theoretical and rote learning, since rehearsal stored the information without attaching meaning to it.

‘All my knowledge is theory based. So I don’t get that feel I know it.’ [Female, IMG].

Theme 2: ‘recency, frequency, opportunity and relevance’

Recency

All participants reported that lengthy intervals between learning and assessment negatively affected their ability to recall information to answer AKT questions.

‘I haven’t done maths or calculations since high school. Maybe sixteen years.’ [Male, IMG].

‘This is going back years of what did I learn in medical school about ophthalmology.’ [Female, BME UKG].

‘This national guidance, I haven’t looked specifically at recently. Perhaps looked at it a while ago.’ [Female, UKG].

IMG participants were at a greater disadvantage compared with UKGs because some had not revisited AKT topics since their undergraduate training abroad.

‘I’ve got no clue currently. I haven’t worked in paediatrics or come across this [UK training]. When I was in basic training seven years ago [overseas], that’s the time I read about vaccinations. I have forgotten.’ [Male, IMG].

‘Since I’ve been to this country, I’ve never had any training about ophthalmology. The only training I’ve had about ophthalmology was at medical school and it was just for briefly; two or three months. That’s it.’ [Female, IMG].

GPSTs are known to face greater difficulties if more than 9 years have elapsed since medical school or if their PMQ was obtained outside the UK.

Repeated exposure

For all, frequent exposure to a greater number of patients increased knowledge of different types of illness and reportedly increased recall when answering AKT questions.

‘Getting exposed to many different patients makes it easier to remember.’ [Female, IMG].

Limited opportunity for experience

Missed training opportunities, owing to a lack of rotation through particular specialties, impeded successful training for all participants; they had been unable to access a range of hospital departments to gain clinical experience in particular curriculum areas (eg, obstetrics and gynaecology, paediatrics).

‘I’m not going to be doing a rotation through that. So whatever I’m going to get out of it will be personal study and gynaecological patients that come into the practice that I see and discussion with my trainers’ [Male, IMG].

Gender barriers

Gender was a barrier to performance in specific AKT questions for all participants. Male IMG participants were less likely to have attended female patients during training.

‘I think female doctors in practice tend to see more gynae related issues. It’s less awkward to be examined by a female than by a male. I’ve also heard about risk of complaint against the male doctor to sensitive females.’ [Male, IMG].

‘I’ve certainly seen a few ladies in the last few weeks with vaginal discharge.’ [Female, BME UKG].

‘I don’t really deal with prostate problems much being a lady doctor.’ [Female, UKG].

Uncommon presentations

All participants reported difficulties answering AKT questions from topics that were uncommon clinical presentations in general practice, specifically in minor specialties.

‘Neurology, I’m not sure in regard to AKT, which is a paper exam. More like facing an actual patient. Neurology is a difficult specialty.’ [Female, IMG].

Statistical learning

All participants said that there were limited opportunities for learning the statistical aspects of the curriculum, but data interpretation and statistics were areas where IMG doctors felt they required additional training support because of limited opportunities to learn statistics at undergraduate level.

‘As an undergraduate I never went through statistics. No!’ [Male, IMG].

As a result, IMG participants found data interpretation questions difficult compared with UKGs.

‘We are not exposed to things like this (in undergraduate training). Statistics is not new to us but not in this context. It’s not in the context of writing it out and explain. Putting in a graphic or table form. Not something we are used to. Nothing like this. I don’t know (if I want to have a go at it)’. [Male, IMG].

IMG candidates worried about examination timing, which also dissuaded them from answering complex data interpretation or statistical questions.

‘I’ve done statistics before, not things they used in medicine and clinical trials, sensitivity and specificity. That was new to me.’ [Female, UKG].

‘I’m just thinking I’ve never come across such a question (statistics). Or such a table should I say. I haven’t seen this particular table before.’ [Female, BME UKG].

Relevance to general practice

All participants struggled with questions relevant to minor specialty areas, administration, or situations where a referral would be made for secondary or tertiary care treatment. These questions were felt to be irrelevant to the work of a GP, covering tasks expected to be carried out by other health professionals (eg, nurse, midwife or pharmacist) or specialists.

‘Tend to refer most of them to opticians. Who will then refer to ophthalmologists as required.’ [Male, IMG].

‘I think it’s more a nurse’s role. The midwife.’ [Male, IMG].

Statistical relevance

This was particularly the case for statistics where participants felt questions were less relevant to daily practice.

‘Most of the time (statistics) I don’t need it in real life’. [Female, IMG].

‘This is just looking at data. I would never be asked this question. I can’t imagine I would ever be asked this by a patient’ [Male, UKG].

Participants generally expressed a dislike for statistics questions and felt unsupported in preparing for these.

‘I certainly know some friends of mine who would have a heart sink moment with this question. Even though they trained in this country, because they hate maths. They would rather have a clever pharmacist work it out.’ [Female, BME UKG].

‘For all of us it’s all about stats. Probably we all chose medicine to avoid maths, I don’t know. For us it’s always the same. Everyone always talks about stats. Nobody likes it. We feel it’s very abstract. We feel it doesn’t relate to what we are doing at all. Medicine is more about treatment for us and stats should be left for the pharmacists or the researchers. For us it’s too abstract.’ [Male, IMG].

IMG participants in particular did not understand the relevance of learning statistics until they had begun training in the UK. This affected their motivation, confidence and attitude towards calculation-based questions.

‘So during my course of training I’ve had two day courses on statistics but I was thinking do I really need it? I wasn’t taking it seriously.’ [Male, IMG].

‘Because I studied overseas, we didn’t have much statistics. When we actually started studying for AKT, we got to know that we need to study statistics.’ [Female, IMG].

‘Already looking at this I’m getting scared. Looking at these numbers.’ [Male, IMG].

Exam scenarios versus real life

All participants reported how in routine practice they could access the British National Formulary or National Institute for Health and Care Excellence (NICE) guidelines, whereas this was not possible during the AKT, the examination setting itself being seen as a barrier.

‘This is more of an exam scenario because I would just have to be able to know it whereas in real life I can look it up with the patients.’ [Female, UKG].

‘I would find this question more appropriate for real practice if I had been asked a patient has this symptom and these are the possible investigations that you can do, which one would you do?’ [Female, BME UKG].

Participants felt that a drug calculation in the exam was unrealistic as part of a routine consultation.

‘More for exam. If I am sitting in a clinical setting with a ten minute consultation, what am I going to gain doing the maths while the patient is sitting there?’ [Male, IMG].

Theme 3: competence and insight

Perception of competence

Participants were sometimes overoptimistic when answering questions, presuming they had answered correctly when in fact they had responded incorrectly. The language used by overoptimistic participants related, for example, to gender, to liking or being excited about the question, to answering with great certainty, or to stating that an option was definitely correct.

‘Ok, drug side effects. I think I’m excited about this one. It has a lot to do with pharmacology drugs which I like. I like looking at the BNF and drug books just to look at the side effects. Just looking at this topic of drug side effects, even though I have not gone through it, I think I feel a bit confident… Definitely it’s going to be A’ [Male, IMG].

‘So immediately it’s a thirty year old lady and again I’m comfortable answering this question perhaps more than male doctors because I get this problem presented to me. Not exactly this case but I have had this type of problem before presented in clinics. Before even looking at the options, it strikes me as a risk of an STI.’ [Female, UKG].

Biased self-evaluations

Participants’ estimates of their knowledge, whatever their ethnic background, did not always correspond with their level of performance. Participants did not always recognise whether their decision was correct or incorrect, and some were worse than others at distinguishing between correct and incorrect responses. The factors that led participants to overestimate their knowledge (when answering incorrectly) related to having previously read about the topic or being familiar with it in clinical practice.

‘I like these ones (item stems, drug side effects) because they are straightforward. Only one can be correct….I’ve read about it. Done it in questions. Presented in surgery. I’m doing psychiatric now, and I’ve actually seen someone actually on a day to day basis’ [Male, IMG].

‘Yes, they are common drugs. All things I am familiar with. Atenolol, I prescribe every day.’ [Male, IMG].

Theme 4: ‘cultural barriers’

Unfamiliarity with the NHS

IMG participants reported additional challenges including adapting to NHS culture, the style of teaching and learning, and a new language.

‘People from administration were saying it will take a few days but it’s not a few days, its months for you. To get used to the system and learn. But it was a bit difficult. Especially the first year I would say.’ [Female, IMG].

‘If you are not familiar with the system, you don’t know what services are available.’ [Female, IMG].

Abbreviations

Abbreviations were identified as a particular problem.

‘And even the simplest things. Abbreviations for example. Talking about patients. We don’t use abbreviations a lot from where I come from.’ [Female, IMG].

National guidance

IMG participants described having to adapt from a disease-centred model of care in their home countries to a model that embraced guidance with which they had been unfamiliar overseas.

‘I think guidance are used more in the UK. So I still have to read a lot of guidance. Back where I trained, I don’t think national guidelines apply.’ [Male, IMG].

‘In some areas of Africa, we focus more or less on malaria and other infections. Some of the things we do know about. You give this. Give this. But over here everything is very rigid. So you couldn’t get to that NICE guideline of what is expected of you at a particular time.’ [Male, IMG].

IMG participants were unfamiliar with NICE guidelines before training in the UK, expressing concerns that guidelines were a ‘new thing’ for them.

‘Because I was trained overseas, the Ghana is quite different to what it is over here. Coming to work in the UK, you need to be working with the NICE guidelines. So I have not been, when I was in training, I have not been introduced to it. So it is a new thing I have been picking up after my graduation while working in the UK.’ [Male, IMG].

Item stems and options in the AKT referred to current UK national guidance, and IMG candidates felt that, without an adequate length of training in the UK, these types of questions were more difficult.

‘If I had come from overseas last year and I was sitting this exam this year, I probably will struggle because overseas we don’t follow UK guidance. This question is more relevant for people who were trained here. So guess ‘A’.’[Female, IMG].

Many participants had not read national guidance (NICE) on specific AKT topics and UKGs also lacked knowledge of specific guidelines.

‘Obviously I know the national guidance but I haven’t read the specific guidance of osteoporosis and menopause.’ [Female, BME UKG].

IMG participants highlighted a lack of teaching on NICE guidelines during their undergraduate and UK training, and said they required time to learn these.

‘Yes. It’s so much. Especially when you are a trainee: you are so busy with a lot of things. Especially if you are attached to a hospital job: you are so exhausted when you go home.’ [Female, IMG].

‘For the start of exam preparation, I probably need to cram a few guidance like…but it’s difficult to memorise what all the guidance are for everything.’ [Female, IMG].

For all participants, examination conditions differed from clinical practice, where doctors could refer to documents such as NICE guidelines.

‘It’s something that if I was with a patient, I would check the guidelines.’ [Female, UKG].

Examination format

IMG participants highlighted unfamiliarity with the question formats used. This was compounded by the difficulties of translating their thoughts from their native language when answering AKT questions, despite having competent to good levels of English according to their International English Language Testing System scores (table 1).

‘Because of that you may not get the right answer. Most of us who are not trained here when we read the question will have got a bit of three way translation. We understand the English but we are thinking in our own native language’. [Male, IMG].

‘I trained in Russia in Russian language [native language African], so sometimes I have to think aloud and try to process the information before I totally understand it.’ [Female, IMG].

‘Maybe the UK graduates studied here and easier for them: the language is their language. They understand things easily. For us, we have to convert and think in our language to understand because we are trained like that.’ [Female, IMG].

This prompted discussion about the additional time needed by IMG participants.

‘We take a little more time than them maybe. But I have seen UK graduates also failing AKT. So we can’t say that also’. [Male, IMG].

‘They need time for to understand what that means. But we haven’t got that. We are given the same time as someone whose language is English.’ [Male, IMG].

Discussion

This study used cognitive interviews for the first time to understand individuals’ thought processes when tackling questions from a multiple-choice licensing examination. The key themes of real-life clinical experience, familiarity and insight applied to both UK and IMG participants, while IMG participants experienced additional difficulties linked to differences in previous educational experience or familiarity with the UK NHS.

The purpose of the examination is to ensure that candidates have knowledge of UK general practice. The finding that differences between IMGs and UKGs arose from less exposure to or experience of certain topics, unfamiliarity with NHS systems or lack of deep learning therefore supports the current test content and format, but also suggests educational approaches which may help reduce the apparent differences in performance.

Differences were detected in IMGs’ learning styles and in their undergraduate training, for example, the lack of previous training in research and statistics. There were also differences for IMGs in their experience of health systems and requirements for practising in the UK such as the use of national guidelines. Furthermore, having English as an additional language also posed a challenge for some IMGs. Specific support to increase understanding of the health systems and guidelines could be helpful where this is revealed as a learning need. Additional support to strengthen language skills may also be helpful for some IMGs.

IMG participants revealed distinct learning styles (eg, rote learning, memorisation) which made them adept at certain questions and led to difficulty answering others compared with UKGs. IMG doctors’ reported difficulties adapting from didactic learning to other forms of learning, such as using portfolios or reflective writing,22 could be addressed through training programmes. Also amenable to educational input was the finding that IMG participants were less familiar with specific areas such as statistics and data interpretation, owing to less education in these curriculum areas; they stated that they lacked understanding of the relevance of statistics until they had trained in the UK.

All participants felt calculation-based questions were overly technical, but IMGs faced additional difficulties due to lack of familiarity with the context in which these were asked, and so avoided answering them or took longer over them. This raises the issue of extending examination time for IMGs. In February 2015, the MRCGP increased the AKT length by 10 min for all candidates to help reduce these time pressures, particularly for those candidates less proficient in the English language.

Our findings of a potential relationship between insight and competence, where both UK-trained and IMG participants who were overconfident about their answers got them wrong, are in line with Kruger and Dunning’s (1999) theory34 that a lack of skill means poorly performing individuals are less able to recognise their deficiencies, and this very same lack of skill also deprives them of the ability to recognise when particular decisions are correct or incorrect.35 Overconfidence may also lead to students making poor study choices which could impede their learning.36 Consideration of these problems and use of metacognitive strategies may allow individuals to gain self-insight, taking charge of their learning, enabling greater awareness of how they learn, evaluating learning needs, generating strategies to meet these needs and then implementing these.37 38

Cultural difficulties were also apparent in IMG participants, particularly during the early stages of training, when the challenges of adapting to a new culture and a different style of teaching, often in a new language, were most evident.18 22 39 40 IMG doctors may be less prepared for entering UK specialty training for general practice than UKGs, who have previously had 5 years of undergraduate medical training and an additional 2 years of foundation training in the UK, for example, in relation to learning about guidelines or adapting from a disease-centred to a patient-centred model of care.22 39

Communication and language difficulties in ethnic minority doctors and IMGs are also well documented across the literature.17 21 23 ‘Fair Training Pathways for all’ identified that, although IMG doctors were inexperienced with UK systems and cultural norms, these deficits were perceived as amenable to change through education.25

IMG participants stated that they had limited access to clinical training at undergraduate level, and this is reflected in other studies revealing deficiencies in IMGs’ experience of clinical training.22 40 41 IMGs may also face unconscious bias during training and assessment, communication challenges, poorer relationships with patients and staff, and lack of participation with peers.17 Seeking ways to enhance insight into performance through appropriate and supportive feedback may also positively influence learning.42 Assessing potential underperformance and addressing it early through specific educational strategies will also be helpful to IMGs.5 17 18 22

Strengths and limitations

This is the first study exploring reasons for differences in performance between UKGs and IMGs in a GP licensing examination using in-depth cognitive interviews. We identified barriers for participants irrespective of IMG status or ethnicity, but particular challenges for IMGs across all domains, especially in relation to cultural barriers. We used an inductive approach, collecting data until saturation was achieved, which helped reveal important issues and differences. Participants comprised a small sample of GPSTs, with two-thirds in their first year of training, so the results may not be generalisable to other specialties and may have been different for more experienced (second or third year) trainees.

Implications for future policy, research and practice

This study provides information about ways to practically support all GP trainees, including IMGs, by highlighting gaps in training and experience and by identifying areas where intervention may be helpful. The results also suggest wide differences in undergraduate experience which may disadvantage some doctors, particularly IMGs, for whom a standard 3-year training programme may be insufficient or unrealistic to meet their needs. IMGs may require additional help prior to or early during GP training, to build cultural and interpersonal competence and confidence,43 through familiarisation with NHS systems, clinical guidance, cultural or language differences and other areas where deficiencies in training, experience or learning approaches may leave them less prepared for licensing examinations compared with UKGs. The costs of this early support could be offset against the additional costs of examination failure and extensions to training. Our findings have increased knowledge of reasons why performance in the AKT may vary by candidate ethnicity. These results are relevant to GP specialty trainees, trainers and programme directors when designing courses and programmes, and also to those constructing tests. Cognitive interview methods could be applied in other examinations where differential performance is known to exist.

Conclusion

This study has identified reasons why ethnic minority doctors may perform differently in questions from a knowledge test for licensing, reasons which may also be pertinent for other assessments. Our findings may also inform interventions which help support IMGs to pass these assessments, such as a longer period of induction during UK training, addressing areas of particular difficulty or gaps in undergraduate experience, and targeted training to understand NHS systems.44 This study provides further understanding of the reasons for differential attainment and a basis for future research.

Acknowledgments

Our thanks to the participants and also to members of the Community and Health Research Unit and the AKT Core Group for their comments on the paper.

References


Footnotes

  • Contributors AS had the original idea for the study. All the authors contributed to the study design. JP obtained ethical approval, undertook recruitment and the interviews, and wrote the first draft of the paper. JP and AS coded and analysed the interview transcripts in NVivo. AS, CB and BS advised on subsequent drafts of the paper and all authors approved the final version of the manuscript. AS is guarantor for the paper.

  • Funding College of Social Science Research Fund, University of Lincoln.

  • Competing interests AS is research and development lead for assessment for the MRCGP examination and CB is a previous lead and current member of the MRCGP AKT Core Group.

  • Ethics approval This study received ethical approval from the University of Lincoln, School of Health and Social Care Research Ethics Committee.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement Data are unavailable due to the original conditions/restrictions of the ethics approval.

  • Patient consent for publication Not required.