British Journal of General Practice
Intended for Healthcare Professionals
Research

Decision support tools to improve cancer diagnostic decision making in primary care: a systematic review

Sophie Chima, Jeanette C Reece, Kristi Milley, Shakira Milton, Jennifer G McIntosh and Jon D Emery
British Journal of General Practice 2019; 69 (689): e809-e818. DOI: https://doi.org/10.3399/bjgp19X706745
Sophie Chima, research coordinator, Primary Care Collaborative Cancer Clinical Trials Group.
Jeanette C Reece, NHMRC research fellow, Centre for Epidemiology and Biostatistics.
Kristi Milley, national manager, Primary Care Collaborative Cancer Clinical Trials Group.
Shakira Milton, research assistant in implementation science, Centre for Cancer Research, Victorian Comprehensive Cancer Centre, University of Melbourne, Melbourne, Australia.
Jennifer G McIntosh, senior research fellow, Centre for Cancer Research, Victorian Comprehensive Cancer Centre, University of Melbourne, Melbourne, Australia.
Jon D Emery, Herman professor of primary care cancer research and NHMRC practitioner fellow, Centre for Cancer Research and Department of General Practice, Victorian Comprehensive Cancer Centre, University of Melbourne, Melbourne, Australia.

Abstract

Background The diagnosis of cancer in primary care is complex and challenging. Electronic clinical decision support tools (eCDSTs) have been proposed as an approach to improve GP decision making, but no systematic review has examined their role in cancer diagnosis.

Aim To investigate whether eCDSTs improve diagnostic decision making for cancer in primary care and to determine which elements influence successful implementation.

Design and setting A systematic review of relevant studies conducted worldwide and published in English between 1 January 1998 and 31 December 2018.

Method Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials were searched, and a consultation of reference lists and citation tracking was carried out. Exclusion criteria included eCDSTs used in asymptomatic populations and studies that did not involve support delivered to the GP. The most relevant Joanna Briggs Institute Critical Appraisal Checklist was applied according to the study design of each included paper.

Results Of the nine studies included, three showed improvements in decision making for cancer diagnosis, three demonstrated positive effects on secondary clinical or health service outcomes such as prescribing, quality of referrals, or cost-effectiveness, and one study found a reduction in time to cancer diagnosis. Barriers to implementation included trust, the compatibility of eCDST recommendations with the GP’s role as a gatekeeper, and impact on workflow.

Conclusion eCDSTs have the capacity to improve decision making for a cancer diagnosis, but the optimal mode of delivery remains unclear. Although such tools could assist GPs in the future, further well-designed trials of all eCDSTs are needed to determine their cost-effectiveness and the most appropriate implementation methods.

  • cancer
  • clinical decision support tool
  • early diagnosis
  • general practitioners
  • primary health care

INTRODUCTION

A timely diagnosis of cancer is critical, as delays are associated with poorer patient outcomes and survival rates.1,2 GPs play a key role in early cancer diagnosis, with 75–85% of cases first presenting symptomatically in primary care.3–5

The primary care interval describes the time from first symptomatic presentation to the GP, through to referral to a specialist.6 The length of this interval varies, with many patients presenting to their GP three or more times before referral.7 Consequently, interventions that assist GPs’ clinical decision making have the potential to improve the timeliness of cancer diagnosis and improve cancer outcomes.

Electronic clinical decision support tools (eCDSTs) are electronic systems that assist clinical decision making.8 Patient-specific information is entered into the eCDST by the GP or can be automatically populated from the patient’s electronic health record. Using validated algorithms, the eCDST produces recommendations, prompts, or alerts for the GP to consider. eCDSTs can be actively used during a GP consultation or may be designed to continuously mine data in the background.

The development of eCDSTs has been driven by the complex nature of a cancer diagnosis. Often, patients present to the GP with non-specific symptoms that have a low diagnostic value.9 Algorithms have been designed to apply epidemiological data on combinations of symptoms and test results, and prompt consideration of a cancer diagnosis based on cancer risk thresholds.10,11 eCDSTs have been proposed as a solution for cancers that are more challenging to diagnose in primary care because of their variable symptomatic presentation and limited specific features.12
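The threshold logic described above can be sketched in a few lines; this is a minimal illustration only, in which the risk estimate, the 3% default threshold, and the function name are all hypothetical and not taken from any of the cited tools:

```python
def cancer_risk_prompt(estimated_risk, threshold=0.03):
    """Sketch of a threshold-based eCDST prompt.

    estimated_risk: the patient's estimated probability of an underlying
    cancer for their combination of symptoms and test results, as would be
    produced by a validated algorithm. The 3% default threshold is a
    hypothetical value chosen purely for illustration.
    """
    if estimated_risk >= threshold:
        return "Consider urgent investigation or referral for possible cancer"
    return None  # no prompt shown below the threshold
```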

eCDSTs have been shown to improve both practitioner performance11 and diagnostic accuracy in simulated patients for a range of conditions, such as dementia, osteoporosis, and HIV.13 The effects of eCDSTs on referral behaviours have been summarised in a previous systematic review,14 but it did not investigate cancer diagnosis specifically; consequently, the role of eCDSTs in cancer diagnosis has not been adequately addressed.

This systematic review aimed to summarise existing evidence on the effects of eCDSTs on decision making for cancer diagnosis in primary care, and determine factors that influence their successful implementation.

How this fits in

Electronic clinical decision support tools (eCDSTs) improve practitioner performance and patient care, but their role in cancer diagnosis has not been adequately addressed. This review outlines the effectiveness of eCDSTs for cancer diagnosis and the factors affecting their implementation. Decision support tools have been proposed as an approach to reduce delays in diagnosis, particularly for cancers with non-specific symptom signatures. To the authors’ knowledge, this is the first systematic review of available publications to inform eCDST implementation in primary care for the diagnosis of cancer.

METHOD

A mixed-methods narrative review was conducted. The review was registered on PROSPERO (registration ID: CRD42018107219) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria were followed.15

Search strategy

Electronic searches were run across three databases: MEDLINE, EMBASE (Ovid), and the Cochrane Central Register of Controlled Trials (CENTRAL). The search strategy (available from the authors on request) included MeSH headings and word variations for three terms: ‘general practitioner’, ‘cancer’, and ‘electronic decision support’. All studies from 1 January 1998 until 31 December 2018 were included. Titles and abstracts were screened independently by two reviewers and any disagreements were resolved with a third researcher. To identify studies not found via the electronic searches, reference lists were manually checked, citation tracking was performed, and experts in the field were contacted. The corresponding authors from all the included studies were contacted via email to identify further studies or unpublished research.

Inclusion and exclusion criteria

Studies investigating an eCDST designed to aid decision making for a potential cancer diagnosis were selected. For inclusion in the review, the study had to report on a:

  • cancer diagnosis;

  • cancer referral; or

  • cancer investigation.

Healthcare utilisation and cost, practitioner performance, and other educational outcomes were also included in the study. As the mode and delivery of eCDSTs vary, studies using any form of electronic support that included algorithm-based prompts or recommendations were eligible. Tools that applied risk markers for prevalent undiagnosed cancer were included, as were qualitative studies if they evaluated barriers and facilitators to implementing eCDSTs for cancer diagnoses in primary care.

Exclusion criteria for qualitative and quantitative studies included:

  • decision support used for cancer screening in asymptomatic populations, including tools that incorporated risk factors to predict future incident risk of cancer;

  • studies that did not involve decision support designed for use in primary care;

  • articles not in English;

  • unpublished work;

  • editorials; and

  • academic theses.

Assessment of bias

Several Joanna Briggs Institute (JBI) Critical Appraisal Checklists were used, depending on the study design of the articles included in the systematic review, to assess the risk of bias of included studies.16 The authors used the following JBI Critical Appraisal Checklists:

  • Checklist for Quasi-Experimental Studies (non-randomised experimental studies);

  • Checklist for Qualitative Research;

  • Checklist for Diagnostic Test Accuracy Studies; and

  • Checklist for Randomized Controlled Trials.

Studies with a percentage score of >80% were considered to have a low risk of bias; those with a percentage score of 60–80% were considered to have a moderate risk of bias.
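These bands can be expressed as a simple classification. The following sketch treats scores below 60% as high risk, which is an assumption for illustration, as the text defines only the low and moderate bands:

```python
def classify_risk_of_bias(score_pct):
    """Map a JBI checklist percentage score to a risk-of-bias category.

    >80% = low and 60-80% = moderate follow the review's definitions;
    treating scores below 60% as 'high' is an assumption for illustration.
    """
    if score_pct > 80:
        return "low"
    if score_pct >= 60:
        return "moderate"
    return "high"  # assumed category for scores below 60%
```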

Study design

Data were extracted and analysed separately from included studies, before all results were combined into an extensive narrative synthesis. Segregated methodology was used to synthesise the evidence while maintaining the standard distinction between quantitative and qualitative research, in line with recommendations.17,18

Data extraction and synthesis

Data extraction was performed, and cross-checked by the two reviewers who screened the articles. For quantitative studies, data extraction was based on an adapted version of the Cochrane Effective Practice and Organisation of Care data-collection checklist.19 For qualitative studies, data extraction was performed by a reviewer who screened the original articles, together with a third author, guided by the approach of Noblit and Hare.20 This involved identifying the major themes from primary papers, determining how they are related, and building on themes to interpret overarching theories and understandings.20

Categorisation of extracted themes was based on the normalisation process theory (NPT) framework,21 a theory used to describe factors and actions that promote or impede the embedding of new technologies into an existing practice. NPT uses four constructs to explain the processes that affect the integration and adoption of new technologies:

  • coherence;

  • collective action;

  • cognitive participation; and

  • reflexive monitoring.22,23

Using these constructs, the barriers and facilitators identified in the qualitative studies were mapped onto the NPT framework to explain the results.20

RESULTS

In total, 1065 titles were identified and 66 full-text papers reviewed for eligibility (Figure 1). Twelve articles, reporting on nine individual studies, fulfilled the selection criteria: eight quantitative,24–31 three qualitative,32–34 and one mixed methods.35 Characteristics of the included studies are summarised in Table 1.

Figure 1. PRISMA flow diagram of literature search. eCDST = electronic clinical decision support tool. PRISMA = Preferred Reporting Items for Systematic Reviews and Meta-Analyses.

Table 1. Characteristics of included studies

The design of each eCDST and key results are summarised in Table 2. Outcomes included:

  • appropriateness of care (n = 5);

  • diagnostic accuracy (n = 1);

  • time to diagnosis (n = 1);

  • cost-effectiveness (n = 1);

  • process measures (n = 1); and

  • qualitative (n = 4).

Table 2. Quantitative study descriptions and results

Appropriateness of referral was defined as the proportion of referred patients who were diagnosed with cancer. Approaches to implementing an eCDST within the GP workflow varied, but the tools were designed either to be used in real time during the consultation or to be applied outside of the consultation to flag potential cases of cancer.

Quantitative synthesis of the included studies was not possible because of significant methodological and clinical heterogeneity. Results for the risk of bias assessment for each included study are given in Table 2. In summary, four quantitative studies had a low risk of bias, including the quantitative component of the mixed-methods study,26–28,35 one had a moderate risk,25 and in two the risk of bias was high.24,29 Of the qualitative studies, three had a low risk of bias32–34 and one a high risk, including the qualitative component of the mixed-methods paper.35 High risk of bias did not influence the inclusion of articles in the review.

Quantitative

eCDSTs used during GP consultation

Three studies26,28,35 examined eCDSTs that were designed to be used during the GP consultation with patients. Jiwa et al35 assessed whether an electronic referral pro forma used when patients present with bowel symptoms improved the appropriateness of referral for colorectal cancer (CRC). Control practices received an educational outreach visit by a local colorectal surgeon. The pro forma had no impact on the appropriateness of referral; however, it did improve the information and quality of the referral in comparison with the standard referral used by the control group.

Logan et al26 used a computer-generated prompt that recommended further investigations to rule out CRC when full blood-count results indicated iron deficiency anaemia. Control practices received laboratory results as per ‘usual care’. The prompts had no effect on the appropriateness of referral or investigation for CRC but, instead, led to increased prescribing of iron at an adequate dose.

Walter et al28 assessed MoleMate, a diagnostic tool for melanoma, which incorporates a scoring algorithm with spectrophotometric intracutaneous analysis (also known as SIAscopy) of pigmented skin lesions. MoleMate did not improve the appropriateness of referral, due in part to the high sensitivity and relatively low specificity set for the eCDST.28 Despite this, a health economic analysis found that, in UK practice, MoleMate was likely to be cost-effective compared with current best practice with an incremental cost-effectiveness ratio of £1896 per quality-adjusted life year gained.31
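The incremental cost-effectiveness ratio (ICER) reported for MoleMate is the extra cost divided by the extra quality-adjusted life years gained relative to best practice. A minimal sketch of the calculation follows; the input figures are hypothetical, chosen only to illustrate the arithmetic, and are not Wilson et al's actual model inputs:

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by the new strategy over the standard one."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Hypothetical per-patient figures for illustration only:
# £18.96 extra cost buying 0.01 extra QALYs gives £1896 per QALY.
ratio = icer(cost_new=118.96, cost_std=100.00, qaly_new=1.01, qaly_std=1.00)
```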

eCDSTs used outside the GP consultation

Two studies assessed eCDSTs designed to be applied outside of the consultation, identifying patients at increased risk of an undiagnosed cancer. Murphy et al27 applied electronic triggers to identify ‘red-flag’ symptoms in patients who had presented to their GP in the previous 90 days, without documented follow-up. There was a statistically significant reduction in time to diagnostic evaluation for CRC and prostate cancer in the intervention arm.
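The trigger approach described above can be sketched as a simple filter over consultation records. This is an illustrative sketch only; the field names and data layout are hypothetical and do not reflect Murphy et al's actual trigger implementation:

```python
from datetime import date, timedelta

def flag_for_review(records, today, window_days=90):
    """Return IDs of patients with a red-flag finding recorded within the
    look-back window and no documented follow-up (hypothetical schema)."""
    cutoff = today - timedelta(days=window_days)
    return [r["patient_id"]
            for r in records
            if r["red_flag"]
            and r["visit_date"] >= cutoff
            and not r["followed_up"]]
```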

Kidney et al25 evaluated an eCDST that searched the patient’s electronic medical record and created a list of patients at increased risk of an undiagnosed CRC based on National Institute for Health and Care Excellence guidelines for urgent referrals. A third of all patients flagged by the algorithm were judged to need further review by their GP; 1.2% were subsequently diagnosed with CRC.

Clinical images

Two studies24,29 — both of low quality — that tested eCDSTs to support GPs’ assessment of clinical images of melanoma and non-melanoma skin cancer were identified. Both studies showed an improvement in decision making for cancerous and non-cancerous lesions.

Qualitative

Jiwa et al35 and Kidney et al34 conducted a qualitative sub-study within their quantitative evaluation of the eCDST, both involving semi-structured interviews with GPs. Chiang et al33 and Dikomitis et al32 conducted exploratory qualitative studies of GPs’ experiences using eCDSTs in practice, one of which used simulated consultations. Both studies used NPT as a framework. The themes and constructs extracted from each qualitative study are outlined in Table 3.

Table 3. Qualitative results: normalisation process theory

Overarching themes

Three core constructs were identified in the synthesis:

  • trust;

  • the GP’s role as a gatekeeper; and

  • the impact on workflow.

Mistrust of the eCDST was driven by the disagreement between the tool’s recommendations and the GP’s assessment, ambiguity of underlying guidelines embedded within the eCDST, and a desire to understand the evidence that underpinned the clinical recommendation.32,33,35

The GP’s role as a gatekeeper was identified as a barrier due to conflicting referral thresholds between the eCDST and the GP, with GPs concerned about potential over-referral of patients at low risk.32,34,35 Finally, for eCDSTs designed to be used during consultation, there were challenges due to disruption of the usual workflow and the generation of additional tasks in an already-busy appointment.32–34

DISCUSSION

Summary

This systematic review evaluated the efficacy of eCDSTs used for cancer diagnosis in primary care, and described factors that influence effective implementation. Three studies showed improvements in decision making related to cancer diagnosis,24,25,29 one showed reduced time to diagnosis,27 and three demonstrated positive effects on secondary clinical or health service outcomes such as prescribing,26 quality of referrals,35 or cost-effectiveness.31 Key qualitative findings related to issues of trust in the tool, the impact on a GP’s role as gatekeeper, and potential negative effects on GP workflow.

eCDSTs that were used outside of GP consultations appeared to be more effective than tools used in real-time during consultation; they seemed to have the ability to detect patients at an increased risk of an undiagnosed cancer, leading to improvements in clinical assessment and time to diagnostic assessment. However, the implementation issues shifted from a disruption of the GP workflow during consultation to the ability to successfully convey the results of the eCDST to GPs outside of the consultation and ensure they acted on the information.25,27 Communicating this information to GPs did not always lead to follow-up of the patient.

Strengths and limitations

To the authors’ knowledge, this is the first review to evaluate the efficacy of eCDSTs used for cancer diagnosis in primary care and examine factors influencing their effective implementation. Rigorously conducted, it provides a summary of available findings to inform eCDST implementation in primary care for the diagnosis of cancer.

However, there are some limitations. This review is limited by the small number of included studies and the lack of large-scale randomised controlled trials: eCDSTs remain relatively under-utilised for cancer diagnosis. Further, none of the included studies looked at outcomes such as survival rates, and only one evaluated time to diagnosis;27 this highlights the challenges of conducting trials of diagnostic interventions for relatively rare conditions in primary care. Much larger implementation trials with long-term follow-up of cancer diagnoses, stage, and survival are required to determine the magnitude of effect of eCDSTs on cancer outcomes.

Comparison with existing literature

As with this work, a 2011 systematic review by Mansell et al36 did not identify any studies that examined a delay in referral of cancer as a primary outcome; all 22 included studies used a proxy measurement, such as GP knowledge or quality of referrals.

Mistrust of the eCDST was driven by the disagreement between the tool’s suggestion and the GP’s assessment, ambiguity of guidelines, and a desire to understand the underlying research underpinning the clinical recommendation.32,33 GPs reported that the eCDST compromised their autonomy, with the eCDST recommendation being perceived as ‘the final word’ rather than support at the time of decision making.33 This is consistent with recent evidence from the GUIDES implementation guidelines for eCDSTs.37 These guidelines comprise a checklist of factors that were found by patient and healthcare users to influence the effectiveness of eCDST implementation. The GUIDES checklist highlighted that the most important factor for successful implementation is ‘trustworthy evidence-based information’.37

As gatekeepers, GPs are under considerable pressure to balance the use of limited and costly referrals for tests against the risk of missing a cancer diagnosis.38 There are conflicting thresholds when comparing an eCDST’s output with the GP’s capacity to refer everyone who is recommended.32–34 The International Cancer Benchmarking Partnership, a collaboration between six countries, identified that a stronger gatekeeper role, and different cancer-risk thresholds for referral, were associated with poorer cancer survival.39 Concerns about resource constraints and unwillingness to refer differed by country, but were found to play a large role in decision making in Australia and the UK.39

The usability and acceptability of eCDSTs were dependent on several competing issues, such as disruption of workflow, prompt fatigue, and time. There is growing recognition in the literature that the technology being developed must integrate seamlessly into the current work practices of those using eCDSTs.37 Problems with an eCDST’s functionality and its effect on workflow could be mitigated by a consistent feedback loop between GPs and tool designers.40 In the included studies, there were no practices in place to monitor and adapt the eCDSTs for use in consultation, and no opportunity for the GPs to critically appraise how the tool affected workflow.

Implications for research and practice

The diagnostic algorithms in the eCDSTs included in this review were of a limited nature, but their diagnostic and clinical utility could increase with more sophisticated algorithms that combine a larger number of factors, such as symptoms, abnormal test results, and patterns over time. With advances in artificial intelligence (AI) and machine learning in clinical practice, such developments will likely drive the next wave of eCDSTs. The use of AI has shown promising preliminary results in areas such as visual image analysis in dermatology41 and radiology;42 however, further research using large primary care datasets is required before it is known whether this approach can improve the diagnosis of symptomatic cancers in general practice.

The available evidence in this review suggests that eCDSTs have the capacity to improve decision making for a cancer diagnosis, but the optimal mode of delivery remains unclear. Given the complex nature of a cancer diagnosis, the advancement and sustainability of eCDSTs in primary care relies on a continuous loop of practitioner feedback and refinement. The findings of the review presented here indicate that improvements in their design and implementation are needed to ensure they can be embedded in normal general practice workflows and alter professional decision making as intended. Strategies for effective communication need to be better explored.

Notes

Funding

This work was supported by the Cancer Australia Primary Care Cancer Collaborative Clinical Trials Group (PC4). Jon D Emery is supported by a National Health and Medical Research Council Practitioner Fellowship and is a member of the senior faculty of the multi-institutional CanTest Collaborative, which is funded by Cancer Research UK (C8640/A23385).

Ethical approval

Ethical approval was not required for this study.

Provenance

Freely submitted; externally peer reviewed.

Competing interests

The authors have declared no competing interests.

Discuss this article

Contribute and read comments about this article: bjgp.org.uk/letters

  • Received June 18, 2019.
  • Revision requested July 29, 2019.
  • Accepted August 26, 2019.
  • © British Journal of General Practice 2019

REFERENCES

1. Tørring ML, Frydenberg M, Hansen RP, et al. (2011) Time to diagnosis and mortality in colorectal cancer: a cohort study in primary care. Br J Cancer 104(6):934–940.
2. Neal RD, Tharmanathan P, France B, et al. (2015) Is increased time to diagnosis and treatment in symptomatic cancer associated with poorer outcomes? Systematic review. Br J Cancer 112(Suppl 1):S92–S107.
3. Hamilton W (2009) Five misconceptions in cancer diagnosis. Br J Gen Pract, DOI: https://doi.org/10.3399/bjgp09X420860.
4. Emery JD (2015) The challenges of early diagnosis of cancer in general practice. Med J Aust 203(10):391–393.
5. Allgar VL, Neal RD (2005) Delays in the diagnosis of six cancers: analysis of data from the National Survey of NHS Patients: Cancer. Br J Cancer 92(11):1959–1970.
6. Bergin RJ, Emery J, Bollard RC, et al. (2018) Rural–urban disparities in time to diagnosis and treatment for colorectal and breast cancer. Cancer Epidemiol Biomarkers Prev 27(9):1036–1046.
7. Lacey K, Bishop JF, Cross HL, et al. (2016) Presentations to general practice before a cancer diagnosis in Victoria: a cross-sectional survey. Med J Aust 205(2):66–71.
8. Moja L, Kwag KH, Lytras T, et al. (2014) Effectiveness of computerized decision support systems linked to electronic health records: a systematic review and meta-analysis. Am J Public Health 104(12):e12–e22.
9. Astin M, Griffin T, Neal RD, et al. (2011) The diagnostic value of symptoms for colorectal cancer in primary care: a systematic review. Br J Gen Pract, DOI: https://doi.org/10.3399/bjgp11X572427.
10. Usher-Smith J, Emery J, Hamilton W, et al. (2015) Risk prediction tools for cancer in primary care. Br J Cancer 113(12):1645–1650.
11. Garg AX, Adhikari NK, McDonald H, et al. (2005) Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 293(10):1223–1238.
12. Lyratzopoulos G, Wardle J, Rubin G (2014) Rethinking diagnostic delay in cancer: how difficult is the diagnosis? BMJ 349:g7400.
13. Kostopoulou O, Rosen A, Round T, et al. (2015) Early diagnostic suggestions improve accuracy of GPs: a randomised controlled trial using computer-simulated patients. Br J Gen Pract, DOI: https://doi.org/10.3399/bjgp15X683161.
14. Roshanov PS, You JJ, Dhaliwal J, et al. (2011) Can computerized clinical decision support systems improve practitioners’ diagnostic test ordering behavior? A decision-maker-researcher partnership systematic review. Implement Sci 6(1):88.
15. Moher D, Liberati A, Tetzlaff J, et al. (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med 151(4):264–269.
16. Joanna Briggs Institute. JBI reviewers’ manual: 2014 edition, https://wiki.joannabriggs.org/display/MANUAL/JBI+Reviewer%27s+Manual (accessed 5 Nov 2019).
17. Sandelowski M, Voils CI, Barroso J (2006) Defining and designing mixed research synthesis studies. Res Sch 13(1):29.
18. Joanna Briggs Institute. Methodology for JBI mixed methods systematic reviews, https://wiki.joannabriggs.org/display/MANUAL/8.1+Introduction+to+mixed+methods+systematic+reviews (accessed 5 Nov 2019).
19. Cochrane Effective Practice and Organisation of Care (EPOC) (2017) EPOC resources for review authors. Screening, data extraction and management. http://epoc.cochrane.org/resources/epoc-resources-review-authors (accessed 8 Nov 2019).
20. Noblit GW, Hare RD (1988) Meta-ethnography: synthesizing qualitative studies (Sage Publications, Thousand Oaks, CA).
21. Murray E, Treweek S, Pope C, et al. (2010) Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Med 8(1):63.
22. Henderson EJ, Rubin GP (2013) The utility of an online diagnostic decision support system (Isabel) in general practice: a process evaluation. JRSM Short Rep 4(5):31.
23. Kanagasundaram NS, Bevan MT, Sims AJ, et al. (2016) Computerized clinical decision support for the early recognition and management of acute kidney injury: a qualitative evaluation of end-user experience. Clin Kidney J 9(1):57–62.
24. Gerbert B, Bronstone A, Maurer T, et al. (2000) Decision support software to help primary care physicians triage skin cancer: a pilot study. Arch Dermatol 136(2):187–192.
25. Kidney E, Berkman L, Macherianakis A, et al. (2015) Preliminary results of a feasibility study of the use of information technology for identification of suspected colorectal cancer in primary care: the CREDIBLE study. Br J Cancer 112(Suppl 1):S70–S76.
26. Logan EC, Yates JM, Stewart RM, et al. (2002) Investigation and management of iron deficiency anaemia in general practice: a cluster randomised controlled trial of a simple management prompt. Postgrad Med J 78(923):533–537.
27. Murphy DR, Wu L, Thomas EJ, et al. (2015) Electronic trigger-based intervention to reduce delays in diagnostic evaluation for cancer: a cluster randomized controlled trial. J Clin Oncol 33(31):3560–3567.
28. Walter FM, Morris HC, Humphrys E, et al. (2012) Effect of adding a diagnostic aid to best practice to manage suspicious pigmented lesions in primary care: randomised controlled trial. BMJ 345:e4110.
29. Winkelmann RR, Yoo J, Tucker N, et al. (2015) Impact of guidance provided by a multispectral digital skin lesion analysis device following dermoscopy on decisions to biopsy atypical melanocytic lesions. J Clin Aesthet Dermatol 8(9):21–24.
30. Meyer AN, Murphy DR, Singh H (2016) Communicating findings of delayed diagnostic evaluation to primary care providers. J Am Board Fam Med 29(4):469–473.
31. Wilson EC, Emery JD, Kinmonth AL, et al. (2013) The cost-effectiveness of a novel SIAscopic diagnostic aid for the management of pigmented skin lesions in primary care: a decision-analytic model. Value Health 16(2):356–366.
32. Dikomitis L, Green T, Macleod U (2015) Embedding electronic decision-support tools for suspected cancer in primary care: a qualitative study of GPs’ experiences. Prim Health Care Res Dev 16(6):548–555.
33. Chiang PP, Glance D,
    3. Walker J,
    4. et al.
    (2015) Implementing a QCancer risk tool into general practice consultations: an exploratory study using simulated consultations with Australian general practitioners. Br J Cancer 112(Suppl 1):S77–S83.
    OpenUrlCrossRefPubMed
  34. 34.↵
    1. Kidney E,
    2. Greenfield S,
    3. Berkman L,
    4. et al.
    (2017) BJGP Open, Cancer suspicion in general practice, urgent referral, and time to diagnosis: a population-based GP survey nested within a feasibility study using information technology to flag-up patients with symptoms of colorectal cancer. DOI: https://doi.org/10.3399/bjgpopen17X101109.
  35. 35.↵
    1. Jiwa M,
    2. Skinner P,
    3. Coker AO,
    4. et al.
    (2006) Implementing referral guidelines: lessons from a negative outcome cluster randomised factorial trial in general practice. BMC Fam Pract 7(1):65.
    OpenUrlCrossRefPubMed
  36. 36.↵
    1. Mansell G,
    2. Shapley M,
    3. Jordan JL,
    4. Jordan K
    (2011) Br J Gen Pract, Interventions to reduce primary care delay in cancer referral: a systematic review. DOI: https://doi.org/10.3399/bjgp11X613160.
  37. 37.↵
    1. Van de Velde S,
    2. Kunnamo I,
    3. Roshanov P,
    4. et al.
    (2018) The GUIDES checklist: development of a tool to improve the successful use of guideline-based computerised clinical decision support. Implementation Sci 13(1):86.
    OpenUrl
  38. 38.↵
    1. Vedsted P,
    2. Olesen F
    (2011) Br J Gen Pract, Are the serious problems in cancer survival partly rooted in gatekeeper principles? An ecologic study. DOI: https://doi.org/10.3399/bjgp11X588484.
  39. 39.↵
    1. Rose PW,
    2. Rubin G,
    3. Perera-Salazar R,
    4. et al.
    (2015) Explaining variation in cancer survival between 11 jurisdictions in the International Cancer Benchmarking Partnership: a primary care vignette survey. BMJ Open 5(5):e007212.
    OpenUrlAbstract/FREE Full Text
  40. 40.↵
    1. Kawamoto K,
    2. Houlihan CA,
    3. Balas EA,
    4. Lobach DF
    (2005) Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 330(7494):765.
    OpenUrlAbstract/FREE Full Text
  41. 41.↵
    1. Haenssle HA,
    2. Fink C,
    3. Schneiderbauer R,
    4. et al.
    (2018) Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol 29(8):1836–1842.
    OpenUrl
  42. 42.↵
    1. Hosny A,
    2. Parmar C,
    3. Quackenbush J,
    4. et al.
    (2018) Artificial intelligence in radiology. Nat Rev Cancer 18(8):500–510.
    OpenUrl
Decision support tools to improve cancer diagnostic decision making in primary care: a systematic review
Sophie Chima, Jeanette C Reece, Kristi Milley, Shakira Milton, Jennifer G McIntosh, Jon D Emery
British Journal of General Practice 2019; 69 (689): e809-e818. DOI: 10.3399/bjgp19X706745

Keywords

  • cancer
  • clinical decision support tool
  • early diagnosis
  • general practitioners
  • primary health care

British Journal of General Practice is an editorially-independent publication of the Royal College of General Practitioners
© 2023 British Journal of General Practice

Print ISSN: 0960-1643
Online ISSN: 1478-5242