British Journal of General Practice
Intended for Healthcare Professionals
Research

Clinical gestalt to diagnose pneumonia, sinusitis, and pharyngitis: a meta-analysis

Ariella P Dale, Christian Marchello and Mark H Ebell
British Journal of General Practice 2019; 69 (684): e444-e453. DOI: https://doi.org/10.3399/bjgp19X704297
Ariella P Dale
Healthcare-associated infections surveillance data coordinator, Colorado Department of Public Health and Environment, Denver, US.

Christian Marchello
Fellow, University of Otago, Dunedin, New Zealand.

Mark H Ebell
Professor, Department of Epidemiology and Biostatistics, College of Public Health, University of Georgia, Athens, Georgia, US.

Abstract

Background The overall clinical impression (‘clinical gestalt’) is widely used for diagnosis but its accuracy has not been systematically studied.

Aim To determine the accuracy of clinical gestalt for the diagnosis of community-acquired pneumonia (CAP), acute rhinosinusitis (ARS), acute bacterial rhinosinusitis (ABRS), and streptococcal pharyngitis, and to contrast it with the accuracy of clinical decision rules (CDRs).

Design and setting Systematic review and meta-analysis of diagnostic accuracy studies in ambulatory care.

Method PubMed and Google were searched for studies in outpatients that reported sufficient data to calculate accuracy of the overall clinical impression and that used the same reference standard. Study quality was assessed using Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2), and measures of accuracy calculated using bivariate meta-analysis.

Results The authors identified 16 studies that met the inclusion criteria. The summary estimates for the positive (LR+) and negative likelihood ratios (LR−) were LR+ 7.7, 95% confidence interval (CI) = 4.8 to 11.5, and LR− 0.54, 95% CI = 0.42 to 0.65 for CAP in adults, LR+ 2.7, 95% CI = 1.1 to 4.3 and LR− 0.63, 95% CI = 0.20 to 0.98 for CAP in children, LR+ 3.0, 95% CI = 2.1 to 4.4 and LR− 0.37, 95% CI = 0.29 to 0.46 for ARS in adults, LR+ 3.9, 95% CI = 2.4 to 5.9 and LR− 0.33, 95% CI = 0.20 to 0.50 for ABRS in adults, and LR+ 2.1, 95% CI = 1.6 to 2.8 and LR− 0.47, 95% CI = 0.36 to 0.60 for streptococcal pharyngitis in adults and children. The diagnostic odds ratios were highest for CAP in adults (14.2, 95% CI = 9.0 to 21.0), ARS in adults (8.3, 95% CI = 4.9 to 13.1), and ABRS in adults (13.0, 95% CI = 5.0 to 27.0), as were the C-statistics (0.80, 0.77, and 0.84 respectively).

Conclusion The accuracy of the overall clinical impression compares favourably with the accuracy of CDRs. Studies of diagnostic accuracy should routinely include the overall clinical impression in addition to individual signs and symptoms, and research is needed to optimise its teaching.

  • acute rhinosinusitis
  • community-acquired pneumonia
  • diagnosis
  • evidence-based medicine
  • overall clinical impression
  • pharyngitis

INTRODUCTION

The overall clinical impression, also called ‘clinical gestalt’, is an intuitive approach to decision making used by physicians to make clinical diagnoses. It takes into account multiple signs and symptoms without necessarily using an analytic approach such as a point score or algorithm, and is an inductive approach based on pattern recognition rather than a hypotheticodeductive approach. Some studies have shown that inductive pattern-recognition strategies may be more widely used and more successful than hypotheticodeductive strategies.1–3 However, proponents of evidence-based practice encourage the use of clinical decision rules (CDRs) for diagnosis, as do practice guidelines. CDRs use a formal approach such as multivariate analysis or recursive partitioning to identify signs, symptoms, and point-of-care tests that are the best independent predictors of a diagnosis or clinical outcome. They are then typically converted to a simple point score or algorithm such as the Ottawa Ankle Rules for ankle injury,4 or the Wells rule to diagnose pulmonary embolism.5 The goal of CDRs is to improve the efficiency and accuracy of clinical diagnosis and thereby reduce unnecessary testing.6

However, CDRs may be cumbersome to access and use at the point of care. As a result, CDRs are only infrequently used in real-world clinical practice.7 Instead, clinicians rely on their overall clinical impression. As the overall clinical impression can incorporate additional variables not included in the CDR, it has the potential of being more accurate. For example, while a clinical rule may categorise a patient as being at low risk for group A beta-haemolytic streptococcal (GABHS) pharyngitis, knowing that a sibling was diagnosed with GABHS pharyngitis the week before could be an important factor.

For acute respiratory tract infections, CDRs have been developed to diagnose GABHS pharyngitis,8,9 acute rhinosinusitis (ARS) and acute bacterial rhinosinusitis (ABRS),10 and community-acquired pneumonia (CAP).11 In this study, the authors performed a systematic review of the accuracy of the overall clinical impression for GABHS pharyngitis, ARS, and CAP, which has not been systematically studied before, and evaluated how its accuracy compared with that of CDRs for the same conditions.

METHOD

Search

For this systematic review, PubMed was searched for published studies using a search strategy (available from the authors), combining synonyms for overall clinical impression, the clinical diagnosis, and ambulatory care. The reference lists of all included studies were also searched to identify studies not captured by the PubMed search strategy. In addition, published systematic reviews of the clinical diagnosis of GABHS pharyngitis, CAP, and ARS or ABRS were searched for additional studies,12–16 as were the first 50 results returned by a Google search of ‘<disease> diagnosis clinical impression’ for each disease. The search was not restricted by language, country, or date of publication.

How this fits in

It is known that the overall clinical impression is widely used in clinical practice but has not been systematically studied. This study showed that in adults the overall clinical impression had good accuracy for the diagnosis of community-acquired pneumonia, for acute rhinosinusitis, and for acute bacterial rhinosinusitis. It had moderate accuracy for diagnosis of streptococcal pharyngitis and for pneumonia in children. In each case, the accuracy of the overall clinical impression was similar to or better than that for a clinical decision rule for the same conditions. Thus, the overall clinical impression has good accuracy and is an important diagnostic tool that is deserving of further study and quantification.

Inclusion and exclusion criteria

The present research was limited to prospective studies that reported diagnostic data regarding the accuracy of the overall clinical impression (clinical gestalt) to diagnose CAP, ARS, ABRS, or acute GABHS pharyngitis. ARS was defined as abnormal imaging, and ABRS as abnormal culture of antral puncture fluid. Studies were limited to the ambulatory-care setting (outpatient clinic, urgent care, or emergency department [ED]) as hospital-acquired and ventilator-associated pneumonia are separate clinical entities. All patients must have received the same acceptable reference standard: chest radiograph (CXR), lung ultrasound, or computed tomography (CT) for pneumonia; imaging or antral puncture fluid analysis for ARS; and throat culture for GABHS pharyngitis. The authors excluded studies of nosocomial infections, infections in immunocompromised persons, or studies of the diagnosis of bacteraemia or sepsis. The authors included studies of both children and adults. Studies of ARS using inspection of antral puncture fluid or bacterial culture as the reference standard were classified as also diagnosing ABRS.

Data abstraction

Each title and abstract was reviewed by two investigators to identify potential studies for inclusion. Any study identified for full-text analysis by one of the reviewers was reviewed independently by two investigators, and any discrepancies were resolved by a third reviewer (lead investigator). For studies that met the inclusion and exclusion criteria, two reviewers abstracted study characteristics, data regarding the accuracy of clinical gestalt, and study design characteristics for the quality assessment, with discrepancies resolved via consensus discussion or, if necessary, by the lead investigator. All of the included studies were reviewed a final time by the lead investigator to confirm the accuracy of data abstraction.

Where a study reported the accuracy of clinical gestalt using more than two categories (for example, ‘sure’, ‘quite sure’, and ‘unsure’), the results were collapsed into two dichotomous categories, that is, ‘sure’ versus ‘quite sure’ or ‘unsure’. The selection of category combinations was based on the combination that provided the highest diagnostic odds ratio (DOR; the ratio of the positive to the negative likelihood ratio [LR]), a measure of discrimination. Where studies reported physician estimates of probability, >50% versus ≤50% was used. One study reported data in the form of a figure.17 The figure was enlarged, digital vertical lines were drawn to determine the intercepts, and a ruler was used to calculate the number of patients in each category. Data were reported separately for the three study sites in this study (Illinois, Nebraska, and Virginia), as each site enrolled a distinct population and found somewhat different sensitivity and specificity.17
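The dichotomisation rule described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code (their analysis used R); the category counts below are hypothetical, chosen only to show how each candidate cut point yields a 2 × 2 table from which the DOR is computed and the best-discriminating cut is selected.

```python
# Sketch of the dichotomisation rule: given counts of diseased and
# non-diseased patients in each ordered gestalt category, try every
# cut point and keep the one with the highest diagnostic odds ratio
# (DOR = LR+ / LR-). All counts here are hypothetical.

def dor_for_cut(disease, no_disease, cut):
    """Treat categories [0:cut] as 'test positive'; return (DOR, sens, spec)."""
    tp, fn = sum(disease[:cut]), sum(disease[cut:])
    fp, tn = sum(no_disease[:cut]), sum(no_disease[cut:])
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    return lr_pos / lr_neg, sens, spec

# Hypothetical counts for categories ordered 'sure', 'quite sure', 'unsure'
disease = [40, 5, 5]        # patients with the diagnosis
no_disease = [10, 30, 60]   # patients without it

best = max(range(1, len(disease)),
           key=lambda c: dor_for_cut(disease, no_disease, c)[0])
dor, sens, spec = dor_for_cut(disease, no_disease, best)
print(best, round(dor, 1), round(sens, 2), round(spec, 2))
```

With these counts, collapsing only ‘sure’ into the positive category (cut = 1) gives the higher DOR, so that combination would be carried forward.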

Quality assessment

The Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) framework was adapted to evaluate the quality of the included studies. Studies at low risk of bias for all four domains (patient selection; index test; reference standard; and patient flow and timing) were judged to be at low risk of bias overall.18 Those with a single domain at high risk of bias were judged to be at moderate risk of bias overall, and all others were judged to be at high risk of bias.

Statistical analysis

The authors performed the meta-analysis using the Reitsma function in the mada package in R (version 3.4.3), which uses a bivariate model equivalent to the hierarchical summary receiver operating characteristic (HSROC) model of Rutter and Gatsonis.19 The authors used a summary receiver operating characteristic (ROC) curve to plot 95% confidence intervals for the summary estimates and calculated the area under the ROC curve (AUROCC), also called the C-statistic. Heterogeneity was evaluated using inspection of the summary ROC plots and confidence intervals, as I2 is not recommended for use in diagnostic meta-analysis20 or when there is a small number of primary studies.21 To facilitate comparison with a dichotomous overall clinical impression for each diagnosis, clinical decision rules were dichotomised into low or moderate versus high risk, or low risk versus moderate or high risk depending on which approach provided the highest diagnostic odds ratio (DOR).

RESULTS

The initial search identified 2109 articles, of which 54 were evaluated as full text and 15 met the inclusion criteria. A review of references of included studies identified no additional studies for full-text review. The Google search identified no additional studies, whereas the review of previous systematic reviews identified one additional study of pharyngitis22 for a final total of 16 included studies (three acute pharyngitis, nine CAP, and four ARS or ABRS). The search is summarised in Figure 1 using the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) framework.

Figure 1. PRISMA flow diagram of study search.

Characteristics of included studies

The characteristics of the included studies are summarised in Table 1. A total of six studies took place in the US, four in Sweden, and one each in Ireland, Israel, Lesotho, Norway, Spain, and a consortium of 12 European countries. Most gathered data in either a primary care clinic or the ED or a combination of those sites. Regarding age group, 11 studies enrolled only adults, four only children, and one both adults and children. All studies of pneumonia diagnosis used chest radiography as the reference standard, all studies of pharyngitis used throat culture, and studies of rhinosinusitis used either antral puncture revealing purulent fluid23–25 or sinus radiography.26 The rhinosinusitis and pneumonia studies generally included patients where there was already some clinical suspicion for these diagnoses; an exception was the study by van Vugt and colleagues that included any patient with acute cough.11 The prevalence of pneumonia varied from 5% in the van Vugt study to 44%; the median prevalence was 15%. The pharyngitis studies had broad inclusion criteria of any patient with a sore throat, with prevalence of GABHS pharyngitis ranging from 17% to 31%.

Table 1. Characteristics of included studies

Quality assessment

The assessment of study quality using the QUADAS-2 framework is summarised in Table 2. The authors judged nine studies to be at low risk of bias, six to be at moderate risk of bias, and three to be at high risk of bias. One study reported data from three sites, two of which were judged low risk of bias and one high risk of bias.17

Table 2. Assessment of study quality using the QUADAS-2 framework

Accuracy of the overall clinical impression (‘clinical gestalt’)

The accuracy of clinical gestalt as a diagnostic test for GABHS pharyngitis, ARS, and CAP is summarised in Table 3. Due to differences in the clinical presentation of pneumonia in children and adults, as well as observed heterogeneity in the summary ROC curve, results for the accuracy of CAP in adults and children with suspected pneumonia are reported separately. The summary estimates for the positive (LR+) and negative (LR−) likelihood ratios were LR+ 7.7, 95% confidence interval (CI) = 4.8 to 11.5 and LR− 0.54, 95% CI = 0.42 to 0.65 for the diagnosis of CAP in adults; LR+ 2.7, 95% CI = 1.1 to 4.3, and LR− 0.63, 95% CI = 0.20 to 0.98 for the diagnosis of CAP in children; LR+ 3.0, 95% CI = 2.1 to 4.4 and LR− 0.37, 95% CI = 0.29 to 0.46 for ARS in adults; LR+ 3.9, 95% CI = 2.4 to 5.9 and LR− 0.33, 95% CI = 0.20 to 0.50 for ABRS in adults; and LR+ 2.1, 95% CI = 1.6 to 2.8 and LR− 0.47, 95% CI = 0.36 to 0.60 for GABHS pharyngitis in both adults and children. Based on the diagnostic odds ratio, clinical gestalt was most accurate for diagnosis of CAP in adults (DOR 14.2, 95% CI = 9.0 to 21.0), ABRS in adults (DOR 13.0, 95% CI = 5.0 to 27.0), and ARS in adults (DOR 8.3, 95% CI = 4.9 to 13.1). It was less accurate for the diagnosis of CAP in children (DOR 5.5) and GABHS pharyngitis (DOR 4.6).
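The practical meaning of these likelihood ratios is easiest to see via Bayes' theorem in odds form (post-test odds = pre-test odds × LR). The short Python sketch below is an illustrative worked example, not part of the authors' analysis: it applies the summary estimates for CAP in adults (LR+ 7.7, LR− 0.54) to a 15% pre-test probability, the median CAP prevalence across the included pneumonia studies.

```python
# Worked example: converting the summary likelihood ratios for CAP in
# adults (LR+ 7.7, LR- 0.54) into post-test probabilities, starting
# from a 15% pre-test probability (the median prevalence reported
# across the included pneumonia studies).

def post_test_probability(pretest_prob, lr):
    """Bayes' theorem in odds form: post-test odds = pre-test odds * LR."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

prevalence = 0.15
p_pos = post_test_probability(prevalence, 7.7)   # gestalt: 'pneumonia'
p_neg = post_test_probability(prevalence, 0.54)  # gestalt: 'not pneumonia'
print(f"{p_pos:.0%} after a positive impression, {p_neg:.0%} after a negative one")
```

A positive overall impression raises the probability of CAP from 15% to roughly 58%, while a negative impression lowers it to roughly 9%, which illustrates why the LR− of 0.54 is, on its own, insufficient to rule out pneumonia.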

Table 3. Summary estimates of diagnostic accuracy of clinical gestalt for the diagnosis of common respiratory infections

The summary ROC curves are shown in Figure 2. The summary AUROCC of the overall clinical impression as a test for CAP was 0.80 in both children and adults, 0.77 for ARS in adults, 0.84 for ABRS in adults, and 0.73 for GABHS pharyngitis in adults and children. Note that the C-statistic for CAP in children was unreliable in the authors’ judgement based on the small number of studies and high heterogeneity. Inspection of the summary ROC curves in Figure 2 reveals different patterns of heterogeneity for each disease. There was good homogeneity for the diagnosis of acute pharyngitis, despite the fact that the three studies enrolled children in one, adults in another, and both in a third. For sinusitis, there was good homogeneity with regards to sensitivity (range 0.71 to 0.84) but less with regards to specificity (range 0.61 to 0.92).

Figure 2. Summary receiver operating characteristic (ROC) curves for the accuracy of clinical gestalt in the diagnosis of community-acquired pneumonia (CAP) in adults, CAP in children, group A beta-haemolytic streptococcal (GABHS) pharyngitis, and acute rhinosinusitis (ARS).

For the diagnosis of CAP in adults, the ROC curve showed a pattern that was consistent with a threshold effect. That is, as sensitivity increases, specificity decreases, with the points arrayed along the ROC curve. There was also better homogeneity for studies of CAP in adults compared with studies in children, which are presented separately in the ROC curves. As noted before, most studies in this group were limited to patients with clinically suspected disease. The one study with very broad inclusion criteria of any patient with cough had the highest specificity (0.99) but among the lowest sensitivities (0.29), perhaps a consequence of the low prevalence of CAP.11

Accuracy of clinical decision rules

For comparison with the overall clinical impression, the authors determined the accuracy of CDRs for GABHS pharyngitis in children and adults,8,35 CAP,36 and acute bacterial rhinosinusitis (ABRS).10 The accuracy of the Strep Score for GABHS pharyngitis in adults and children was obtained from recent systematic reviews.13,35 The accuracy of the CDR for CAP was obtained from a large European study of outpatients with acute cough where all received a chest radiograph.36 The CDRs for ARS and ABRS were developed by one of the present authors based on a study of 175 primary care patients who all underwent CT, and antral puncture for fluid and culture if fluid was seen on CT.10 ARS was defined as abnormal CT, and ABRS as abnormal culture of antral puncture fluid, as in the clinical gestalt studies. The accuracy of the CDRs is summarised in Table 4.

Table 4. Accuracy of selected clinical decision rules for pneumonia, pharyngitis, and acute rhinosinusitis

DISCUSSION

Summary

This is the first systematic review of the accuracy of clinical gestalt or the overall clinical impression as a diagnostic test. The authors found that the overall clinical impression is an accurate diagnostic test for CAP, ARS, and ABRS in adults (DOR 14.2, 8.3, and 13.0, respectively), and is moderately accurate for the diagnosis of GABHS pharyngitis in adults and children (DOR 4.6) and for the diagnosis of CAP in children (DOR 5.5).

Clinical gestalt is more accurate than individual signs and symptoms for all three conditions, and compares well with clinical decision rules. For example, using a cut-off of three or more out of four symptoms as a positive test, the Strep Score had diagnostic odds ratios of 4.2 in adults and 2.5 in children, compared with a DOR of 4.6 for the overall clinical impression in mixed populations of adults and children. The CDR for CAP in adults had a DOR of 7.2, compared with a DOR of 14.2 for the overall clinical impression in adults. For ARS, the CDR had a DOR of 3.6, compared with 8.3 for clinical gestalt. For ABRS, the CDR had a DOR of 5.9, compared with 13.0 for clinical gestalt. In all cases, the overall clinical impression performed as well as or better than the clinical decision rule.

Patterns of heterogeneity differed between conditions. There was good homogeneity around estimates of the accuracy of gestalt for pharyngitis, for ABRS using antral puncture as the reference standard, and for CAP in adults. A threshold effect can be observed for the diagnosis of CAP. A threshold effect is the result of a trade-off between sensitivity and specificity, and may occur when different definitions of the outcome of interest are used, such as different thresholds for diagnosis of CAP. Some physicians may prioritise sensitivity at the price of specificity, and others specificity at the price of sensitivity.

Strengths and limitations

A strength of this study is the fact that the results for the accuracy of clinical gestalt were fairly consistent for adults with CAP, ABRS, and pharyngitis based on inspection of the summary ROC curves. Other strengths of the present study include the use of modern methods for diagnostic meta-analysis, a comprehensive search, and that only three of 18 studies were judged to be at high risk of bias. This study had several limitations as well: the clinical decision rules discussed above for ARS and CAP have not been prospectively validated. However, accuracy usually suffers during prospective validation, so the fact that gestalt was as accurate as these proposed CDRs is notable. There were a fairly small number of studies, several were quite old, some were at high risk of bias, and three of the four for ARS were by the same author. There was also considerable heterogeneity with regards to inclusion criteria, the age of participants, and the reference standards used. Finally, the studies of pneumonia generally only included studies where there was already some clinical suspicion of CAP. However, only a minority in each of the nine studies had CAP diagnosed by radiography.

Comparison with existing literature

The authors conclude that clinical gestalt is either similarly accurate to or more accurate than CDRs based on the usual metrics of diagnostic accuracy. Since clinical gestalt requires no calculations, no algorithm, and no computer, it is not surprising that it is far more widely used than CDRs for clinical decision making. That said, the ability to use clinical gestalt as an accurate test for pneumonia or acute rhinosinusitis is not innate. It must be developed and cultivated, like any skill, and likely requires exposure to a great many cases with a known outcome (‘patterns’) before it is fully developed and accurate. Artificial neural networks can be ‘trained’ to create a complex algorithm by exposing the network to a large number of patterns with known outcomes, eventually developing the ability to make accurate predictions for new cases.

Multivariate models and neural networks typically require several hundred or more patterns to create a predictive model. How many of these known cases or ‘patterns’ are required before the human brain is trained remains unclear. Bierema proposes a model for professional knowledge development that identifies stages of novice, beginner, competent, proficient, expert, and generative leader.37 For novice and beginner learners, CDRs can be used to hone diagnostic skills and teach them the best independent predictors of disease, providing focus and a framework for their diagnostic training. For the proficient and expert physician, the CDR moves to the background, while a physician who is a generative leader may further develop and improve CDRs.

Implications for research and practice

The authors propose that use of formal CDRs is potentially most useful for early-stage clinicians, who have not yet been exposed to a large number of patterns. As they develop their own clinical gestalt, informed by repeated use of validated CDRs, they may eventually rely less and less on the CDR. But even for experienced clinicians CDRs can serve as a back-up to their clinical gestalt. For example, if a physician judges that a patient with CAP can be treated as an outpatient, it is still worthwhile to double-check that judgement by calculating the CRB-65 prognostic score for pneumonia.38 In fact, both the clinical decision rule and clinical gestalt only identified about half of the patients with pneumonia, missing the other half. Thus, use of a CDR and clinical gestalt may be complementary and supportive of each other rather than an either/or proposition.

In conclusion, clinical gestalt is accurate for the diagnosis of CAP, ARS, and ABRS in adults, and the overall accuracy is similar to or better than that of clinical decision rules. Experienced clinicians should be confident in their use of the overall clinical impression and use clinical decision rules as a backstop to that judgement. Trainees, on the other hand, may benefit more from explicit use of CDRs until they develop their clinical skills. Further work is needed to understand how to best teach clinical gestalt to trainees.

Future studies of clinical diagnosis should primarily include an ‘overall clinical impression’ question to gather further data on the accuracy of clinical gestalt for a range of conditions, including of course non-infectious conditions such as chest pain, deep vein thrombosis, and pulmonary embolism. If found to be accurate and reliable for the diagnosis of a disease, the overall clinical impression could be built into guidelines regarding the evaluation of a range of conditions such as suspected sepsis, myocardial infarction, depression, and early diagnosis of cancer. It will also be important to consider how an overall judgement about the likelihood of disease fits with the threshold framework for decision making, such that a judgement of ‘disease is unlikely’ also falls below the test threshold for that disease.39

Notes

Funding

No funding was received for this study.

Ethical approval

This study was exempt from ethical approval as it was limited to secondary data analysis.

Provenance

Freely submitted; externally peer reviewed.

Competing interests

The authors have declared no competing interests.

Discuss this article

Contribute and read comments about this article: bjgp.org/letters

  • Received November 24, 2018.
  • Revision requested December 19, 2018.
  • Accepted January 4, 2019.
  • © British Journal of General Practice 2019

REFERENCES

  1. 1.↵
    1. Ridderikhoff J
    (1991) Medical problem-solving: an exploration of strategies. Med Educ 25(3):196–207.
    OpenUrlCrossRefPubMed
  2. 2.
    1. Ridderikhoff J
    (1993) Problem-solving in general practice. Theor Med 14(4):343–363.
    OpenUrlCrossRefPubMed
  3. 3.↵
    1. Coderre S,
    2. Mandin H,
    3. Harasym PH,
    4. Fick GH
    (2003) Diagnostic reasoning strategies and diagnostic success. Med Educ 37(8):695–703.
    OpenUrlCrossRefPubMed
  4. 4.↵
    1. Stiell IG,
    2. Greenberg GH,
    3. McKnight RD,
    4. et al.
    (1992) A study to develop clinical decision rules for the use of radiography in acute ankle injuries. Ann Emerg Med 21(4):384–390.
    OpenUrlCrossRefPubMed
  5. 5.↵
    1. Lucassen W,
    2. Geersing GJ,
    3. Erkens PM,
    4. et al.
    (2011) Clinical decision rules for excluding pulmonary embolism: a meta-analysis. Ann Intern Med 155(7):448–460.
    OpenUrlCrossRefPubMed
  6. 6.↵
    1. McIsaac WJ,
    2. Goel V
    (1998) Effect of an explicit decision-support tool on decisions to prescribe antibiotics for sore throat. Med Decis Making 18(2):220–228.
    OpenUrlCrossRefPubMed
  7. 7.↵
    1. Ebell M
    (2010) AHRQ White Paper: use of clinical decision rules for point-of-care decision support. Med Decis Making 30(6):712–721.
    OpenUrlCrossRefPubMed
  8. 8.↵
    1. Centor RM,
    2. Witherspoon JM,
    3. Dalton HP,
    4. et al.
    (1981) The diagnosis of strep throat in adults in the emergency room. Med Decis Making 1(3):239–246.
    OpenUrlCrossRefPubMed
  9. 9.↵
    1. McIsaac WJ,
    2. Goel V,
    3. To T,
    4. Low DE
    (2000) The validity of a sore throat score in family practice. CMAJ 163(7):811–815.
    OpenUrlAbstract/FREE Full Text
  10. 10.↵
    1. Ebell MH,
    2. Hansen JG
    (2017) Proposed clinical decision rules to diagnose acute rhinosinusitis among adults in primary care. Ann Fam Med 15(4):347–354.
    OpenUrlAbstract/FREE Full Text
  11. 11.↵
    1. van Vugt SF,
    2. Verheij TJ,
    3. de Jong PA,
    4. et al.
    (2013) Diagnosing pneumonia in patients with acute cough: clinical judgment compared to chest radiography. Eur Respir J 42(4):1076–1082.
    OpenUrlAbstract/FREE Full Text
  12. 12.↵
    1. Metlay JP,
    2. Kapoor WN,
    3. Fine MJ
    (1997) Does this patient have community-acquired pneumonia? Diagnosing pneumonia by history and physical examination. JAMA 278(17):1440–1445.
    OpenUrlCrossRefPubMed
  13. 13.↵
    1. Aalbers J,
    2. O’Brien KK,
    3. Chan WS,
    4. et al.
    (2011) Predicting streptococcal pharyngitis in adults in primary care: a systematic review of the diagnostic accuracy of symptoms and signs and validation of the Centor score. BMC Med 9:67.
    OpenUrlCrossRefPubMed
  14. 14.
    1. Shaikh N,
    2. Swaminathan N,
    3. Hooper EG
    (2012) Accuracy and precision of the signs and symptoms of streptococcal pharyngitis in children: a systematic review. J Pediatr 160(3):487–493.
  15. Shah SN, Bachur RG, Simel DL, Neuman MI (2017) Does this child have pneumonia? The rational clinical examination systematic review. JAMA 318(5):462–471.
  16. Ebell MH, Smith MA, Barry HC, et al. (2000) The rational clinical examination. Does this patient have strep throat? JAMA 284(22):2912–2918.
  17. Tape TG, Heckerling PS, Ornato JP, Wigton RS (1991) Use of clinical judgment analysis to explain regional variations in physicians’ accuracies in diagnosing pneumonia. Med Decis Making 11(3):189–197.
  18. Whiting PF, Rutjes AW, Westwood ME, et al. (2011) QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 155(8):529–536.
  19. Rutter CM, Gatsonis CA (2001) A hierarchical regression approach to meta-analysis of diagnostic test accuracy evaluations. Stat Med 20(19):2865–2884.
  20. Zhou Y, Dendukuri N (2014) Statistics for quantifying heterogeneity in univariate and bivariate meta-analyses of binary data: the case of meta-analyses of diagnostic accuracy. Stat Med 33(16):2701–2717.
  21. von Hippel PT (2015) The heterogeneity statistic I(2) can be biased in small meta-analyses. BMC Med Res Methodol 15:35.
  22. Dobbs F (1996) A scoring system for predicting group A streptococcal throat infection. Br J Gen Pract 46(409):461–464.
  23. Berg O, Bergstedt H, Carenfelt C, et al. (1981) Discrimination of purulent from nonpurulent maxillary sinusitis. Clinical and radiographic diagnosis. Ann Otol Rhinol Laryngol 90(3 Pt 1):272–275.
  24. Berg O, Carenfelt C (1985) Etiological diagnosis in sinusitis: ultrasonography as clinical complement. Laryngoscope 95(7 Pt 1):851–853.
  25. Berg O, Carenfelt C (1988) Analysis of symptoms and clinical signs in the maxillary sinus empyema. Acta Otolaryngol 105(3–4):343–349.
  26. Williams JW Jr, Simel DL, Roberts L, Samsa GP (1992) Clinical evaluation for sinusitis. Making the diagnosis by history and physical examination. Ann Intern Med 117(9):705–710.
  27. Grossman LK, Caplan SE (1988) Clinical, laboratory, and radiological information in the diagnosis of pneumonia in children. Ann Emerg Med 17(1):43–46.
  28. Mahabee-Gittens EM, Grupp-Phelan J, Brody AS, et al. (2005) Identifying children with pneumonia in the emergency department. Clin Pediatr (Phila) 44(5):427–435.
  29. Redd SC, Patrick E, Vreuls R, et al. (1994) Comparison of the clinical and radiographic diagnosis of paediatric pneumonia. Trans R Soc Trop Med Hyg 88(3):307–310.
  30. Gonzalez Ortiz MA, Carnicero Bujarrabal M, Varela Entrecanales M (1995) Prediction of the presence of pneumonia in adults with fever. Med Clin (Barc) 105(14):521–524.
  31. Lieberman D, Shvartzman P, Korsonsky I, Lieberman D (2003) Diagnosis of ambulatory community-acquired pneumonia. Comparison of clinical assessment versus chest X-ray. Scand J Prim Health Care 21(1):57–60.
  32. Melbye H, Straume B, Aasebo U, Brox J (1988) The diagnosis of adult pneumonia in general practice. The diagnostic value of history, physical examination and some blood tests. Scand J Prim Health Care 6(2):111–117.
  33. Moberg AB, Taleus U, Garvin P, et al. (2016) Community-acquired pneumonia in primary care: clinical assessment and the usability of chest radiography. Scand J Prim Health Care 34(1):21–27.
  34. Attia MW, Zaoutis T, Klein JD, Meier FA (2001) Performance of a predictive model for streptococcal pharyngitis in children. Arch Pediatr Adolesc Med 155(6):687–691.
  35. Le Marechal F, Martinot A, Duhamel A, et al. (2013) Streptococcal pharyngitis in children: a meta-analysis of clinical decision rules and their clinical variables. BMJ Open 3(3):e001482.
  36. van Vugt SF, Broekhuizen BD, Lammens C, et al. (2013) Use of serum C reactive protein and procalcitonin concentrations in addition to symptoms and signs to predict pneumonia in patients presenting to primary care with acute cough: diagnostic study. BMJ 346:f2450.
  37. Bierema LL (2016) Navigating professional white water: rethinking continuing professional education at work. New Dir Adult Contin Educ 2016(151):53–67.
  38. McNally M, Curtain J, O’Brien KK, et al. (2010) Validity of British Thoracic Society guidance (the CRB-65 rule) for predicting the severity of pneumonia in general practice: systematic review and meta-analysis. Br J Gen Pract, DOI: https://doi.org/10.3399/bjgp10X532422.
  39. Pauker SG, Kassirer JP (1980) The threshold approach to clinical decision making. N Engl J Med 302(20):1109–1117.
In this issue: British Journal of General Practice, Vol. 69, Issue 684, July 2019
Keywords

  • acute rhinosinusitis
  • community-acquired pneumonia
  • diagnosis
  • evidence-based medicine
  • overall clinical impression
  • pharyngitis

British Journal of General Practice is an editorially-independent publication of the Royal College of General Practitioners
© 2022 British Journal of General Practice

Print ISSN: 0960-1643
Online ISSN: 1478-5242