THE use of near-patient tests in primary care has received much attention, ranging from simple urinalysis strips1 and blood glucose measurement to more complex desktop analysers for the measurement of cholesterol.2 Screening programmes are likely to use testing more regularly than diagnostic applications, thereby improving the economies of scale of near-patient tests. If patients are tested while still in the surgery, there are potential savings on administration, and follow-up of abnormal results can be ensured.
The information value of a test result is determined by the likelihood ratio of the test and the balance between the utilities of testing and not testing. Tests have an important value in reducing the uncertainty under which doctors practise. A study of the influence of a rapid erythrocyte sedimentation rate result on the diagnosis reached by Dutch general practitioners (GPs) found that the result confirmed the GPs' original diagnosis in 82% of cases and was ‘reassuring for both doctor and patient’.3
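The arithmetic behind a likelihood ratio can be made concrete with a short calculation. A minimal sketch in Python, using purely illustrative figures (a pre-test probability of 30% and a positive likelihood ratio of 4, neither taken from the studies cited here):

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability
    via odds and the test's likelihood ratio (Bayes' theorem)."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative figures only: a pre-test probability of 0.30 and a
# positive likelihood ratio of 4 give a post-test probability of ~0.63.
print(round(post_test_probability(0.30, 4.0), 2))  # 0.63
```

A test with a likelihood ratio near 1 leaves the probability essentially unchanged, which is why such a result adds little information however convenient the test.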
An article in this month's Journal looking at near-patient testing describes the use of C-reactive protein (CRP) measurement to diagnose bacterial sinusitis.4 This health technology assessment from Denmark indicates that antibiotic prescribing can be targeted more effectively through the use of near-patient CRP measurement.
If a near-patient test is performed while the patient waits, a return visit for further management may be avoided. However, a study of desktop analysers in London revealed that approximately 15% of patients were asked to return for the result, even though the analyser was used.5 A number of GPs used laboratory tests in preference to near-patient tests, to provide a delay (using time to resolve a diagnostic problem) while still satisfying the need of the patient for symptoms to be taken seriously.
Primary care budget holders may allocate part of their budget to pay for the purchase of near-patient tests. Costs of capital equipment, such as optical readers, centrifuges or analysers, have to be accounted for, as well as the recurring costs of reagents and consumables such as capillary tubes. If expensive equipment is infrequently used, the unit cost of an investigation rises sharply.
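The effect of infrequent use on unit cost follows from simple amortisation. A sketch with wholly invented figures (a £3000 analyser and £1.50 of consumables per test — not costs taken from the text):

```python
def unit_cost(capital_cost, tests_performed, consumable_cost_per_test):
    """Cost per test: capital outlay spread over the tests actually
    performed, plus per-test consumables (reagents, capillary tubes)."""
    return capital_cost / tests_performed + consumable_cost_per_test

# Illustrative figures only: a 3000-pound analyser used for 5000 tests
# versus only 500 tests over its working life.
print(round(unit_cost(3000, 5000, 1.50), 2))  # 2.1 per test with heavy use
print(round(unit_cost(3000, 500, 1.50), 2))   # 7.5 per test with infrequent use
```

The consumable cost sets a floor; it is the amortised capital element that drives the sharp rise when throughput is low.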
A primary concern relating to near-patient testing is quality assurance. Several documents produced by pathologists, particularly clinical chemists, outline guidelines for decentralised laboratory work.6-8 Collaboration between pathology laboratories and primary care is essential if near-patient testing is to be used safely and effectively.
Primary care practitioners cannot ignore internal and external quality control in the validation of test results; both add to the unit cost of testing. Some near-patient tests, such as pregnancy tests, are single-use test strips or cards in which quality control has been built in during manufacture, often taking the form of a visible ‘negative test’ indicator. Although convenient, these tests would be less cost-effective where many tests are performed daily.
The ‘Hawthorne effect’ describes a change in the behaviour of subjects when their work is being studied.9 In a similar fashion, the availability of a test increases the usage of that test. It is not known whether such increases in use indicate a previously unmet need, are inappropriate responses, or are long term. Studies have shown that in practices where desktop analysers have been introduced, the rate of testing increases, but these extra tests do not lead to an alteration in diagnosis or management. However, none of these studies has examined the effect of apparently ‘inappropriate’ tests on reducing the degree of uncertainty experienced by both doctor and patient.
Both the measurement of CRP in suspected sinusitis or lower respiratory tract infection and primary care oral anticoagulation management are examples of problems where near-patient testing can have a direct influence on patient management; considerable experience has been accumulated with anticoagulation management. Bleeding is the most serious and common complication of oral anticoagulation therapy (mainly warfarin in the UK). For any given patient, the potential benefit from prevention of thromboembolic disease needs to be balanced against the potential harm from induced haemorrhagic side effects. Methodological problems have hampered the interpretation of previously reported data, particularly with regard to definitions of major and minor bleeding episodes, with some investigators accepting hospital admission for transfusion of up to 4 units of blood as minor. Reviews of observational and experimental studies have found annual bleeding rates of 0–4.8% for fatal bleeding and 2.4–8.1% for major bleeds.10,11 Minor bleeds are reported more often, with an annual event rate of around 15%.10
Age is one factor that has been reported as increasing the risk of bleeding, with one study finding a 32% increase in all bleeding and a 46% increase in major bleeding for every 10-year increase above the age of 40 years.12
Early studies suggested an increased risk with increasing target international normalised ratio (INR).11 These early data are difficult to interpret, with results being reported both as INR and as prothrombin time. It is also important to take into account the actual intensity (the level of therapeutic control of INR achieved) as well as the intended intensity (the target INR range). One study, which achieved a point prevalence of therapeutic INRs of 77%, reported no association between bleeding episodes and target INR.12
Data from an Italian study13 involving 2745 patients in 2011 patient years of follow-up reported much lower bleeding rates, with an overall rate of 7.6 per 100 patient years. The reported rates for fatal, major and minor bleeds were 0.25, 1.1 and 6.2 per 100 patient years respectively. This study also identified an increased risk with age, and found statistically increased risk during the first 90 days of treatment. Peripheral vascular and cerebrovascular disease were found to carry a higher relative risk of bleeding, and a strong association between the most recent INR and bleeding was found. A relative risk (RR) of 7.91 (95% confidence interval [CI] = 5.44 to 11.5, P<0.0001) was noted when the most recent INR recorded was greater than 4.5.
From our own data using near-patient testing for INR monitoring, we found a serious adverse event rate of 3.4 per 100 patient years (1.1 for haemorrhage, 2.3 for thrombosis) including a mortality rate of 1.1 per 100 patient years for patients managed within a primary care-based clinic.14 Gender appeared to have little influence on the risk of adverse events, with men having a very slightly higher RR than women of a non-serious event (RR = 1.03, 95% CI = 0.8 to 1.3) and a lower risk than women of a serious outcome (RR = 0.89, 95% CI = 0.3 to 2.4). Similarly, age appeared to have little impact on risk of adverse events.
Goudie et al report data from a primary care-based observational study over 5 years.15 They report 18 major bleeding events, including four fatalities, over 664.8 patient years, giving a major haemorrhage rate of 2.7 per 100 patient years, including a haemorrhagic fatality rate of 0.6 per 100 patient years. Unfortunately, data are not provided regarding thrombosis rates, nor any data on the quality of INR control achieved. They do suggest, however, that it is dependency rather than age per se that is important in terms of haemorrhage risk.
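As a check on the patient-year arithmetic, such rates follow directly from the event counts quoted above (18 major bleeds and four fatalities over 664.8 patient years). A minimal sketch in Python:

```python
def rate_per_100_patient_years(events, patient_years):
    """Annual event rate expressed per 100 patient years of follow-up."""
    return 100.0 * events / patient_years

# Event counts from the Goudie et al study cited above:
# 18 major bleeds and 4 fatalities over 664.8 patient years.
print(round(rate_per_100_patient_years(18, 664.8), 1))  # 2.7 major bleeds
print(round(rate_per_100_patient_years(4, 664.8), 1))   # 0.6 fatal bleeds
```

Expressing rates per 100 patient years allows studies of very different sizes and follow-up durations, such as the Italian cohort13 and this smaller primary care series,15 to be compared directly.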
Near-patient testing has a role in primary care. However, practitioners need to ensure that they are using tests appropriately and that the test characteristics are suitable for the purpose of testing, whether diagnosis or monitoring.
© British Journal of General Practice, 2004.