British Journal of General Practice
Intended for Healthcare Professionals
Research

Accuracy of blood-pressure monitors owned by patients with hypertension (ACCU-RATE study): a cross-sectional, observational study in central England

James A Hodgkinson, Mei-Man Lee, Siobhan Milner, Peter Bradburn, Richard Stevens, FD Richard Hobbs, Constantinos Koshiaris, Sabrina Grant, Jonathan Mant and Richard J McManus
British Journal of General Practice 2020; 70 (697): e548-e554. DOI: https://doi.org/10.3399/bjgp20X710381
James A Hodgkinson
Institute of Applied Health Research, University of Birmingham, Edgbaston.
Roles: Research fellow
Mei-Man Lee
Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford.
Roles: Statistician
Siobhan Milner
Institute of Applied Health Research, University of Birmingham, Edgbaston.
Roles: Project officer
Peter Bradburn
Institute of Applied Health Research, University of Birmingham, Edgbaston.
Roles: Research nurse
Richard Stevens
Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford.
Roles: Departmental lecturer in medical statistics
FD Richard Hobbs
Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford.
Roles: Head of department
Constantinos Koshiaris
Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford.
Roles: Medical statistician
Sabrina Grant
Three Counties School of Nursing and Midwifery, University of Worcester, Worcester.
Roles: Senior lecturer
Jonathan Mant
Primary Care Unit, Strangeways Research Laboratory, Cambridge.
Roles: Professor of primary care research
Richard J McManus
Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford.
Roles: Professor of primary care research

Abstract

Background Home blood-pressure (BP) monitoring is recommended in guidelines and is increasingly popular with patients and health professionals, but the accuracy of patients’ own monitors in real-world use is not known.

Aim To assess the accuracy of home BP monitors used by people with hypertension, and to investigate factors affecting accuracy.

Design and setting Cross-sectional, observational study in urban and suburban settings in central England.

Method Patients (n = 6891) on the hypertension register at seven practices in the West Midlands, England, were surveyed to ascertain whether they owned a BP monitor and wanted it tested. Monitor accuracy was compared with a calibrated reference device at 50 mmHg intervals between 0 and 280/300 mmHg (static pressure test); a difference from the reference device of more than ±3 mmHg at any interval was considered a failure. Cuff performance was also assessed. Results were analysed by frequency of use, length of time in service, make and model, monitor validation status, purchase price, and any previous testing.

Results In total, 251 (76%, 95% confidence interval [95% CI] = 71 to 80%) of 331 tested devices passed all tests (monitors and cuffs), and 86% (95% CI = 82 to 90%) passed the static pressure test; deficiencies were, primarily, because of monitors overestimating BP. A total of 40% of testable monitors were not validated. The pass rate on the static pressure test was greater in validated monitors than in unvalidated monitors (96%, 95% CI = 94 to 98% versus 64%, 95% CI = 58 to 69%), in those retailing for >£10 than in those retailing for ≤£10 (90%, 95% CI = 86 to 94% versus 66%, 95% CI = 51 to 80%), and in those in use for ≤4 years than in those in use for >4 years (95%, 95% CI = 91 to 98% versus 74%, 95% CI = 67 to 82%). Overall, 12% of cuffs failed.

Conclusion Patients’ own BP monitor failure rate was similar to that demonstrated in studies performed in professional settings, although cuff failure was more frequent. Clinicians can be confident of the accuracy of patients’ own BP monitors if the devices are validated and ≤4 years old.

  • accuracy
  • blood-pressure monitors
  • calibration
  • hypertension
  • primary health care

INTRODUCTION

Raised blood pressure (BP) is a key risk factor for the development of cardiovascular disease,1 a major cause of morbidity and mortality worldwide.2 An accurate BP monitoring device is fundamental to the diagnosis and management of hypertension.

Self-monitored BP at home is a statistically significantly better predictor of future cardiovascular risk than manual office BP measurement,3 and self-monitoring as part of a self-management strategy is an effective way to improve BP control.4,5 Home BP monitoring has gained popularity in recent years among both patients and healthcare professionals (HCPs), many of whom incorporate self-monitored readings in their treatment decisions;6 nevertheless, there is considerable variation in practice, and there remains scepticism among some HCPs about the accuracy of patients’ own readings, especially outside of a trial context.6,7

Although guidance on how to conduct self-monitoring of BP recommends use of validated upper-arm cuff devices, appropriate training, use of a pre-specified schedule (for example, number of days of readings, time of day), and physician verification of measurements,8,9 none to date recommends checking the accuracy of home BP monitors used by patients. Previous research has highlighted that monitors used in GP surgeries and community pharmacies have shown variation in accuracy.10,11

Several clinical protocols12–15 exist for the validation of BP measuring devices but these are, generally, undertaken on brand-new models and do not assess sustained accuracy thereafter. Typically, new monitors are assumed to be accurate for 2 years and then annual checks are undertaken in clinical practice. However, it is not clear whether this is appropriate as the drift in accuracy, over time, of an automated sphygmomanometer is not known, and a study investigating monitors in pharmacies suggested they decline in accuracy after 18 months.11

Some automated BP monitors on sale to the public have been clinically validated; in such cases, a monitor or one with device equivalence16 will have passed at least one of the recognised accuracy protocols.12–14 However, error rates in devices used for self-monitoring are unknown; this rate is a function of random error (variability) and systematic error (bias), and, ultimately, depends on the conditions under which a device is used.

This study aimed to test — for the first time in the UK, to the authors’ knowledge — the accuracy of monitors in use by the general public for the self-monitoring of BP. Secondary aims were to:

  • determine which automated sphygmomanometers were currently used by patients;

  • assess factors affecting accuracy, including those makes and models that performed best; and

  • evaluate the influence of regular use and length of time in service on accuracy.

How this fits in

Self-monitoring blood pressure (BP) is common, but the accuracy of patients’ own monitors is currently unclear. This study provides evidence that the accuracy of some monitors used at home is similar to that of those used in professional settings, albeit with more frequent cuff failure. The study also found that validated monitors, those costing >£10, and those in use for ≤4 years were more likely to perform better. Clinicians can be reassured that patients’ own BP monitors are likely to be accurate if a validated model that is ≤4 years old is being used.

METHOD

Patients on the hypertension register at seven practices in the West Midlands (in central England), UK, were sent an invitation letter together with a one-page questionnaire and a self-addressed envelope. The questionnaire (Supplementary Information S1) asked if they owned a BP monitor and, if so, some basic questions about it and whether they wanted its accuracy to be assessed free of charge — this required them to bring the monitor into the practice at a prearranged time when they would meet a member of the research team. Practices were purposefully sampled by social deprivation (based on Index of Multiple Deprivation [IMD] 2010 scores) in order to achieve a diverse sample of monitors that were likely to range in affordability. IMD scores varied from 6.09 to 49.58.

Testing took place between March 2016 and August 2017. Following visual inspection (that the machine switched on and had a readable display), the accuracy of each testable digital sphygmomanometer was evaluated by comparing it with a calibrated reference digital BP monitor tester (Omron PA350); tests were conducted at 50 mmHg intervals across a range of 0–300 mmHg following a standard process, as recommended by each monitor manufacturer and the British Hypertension Society.12 A difference from the reference monitor of more than ±3 mmHg at any testing interval was considered a failure. In addition to this static pressure test, monitors and cuffs underwent fast deflation tests (pass threshold: deflation from 260 mmHg to 15 mmHg in <10 seconds) and air-leakage checks (pass threshold: loss of <6 mmHg over 60 seconds at a stabilised pressure of 280 mmHg). Results were documented on a monitor testing form (Supplementary Information S2). With a conservative assumption of a failure rate of 50%, it was estimated that a sample size of 385 was required to achieve a 95% confidence interval (CI) half-width of ±5%.
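As an illustration of the two calculations described above (a sketch, not the study's own code; the function names are ours), the static-test pass/fail rule and the sample-size estimate can be written as:

```python
import math

def static_test_passes(monitor_mmhg, reference_mmhg, tolerance=3.0):
    """Static pressure test: fail if the monitor differs from the
    calibrated reference by more than ±3 mmHg at any test interval."""
    return all(abs(m - r) <= tolerance
               for m, r in zip(monitor_mmhg, reference_mmhg))

def required_sample_size(p=0.5, half_width=0.05, z=1.96):
    """Normal-approximation sample size for estimating a proportion to
    within ±half_width at 95% confidence; p=0.5 is the conservative
    (worst-case variance) assumption the authors describe."""
    return math.ceil(z**2 * p * (1 - p) / half_width**2)

# Hypothetical device readings at four of the test intervals: each is
# within 3 mmHg of the reference, so the device passes the static test.
print(static_test_passes([0, 52, 101, 150], [0, 50, 100, 150]))  # True
print(required_sample_size())  # 385, matching the study's estimate
```

With p = 0.5 and a ±5% half-width, 1.96² × 0.25 / 0.05² = 384.2, which rounds up to the 385 devices quoted in the text.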

The mean absolute error (MAE) for each monitor was calculated as the arithmetic mean, across the test intervals, of the absolute difference between the monitor and reference blood-pressure readings. The relationship between monitor accuracy and make and model, length of time in use, frequency of recorded uses, monitor purchase price, and validation status was assessed using linear regression with MAE as the outcome. All model assumptions were checked. Failure rates by the different predictors were compared using Fisher’s exact test.
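The MAE calculation reduces to averaging |monitor − reference| over the test pressures; a minimal sketch, using made-up readings rather than study data:

```python
def mean_absolute_error(monitor, reference):
    """Mean absolute error for one device: average of
    |monitor - reference| across the static-test pressure intervals."""
    if len(monitor) != len(reference):
        raise ValueError("paired readings required")
    return sum(abs(m - r) for m, r in zip(monitor, reference)) / len(monitor)

# Hypothetical device tested at 0, 50, 100 and 150 mmHg:
# errors are 1, 1, 2 and 2 mmHg, giving an MAE of 1.5 mmHg.
print(mean_absolute_error([1, 51, 102, 148], [0, 50, 100, 150]))  # 1.5
```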

RESULTS

Sample

In total, 6891 patients appearing on the hypertension register for seven GP practices were invited to take part; 1543 (22%) responses were received. Of these, 653 (42%) patients owned monitors, of whom 526 (81%) expressed an interest in having their monitor tested. A total of 410 (78%) of the 526 monitors were provided for testing; 79 (19%) of these proved untestable because the monitor could not be separated from the cuff to test each component independently — these were, typically, wrist monitors. The 331 monitors tested comprised more than 50 different models, with the majority coming from three manufacturers: Boots (n = 62), Lloyds (n = 131), and Omron (n = 108).

Device accuracy

In total, 250 devices (76%, 95% CI = 71 to 80%) passed all tests (monitors and cuffs); 49 (15%, 95% CI = 11 to 18%) monitors failed, largely on the accuracy test (n = 46; 14%, 95% CI = 10 to 18%), and 39 (12%, 95% CI = 8 to 15%) overestimated pressure. Table 1 details the MAE between the reference and tested devices. Four monitors had internal corrosion or could not hold pressure and, as such, could not be subjected to the full range of testing once they had definitively failed.

Table 1.

Mean absolute error between the values reported by the reference device and test devices (n = 327) at the pressure intervals testeda

The largest difference from the reference monitor was 11.4 mmHg (data not shown). In total, 17 (5%, 95% CI = 3 to 8%) monitors failed by >5 mmHg (data not shown) and 23 (7%, 95% CI = 4 to 10%) failed at the 150 mmHg level, which is closest to the threshold used for diagnosis and treatment. The overall MAE (all monitors tested) rose at each tested pressure interval to 1.5 mmHg (95% CI = 1.3 to 1.6 mmHg) at 280/300 mmHg (depending on the maximum specified pressure for a given monitor), compared with 0.6 mmHg (95% CI = 0.5 to 0.7 mmHg) at 50 mmHg and 1.0 mmHg (95% CI = 0.9 to 1.1 mmHg) at 150 mmHg (Table 1).

Length of time in service

Table 2 details the length of time in service of the monitors tested; no information on years in service was available for 48 monitors. Of those monitors on which the full range of tests was performed and for which owners could provide a reasonable estimate of the number of years in service (n = 279), 188 (67%) had been in use for >2 years, and some for substantially longer — 61 (22%) monitors had been in use for >7 years, and one was reported as having been in use for >20 years. Overall, the MAE tended to increase with length of time in service (P<0.001), though sample sizes were small in some categories (such as 6–7 years and >10 years). The failure rate was 5% (8/155) for the first 4 years in service, rising to 26% (32/124) for older models.

Table 2.

Mean absolute error compared with reference device, and failure rate of tested monitors (n = 327) by length of time in servicea

Previous testing

Only 58 (9%) of the initial 653 responders reported having had their device tested previously: 22 said their monitors had been tested within the previous 2 years, 25 said they had been tested ≥2 years ago, and 11 gave no details about the date of previous testing (data not shown). Of the 58 responders, 40 checked device accuracy by comparing results with readings generated by an HCP (GP/nurse/pharmacist) and five had checked their machine with the manufacturer; the remaining 13 provided no information. Of 26 previously tested monitors tested again by the authors, eight failed (31%, 95% CI = 26 to 36%, P = 0.48 for difference between previously tested and never tested).

Frequency of use

Table 3 shows the estimates regarding how often the devices were used. There appeared to be only a limited relationship between the frequency of use and the MAE. The failure rate for monitors used once a month or more was 9% (17/183) compared with 22% (28/129) for those used less than once a month (P<0.01).
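Fisher's exact test on a 2×2 table such as this (fail/pass by usage group) can be sketched from first principles with the hypergeometric distribution; the counts below are those reported above, though the implementation itself is our illustration, not the study's analysis code:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].
    Sums the probabilities of all tables with the same margins whose
    probability does not exceed that of the observed table."""
    n = a + b + c + d
    row1 = a + b          # size of the first group
    col1 = a + c          # total failures across both groups
    denom = comb(n, row1)

    def p_table(x):
        # Hypergeometric probability of x failures in the first group
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-7))

# Failures by frequency of use, as reported above: 17 of 183 monitors
# used at least monthly failed, versus 28 of 129 used less often.
p_value = fisher_exact_two_sided(17, 183 - 17, 28, 129 - 28)
print(p_value < 0.01)  # True — consistent with the reported P<0.01
```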

Table 3.

Mean absolute error compared with reference device and failure rate of monitors (n = 327) by frequency of usea

Validation

Of those monitor models for which the validation status could be definitively identified (n = 317), 218 (69%) were validated and 99 (31%) were not: 209 (96%, 95% CI = 94 to 98%) of the validated monitors passed all the device tests compared with 63 (64%, 95% CI = 58 to 69%) of the unvalidated monitors (P<0.001 for the comparison).

Cuff functionality

Table 4 shows that 287 (78%, 95% CI = 74 to 82%) of the cuffs were medium-sized (22–32 cm) and 57 (15%) large; it should be noted that some devices had multiple cuffs. Cuff air leakage resulted in failure for 44 (12%) of 369 cuffs (95% CI = 8 to 15%). The failure rate was higher (P = 0.002) in large cuffs (26%, 95% CI = 22 to 31%) than in those that were medium sized (10%, 95% CI = 7 to 13%). Other cuff-size categories had too few cases to be evaluated. Failure of the cuff air-leakage tests contributed to the overall failure rate as described above.

Table 4.

Failure rates of cuffs by cuff size

Purchase price

The reported original purchase price of devices varied from £5 to just over £100, with one outlier costing £240 and another acquired for free. Table 5 shows the relationship between purchase price and failure rate for those devices with data for both variables (n = 240). The vast majority (188/240, 78%) cost ≤£30, with the modal price band being £11–20 (n = 100). Monitor failure rate was highest for the cheapest machines (14 [34%] of 41 devices costing £1–10 failed) and fell as devices became more expensive: 3 (6%) of the 52 devices costing ≥£31 failed (P<0.001). However, including cuff failures resulted in no overall difference in failure by device cost.

Table 5.

Failure rate by approximate purchase price of monitor

Regression analysis

A regression model identified that length of time in service (8% increase in MAE for each additional year of service) and validation status (validated models having a 23% decrease in MAE compared with unvalidated monitors) were statistically significant predictors of MAE, but estimated frequency of use, previous testing, and cost of device were not (Table 6).
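The percentage effects quoted above suggest a model fitted on log-transformed MAE, in which a coefficient β corresponds to a (e^β − 1) × 100% change in MAE per unit change in the predictor. That specification is our inference from the phrasing, and the coefficients below are hypothetical values chosen only to reproduce the reported 8% and 23% effects (Table 6 itself is not reproduced here):

```python
import math

def percent_change(beta):
    """Convert a coefficient from a regression on log(MAE) into the
    implied percentage change in MAE per one-unit predictor change."""
    return (math.exp(beta) - 1) * 100

# Hypothetical coefficients matching the effects reported in the text:
print(round(percent_change(0.077), 1))   # ~8% rise in MAE per extra year
print(round(percent_change(-0.261), 1))  # ~23% lower MAE if validated
```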

Table 6.

Regression model — mean absolute errora

Because of the sheer diversity of models encountered, the intended analysis of the performance of different makes and models proved impossible. In the regression model, any discernible difference in performance characteristics was explained by the validation status of the device type.

DISCUSSION

Summary

This first study (to the authors’ knowledge) of accuracy of patients’ own monitors in the UK found that approximately three-quarters of monitors and/or the matching cuff passed a standard calibration test. Inaccurate monitors generally overestimated BP, and large cuffs were more than twice as likely to fail as those that were medium sized. Validated monitors, those costing >£10, and those that were ≤4 years old were most likely to be accurate.

Strengths and limitations

This work provides robust data to answer a question often raised by clinicians6 — namely, ‘how accurate is patients’ own BP monitoring equipment?’ — and one that is important in terms of planning the implementation of BP self-monitoring on a wider scale. Assessing a large number of monitors across several practices covering different sociodemographic strata provides reassurance that these results are likely to be generalisable, more so than previous smaller studies, despite a response rate of <25%. Fewer than one in 10 monitors had undergone any kind of previous evaluation, so such information is important.

It should be noted, however, that it was only possible to assess the accuracy of monitors that participants brought to be tested, which may represent a biased sample. A number of monitor types — primarily, wrist monitors for which there is no way of separating the cuff from the monitor — could not be tested using the researchers’ standard calibration equipment (Omron PA350); however, current guidance recommends the use of upper-arm devices, which the authors were, in general, able to test.

Data on frequency of use, length of time in service, purchase price, and previous testing were reliant on participant recall and, as such, may be subject to confounding, for example, devices in which users have more confidence (because of their apparent accuracy) may be used more frequently. However, any kind of evaluation of potential factors explaining variations in monitor performance is, to the authors’ knowledge, unique to this study.

Comparison with existing literature

At 42%, ownership of home BP monitors in the study presented here was slightly higher than in previous published surveys of patients with hypertension in the UK,17 but is in keeping with GPs’ estimates of patient self-monitoring.6 This is perhaps unsurprising, given the likelihood of preferential responses from monitor owners who wanted their equipment tested, although the authors emphasised also being interested in receiving null responses and included a self-addressed envelope to encourage all those contacted to respond.

Previous work from outside of the UK has generally found much worse performance than found in the study presented here: a Canadian study18 conducted between 2011 and 2014 found that around a third of patients’ monitors showed a difference of >5 mmHg (systolic and/or diastolic) compared with a mercury measurement. No statistically significant difference was found between monitors that were accurate versus those that were not when grouped according to patient characteristics, cuff size, or the brand of the home monitor in the Canadian study.18 Even greater inaccuracy was identified by a different Canadian group, with 69% of devices showing differences of ≥5 mmHg and no improvement in performance for validated machines.19 However, a Korean study20 using the same methodology found monitor failure rates of 15% — similar to those in the study presented here — and that inaccuracy was more common in unvalidated devices (19%, 25/130) than those that were validated (7%, 6/82).

A Turkish study,21 again using similar methods (although with 4 mmHg as the threshold for failure), identified inaccuracy rates of 59% overall, rising to 67% in the 119 upper-arm devices. The same sample of monitors showed accuracy was statistically significantly greater in validated devices (n = 22) compared with unvalidated devices (n = 52) (68% versus 15%, P<0.01).22 Conversely, an earlier Canadian study23 found no difference in monitor performance dependent on validation status but, again, included very few monitors (n = 26) that had been validated. The research presented here confirms the importance of using validated devices, which is generally called for in guidelines.9

Although it is a concern that several wrist monitors were not assessable and almost a quarter of the equipment (including cuffs) failed, the overall monitor failure rate of 15% is similar to that previously identified in devices used in general practice (13%)10 and pharmacies (14%).11 In those settings, devices were used more frequently but for shorter periods. Given that the authors of the study presented here employed quite stringent criteria — with a difference of more than 3 mmHg anywhere in the range being enough to constitute failure — this suggests the majority of home BP monitors can be considered reliable enough for use in primary care, especially those that are newer and validated.

Implications for practice

An accurate BP monitor is fundamental to the diagnosis and management of hypertension. Self-monitoring BP devices are currently not prescribed on the NHS and, to be able to recommend home monitoring of BP more widely, there needs to be confidence in the devices accessible to patients or an ability to provide clear guidance on which models to trust, and how long for. Monitor manufacturers typically recommend annual calibration after 2 years’ service. The fact that a small proportion of home monitors in use appear to be very inaccurate does suggest the need for regular performance checks, although a more pragmatic approach might be to restrict this to unvalidated monitors or validated models that are >4 years old.

This study suggests that validation status is a reasonable indicator of both short- and longer-term performance; HCPs should be encouraged to provide patients with clear advice on this. Given the issues with cuff failure noted in this study, it might be beneficial for manufacturers to develop quality-control algorithms that alert users when cuffs are not performing properly.

Monitors were more likely to fail the accuracy test because of overestimating BP rather than underestimating it; this suggests that underdiagnosis and undertreatment are less likely than overdiagnosis, which is reassuring.

This study suggests that the majority of monitors in current use by patients in UK primary care are likely to be accurate, and GPs should recommend that patients who are considering self-monitoring consult online lists of validated monitors (for example, https://bihsoc.org/bp-monitors/), replace monitors every 4 to 5 years, and avoid wrist models. Practices using such a policy could be confident that managing hypertension with such equipment is likely to be appropriate; other work by the authors suggests this will lead to better BP control.5

Acknowledgments

The authors would like to acknowledge the help of the GP surgeries involved in this research, without whose input this study would not have been possible. We also thank Karen Biddle for her administrative support and Georgina Dotchin for contributing to monitor testing.

Notes

Funding

This work represents independent research commissioned by the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research funding scheme (RP-PG-1209-10051). The views expressed in this study are those of the authors and not necessarily of the NHS, the NIHR, or the Department of Health. Richard J McManus was supported by an NIHR Research Professorship (NIHR-RP-02-12-015) and by the NIHR Collaboration for Leadership in Applied Health Research and Care (CLAHRC) Oxford at Oxford Health NHS Foundation Trust, and is an NIHR Senior Investigator. FD Richard Hobbs is part-funded as director of the NIHR School for Primary Care Research, theme leader of the NIHR Oxford Biomedical Research Centre, and director of the NIHR CLAHRC Oxford. Jonathan Mant is an NIHR Senior Investigator. No funding for this study was received from any monitor manufacturer. Constantinos Koshiaris is funded by an SPCR and Wellcome Trust/Royal Society Sir Henry Dale fellowship.

Ethical approval

Ethical approval was granted on 9 November 2015 by North West — Liverpool East Research Ethics Committee (reference number: 15/NW/0828).

Provenance

Freely submitted; externally peer reviewed.

Competing interests

Richard J McManus has received research support in terms of blood-pressure monitors from Omron and Lloyds Pharmacies.

Discuss this article

Contribute and read comments about this article: bjgp.org/letters

  • Received November 14, 2019.
  • Revision requested January 6, 2020.
  • Accepted February 7, 2020.
  • ©The Authors

This article is Open Access: CC BY 4.0 licence (http://creativecommons.org/licences/by/4.0/).

REFERENCES

  1. 1.↵
    1. Lewington S,
    2. Clarke R,
    3. Qizilbash N,
    4. et al.
    Age-specific relevance of usual blood pressure to vascular mortality: a meta-analysis of individual data for one million adults in 61 prospective studiesLancet2002360934919031913
    OpenUrlCrossRefPubMed
  2. 2.↵
    1. Ezzati M,
    2. Lopez AD,
    3. Rodgers A,
    4. et al.
    Selected major risk factors and global and regional burden of diseaseLancet2002360934313471360
    OpenUrlCrossRefPubMed
  3. 3.↵
    1. Stergiou GS,
    2. Siontis KCM,
    3. Ioannidis JPA
    Home blood pressure as a cardiovascular outcome predictor: it’s time to take this method seriouslyHypertension201055613011303
    OpenUrlCrossRef
  4. 4.↵
    1. McManus RJ,
    2. Mant J,
    3. Bray EP,
    4. et al.
    Telemonitoring and self-management in the control of hypertension (TASMINH2): a randomised controlled trialLancet20103769736163172
    OpenUrlCrossRefPubMed
  5. 5.↵
    1. McManus RJ,
    2. Mant J,
    3. Franssen M,
    4. et al.
    Efficacy of self-monitored blood pressure, with or without telemonitoring, for titration of antihypertensive medication (TASMINH4): an unmasked randomised controlled trialLancet201839110124949959
    OpenUrl
  6. 6.↵
    1. Fletcher B,
    2. Hinton L,
    3. Bray EP,
    4. et al.
    Self-monitoring blood pressure in patients with hypertension: an internet-based survey of UK GPsBr J Gen Pract2016DOI: https://doi.org/10.3399/bjgp16X687037.
  7. 7.↵
    1. Grant S,
    2. Hodgkinson JA,
    3. Milner SL,
    4. et al.
    Patients’ and clinicians’ views on the optimum schedules for self-monitoring of blood pressure: a qualitative focus group and interview studyBr J Gen Pract2016DOI: https://doi.org/10.3399/bjgp16X686149.
  8. 8.↵
    1. Parati G,
    2. Stergiou GS,
    3. Asmar R,
    4. et al.
    European Society of Hypertension Practice Guidelines for home blood pressure monitoringJ Hum Hypertens20102412779785
    OpenUrlCrossRefPubMed
  9. 9.↵
    1. National Institute for Health and Care Excellence
    Hypertension in adults: diagnosis and managementLondonNICE2019https://www.nice.org.uk/guidance/ng136 (accessed 23 Apr 2020).
  10. 10.↵
    1. A’Court C,
    2. Stevens R,
    3. Sanders S,
    4. et al.
    Type and accuracy of sphygmomanometers in primary care: a cross-sectional observational studyBr J Gen Pract2011DOI: https://doi.org/10.3399/bjgp11X593884.
  11. 11.↵
    1. Hodgkinson J,
    2. Koshiaris C,
    3. Martin U,
    4. et al.
    Accuracy of monitors used for blood pressure checks in English retail pharmacies: a cross-sectional observational studyBr J Gen Pract2016DOI: https://doi.org/10.3399/bjgp16X684769.
  12. 12.↵
    1. O’Brien E,
    2. Petrie J,
    3. Littler W,
    4. et al.
    An outline of the revised British Hypertension Society Protocol for the Evaluation of Blood Pressure Measuring DevicesJ Hypertens1993116677679
    OpenUrlCrossRefPubMed
  13. 13.
    1. Association for the Advancement of Medical Instrumentation
    American National Standard. Electronic or automated sphygmomanometer. ANSI/AAMI SP10-1992Arlington, VAAAMI1993
  14. 14.↵
    1. O’Brien E,
    2. Atkins N,
    3. Stergiou G,
    4. et al.
    European Society of Hypertension International Protocol Revision 2010 for the validation of blood pressure measuring devices in adultsBlood Press Monit20101512338
    OpenUrlCrossRefPubMed
  15. 15.↵
    1. Stergiou GS,
    2. Palatini P,
    3. Asmar R,
    4. et al.
    Recommendations and Practical Guidance for performing and reporting validation studies according to the Universal Standard for the validation of blood pressure measuring devices by the Association for the Advancement of Medical Instrumentation/European Society of Hypertension/International Organization for Standardization (AAMI/ESH/ISO)J Hypertens2019373459466
    OpenUrl
  16. 16.↵
    1. Atkins N,
    2. O’Brien E
    The Dabl Educational Trust device equivalence procedureBlood Press Monit2007124245249
    OpenUrlCrossRef
  17. 17.↵
    1. Baral-Grant S,
    2. Haque MS,
    3. Nouwen A,
    4. et al.
    Self-monitoring of blood pressure in hypertension: a UK primary care surveyInt J Hypertens2012Article ID: 582068.
  18. Ruzicka M, Akbari A, Bruketa E, et al. How accurate are home blood pressure devices in use? A cross-sectional study. PLoS One 2016; 11(6): e0155677.
  19. Ringrose JS, Polley G, McLean D, et al. An assessment of the accuracy of home blood pressure monitors when used in device owners. Am J Hypertens 2017; 30(7): 683–689.
  20. Jung M-H, Kim G-H, Kim J-H, et al. Reliability of home blood pressure monitoring: in the context of validation and accuracy. Blood Press Monit 2015; 20(4): 215–220.
  21. Dilek M, Adibelli Z, Aydogdu T, et al. Self-measurement of blood pressure at home: is it reliable? Blood Press 2008; 17(1): 34–41.
  22. Akpolat T, Dilek M, Aydogdu T, et al. Home sphygmomanometers: validation versus accuracy. Blood Press Monit 2009; 14(1): 26–31.
  23. Stryker T, Wilson M, Wilson TW. Accuracy of home blood pressure readings: monitors and operators. Blood Press Monit 2004; 9(3): 143–147.
Keywords

  • accuracy
  • blood-pressure monitors
  • calibration
  • hypertension
  • primary health care
