Strengths and limitations
All typical practices in England were included, thus minimising the risk of a biased sample. The prescribing and spending data used are real-world figures sourced from the pharmacy claims data that underpin very substantial payments: CCGs and pharmacies are therefore both highly motivated to ensure these data are accurate. A small number of atypical practice settings such as walk-in centres were excluded because these generally do not issue repeat prescriptions for medicines, and no data on their EHRs are available. The data in this study do not include hospital medicines data, but the authors do not expect the same issue to occur in hospitals, because medicines are procured differently and the use of electronic prescribing and EHRs in secondary care in England is more limited.
One weakness of this study is that ghost-branded generics were identified on the basis of their reimbursement price, an approach that may not be fully accurate. This is because the NHS Business Services Authority collects, but does not share, the more granular prescribing data needed to identify ghost-branded generics with complete accuracy (based on the presence of a generic name and manufacturer name in the ‘prescription item’ field).
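The price-based identification described above can be illustrated with a minimal sketch. This is not the study's actual pipeline: the field names (`presentation_code`, `reimbursed_cost`, `quantity`), the tariff lookup, and the rounding tolerance are all hypothetical assumptions. The idea is simply that an item reimbursed at a per-unit price deviating from the Drug Tariff price for its generic presentation is a candidate ghost-branded generic.

```python
# Illustrative sketch only: flag prescription items as likely ghost-branded
# generics when the per-unit price actually reimbursed deviates from the
# Drug Tariff price for the true generic. Field names are hypothetical.

TOLERANCE = 0.005  # ignore sub-penny rounding differences


def flag_ghost_branded(items, tariff):
    """Return items whose per-unit reimbursement deviates from the
    Drug Tariff price for their generic presentation, with the implied
    excess cost attached."""
    flagged = []
    for item in items:
        tariff_price = tariff.get(item["presentation_code"])
        if tariff_price is None:
            continue  # no generic tariff price listed: cannot assess
        unit_price = item["reimbursed_cost"] / item["quantity"]
        if abs(unit_price - tariff_price) > TOLERANCE:
            excess = (unit_price - tariff_price) * item["quantity"]
            flagged.append({**item, "excess_cost": round(excess, 2)})
    return flagged


# Toy data: one item reimbursed at the tariff price, one well above it.
tariff = {"0407010H0AA": 0.02}  # hypothetical per-tablet tariff price
items = [
    {"presentation_code": "0407010H0AA", "quantity": 100, "reimbursed_cost": 2.00},
    {"presentation_code": "0407010H0AA", "quantity": 100, "reimbursed_cost": 6.50},
]
print(flag_ghost_branded(items, tariff))
```

In the published, less granular data this price comparison is the only available signal, which is why the text above notes the method cannot be fully accurate: legitimate price concessions or tariff lags could also produce a deviation.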
A key strength of this analysis is that entirely open data were used, and all the code and data are shared for critical evaluation, re-use, and reproducibility.
Comparison with existing literature
To the authors’ knowledge this article represents the first research using large-scale national data and qualitative observations to identify a shortcoming in clinical practice caused by an EHR design feature. The authors are aware of only one small study in the Netherlands that previously reported an association between the EHR system used and inferior performance on one prescribing safety indicator, in a sample of only 90 GP practices, and with no follow-up to establish the underlying design flaw.16 A 2017 systematic review1 identified 34 relevant studies exploring the role of computerised systems in suboptimal prescribing. However, none used quantitative methods to directly compare practices using different systems: most reported the results of questionnaires interrogating clinicians about their experiences of EHRs; qualitative research observing and/or interviewing clinicians; or quantitative analyses of large databases of clinicians’ spontaneous reports of errors and safety issues. Some studies set out to evaluate the impact of a single specific new change to a computerised prescribing system, typically as a behavioural intervention to increase compliance with a desired choice.17–19 For example, two small studies assessed modifications to the default settings in an EHR that improved generic prescribing rates by 5.4% and 23.1%, respectively, albeit against a backdrop of low generics use in the US.4,5
Implications for research and practice
Three key policy issues have arisen from this study: the importance of evaluating EHR systems; of implementing open standards in health care; and of open data analysis in health care.
Clinicians use EHR systems to store information, to retrieve relevant information rapidly when assessing a patient, and to implement specific clinical actions such as ordering a test or prescribing a treatment. Healthcare activity is increasingly computerised, and EHR software is likely to exert a very substantial influence on the way that modern medicine is practised, in the same way that the rapid explosion in the use of social media has changed the ways that people interact socially.20 The authors were therefore surprised to find so little engagement by the clinical academic community in evaluating the impact of EHR design choices. Specifically, the authors are not aware of any previous attempts to use variation in observed prescribing behaviour between different EHR systems to identify, explain, and address the causes of suboptimal prescribing, or indeed any other aspect of clinical practice. They are now pursuing a research programme to evaluate differences in prescribing associated with different EHR systems in English primary care. More broadly, the relative absence of more ambitious work to address wider questions of EHR design is concerning. The EHR is a key technology used by clinicians in primary care, and will become increasingly important in secondary care as the NHS implements EHRs on a widespread basis. Questions of how best to represent, retrieve, and present knowledge about patients to clinicians — and the impact this can have on patient care — should be a key priority for funders and researchers in ‘digital health’.
One of the contributing factors to prescribing ghost-branded generics is likely to be that, when a prescriber is choosing from a pick list, a ghost-branded generic looks like a true generic. This is an example of a ‘look alike sound alike’ error, which has been recognised as a common source of medication error.21,22 The Medicines and Healthcare products Regulatory Agency (see Box 2 for details) has issued a safety alert because of adverse events, including deaths, resulting from confusion between similarly named medicines.23 Although the chance of a serious adverse event or fatal outcome is low with ghost-branded generics, for some medicines, such as category 1 anti-epileptic drugs, it is clinically important to specify the manufacturer on the prescription24 — that is, to deliberately prescribe a ghost-branded generic — so this functionality must be retained for some limited situations.
In recent years the NHS has pursued a strategy of setting standards to support the uptake and safe use of technology in the NHS, culminating in the launch of NHSX (see Box 2 for details).25 The NHS Dictionary of Medicines and Devices (DM+D) is the mandated standard drug dictionary for all suppliers of systems related to medicines across the NHS.26 Although the design choice in SystmOne that caused ghost-branded generics was initially thought to be largely unforeseen, implementation guidance27 supporting the DM+D standard in fact makes reference to the potential for confusion among prescribers driven by this issue. If the NHS sets standards, it must also invest in assessing whether those standards have been adhered to, in ensuring that system providers make modifications quickly when shortcomings are identified, and in monitoring for unintended consequences of any standards set.
The UK government has recognised the importance of sharing NHS data where possible,28 and the publication since 2010 of highly granular NHS primary care prescribing data has facilitated a rich ecosystem of tools, such as the live dashboard created by the authors (OpenPrescribing.net), alongside extensive original research from multiple teams.7,29–35 There is, however, substantial room for improvement. The authors have previously written about breaches by the NHS of best practice around data management and publication.36 In the case of ghost-branded generics, it is a concern that the NHS Business Services Authority manages the DM+D standard and has responsibility for its accuracy, but has not published medicines data compliant with its own standard. Ghost-branded generics could have been identified sooner if it had done so.27 The publication of prescribing data compliant with the mandated NHS standard of DM+D is therefore warranted.
In summary, a design choice in a commonly used EHR has led to an excess cost to the NHS of £9.5 million in 2018 in ghost-branded generics. A live dashboard on the OpenPrescribing.net website has been created to monitor this phenomenon in every practice and CCG in England. The authors recommend further research into EHR design choices, and publication of prescribing data compliant with the mandated NHS standard of DM+D.