
Back in 1983, I wrote a monograph (never published) called ‘No Such Thing as a Free Lunch’, which sought to expose the biases inherent in drug company-sponsored research and the hard-sell marketing of drugs to doctors.1,2 Over the subsequent three decades, we have seen a steady realisation by the profession that bias in drug trials and drug marketing is a real, widespread, and sinister phenomenon. While the battle to eliminate bias in the drug industry is far from over, the field is now at a stage where the problems are understood, a governance infrastructure exists, and a strong and well-organised counter-lobby cries foul every time the industry makes a claim it cannot fully substantiate.
The same cannot be said for new technologies, the development and introduction of which are occurring at a pace that far outstrips the evidence base for their efficacy and cost effectiveness. To illustrate my point, I offer you, as a starting point for further debate, five biases of new technologies in medicine. They are:
Pro-innovation bias. This bias, first described by the innovation guru Everett Rogers in the 1960s, says that anything new is inherently better than anything already in use.3 People are classified using the value-laden terms ‘innovators’ (the best sort of person to be), followed by ‘early adopters’, ‘early majority’, ‘late majority’, and ‘laggards’. Who in their right mind would be a laggard?
Subjunctivisation bias. Much of the policy rhetoric on new technologies rests not on what they have been shown to achieve in practice but on optimistic guesses about what they would, could, or may achieve if their ongoing development goes as planned; if the technologies are implemented as intended; and in the absence of technical, regulatory or operational barriers.4 This is what Dourish and Bell call the ‘proximate future’: a time, just around the corner, of ‘calm computing’ when all technologies will be plug-and-play and glitch-free.5
Bells and whistles bias. This bias assumes that the more functions a technology offers, the better it will work. If you have ever tried to make the case to a salesperson that you want a mobile phone for the purposes of making phone calls, not to track your global positioning, take photographs, or check your email, you will know the counter-argument to this.
Connectivity bias. This assumes that the more technologies and systems to which a new technology can connect, the more useful it will be. The computer system that sits in splendid isolation, processing a parochial dataset for a local team, is seen as so 20th century compared with one that links to a national or, better, international data archive. Yet as those of us who regularly have to link our practice record system to the N3 Spine know to our cost, local systems work faster and more reliably the fewer external connections they make.
Human substitution bias. This bias assumes that whatever the task, a technology is as good as, or better than, a human. When we are sick, lonely, or distressed, we crave human company. If you do not believe that a whole research industry is now emerging, oriented to developing ‘social presence robots’ that will substitute for real, flesh-and-blood humans in these very situations, check out the studies appearing in the robotics journals.6,7
If these biases conjure up the kind of Brave New World in which you would prefer not to live, perhaps it’s time for a programme of research and social protest to parallel what Godlee and Goldacre have been spearheading in relation to pharmaceuticals.
© British Journal of General Practice 2013