INTRODUCTION
Interest in ‘nudging’ the public on health-related matters, such as healthy eating, exercising, becoming an organ donor, and most recently COVID-19, has spilled over into nudging healthcare professionals.1–6 Although experience and intuition serve clinicians well most of the time, the rules of thumb that drive the often quick or subconscious decisions made under the pressures of day-to-day practice may not always result in good-quality, cost-effective care. This has generated growing interest in designing behaviour change interventions that consciously or otherwise ‘nudge’ clinicians in a certain direction.1–5 However, the ethics of nudging have been questioned, as has the science underpinning it.7 In this analysis we examine the rise of nudge theory and discuss the opportunities and limitations of its application to behaviour change interventions aimed at clinicians.
THE RISE OF BEHAVIOURAL ECONOMICS AND ‘NUDGE’ UNITS
Daniel Kahneman, in 2002, and Richard Thaler, in 2017, won the Nobel Prize for their work in the field of behavioural economics. Their respective books Thinking, Fast and Slow8 and Nudge9 became international bestsellers by demonstrating how humans do not behave as rationally as traditional economic theory predicts, or as we would often like to think.
In Thinking, Fast and Slow Kahneman presents evidence suggesting we utilise two main thought-processing systems, which he calls system 1 and system 2.8 System 1 is fast and intuitive; system 2 is slow and deliberative. Many of our day-to-day activities rely on system 1, for example, navigating our daily commute or judging whether we will get on with someone we have just met. These decisions are based on heuristics, in other words rules of thumb derived from experience, habit, emotion, and intuition. They require little effort and often happen unconsciously. By contrast, system 2 requires purposeful, slow thinking, such as doing a complex sum in your head or writing a structured argument.
In Nudge Thaler and Cass Sunstein reason that, whether we like it or not, we are always being influenced by our environment and past experiences.9 Because of our tendency to revert to automatic or ‘fast’ thinking processes, we often make choices based on the path of greatest familiarity or least resistance. This usually serves us well in our busy lives. However, compared with system 2, system 1 is more vulnerable to making biased and sometimes plainly erroneous decisions, which we may not have made had we deliberated more carefully. Systematically identifiable short-cuts and biases in our decision-making processes are often referred to as cognitive biases.
Thaler and Sunstein argue that day-to-day choices with potentially important consequences should be framed in a way that offsets cognitive biases and encourages desirable decision making. They argue that this should happen without limiting options or significantly changing the associated economic incentives; by doing so, we are nudged in a certain direction. Thaler and Sunstein refer to this as ‘Libertarian Paternalism’. The classic example is a supermarket positioning fruit at eye level and chocolate bars on the bottom shelf. Nudges may aim to engage our system 2 thinking, but often target our more subconscious system 1 decision-making processes.
Over the past decade, government-sponsored Behavioural Insights Teams have emerged in countries across the world with the aim of improving public policymaking.10 They arose as behavioural economics gained mainstream attention and have since frequently been referred to as Nudge Units. Yet their work draws on sociology and psychology, and has expanded beyond nudging to include advising on regulatory measures and financial incentives. These types of interventions are technically no longer nudges by Thaler and Sunstein’s definition, as they respectively restrict choice and change individuals’ economic incentives, but they are still informed by behavioural science.11
In Box 1 we present some of the most commonly referenced cognitive biases and behavioural science approaches on which nudge-based interventions are designed.8,9,12–15 This adaptation of the Behavioural Insights Team’s original MINDSPACE framework is, like the original, neither exhaustive nor made up of mutually exclusive categories, but it aims to help the reader identify the commonest concepts in this field.16
Box 1. MINDSPACE: behavioural science approaches and cognitive biases commonly used in nudge-based behaviour change interventions

Messenger: We are heavily influenced by who communicates information to us, for example:
- We are more likely to disregard the same information when it comes from people or organisations we hold in low esteem than from those we trust and respect (messenger effect).
- We are more likely to value information tailored to us (personalisation effect).

Incentives: In responding to incentives, financial or otherwise, our behaviour is shaped by predictable biases, including:
- Wishing to avoid a loss more than we wish to secure a gain of an equal amount (loss aversion).
- Options perceived as scarce generate greater feelings of desire (anticipated regret/scarcity effect).
- Overweighting small probabilities and underweighting high probabilities (non-linear probability weighting/lottery effect/optimism bias/overconfidence bias).
- Valuing something in relative rather than absolute terms (reference dependence).
- Mentally allocating and spending money under separate headings, in a way that would not make financial sense if the money were bundled together (mental accounting).
- Valuing an immediate reward at the expense of a future reward of equal or greater amount (hyperbolic discounting/present bias).

Norms: We are strongly influenced by what others do, particularly our own social network, therefore:
- When a norm is desirable it should be highlighted; when it is undesirable it is preferable not to draw attention to it.
- Social networks can generate contagion of the desired behaviour until it becomes a norm.

Defaults: We tend to follow the path of least resistance, therefore:
- We will often go with the default option (status quo bias).
- Even small barriers that require effort to overcome may prevent us from doing something (friction costs).
- We find it easier to substitute a behaviour using heuristics-based system 1 thinking than to change it through effortful system 2 thinking (substitution bias).
- We may continue to do something that is not in our best interest if we have already invested substantial time and effort in it (sunk cost fallacy).

Salience: How we interpret information depends on how it is presented (framing), for example:
- Our attention is drawn to what is novel and seems relevant to us.
- We are more likely to respond to prompts or feedback that are timely and easy to understand.
- Our decisions can be overly influenced by the initial information we receive (anchoring bias).
- Our working memory is limited (cognitive load), and we tend to unconsciously filter out information as a coping strategy, particularly in stressful situations (choice overload).
- Overreliance on our own memory and experience can lead our decisions astray (recall bias/availability bias).
- We rely less on intuition when presented with multiple options simultaneously than when presented with the same options separately (joint versus separate evaluation).
- We are more likely to remember the first and the final moments of an event (primacy and recency effects).

Priming: Our actions are often influenced by subconscious cues (priming), for example:
- Sights, smells, words, or sounds may shape our behaviour and decisions without us realising they are doing so.

Affect: Our emotions can unduly influence the decisions we make, for example:
- We may feel a stronger desire to act when a single identifiable person is affected than when a group has the same need (identifiable victim effect).
- We underestimate how emotions, such as stress, and feelings, such as hunger or tiredness, will affect our judgement (hot–cold empathy gap).

Commitment: We like to think of ourselves as consistent between our commitments and our behaviour, therefore:
- We are more likely to do something if we plan it (implementation intentions) and have a clear deadline.
- We are more likely to do something if we publicly agree to it (commitment contract).
- We have a strong desire to reciprocate acts (reciprocity).
- Complying with a small request can make us more likely to accept a larger one in the future (foot-in-the-door technique).

Ego: We act in ways that make us feel better about ourselves, therefore:
- We value things more when we have a sense of ownership or control over them (endowment effect).
- We tend to compare ourselves against others (relative ranking).
- When things go well we tend to attribute this to our own efforts, and when they do not we tend to blame others or the situation, and vice versa (fundamental attribution error).
- We tend to seek evidence that supports the opinion we already hold (confirmation bias).
- We seek ways to contest facts that provide evidence contrary to our viewpoint, making it hard to learn from mistakes (cognitive dissonance).
COGNITIVE BIAS AND NUDGING CLINICIANS
Examining the list of cognitive biases in Box 1 and considering the potential associated nudge-based interventions, most readers will recognise that, despite the hype nudging has received in recent years, many of these approaches are not novel to clinicians. The ‘foot-in-the-door’ free lunch, the sponsored educational event triggering the desire to ‘reciprocate’, or the branded pen ‘priming’ clinicians to prescribe have all long been used by the pharmaceutical industry. Clinical leaders, managers, and policymakers know that who the ‘messenger’ is matters, that clinicians’ ‘egos’ need to be attended to, and that team-based ‘reciprocity’ is crucial to effect change. Skilled clinical educators utilise ‘primacy and recency effects’ when highlighting key points and take-home messages in their presentations. They appreciate that relating a story about a patient (the ‘identifiable victim’) can make more impact than population-level data. Quality improvement leads increasingly use coaching techniques that involve clinicians explicitly describing how they are going to achieve their goals (‘implementation intentions’) and making a public commitment to doing so (‘commitment contract’). Public reporting and benchmarking are forms of reputational ‘incentives’ that depend on our ‘ego’, our tendency to ‘loss aversion’, and our innate desire to compare ourselves with others (‘relative ranking’). The simplicity of the surgical checklist made it a global success because it increased the ‘salience’ of the need for safety checks and overcame the ‘friction’ of having to think through what had to be done each time.
Despite relatively longstanding knowledge of cognitive biases, we still have a long way to go in designing highly effective clinical behaviour change interventions. Well-intentioned interventions that appear ‘logical’ on paper often have limited impact on behaviour, lead to unintended consequences, or produce different outcomes in different settings. Cognitive biases can explain some of these untoward outcomes, and with this in mind there is arguably scope to apply knowledge about them better in the design, implementation, and evaluation of behaviour change interventions. These may include interventions aiming to change clinical practice (for example, uptake of evidence-based practice) or participation in other activities (for example, leadership, teaching, or research), and could involve better design of educational material, financial incentives, or the working environment. Details that may appear trivial to some underpin many cognitive biases and need to be carefully considered. For example, in educational material aimed at clinicians the exact ‘framing’ of a message matters: the wording and tone, the route and timing of delivery, the layout of data, and the colours and images used. There is potential in many settings to reduce the ‘friction’ that makes it hard for clinicians to deliver best practice and to increase the ‘friction’ involved in delivering non-evidence-based care, for example, by making it more cumbersome to request tests that evidence suggests add little value to clinical decision making. Setting the ‘default’ prescribing option as the most cost-effective drug is now common in UK general practice. Electronic health records (EHRs) in conjunction with machine learning have significant potential to nudge clinicians’ behaviour in a more tailored way.17 Designing nudges within EHRs also offers the opportunity to rapidly test clinician responses in a way that is informed by clinicians’ past behaviour, as well as the patient’s current condition.5
Importantly, there is an opportunity to train clinicians more rigorously about the risks of cognitive biases and about behavioural science in general, which is key to understanding patients, colleagues, and themselves.3 ‘Human Factors’ courses are gathering momentum. These often draw on airline industry experience to educate us about dangerous cognitive biases such as ‘anchoring bias’, ‘recall bias’, ‘choice overload’, ‘confirmation bias’, and ‘cognitive dissonance’ (Box 1), which can put patients’ safety at risk. Yet this type of training is still largely absent from most undergraduate and postgraduate curricula.
ARE CLINICIANS REALLY ‘PREDICTABLY IRRATIONAL’?
Lists such as that in Box 1 may help us understand why behaviour change interventions go awry. However, such lists will not predict which cognitive biases are going to emerge, in whom, in which settings, at what point, to what degree, and for how long, nor how clinicians will respond to the related nudge-based interventions. The complexity of behaviour and systems change in health services has been well documented.18 Clinicians also need the resources and skills to effect change. Cultural contexts and pre-existing social norms will influence how individuals respond. Different types of nudges may interact synergistically, but they also risk interacting counterproductively, and it is not always predictable which will happen. Clinicians’ apparently ‘wrong’ behaviour, for example, deviating from a guideline, may in fact be founded on careful judgement or an appropriate emotional response to a patient’s needs. Many of our cognitive biases, which appear irrational by traditional economic standards, make much more sense when framed within the context of evolutionary psychology.
Although we are all vulnerable to cognitive biases, the fact that a clinician’s role is to act as an ‘agent’ for the patient, rather than in their own interest, poses an important challenge to directly applying evidence from behavioural economics, which has largely been derived from nudging people to make decisions in their own best interests. Moreover, although the field is growing, a large part of the research into nudging healthcare professionals and into their cognitive biases has been conducted with medical students and trainee doctors using hypothetical vignettes.2,3,19 One of the most frequently referenced experiments by the UK’s Behavioural Insights Team, on changing GPs’ antibiotic prescribing behaviour, illustrates some of the limitations of such evidence as a guide to policy (Box 2).6
Box 2. Provision of social norm feedback to high prescribers of antibiotics in general practice: a pragmatic national randomised controlled trial — key study limitations6
Understanding the wider context of service improvement, and the behavioural science that underpins clinicians’ behaviour, is also essential. Here, implementers should consider drawing on more comprehensive behaviour change frameworks and theories. For example, the ‘Behaviour Change Technique Taxonomy’ offers a more comprehensive list of behaviour change techniques.20 ‘Normalisation Process Theory’ (NPT) is useful when seeking to embed behaviour change, in particular when complex changes are needed across an organisation.21 As with any intervention, nudges will always need testing and ongoing evaluation.
THE ETHICS OF NUDGING
Potential targets of a nudge are understandably wary about what is being defined as ‘desirable’ and by whom. Most are resistant to the idea of being covertly manipulated. Here we return to Thaler and Sunstein’s point that we cannot escape the fact that we are continually being influenced by our environment.9 Clinical work involves caring for patients with limited time, identifying pathology among undifferentiated symptoms, coping with emotional situations, and managing uncertainty. There is therefore a valid argument, whether the aim is to engage system 2 thinking or to subconsciously influence system 1, for purposefully designing interventions that shape clinicians’ behaviour in a way that works with their cognitive biases to help them provide safer, better-quality patient care within the resources available. However, as with any intervention, there needs to be a strong likelihood that the desired behaviour change will lead to the intended outcome. When the desired behaviour change may come at a cost to care in another domain, the pros and cons need to be balanced. And when the motivation behind a behaviour change intervention is less well intentioned, for example when driven by commercial or personal interests, it must be questioned.
CONCLUSION
Designing nudges transparently and on the basis of evidence should help engage clinicians in the process, thereby reducing the risk of interventions failing or, worse, backfiring. Future research in this field ought to focus on understanding which cognitive biases are most problematic in clinical practice, for whom, and when. By understanding this we will be better placed to design working environments, and to train clinicians, in ways that mitigate some of the risks that cognitive biases can present to good clinical care.
Acknowledgments
Many thanks to Melody Qiu, Arnoupe Jhass, Mark Petticrew, and Martin Marshall for comments on earlier drafts of the paper, as well as to Charlotte Paddison and Carl May for conversations on behavioural economics, behavioural science, and cognitive biases.
Notes
Funding
Luisa Pettigrew is funded by a Doctoral Research Fellowship in the Department of Health Services Research and Policy at the London School of Hygiene and Tropical Medicine. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care.
Ethical approval
Not applicable.
Provenance
Freely submitted; externally peer reviewed.
Competing interests
The authors have declared no competing interests.
© British Journal of General Practice 2021