INTRODUCTION
As advancements in machine learning and artificial intelligence (AI) continue at an ever-increasing rate, there are growing concerns over the potential development of lethal autonomous weapons systems (LAWS), commonly known as ‘killer robots’.1 Such systems are defined as any weapon capable of targeting and initiating the use of potentially lethal force without direct human supervision or direct human involvement in lethal decision making.2 Several countries, including the UK, are investing in the technologies needed to build such weapons for military use, setting the stage for an imminent arms race. The emergence of these technologies would represent the complete automation of lethal harm, which AI experts fear would mark a third revolution in warfare, following gunpowder and nuclear weapons.3 LAWS would radically violate the ethical principles and moral code that are integral to our profession, necessitating urgent and collective action from the entire healthcare community.
CONCERNS AROUND LAWS
The prospect of a world with LAWS raises ethical, legal, and diplomatic concerns. These technologies would bring dire humanitarian consequences and geopolitical destabilisation. They would make possible the anonymous selection of human targets without human oversight, amplifying encoded human biases while excluding human morality. Targets could be selected on the basis of their perceived age, sex, ethnicity, facial features, dress code, gait pattern, social media use, or even their home address or place of worship. Once in existence, LAWS could be produced rapidly, cheaply, and at scale,4 marking the advent of a novel weapon of mass destruction that could be widely stockpiled and deployed in great numbers. Their digital nature would leave them vulnerable to cyber attack and malfunction, while the absence of human input makes legal accountability for their actions highly unclear. They could be acquired on the black market, used in assassination and ethnic cleansing, and integrated into non-military state activities such as law enforcement and border control. Without a human ‘in the loop’, LAWS would dehumanise conflict and lower the threshold for entering warfare, while their lethality would continually increase as their recursively self-improving algorithms acquire ever more data through repeated cycles of search–identify–engage.
HEALTH CARE’S HISTORY OF ADVOCACY AGAINST INHUMANE WEAPONS
The healthcare community has played a key role in establishing the bans on chemical, biological, and certain conventional weapons (such as landmines) that are currently in force. More recently, the collective voice of global health care, led by the International Physicians for the Prevention of Nuclear War, played a pivotal role in the campaign to ban nuclear weapons, culminating in the 2017 Treaty on the Prohibition of Nuclear Weapons.5 The success of our advocacy derives from our moral authority and professional credibility concerning the devastating humanitarian consequences of warfare and inhumane weapons.
WHY HEALTH CARE MUST OPPOSE LAWS
In line with our commitment to ‘do no harm’, the healthcare community must denounce the development of LAWS as morally abhorrent. As healthcare professionals, we believe that scientific progress should only be used to benefit society and should not be used to automate harm. Humans should never entirely hand over to machines any decision regarding human life, especially a decision to end it.
We are becoming increasingly familiar with the role of AI in health care. These technologies highlight the importance of human involvement in clinical decision making, especially in situations marked by contextual complexity and ambiguity, to reduce the risk of unintentional bias and iatrogenic harm.6 We therefore use AI to augment, rather than replace, human decision making that impacts human life.7 In doing so, we maintain that humans cannot be replaced by algorithms in the prevention of harm, a position in stark contrast to the mandate of LAWS, which are purposefully designed to replace human judgement in the decision to inflict harm. We are, therefore, morally obliged to resist any world in which LAWS exist.
WHY THIS IS URGENT
To date, no lethal autonomous weapons system has been developed. The technology required to build one, however, is advancing at breakneck speed. The world stands on the brink of an arms race, with the UK, US, and Russia among the wealthy nations poised in the starting blocks.8 Ensuring safety standards and legal regulation of these weapons will become significantly more challenging once an arms race is under way.
A CALL TO ACTION
The healthcare community has a history of successful advocacy for weapons bans, is well positioned to describe the humanitarian effects of weapon use, understands the risks of automation in decision making, and is experienced in promoting preventive action. Despite this, our profession has been conspicuously absent from the conversation around LAWS. This is not due to indifference or a lack of moral outrage, but rather to a general unawareness of the situation. Meanwhile, the UK government not only opposes a ban on autonomous weapons, but also actively contributes to their development by ploughing public money into next-generation digital autonomy, machine learning, and AI.8
As a collective voice, the healthcare community must strongly support the UN’s efforts to pre-emptively ban killer robots. Our representative bodies and royal colleges must urgently declare their formal positions and lobby the UK government to act on this issue of existential importance.
Please sign this online petition to add your voice: http://ipt.io/UA4VR.
© British Journal of General Practice 2019