Lethal Autonomous Weapons Systems: Humanity’s Best Hope?

Royal Air Force MQ-9 Reaper
(RAF Photo, Cpl S. Follows)

The past decade has witnessed a revolution in the use of remotely operated systems by the UK’s Armed Forces.  Nowhere has this been more evident – or controversial – than in the air domain.  Debate over the nomenclature of such systems – known variously as Unmanned Air Vehicles (UAVs), Uninhabited Air Systems (UASs), Remotely Piloted Air Systems (RPAS) and the plethora of hybrids that these and other terms have spawned – reflects the ideological battle that continues to rage over the nature of such systems and the extent to which meaningful human control prevails over them.  “Drone” has become the popular, yet currently misleading, term for such systems, and opponents have exploited it to propagate the false notion that the RAF, through its armed MQ-9 Reaper operations, is engaged in unethical and inhumane killing by autonomous machines beyond human control.

The UK Ministry of Defence’s (MOD) communications effort in countering the “Killer Drones” narrative has been moderately successful, despite the Reaper’s unedifying name.  The cornerstone of the MOD’s narrative defending the use of remotely operated systems is the centrality of human control over the machines.  But the technology enabling ever higher degrees of autonomy is evolving rapidly, and so are the arguments.  The edges between machine-made and man-made decisions are blurring.  Accordingly, the legal, ethical and presentational challenges that accompany such advances are already causing headaches for policymakers in the UK and other states.  There is reason to consider that evolution – or perhaps even revolution – in the artificial intelligence and robotics fields will ultimately fulfil the dream (or nightmare) of drones endowed with the ability to form reasoned judgements and then decide and act on them without human input.  Should technology spawn such capabilities, the only remaining impediments to their “weaponisation” would be international law and decision-makers’ ethics.  Furthermore, from a practical perspective, it will likely become increasingly difficult to define and agree universally where the boundary between meaningful human control and machine autonomy lies, when the ideal would be to achieve a perfect symphony between man and machine.

Already, designers of military and commercial equipment of all sorts seek to leverage the mutual advantages of human and machine to achieve optimum synergy for the overall system.  As machines become ever more intelligent and capable, it is surely likely that some functions currently performed best by humans will ultimately be better performed by machines, thereby releasing humans to exploit their capacity for extant or new functions in which their aptitude remains supreme.  Indeed, there is fundamentally nothing new in this.  For example, aircraft autopilot systems perform certain functions better than their human counterparts, and history shows us that the exploitation of human and machine synergy has been in constant evolution since Palaeolithic man first hewed a cobble into a hand-axe.  So far, it is arguable that this evolution has been confined to the physical rather than the conceptual domain, but is it really inconceivable that artificial intelligence might supersede human decision making, including decisions involving lethality, if the relevant technology proves itself more competent than human beings in making such decisions?

Although there is no internationally-agreed definition of what constitutes a Lethal Autonomous Weapons System (LAWS), it is generally understood that, in order to be described as truly or fully “autonomous” rather than simply “automated,” a system must be capable of independently interpreting higher-level intent and direction, and then analysing its physical and operational context in order to make decisions and act independently of further human influence.  In the case of fully autonomous weapons systems, these decisions include whether to employ lethal force.  Despite the wide spectrum of opinions on the legalities and ethics of LAWS, there is a general consensus that none exist yet.  Indeed, Human Rights Watch (HRW) accepts that “Fully autonomous weapons…do not yet exist.”  Even highly-automated systems such as the Phalanx close-in anti-missile system fail to meet the definition of “full autonomy” because humans programme them to respond within precisely defined parameters to pre-defined conditions.  With apologies to Descartes, in essence such systems do not “think,” therefore they are not [autonomous].

All parties agree that contemporary technology is incapable of producing systems with the artificial intelligence required to meet the broadly agreed understanding of what a truly autonomous system is: although a degree of autonomy can be achieved through the automation of certain functions of a weapons system, such systems are as yet incapable of exercising reasoning and judgement to the same sophisticated level as a human being.  In these respects, humans continue to outperform machines, and in the view of the International Committee of the Red Cross (ICRC), supersession by machines is “unlikely to be possible in the foreseeable future.”  Consequently, although partially autonomous systems have been demonstrated to perform well under highly predictable circumstances, so far not even the most complex “autonomous” system has exhibited the power of judgement necessary to adapt satisfactorily to complex, dynamic and unexpected circumstances.

In light of the pronounced limitations of current autonomous technology, the debate over LAWS has principally revolved around the issue of whether to introduce a pre-emptive ban on such systems, with groups such as the International Committee for Robot Arms Control claiming that “the delegation of violence to a machine – whether lethal or less lethal – is a violation of human dignity.”  The UK rejects the premise of this argument, stating that it would never delegate the decision to employ lethal force to a machine and that extant International Humanitarian Law (IHL) already effectively bans all states from introducing fully autonomous systems.  Article 36 of Additional Protocol 1 to the 1949 Geneva Conventions obliges states “to determine whether [a weapon’s] employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.”  In its interpretation of Article 36, the UK contends that a fully autonomous system would never be capable of meeting the principles of humanity, proportionality and distinction in the targeting process and that IHL signatory states are therefore compelled to limit weapons systems to those which operate under “meaningful human control.”  Under the current provisions of IHL, the principle of humanity is inseparable from the human species; ipso facto, no other living or artificial creation has the right to judge matters involving humanity.

But in arguing that humans alone have the right to make decisions that have humanitarian implications, there is an inherent presupposition either that humans are, and always will remain, superior to artificial creations in making judgements based on humanitarian principles, or that human mistakes or misdeeds will remain more admissible than machines’ potential inerrancy, simply by dint of being human.  The first presupposition is open to conjecture; the second, ironically, seems almost certainly inconsistent with humanitarian objectives.  So far in his history, man has failed consistently to live up to humanity’s loftier ideals.  To err is, indeed, human, as humanity’s sad history of war and its associated crimes against humanity have lamentably demonstrated.  But to forgive mankind en masse for its propensity for making bad decisions would be an error in itself if mankind develops artificial intelligence that is better equipped to make humanitarian decisions.  To argue that decisions to employ lethal force should always be made by humans is to argue that ISIL’s murderous reign of terror is more acceptable than, in another context, the sparing of a non-combatant by a machine whose “mind” is unfettered by fatigue, fear or hatred.

US Navy X-47B UCLASS Over Patuxent River, Maryland
(US Navy Photo, Liz Wolter)

The US, which is the only state other than the UK to have publicly announced its policy on autonomous and semi-autonomous weapons, has provided some detail on its approach to LAWS, but it is ultimately more ambivalent than the UK in its interpretation: “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”  What specifically is meant by “appropriate levels of human judgment” remains unclear, but senior US officials seem much more at ease with the concept that fully autonomous weapons systems will supersede some manned and remotely operated systems.  US Secretary of the Navy Ray Mabus recently declared, “I’m for a full-up penetrating strike fighter…[UCLASS] ought to be the bridge to a full-up strike fighter – an autonomous strike fighter – that [operates] in contested environments.”  By “contested environment,” it is reasonable to assume Secretary Mabus meant an environment in which the opposition can be expected not only to employ kinetic measures to defeat friendly systems, but also to contest the electromagnetic spectrum, i.e. one in which the ability to control a system via satellite link (or any other method reliant on the electromagnetic spectrum) is disrupted.  It might further be implied, therefore, that human operator intervention would be severely limited, if not negated entirely, under such operational conditions.  Hence, autonomy – i.e. self-reliance and the ability to “think” – would be a vital facet of such a system.

From this temporal vantage point, it is uncertain whether fully autonomous weapons systems will ever be viable.  But let us, for a moment, assume that they will become feasible at some future point, either through evolution or revolution in the artificial intelligence and robotics domains.  Technical viability will ultimately challenge legality.  Any international accord that either confirms that IHL already effectively bans LAWS or introduces a bespoke pre-emptive ban may deter or delay the development of such systems, but it is surely inconceivable that legislation could be anything more than a speed bump on the road to some form of military employment.

History shows that weapons innovation nearly always outpaces extant legislation – how, for example, could nuclear weapons ever meet the conditions of proportionality and humanity that IHL enshrines?  Yet they continue to form a vital component of several states’ military inventories.  Moreover, paradoxically perhaps, nuclear weapons are generally considered to have exerted a positive effect on the peacefulness of the post-World War 2 era: despite their seeming incompatibility with IHL and their potentially apocalyptic consequences, many states consider that nuclear weapons have (so far, at least) made a positive contribution to peace and have thereby reduced the scale of human suffering through war.  Should technology permit, those states that judge LAWS to offer military advantage are likely to argue that such systems would be more capable than humans themselves in exercising the lofty principles of reasoning and judgement, because they would not be susceptible to the deleterious effects of anger, fatigue, fear, greed, hatred and pain to which humans are subject.  Should LAWS develop to a point where they are capable of practising the highest levels of judgement and reason, unfettered by human frailties, it might reasonably be argued that they would be better equipped than humans to decide on matters concerning the use of lethal force.  It should need no reminder that each and every crime against humanity has been committed by man.

Neither technology nor humanity is yet at a point where life-taking decisions can be delegated to machines.  But whatever the status of IHL, it will take just a few LAWS genies to be released from their technological, legal and ethical lamps to revolutionise warfare.  The UK’s position on LAWS is commendable, but its foundations look vulnerable to an unpredictable and innovative future.  Whether through technological evolution or revolution, it would be unwise to conclude that international law in any form will ultimately prevent the creation of systems displaying a degree of autonomy that calls into serious question the viability and appropriateness of “meaningful human control” over decisions involving the employment of lethal force.  Paradoxically, the machines may ultimately be more humane than humans; given humanity’s track record, this does not appear to be an impossible or, indeed, undesirable aspiration.

Wing Commander Jim Beldon is a navigator with over 3,000 flying hours on the E-3D Sentry and has served on operations in the Balkans (including Operation ALLIED FORCE), Iraq, Afghanistan and across the globe.  He has served at various grades at the UK Ministry of Defence, the Permanent Joint Headquarters and the Joint Services Command and Staff College.  He commanded 8 Squadron (AWACS) from 2012 to 2014 before assuming his current role in Defence Communications at the Ministry of Defence.  In 2015, he was appointed as a Fellow of the Royal Aeronautical Society in recognition of his contribution to British aviation over the past 20 years.
