Response to the Open Letter Calling for a Ban on Autonomous Weapons


On 27 July of last year, over 1,000 experts and researchers – including Stephen Hawking, Elon Musk and Steve Wozniak – signed an open letter warning of the perils of using artificial intelligence in the military and expressing their concern that lethal autonomous weapon systems (LAWS) would “become tomorrow’s Kalashnikovs.” They therefore called for a ban on such weapons. Gérard de Boisboissel approaches the question from a different angle.

Historically, technological innovations that gave a nation a tactical advantage or strategic superiority over others were developed most intensively in times of conflict. And while today’s conflicts tend to be asymmetrical, threats arising worldwide demonstrate that – at least for Western armies – conflict situations with symmetrical phases are likely to arise again in the short term.

In this context, we should consider the advantages autonomous firing weapons can bring to armed forces before condemning them merely because they could one day become a threat if misused.


Phalanx Close-In Weapons System (CIWS) on USS George Washington
(Photo by US Navy, Mass Communication Specialist 3rd Class Stephanie Smith)

First of all, highly sophisticated systems of this kind already exist, and modern armies have already acquired them. For instance, the Phalanx anti-missile defence system is used on US Navy aircraft carriers and combat ships: it combines radar target acquisition with the automatic positioning of Gatling guns. Another example is Samsung Techwin’s SGR-A1 sentry robot, deployed by the South Korean Army to prevent any intrusion from North Korea into the demilitarised zone separating the two countries: with a human still on the loop, the robot can detect, identify and track targets automatically.

On that account, why can we claim that the use of LAWS will undoubtedly emerge in the coming decades? Quite simply because they offer the following defence benefits:

  • They are faster than humans, both in reacting and in addressing threats, and can thus protect military strength (personnel and equipment) before it is destroyed;
  • They can cope with saturating attacks;
  • They are operational on a 24/7 basis with great consistency, whereas humans are subject to fatigue and inattention.

Nonetheless, such machines cannot act on their own initiative – a point of the utmost importance. When deployed, these LAWS can be used in “autonomous shooting mode” only after a decision has been made by a military decision-maker trained to deal with such matters (probably an officer or a group of specially trained officers). Hence, the military decision-maker(s) will be responsible for configuring the machine and activating the lethal autonomous function, according to his/her knowledge of the threat, the rules of engagement in a given environment and the tactical situation. S/he will furthermore decide when to activate the “autonomous shooting mode”, while respecting the constraints s/he has to take into account.

We can therefore accept that LAWS are not reprehensible per se if clear rules are established for their deployment; i.e. when and how to deploy them.


Concept Art for Tern Unmanned VTOL Aircraft for Small Ships
(Image by DARPA)

Beforehand, a thorough certification process should be completed for the programming and configuration of such machines, and clear rules and validation methods should be drafted. Thereafter, the use of LAWS must be limited in both time and space. A disabling device should also be included to deactivate the “autonomous shooting mode” whenever needed, which means constant communication must be maintained between the operator and the machine. Moreover, should a defective transmitter/receiver circuit be detected, the “autonomous shooting mode” will be automatically disabled. Finally, a black box will be fitted to record all shooting decisions.
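The safeguards just described – a human activation decision, a deactivation link, automatic fallback on link failure, and black-box logging – can be sketched as a simple fail-safe watchdog. This is a minimal illustration only; every name here (the class, the heartbeat timeout, the operator identifier) is a hypothetical stand-in, not a real weapon-control API:

```python
import time


class AutonomousModeController:
    """Illustrative sketch of the fail-safe rules described above:
    autonomous mode stays active only while the operator link is alive,
    and every state change is appended to a black-box log."""

    HEARTBEAT_TIMEOUT = 2.0  # seconds without an operator heartbeat (assumed value)

    def __init__(self):
        self.autonomous_mode = False
        self.last_heartbeat = None
        self.black_box = []  # append-only record of all decisions

    def _log(self, event):
        self.black_box.append((time.time(), event))

    def activate(self, operator_id):
        # Activation requires an explicit human decision.
        self.autonomous_mode = True
        self.last_heartbeat = time.time()
        self._log(f"activated by {operator_id}")

    def deactivate(self, reason):
        # The disabling device: the mode can always be switched off.
        self.autonomous_mode = False
        self._log(f"deactivated: {reason}")

    def heartbeat(self):
        # Called whenever the operator link is confirmed alive.
        self.last_heartbeat = time.time()

    def check_link(self):
        # A defective transmit/receive circuit automatically disables the mode.
        if self.autonomous_mode and (
            time.time() - self.last_heartbeat > self.HEARTBEAT_TIMEOUT
        ):
            self.deactivate("operator link lost")
```

The essential design point is that the machine defaults to inaction: loss of communication never leaves the lethal function running unsupervised, and the black-box log preserves the chain of human decisions for later accountability.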

As we are currently keen on safeguarding military manpower and integrating troop safety into the doctrine for the use of arms, it seems quite conceivable that our armed forces would activate LAWS in an “autonomous shooting mode” in order to protect friendly assets – provided the above-mentioned rules are respected.

As for offensive operations, only high-intensity combat missions would seem to permit the use of LAWS – for instance, when a military commander could not ensure total control or coordinate lethal actions while facing saturating threats that would jeopardise the survival of his/her troops and the success of the mission.

Finally, some areas are more suitable than others for their deployment: air, sea, subsea, desert or underground spaces. By contrast, urban areas are unsuitable, as distinguishing and characterising a potential target’s behaviour in such a dense, heterogeneous environment is very difficult; the risks of deploying LAWS in such a space are too high.


X-45 Concept Art
(Image by DARPA)

In that same vein, let us imagine what future air combat will look like. An aeroplane’s manoeuvrability is limited by human physical abilities (a trained pilot can endure a maximum of 8G, and only for a short time, while adding to on-board weight), whereas the fighter planes of the future will be unmanned, i.e. Unmanned Combat Aerial Vehicles (UCAVs). Tomorrow’s pilot, located on or off board, will operate not one but a squadron of machines, each fulfilling a specific mission: target surveillance and detection, target geolocation, and weapons employment. Just as an aircraft carrier has to be protected by a flotilla of surface ships, a pilot, depending on the tactical situation, will be able to activate the “autonomous shooting mode” of selected UCAVs to protect his/her aerial assets.

Let us also note that condemning such devices would hinder the development and enhancement of countermeasures (interception, regaining control, cyber protection) against potential enemy LAWS.

Let us return to the Open Letter of 27 July 2015. It states that Artificial Intelligence (AI) can serve research positively if applied to peaceful ends, but should be banned if it contributes to an arms race. It must be admitted, however, that AI has become a major area of research and will gradually pervade our daily lives, generating numerous applications we will use every day. For instance, automatic individual recognition/identification by AI will be applied in security (detecting individuals in danger zones), in shops (customer identification) and in leisure activities (autonomous machines for services/comfort). Moreover, as military technologies increasingly rely on dual-use commercial technologies, adapting such algorithms to the military environment will be simple enough, since open-source data is freely accessible on the Internet. Given that the technology is so easily available, prohibiting recourse to it would be both complicated and illusory.


Artist Rendition of CICADA Drone Swarm Released from C-130
(Image by US Naval Research Laboratory)

Nevertheless, if the technologies for developing armed military robots are readily available to a State or a non-State organisation – and the latter, at least, has the will to adopt them – the absolute threat lies in entrusting a lethal decision to a self-learning or “deep learning” process. Only known and fully validated software code (as certified by formal-methods procedures) should activate a lethal system. Putting one’s trust in a machine for autonomous shooting decisions after it has undergone some self-learning process means that a human is no longer “upstream of the decision-making loop”. This should be absolutely prohibited, but it in no way implies that a LAWS cannot be activated when it is controlled by a trained decision maker.

To dispel popular misconceptions: there is no legal gap concerning the use of autonomous and semi-autonomous weapons. As with other military actions, the use of LAWS is subject to the rules set out in international humanitarian law. More to the point, there is no exemption of responsibility for a military commander who decides to deploy autonomous weapons in violation of international humanitarian law: sanctions would be taken against him/her, following established principles and procedures, before a competent jurisdiction. Moreover, recourse to a black-box recorder will enable human decisions/interventions to be traced whenever a LAWS is used in a fully autonomous mode.

As for ethical issues, Professor Dominique Lambert argues that contemporary decision makers have to act in infinitely complex situations and fluctuating environments, where a military decision maker must rise to the occasion to better perceive the situation in the field. It can be admitted that an “algorithmic ethics code” will never replace human ethics. Accordingly, we must refuse to apply the adjective “ethical” to machines endowed with “pseudo-ethical algorithms”, however sophisticated they may be.

The question of ethics thus shifts to the decision maker/military commander who decides whether to activate a LAWS “autonomous shooting mode”. S/he thereby retains the option of using this additional form of defence against enemies, depending on the military situation.

To conclude, it would be misleading to ban the development of LAWS merely because of the potential risk of losing control of the system. In some cases, such weapons will equip our armed forces, but their deployment will have to follow constrained and clearly defined rules of engagement, briefly summarised below:

  1. Such systems must be activated in an “autonomous shooting mode” by trained military decision makers;
  2. It must always be possible for the same military decision makers to deactivate the mode. This implies direct communication links between man and machine (safety back-ups to be planned);
  3. Activating the “autonomous shooting mode” must be limited in time and space;
  4. Any activation of an “autonomous shooting mode” must be tracked and recorded;
  5. Such activations are limited to saturating threats that cannot be handled within human reaction times, or to hostile environments or areas where human access is restricted;
  6. Self-learning AI processes must be prohibited for detection, target acquisition and target discrimination.
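The rules summarised above amount to a set of explicit preconditions that must all hold before activation. As an illustration only – every field name, operator roster and zone list below is a hypothetical stand-in, not drawn from any real doctrine or system – such a precondition check could be sketched as:

```python
from dataclasses import dataclass

# Hypothetical roster of decision makers trained to activate the mode (rule 1).
TRAINED_OPERATORS = {"officer-1", "officer-2"}

# Areas the article names as suitable; urban areas are deliberately excluded.
APPROVED_ZONES = {"air", "sea", "subsea", "desert", "underground"}


@dataclass
class ActivationRequest:
    operator_id: str          # rule 1: a trained military decision maker
    time_limit_s: float       # rule 3: activation bounded in time
    zone: str                 # rule 3: activation bounded in space
    saturating_threat: bool   # rule 5: threat beyond human reaction times
    uses_self_learning: bool  # rule 6: self-learning targeting is banned


def may_activate(req: ActivationRequest) -> bool:
    """Return True only if every rule of engagement is satisfied."""
    return (
        req.operator_id in TRAINED_OPERATORS
        and req.time_limit_s > 0
        and req.zone in APPROVED_ZONES
        and req.saturating_threat
        and not req.uses_self_learning
    )
```

The point of writing the rules this way is that each condition is independently auditable: a request that fails any single rule – an untrained operator, an urban zone, a self-learning targeting chain – is rejected outright, and rules 2 and 4 (deactivation link and logging) would then govern the system once active.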

Following 20 years’ experience in the telecom industry, Gérard de Boisboissel currently works as a Research Engineer at the Saint-Cyr Military Academy Research Centre. As a member of the Global Action and Land Forces cluster, he helps organise and monitor multi-disciplinary research programmes on changes in conflict: military robotics, cyber-defence and the enhanced soldier. In 2013, he was appointed Secretary General of the Saint-Cyr/Sogeti/Thales Cyber-defence & Cyber-security Chair.

