In war, overpowering an opponent to the point of surrender requires the tools, knowledge, and actions that can induce such a response from one's opposition: one must get and stay ahead. Since World War II, the application of modern technology to military missions to create autonomous weapons systems (AWS) has been perceived as the third revolution of warfare, manifesting itself in machines and military robots from the primitive German Goliaths and Soviet Teletanks to today's semi-autonomous South Korean Samsung sentries at the DMZ and the ever more familiar American drones in the Middle East. Such weapons systems put soldiers further from the actual battlefield than ever before, which not only cuts costs but also prevents the loss of soldiers in the field. The attraction of using such weapons is understandable.

With steadily advancing gains in artificial intelligence, the application of new technology to autonomous weapons has triggered heated debate over whether completely autonomous weapons can be relied upon to act in accordance with international law. Unlike semi-autonomous weapons systems, which require humans to have the last say in launching an attack, machines with complete autonomy, once activated, "can select and engage targets without further intervention by a human operator." Despite the incremental deployment of more AWS on the battlefield, only two countries, the US and the UK, have so far openly developed national policies on the regulation of AWS, both of which highlight the importance of a level of regulation over these weapons before they are activated or deployed.

The issue at the core of the debate is whether AWS can comply with International Humanitarian Law (IHL), specifically the principles of distinction, proportionality, and precaution. Granted, such autonomous systems are only just beginning to be researched, but according to the United States Air Force, we may well see these weapons in use as early as 2030.

The first criterion is whether AWS can distinguish between civilian and combatant. Certainly, with the right code, it is theoretically possible to program AWS so that they can: algorithms and simulations that give AWS the ability to categorize friend from foe, through a mechanism such as facial recognition technology, could in principle fulfill this criterion. However, it would perhaps be folly to argue that on the ever more complex battlefields of modern war, AWS would reliably be able to distinguish between civilian and combatant. And if a program cannot be fully autonomous in making that decision, why not just stick with humans and human-operated systems?

The same arguments apply to AWS' ability to calculate the proportionality of a military attack with regard to how much damage it does to civilians, and to their ability to take every feasible precaution to prevent the loss of innocents. With adequate programming, the fulfillment of these three principles is possible, but since it remains speculative whether a program can ever be as complex and adaptable to change as the human brain, the effectiveness of AWS is uncertain.

With all this in mind, there are currently two paths by which the growing issue of AWS can be addressed. Figures such as Elon Musk and Stephen Hawking have called for a complete pre-emptive ban on any further development of AWS, arguing that because the technology required for AWS is relatively cheap (compared with, say, nuclear weapons), the ease with which such weapons can be acquired is, to say the least, alarming. Stunting AI in the process, however, could in turn limit technological developments that might actually help reduce civilian casualties on the battlefield. And because AWS are seen precisely as a way to make warfare less costly, a complete termination of their development is unrealistic.

An alternative to an outright ban would be a code of conduct: a set of non-binding guidelines for the research and development of artificial intelligence specific to AWS. True, non-binding rules cannot be enforced, but they are adopted precisely for their flexibility, adapting more readily to the changing contexts they apply to, which is ideal for a rapidly evolving field like weapons technology and AI. By providing guidelines, a code of conduct would build momentum for interactions between state and non-state actors, creating the knowledge and confidence in the field of AWS that are the keystone of further regulations and laws that become binding. It is by no means a permanent solution to the issues AWS pose, but rather a start in the direction of better regulation.

The technology for this discussion may not be here yet, but the possibility of its arrival within the next 15 years makes the discussion important nonetheless, as it helps establish the rules and regulations required of new technology entering the battlefield. The main concern currently surrounding the legality of AWS boils down to their capacity to make life-or-death decisions for humans.

We cannot know how this will play out, but if we assume that international law and its interpretation are based on context and judgment, the challenge of demonstrating these qualities may prove too difficult for AWS to overcome. This does not mean we should fear some Terminator-esque AWS and write off their development completely, however; establishing a code of conduct by which all countries developing fully autonomous AWS would abide would prove useful in guiding the discussion toward further regulation once the technological capabilities of AWS become clear.

By Ceinwen Thomas