Australasian Science: Australia's authority on science since 1938

Remote Weapons: Ethics from a Distance

A fully armed MQ-9 Reaper drone taxis down an Afghanistan runway. Credit: US Air Force/Staff Sgt. Brian Ferguson

By Adam Henschke

Are military drones that launch lethal attacks by remote control of any more concern than traditional warfare capabilities?

Drones, lethal unmanned air vehicles and robots as part of modern warfare: these may sound like the realm of fantasy, but we are facing a revolution in military technologies. Many people interested in military ethics are concerned that these technological advances present important ethical risks, and that we should therefore reject these sorts of remote weapons.

Is there something ethically special about remote weapons that should cause us to be concerned? There are three points at which we might be facing some particular ethical concerns: that remote weapons are special, secret or ensnaring.

Remote weapons allow the application of lethal force by people who are not in the immediate vicinity of battle. Drones are the most commonly thought of technologies here, in part because they are the most widely used of these remote-controlled technologies. But drones do not necessarily entail lethal force. While drones like the Reaper Drone sometimes use weapons, the vast majority of drones in military use are used for intelligence, reconnaissance and other support roles. While there may be ethical concerns with standard drones, such as privacy and border security, these seem far less ethically troublesome than death by remote control.

At the other end of the scale are war-robots, fully autonomous weapons systems that are designed to make decisions to kill without human intervention. There are certainly deep ethical issues with this sort of technology – perhaps issues of human life and death shouldn’t be left up to artificial intelligence. But lethal use of force by unmanned air vehicles (UAVs) does not fit this description: humans are still “in the loop” and make the final decision about whether or not to kill.

Two things might make remote weapons ethically special – the fact that they operate at a distance, or the idea that they may be particularly disrespectful to those they kill.

The first issue, distance, gives many people their first sense of moral alarm. There seems something ethically problematic with someone in Nevada launching a missile from a UAV against someone in Afghanistan. The distance between pilot and target is so great, the reasoning goes, that such decisions about killing should not be made.

This argument, however, has very little bite. Since the invention of the bow-and-arrow, we have been killing enemies at a distance. Current warfare is frequently fought at a great distance: planes, submarines and battleships all use weapons that impact the enemy at a substantial distance from those firing the weapons. Moreover, the people operating these weapons will rely on TV screens and tools to receive information about their target, just like the UAV pilot. So, in this regard, UAVs are in no way ethically special compared with non-controversial existing military technologies.

Second is the idea that there is something disrespectful about being killed at a distance. However, this would be the same for any weapon beyond hand-to-hand or close-quarters combat. One thing that may make a UAV pilot distinct from a pilot in actual battle is that the UAV pilot is at very low risk of being harmed. A plane could still be knocked out of the sky, but the UAV pilot might be on the other side of the planet. By being so removed from any risk themselves, they are showing disrespect by not fighting fairly. This is called the “extreme asymmetry objection”: the disparity in risk of harm between the target and the pilot is so great that the UAV pilot somehow disrespects his target by killing them in this way.

However, war does not have to be a fair competition. It would be absurd to think that we cannot fight the enemy until we’re sure we have the same number of soldiers, the same weapons and the same strengths.

A range of important ethical criteria must be met for a war to be considered just, but “fairness as equality” is not one of them. War is not a sport, and should not be thought of as such.

Of vital importance is that the ethics and laws of armed conflict still apply to UAV pilots, as with any military action. Should a UAV pilot flout the laws of armed conflict by launching a strike against a group of known civilians, this is unethical and likely to be illegal in exactly the same way as it would be for a fighter pilot. That it happens by remote control is not of special relevance.

Perhaps there is something else that is troubling about the use of UAVs in modern conflict: that the ways in which they are used are secret. The argument is that UAV use ought to be banned not because of what UAVs are but because of how they are used. Since we don’t know how they are used, we ought to be very concerned when they are used.

The primary focus of criticism here is the US military’s use of UAVs to kill enemies in Afghanistan, Yemen and other areas of conflict. Targeted lethal strikes by the US military cause a proportionally small number of deaths: most deaths are caused by close air support (CAS), where troops on the ground are fighting at close range to both their enemy and civilians. Most casualties are from strikes called in by ground commanders, who authorise CAS strikes. These are not targeted strikes by the military or the CIA.

While the US military and the CIA both use UAVs, the vast majority of kills by UAVs have been carried out by the military. When reading figures on UAV strikes and deaths, military use and CIA use are often conflated. This is important to recognise because the legal oversight of military UAV use is very strict, and is hardly secret: the US army’s targeting process – the ways in which decisions about lethal strikes from UAVs are made – is outlined in The Targeting Process: The Official US Army FM 3-60 (FM 6-20-10). As this is available to purchase on Amazon, any claims about the secrecy of the military are entirely wrong here.

There is, to be fair, no comparable public access to the manual for the CIA’s targeting process, and ethical concerns remain about how the CIA makes its decisions to use lethal force. Following the controversy over the accidental deaths of US citizen Warren Weinstein and Italian citizen Giovanni Lo Porto, who were believed to have been killed in a CIA-led UAV strike against an Al Qaeda compound in Pakistan in January this year, US President Barack Obama said that he wanted to make the CIA program more transparent. The Wall Street Journal and CNN have reported that efforts are being made to shift all lethal UAV use away from the CIA and to the military, in part because of concerns about oversight of the CIA program.

A further question is whether intelligence agencies should be using lethal force at all. US Senator John McCain, for instance, has said that the use of lethal force should be limited to the military.

What remote weapons point to are deeper ethical questions about the role of organisations, what the military ought to be doing, and what intelligence organisations ought to be doing. Again this is not a special problem for remote weapons: the question is not should organisations like the CIA be permitted to use UAVs for lethal strikes, but should the CIA be permitted to use lethal force at all? Ethical analysis of new technologies often does this – the new use shines a light on existing practices and prompts us to ask a deeper set of questions about what we are already doing.

The final issue is perhaps more speculative, but might have more ethical bite than the first two – that of being ensnared. The concern is that by having remote weapons, a country might find it much easier to decide to go to war, but once in war, finds it very hard to get out. Remote weapons could lower the entry costs, but raise the exit costs. Since the pilots are safe at home, this substantially limits the chance of casualties on our side and makes a war seem easier to win. However, this turns out to be a false economy because, once we are in the war, we find out that we have to invest much more, such as soldiers, support staff and long-term investment to stabilise the region.

In order for this to be a serious ethical concern, three conditions need to be met – that remote weapons do actually make it easy to go to war, that they make it hard to leave, and that by comparison with other methods of warfare, they make it easier to get in but harder to get out.

The idea of ease of entry comes back to the negligible threat faced by UAV pilots. If a war is fought remotely, then one’s own soldiers will face very little risk. Such a war can become politically more palatable, perhaps even easy, to enter. At the core of this claim is an assumption about the way that remote weapons change the domestic perception of war – that these remote wars pose limited risk to the people fighting them. In this way, remote weapons do seem to be relevantly different from other methods of warfare. But this doesn’t mean that the war will be easy to win.

As the current crisis in Syria shows, many conflicts are immeasurably complex. Knowing with certainty exactly who the enemy is can be very hard, but it is utterly necessary for a war to be just: one can’t simply kill civilians and claim ignorance. UAVs can be very helpful in gathering information, but local knowledge and a deep, sustained engagement at the ground level are essential.

Furthermore, while bombs and bullets are important aspects of winning wars, much more is needed. Remote weapons are increasingly integrated into military practice, but boots on the ground are still fundamental for winning wars, and long-term stabilisation and peace require much more than weapons – remote or otherwise. Any efforts to sell such wars as easy wins ought to be criticised.

Complicating things further is that enemy perception of UAVs might counteract the aims of stabilisation. Some criticisms of the US UAV program in Afghanistan suggest that the use of lethal UAVs has hardened the enemy’s resolve and may prompt locals to participate in conflict. Remote weapons might thus become a tool of enemy recruitment, making it harder to resolve conflicts.

So it seems at least plausible that remote weapons could ensnare people into long, drawn-out and highly costly military campaigns, and could even help foster enmity among the local population, hardening resolve and acting as part of recruitment drives that feed more locals into the enemy military forces. However, this plausibility is dependent on the perception of UAVs: if a government is actively presenting a decision to go to war as low cost and winnable because of remote weapons, it is distorting the reality of warfare.

With these discussions in mind, we can draw the conclusions that the concerns about the “specialness” of remote weapons are unfounded, and that the claims of military secrecy are unwarranted. The role of the CIA is, to be sure, much more ethically problematic, but that concern is not about the use of such remote weapons per se, but about the proper function of military and intelligence organisations. Finally, the idea that UAVs could play a role in drawing countries into wars is possible but dependent on the public’s perception of the role of these remote weapons in future wars.

Adam Henschke is an ethicist at the National Security College, the Australian National University. He is co-editing Binary Bullets: The Ethics of Cyberwarfare for Oxford University Press (due early 2016) and has a book on the ethics of surveillance under contract to Cambridge University Press.