The need for regulation in drone strikes

In 2013, Zubair Rehman, a 13-year-old Pakistani boy whose grandmother was killed by a drone strike, said: “Now I prefer cloudy days when the drones do not fly. When the sky brightens and becomes blue, the drones return and so does the fear”. With the United States’ military programmes striking fear into the hearts of children at the sight of a blue sky, we must ask ourselves some difficult questions about their use.

US drone strikes: above the law?

A key point of contention is the international legal basis for the drone strikes. The presumed international legal authority derives from a state’s right of self-defence against an imminent or actual armed attack. In The New York Times, Michael V. Hayden, a former director of the CIA, argued that the strikes were justified, saying that “the United States viewed these attacks as legitimate acts of war against an armed enemy”, adding that “radical Islamism thrives in many corners of the world where governments cannot or will not act. In some of these instances, the United States must”. Nevertheless, the scope of this right has been debated for years, especially around the notion of an imminent threat, and no agreement has been reached on the legitimate extent of force that may be used in its name.

On a broad reading of U.S. legislation, the President of the United States can fight a global war in perpetuity. In 2001, Congress adopted the Authorization for Use of Military Force (AUMF), authorising military force against those responsible for the September 11 attacks. The Act gave the president carte blanche to engage in warfare with no specified enemies and no geographic or temporal limits, and three US presidents have since invoked it to justify attacks against Islamist militants. Rosa Brooks, a law professor at Georgetown University, commented: “Right now we have the executive branch making a claim that it has the right to kill anyone anywhere on earth at any time for secret reasons based on secret evidence in a secret process that’s undertaken by unidentified officials.” This perpetual war erodes the very notion of armed conflict.

Most damning is the absence of guidance or rules for defining a combatant. In practice, drone strikes impose an extrajudicial death sentence on people located far from any battlefield. In war, there is no doubt about the legality of a government’s use of lethal force, but outside that context, the intentional killing of a civilian without prior judicial process or judicial oversight may breach international law. Through these extrajudicial death sentences, the line between combatants and civilians has become blurred.

Everybody recognises the challenges governments face in keeping their citizens secure against global threats. Nonetheless, with no agreement on what constitutes an armed conflict, who counts as a combatant, or what amounts to an imminent threat, the traditional laws of war no longer provide guidance. The international community needs to create new kinds of checks, balances, and accountability mechanisms to preserve the rule of law itself.

Out of the shadows

In 2013, the Obama administration adopted stricter guidelines for the US drone campaign. These included the requirement of ‘near certainty’ that the target is present at the location of an intended strike and that non-combatants will not be injured or killed. In 2016, the US Government released the Presidential Policy Guidance (PPG), setting out the actors and processes involved in deciding to launch a strike in areas “outside of active hostilities”. However, given the lack of published information, it is impossible to assess whether these rules are being followed. A report published last year by the Columbia Law School Human Rights Clinic and the Sana’a Center for Strategic Studies found far more civilian casualties than administration officials have admitted: of more than 700 reported strikes in Pakistan, Somalia, and Yemen since 2002, the U.S. Government has officially acknowledged only 20 percent. On the processes involved, a former analyst at the Counterterrorism Airborne Analysis Centre in Langley told The New York Times: “They come and say: we are getting ready to drop a bomb on there – are there any people other than the Taliban commander in this compound? I’d just say ‘no’ because they don’t want to hear ‘I don’t know’”.

Georg Wilhelm Friedrich Hegel argued that the essence of tragedy is conflict – not a moral conflict between right and wrong, but a conflict between legitimate rights and institutions. This provides an interesting framework for the debates over transparency in drone warfare. On the one hand, there is the doctrine of state secrets and the need for secrecy to ensure the effectiveness of US interventions, protect sources, and preserve the quality of information. On the other hand, there is a legitimate right to information, and yet we do not know essential details. In particular, we do not know the laws and policies being applied to a strike, the decision process behind it, or whether any mechanism exists to prevent mistakes and abuse. In short, the limits of these wars remain unknown.

The changing face of warfare

In the public mind, drones appear to have turned conflict into a costless and bloodless exercise, allowing the elimination of terrorism by “joystick warriors”. Jeff Bright, who flew US drones for five years, explained in a New York Times column: “I [would have] just walked out on dropping bombs on the enemy, and 20 minutes later I’d get a text — can you pick up some milk on your way home?” Philip Alston, the former United Nations special rapporteur on extrajudicial executions, warned in 2010 that remotely piloted aircraft could create a “PlayStation mentality to killing”, stripping war of its moral gravity.

Drone strikes represent a major change in our relationship with our enemies. They can be either ‘personality strikes’, where the individual is known, or, more commonly, ‘signature strikes’, which target people based on their appearance and behaviour. In A Theory of the Drone, Grégoire Chamayou argues that signature strikes replace “an epistemology of clear observation and judgement of fact with an epistemology of suspicion in which the targeting decision is based on the recognition of conduct or a life profile indicating a presumed state of membership in a hostile organisation.” Before drones, we would shoot at an enemy without knowing who they were; now we kill individuals precisely because of who they are, or at least who we think they are.

Humans’ role in future wars: the case for human-centred AI

We are at a critical juncture. Artificial intelligence (AI) has already become a key tool in the exercise of military power, and international competition in AI has already begun. In July 2017, China published its ‘Next-Generation Artificial-Intelligence Development Plan’. In September 2017, Vladimir Putin remarked, “whoever becomes the leader in this sphere will become the ruler of the world.” At the same time, the Pentagon is looking for ways to strengthen its ties with AI researchers, particularly in Silicon Valley.

The military’s use of AI, especially in drone strikes, poses a tough question: how much human control are we willing to cede over the critical functions of targeting and engagement? Fei-Fei Li, the head of Stanford University’s AI lab and chief scientist for AI at Google Cloud, explained in The New York Times: “Despite its name, there is nothing artificial about this technology — it is made by humans, intended to behave like humans, and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.” As James Miller, the former under-secretary of defence for policy at the Pentagon, argued in The Economist, autonomous systems will increasingly operate in highly contested spaces, where the temptation for states to let the machine take over will soon become overwhelming.

As the targeting of unknown enemies becomes more commonplace, we must keep humanity at the forefront of all discussions. For if we do not, we risk shifting towards a new normal and likely a tragedy. This is well summarised by Steve Goose, who said, “there’s a value of someone being able to appreciate the human consequences of war. A world without that could be potentially more harmful. If we went to war and no one slept uneasily at night, what does that say about us?”

by Arthur Dinhof
