
Joint Force Quarterly, No. 67, 2012

Jeffrey S. Thurnher
No One at the Controls: Legal Implications of Fully Autonomous Targeting

"Autonomous robots on the battlefield will be the norm within twenty years."

-P.W. Singer, Wired for War1

Robots and unmanned systems have proven incredibly valuable on the battlefield during the war on terror and are likely to play a larger and more sophisticated role for militaries in the future. Between 2000 and 2010, the U.S. inventory of unmanned aerial vehicles (UAVs) grew from fewer than 50 to over 7,000, with similarly astounding increases among land- and sea-based unmanned systems.2 Despite overall reductions in upcoming U.S. defense budgets, expenditures for unmanned systems are projected to grow.3 All branches of the U.S. military are poised to rely more heavily on unmanned systems in the future.4 Not only are the numbers of these systems increasing but so are their capabilities. Technology has advanced so rapidly in the past few years, particularly regarding artificial intelligence, that the creation of fully autonomous systems appears a distinct possibility in coming years. The potential deployment of fully autonomous lethal systems raises significant legal and ethical concerns. These concerns, including whether such systems would even comport with the Law of Armed Conflict (LOAC), have yet to be definitively resolved. The technology, however, continues to race forward regardless. Therefore, operational commanders should begin examining the legal and the command and control implications of using such lethal autonomous robots (LARs) as they help steer the future development and doctrine of unmanned systems.5 While the use of LARs will arguably be deemed permissible under LOAC in most circumstances, prudent operational commanders should still implement additional control measures to increase accountability over such systems.

MQ-9 Reaper takes off in Afghanistan (U.S. Air Force)

Technological Advances May Make LARs Possible

Operational commanders need to be aware of recent technological advances and the extent to which the military is poised to incorporate them into future unmanned systems. While LARs may seem incredibly futuristic at first blush, the technological gap is quickly narrowing. In fact, the former chief scientist for the U.S. Air Force even contends that technology currently exists to facilitate “fully autonomous military strikes.”6 Several recent technological breakthroughs, particularly those involving artificial intelligence, highlight how attainable these systems are becoming.

The past few years have witnessed tremendous technological breakthroughs in artificial intelligence. Two highly publicized examples showcase its extraordinary potential. The first involves the IBM supercomputer system known as “Watson.” The Watson supercomputer is best known for competing against, and defeating, human champions on the television game show Jeopardy! in several special episodes that aired in February 2011. The uniqueness of Watson stemmed from the way it learned to identify the answers to the trivia questions. To attempt to replicate the complex human thought process, Watson was designed with more than 100 statistical algorithms, which helped it rapidly sort through multiple databases of stored information and learn—statistically speaking—which words were most likely associated with which answers.7 Watson marked an enormous advance in artificial intelligence both in the number of algorithms embedded into it and in the statistical methods it used in solving problems. The extraordinary technology showcased in the supercomputer will likely begin appearing in other computer systems and could be adapted to assist LARs in the future.8 This is but one recent breakthrough in artificial intelligence.
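
By way of illustration, the following toy sketch mimics the ensemble idea behind Watson at miniature scale: several independent scorers rate the evidence behind each candidate answer, and a weighted combination of their scores picks the winner. The scorers, weights, and evidence snippets are invented for illustration and bear no relation to IBM's actual algorithms.

```python
# A toy, hypothetical sketch of Watson-style answer ranking. Several
# independent scorers each rate the evidence behind a candidate answer,
# and a weighted combination of their scores picks the winner.

def keyword_overlap(clue: str, evidence: str) -> float:
    """Fraction of clue words that also appear in the evidence text."""
    clue_words = set(clue.lower().split())
    evidence_words = set(evidence.lower().split())
    return len(clue_words & evidence_words) / max(len(clue_words), 1)

def evidence_length(clue: str, evidence: str) -> float:
    """Toy prior: mildly favor richer evidence passages."""
    return min(len(evidence.split()) / 20.0, 1.0)

SCORERS = [(keyword_overlap, 0.9), (evidence_length, 0.1)]  # (scorer, weight)

# Hypothetical candidate answers, each paired with a snippet of "evidence."
CANDIDATES = {
    "Paris": "Paris is the capital city of France, on the Seine",
    "London": "London is the capital of the United Kingdom",
    "Berlin": "Berlin is the capital of Germany",
}

def best_answer(clue: str) -> str:
    """Return the candidate whose evidence earns the highest weighted score."""
    def combined(item):
        answer, evidence = item
        return sum(weight * scorer(clue, evidence) for scorer, weight in SCORERS)
    return max(CANDIDATES.items(), key=combined)[0]

print(best_answer("capital city of France on the Seine"))  # -> Paris
```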

A second technological breakthrough came from Google with its driverless car. Google funded a team of researchers to design vehicles that could drive without human controllers on city streets and public highways. The researchers, most of whom are part of Stanford University’s Artificial Intelligence Laboratory, created seven vehicles that navigated California’s freeways and streets accident-free for approximately 140,000 miles with only sporadic human assistance.9 The sophisticated artificial intelligence in these vehicles was able to “sense anything near the car and mimic the decisions made by a human driver.”10 This cutting-edge technology represented a tremendous leap forward in artificial intelligence. The potential military use of systems capable of autonomous navigation is clear. In fact, this Google project was an extension of an earlier Stanford University project that won the 2005 Defense Advanced Research Projects Agency (DARPA) Grand Challenge competition. That Pentagon-funded competition offered a $2 million prize to the team that could develop an autonomous vehicle capable of navigating itself over a 130-mile desert course.11 The Google version of the vehicle represents a marked improvement over the one that won the DARPA prize, and possesses the advanced artificial intelligence capabilities that the military will likely incorporate in future unmanned systems.

The true breakthrough of systems like Watson and the Google car is the way in which they adapt and learn. These systems essentially are able to learn from their own mistakes.12 The branch of artificial intelligence used in these systems is called “machine learning.”13 The computers can recognize patterns in data and accurately make decisions or perform functions based on those observed patterns.14 It is akin to humans learning through examples.15 Machine learning is helping computer developers tackle problems “once thought too complex for computers.”16
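
A minimal sketch of this idea, using one of the simplest machine learning methods (nearest-neighbor classification): the program is given labeled examples rather than explicit rules, and it labels new observations by the patterns in that data. The feature values and labels below are invented placeholders.

```python
# A minimal sketch of "machine learning" as described above: no explicit
# rules are coded; the program infers a decision from labeled examples and
# applies the learned pattern to new data. 1-nearest-neighbor is one of the
# simplest instances of the idea.

import math

# Hypothetical training data: (feature vector, label) pairs.
EXAMPLES = [
    ((1.0, 1.2), "civilian vehicle"),
    ((1.1, 0.9), "civilian vehicle"),
    ((6.8, 7.1), "armored vehicle"),
    ((7.2, 6.5), "armored vehicle"),
]

def classify(point):
    """Label a new observation by its closest training example."""
    _, label = min(EXAMPLES, key=lambda ex: math.dist(ex[0], point))
    return label

print(classify((6.9, 6.8)))  # -> "armored vehicle"
print(classify((0.9, 1.0)))  # -> "civilian vehicle"
```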

Any future development of LARs will rely heavily on such types of artificial intelligence reasoning capabilities. Machine learning computers will likely help future LARs attain the necessary behaviors to make critical decisions about whether and how to engage and destroy a target. The U.S. military has wisely positioned itself to incorporate these new technological breakthroughs into the next generation of its unmanned systems.

The Department of Defense (DOD) is at the vanguard of developing new unmanned technologies. DARPA is the “primary player in the world of funding new research in . . . robotics.”17 It sponsors research on future technologies, and is currently focused heavily on robots and unmanned systems.18 Other government entities, such as the Office of Naval Research (ONR), are funding efforts to develop robots that can act independently of humans.19 These DOD organizations helped create the vast numbers of unmanned systems that were deployed to Afghanistan and Iraq over the past decade of fighting.20 The organizations are now poised to develop even more sophisticated systems.

As technology advances, many cutting-edge DOD unmanned systems are taking greater advantage of these artificial intelligence improvements and are being designed with more autonomous features. In the U.S. Navy, close-in weapons systems such as the Phalanx found on Aegis-equipped cruisers and other ships now possess upgraded software enabling them to autonomously find, track, and destroy enemy antiship missiles.21 ONR is developing systems for the U.S. Navy such as the Biomimetic Autonomous Undersea Vehicle (BAUV), which is capable of conducting long-term underwater surveillance. BAUV can recognize changes in the environment and make adjustments autonomously to maintain its position in the water for many weeks.22 The Navy is also developing “mine-hunting” autonomous mini-submarines.23

The Navy is not alone in pursuing unmanned systems with autonomous features. The U.S. Air Force has designed its Global Hawk UAV systems to include autonomous flight options.24 Rather than directly controlling the aircraft’s every move, human operators merely designate patrol areas for the platform. The system then navigates itself to those areas using Global Positioning System satellites.25 The Air Force is also researching the use of Proliferated Autonomous Weapons, which are systems of small robots that could be flown autonomously to attack targets as a swarm.26
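
The control pattern described here, in which an operator designates an area and the platform derives its own route, can be sketched in a few lines. The box-and-ladder sweep below is a generic search pattern over assumed coordinates, not Global Hawk's actual guidance logic.

```python
# A hedged sketch of area-based tasking: the operator supplies only a
# patrol box, and the system generates its own GPS waypoints to sweep it.
# The ladder pattern and coordinates are illustrative assumptions.

def patrol_waypoints(lat_min, lat_max, lon_min, lon_max, tracks=4):
    """Generate a back-and-forth search pattern over a lat/lon box."""
    waypoints = []
    step = (lat_max - lat_min) / (tracks - 1)
    for i in range(tracks):
        lat = lat_min + i * step
        # Alternate direction on each track to sweep the whole box.
        lons = (lon_min, lon_max) if i % 2 == 0 else (lon_max, lon_min)
        waypoints.extend((lat, lon) for lon in lons)
    return waypoints

# Operator input is just the area; the platform does the rest.
for wp in patrol_waypoints(34.0, 34.3, 69.0, 69.4):
    print("fly to %.2f, %.2f" % wp)
```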

The U.S. Army has been developing a series of unmanned vehicles capable of autonomous operations. Some future Army counter-battery systems may be able to autonomously destroy incoming artillery and missile barrages at speeds faster than humans could possibly perform.27 Other Army unmanned ground systems are being designed to move around the battlefield autonomously, such as the Crusher Unmanned Ground Combat Vehicle. The Crusher possesses advanced artificial intelligence capabilities and may serve as an unmanned reconnaissance, supply, or fire support vehicle.28 It represents a potential prototype of the next-generation autonomous robotic ground fighting vehicle.29

In anticipation of these autonomous features becoming more widely available, DOD is already developing doctrine and tactics for incorporating autonomous systems into the overall force. Military organizations such as DARPA, ONR, and the U.S. Army Research Laboratory have been working diligently on the so-called warfighters’ associate concept, which will partner humans and robots to work as “synergistic teams.”30 The expectation is that robots on the battlefield will form the bulk of detachments, such as infantry units composed of 150 human soldiers working alongside 2,000 robots.31

Operational commanders need to be aware not only that these technological breakthroughs will make autonomous features more readily available but also that there will be a growing need for unmanned systems to become more autonomous. There are several key reasons for the growing need. First, requiring a man-in-the-loop for all unmanned systems is prohibitive both in cost and personnel. It takes scores of people, from pilots to technicians to intelligence analysts, to operate a single tethered UAV.32 Impending budget constraints may cause the overall size of the uniformed force to shrink in coming years. Autonomous unmanned systems, which are comparatively less expensive and require fewer human supervisors, will be expected to fill the capability gaps.33 Second, future battles will likely occur at such a high tempo that human controllers may not be able to direct drone forces to rapidly counter enemy actions.34 Essentially, a force in the future that does not have fully autonomous systems may not be able to compete with an enemy who does. Many nations, including China, are already developing advanced systems with autonomous features.35 Third, adversaries are improving satellite communications jamming and cyber-attack capabilities, and, as a result, systems tethered to a human controller may be incredibly vulnerable.36 Without a constant connection to a human operator, tethered systems are incapable of completing their missions.37 Thus, in general, future weapons systems will be “too fast, too small, too numerous, and will create an environment too complex for humans to direct.”38 One likely solution will be unmanned systems that are much more autonomous than those that presently exist.

Although the United States is developing a variety of autonomous features for many of its unmanned systems, the Nation remains committed, at the moment, to having a human remain in the loop for lethal targeting decisions.39 One of the main reasons the United States has not yet fully embraced lethal autonomous targeting is the legal uncertainty associated with robots making those life and death decisions.40 Deciding whether LARs are permissible under LOAC remains a hotly contested issue.

LOAC Would Permit Fully Autonomous Targeting Under Most Circumstances

LOAC has proven flexible, and has evolved and adapted over time due to advances in both weapons technology and military tactics.41 Many weapons systems were initially outlawed only to be accommodated later, once the technology proliferated to other nations and international norms conformed.42 LOAC is essentially derived from customary international practices and international treaties, but thus far there is neither international consensus nor an international treaty about autonomous targeting.43 Internationally, the debate over whether LARs should be lawful is highly contentious.44 Any examination of the lawfulness of LARs must begin with the aspect of LOAC known as jus in bello (justice in war), which focuses on determining the practices allowed and prohibited in war.45 The jus in bello comprises four bedrock principles: military necessity, distinction, proportionality, and unnecessary suffering or humanity.46 With a careful analysis of these and other foundational LOAC principles, the use of LARs will likely be deemed permissible in the vast majority of circumstances.

LOAC is not designed to hinder the conduct of war but is instead intended to ensure combatants properly direct violence toward the “enemy’s war efforts.”47 The principle of military necessity helps to achieve that goal. Military necessity requires combatants to focus their military efforts and attacks on those items with a military objective or those offering a “definite military advantage.”48 Thus, force may only be used when it will help the belligerent win the war.49 Belligerents are expected to examine whether an “object of attack is a valid military objective” before engaging a particular target.50 One normally looks to an object’s nature, location, use, or purpose to make that decision.

Given those parameters, LARs would need to be able to make the determination that a potential target meets the criteria as a valid military objective. While this decisionmaking process might be complex, forces utilizing unmanned systems would be able to greatly influence this process and likely ensure compliance with the LOAC principle. Even though a system is designed to operate autonomously, it would presumably be given specific orders from its headquarters about what types of missions it would be directed to accomplish. Leadership would most likely program LARs to only engage specific targets or at least specific types of targets. In essence, the systems would be programmed to recognize who the enemy is and what objects belong to that enemy. As long as the types of targets and missions assigned to LARs are valid military objectives, the LARs would be in compliance with the principle of necessity when engaging those targets.
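
In software terms, such a pre-programmed constraint could be as simple as a whitelist check run before any other engagement logic. A minimal sketch, in which the target type names are hypothetical placeholders:

```python
# A minimal sketch of the necessity gate described above: the system may
# engage only targets whose classified type appears on a commander-approved
# list of valid military objectives. All type names are hypothetical.

APPROVED_TARGET_TYPES = {
    "enemy tank",
    "enemy artillery piece",
    "enemy radar site",
}

def may_engage(classified_type: str) -> bool:
    """Engage only pre-approved military objectives; otherwise hold."""
    return classified_type in APPROVED_TARGET_TYPES

print(may_engage("enemy tank"))       # True: on the preset list
print(may_engage("unknown vehicle"))  # False: defer, do not engage
```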

The issue becomes more complicated if the target is not on a preset list. Such a situation might arise with a “target of opportunity” or in response to an emergency situation. The most likely emergency situation is one in which friendly forces are being attacked and LARs are dispatched to provide assistance. In those circumstances, the military necessity prong would be relatively easy to meet as part of a unit self-defense argument. Operational commanders may still want to limit LARs from engaging targets in such emergency situations.

The jus in bello principle of distinction requires belligerents to distinguish between combatants and civilians.51 It applies to both real persons and tangible objects.52 The intent is to minimize the harm to civilians and their property.53 Commanders have the affirmative duty to distinguish between these before ordering an attack.54 This principle is intended to prohibit indiscriminate attacks.

Unmanned aerial systems lead pilot controls ScanEagle UAV during exercise for aeromedical evacuation and ground medical components (U.S. Air Force/Donald R. Allen)

LARs would have the same requirements to distinguish as any other member of the force. They need to be able to discern between civilian and military objects and personnel. To make this distinction, LARs should be able to rely on uniforms and other distinctive signs. Given the advanced image recognition technology expected to be incorporated into LARs, the systems will likely be capable of recognizing this distinction consistently.55

As the United States and others have learned during the past decade of fighting, however, enemies do not always wear uniforms or use distinctive marks. In such uncertain cases, civilians are safeguarded “unless and for such time as they take a direct part in hostilities.”56 Determining if and when a civilian is taking direct part in hostilities is often exceedingly difficult. Similar to humans, LARs would have a hard time making this distinction.57 However, LARs possess one advantage over humans in this regard. They are not constrained by the notion of self-preservation. Thus, LARs could be programmed to sacrifice themselves to “reveal the presence of a combatant.”58 LARs could easily be ordered to hold fire until they are fired upon. In so doing, the use of LARs could greatly help a belligerent distinguish combatants from noncombatants on a complex battlefield. Belligerents would still need to satisfy the other foundational principles, including proportionality.
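
The hold-fire rule described above amounts to a small state machine: a contact is promoted to combatant status only after an observed hostile act. A hedged sketch, with invented states and event names:

```python
# A hedged sketch of "hold fire until fired upon": the system treats a
# contact as a combatant only after observing hostile fire directed at it.
# The states and event strings are illustrative, not fielded doctrine.

from enum import Enum, auto

class Status(Enum):
    UNKNOWN = auto()
    COMBATANT = auto()

def update_status(status: Status, event: str) -> Status:
    """Promote a contact to combatant only on an observed hostile act."""
    if event == "fired_upon_us":
        return Status.COMBATANT
    return status

def weapons_release_authorized(status: Status) -> bool:
    return status is Status.COMBATANT

contact = Status.UNKNOWN
print(weapons_release_authorized(contact))       # False: hold fire
contact = update_status(contact, "fired_upon_us")
print(weapons_release_authorized(contact))       # True: return fire permitted
```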

Proportionality requires belligerents to weigh the military advantage of their attack against the unavoidable collateral damage that will result.59 An attack is lawful as long as it is not expected to cause collateral damage that would be “excessive” in relation to the military advantage.60 Thus, collateral damage is permitted but only in an amount that would not be deemed excessive. It is vital to recognize that the balancing decision is made in anticipation of the attack rather than with the actual amount of collateral damage caused after the fact.61

This proportionality determination equates to a judgment call, which has always belonged to a human. Traditionally, the call has been compared against what a “reasonable person” or a “reasonable commander” would do in such a situation. As long as a similarly situated person would be expected to make a comparable determination of what is excessive under the circumstances, the decision to strike would be deemed lawful.62 Advances in artificial intelligence notwithstanding, it remains unclear whether a robot’s determination of excessiveness could be considered sufficient given such a standard.63

ScanEagle UAV launches from USS Comstock in Gulf of Aden (U.S. Navy/Joseph M. Buliavac)

Even if the proportionality standard represented an obstacle, many workarounds might still enable commanders to lawfully employ LARs on the battlefield. Operational commanders could use LARs in situations where a higher amount of collateral damage might be acceptable. Normally, attacks directed against high value targets or against a declared hostile force in a high-intensity conflict might fall into this category.64 Similarly, a commander could designate a limit for the amount of expected collateral damage that is permissible during a specific mission. Thus, if LARs determine that the expected number of civilian casualties exceeds the predetermined acceptable limit, they would not be permitted to engage the target without supplementary human approval. Beyond proportionality, the United States must also ensure LARs do not cause unnecessary suffering.
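
The predetermined collateral damage limit described above reduces to a simple threshold check with an escalation path. A minimal sketch, assuming a commander-set cap and invented function names:

```python
# A minimal sketch of a commander-designated collateral damage cap: the
# system compares its own casualty estimate against a predetermined limit
# and escalates to a human when the limit is exceeded. The threshold value
# and names are hypothetical.

COLLATERAL_LIMIT = 0  # commander-designated maximum for this mission

def engagement_decision(expected_civilian_casualties: int) -> str:
    if expected_civilian_casualties <= COLLATERAL_LIMIT:
        return "engage"
    # Over the cap: the robot may not strike on its own authority.
    return "hold and request human approval"

print(engagement_decision(0))  # engage
print(engagement_decision(3))  # hold and request human approval
```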

The last jus in bello principle is unnecessary suffering or humanity. When examining the lawfulness of LARs, this principle should not prevent their use as long as the robots employ standard munitions and tactics.65 LOAC requires belligerents to prevent unnecessary suffering when conducting attacks. To comply, belligerents cannot use any weapon or ammunition that is calculated to cause such harm.66 Instead, they must only use lawfully designed weapons and ammunition and employ them in a lawful method of warfare. All U.S. military weapons and ammunition have been designed with these considerations in mind. As a result, the United States does not field unlawful munitions per se, such as hollow-point rounds or warheads filled with glass.67 Accordingly, LARs equipped with standard weapons and ammunition and used in accordance with U.S. doctrine would likely be deemed to comply with the principle of unnecessary suffering.

Overall, as explained in the preceding paragraphs, LARs would arguably be in compliance with all four foundational jus in bello principles in the vast majority of circumstances.68 Commanders should, therefore, be confident in their ability to utilize LARs, especially when supplemented with additional control measures. This opinion on the lawfulness of LARs is by no means universal, however. Many legal commentators argue that LARs should be banned under international law.

There are several strong counterarguments for why LARs might not be permissible under LOAC. First, many critics argue that LOAC assumes a human is ultimately making the weighty life and death decisions. It would, therefore, be morally wrong to completely remove humans from these targeting decisions. On this view, LARs operate outside the bounds of the applicable international laws and norms.69 Second, other critics contend that the systems should be deemed illegal because their use could lead to a total lack of accountability for attacks on civilians. They assert that there is no human who can be held accountable for a breach committed by an autonomous system.70 Those critics contend that there is a “visceral human desire to find an individual accountable.”71 Third, other critics argue that the fact that a system is technologically possible does not mean it is lawful. They contend that some weapons systems are simply too dangerous and thus risk causing too much unnecessary suffering. They argue that other systems, such as lasers with the ability to blind soldiers on a battlefield, are technologically possible but have been banned from war for being too abhorrent.72 They contend that LARs should suffer a similar fate. Fourth, still other critics contend that LARs fail the proportionality test for some of the reasons that were discussed above. In particular, they argue that robots will not be able to “holistically weigh” the proportionality test.73 While LARs may be able to determine if the number of expected civilian casualties exceeds some predetermined limit, the proportionality test requires a greater sense of what is excessive.

While those critics provide compelling reasons to doubt the lawfulness of LARs, their counterarguments can be rebutted with a deeper examination of the many prevailing theories on the law. The first counterargument questioned whether LOAC is designed to handle life and death decisions made by robots vice humans. LOAC is indeed a flexible and robust body of law. It has adapted to numerous technological changes, such as the development of submarines, helicopters, and nuclear weapons.74 Although the development of LARs represents a significant advancement in warfighting, it is not so drastic a change as to warrant throwing out the existing body of international laws. LOAC can evolve to encompass LARs and provide necessary and sound guidance for their use. The second counterargument focused on the lack of accountability. Contrary to the opinions of those critics, LOAC does not require that a human be held personally accountable for any mistakes or violations that may occur on the battlefield. While the need to hold someone accountable might be “visceral,” it is not definitively required by law. Instead, international law demands that states not absolve themselves of liability with respect to a grave breach of the laws of war.75 Therefore, the state would likely be responsible for any breach related to LARs.76 Such a framework essentially exists today if, for instance, a sophisticated mine exploded incorrectly and injured a civilian or damaged civilian property. The lack of a human to hold accountable does not undermine the lawfulness of the weapons system.77

With respect to the third counterargument regarding abhorrent weapons, LARs can easily be distinguished from blinding lasers and other banned weapons. As opposed to those weapons, where the weapon itself is at issue, the unique feature of LARs is autonomous control.78 LARs are expected to use the same types of conventional munitions found on manned military systems, and the lethality of LARs would not differ substantially from that of other weapons systems. Thus, LARs would not cause the same type of unnecessary suffering as blinding lasers, and it therefore seems less likely that they would be deemed abhorrent under international law.

The fourth counterargument dealt with proportionality and the requirement for a holistic approach. As was discussed above, the proportionality judgment call is normally assumed to be a human decision. While it is not clear whether a robot’s determination will be deemed holistic enough for the critics, the commander’s judgment, as evidenced by his orders to LARs about acceptable levels of collateral damage, may be sufficient to encompass that holistic examination. Furthermore, there is actually no specific LOAC requirement for the judgment call to be holistic. International law merely requires belligerents to balance the military advantage against the expected collateral damage. Thus, critics are expanding the notion of proportionality beyond what is legally required.

In general, such strong counterarguments highlight just how complicated and unresolved these legal issues remain. Given this complexity, prudent operational commanders should enact additional control measures when utilizing LARs.

Prudent Additional Control Measures for Commanders of LARs

Even though LARs will likely be technologically possible and permitted under LOAC in the future, operational commanders would be wise to plan carefully for how and when to use such systems. There may be situations in which using LARs might actually prove disadvantageous and unnecessarily risky. If an operational commander ever doubts the effectiveness or lawfulness of using LARs in a particular situation, he either should not deploy them or should implement additional control measures to further protect the unit and the commander from LOAC violations. The following additional control measures will assist operational commanders in their employment of LARs.

First, operational commanders need to ensure that all LARs have the proper rules of engagement (ROE), tactical directives, and other national caveats embedded in their algorithms. Moreover, commanders must ensure that any revisions to the ROE or directives are rapidly inputted into and incorporated by the LARs. Unmanned underwater systems, particularly those without regular communications with the headquarters, may prove to be the most challenged in this arena. For LARs that cannot make such adjustments while deployed, commanders need to ensure those systems can be recalled and then reprogrammed quickly.
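
One way to meet this requirement is to carry the ROE as replaceable, versioned data rather than hard-coded logic, so a revision from headquarters can take effect without rebuilding the software. A hedged sketch with hypothetical field names:

```python
# A hedged sketch of ROE as versioned, replaceable data: a newer revision
# pushed over the command link (or applied after recall and reprogramming)
# supersedes the old one. Field names and values are hypothetical.

import json

roe = {
    "version": 12,
    "declared_hostile_forces": ["enemy armor brigade"],
    "weapons_free_zones": [],
}

def apply_roe_update(current: dict, update_msg: str) -> dict:
    """Accept a newer ROE revision from headquarters; ignore stale ones."""
    update = json.loads(update_msg)
    if update["version"] > current["version"]:
        return update
    return current

# A revision arrives from headquarters.
roe = apply_roe_update(roe, json.dumps({
    "version": 13,
    "declared_hostile_forces": [],
    "weapons_free_zones": [],
}))
print(roe["version"])  # 13: the new caveats now govern all engagements
```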

Second, commanders should limit when and where LARs are employed to avoid potential proportionality issues. Geographically, LARs are best suited to engage targets in areas where the likelihood of collateral damage is reduced, such as underwater or in an area like the demilitarized zone in Korea. Regardless of geography, LARs might be appropriate when the target is one of particularly high value. In such situations, a commander may have fewer proportionality concerns or might at least be able to quantify the amount of acceptable collateral damage. Utilizing LARs only in specific geographic environments or when pursuing high value targets would alleviate many of the critics’ proportionality concerns and best protect operational commanders.79
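
Geographic limits of this kind could be enforced with a simple bounding-box check run before any other targeting logic. A minimal sketch with invented coordinates:

```python
# A minimal sketch of the geographic restriction described above: the
# system verifies that a prospective target lies inside a commander-approved
# engagement box before any other targeting logic runs. Coordinates are
# hypothetical placeholders.

ENGAGEMENT_BOX = {  # e.g., a low-collateral area designated by the commander
    "lat": (38.0, 38.5),
    "lon": (127.0, 127.5),
}

def inside_engagement_area(lat: float, lon: float) -> bool:
    lat_ok = ENGAGEMENT_BOX["lat"][0] <= lat <= ENGAGEMENT_BOX["lat"][1]
    lon_ok = ENGAGEMENT_BOX["lon"][0] <= lon <= ENGAGEMENT_BOX["lon"][1]
    return lat_ok and lon_ok

print(inside_engagement_area(38.2, 127.3))  # True: engagement may proceed
print(inside_engagement_area(37.0, 126.0))  # False: outside the box, hold
```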

Third, operational commanders should carefully examine the type of conflicts where they might deploy LARs. They would be wise to use LARs predominantly during high-intensity situations where the ROE are status-based, meaning there is a declared hostile force to attack. Those declared hostile forces would then be more easily recognizable, eligible targets for LARs. LARs are less appropriate in counterinsurgency or irregular warfare situations, where “the blurring of the lines between civilian and military is a commonplace occurrence.”80 Similarly, commanders may also want to restrict LARs in emergency situations where the proposed target is not already on a preset list of targets. In such irregular fights and in emergency situations, the legal authority to engage with lethal force is more often conduct-based and thus contingent upon an enemy demonstrating a hostile intent or engaging in a hostile act. Given the higher degree of difficulty in identifying targets and the greater distinction concerns, the best approach may be to avoid using LARs under these circumstances. Prudent commanders should only use LARs in appropriate situations and recognize when it is best to resort to manned systems instead.

Transducer Evaluation Center pool at Space and Naval Warfare Systems Center Pacific tests autonomous robotics designed by international student engineers (U.S. Navy/Kimberly K. Fritz)

Lastly, LARs should be required to have some version of a human override, sometimes referred to as software or ethical “brakes.”81 The systems should be able to be shut down or recalled immediately upon a commander’s order.82 Commanders should also establish triggers for when LARs must seek human guidance before engaging a target. For instance, when a LAR identifies expected collateral damage greater than a predetermined acceptable limit, it could be required to seek guidance from the command before engaging that target. Commanders would need to establish protocols and support structures to facilitate quick decisionmaking for these potential targets. In these circumstances, human decisionmakers need a high degree of clarity about what situation the robot is facing. This oversight would not be effective if the human operator were merely a rubber stamp to approve an engagement. With prudent additional control measures such as these, commanders can more safely employ LARs on the battlefield and better protect themselves and their commands.
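
Combining the override with the escalation trigger, a hedged sketch of the control flow might look like the following; the class, threshold, and order names are illustrative only:

```python
# A hedged sketch of the human override ("ethical brakes") described above:
# a commander's abort order unconditionally inhibits weapons release, and a
# collateral trigger forces the system to seek human guidance. All names
# are illustrative assumptions.

class LarController:
    def __init__(self, collateral_limit: int):
        self.collateral_limit = collateral_limit
        self.aborted = False

    def receive_abort(self):
        """Commander's recall/shutdown order: takes effect unconditionally."""
        self.aborted = True

    def decide(self, expected_casualties: int, human_approved: bool = False) -> str:
        if self.aborted:
            return "stand down and return to base"
        if expected_casualties > self.collateral_limit and not human_approved:
            # Trigger: hand the decision up the chain with full context.
            return "hold; request human guidance"
        return "engage"

lar = LarController(collateral_limit=0)
print(lar.decide(2))                        # hold; request human guidance
print(lar.decide(2, human_approved=True))   # engage (human in the loop)
lar.receive_abort()
print(lar.decide(0))                        # stand down and return to base
```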

Conclusion

The United States will likely face asymmetric threats in military campaigns of the future. Whether the threat is the substantial jamming and cyber-attack capabilities of the People’s Republic of China or the legions of swarming Iranian patrol boats, LARs may provide the best way to counter it.83 LARs have the unique potential to operate at a tempo faster than humans can possibly achieve and to lethally strike even when communications links have been severed. Autonomous targeting technology will likely proliferate to nations and groups around the world. To prevent being surpassed by rivals, the United States should fully commit itself to harnessing the potential of fully autonomous targeting. The feared legal concerns do not appear to be an impediment to the development or deployment of LARs. Thus, operational commanders should take the lead in making this emerging technology a true force multiplier for the joint force. Operational commanders who establish appropriate control measures over these unmanned systems will ensure their LARs are effective, safe, and legal weapons on the battlefield. JFQ
