
 “War never changes. Only weapons are new. Yet it is not the weapons, but the men who handle them, who win victories.”[1] While this was said in the context of the first few months of what would eventually become the long and gruelling First World War, the sentiment behind it remains just as true today. War is arguably the worst part of the human condition and has therefore been made subject to various rules and regulations by the international community, including the regulation of the types of weapons that may be used. The conduct of modern warfare is thus constrained by various conditions, and it is in this context that a new class of weapons has been introduced. These weapons have been given many different names, some with slightly different meanings but all encapsulating the same basic idea: Autonomous Weapons Systems (AWS), Lethal Autonomous Weapons Systems (LAWS), Fully Autonomous Weapons (FAW), or the more dramatic “Killer Robots”. This paper examines a brief history of the automation of weapons over the last few decades, how automation differs from the concept of an “autonomous weapon”, the regulations to which these new weapons will be subject, and the challenges both in the existing regulatory framework and in building international consensus on newer, more targeted regulations for this new class of weapons.

A Brief History of Combat Drones

Automated systems have been used in warfare at least since the beginning of the twentieth century. During World War II, American fighter pilots famously used remote-controlled planes like the “Dennymite” for target practice.[2] The Germans also used remote-controlled weapons in World War II, like the “Goliath”, a remote-controlled tracked vehicle carrying high explosives that could be driven into enemy positions,[3] and the “Fritz”, a winged, drone-like bomb that a “pilot” could guide onto a specific target using a remote control.[4] All of these are fairly primitive technologies by today’s standards, as they required constant interaction with a human operator and had little to no autonomy.[5] Development of drone technologies continued through the Vietnam War and the Cold War, but the rise of computer technology drew resources away from robotics and slowed the development of automated warfare technologies.[6] A great symbolic leap forward came during the Gulf War, when the US military deployed the Pioneer drone for surveillance and battlefield damage assessment, and it became the recipient of the first ever surrender of human soldiers to an unmanned system when Iraqi troops surrendered to it.[7]

After the 9/11 terror attacks and during the Global War on Terror, the US military vastly expanded its use of military drones,[8] especially as the widespread use of these weapons became feasible and attractive by 2011. Political and cultural factors also made drones increasingly common, especially given the requirements of fighting terrorists rather than the conventional enemies of the World Wars. These include the absence of a confined battle-space, which requires round-the-clock global surveillance that drones can provide more efficiently than human pilots.[9] Other factors, like the use of targeted killings against terrorists who hide among civilian populations and the decreasing political tolerance for military casualties, have also spurred the growth of drone use.[10] Today, Unmanned Aerial Vehicles (UAVs) are the most commonly recognised part of the drone arsenal of any country, especially the United States. These include the easily recognisable MQ-1 Predator, which was originally designed for surveillance but was later outfitted with Hellfire missiles for offensive capabilities,[11] its more “souped-up” successor, the MQ-9 Reaper, and more surveillance-oriented drones like the RQ-4 Global Hawk.[12] India has been slow to develop and use combat drones for its own military, and seems content to use imported platforms, like the Israeli Harpy, Heron and Searcher II. More recently, the Defence Research and Development Organisation (DRDO) has been developing the Rustom, which seeks to replace the Heron and is designed for Intelligence, Surveillance and Reconnaissance (ISR), communications relay and possibly even munitions delivery.[13]

Autonomy v Automation

While the drones in use today seem to come right out of the realm of science fiction, they are still fairly basic by the standards of modern robotics. The majority are essentially advanced remote-controlled airplanes with a human pilot, who just happens to sit at a military base rather than in the cockpit. These drones do not think, decide or act independently, and are described as “automated”,[14] while the future of this technology is expected to reach “autonomy”: the ability to execute missions without guidance from a human operator. The distinction between the two is vital to understanding and meeting the policy challenges the latter raise. As of today, humans are considered to be “in the loop” with regard to drone technology,[15] as they decide most of a drone’s actions, including whether to take offensive action. However, technology appears to be progressing in the direction of humans being pushed “out of the loop”, where they will no longer be necessary for decisions like when a drone (or a drone swarm) takes off, where it goes, how it acts, what it looks for, and with whom it shares that information.[16] Yet even with the pressing need to regulate these newer technologies before it is too late, there is still little consensus on what “autonomy” means and how “autonomous drones” can be defined.

To clarify the difference between automation and autonomy, it is important to understand a machine’s decision-making process, which can be effectively understood using the concept of the “OODA loop”; it is this loop that commentators from varying backgrounds refer to when using phrases like “in the loop” or “out of the loop”. The concept was the brainchild of John Boyd, a US Air Force pilot and military strategist, who realised that in a dogfight, the fighter pilot with the advantage was the one who could make faster and better decisions and throw the opponent’s decision-making “loop” out of sync.[17] Boyd described the human decision-making process in four steps: Observe, Orient, Decide and Act,[18] or OODA for short. A person going through this loop will first “observe” the world around her to take in information,[19] following which she “orients” herself by interpreting the information so gathered,[20] then uses her pre-existing knowledge and experience to “decide” how to act, and finally “acts”, executing the decision made.[21] While this has been described as a “gross oversimplification”[22] of the decision-making process of both humans and robots, considering that the four steps overlap in time and do not form a linear sequence, it remains a useful tool for understanding system design.[23]

Engineers use the OODA loop to evaluate a machine’s level of autonomy by measuring how much the machine depends on human operators to complete the loop: the less it relies on humans, the greater its autonomy.[24] Automated systems lack self-direction and decision-making capability; they simply have “the capacity to operate without human intervention”,[25] whereas autonomous entities are capable of being “independent in the establishment and pursuit of their own goals”.[26]

There exists no hard line between autonomy and automation, and autonomy is best described as existing on a spectrum,[27] ranging from purely “automated” robots, such as those used in a factory to weld doors onto cars, at one end, to futuristic, Terminator-like robots that can seek, identify and kill a target without any human intervention at the other. Most real-world systems fall somewhere between these extremes. In view of this, the Air Force Research Lab of the US Air Force released an 11-level autonomy spectrum,[28] which takes into account a crucial factor: a system can mix and match autonomy and automation while going through the OODA loop, showing a high level of autonomy at the observe stage, for instance, but lower levels at the decide or act stages, requiring human assistance at those points.

The drones in use today sit fairly low on the spectrum of autonomy. While drones like the Predator and the Reaper have very sophisticated surveillance and attack capabilities, in many ways they are still remote-operated planes at the lower end of the autonomy spectrum, and have been described as “little more than a super-fancy remote-controlled plane.”[29] Both the Predator and the Reaper can be flown in three modes: remote-controlled flying, semi-autonomous monitored flight, and pre-programmed flight.[30] Each of these, however, requires frequent human interaction, and the drones cannot suggest sophisticated actions to their operators or make complex decisions independently. The Global Hawk is an example of a drone a bit higher up on the autonomy spectrum, with abilities including taking off and landing almost entirely unassisted,[31] and it may be placed at Level 1 or Level 2 of the US Air Force’s scale.[32]

Combat Drones and the Law of Armed Conflict

Drones, described as Unmanned Aircraft Systems (UAS) or Unmanned Aerial Vehicles (UAV) in the defence sector, are becoming increasingly common on the modern battlefield. The most recent example is the 2020 conflict between Armenia and Azerbaijan over the disputed Nagorno-Karabakh region, which ended with Azerbaijan as the military victor.[33] It has been argued that the Azerbaijani drones that managed to take control of the skies were the game-changer in this conflict.[34] The use of drones by the US military in the targeted killings of terrorists in Pakistan and Afghanistan has also seen its fair share of controversy, being described as everything from “just the killing of the enemy, wherever and however found” to extrajudicial killing, targeted assassination and outright murder.[35]

While the use of armed drones is not specifically regulated under international law, it is still subject to the general rules of international law. Armed drones are not considered weapons in themselves, but rather platforms that deliver a weapon. International Humanitarian Law, however, covers weapons as well as weapon systems and platforms under the heading of means of warfare.[36] The legality of drone strikes must be seen through two normative concepts: jus ad bellum, the law governing the resort to force, and jus in bello, the law governing the conduct of hostilities. Together, they comprise what is known as the Law of Armed Conflict, or LOAC. LOAC governs a very specific subject matter, the use and application of force during armed conflict, and stands in contrast to broader or more general legal constructs.[37]

The LOAC has different rules for different types of conflict. “International armed conflicts” are traditional armed conflicts between states and are governed by the 1907 Hague Conventions,[38] the four Geneva Conventions of 1949,[39] custom and, for States party to it, the first Additional Protocol to the Geneva Conventions (AP I).[40] “Conflicts not of an international character”, alternatively known as “non-international armed conflicts”, are armed conflicts between states and non-state actors, including but not limited to internal armed conflicts, and are governed by Common Article 3 of the Geneva Conventions,[41] custom, domestic law and, for States party to it, the second Additional Protocol to the Geneva Conventions (AP II).[42] Another type of conflict that has often been described is the “internationalised non-international armed conflict”, a conflict between a state and a non-state actor that features additional states on one or both sides; it has been argued that since these conflicts are not expressly contemplated under the traditional laws of war, rules from both types of conflict must be applied where appropriate.[43]

Regulating LAWS

While fully autonomous weapon systems do not exist today, the foundations of the technology that will make them real in the near future have been laid.[44] LAWS can be divided into three categories based on the level of human involvement in the OODA loop: man in the loop, man on the loop, and man off the loop. The development of LAWS can be said to have begun with man in the loop technology in 2010 with border sentry robots, like the South Korean SGR-A1, a stationary robot reportedly in use in the Demilitarized Zone between the Koreas.[45] The robot is equipped with voice and gesture recognition technology, and has the ability to ask an approaching enemy to put up their hands and surrender.[46] If the person fails to comply, as determined by the gesture recognition technology, the SGR-A1 will send a signal to its human operator, who can choose to use lethal force.[47] The fact that a human has to make the final decision to use lethal force makes the SGR-A1 a man in the loop technology. Man on the loop weapons do not require humans to take affirmative actions, such as deciding whether to deploy lethal force, but a human continuously monitors the weapon and has the authority to override its actions.[48] The United States X-47B Unmanned Combat Air Vehicle (UCAV) can be considered an example of a man on the loop machine. It has the ability to take off from and land on aircraft carriers, refuel and navigate autonomously.[49] While the X-47B initially acts as a man in the loop machine, as a person has to program mission parameters like the destination, once it takes off the machine itself makes decisions like choosing the best route, while the human monitors the flight and can override the UCAV’s decisions.[50]

International law can govern weapons development in two ways: weapons can be ruled unlawful per se, or unlawful based on the ways they are used.[51] Per se weapons bans are rooted in the international norms prohibiting weapons that consistently cause, or would consistently cause, superfluous injury or unnecessary suffering, and weapons incapable of discriminating between military and civilian targets.[52] Article 35(2) of Additional Protocol I to the Geneva Conventions prohibits weapons that are “of a nature to cause superfluous injury or unnecessary suffering.”[53] Examples of weapons banned under this rule include explosive bullets, asphyxiating gas, and bayonets with serrated edges.[54] However, most such weapons were banned only after they had been introduced on the battlefield. There is only one example of a weapon banned even before it was introduced: the blinding laser, as blindness was considered an “unnecessary injury” in the context of warfare.[55] It is unlikely that LAWS will be banned under this category, as they are not “specifically designed” to inflict superfluous injury, nor are they necessarily indiscriminate. They are essentially just a new way to deliver a payload, one that would have much the same effect if delivered from a manned jet or a conventional UAV. Nor do LAWS cause a new form of injury, as blinding lasers do. Further, it cannot be said at this point that LAWS are incapable of differentiating between lawful and unlawful targets.[56] For weapons that are not per se illegal, legality is assessed use by use and analysed under the principles of distinction and proportionality. AP I Article 51(4)(a)[57] makes unlawful attacks that are not “directed at a specific military objective”. This is known as the principle of distinction. The second requirement, proportionality, is codified in AP I Articles 51(5)(b)[58] and 57(2)(a)(iii).[59] The principle of proportionality does not bar civilian damage, but only places an acceptable limit on it.[60]

Challenges in Regulating LAWS

The issue of regulating LAWS is an extremely important one for the international community, considering the pace at which the technology has been developing. However, it is beset by multiple challenges. At the outset, there exists no internationally agreed definition of LAWS, and different States and NGOs define them by reference to different characteristics. The US, for example, defines AWS as weapons systems that, “once activated, can select and engage targets without further intervention by a human operator.”[61] The United States’ definition notes that “this includes human supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”[62] Alternatively, some States base their definitions on the capabilities of the systems themselves.[63] For instance, the United Kingdom defines an “autonomous system” as one that “is capable of understanding higher-level intent and direction.”[64] The U.K. definition further explains that “from this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state.”[65] The lack of a common definition makes it difficult for any regulatory scheme governing these weapons to come into being.

Another important challenge is the difference in the perspectives of different States with regard to the potential regulation of LAWS. Many States, along with corporations and NGOs, have called for a pre-emptive ban on fully autonomous weapons, while countries like the US, U.K. and Russia oppose the negotiation of any treaty regulating LAWS at this juncture, believing it to be premature and unnecessary.[66] The US and U.K. believe that the existing LOAC is sufficient to regulate the development and use of such weapons.[67]

Arguably the most critical challenge is that of accountability for violations of international norms. In the case of LAWS, no human plans or decides upon an attack; the machine itself does. There are three models of liability that can be used to ensure someone is held responsible for the misconduct of LAWS: product liability, command responsibility, and direct responsibility of the robot. The third can be dismissed for the time being, as no technology exists that could fulfil it.[68] Under a product liability regime, either the software designer or the manufacturer of LAWS would be accountable for violations of the LOAC by LAWS. Criminal liability could be placed on these civilian actors only if they acted with the intent to break international law.[69] Private manufacturers of military technology, however, are rarely held accountable for malfunctions of their weapons, so this form of liability is unlikely to be available to regulate LAWS.[70] Finally, with regard to command responsibility, some observers feel it is unfair to impose criminal liability for the acts of a fully autonomous weapon on a military commander.[71] Others, however, feel that the unpredictability of an autonomous robot would be similar to that of human action, and since commanders are held responsible for the actions of humans, the same can be said for the actions of robots. Under the existing law of command responsibility, a commander could be held liable in three ways: the commander knew the robot was capable of violating the law of war; the commander should have known that the robot was so capable; or the robot was used in violation of these laws and the commander failed to take action against those responsible.[72]

Suggestions and Conclusion

LAWS no longer remain in the imaginations of science fiction writers, and it is high time that the law caught up. The international community needs to come together to create an effective LAWS treaty regime. Such a treaty should ideally build on existing weapons law and impose requirements such as distinction and proportionality on LAWS. It would also be necessary to include conditions specific to LAWS, like requiring States to record and preserve data from LAWS for a specified period, and requiring that States equip LAWS with self-neutralisation or self-destruct mechanisms to ensure control. Attention will also have to be paid to the question of accountability; as this would ideally be part of a new treaty, it could extend beyond existing legal frameworks and impose product liability on manufacturers, and the treaty should also seek to clearly impose command responsibility.[73] Another aspect of a robust regulatory system could be to build the OODA loop into the regulations themselves. For example, regulations could treat different stages of the loop differently, such as setting limits on the observe stage, including its area and duration, which would then shape how the machine orients itself, decides on a course of action, and acts. The regime could also differentiate between decision-making loops the machine is allowed to complete on its own and those that require human supervision.[74] It has to be kept in mind, however, that no such regulation will be perfect; there always exists a measure of risk in handing decision-making roles over to machines, and it is crucial that policymakers are well versed in these risks.

[1] Arthur Wilson Page, The World’s Work Second War Manual: The Conduct of War, 41 (Doubleday Page and Company, New York, 1st ed., 1914)

[2] Peter W. Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-First Century, 49 (Penguin, New York, 1st ed., 2009)

[3] Id. at 47

[4] Id. at 48

[5] William C. Marra & Sonia K. McNeil, “Understanding the Loop: Regulating the Next Generation of War Machines” 36 Harvard Journal of Law and Public Policy 1162 (2013)

[6] Supra Note 2 at 53

[7] Id. at 57

[8] Id. at 61

[9] Gary E. Marchant et al., “International Governance of Autonomous Military Robots”, 12 The Columbia Science and Technology Law Review 275 (2011)

[10] U.N. General Assembly, Study on Targeted Killings: Rep. of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, ¶ 27, U.N. Doc. A/HRC/14/24/Add.6, at 9

[11] Bill Yenne, Birds Of Prey: Predators, Reapers and America’s Newest UAVs in Combat 39-45 (Specialty Press, Minnesota, 2010).

[12] U.S. Air Force, “Unmanned Aircraft Systems Flight Plan, 2009-2047” 16 (2009)

[13] Paul J. Springer, Military Robots and Drones: A Reference Handbook, 90-91 (ABC CLIO California, 2013)

[14] Supra Note 5 at 1141.

[15] Shane Harris, “Out of the Loop: The Human-free Future of Unmanned Aerial Vehicles” Emerging Threats in National Security and Law (2012)

[16] Supra Note 5 at 1142

[17] Scott E. McIntosh, “The Wingman-Philosopher of MiG Alley: John Boyd and the OODA Loop,” 58 Air Power History 24 (2011).

[18] Robert Coram, Boyd: The Fighter Pilot Who Changed the Art of War (Back Bay Books, New York, 2002)

[19] Supra Note 15 at 26-27

[20] The Dynamic OODA Loop: Amalgamating Boyd’s OODA Loop & the Cybernetic Approach to Command and Control,  available at

[21] Ibid.

[22] Raja Parasuraman, “A Model for Types and Levels of Human Interaction with Automation”, 30 IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 286 (2000)

[23] Gilles Coppin & François Legras, “Autonomy Spectrum and Performance Perception Issues in Swarm Supervisory Control”, 100 Proceedings of the IEEE 590 (2012)

[24] Supra Note 2 at 74

[25] O. Grant Clark et al., “Mind and Autonomy in Engineered Biosystems”, 12 Engineering Applications of Artificial Intelligence 389 (1999).

[26] Id. at 2

[27] Id. at 10

[28] Evolution of a UAV Autonomy Classification Taxonomy, available at

[29] How the Predator UAV Works, available at (last visited on June 18, 2021)

[30] Supra Note 12 at 26-27

[31] Supra Note 2 at 36

[32] Supra Note 5 at 1170

[33] Armenia, Azerbaijan and Russia sign Nagorno-Karabakh peace deal, available at (last visited on June 18, 2021)

[34] Robyn Dixon, “Azerbaijan’s Drones owned the battlefield in Nagorno-Karabakh-and showed the future of warfare”, The Washington Post, November 12, 2020

[35] James Kitfield, “Wanted: Dead,” National Journal 21 (2010)

[36] Humanitarian Concerns raised by Use of Armed Drones, available at (last visited on 19 June 2021)

[37] Chris Jenks, “Law from Above: Unmanned Aerial Systems, Use of Force, and the Law of Armed Conflict” 85 North Dakota Law Review, 649 (2009).

[38] Hague Convention IV Respecting the Laws and Customs of War on Land, 1907

[39] The Geneva Convention for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field, 1949; Geneva Convention for the Amelioration of the Condition of the Wounded, Sick and Shipwrecked Members of the Armed Forces at Sea, 1949; Geneva Convention Relative to the Treatment to Prisoners of War, 1949; Geneva Convention Relative to the Protection of Civilian Persons in Time of War, 1949

[40] Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts, 1977.

[41] Supra Note 39, Article 3

[42] Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of Non-International Armed Conflicts, 1977

[43] Ryan J. Vogel, “Drone Warfare and the Laws of Armed Conflict”, 39 Denver Journal of International Law and Policy, 110 (2010)

[44] Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics, available at (last visited on 19 June 2021)

[45] Gun-Toting Sentry Robots Deployed in South Korea, available at (last visited on 19 June 2021)

[46] Id.

[47] South Korea to Field Gun-Cam Robots on DMZ, available at (last visited on 19 June 2021).

[48] Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Human Rights Council, U.N. Doc. A/HRC/23/47

[49] Northrop Grumman, X-47B UCAS Unmanned Combat Air System,  available at (last visited on 19 June 2021)

[50] New Drone Has No Pilot Anywhere, So Who’s Accountable?, available at (last visited on 19 June 2021)

[51] Supra Note 44 at 8

[52] Id. at 17

[53] Supra Note 40

[54] William H. Boothby, Weapons and the Law of Armed Conflict 26 (Oxford OUP, Oxford, 2009).

[55] Gwendelynn Bills, “LAWS unto Themselves: Controlling the Development and Use of Lethal Autonomous Weapons Systems”, 83 George Washington Law Review 176 (2014).

[56] Id. at 193

[57] Supra note 53 art. 51(4)(a)

[58] Id. art. 51(5)(b)

[59] Id. art. 57(2)(a)(iii)

[60] Supra note 44 at 20-21.

[61] U.S. Department of Defence, “DIR. 3000.09, Autonomy In Weapon Systems” 13 (2012)

[62] Id.

[63] Shane R. Reeves, Ronald T. P. Alcala & Amy McCarthy, “Challenges in Regulating Lethal Autonomous Weapons under International Law”, 27 Southwestern Journal of International Law 101 (2021).

[64] U.K. Ministry of Defence, “Joint Doctrine Pub. 0-30.2, Unmanned Aircraft Systems” 13 (2018)

[65] Id.

[66] Lethal Autonomous Weapons Systems: Recent Developments, available at (last visited on June 20, 2021)

[67] Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, Report of the 2018 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, ¶ 28, U.N. Doc. CCW/GGE.1/2018/3 (Oct. 23, 2018);

[68] Supra Note 55 at 196

[69] Losing Humanity: The Case Against Killer Robots available at

[70] Supra Note 55 at 197

[71] Supra Note 69 at 42

[72] Antonio Cassese et al., Cassese’s International Criminal Law 187 (Oxford OUP, Oxford, 3d ed., 2013)

[73] Supra Note 55, 198-207

[74] Supra Note 5, 1179-1181.

Author: K. K. Prahalad, Damodaram Sanjivayya National Law University, Visakhapatnam

Editor: Kanishka Vaish, Senior Editor, LexLife India.
