By: Miriam Mokhemar

While most of us were taking finals or gathering with family over Winter Break, world leaders were meeting in Europe to discuss banning killer robots. Representatives of 125 nations, along with experts in artificial intelligence (AI), military strategy, weapons disarmament, and humanitarian law, met in Geneva at the United Nations’ Convention on Certain Conventional Weapons. During the December 13-17 meeting, these leaders and experts sought to introduce new legally binding mechanisms against killer robots: Lethal Autonomous Weapon Systems (LAWS) powered by AI.[1]

While many countries are still developing their LAWS, others have already begun using them in battle. In the ongoing Libyan civil war, reports suggest that a LAWS was used in the form of a drone. In March 2020, the Government of National Accord, Libya’s interim government, launched Turkish Kargu-2 drones that “hunted down and remotely engaged” opposing military forces.[2] These drone LAWS were “programmed to attack targets without requiring data connectivity between the operator and the munition,” creating a “‘fire, forget and find’ capability.”[3] Between these weapons and jamming devices from other warfare systems, the Libyan government decimated the operational capability of opposing forces.

Libya is just one example of a country using these new battlefield technologies as a significant force multiplier. The United States has been testing drone LAWS through the Defense Advanced Research Projects Agency, which has incorporated AI into L-39 Albatros jets.[4]

LAWS powered by AI are attractive to state militaries because the technology is new and not yet widely fielded by other militaries, and because it removes the risk of military personnel fatalities, along with human emotion and ethics, by making decisions based on processed data and algorithms. Like any new technology, once it is mass produced and becomes cheaper, it grows even more attractive. However, the downsides of AI trained on biased data, such as targeting “friendlies,” attacking civilians, and enforcing gender and racial stereotypes, will carry over into LAWS.[5] Furthermore, if nonstate actors acquire LAWS or AI technologies, they become substantially more threatening in conflicts between state and nonstate actors.[6]

This has not stopped countries from advancing LAWS and blocking international legal procedures against their use. The United States, Australia, South Korea, India, Russia, Japan, and Israel have blocked any advancement in U.N. talks toward legally binding measures to ban and regulate the development and use of LAWS.[7] These countries in particular find that the military advantages of quicker reactions and reduced direct exposure of troops in battle outweigh the downsides of AI and the potential constraints of international regulations.[8] Other countries, such as Austria, Chile, Ireland, and New Zealand, have been lobbying for legally binding rules.[9]

It is important to note that China, one of the United States’ chief adversaries, did not oppose legally binding rules “based on an agreement to be reached on definitions.”[10] China even submitted a position paper on regulating military use of AI at the conference.[11] While the position paper uses strong rhetoric, it does not mention restricting the use of machines capable of choosing and engaging targets autonomously.[12]

Additionally, the position paper may suggest that China’s military use of AI will not necessarily focus on LAWS. Instead, China may simply want to make military decisions faster.

According to Colonel Yuan-Chou Jing, former director of the Army Command Headquarters’ Intelligence Division and an associate professor at the Graduate Institute of China Military Affairs Studies, China would seek to beat opposing forces through “swiftness” and would mirror tactics similar to Hitler’s use of the Blitzkrieg during World War II.[13] Colonel Jing asserts that the advantage of Blitzkrieg tactics was that they effectively overwhelmed the enemy, especially unprepared infantry.[14] After every Blitzkrieg attack, the German infantry would move into the battlefield to suppress the last resistance from opposing forces, but it took time for the armored tanks to move off the battlefield.[15]

With AI, Blitzkrieg tactics could be faster and more decisive in acquiring and eliminating targets, particularly when the initial attack comes from a drone. AI could also eliminate the need for troops to suppress the final opposition, or make it easier for troops to intervene in the last stage of a Blitzkrieg-styled attack.

However, China’s position paper does not mention LAWS directly. Combined with other major security states’ objections to binding international regulations against LAWS, the global community has every reason to believe that LAWS and killer robots are being developed and are likely to be used in the future.


[1] Sam Shead, “UN talks to ban ‘slaughterbots’ collapsed — here’s why that matters”, CNBC News (Dec. 22, 2021, 9:45 AM EST), https://www.cnbc.com/2021/12/22/un-talks-to-ban-slaughterbots-collapsed-heres-why-that-matters.html.

[2] Final report of the Panel of Experts on Libya established pursuant to Security Council resolution 1973 (2011), (March 8, 2021), https://undocs.org/S/2021/229; See Maria Cramer, “A.I. Drone May Have Acted on Its Own in Attacking Fighters, U.N. Says”, The New York Times (June 4, 2021), https://www.nytimes.com/2021/06/03/world/africa/libya-drone.html.

[3] Id.

[4] Sue Halpern, “The Rise of A.I. Fighter Pilots”, The New Yorker (Jan. 17, 2022), https://www.newyorker.com/magazine/2022/01/24/the-rise-of-ai-fighter-pilots.

[5] Sumana Bhattacharya, “Top 10 Massive Failures of Artificial Intelligence Till Date”, Analytics Insight (Sept. 15, 2021), https://www.analyticsinsight.net/top-10-massive-failures-of-artificial-intelligence-till-date/.

[6] Sarah Kreps, “Democratizing Harm: Artificial Intelligence in the Hands of Nonstate Actors”, The Brookings Institution, 2, November 2021, https://www.brookings.edu/wp-content/uploads/2021/11/FP_20211122_ai_nonstate_actors_kreps.pdf.

[7] “Japan, U.S. block advancement in U.N. talks on autonomous weapons”, Kyodo News (Dec. 20, 2021), https://english.kyodonews.net/news/2021/12/c086de7578e9-japan-us-block-advancement-in-un-talks-on-autonomous-weapons.html.

[8] Id.

[9] Id.

[10] Id.

[11] Permanent Mission of the People’s Republic of China to the United Nations Office at Geneva and Other International Organizations in Switzerland, “Position Paper of the People’s Republic of China on Regulating Military Applications of Artificial Intelligence (AI)”, United Nations (Dec. 13, 2021, 13:00 CET), http://www.china-un.ch/eng/dbdt/202112/t20211213_10467517.htm.

[12] Id.

[13] Yuan-Chou Jing, “How Does China Aim to Use AI in Warfare?”, The Diplomat (Dec. 28, 2021), https://thediplomat.com/2021/12/how-does-china-aim-to-use-ai-in-warfare/.

[14] Id.

[15] “The German ‘Lightning War’ Strategy Of The Second World War”, Imperial War Museums (2022), https://www.iwm.org.uk/history/the-german-lightning-war-strategy-of-the-second-world-war.