Machines in the chain of command

Disputed ban on killer robots raises alarm bells over military use of artificial intelligence

Canadian Politics, Human Rights, USA Politics

Taranis by BAE Systems. Image courtesy of BAE Systems.

Follow the leader

From the Pentagon’s feral “legged squad support system” to swarms of drones to malicious computer programs directing theatres of war without human intervention—killer robots have dominated the imagination in this century’s defining arms race.

Lethal autonomous weapons systems (LAWs) have been widely denounced as one of the greatest threats facing humanity today. United Nations Secretary-General António Guterres famously called them “politically unacceptable and morally repulsive.” Yet, here we are, at the turn of an apocalyptic year, and there are still no clear human rights protections against AI systems in the Geneva Conventions—and despite years of debating an international ban on LAWs at the UN, there has been little progress toward actually putting one in place.

The Campaign to Stop Killer Robots was launched in 2013 in support of a global ban on LAWs and maintaining human control as a condition for the use of any weapons systems. Several Canadian groups have been active through this network, including the disarmament research institute Project Ploughshares and the Canadian Pugwash Group. Since the coalition’s founding, hundreds of AI and robotics researchers and academics—notably across Canada, the United States, Australia and Belgium—have released open letters calling for urgent action on the “third revolution in warfare,” to prevent abuses of human rights and dignity. The campaign attracted particular attention from Canadian media in 2018, when Prime Minister Justin Trudeau hosted the 44th G7 meeting in La Malbaie, Québec.

On December 13, 2019, Trudeau issued a mandate letter to Minister of Foreign Affairs François-Philippe Champagne, citing the need to “[a]dvance international efforts to ban the development and use of fully autonomous weapons systems.” Members of the Campaign to Stop Killer Robots coalition celebrated this mandate as “setting a promising tone” for Canada, which has otherwise been described as “waffling on the issue.”

The United Nations Convention on Certain Conventional Weapons (CCW) was expected to meet earlier this summer in what would have been the sixth year of discussing a ban on LAWs, but was delayed due to the COVID-19 pandemic. CCW meetings adapted this month for online participation have shown a split in international support for “human responsibility for decisions on the use of weapons systems.”

Branka Marijan, Senior Researcher at Project Ploughshares, emphasized the leadership role Canada could play in these discussions, describing this as “well in line with its other efforts on ensuring the responsible uses of artificial intelligence it has championed with France through the G7.”

In a webinar hosted earlier in August, Ploughshares’ Executive Director Cesar Jaramillo emphasized the urgency of the ban, given the rapid developments in AI technology and the potential humanitarian toll of normalizing fully autonomous weapons systems. Ploughshares sees the CCW delay as an opportunity for Canadian officials to follow up the mandate letter with a position paper—something that has been urged by the UN.

Yet the UN’s consensus-based process has led to a frustrating impasse. As Human Rights Watch reported earlier this August, some of the largest global investments in developing lethal autonomous weapons are from Australia, China, Israel, Russia, South Korea, Turkey, the United Kingdom, and the US.

Canada’s own Strong, Secure, Engaged defence policy, released by the Department of National Defence (DND) in 2017, states that the Canadian Armed Forces are “committed to maintaining appropriate human involvement in the use of military capabilities that can exert lethal force,” and that there is a need to “promote the development of international norms for the appropriate responsible and lawful use of remotely piloted systems, in support of Global Affairs Canada.”

The policy also vaguely acknowledges that advances in autonomous systems “have the potential to change the fundamental nature of military operations,” and that domestic legal and governance systems will need to adapt quickly. But there is currently little clarity around what accountability looks like even with existing semi-autonomous systems.

“At the moment, it is not clear that a human operator would be liable for the actions undertaken by a system,” Marijan pointed out. “If the human operator did not have knowledge or intent for a specific action, their liability would be difficult to establish. Quite simply, our laws of war are made for humans, not machines or algorithms.”

Even with Canada’s mandate to support a global LAWs ban and the lingering uncertainty around accountability, the issue hasn’t been treated by Canadian officials with the urgency it deserves. Paul Meyer, Chair of the Canadian Pugwash Group and Fellow in International Security at Simon Fraser University, confirmed Canada’s restrained approach to the LAWs ban. Referring to the mandate letter, Meyer stated that “[w]e don’t see signs of realizing the guidance.”

Canada is in a unique position to lead international action toward a global ban on LAWs, if not through the UN channel, then by following the example of previous arms control mechanisms. Canada played a crucial role in the 1997 Ottawa Treaty on landmines, where then Minister of Foreign Affairs Lloyd Axworthy initiated a process that circumvented the UN with a core group of states in support of this ban.

Meyer attributed the success of the Ottawa Treaty to a strong civil society movement through the International Campaign to Ban Landmines, which now finds hopeful resonance in the Campaign to Stop Killer Robots. Such opposition from civil society groups and researchers, Meyer noted, also offers an opportunity “for non-traditional players in national security to have some influence on the policies and practices that are ultimately adopted by their governments.”

DARPA’s Legged Squad Support System. Photo courtesy of DARPA.

Our southern neighbours

Canada’s stance on a LAWs ban is nonetheless affected by US military policy and shaped by interoperability between NATO allies. While Trudeau’s directive to Champagne clearly supports a ban on LAWs, there is nothing stopping Canadian companies from developing LAWs, both domestically and abroad.

Back in 2011, before the launch of the global Campaign to Stop Killer Robots, the US Department of Defense (DoD) released a roadmap for the “seamless integration” of drones and fully autonomous weapons systems, and the gradual reduction of human control and decision-making. More recently, the DoD’s 2018 artificial intelligence strategy describes plans to “accelerate the adoption of AI and the creation of a force fit for our time,” framing military AI as a tool to “preserve the peace and provide security.”

The US has since been heavily invested in developing LAWs, with major contractors like Lockheed Martin, Boeing and Raytheon presenting a sweeping picture of automated warfare—alongside mission planning programs such as BAE Systems’ semi-autonomous MARS.

The scope of autonomous weapons development goes beyond just building drones or BigDogs to include major investments in data processing capabilities. In 2017, the Pentagon’s Defense Advanced Research Projects Agency (DARPA) launched a massive four-year collaboration with Intel on a machine-learning and AI-based project called HIVE (Hierarchical Identify Verify & Exploit). The project has largely flown under the radar, attracting press mostly from financial or industry publications. HIVE is expected to surpass current hardware’s data processing capabilities by up to a thousand times, and the project also draws in MIT and Amazon Web Services to analyze data “generated by the Internet of Things, ever-expanding social networks, and future sensor networks.”

In 2018, one of the Pentagon’s closest collaborators, Google, was hit with a firestorm of protest from thousands of employees over a drone contract called Project Maven, which the company ultimately declined to renew. As Lee Fang reported for The Intercept, Google had employed gig workers to help the US Air Force build an artificial intelligence program for image recognition, so that drones could identify and engage targets on the battlefield.

The DoD has since adopted some rudimentary “ethical principles” for the use of AI, but this has clearly not dampened US ambitions. DARPA continues to push toward full autonomy by funding biomimicry studies of human brains and the Context Reasoning for Autonomous Teaming (CREATE) program, which aims to introduce “reasoning” into swarms of autonomous systems (or drones).

The impetus here, as the DoD has emphasized, is that Chinese and Russian development of AI technology “threaten[s] to erode [US] technological and operational advantages and destabilize the free and open international order.” It seems that the natural conclusion is to engage in an all-out arms race to protect the interests of a country that, with a defence budget of $732 billion in 2019, spends more on its military than the next ten countries combined (China, India and Russia included). The RAND Corporation, for example, has modeled a scenario of confrontation with Russia in the Baltics, set in 2030, which “finds NATO forces employing swarming AWS to suppress Russian air defense networks and key command and control nodes in Kaliningrad.”

But the fact that fully autonomous weapons systems are still in development is precisely one of the key challenges for introducing a ban on LAWs. Other technologies like landmines—which set a precedent for pursuing a ban outside of the UN framework—were already present in international military arsenals, making it easier to define and regulate the scope of a ban. Existing semi-autonomous weapons systems that are already used in military arsenals to guide human decision-making could also eventually be adapted for full autonomy.

With the Doomsday Clock sitting at 100 seconds to midnight, the question of a global ban on LAWs is more than ever enmeshed with the parallel issue of a nuclear arms race. The risk of nuclear escalation driven by AI decision-making is described in a recent report by the Stockholm International Peace Research Institute (SIPRI), an arms control research institute. The introduction of AI decision-making into nuclear arsenals “would be morally wrong,” SIPRI researchers conclude, and “would dramatically increase the risk of accidental or inadvertent nuclear escalation.”

Petr Topychkanov, Senior Researcher at SIPRI, explained that the use of AI in nuclear weapons systems may play a stabilizing role when these applications are used for nuclear safety and to inform human decision-making by providing intelligence or surveillance. But the story changes when AI-based systems replace human decision-making.

“The bottom line is that the decision to use nuclear weapons should remain with humans. It’s not about pushing the ‘red’ button only,” he said. “It’s relevant for any decisions, [such as] changing the alert status of nuclear weapons, because such changes could provoke the other side’s response, leading to nuclear escalation.”

The use of AI in combat platforms, like ground-based mobile systems or combat aircraft, may be difficult to define, control and verify, especially within semi-autonomous systems. As Topychkanov explained, “the line between autonomous and human control can be blurred.” This ambiguity poses an even greater challenge for defining the scope of a LAWs ban when the development of automated or semi-automated systems remains highly classified.

“In the case of AI applications for nuclear weapons-related systems, it’s hard to understand their status: whether they are at the stage of R&D and tests, or even operational deployment,” he said. “The area of nuclear weapons and strategic systems remains highly classified and untransparent for any confidence and transparency-building measures.”

Semi-autonomous X-47B Unmanned Combat Air Vehicle, developed by Northrop Grumman and DARPA. Photo courtesy of Northrop Grumman.

Outsourcing accountability

Regulation of R&D for military use of AI must also contend with the potential for civilian technology to be co-opted. “Diffuse technologies” can be initially developed for seemingly innocent civilian applications—such as sorting items for an online retailer, automating recruitment, or supporting medical and educational services—and then refined and applied to shooting missiles and killing people. As Project Ploughshares’ Branka Marijan explained, this could translate into the use of facial recognition for confirming targets, or image recognition for understanding the environment in which a weapons system operates.

Canada already contributes public funding to cross-border development of diffuse technologies through the Canada–Israel Industrial Research and Development Foundation (CIIRDF), which brokers partnerships between Canadian and Israeli research institutes and high-tech firms in the private sector. The CIIRDF claims that “projects or technologies that may have military/non-peaceful applications are not eligible,” as its funding focuses on medical, educational or other civic technologies.

Such claims of “peaceful applications,” however, gloss over the embedded nature of research institutes in Israel, where there is no separation between civil society and the military. As a result, defence companies like Rafael and Elbit Systems benefit from the capabilities of technologies that, with the help of Canadian tax dollars, may be initially developed for civilian purposes.

Israel is indeed one of the leading nations investing in military applications of AI and lethal autonomous weapons, including Harpy drones developed by Israel Aerospace Industries, semi-autonomous ground vehicles developed by Roboteam, and Skylord drones, developed by the IDF and the Israeli company XTEND, which were recently commissioned by the US military as part of its “counterterrorism” operations.

The military R&D environment in Canada has itself increasingly enabled private companies to exploit the current lack of regulation around LAWs development by dipping into the public coffers. Some of this key research is conducted through the Innovation for Defence Excellence and Security (IDEaS) program, which has committed $1.6 billion over 20 years for private or institutional partners to develop military technologies.

Predictably, there has been no critique of the federal budget subsidizing the development of military technology by private corporations.

One DND contractor working on military applications of AI is Datametrex, a Toronto-based company that has benefited from public funding through the IDEaS program to develop an AI-based “propaganda filter” for Canadian social media as an information warfare strategy. Described by CEO Marshall Gunter as a “trusted solution provider within the US departments,” the company recently extended a contract with the US Air Force to develop “technologies specific to the human element of warfighting capability” with the Wright State Research Institute (WSRI).

Vancouver-based AerialX is another company that can boast a wholesome made-in-Canada contribution to LAWs arsenals. The company’s DroneBullet, a fully autonomous missile that uses an internal navigation system, is apparently “a favorite among Homeland Security, government agency, law enforcement and militaries worldwide.”

Xtract AI, a subsidiary owned entirely by Vancouver-based Patriot One Technologies, is another company that works across industries, from health care and human resources to military technology. The firm secured a contract with the DND worth just under a million dollars in February 2020 to improve soldiers’ situational awareness using AI and augmented reality with “information sharing across a decision network.” Xtract was awarded another contract in May for the development of computer vision technology for concealing and detecting soldiers and vehicles.

Meanwhile, the company advertises its image analysis technology for medical purposes and for detecting failures in transportation infrastructure, while its video analysis technology has been used to identify unwanted materials in recycling plants.

The DroneBullet by AerialX. Photo courtesy of AerialX.

But while some Canadian companies have aggressively pursued opportunities to develop military applications for AI in Canadian and US markets, others like Waterloo-based Clearpath Robotics and its subsidiary OTTO Motors have supported a ban on lethal autonomous weapons systems and campaigned for ethical guidelines around AI development. “[W]e all have a responsibility to define policy against lethal use of the technology,” said Clearpath CEO Matt Rendall in an international open letter to the UN.

From Canada’s hub of AI development, the Montreal R&D community has similarly advocated for a ban on LAWs and for urgent regulation of the military development of AI. Since the publication of the Montreal Declaration for a Responsible Development of Artificial Intelligence in 2017, world-renowned scientist Yoshua Bengio has been an outspoken opponent of lethal autonomous weapons development.

Now, seven years since the launch of the global Campaign to Stop Killer Robots, Bengio explains, “We are still very far from AI systems which could understand the social and moral context and consequences of life-and-death situations with high moral stakes, like in the act of killing a human being.”

“However, the responsibility for this rests squarely on humans who build, buy or deploy such systems or allow this to happen without protesting or intervening to avoid it, especially if they are in positions of power,” he added. “As object recognition technology becomes a commodity, as the world slides towards more aggressive stances between major players and more authoritarian-leaning governments, the potential for power-concentration and flouting of human rights using lethal autonomous robots increases in a scary way.”

How scary?

“Once you can identify and track a target person,” Bengio said, “you can see that you get very close to also being able to eliminate, in one form or another, your political opponents and political activists.”

Those who support the use of LAWs may actually be convinced that they are more ethical or progressive solutions to military problems. “There is also a great deal of techno-optimism and solutionism that comes from those who oppose a ban,” added Marijan. “They suggest that soldiers commit crimes and make mistakes, and that machines could improve on that.”

Calling the automation of warfare morally abject and dangerous for humanity’s welfare, Bengio added, “No current or foreseeable machine can understand humans, society and moral values at a level worth talking about when we discuss the kinds of moral doubts of a soldier about to press on the trigger.”

At the end of the day, when a human is in control of a killing machine, they can, at any moment and for any reason, choose not to fire. The replacement of human decision-making in a theatre of war is ultimately driven by a motivation to kill more efficiently—and to shift responsibility away from human actors. The sleek, technocratic doctrine of AI-augmented warfare leaves no room for an anti-war movement—from outside, or from within the military.

Canada is now faced with an opportunity to take real initiative on defining the scope of arms control around an emerging technology that is on the verge of being adopted into wider military use, and falling out of human control. To do so, it will be necessary to show independence from an unscrupulous ally that is besieged by its own manifest destiny—and thus show the kind of leadership and diplomacy that a committed civil society movement and one former Canadian Foreign Minister once demonstrated in a successful ban on landmines.

Lital Khaikin is an author and journalist based in Tiohtiá:ke (Montréal). She has published articles in Toward Freedom, Warscapes, Briarpatch, and the Media Co-op, and has appeared in literary publications like 3:AM Magazine, Berfrois, Tripwire, and Black Sun Lit’s “Vestiges” journal. She also runs The Green Violin, a slow-burning samizdat-style literary press for the free distribution of literary paraphernalia.
