Emerging AI governance for international security: The stakes and stakeholders of responsibility

Zoe Stanley-Lockman | 10 March 2021


This commentary looks first at the stakes of governing technology for peace, security, and defence, then identifies the stakeholders who have a responsibility to shape the emerging international technology order. With small states in mind, the commentary concludes by applying this framing to concrete policy challenges such as arms control, disarmament and non-proliferation, and adherence to international humanitarian law.


The more emerging technologies develop, the harder it becomes to control or change them. Take the rapid advancements in artificial intelligence (AI): open-source datasets have opened up entirely new research avenues for computer vision over the past decade, but scientific discoveries have been accompanied by non-trivial, harmful biases. Biotechnology has followed a similar trajectory: as the field crystallises, so do the real-world consequences of its risks.

In science, technology and society studies, this is called the Collingridge dilemma. Forty years ago, the academic David Collingridge wrote about the governance (initially termed ‘social control’) of emerging technologies, warning: “When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.” This must have seemed like a distant academic debate in the 1980s, precisely because the technologies we now rely on in daily life were not yet mature. But today, the Catch-22 situation that Collingridge described has become an all-too-real policy dilemma.

Governments and societies change slowly relative to the pace of technological development, making technology governance seem like a perpetual game of catch-up. One resulting question is: how can we do more than catch up – and actually anticipate change? The stakes of anticipatory governance for technologies like AI are high because they permeate all sectors of the economy, society – and, by extension, peace, security and defence.

What stakes?

Shaping the trajectory of AI and other emerging technologies is the responsible course of action to manage the risks associated with their development and use. Understanding technology as a means or process to serve human purposes means understanding that it is inherently tied to social structures. Societal values are embedded in technology as a result – most directly through practices like norms, standards, and regulations. The examples of such practices given here are far from comprehensive, but they illustrate how technology is linked to power and politics. The common thread running through them is that, applied to the international security environment, each of these tools is subject to intensifying competition.

Norms dictate accepted patterns of behaviour, as seen in United Nations attempts to prevent cyber attacks on critical infrastructure, or in companies acting as “norm entrepreneurs” to maintain the openness of research while limiting the negative consequences associated with its publication. Standards, which confer a first-mover advantage on those who set them, detail requirements on performance, safety, security, interoperability, and operating procedures. On this note, Chinese government influence in standardisation bodies, expected to be cemented in China Standards 2035, has sparked concern in Washington. As standardisation becomes a new battleground in Sino-U.S. competition, intra-European coordination becomes more urgent. Regulations, ranging from export controls on militarily sensitive items and defined technology chokepoints to the European Commission’s interest in creating a new liability mechanism for “high-risk” AI systems, are perhaps the most state-driven of these technology governance tools.

As is clear in the European Commission’s approach to AI legislation, identifying risk is important because emerging technologies are double-edged swords. The idea of so-called “dual-use” technologies (in other words, technologies that can be put to military or civilian use) is far from novel. But conventional control methods require that access and knowledge be concentrated in a small number of states, or at least that the inputs be limited and scarce enough to be effectively monitored.

For diffuse research fields like biotechnology and data science – even if change were deemed desirable – we are in Collingridge’s phase where change is expensive, difficult, and time-consuming. In addition, risks range from the insidious to the existential. While the diffusion of knowledge is a net positive, it also inevitably means that powerful technologies can be deployed in harmful ways when ethical design features or oversight mechanisms are not in place. Inaccurate predictions from biased AI systems disproportionately impact minorities, and can even lead to targeting the wrong individuals in theatres of war. Deepfakes stem from the same technology that may be used for drug discovery and other remarkable scientific achievements – but, when weaponised, they are used to harass individual women and in information operations against populations.

In a study on responsible innovation for international peace and security, SIPRI has noted that the risk of misuse or irresponsible use begins before the decision to develop a system, and is carried through design and development, deployment, and diffusion. Crucially, these risk vectors show that risk is distributed across the lifecycle of a technology. As the SIPRI authors argue, preventing risks before they materialise could be a partial antidote to 21st-century arms control issues. Beyond armaments, incentivising responsible innovation in alignment with democratic values is also poised to become the added value of like-minded states in the competitive environment in which companies and governments alike operate. In other words, it creates grounds for democratic technology cooperation in a techno-nationalistic system.

Which stakeholders?

The distribution of risk across a technology’s lifecycle is no small challenge because it requires looping in the host of stakeholders involved in – and responsible for – each aspect of the technology’s design, development, diffusion, and deployment. Stakeholders like international organisations, states, and the European Union have already been mentioned above. Yet, to meet such wide-ranging risks, anticipatory governance requires upstream involvement from other relevant entities such as the private sector, civil society, academia, and more nebulously defined research networks or communities of practice. Given the rate at which technologies converge to culminate in new inventions, no individual or entity can single-handedly manage risk and shape the trajectory of innovation to the benefit of humanity.

This means that to conduct responsible innovation – also called responsible research and innovation (RRI) in EU nomenclature – anticipatory, deliberative, and responsive governance can only be achieved by including these diverse perspectives, each with unique responsibilities. Including many stakeholders is also deemed necessary to ensure that social and cultural aspects are considered alongside technical ones. Responsible innovation not only entails safety and security – it goes further to consider moral obligations embedded in hard and soft law, and in human activity, to help keep the focus on impact and outcomes rather than inputs.

In practice, this requires a paradigm shift from liability to a broader emphasis on accountability and responsibility. In addition to the European Commission’s aforementioned attempts to marry these concepts, interested parties may look to other organisations with deep experience working with safety-critical and high-risk systems. At the national level, democratic governments, including their armed forces, are well situated to take on these responsibilities given that they are structured to adhere to strict legal processes and reduce risks. For example, Australian defence researchers and ethicists recently offered practical tools to manage ethical risks, including a checklist, a risk matrix, and a more formal documentation programme for high-risk AI systems. Such documentation practices, which require different parties to register their involvement in a system across its lifecycle, could apply equally to civilian or defence processes.


A number of middle powers and small states have increased their interest in shaping the international technology order, seeking to play an outsized role in international technology governance. In the past, coalitions of small states and civil society organisations have proven decisive in disarmament and non-proliferation initiatives. Ireland, for instance, was part of the “core group” of the Ottawa Process that led to the 1997 Anti-Personnel Mine Ban Convention. For such core groups to help establish the rules of the international technology order today, facilitating the shift in governance towards responsible innovation may be key, while managing technological risks that affect human rights and power dynamics in the international system is equally relevant.

It is not new to focus technology management on practices such as norms, standards, and regulations. But the interrelationships between governments and other crucial governance actors – in the private, public, academic, and civil society sectors – will be important in shaping innovation in alignment with core values. Weaving together coherent governance regimes, whereby actors understand their responsibilities in mitigating risk, is key to preventing negative consequences across the technology lifecycle. What is more, in today’s competitive international system, the earlier that attempts to shape the international technology order take place, the greater the opportunity to create democratic accountability.


Zoe Stanley-Lockman is an Associate Research Fellow in the Military Transformations Programme at the S. Rajaratnam School of International Studies in Singapore. She previously worked at the European Union Institute for Security Studies and specialises in military innovation and emerging technologies.

Authors’ views are their own and do not represent the official position of The Azure Forum.