Strategic Insight 011/2023
04 May 2023
Russian military debates AI development and use
Over the past decade, artificial intelligence (AI) has moved front and center in the Russian military’s thinking about new concepts and technologies for current and future wars. Following the invasion of Ukraine, and the imposition of IT and high-tech sanctions on the Russian Federation, the Russian government and the Russian Ministry of Defense (MOD) are seeking ways to adapt to the resulting environment. The MOD treats AI as a decision-making tool and a key element in managing uncrewed technologies, with human-in-the-loop control as the guiding approach to research and development.
Defining the importance of AI — Russian government’s sentiment
Statements made by Russian President Vladimir Putin and his government to communicate the importance of AI reflect the national aim to become an AI RDTE&F (research, development, testing, evaluation and fielding) leader – an ambition that may well be thwarted going forward by the sanctions that have hit the Russian IT and high-tech industry during the Ukraine war. In his speeches, the Russian head of state describes AI as pivotal to Russia’s future, arguing that the artificial intelligence competition among states is fierce, and that Russia’s place in the world, along with the nation’s sovereignty and security, depends on the results of AI research and development. In the wake of the invasion of Ukraine, this emphasis on national sovereignty has evolved into arguments for Russia’s “technological sovereignty” – a concept best described as diminished dependence on imported Western technology and growing reliance on the domestic ability to produce key high-tech systems of importance to strategic industries, such as AI.
This sentiment indicates the Russian leadership’s recognition that it is engaged in a technological competition with the other leading powers for its very survival, with AI playing a key part as a defense and enforcement mechanism. In 2021, MOD officials noted that global digitalization trends were turning AI technologies into instruments of geopolitical influence and confrontation. In this environment, the state whose development model best meets future challenges will be able to adapt its military and government structures to modern conflicts. Likewise, MOD officials argued that all leading countries, particularly the United States, viewed AI as a “means of achieving global domination” – and that Russia should therefore take a leading position in artificial intelligence development to ensure its national security.
Russian society’s perceived vulnerability to Western influence amplifies efforts to build out capabilities in the information environment and cyberspace. As a nation that considers itself on the defensive against persistent Western efforts to “hack” into key domestic military, socio-cultural and economic targets, Russia views advanced technologies like AI for monitoring, evaluation and interdiction as a key mechanism to safeguard the country from such perceived encroachment. Russian MOD officials also recognize that AI, as a powerful information tool, could target societies and political establishments by shaping the content, speed and volume of data and information delivery and perception.
Russian military establishment on AI use
Public statements on artificial intelligence technologies in Russian military systems and processes emphasize AI as a decision-making tool that collects and analyses vast quantities of data for final human judgment. At the same time, Russian military thought also holds that the development of digital processes in today’s military should naturally progress to the widespread adoption and use of AI systems that can perform creative functions traditionally considered a human prerogative. This eventual “intellectualization” of weapons, military tactics and concepts could shift the human role toward primarily monitoring the combat environment, conducting comprehensive analysis of decisions made by robotic systems, and overseeing the issuance of attack commands.
Currently, the Russian military establishment envisions that humans will remain “in the loop” as the decision-makers for military AI systems and processes, noting that introducing AI technologies into military systems should not replace humans but rather expand their ability to obtain information, speed up and sharpen data processing and transmission, accelerate decision-making, and improve the operation of control systems. MOD officials likewise note that the final decision always rests with the human commander when dealing with an AI-enabled system. However, it may be difficult to foresee how the ever-increasing speed of decision-making in coming multi-domain wars will affect a human operator’s ability to make the correct final decision. Some Russian experts think that their military intends to evolve fully from human to machine decision-making, noting that decision-making in combat operations would eventually be carried out by robotic systems.
Within the MOD, using AI in autonomous, uncrewed and robotic systems is one of the most visible aspects of the country’s high-tech RDTE&F. This technology is viewed as a critical mission multiplier that should ultimately replace human fighters in dangerous assignments and situations. In an often-quoted 2020 statement, the Deputy Director of the Advanced Research Foundation (ARF, Russia’s DARPA-like organization) remarked that human fighters will eventually be supplanted by military robots that can act faster, more accurately and more selectively than people. Broadly speaking, Russian military thinkers note that AI technologies have already found applications in today’s military, such as decision support systems, natural language processing, data mining, drone swarm development, and pattern recognition tools and algorithms. Going forward, MOD priorities for military development include introducing AI elements into drone control systems, swarm development and manned-unmanned teaming, and integrating these systems into a common operating environment with manned aircraft. AI tests to enable multiple ground and maritime robotic systems are also reportedly taking place across the Russian military industry and services. Just as important is the MOD’s ongoing research to foresee the consequences of using AI in combat, to predict the problems expected in future conflicts, and to promote damage-minimization and risk-management systems across autonomous and manned platforms.
Ethics and Human-in-the-Loop
Considering the psychological, mental and other limitations on human decision-makers’ abilities, as well as the currently modest role of automation in decision-making, Russian MOD researchers see a growing need for “intelligent” systems that plan and manage daily and combat activities by transforming unstructured data into knowledge ready for immediate use. In their deliberations, goal setting today remains a human prerogative, but as AI systems become ever more scalable, the volume of “human” functions transferred to them could also increase.
Russian military academics and experts confront an ethical dilemma over the current and future use of military artificial intelligence by debating whether AI can simulate human commanders’ decision-making in combat. To some in the MOD, neural network training is carried out on a sample limited by the knowledge of experts and by the amount of information in official documents and manuals, leading to the conclusion that AI cannot copy actual human thinking. Training at this level can lead the artificial intelligence to adopt solutions that never go beyond the boundaries of the training sample itself, while the ingenious, resourceful, creative and high-risk solutions usually associated with human thinking in critical and stressful situations would be ignored or simply go unlearned. This means there could be situations on the battlefield in which the AI, unlike the human commander, would not be able to make an actual “intelligent” decision. To the MOD, the human, as the “biological prototype” of an artificial neural network, arrives at this knowledge throughout his or her life under the continuous influence of external and internal conditions and factors. It is this large volume of conditions and actions – the entire lifetime of experience – that “does not fit into any computer program in principle”.
Therefore, if human experience is a factor that no advanced intellectual system can replicate, then AI would remain limited in its capability, even if it could offer quick and precise – but narrow – calculations and conclusions. While these deliberations point to a potential future in which such technology becomes possible, today’s military combat still relies heavily on human-centered decisions and actions, with soldiers largely operating legacy systems designed decades ago – Russia’s current military performance in Ukraine is part of this trend.
When it comes to AI’s practical application in war, Russian military experts envision the use of so-called “weak” (or “narrow”) specialized artificial intelligence built for specific tasks and missions. To that point, in July 2022 the ERA Technopolis hosted a discussion on robotics, concentrating on technical vision, pattern recognition and the use of AI in weapons development. The MOD-directed discussion covered the application of AI in robotic and information systems and the improvement of information systems for processing large data sets, with the main emphasis placed on the practical use and introduction of such technologies in the military.
In public statements made prior to the Ukraine invasion, Russian MOD officials also tried to address the shortage of specialized AI skills, noting that projects involving AI require the mandatory expertise of specialized government and military departments. Insufficient technical and scientific competencies, and the unpreparedness of key decision-makers to implement systems with AI elements, pose a serious management threat to Russia today. This situation may be significantly exacerbated in the near future by the aftershocks of sanctions: with many IT workers and developers leaving Russia after February 2022, the Russian government has been prompted to secure a pipeline of high-tech workers and university graduates for domestic enterprises. The uncertainty over the domestic IT market going forward may affect the development of the military-oriented workforce as well, even as actual data on the MOD’s AI workforce is likely to remain classified for the near future.
The evolving debate on military AI in Russia
Today, decision-making capacity is a central element of military AI in the ongoing Russian RDTE&F, and the debate over human control in AI systems will probably dominate such discussions for years to come. The one official public MOD document that alludes to the use of AI across the military was released in March 2023 and enshrines human control over many aspects of artificial intelligence in military use. The Russian MOD is also establishing institutions to centralize existing research efforts, technologies and human assets – in September 2022, the MOD inaugurated the Artificial Intelligence Department, tasked with R&D and acquisition coordination, similar to the US DoD’s Joint Artificial Intelligence Center (JAIC). Its official role as the main AI node in the military’s ecosystem points to the authority to manage technologies, efforts and lessons, which may include prioritizing and funding the efforts most useful in combat. Additionally, AI research is carried out as a “cross-cutting” topic across Russian military institutions, such as the AI laboratory at the ERA Technopolis that develops, trains and tests neural networks. The Russian military is also learning from the nation’s civilian AI developments to avoid duplicating research and development efforts while benefiting from available technology solutions.
For now, public writing among Russian military academics is relatively muted on the impact of rapidly evolving technologies like generative AI and ChatGPT on economies, societies and, potentially, military operations. Likewise, rapid technical evolutions in neural network training are closely studied, but not openly discussed, within the Russian military R&D ecosystem. Some of the detailed arguments cited here are therefore probably being updated in a more classified environment among Russian military institutions, which treat the monitoring of global AI developments as a top priority.
The Russian military AI RDTE&F can be broadly divided into pre- and post-February 2022 phases. The phase before the war in Ukraine was characterized by relatively steady growth in initiatives and by diversity in debate and opinions on the role of military AI. Because some of this work is relatively opaque and classified, the public statements cited here created the impression of a Russian military in technological ascendancy, building on the military reforms launched in 2010–2011. The post-February 2022 phase is unfolding now, with the Russian high-tech sector affected by global sanctions and the government seeking ways to replace the physical, technical and intellectual assets lost since the Ukraine invasion. Ultimately, Russia’s military performance in Ukraine and the Ukrainian military’s adoption of new and emerging technologies may likewise inform the domestic debate on the use of AI in combat.
Samuel Bendett is an analyst with CNA’s Russia Studies Program and an adjunct senior fellow with the Center for New American Security’s Technology and National Security Program. Previously he worked at the National Defense University. The views expressed here are his own.
The Azure Forum is a nonpartisan, independent research organisation. In all instances, the Azure Forum retains independence over its research and editorial discretion with respect to outputs, reports, and recommendations. The Azure Forum does not take specific policy positions. Accordingly, all author views should be understood to be solely those of the author(s).
The Azure Forum for Contemporary Security Strategy is Ireland’s first and only independent think tank dedicated to providing recommendations on peace, security and defence. As Ireland’s first national security research institute, the Forum aims to contribute to national and international security analysis and strategic studies for a more peaceful, secure, resilient and prosperous future nationally and globally at a time of emerging global risk.