AI on the Battlefield: A Call for Global Action

AI is emerging as a crucial weapon in the Russian-Ukrainian conflict. It serves as a powerful tool on the battlefield in Ukraine's fight for democracy and sovereignty, enhancing predictive analytics, drone surveillance, and decision-making capabilities. Russia employs similar tactics in its military operations against Ukraine, using AI to develop its nuclear strategy and to manipulate civilians through propaganda. The consequences of such AI misuse extend far beyond Ukraine's borders. Without an immediate international response and the establishment of regulations on AI use in modern warfare, global peace and security will be further jeopardized.

Since February 24, 2022, Russia has been waging a full-scale war of aggression against Ukraine, shelling cities, destroying infrastructure, and causing mounting casualties. To counter their adversary effectively and identify enemy targets accurately, Ukrainians are using artificial intelligence to advance drone technology. Platforms introduced by the Ministry of Defense of Ukraine, SemanticForce and Avengers, enable effective tracking of enemy movements and the identification of 12,000 units of enemy vehicles and equipment each week. Implementing international regulations could give Ukraine a legal basis for receiving additional AI assistance from its allies and a clearer understanding of which combat strategies remain compliant.

The urgency of AI regulation is not driven solely by Ukraine’s defensive use of AI; Russia’s weaponization of AI has expanded into propaganda that undermines both Ukraine’s security and global democratic values. OpenAI reported at least 17 campaigns under Operation DoppelGänger that used generative AI to create disinformation, discredit Ukraine (labeling it a failed, corrupt, and Nazi state), spread Kremlin narratives, and instill fear across broader Europe. In March 2022, Russia created and distributed a deepfake in which President Volodymyr Zelenskyi allegedly announced Ukraine’s surrender. This deepfake, along with the continued spread of false information about Ukrainian losses, aims to undermine the trust of Western allies and reduce financial aid.

Compounding the dangers of generative AI is Russia’s nuclear arsenal of over 5,500 confirmed warheads. The potential for AI to automate decision-making in high-stakes military situations raises the risk of miscalculation and unintended escalation. If AI systems begin to guide decisions involving nuclear capabilities, the margin for error becomes perilously small. As the Kremlin continues to exploit technology to compensate for manpower losses, the global implications of such actions demand urgent international regulation.

In recent years, global leaders, from President Joe Biden to Pope Francis, have raised alarms about the ethical implications of AI. Elon Musk and Steve Wozniak have called for a pause in the development of new AI systems, pointing to the serious risks of AI that matches human intelligence. These concerns have already received attention. The UN has called for a ban on autonomous combat systems, and the European Union has unveiled a digital strategy emphasizing the need for clear policies to regulate the use of drones and AI in both civilian and military contexts, ensuring strict boundaries to prevent misuse in warfare or surveillance. Furthermore, the US government is negotiating a deal with China to ban AI use in nuclear arsenals.

However, there is no international treaty regulating the military use of AI. A key step should be banning fully autonomous weapons, or “killer robots,” through an international treaty similar to the one that already bans chemical weapons. AI should also be restricted in nuclear command and control systems to avoid increasing the already high risk of error. Existing agreements like the Strategic Arms Reduction Treaty (START) or the Nuclear Non-Proliferation Treaty (NPT) could be expanded to cover AI in nuclear systems. Additionally, a global ethical framework, similar to the Geneva Conventions, should prohibit the use of AI in attacks on civilians, disinformation, or mass surveillance.

The example of the Russian-Ukrainian war is a reminder of both the dangers and the opportunities AI technologies bring. Although they have the potential to greatly improve the effectiveness of military operations, their potential benefits must be carefully balanced against the risks of misuse. We must demand international norms to ensure global stability and to avoid following in the footsteps of Oppenheimer.

Olha Burdeina is a sophomore at Brown University concentrating in International and Public Affairs. She is a staff writer for the Brown Undergraduate Law Review and can be contacted at olha_burdeina@brown.edu.

Yani Ince is a senior at Brown University concentrating in History and Political Science. She is a blog editor for the Brown Undergraduate Law Review. She can be reached at ianthe_ince@brown.edu.