Austria in the 21st Century's Oppenheimer Moment
In modern warfare, a revolutionary and unsettling entity is emerging: Lethal Autonomous Weapons Systems (LAWS). These machines are capable of independently selecting and engaging targets, and they have the potential to rewrite the rules of engagement and challenge our understanding of ethics, responsibility, and International Humanitarian Law. By integrating cutting-edge artificial intelligence, machine learning, and advanced sensor technologies, LAWS promise to enhance military effectiveness by identifying and neutralizing threats without direct human intervention. States are eager to pursue these projects, and the battlefields of the future loom with both promise and peril.
International humanitarian law and the norms of the arms control community, fractured as they may be, have, at the very least, a strong foundation. A core understanding is that state actions may be traced back to individual decision-makers or policies that can be understood, analyzed, and held accountable for violations of international agreements. That is, beyond issues of proliferation and development, the legitimate use of any weapons system is predicated on the norms of jus in bello, the accepted regulations on parties' conduct in conflict, such as distinction, proportionality, and reasonable precautions. Today, a twenty-first-century "Oppenheimer Moment" in the form of LAWS threatens these core assumptions.
The proliferation of AI technology is already clear in modern conflicts. Ukraine has utilized complex artificial intelligence systems to interpret massive amounts of intelligence, surveillance, and reconnaissance data, allowing for strategic and tactical information processing at a scale that would otherwise be impossible. In April 2024, reporting revealed Israel's use of the Lavender targeting system, which identified 37,000 potential militants without any human input. One Israeli intelligence officer stated, "I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval." Unit 8200, one of Israel's premier strategic intelligence agencies, candidly admitted that the Lavender system has been programmed with pre-authorized allowances for the number of possible civilian deaths in any of its planned attacks. For now, these systems operate only at the strategic level; the ultimate decision about when to take a human life remains within human control, but this may not last. The most concerning application of AI is at the tactical level, through LAWS. While no agreed-upon definition exists, the term refers to any system that can execute missions and apply lethal force without human control.
LAWS can dramatically increase the efficiency and effectiveness of any military action, replacing human decision-making with faster, more accurate computing power. From a tactical perspective, LAWS have much to offer; it is not hard to imagine why a general would choose to deploy an autonomous robot unburdened by fear, fatigue, or any need for training. However, leaders and scholars have already identified a plethora of problems. First, LAWS remove many of the costs, both financial and political, of initiating and prolonging conflict. Without boots on the ground, elected officials face dramatically lower public pressure when engaging in conflict, which may significantly lower the threshold for initiating future wars.
Additionally, by removing human control and placing the fate of a mission in the hands of an algorithm, these systems will disrupt established chains of command. This shock leaves international organizations nearly unable to hold states accountable for their actions, at least without a significant legal shift to account for LAWS. After all, it is difficult to assign blame for the actions of an AI algorithm, whereas traditional missions can always be traced to one or more human decision-makers. In an autonomous world, decision-makers may have little say in the actions of their forces, and states will have few means to predict enemy attacks, drastically increasing the unpredictability of war.
There is also a risk of LAWS falling into the hands of non-state actors, empowering them with advanced capabilities far more effective than conventional arms. On a macro level, international organizations have little legal framework through which to interpret the use of autonomous weapons, and these questions remain the subject of ongoing debate. The future evolution of warfare will undoubtedly hinge upon the continued advancement of AI, yet as the technology rapidly develops, the international community has failed to address it effectively. As it stands, no comprehensive international agreement binds states' decision-making regarding LAWS.
That is not to say that no such efforts are being made. In international debates, one country has positioned itself at the forefront of regulating LAWS. Austria, a militarily neutral country, has long been an active leader in arms control discussions: it was a key negotiator of the 2013 Arms Trade Treaty (ATT), advocated in 2010 for a Nuclear Weapon-Free Zone in Europe, and led the push for a UN vote on the Treaty on the Prohibition of Nuclear Weapons (TPNW). Today, Austria is deploying similar strategies to address the rapid development of autonomous weapons systems.
As early as 2013, Austria called for global discussions on the integration of defense technology with AI, and in 2018 it called for a comprehensive ban on any weapons system that does not retain human control over the application of lethal force. Both efforts, while important steps, failed to produce any agreements.
Two Austrian officials in particular have led the charge: Alexander Schallenberg, Federal Minister for European and International Affairs, and Alexander Kmentt, director of disarmament, arms control, and nonproliferation at the Austrian Foreign Ministry. Schallenberg has consistently demanded that "the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines."
States almost unanimously agree that a cohesive international framework is needed to address the development of LAWS, and that without one, global security is at risk. However, countries that are otherwise aligned in their intentions have not yet agreed on one specific approach.
States have broken into two conflicting blocs. The first, led by the U.S., urges states to self-police their own use of artificial intelligence under a set of agreed-upon values. Aligned states such as India and China likewise reject any kind of binding instrument. These states, which have a greater economic and strategic interest in loose regulation of emerging technologies, have been active participants in most conversations, yet still refuse to negotiate with more progressive states. In 2023, the United States released its "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy," which outlines a set of ten "best practices" for states deploying AI; however, it declares only that "states should take appropriate measures" and at no point sets out a protocol for the application or enforcement of its guidelines.
The second bloc, led by Austria, is pushing for negotiations on a legally binding mechanism to regulate emerging technologies. In April 2024, the Austrian Federal Ministry for European and International Affairs continued its international engagement by organizing a first-of-its-kind international conference on the regulation of LAWS, titled "Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation." Previous conferences had been mostly regional; this one brought delegates from 144 states together with academics, technical experts, NGO representatives, defense officials, and humanitarian organizations. The conference aimed to set the stage for an eventual UN General Assembly vote on a legally binding mechanism. Panels discussed the current state of development, worked to create universal definitions, and assessed risk areas for LAWS.
At the end of the conference, Austria maintained its goal "to work with urgency and with all interested stakeholders for an international legal instrument to regulate autonomous weapons systems." However, it also made modest concessions, most notably agreeing to further discussions centered on a two-tiered solution in which certain technologies are outlawed and others are regulated. This was a strategic move as Austria prepares for a formal UN General Assembly vote on a legally binding instrument at this year's general session. States that stand to gain from this new high-tech arms race will continue to push back. Austria and its many non-aligned allies will continue to fight for the combatants of the future, who ask one simple question: if I die in service to my state, will it be a machine or a man that pulls the trigger?