Why the upcoming 2025 decision could redraw the rules of war and reshape military AI development
The UN's first vote on a legally binding autonomous weapons treaty in 2025 could redefine the role of AI in warfare—and expose deep global divisions on military ethics.

The rise of autonomous weapons—lethal systems capable of selecting and engaging targets without human intervention—has shifted from science fiction to battlefield reality. Known formally as LAWS (Lethal Autonomous Weapons Systems), these technologies are increasingly seen as among the most disruptive military developments of the 21st century.

Now, after years of debate and deadlock, the world is approaching a critical moment: the first UN vote on a legally binding LAWS treaty, expected in late 2025. If the vote passes, the resulting instrument would be the first international treaty to regulate autonomous weapons, setting a global precedent for how (or if) machines should be allowed to make life-and-death decisions in warfare.

This isn’t just about drones or battlefield AI. The vote will test the international community’s ability to govern AI-enabled weapons before they are deployed at scale, and it will expose deep divisions among leading powers, including the United States, China, and Europe, over the future of military autonomy.


What’s at Stake in the 2025 Vote?

The expected UN vote will consider whether to establish a legally binding international framework that governs or prohibits the development and use of autonomous weapons. This could include:

  • Bans or restrictions on fully autonomous weapons that operate without meaningful human control.
  • Requirements for human-in-the-loop or human-on-the-loop decision-making.
  • Transparency obligations and oversight of AI systems used in lethal operations.
  • Mandates for accountability, ensuring humans—not algorithms—remain responsible for harm.

Proponents of regulation, including a coalition of UN member states, human rights organizations, and AI researchers, argue that:

  • Machines should never have the legal authority to decide to kill.
  • The deployment of LAWS without regulation could destabilize global security, especially as arms races accelerate.
  • Autonomous systems risk algorithmic bias, misidentification, or escalation based on faulty data.

Opponents, largely concentrated among major military powers and AI-exporting countries, argue that:

  • A ban or treaty would stifle technological innovation.
  • States have the right to develop military tools necessary for self-defense.
  • Many existing systems already blur the line, making definitions difficult to enforce.

Where the Major Players Stand

United States

The U.S. has historically resisted calls for a legally binding ban on LAWS, preferring non-binding norms or voluntary guidelines. Its argument: autonomous weapons can reduce collateral damage, increase precision, and support deterrence when properly used. The Pentagon’s Directive 3000.09 already provides a framework for human oversight, but Washington is wary of limiting tools that might provide a tactical edge over near-peer rivals.

That said, U.S. officials have begun signaling a more nuanced stance—acknowledging the need for human accountability, especially under growing pressure from allies and international NGOs. The upcoming vote may force a clearer U.S. position.

China

China presents a more ambiguous posture. While it has expressed support for a ban on the use of fully autonomous weapons, it continues to invest heavily in military AI, including drone swarms, loitering munitions, and surveillance-guided strike systems.

Beijing’s strategy appears to be diplomatic hedging: maintaining room for continued development while signaling moral alignment with arms control advocates. Its position on the vote will likely balance global image management and domestic strategic interests.

European Union

Europe has been the strongest voice in favor of a legally binding instrument. Nations like Germany, Austria, the Netherlands, and France have called for frameworks that ensure “meaningful human control” over all lethal decisions.

The EU’s push for global norms reflects its broader strategy of regulating disruptive technologies before their harms escalate. However, internal divisions remain—particularly over dual-use technologies and defense innovation funding. Still, Europe is expected to vote strongly in favor of a binding treaty.


Challenges Ahead: Definitions, Enforcement, and Loopholes

Even if a treaty is passed, much will depend on how it defines key terms:

  • What qualifies as a "lethal autonomous weapon"?
  • How is "meaningful human control" measured or verified?
  • Can existing semi-autonomous systems fall under exemption clauses?

Critics warn that vague language could allow for compliance theater, where states claim to uphold the treaty while continuing development under new guises.

There’s also the problem of non-signatories. As with past arms control treaties, enforcement becomes difficult if major players refuse to join or later withdraw, as the United States has done with earlier agreements. This is especially troubling given the speed and secrecy of AI arms development.


Why This Vote Matters

The 2025 UN vote could be a turning point in the governance of AI-powered warfare. It will:

  • Establish whether global consensus is possible before full-scale deployment.
  • Set precedent for future treaties governing AI, robotics, and autonomous decision-making.
  • Reveal how international law adapts to the age of intelligent machines.

While some see autonomous weapons as inevitable, others believe that preemptive global agreements are the only way to avoid a world where algorithmic warfare becomes the norm.

In either case, the vote will signal where the world stands on one of the most profound ethical questions of our time: Should machines be allowed to make the ultimate decision—who lives and who dies?