Autonomous Weapons and the Abdication of Moral Agency in the Kill Chain

The Ethical Imperative for Maintaining Human Agency in AI-Enabled Weapons

As military technology continues to put greater distance between combatants, Artificial Intelligence (AI)-enabled weapons systems threaten an unprecedented rupture in the moral fabric of warfare. Where earlier technologies extended human reach while leaving humans in command, AI risks removing human agency from killing decisions entirely. There is therefore an ethical imperative to maintain meaningful human control over critical engagement decisions. AI should enhance every stage of the kinetic kill chain [Detect, Track, Classify, Localize, Engage, Assess]; the “Engage” decision, however, must remain under meaningful human control. Countries must embrace the technological revolution brought about by AI while upholding the moral imperative that life-and-death decisions remain within human moral agency.
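To make that division of labor concrete, consider a minimal sketch in Python of a kill-chain pipeline in which every stage except Engage may be automated. All function and class names here are hypothetical illustrations, not descriptions of any fielded system; the point is the structure, in which the only code path to lethal action runs through an explicit human decision.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    AUTHORIZE = auto()
    DENY = auto()

@dataclass
class Track:
    track_id: str
    classification: str
    location: tuple  # (lat, lon) - illustrative only
    confidence: float

def detect_and_track(sensor_feed):
    """Detect/Track: AI may fully automate these stages (stubbed here)."""
    return [Track("T-001", "unknown", (0.0, 0.0), 0.97)]

def classify_and_localize(track):
    """Classify/Localize: AI may fully automate these stages (stubbed here)."""
    track.classification = "candidate-target"
    return track

def human_engage_decision(track):
    """Engage gate: only a human moral agent may authorize lethal force.
    A console prompt stands in for what would be a deliberate command
    authority acting with full situational context."""
    answer = input(f"Authorize engagement of {track.track_id} "
                   f"({track.classification}, conf={track.confidence:.2f})? [y/N] ")
    return Decision.AUTHORIZE if answer.strip().lower() == "y" else Decision.DENY

def engage(track):
    print(f"Engagement of {track.track_id} executed under human authority.")

def assess(track):
    """Assess: AI may again assist post-strike assessment (stubbed here)."""
    print(f"Assessing effects on {track.track_id}...")

def kill_chain(sensor_feed):
    for track in detect_and_track(sensor_feed):
        track = classify_and_localize(track)
        # Structural guarantee: no code path reaches engage() without an
        # explicit human decision returned by human_engage_decision().
        if human_engage_decision(track) is Decision.AUTHORIZE:
            engage(track)
            assess(track)
        else:
            print(f"{track.track_id}: engagement denied; no action taken.")

if __name__ == "__main__":
    kill_chain(sensor_feed=None)
```

The design point is that the human gate is structural, not cosmetic: it is not a confirmation click appended to an automated pipeline, but the sole route to lethal action.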

Warfare ethics have evolved from ancient 'might makes right' through Just War Theory to the modern Geneva Conventions. Just War Theory established that legitimate warfare requires both just cause for conflict and moral conduct within it. These principles depend fundamentally on human moral agents making reasoned decisions about life and death. AI weapons systems challenge this foundation by potentially removing human rational judgment from the very decisions that Just War Theory demands remain within human moral authority. 

Autonomous weapons represent an escalation of a fundamental problem: delegating life-and-death discrimination to non-moral agents. Societies may have tolerated such delegation at small scales, as with victim-activated landmines, but AI weapons threaten to industrialize indiscriminate killing.

Most AI-enabled military systems remain in research and development, but successful demonstrations make the adoption of strategic frameworks urgent. In 2017, to address the deluge of photos and videos generated by drones over Iraq and Afghanistan, the United States launched Project Maven, a large-scale data-processing effort that applied computer vision to detect objects of military interest in surveillance footage. In 2021, the Royal Navy, Amazon Web Services, and Microsoft demonstrated StormCloud, in which command-and-control software monitored a designated area, decided which drones should fly where, identified objects on the ground, and suggested which weapon should strike which target.[i] Both systems highlight the gains to be achieved by enhancing stages of the kill chain while preserving human decision-making.

The risks of abdication become clear in controversial cases such as Israel’s use of Lavender. It is alleged that Lavender has been used to identify thousands of Palestinian targets, with human operators giving only cursory scrutiny to the system’s output before ordering strikes. Although the IDF countered that Lavender was “simply a database whose purpose is to cross-reference intelligence sources,”[ii] the episode illustrates the slippery slope: how easily agency in the kill chain can be diluted or abdicated.

AI differs fundamentally from past technological advances because it enables warfare at scales and speeds that can outrun meaningful human oversight. Unlike previous weapons, which enhanced human capabilities while preserving human decision-making, autonomous systems threaten to eliminate moral agency from the most consequential human decisions. This is a strategic challenge requiring international cooperation and the deliberate adoption of frameworks that prevent the engineering of systems with moral consequences but no moral accountability.

Modern US military doctrine continues to reflect principles rooted in Just War Theory, which has guided warfare ethics since Augustine in late antiquity and Aquinas in the medieval period. Just War Theory establishes both when war is justified (jus ad bellum) and how it must be conducted (jus in bello). The conduct of war, jus in bello, rests on fundamentally deontological principles: absolute prohibitions against targeting civilians and requirements to treat even enemies with human dignity.

Immanuel Kant’s moral philosophy provides the philosophical foundation for these deontological requirements. In his Groundwork for the Metaphysics of Morals (1785), Kant established that moral action requires three essential elements: autonomy of the will, moral agency, and adherence to the categorical imperative.[iii] Autonomy of the will requires that moral decisions emerge from rational deliberation, not programmed responses. Moral agency demands that a rational being make life-and-death judgments. The categorical imperative requires that the maxims of our actions be universalizable, and a world in which machines made every killing decision could not be universalized without eliminating moral responsibility altogether. Traditional weapons, regardless of sophistication, preserve this moral structure; AI weapons systems that autonomously select and engage targets sever the connection between rational moral agents and lethal decisions.

Three fundamental limitations challenge the implementation of this deontological framework: competitive disadvantage, the risk of automation bias, and definitional ambiguity in execution. If an adversary develops fully autonomous kill chains, the relative slowness of human-gated decision-making could cede a critical time advantage. Automation bias, the tendency to accept machine recommendations uncritically, can likewise produce an inadvertent surrender of moral agency. Article 36, an advocacy group, aptly observed: “commanders who manually approve individual targets suggested by AI tools ‘without cognitive clarity or awareness’...have abdicated moral responsibility to a machine.”[iv] Furthermore, defining what constitutes “meaningful” human control presents significant implementation challenges. Despite these limitations, strategic leaders must address them through coordinated policy action.

Strategic leaders must pursue international AI warfare agreements modeled on nuclear non-proliferation regimes, establishing common ethical standards with allies while recognizing that adversaries may not accept similar constraints. An international accountability body, analogous to the IAEA,[v] should be established to verify that countries maintain human agency in the kill chain as they adopt AI. Such an approach creates diplomatic opportunities and manages competitive risk, though it demands a careful strategic balance between maintaining technological advantage and preserving moral leadership.

Military organizations must also evolve to preserve warrior ethos and human accountability in increasingly automated environments. This requires developing new competencies in human-machine teaming while preventing the erosion of critical decision-making skills through over-reliance on AI recommendations. Leaders must ensure that command responsibility remains meaningful even when assisted by algorithmic analysis. Continued lines of effort, such as the key provisions of the “Block Nuclear Launch by Autonomous Artificial Intelligence Act” incorporated into legislation signed in 2024, are a good first step toward establishing the new norm.[vi]

Defense resource allocation must fund not just technological acquisition but also the institutional infrastructure necessary for ethical AI adoption, including the specialized training, oversight mechanisms, audit systems, and human capital required to maintain meaningful control over autonomous systems. Strategic leaders who fail to address these interconnected challenges risk either falling behind technologically or surrendering moral agency; both outcomes are unacceptable.

The emergence of AI weapons systems presents strategic leaders with a critical choice: embrace technological advancement while preserving human moral agency, or risk surrendering the ethical foundations that legitimize military force. The deontological imperative to maintain human command authority over engagement decisions provides a clear framework for this challenge. Success requires coordinated international agreements, evolved military institutions, and targeted resource allocation, all implemented rapidly before autonomous systems become entrenched in military doctrine. The stakes are unprecedented: failure to act decisively risks not merely tactical disadvantage but the fundamental erosion of moral responsibility in warfare itself.


[i] The Economist. “How AI Is Changing Warfare.” 2024.

[ii] McKernan, Bethan, and Harry Davies. “‘The Machine Did It Coldly’: Israel Used AI to Identify 37,000 Hamas Targets.” The Guardian, April 3, 2024.

[iii] Kant, Immanuel. Groundwork for the Metaphysics of Morals. Edited and translated by Allen W. Wood. New Haven: Yale University Press, 2002. P. 58.

[iv] The Economist. “How AI Is Changing Warfare.” 2024.

[v] IAEA. “IAEA Mission Statement.” Accessed August 19, 2025.

[vi] Markey, Edward, and Ted Lieu. “Markey, Lieu Applaud Inclusion of ‘Human in the Loop’ Nuclear Launch Safeguard.” Markey.Senate.Gov, December 18, 2024.

The views expressed in all published pieces are those of the contributors alone. They do not represent the position of any government agency, military branch, intelligence organization, or the editorial team of The Ardent Mind. Nothing published on this site constitutes official policy, legal advice, or an authorized disclosure of any kind. Content is published for the purposes of policy analysis and public debate.