
The Defense Department is aware of how quickly artificial intelligence and autonomous weapons are becoming a part of war. As a result, it is updating its autonomous weapons directive for the first time in more than a decade to guide the development of new systems.

The new framework is laid out in the updated directive, “Autonomy in Weapon Systems.” Effective as of Wednesday, Jan. 25, the framework has been revised to account for “the dramatic, expanded vision for the role of artificial intelligence in the future of the American military,” according to Michael Horowitz, the Pentagon’s Director of Emerging Capabilities Policy.

Horowitz told reporters on Wednesday that the new version has only “relatively minor clarifications and refinements” — namely creating clear oversight and advisory bodies to ensure ethical research and development — but a directive update was necessary due to the increased use of autonomous weapons systems in the U.S. military and other armed forces around the world.

And it is a significant update in that regard. The last time the Department of Defense put out guidance on autonomous weapons was 2012. Since then, the field has grown significantly, with autonomous or semi-autonomous weapons systems becoming major elements of modern warfare. Drones in particular have become essential, as the use of reconnaissance systems, amphibious attack drones and automated guns mounted on technicals in the war in Ukraine has demonstrated. The U.S. military has also been looking into weapons capable of disabling enemy drone systems.

At the same time, the Pentagon’s own bureaucracy has evolved with new technologies; many of the offices the department has stood up to address autonomy and AI in defense are newer than the 2012 guidelines, and the 2023 directive formally integrates them into policy. Those include the Chief Digital and Artificial Intelligence Office, which is tasked with developing the requirements to implement the Pentagon’s AI Ethical Principles.


The newly updated policy serves as a framework for how the military will study and develop AI systems going forward. No type of weapon is prohibited, Horowitz told Breaking Defense. Aside from updating responsibilities for development, the policy creates “guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” 

The directive also establishes the Autonomous Weapon Systems Working Group, run by Dr. Colin Kahl, the Under Secretary of Defense for Policy, which will serve as an adviser to Pentagon leadership on autonomous technologies. The goal, Horowitz told Breaking Defense, is to create “good governance.”

Although the framework is meant to advance the study of autonomous and semi-autonomous systems, those systems must “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Right now, artificial intelligence is still evolving, and the military is still developing ways to integrate it into units. Those efforts have ranged from tests of robotic wingmen for Air Force pilots under the collaborative combat aircraft program to experiments in how troops can evade detection by computers.

This week, the Pentagon announced a $12 million partnership with Howard University to conduct research for the Air Force’s tactical autonomy program, which aims to develop systems that require minimal human engagement or oversight.

The Pentagon, it seems, is excited about autonomous systems, but it wants to avoid becoming Skynet.

