
What is the future of algorithmic warfare? The character of war is changing as artificial intelligence and machine learning (AI/ML) applications transform everything from tactical engagements to operational art and military planning. Some thinkers go as far as to claim it is not just the character of war, but its nature and even the balance of power that are changing. 

This moment calls on military organizations to accelerate experimentation. These experiments must combine classified excursions across the joint force like the Global Information Dominance Exercises (GIDE) with unclassified classrooms that allow military professionals to explore new ways of visualizing, describing, and directing operations on the future battlefield. 

To that end, Marine Corps University (MCU), under the U.S. Marine Corps Education Command, used academic year 2024 to test making the classroom a battle lab. In the lab, students and faculty built and tested a series of generative AI models exploring global integration and active campaigning, integrated deterrence, operational art, tactics, and combat development in collaboration with the Chief Digital and Artificial Intelligence Office (CDAO). More important, the effort empowered students to build and evaluate customized AI models linked to contemporary strategic, institutional, operational, and tactical challenges confronting the force.  

First, MCU used its Presidential Lecture Series to introduce both resident and non-resident students to algorithmic warfare. This series brought together speakers including U.S. Air Force Col. Matthew “Nomad” Strohmeyer, who currently leads experimentation efforts in the CDAO, alongside Joseph P. Larson III (U.S. Marine Corps Reserve) and Nand Mulchandani, the chief technology officer of the Central Intelligence Agency. The talks explored not just the promise of AI to transform warfare, but why legacy bureaucracy and processes create challenges to implementation. 

These panel discussions helped students see beyond the headlines and hype to assess when, where, and how AI is most likely to increase maneuver and lethality across the levels of war. Based on his role in implementing the GIDE, Strohmeyer discussed the importance of adopting an agile mindset that prioritizes multiple, small experiments that embrace failure over large, “too big to fail” exercise constructs. 


Larson compared his experience as a company grade officer in al-Anbar, Iraq, supporting intelligence-led targeting with pen, paper, and PowerPoint to the prospects of using data science, statistics, and AI co-pilots to guide military operations. As he noted, this vision requires large investments in data infrastructure and retraining military professionals to understand when to trust and when to discount AI model-generated insights. Finally, Mulchandani discussed strides the intelligence community was making based on prior investments in data infrastructure and how best to conceptualize AI co-pilots. Specifically, he differentiated between expert systems and agents that summarize large bodies of text and “crazy drunk” models that can stimulate human creativity. 

The university also empowered faculty with experience working with AI to develop an incubator across the schools to let students explore the potential of AI/ML as it relates to the future of warfighting. This effort ranged from classroom pilots at the Marine Corps War College using commercial, generative AI models to more structured experiments that had students build and deploy their own AI co-pilots. 

In the Expeditionary Warfare School, a small group of faculty and students built tailored models on tactics using platforms like Advana, Databricks, and simple Python scripts. Even when these experiments failed, they succeeded: military officers left with an understanding of the bureaucratic, organizational, and cultural challenges that tend to plague the integration of new information technologies into any military organization. This approach was also consistent with the agile software development that underwrites most commercial AI innovations.

At the School of Advanced Warfighting, faculty developed a multi-tiered approach working with TF LIMA, the lead for generative AI and large language models in the U.S. Department of Defense, and experts from Scale AI. Using techniques like retrieval-augmented generation, faculty and students built AI co-pilots that combined military history, operational art, theory, and modern doctrine. The resulting models helped TF LIMA analyze how to develop test and evaluation standards for generative AI across the Defense Department. More directly, these models helped students work on classroom assignments like developing future scenarios and guides for writing commander’s intent with the assistance of tailored AI co-pilots. 
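The core idea behind retrieval-augmented generation is simple: rather than retraining a model's weights, relevant passages from a reference corpus are retrieved at query time and prepended to the prompt the model sees. The sketch below is purely illustrative and assumes nothing about the actual tools or corpus the school used; the doctrine snippets, function names, and the toy word-overlap scoring (real systems use vector embeddings) are all invented for demonstration.

```python
import re

# Illustrative sketch of retrieval-augmented generation (RAG).
# The corpus, scoring scheme, and prompt format are hypothetical
# stand-ins, not the actual system described in the article.

def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words; production RAG would use embeddings."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query; keep the top k."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt a language model would receive."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return f"Use the following references:\n{context}\n\nQuestion: {query}"

corpus = [
    "Commander's intent describes purpose, end state, and risk.",
    "Operational art links tactical actions to strategic objectives.",
    "Logistics throughput constrains tempo in distributed operations.",
]

print(build_prompt("How should I draft commander's intent?", corpus))
```

The point of the pattern is that the model's answers stay grounded in a curated body of doctrine and history that can be updated without touching the model itself, which is why it suits classroom experimentation better than full fine-tuning.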

Building on these efforts, MCU plans to create a new quality enhancement plan (QEP) guiding how it will expose students to emerging and disruptive technologies going forward, with a particular focus on decision-support applications and algorithmic warfare. This effort will build on the pilot projects discussed above and expand over the next five years to include partnerships across the Defense Department, think tanks, and civilian universities. The goal is to ensure that students both are exposed to best practices for using AI/ML and gain experience using the technology in the classroom, including planning exercises and wargames.  

Turning the classroom into a battle lab is a concept deeply rooted in the U.S. Marine Corps. In 1929, Col. James Carson Breckinridge published “Some Thoughts on Service Schools” in the Marine Corps Gazette. In the article, Breckinridge – who would go on to retire as a lieutenant general – discussed creating schools that embraced experimentation and critique, linking classrooms to ongoing discussions about the changing character of war and even arts and philosophy. In 2013, faculty resurrected this idea to start the Gray Scholars program, which originally aligned graduate research and electives with external partners ranging from the Office of Net Assessment and DARPA to the Marine Corps Warfighting Lab. 

This effort set off a wave of innovation including multiple programs run by the Krulak Center and larger fleet efforts like the Training and Education Command Warfighting Club. Many of these efforts built on earlier calls for expanding wargaming as a form of study, including the Fight Club initiative started by U.S. Army Col. Arnel David. Of note, these efforts paid dividends to both the students and the fleet. Efforts that grew out of Gray Scholars supported new naval theory, force design initiatives, concepts on swarming in modern war on display in the Replicator Initiative, and DARPA’s Mosaic Warfare concept. 

The only constant in war is change. As a result, military professionals need classrooms that are battle labs. These labs need to combine new approaches to aggregating and analyzing data, like AI, with the military theory, history, and research methods that cultivate what Clausewitz called critical analysis. In the words of Breckinridge, “There is no program without criticism.” For too long, the classroom has been disconnected from current battles and emerging technology, focused instead on stale cases and debates about grand strategy that leave students with a thin understanding of modern battle networks and the changing character of operational art. Adapting the military for a new era of algorithmic warfare therefore starts with turning the classroom into a modern battle lab.
