An Army doctrine writer, Lt. Col. Scott McMahan spends most of his days turning wonky military concepts and research data into the manuals that soldiers use to do their jobs. Whether the topic is Arctic combat, contested airspace, or deployment guidance, his goal is to make each of the Army’s basic instruction manuals simple enough that a high schooler could easily understand it.
He can now do that much more quickly, he says, under an Army push to use artificial intelligence to create doctrine.
“You can use it as essentially an editing tool to say, ‘hey, I want this to be written at the 12th grade level,’ and ‘I want it to have a tone that is more approachable for the intended audience,’” McMahan told Task & Purpose.
McMahan and other soldiers write the manuals and doctrines for the Combined Arms Doctrine Directorate at Fort Leavenworth, Kansas. The manuals they write eventually define almost every aspect of Army life and missions. To do so, they are learning to use AI tools to write the principles and instructions given to soldiers on how they should think about modern military problems.
The same AI tools are rolling out to doctrine authors at Army expertise hubs, like the Army Intelligence Center of Excellence at Fort Huachuca, Arizona, which creates the Army’s rules for modern-day information warfare and psychological operations.
Combined Arms Doctrine Directorate Director Richard Creed Jr. described military doctrine as a “foundational language” that helps commanders and their forces communicate across different echelons, branches, and job specialties, even on missions as specialized as fighting a war in the Arctic.
The writers who create those documents are not necessarily experts in each subject, but instead collect reports and data on topics to keep Army publications up to date and create new guidance where needed. They use AI to generate ideas, edit grammar, and analyze reports and data collected from current exercises.
Directorate writers have eased their way into AI use, starting with CamoGPT, since it was the only tool with access to controlled unclassified information. But doctrine staffers are now using commercial AI tools on the Pentagon’s GenAI.mil program, which are approved for more sensitive information. Those include approved versions of Google’s Gemini and OpenAI’s ChatGPT.
McMahan said he uses the Army’s own data platform called Vantage to look at raw numbers or data that comes directly from Army exercises where soldiers are testing the latest equipment and new formations — part of an effort the Army calls Transform in Contact.
Creed noted that the Army also receives weekly updates from subject matter experts “who are very closely watching the war in Ukraine.” These briefings are turned into written reports and uploaded into databases to become part of the web of information that the AI tools learn from.
“We don’t just sit here at Fort Leavenworth divorced from the rest of the world. We’re tightly enmeshed in the operational force and what it’s doing, not just in terms of modernization, but in terms of the other things that Army forces do around the world continuously,” Creed said. “We’re pulling in that information, and that information becomes part of the databases that the AI can mine for us more efficiently.”
McMahan said he usually turns to AI for help developing a certain writing style. He might ask it to revise prose so it’s “written at the 12th grade level,” or ask it to look up relevant examples from history to help explain tactics.
McMahan said AI helps writers extract real-world observations and insights from written reports, which they can use to craft doctrine language that they then take to experts to double-check.
“I can take 10, 20, 30 of those documents, and I can say, ‘What’s similar? What is always the case here?’” McMahan said. He might then ask the tools to search through the tactics, techniques, and procedures manuals, or TTPs, for ideas that match those patterns. “What ideas from the airspace control manual do I need to help illustrate the point that I’m trying to make here in my operational framework material?”
McMahan also noted that doctrine staffers are well aware that AI can produce hallucinations, the nonsensical or inaccurate output that AI-driven large language models can generate with no clear cause. McMahan said the most common hallucination that he and other doctrine writers see is an AI-invented source, like a Military Review article written in 1994 by a Colonel Jones, which McMahan and his staff would then discover does not exist.
“It’s gotta be validated. We have to have accountability for our words,” he said. “I’m paid to check it.”