AI Weapons Are Here to Stay


Many have been debating the morality of developing weapons with artificial intelligence. Will this lead to a taboo against their use?

The debate over the ethics of AI weapons has involved everyone from advocacy groups to government officials to Google engineers. Many agree that AI weapons carry significant ethical concerns, which raises the question: will these concerns, and the efforts of anti-AI-weapons advocacy groups, produce a ban on their use or a strong taboo against it? Some seem to think an international agreement will be enough to stop their adoption by the world's militaries. However, the development of a taboo around AI weapons depends on something much more straightforward: their effectiveness on the battlefield.

In April, Google employees very publicly protested the company's participation in a Pentagon program that used AI to interpret images and improve the targeting of drone strikes. Certainly, the risks and ethical concerns of AI weapons are real, as with any new technology of war. Opponents of AI weapons most often point to the ethical problems of having a computer algorithm both select and eliminate human targets without any human involvement in the process.

However, the risks associated with AI weapons stretch beyond the ethics of war. Some have pointed to crisis instability should AI weapons proliferate throughout the world. If two states involved in a crisis have access to weapons capable of such rapid destruction, the first-mover advantage will likely push those states toward war rather than away from it. The first-mover advantage is often cited as a cause of the outbreak of World War I: rapid troop mobilization and advances in weaponry led military planners to believe that whoever moved first would gain an insurmountable advantage. If you think your adversary is preparing to move, you have a strong incentive to move before they do. AI could create similar incentives.

Marines with 3rd Battalion, 5th Marine Regiment tested new equipment such as the Multi Utility Tactical Transport in a simulated combat environment at Marine Corps Base Camp Pendleton, Calif., July 8, 2016. (U.S. Marine Corps/Lance Cpl. Julien Rodarte)

Most assume that the taboo around chemical weapons exists because they are a particularly terrible weapon of war and the international community has agreed to prohibit their use. But prohibition by the international community is not sufficient to stop a weapon's use. Setting aside whether a ban on AI weapons is practical, or whether it would even be effective, the determinant of their use or non-use is much simpler: states will use them if they are effective on the battlefield.

The development of previous tools of war demonstrates as much. When most new weapons are developed, calls to ban them often follow. In The Social History of the Machine Gun, John Ellis explains why the United States was unique in its early adoption of the machine gun, how the weapon supplanted the feudal notion that brave men, not fancy weapons, win wars, and how it eventually became ubiquitous on the battlefield.

Similar attempts to ban weapons and set rules for the appropriate conduct of war offer examples from which we can infer what to expect for AI weapons. In 1899, delegates from around the world convened at The Hague to discuss the rules and laws of war. Of particular concern was the use of chemical weapons, then a new and developing technology. Chemical weapons were not yet operational for mass use at the time of the convention, but the delegates were concerned about them nonetheless.

The convention would eventually call for a ban on asphyxiating, poisonous, or other gases. Only two countries opposed the ban: the United States and, in support of America, the United Kingdom. The American argument against prohibiting chemical weapons was that states should use all means at their disposal to end a war as quickly as possible; banning chemical weapons, they assumed, would only prolong conflicts and thus ultimately cause more people to die. The U.S. Civil War was also fresh in memory, having ended only three decades earlier: a war with substantial casualties that was expected to end quickly but did not.

Alfred T. Mahan, the U.S. delegate to the convention, said, "the objection that a warlike device is barbarous has always been made against new weapons, which have nevertheless eventually been adopted." So why does a taboo form around some barbarous weapons and not others? Richard Price observed, "Throughout history, numerous weapons have provoked cries of moral protest upon their introduction as novel technologies of warfare. However, as examples such as the longbow, crossbow, firearms, explosive shells, and submarines demonstrate, the dominant pattern has been for such moral qualms to disappear over time as these innovations became incorporated into the standard techniques of war."

Ultimately, a taboo did form around the use of chemical weapons. While it has not prevented their use entirely (they have been used multiple times, most recently by the Assad regime in Syria and by Saddam Hussein in the Iran-Iraq War), it certainly has a deterrent effect on their widespread use. But what is driving this limited use? Is the "taboo" actually preventing some states from using them while failing to stop others?

The argument that the taboo alone is stopping the widespread use of chemical weapons is made far less convincing by the fact that they are not a particularly effective or useful battlefield tool. Creating gas clouds of sufficient concentration to be effective is difficult, and targeting becomes even harder amid the fog of war and under various levels of "friction." The weapon's inability to hit discrete targets makes it especially unappealing in the era of precision weapons. Furthermore, there is no guarantee that your own forces won't be affected if the wind picks up or changes direction. These limitations make chemical weapons attractive only to leaders and regimes already in a desperate position.

Now contrast that with the development of the submarine. Submarines were first developed for widespread use around the same time as chemical weapons, and they were viewed similarly: as a dishonorable weapon, lurking unseen below the surface and killing its targets anonymously. This was especially the case with the rise of unrestricted submarine warfare. Calls were also made to ban the use of submarines, and the 1936 London Protocol sought to limit unrestricted submarine warfare.

Russia's Uran-9 unmanned ground vehicle. (Russian Ministry of Defense)

So why did chemical weapons develop a taboo while submarines did not, and what does this tell us about the potential for a taboo around AI weapons? All three technologies of war considered here created significant ethical dilemmas and are still viewed by many as unethical. Yet of the two historical examples, the ineffective tool developed a taboo while the effective tool did not. So, will AI be an effective tool of war?

It seems highly likely that it will. Although one should avoid terms like "revolution in military affairs," it is easy to see the promise of this new type of weapon system. However, one shouldn't expect an upheaval that ends the "modern system" of warfare. While AI weapons are likely to advance military effectiveness, the subsequent development of countermeasures will likely prevent them from upending the modern system of war.

One of the promises of AI in the military that seems to guarantee its adoption is its broad applicability. AI can be used to increase effectiveness and efficiency for more than just combat operations. AI can improve supply lines, enhance the training of new soldiers, and increase the effectiveness and efficiency of intelligence gathering and processing.

But AI's effectiveness in combat operations seems especially promising. AI is not a wholly revolutionary idea applied to the military domain; it is merely the next logical step in the digitization and mechanization of the modern battlefield. The adoption of electronic and robotic weapon systems by modern militaries accelerated rapidly after 9/11. According to one article, "Already 90 states and non-state groups possess drones, and 30 have armed drones or programs to develop them." The notable example of ISIS adopting drones demonstrates the potential of such weapons.

Finally, AI weapons will be critical to the Third Offset Strategy being pursued by the U.S. military. Machine learning, human-machine teaming, and drone swarms are all potential systems that could upset the balance between states. We already see some of these tactics on the battlefield: Russia, for example, claimed that a drone swarm attacked one of its bases in Syria.

The early adoption of AI-like capabilities on the battlefield, the sheer number of states investing in the technology, and current weapon systems that offer insight into its potential all indicate that AI weapons will be highly effective tools of war. Therefore, while a robust discussion of the tool's ethics and limits should be pursued, it is unlikely to force the development of a taboo around the tool's use.

Adam Wunische is a U.S. Army veteran and a Ph.D. student at Boston College. He researches military operations and strategy, terrorism, and civil-military relations. He's also written for The Diplomat and The Strategy Bridge.

This article originally appeared on The National Interest
