The Military Is Using Artificial Intelligence, But It’s Not What You Think


It’s almost 2017, which means Judgment Day will soon be upon us. It’s only a matter of time before Skynet becomes self-aware and begins its nuclear barrage on mankind. Our only hope is that Sarah Connor and Kyle Reese will save us.

Okay, so that is the plot of the most recent Terminator movie, which we all know is garbage and also completely fictional. But while science-fiction movies like “Terminator,” “I, Robot,” and “The Matrix” have spent decades trying to convince us that technology will rise up against the human race, it’s just not going to happen, according to the military.

Dr. Micah Clark, a program officer in the Office of Naval Research’s warfighter performance department, told Task & Purpose in a recent interview, “The sudden emergence of a sentient artificial intelligence … is a near-irrational fear.”

As for the possibility of the rise of the Terminator, Clark added, “I don’t believe the military has an official position on the popular culture debate about the existential threats of AI.”

The military is experimenting with artificial intelligence applications, but it is nowhere close to designing robots with near-human levels of cognition.

“What I find much more concerning are simply mistakes that occur due to errors in design and implementation,” Clark added.

The military’s real goal over the next several years is to create more opportunities to integrate artificial intelligence into human-robot teaming operations.

“[We] are very much focused on how unmanned systems will work together with the soldier to conduct common missions,” Dr. Jonathan A. Bornstein, chief of the autonomous systems division for the Army Research Lab, told Task & Purpose.

For example, Clark described a recent project called the MacGyver Bot, modeled on the 1980s television character who escaped dangerous situations using on-hand objects and materials.

“In a simulation environment, we locked the robot in a room with some random pieces of equipment, and its goal was to get out through a locked door,” Clark said. “It came up with some quite creative solutions.”
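To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of goal-directed search such a robot might run. None of the object names, actions, or facts below come from ONR’s actual system; they are invented for illustration, and a real planner would be far more sophisticated than this breadth-first search.

```python
# A toy illustration of planning an escape with on-hand objects.
# Every action, precondition, and fact here is made up for this sketch.
from collections import deque

# Each action: (name, preconditions, facts it adds to the world state).
ACTIONS = [
    ("pick up metal rod",     {"rod in room"},                    {"holding rod"}),
    ("wedge rod under hinge", {"holding rod", "door has hinges"}, {"rod wedged"}),
    ("pry hinges off",        {"rod wedged"},                     {"hinges removed"}),
    ("push door open",        {"hinges removed"},                 {"door open"}),
    ("exit room",             {"door open"},                      {"outside"}),
]

def plan(start, goal):
    """Breadth-first search from a start state (a set of facts) to any
    state containing the goal fact; returns the action sequence found."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal in state:
            return steps
        for name, pre, add in ACTIONS:
            if pre <= state:  # all preconditions satisfied in this state
                nxt = frozenset(state | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # no sequence of actions reaches the goal

if __name__ == "__main__":
    start = {"rod in room", "door has hinges", "door locked"}
    print(plan(start, goal="outside"))
    # -> ['pick up metal rod', 'wedge rod under hinge', 'pry hinges off',
    #     'push door open', 'exit room']
```

The “creative” part of a system like the MacGyver Bot presumably lies in generating improvised uses for objects on the fly rather than picking from a fixed action list like this one, but the underlying loop, searching for a chain of feasible actions that reaches the goal, is the same shape.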

However, it’s not just about building more sophisticated platforms.

What the Pentagon hopes to avoid is, as Clark put it, systems that are physically capable but intellectually stupid.

He added, “The presumption is … robots and machines can be teammates. And the question is, how can we make them better teammates?”

Though the goal is for these teaming robots to be intelligent enough to solve problems, they will take their instructions from soldiers, and they won’t be sentient.


As for ethical concerns about the dehumanization of warfare, Bornstein said, “I want to be able to act independently, but within bounds. I want to have trust in what that system is going to do.”

According to both Clark and Bornstein, the plan is to create a framework for platforms that are intuitive, independent, and able to communicate effectively with soldiers. They hope that troops will one day be able to view these robots as subordinates in the chain of command.

“If we want to work with machines in the future, we need to be able to trust them. And trust is a very nebulous term. It means that there is transparency, that we as soldiers understand what the robot is likely to do,” Bornstein said.

But, he added, that means the robot must also be able to predict or anticipate what soldiers will do in a given circumstance.

While the military has begun using artificial intelligence in unmanned aerial and ground vehicles, truly independent robotic teammates are still a ways off. But early trials are promising.