
The Air Force has taken a giant step toward creating an artificial intelligence system that would never in a million years turn on humanity – unlike the “Skynet” nemesis in the first two Terminator movies, which are the only ones that count.

Recently, an artificial intelligence algorithm named ARTUµ, apparently a nod to Star Wars’ R2-D2, performed tasks aboard a U-2 Dragon Lady spy plane that are normally handled by humans, the Air Force announced on Wednesday.

“After takeoff, the sensor control was positively handed-off to ARTUµ who then manipulated the sensor, based off insight previously learned from over a half-million computer simulated training iterations,” according to a news release from the humans who run the Air Force — for now. “The pilot and AI successfully teamed to share the sensor and achieve the mission objectives.”

The algorithm handled the plane’s tactical navigation while an Air Force major with the callsign “Vudu” flew the U-2, which was assigned to the 9th Reconnaissance Wing at Beale Air Force Base, California, the news release says.

In short: Man and machine successfully flew a reconnaissance mission during a simulated missile strike.

“ARTUµ’s primary responsibility was finding enemy launchers while the pilot was on the lookout for threatening aircraft, both sharing the U-2’s radar,” according to the news release, which seemed to suggest that this could be the birth cry of a new form of superintelligence.

Air Force officials lavished praise on the successful experiment — because if science fiction has taught us anything, it’s that when computers start making decisions instead of humans, nothing could possibly go wrong.

“Putting AI safely in command of a U.S. military system for the first time ushers in a new age of human-machine teaming and algorithmic competition,” Dr. Will Roper, assistant secretary of the Air Force for acquisition, technology and logistics, said in a statement. “Failing to realize AI’s full potential will mean ceding decision advantage to our adversaries.”

In July, Task & Purpose addressed the elephant in the room by asking Nand Mulchandani, the acting director of the Pentagon’s Joint Artificial Intelligence Center, what steps the U.S. military is taking to make sure that artificial intelligence does not become self-aware and declare war on humanity.

Mulchandani said he had no idea how anyone would design an algorithm to become self-aware, and he noted that the Defense Department also has to follow policies and laws governing artificial intelligence.

“Here at the DOD, things are taken very seriously in terms of systems that get deployed, which is why there tends to be this negative connotation that the DOD’s slow, the government’s slow, et cetera,” Mulchandani told reporters at a Pentagon news briefing. “Well, it’s slow for a reason. There’s a maturity of technology you have to put in there: there’s tests and eval that is taken very, very seriously here.”

Of course, that’s just what Skynet would want us to think.