Army photo

A soldier mans a robot-carried machinegun during the Army’s PACMAN-I experiment in Hawaii.

How does war change when your weapons can think? Do you trust a computer to decide when and whom to kill? Questions once asked only in science fiction are now becoming matters for policymakers. All four armed services are experimenting with artificial intelligence in every domain: land, sea, air, outer space, cyberspace, and the all-pervasive electromagnetic spectrum. While robots are still halting and limited on land, they are showing striking potential in less cluttered environments. Indeed, the most powerful and dangerous artificial intelligences may not be Terminators that walk abroad, but invisible algorithms that control what we see and how we make decisions.

[All this week we’re reprinting some of our best stories of 2016 on the biggest issues: robotics & artificial intelligence (today), future warfare, China, Russia, & the defense policies of Donald Trump.]

 

Air Force Photo

THE BIG QUESTION: Should US Unleash War Robots? Frank Kendall Vs. Bob Work, Army

The Pentagon’s top weapons buyer, Frank Kendall, warned today that the US might hobble itself in future warfare by insisting on human control of thinking weapons if our adversaries just let their robots pull the trigger. Kendall even worries that Deputy Defense Secretary Bob Work is being too optimistic when Work says humans and machines working together will beat robots without oversight. ….

“Bob Work’s view, for the near future at least, (is that) humans with machines will make better decisions than machines (will) alone,” Kendall continued. “He may be right about that; there are instances where that is true. I don’t know how much he’s right about or how long it will be true.”

As computers keep getting better, Kendall said, “the trend is certainly against us” — “us” as in “human beings.”

[click here to read the full story]

 

Robert Work

DECISION MACHINES: Iron Man, Not Terminator: The Pentagon’s Sci-Fi Inspirations

“When most people hear me talk about this, they immediately start to think of Skynet and Terminator,” said the deputy secretary of defense. “I think more in terms of Iron Man.” The Pentagon wants artificial intelligence, said Bob Work, but it doesn’t want “killer robots that roam the battlefield” without human control.

Instead, Work told an Atlantic Council conference, citing half-a-dozen science fiction stories from Iron Man to Star Trek to Ender’s Game, the goal is something like the JARVIS software that runs Tony Stark’s fictional super-suit: “a machine to assist a human where the human is still in control in all matters, but the machine makes the human much more powerful and much more capable.”

[click here to read the full story]

 

Marine Corps robots at the MIX-16 experiment

EN MASSE: Marines Seek To Outnumber Enemies With Robots

Since World War II, the US military has always expected to fight outnumbered. Soon, however, expendable unmanned systems may change that. For the first time in 70 years, America could have numbers on its side. That turns traditional assumptions about tactics, technology, and budgets upside down….

To reduce casualties in future landing operations, [Lt. Gen.] Walsh has already called for amphibious forces to have robotic vanguards. “Instead of Marines being the first wave in, it’s unmanned robotics, whether it’s in the air or the surface or subsurface…sensing, locating, and maybe killing (targets),” he said this morning.

[click here to read the full story]

 

JPO photo

IN THE AIR: F-22, F-35 Outsmart Test Ranges, AWACS

How smart is too smart? When F-35 Joint Strike Fighters flew simulated combat missions around Eglin Air Force Base in Florida, their pilots couldn’t see the “enemy” radars on their screens.

Why? The F-35s’ on-board computers analyzed data from the airplanes’ various sensors, compared the readings to known threats, and figured out the radars on the training range weren’t real anti-aircraft sites — so the software didn’t even display them. While the software and pilots on older aircraft hadn’t noticed the imperfections and inaccuracies in how the Eglin ranges portrayed the enemy, the F-35s’ automated brains essentially said, “Fake! LOL!” and refused to play.
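The Air Force hasn’t published how the F-35’s fusion software actually works, but the behavior described above can be illustrated with a toy sketch. Everything here is invented for illustration: the threat table, the parameters, and the tolerance are hypothetical, standing in for whatever signature library the real system compares emitters against. The point is only that an emitter whose measured parameters don’t closely match any known threat gets classified as not real and never reaches the pilot’s display.

```python
# Hypothetical sketch (all names and numbers invented): how a sensor-fusion
# filter might discard emitters that don't closely match any known threat.

KNOWN_THREATS = {
    # emitter name -> (frequency in GHz, pulse width in microseconds)
    "SA-X_search_radar": (3.1, 1.2),
    "SA-Y_track_radar": (9.4, 0.4),
}

MATCH_TOLERANCE = 0.05  # allow 5% relative error per parameter


def matches(observed, reference, tol=MATCH_TOLERANCE):
    """True if every observed parameter is within tol of the reference."""
    return all(abs(o - r) / r <= tol for o, r in zip(observed, reference))


def classify(emitter):
    """Return the first known threat this emitter resembles, else None."""
    for name, signature in KNOWN_THREATS.items():
        if matches(emitter, signature):
            return name
    return None  # no match: treated as "not a real threat", never displayed


# A range emulator that is well off-frequency would be silently dropped:
print(classify((3.1, 1.2)))  # matches the search radar signature
print(classify((3.7, 1.2)))  # off-frequency emulator -> None, not displayed
```

An imperfect training-range emulator falls outside the tolerance on at least one parameter, so the filter drops it, which is effectively what the pilots at Eglin saw.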

[click here to read the full story]

 

Navy photo

AT SEA: Swarm 2: The Navy’s Robotic Hive Mind

Two years ago, on the James River, the Office of Naval Research dropped jaws with a “swarm” of 13 unmanned craft that could detect threats and react to them without human intervention. This fall, on the Chesapeake Bay, ONR tested ro-boats with dramatically upgraded software. The Navy called this experiment “Swarm 2” — but a better description would be “Hive Mind.”

…in the two years between the experiments, Robert Brizzolara and his team developed software that lets the boats come up with a plan together and allocate tasks among themselves. Instead of a swarm of insects, they progressed to something like Star Trek‘s Borg Collective: the robots can make a plan, divide the labor, and even hold forces in reserve.
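ONR hasn’t released the Swarm 2 algorithms, but the division-of-labor idea can be sketched with a simple stand-in: a greedy allocation in which each task goes to the nearest unassigned boat, and any leftover boats are held in reserve. The function names, boat positions, and reserve fraction below are all assumptions made up for the example.

```python
# Illustrative sketch only -- NOT ONR's actual software. A greedy allocator:
# each task is assigned to the nearest still-available boat, and a fixed
# fraction of the fleet is always held back as a reserve.
import math


def allocate(boats, tasks, reserve_fraction=0.25):
    """boats, tasks: dicts of name -> (x, y). Returns (assignments, reserve)."""
    n_reserve = int(len(boats) * reserve_fraction)
    available = dict(boats)
    assignments = {}
    for task, t_pos in tasks.items():
        if len(available) <= n_reserve:
            break  # stop assigning so the reserve stays intact
        nearest = min(available, key=lambda b: math.dist(available[b], t_pos))
        assignments[task] = nearest
        del available[nearest]
    return assignments, sorted(available)


boats = {"B1": (0, 0), "B2": (5, 5), "B3": (9, 0), "B4": (0, 9)}
tasks = {"intercept": (8, 1), "escort": (1, 1)}
jobs, reserve = allocate(boats, tasks)
print(jobs)     # {'intercept': 'B3', 'escort': 'B1'}
print(reserve)  # ['B2', 'B4'] -- held back, just as the article describes
```

A real hive-mind planner would negotiate assignments among the boats rather than compute them centrally; this centralized sketch only shows the outcome — tasks divided, reserve held — not the distributed mechanism.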

[click here to read the full story]

 

Army photo

ON LAND: Tiny Drones Win Over Army Grunts; Big Bots? Not So Much

Tiny drones, no bigger than your palm, were the big stars of an Army experiment in Hawaii, participants told Breaking Defense. Larger ground robots, however, struggled in the jungle….

[Sergeants] Garner and Roe also appreciated the robots’ ability to mount sensors and haul equipment, particularly a .50 caliber heavy machinegun, the kind of firepower foot troops simply can’t carry. Some of the larger robots were even rigged with a remote-controlled gun mount: operators, safely hidden in cover, could send the robots toward the enemy and then open fire.

Until the remote broke, which happened to Garner’s unit twice.

[click here to read the full story]

 

An Army Grey Eagle testing a NERO jammer.

OVER THE AIRWAVES: Jammers, Not Terminators: DARPA & The Future Of Robotics

Robophobes, relax. The robot revolution is not imminent. Machine brains have a lot to learn about the messy physical world, said DARPA director Arati Prabhakar. Instead, DARPA sees some of the most promising applications for artificial intelligence in the intangible realm of radio waves. That includes electronic warfare — jamming and spoofing — as well as a newly launched “grand challenge” on spectrum management: allocating and reallocating frequencies among users according to demand more nimbly than a human mind could manage, let alone the federal bureaucracy. In short, don’t think Terminators: think jammers.
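The spectrum-management idea — reallocating frequencies among users as demand shifts, far faster than a licensing process — can be shown with a toy sketch. This is not DARPA’s challenge framework; the function and the demand figures are invented to illustrate proportional reallocation.

```python
# Toy sketch (not DARPA's actual system): splitting a fixed pool of channels
# among users in proportion to current demand, rerun whenever demand changes.


def reallocate(channels, demands):
    """Divide `channels` among users proportionally to the demands dict."""
    total = sum(demands.values())
    shares = {user: channels * d // total for user, d in demands.items()}
    # hand any leftover channels to the users with the highest demand
    leftover = channels - sum(shares.values())
    for user in sorted(demands, key=demands.get, reverse=True)[:leftover]:
        shares[user] += 1
    return shares


print(reallocate(10, {"radar": 5, "comms": 3, "datalink": 2}))
# {'radar': 5, 'comms': 3, 'datalink': 2}
```

The interesting engineering problem, and the point of the grand challenge, is doing this continuously and cooperatively across radios that don’t share a central controller; the sketch shows only the allocation arithmetic.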

[click here to read the full story]