Let Your Robots Off The Leash – Or Lose: AI Experts

In DARPA-Army experiments, soldiers tried to micromanage their drones and ground robots, slowing their reaction times and restricting their tactics. Can AIs earn troops’ trust?

Robots & Puddles: Surprises From Army RCV Test

Experimental Robotic Combat Vehicles are outperforming Army expectations. But soldiers are finding plenty of quirks to fix.

Air Combat Commander Doesn’t Trust Project Maven’s Artificial Intelligence — Yet

Before the Air Force will trust AI to pick out targets, Gen. Holmes said, it has to get smarter than a human three-year-old.

ATLAS: Killer Robot? No. Virtual Crewman? Yes.

Alarming headlines to the contrary, the US Army isn’t building robotic “killing machines.” What they really want artificial intelligence to do in combat is much more interesting.

Attacking Artificial Intelligence: How To Trick The Enemy

“Autonomy may look like an Achilles’ heel, and in a lot of ways it is” – but for both sides, said DTRA’s Nick Wager. “I think that’s as much opportunity as that is vulnerability. We are good at this… and we can be better than the threat.”

Show Me The Data: The Pentagon’s Two-Pronged AI Plan

“It’s not going to be just about, ‘how do we get a lot of data?’” deputy undersecretary Lisa Porter told the House subcommittee. “It’s going to be about, how do we develop algorithms that don’t need as much data? How do we develop algorithms that we trust?”

Why A ‘Human In The Loop’ Can’t Control AI: Richard Danzig

“Error is as important as malevolence,” Richard Danzig told me in an interview. “I probably wouldn’t use the word ‘stupidity,’ (because) the people who make these mistakes are frequently quite smart, (but) it’s so complex and the technologies are so opaque that there’s a limit to our understanding.”

Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes)

In science fiction and real life alike, there are plenty of horror stories where humans trust artificial intelligence too much. They range from letting the fictional SkyNet control our nuclear weapons to letting Patriots shoot down friendly planes or letting Tesla Autopilot crash into a truck. At the same time, though, there’s also a danger…

Artificial Stupidity: Fumbling The Handoff From AI To Human Control

Science fiction taught us to fear smart machines we can’t control. But reality should teach us to fear smart machines that need us to take control when we’re not ready. From Patriot missiles to Tesla cars to Airbus jets, automated systems have killed human beings, not out of malice, but because the humans operating them…

Artificial Stupidity: When Artificial Intelligence + Human = Disaster

APPLIED PHYSICS LABORATORY: Futurists worry about artificial intelligence becoming too intelligent for humanity’s good. Here and now, however, artificial intelligence can be dangerously dumb. When complacent humans become over-reliant on dumb AI, people can die. The lethal track record goes from the Tesla Autopilot crash last year, to the Air France 447 disaster that killed 228…