The U.S. Navy’s nuclear-powered aircraft carriers’ flight decks are some of the most chaotic and deadly real estate in the world. Teeming with scores of high-performance aircraft, wheeled vehicles and as many as a thousand sailors generating several hundred sorties per day, flight decks “are fraught with danger,” the Naval Safety Center warned in a 2003 publication. “You can get blown down by prop wash, blown overboard by jet exhaust, run over by taxiing aircraft or sucked up and spit out by a turning engine.”

Soon the Navy may have a new danger to add to the list. The sailing branch plans to add robotic jet-powered warplanes to the carrier-deck mix starting next year. The Unmanned Combat Air System Demonstrator program, or UCAS-D, is scheduled to launch a seven-ton Northrop Grumman X-47B drone from the carrier USS Eisenhower sometime in 2013.

The approximately $1-billion UCAS-D effort should lay the groundwork for the follow-on Unmanned Carrier-Launched Airborne Surveillance and Strike initiative, or UCLASS, which aims to add armed unmanned aerial vehicles to all 11 of the Navy’s carriers no later than 2020. The UCLASS drones would boast greater range and endurance — and a lower unit cost — than existing manned fighters. Northrop, Boeing, Lockheed Martin and General Atomics are all competing for the UCLASS contract.

But blending people, vehicles, manned planes and robots on the tightly packed 4.5 acres of a carrier deck requires new techniques, new technologies and new ways of thinking. Men and machines will need to know when to work together, when to ask each other for help and when to just give each other some space. “It’s manned-unmanned operations on the carrier that is the big shift,” Capt. Jaime Engdahl, the Navy’s UCAS-D program manager, told Breaking Defense in an interview.

And the clock is ticking. The UCAS-D test flights from the Eisenhower might make do with makeshift deck-handling methods, but full integration of robot warplanes onto carriers demands almost foolproof systems that aren’t yet ready. In laboratories across the U.S., technology developers are racing to design the so-called “human-machine interfaces” that will mediate between sailors and their robotic shipmates. Related interfaces are seeing early tests in some surprising places — including U.S. highways.

The first of two 62-foot-wingspan X-47Bs took off on its inaugural flight at Edwards Air Force Base in California in February 2011, showing off basic flight systems, including the sensors and actuators needed for a completely autonomous touchdown on a land-based airstrip. At the time, Engdahl cautioned against focusing too much on the air vehicle and its basic functions. “Just as important is the technology for integrating into the carrier — the data-links, algorithms and control systems,” he said.

After a year at Edwards, the first X-47B shifted to the Navy test facility at Patuxent River in Maryland while the second completed its initial flight tests at Edwards before following its sister ‘bot eastward. At Pax River the X-47Bs would be “working toward demonstrating the aircraft’s ability to operate on and around an aircraft carrier,” Engdahl explained in a Navy news release.

This summer, in addition to test flights, the tail-less drones will practice taxiing and perform mock deck operations, presumably on a roughly 1,000-foot stretch of runway etched with the outline of a carrier. An X-47B will be craned aboard a carrier for additional deck tests next year ahead of its first carrier launch, Engdahl tells Breaking Defense.

Traffic Control

In parallel with the efforts at Edwards and Pax River, a team at the Massachusetts Institute of Technology has been working hard on a carrier-compatible human-machine interface. In 2009 the Office of Naval Research awarded an MIT team a five-year grant to develop the so-called Deck Operations Course of Action Planner, or DCAP, which the Navy said “will provide flight deck personnel with automated planning tools, enhanced information displays, and new user interface approaches that make it much easier to interact with autonomous systems.”

DCAP fits sailors, deck vehicles and aircraft — both manned and robotic — with radio tags. A computerized artificial intelligence tracks the location of the people and hardware plus the fuel and maintenance statuses of the aircraft and the material condition of the carrier’s launch and recovery equipment. The AI arranges the drones on the deck and clears them for takeoff and landing in coordination with manned planes.
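
MIT and the Navy have not published DCAP’s internal data model, but the description above suggests a shared deck-state store fed by radio-tag reports. The Python sketch below illustrates what such a store might look like; every class, field and value is a hypothetical stand-in, not actual DCAP code.

```python
# A sketch of the shared deck-state model a DCAP-style planner might maintain.
# All names, fields and values are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class AircraftStatus(Enum):
    PARKED = "parked"
    TAXIING = "taxiing"
    READY = "ready for launch"
    AIRBORNE = "airborne"
    DOWN = "down for maintenance"


@dataclass
class TrackedAircraft:
    tag_id: str            # radio tag fitted to the airframe
    manned: bool           # manned fighter or X-47B-style drone
    position: tuple        # (x, y) deck coordinates in feet
    fuel_lbs: float
    status: AircraftStatus


@dataclass
class TrackedPerson:
    tag_id: str            # radio tag worn by the sailor
    role: str              # e.g. "spotter" or "fuel handler"
    position: tuple


class DeckState:
    """Everything the planner reasons over: people, aircraft and the
    material condition of the ship's launch and recovery equipment."""

    def __init__(self):
        self.aircraft = {}           # tag_id -> TrackedAircraft
        self.people = {}             # tag_id -> TrackedPerson
        self.catapults_up = 4        # working catapults
        self.arresting_gear_up = True

    def update_position(self, tag_id, pos):
        # Radio-tag position reports feed straight into the shared state.
        if tag_id in self.aircraft:
            self.aircraft[tag_id].position = pos
        elif tag_id in self.people:
            self.people[tag_id].position = pos
```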

The key to DCAP is a suite of sophisticated software algorithms that can quickly shift a complex deck schedule to accommodate human errors by the deck crew and mechanical failures on the part of the planes or ship. “How to build a full schedule quickly that can compensate for failures … is something people cannot do very well,” Jason Ryan, an MIT PhD student, said in a university news release. “But that’s something that algorithms are exceptional at.”
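
To make Ryan’s point concrete, here is a toy replanner: when a catapult goes down, it greedily rebuilds the entire launch schedule across the remaining catapults in a fraction of a second. The algorithm, timings and names are assumptions for illustration only; MIT has not released DCAP’s actual scheduling code.

```python
# A minimal sketch of fast replanning after an equipment failure,
# in the spirit of the algorithms Ryan describes. Not DCAP code.
import heapq


def replan_launches(sorties, catapults, failed):
    """Reassign sorties across working catapults.

    sorties   -- list of (earliest_ready_time, aircraft_id) pairs
    catapults -- list of catapult ids
    failed    -- set of catapult ids that just went down
    Returns a schedule as (launch_time, catapult, aircraft) tuples.
    """
    # Each heap entry is (time the catapult is next free, catapult id).
    free = [(0.0, cat) for cat in catapults if cat not in failed]
    heapq.heapify(free)
    cycle = 1.0  # assumed minutes between shots on one catapult

    schedule = []
    for ready, aircraft in sorted(sorties):
        cat_free, cat = heapq.heappop(free)
        launch = max(ready, cat_free)
        schedule.append((launch, cat, aircraft))
        heapq.heappush(free, (launch + cycle, cat))
    return schedule


if __name__ == "__main__":
    sorties = [(0.0, "X-47B-1"), (0.0, "F/A-18-201"), (2.0, "F/A-18-202")]
    # Catapult 3 fails; the whole plan is rebuilt almost instantly.
    print(replan_launches(sorties, ["cat1", "cat2", "cat3"], {"cat3"}))
```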

DCAP is deliberately not designed to be fully automated — and for good reason. “We don’t know a lot about how to tell a machine how to handle surprises,” Randall Davis, an MIT professor, tells Breaking Defense.

When a snag occurs that the DCAP system does not believe it can safely resolve, it alerts a human operator and prompts a decision. The operator can override the system based on intuition. If, for instance, a pilot with a reputation for missing the arrestor wire appears on the landing schedule, the operator can move that pilot higher up in the landing queue to allow more time, according to MIT. “That’s something that’s hard to program into systems, but it’s something that a human can look at and understand,” Ryan explained.
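
The sketch below illustrates that human-in-the-loop step: the planner proposes a default landing order, and an operator can bump one aircraft earlier in the queue. The function names, callsigns and fuel figures are hypothetical, used here only to show the interaction pattern MIT describes.

```python
# A sketch of operator override of a planner-generated landing queue.
# Names and values are illustrative assumptions, not DCAP code.
def propose_landing_order(aircraft):
    """Planner default: lowest-fuel aircraft lands first."""
    return sorted(aircraft, key=lambda a: a["fuel_lbs"])


def operator_bump(queue, callsign, slots=1):
    """Human override: move one aircraft earlier in the queue, e.g. a
    pilot known for missing the wire who may need extra passes."""
    idx = next(i for i, a in enumerate(queue) if a["callsign"] == callsign)
    queue.insert(max(0, idx - slots), queue.pop(idx))
    return queue


recovery = propose_landing_order([
    {"callsign": "Talon 11", "fuel_lbs": 3200},
    {"callsign": "Talon 14", "fuel_lbs": 2800},
    {"callsign": "Talon 12", "fuel_lbs": 4100},
])
# The operator knows Talon 12 tends to miss the wire, so lands him sooner.
operator_bump(recovery, "Talon 12", slots=2)
```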

In June last year MIT conducted what ONR called a “successful live demonstration” of the DCAP on a scale model of a carrier deck, with 10 small wheeled robots standing in for X-47B-style drones. But the demo did not fully reflect the complexity of an active carrier deck, which can include many more people, vehicles and aircraft than were represented at MIT. An operational version of the DCAP will probably require even better AI with more safeguards.

Teamwork

While the Navy has not publicly released the exact capabilities it requires of an operational deck-handling system, it’s possible to speculate based on existing research initiatives. Safeguards could include sensors and algorithms installed in the drones themselves plus tablet-style robot controls for deck crew.

MIT professor Missy Cummings, who has worked on several military programs including DCAP, is also developing a tablet interface for Navy helicopter drones that could find its way into a deck-handling system. The tablet design reflects Cummings’ philosophy that human-machine interfaces should be as simple and intuitive as possible — like a video game. “The best technologies can’t work if they can’t work with people,” Cummings tells Breaking Defense, speaking strictly in her capacity as an MIT professor. (In other words, she’s not speaking for the Navy.) Robot users need to “get to the point where UAVs can be operated by people with minimal training,” she adds.

As controls get simpler, the machines themselves need to be smarter and more autonomous. A host of companies — including Google — are refining the laser scanners, radars, cameras, mapping algorithms and gesture and voice recognition software to enable partially or fully robotic ground vehicles. Much of that tech was tested out in the Grand Challenge and Urban Challenge robot races hosted by the Defense Advanced Research Projects Agency (DARPA) in 2005 and 2007, respectively.

Under the leadership of Sebastian Thrun, a veteran of the DARPA races, Google has been testing a fleet of seven semi-autonomous compact cars, and has even been licensed to operate them on public roads in Nevada. The same sensing and navigation technologies have obvious applications for pilotless aircraft that must operate from crowded airports or carrier decks, which Cummings says represent “chokepoints” in robot usage.

With these technologies installed, naval drones in a deck environment could make some decisions all on their own about where to go and when, subject to override by human crews with control tablets. An operational DCAP and its own human overseers could watch over and coordinate all of this decision-making, intervening in low-level decisions only when necessary.

For instance, a deck-qualified X-47B, having just landed according to the DCAP’s schedule, could autonomously taxi from the arrestor wire toward a corner of the deck for parking — again, following the DCAP’s direction. But if the X-47B’s onboard laser scanner sensed someone or something in its way — say, a wandering sailor or a misplaced piece of deck equipment — the robot would decide on its own to halt. A human spotter could then take over the stopped X-47B and, using the tablet’s touchscreen, steer the drone around the obstacle to its parking spot. The DCAP would note the change and adjust the movements of other ‘bots.
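
That hand-off logic can be summarized as a simple state machine, sketched below: the drone follows the planner’s route, halts on its own when its sensors see an obstruction, and yields to a spotter’s tablet override. The states, distance threshold and method names are assumptions for illustration, not the X-47B’s actual flight software.

```python
# A simplified state machine for the taxi behavior described above.
# All states, thresholds and names are illustrative assumptions.
from enum import Enum, auto


class TaxiState(Enum):
    FOLLOWING_ROUTE = auto()   # executing the planner-assigned taxi route
    HALTED = auto()            # onboard sensors saw an obstruction
    MANUAL_OVERRIDE = auto()   # a deck spotter is steering via tablet


class TaxiController:
    SAFE_DISTANCE_FT = 25.0    # assumed halt threshold

    def __init__(self):
        self.state = TaxiState.FOLLOWING_ROUTE

    def on_lidar_scan(self, nearest_obstacle_ft):
        # The drone halts on its own; it never steers around people itself.
        if (self.state is TaxiState.FOLLOWING_ROUTE
                and nearest_obstacle_ft < self.SAFE_DISTANCE_FT):
            self.state = TaxiState.HALTED

    def on_spotter_override(self):
        # A human with a control tablet takes over a halted drone.
        if self.state is TaxiState.HALTED:
            self.state = TaxiState.MANUAL_OVERRIDE

    def on_override_release(self):
        # Control returns to the planner once the obstacle is cleared;
        # the planner then re-sequences other aircraft around the delay.
        self.state = TaxiState.FOLLOWING_ROUTE
```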

Cummings says robot ground-handling technologies, derived in part from other military and civilian systems, could then go on to find countless applications in both the government and private sectors. Factories and the trucking industry are possible beneficiaries, according to MIT.

Carl Johnson, a Northrop vice president, seconds that assessment. “If I can find a safe and effective way to [autonomously] land on a carrier … why would I do it on just unmanned airplanes?” Johnson told Breaking Defense last year. “It’s a great technological game-changer that will affect everybody.”

Most importantly, it will affect the sailors who daily put themselves in harm’s way on 4.5 acres of deadly carrier-deck real estate.