GHWB Sailors perform a weapons trial on a Phalanx Close-In Weapon System (CIWS) on the aircraft carrier USS George H.W. Bush (CVN 77) during carrier qualifications. (US Navy photo by Mass Communication Specialist 3rd Class Brandon Roberson)

The war in Ukraine has demonstrated, once and for all, that drones are here to stay — but it has also underlined that battlefield communications are fragile and easily disrupted, leaving drones unable to receive their orders from their operators. The US military is developing ways to defeat pervasive signal jamming, but it should recognize there will be no silver bullet, and that there will be times when the jammers win.

The solution: The military will eventually need more machines that can think like human soldiers, deciding autonomously what to destroy and whom to kill in ambiguous battlefield conditions. And to secure future battlefields, the Pentagon and the public will need to get over their “Terminator” fears and embrace reality.

The Pentagon is working to mitigate the effectiveness of Russian signal jammers in Ukraine, trying to enhance the jam resistance of weapons systems provided to the Ukrainian military. As another method to defeat jamming, Ukrainian forces seem to be targeting electronic warfare systems themselves, using munitions that aim for equipment emitting jamming signals. The architecture of proliferated satellite constellations has offered some protection against jamming, but Russia is increasingly successful at degrading Starlink service and has consistently been able to disrupt many other signals — like GPS and drone command and control links. For navigation, military officials are also looking at GPS alternatives that do not rely on external radio signals, such as image-aided and laser scanner navigation technologies.

This is a cat-and-mouse game between the jammers and the jammed, with both sides racing to develop technologies that defeat each other’s latest and greatest capabilities. What works one week might be obsolete the next. And that means that human control (distinguishing enemy soldiers from friendly troops and civilians, and hostile aircraft from friendly and civilian airplanes, among other battlefield judgments, in any weather, day or night) can never be assured from one mission to the next.

Hence the need to reduce reliance on direct operator control. Broadly speaking, automated decision-making for identifying and engaging targets is not a new concept. Existing weapons such as heat-seeking missiles, mines, and torpedoes, as well as systems like the Phalanx radar-guided gun and Israel’s Harpy drone, make lethal decisions autonomously, albeit following a very tight script that probably falls short of being considered artificial intelligence. A magnetic underwater mine detonates automatically when a metallic warship hull comes near, but that action, the “decision” made, looks more like the reflex of a closing Venus flytrap than like human decision-making.
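The distinction can be made concrete with a toy sketch. Nothing below is drawn from any real fuzing system; the threshold, units, and function name are invented for illustration. A scripted trigger simply reacts when one measured input crosses a fixed value, with no weighing of context.

```python
# Illustrative only: a tightly scripted trigger, not a decision-maker.
# The threshold value and units are invented for this example.
MAGNETIC_TRIGGER_THRESHOLD = 50.0  # hypothetical field disturbance, in microtesla


def scripted_mine_logic(measured_field_microtesla: float) -> bool:
    """Detonate if a single sensed input crosses a fixed threshold.

    There is no classification, no context, and no weighing of
    alternatives -- just a reflex, like a Venus flytrap closing.
    """
    return measured_field_microtesla > MAGNETIC_TRIGGER_THRESHOLD
```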

For years, the United States has looked at ways to incorporate AI into weapon systems, but has shown hesitancy towards AI-enabled autonomous weapons making lethal decisions without a human in the loop. Improved electronic warfare techniques should create an added sense of urgency and a reason for US policymakers and military leaders to reassess that reluctance.

Unlike the United States, neither Russia nor China appears to be circumspect about AI-enabled lethal autonomous weapons. Moreover, the two are already collaborating on AI-powered weapon systems. Among other weapons, Russia has been testing a tank-like robot that could conceivably one day operate autonomously, making real-time decisions on the battlefield about what to shoot. China is developing a myriad of warfighting systems, including submarines and aircraft, designed to make decisions autonomously without a human in the loop. Other countries are taking a similar approach. Ukraine, for example, is working to deploy swarms of automated drones that do not require communications with operators to identify and attack their targets.

The Pentagon lags behind, despite the fact that there are no laws that prohibit the development or use of AI-enabled lethal autonomous weapon systems. Perhaps the best example is the Bullfrog counter-drone system, which would become the first publicly known AI-enabled autonomous lethal weapon used by the US military. This system is a leap beyond legacy ones like the Phalanx, the Harpy, or an underwater mine because it is not just reacting to a single input, such as a radar signature, radio emission, or magnetic signature, but using sensors to comprehensively understand its environment, identify hostile drones, and make decisions based on its human-trained algorithms.
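The gap between a single-input trigger and sensor-driven judgment can be sketched generically. Nothing below reflects Bullfrog’s actual design; every class name, score, and threshold is hypothetical, and a fielded system would replace the simple cue-counting rule with a trained model.

```python
# Hypothetical sketch of fusing several sensor cues before acting.
# Not Bullfrog's design; names, scores, and thresholds are invented.
from dataclasses import dataclass


@dataclass
class SensorPicture:
    radar_track: bool     # radar return consistent with a small drone?
    optical_match: float  # 0..1 score from an image classifier
    rf_emission: bool     # emitting a known drone control signal?


def classify_hostile_drone(picture: SensorPicture) -> bool:
    """Combine independent cues instead of reacting to one input.

    A real system would use a trained model here; this stand-in
    simply requires agreement between at least two cues.
    """
    cues = [picture.radar_track, picture.optical_match > 0.8, picture.rf_emission]
    return sum(cues) >= 2
```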

But Bullfrog is the exception, unfortunately, and not the norm. The US military should devote more attention to AI-enabled autonomous lethal weapons, particularly ones focused on drones, developing the technology so that it meets military requirements and addresses ethical concerns. Like Bullfrog, such systems can be designed with two modes, one keeping a human in the loop and another allowing full autonomy, so that fully automated capabilities are used only when and where battlefield conditions warrant.
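One way to picture such a dual-mode design, purely as a hypothetical sketch (the mode names, link check, and approval logic are invented, not a description of Bullfrog or any fielded system): default to a supervised mode, and permit autonomous engagement only when the command link is gone and autonomy has been pre-authorized.

```python
# Hypothetical dual-mode engagement policy: human-in-the-loop by default,
# full autonomy only when the command link is jammed and pre-authorized.
from enum import Enum, auto


class EngagementMode(Enum):
    HUMAN_IN_THE_LOOP = auto()
    FULL_AUTONOMY = auto()


def select_mode(link_available: bool, autonomy_authorized: bool) -> EngagementMode:
    """Fall back to autonomous engagement only if the link is lost
    and commanders have pre-authorized autonomous operation."""
    if link_available or not autonomy_authorized:
        return EngagementMode.HUMAN_IN_THE_LOOP
    return EngagementMode.FULL_AUTONOMY


def engage(target_hostile: bool, operator_approval: bool, mode: EngagementMode) -> bool:
    """In supervised mode, fire only with operator approval; in autonomous
    mode, rely on the onboard classification alone."""
    if mode is EngagementMode.HUMAN_IN_THE_LOOP:
        return target_hostile and operator_approval
    return target_hostile
```

The point of the sketch is the default: supervision whenever the link allows it, autonomy only as a fallback.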

It must be acknowledged that lethal autonomous weapons will make mistakes. But humans also make mistakes. And civilians, or simply the wrong targets, fall into the crosshairs of today’s human-guided weapon systems. Whereas humans may be susceptible to emotions, such as revenge, in the heat of battle, a weapon system trained using machine learning would not be. Arguably, this means that machines might even make more predictable and rational decisions than people on the battlefield. They will also have to operate “alongside” human soldiers, so work will be needed to examine and optimize the human-robot battlefield relationship. And if the US develops these systems now, rather than ceding first-mover advantage to Russia and China, AI-enabled lethal autonomous weapon systems elsewhere are more likely to reflect American and allied respect for, and adherence to, the law of war.

During a future war in which the US military may not be able to consistently access parts of the radio spectrum, it will still need to offensively use uncrewed drones and defend against them, probably in very complex and fluid environments. Possession of AI-enabled lethal autonomous weapons will probably make Russia and China more eager and willing to jam as much radio spectrum as possible, because they will not need it for the tactical fight on the battlefield.

If human soldiers find themselves cut off from communications with commanders, they can survive and win. If remotely operated machines are cut off, they do not work.

The United States should focus on developing AI-enabled machines, particularly aerial drones and counter-drone systems, that can think, fight, and destroy without human direction. There is no reason to delay efforts to design and build algorithms and autonomous lethal warfighting systems that meet military needs and adhere to US values and principles. Otherwise, the United States cedes a huge battlefield advantage to Russia and China.

Clayton Swope is the deputy director of the Aerospace Security Project and a senior fellow in the International Security Program at the Center for Strategic and International Studies (CSIS) in Washington. He previously served as a congressional staffer and at the Central Intelligence Agency.