Raytheon photo

QinetiQ Titan robot fitted with a Javelin anti-tank missile launcher

WASHINGTON: The US military should adopt artificial intelligence urgently without letting debates over ethics and human control “paralyze AI development,” a congressionally mandated panel says. “In light of the choices being made by our strategic competitors, the United States must also examine AI through a military lens, including concepts for AI-enabled autonomous operations.” (Emphasis ours).

In what will likely prove controversial on Capitol Hill, the interim report released yesterday by the bipartisan National Security Commission on Artificial Intelligence offers a full-throated defense of pursuing autonomous, AI-driven military systems as not only ethical but essential to future US military operations. Even within the military, some commanders have been publicly reluctant to trust AI, especially for anything related to nuclear weapons.

[Read about the independent Defense Innovation Board’s call for ethical military AI]

Notably, nowhere does the commission use the phrase ‘human in the loop,’ the language currently favored by the Pentagon to assert that a human would always have ultimate control over any autonomous system. That phrase, in turn, is an oversimplification of the current Department of Defense policy on autonomous systems, DoD Directive 3000.09, which goes almost entirely unmentioned, appearing only in a single footnote across the report’s 101 pages.

“Ethics and strategic necessity are compatible with one another,” the report says. “Defense and national security agencies must develop and deploy AI in a responsible, trusted, and ethical manner…. Everyone desires safe, robust, and reliable AI systems free of unwanted bias, and recognizes today’s technical limitations. Everyone wants to establish thresholds for testing and deploying AI systems worthy of human trust and to ensure that humans remain responsible for the outcomes of their use. Some disagreements will remain, but the Commission is concerned that debate will paralyze AI development.”

“Inaction on AI development raises as many ethical challenges as AI deployment,” the report continues. “There is an ethical imperative to accelerate the fielding of safe, reliable, and secure AI systems that can be demonstrated to protect the American people, minimize operational dangers to U.S. service members, and make warfare more discriminating, which could reduce civilian casualties.”

“Adopting AI for defense and security purposes is an urgent national imperative” is one of seven consensus principles agreed to by the commissioners. “The Commission is not glorifying the prospect of AI-enabled warfare,” they write. “But new technology is almost always employed for the pursuit of power. In light of the choices being made by our strategic competitors, the United States must also examine AI through a military lens, including concepts for AI-enabled autonomous operations.”

The concept of fully autonomous weapon systems is highly controversial, both in the US and among US allies. As we reported back in August, the International Campaign to Stop Killer Robots nearly doubled its membership over the past year, to 113 NGOs in 57 countries as well as The Vatican and Palestinian Authority. A total of 90 nations have called for negotiations towards some kind of ban.

In addition, an August report by the Congressional Research Service found that there is a widespread consensus at the United Nations that “appropriate levels of human judgement must be maintained” over any lethal autonomous weapon even though there is not agreement on a ban.

The commission’s seven guiding principles were agreed upon as a way of shaping the robust US debate about linking AI to national defense and military systems, the report explains. Others include making global leadership in AI a national security priority, backed by a robust government investment strategy to maintain the US technological edge; investing in domestic STEM education and recruiting foreign talent; and maintaining free and open academic research.

The commission stressed, however, that the 101-page report is not final and thus does not make “final recommendations, suggest major organizational changes, or propose specific investment priorities in rank order attached to dollar figures.” The final report is due to Congress in October 2020.

The commission was established by the 2019 National Defense Authorization Act. It is chaired by former Google CEO Eric Schmidt, who also chairs DoD’s Defense Innovation Board, and its 15 members include former Deputy Defense Secretary Bob Work, who serves as vice chairman.

“The development of AI will shape the future of power,” the report states bluntly. “The nation with the most resilient and productive economic base will be best positioned to seize the mantle of world leadership. That base increasingly depends on the strength of the innovation economy, which in turn will depend on AI. AI will drive waves of advancement in commerce, transportation, health, education, financial markets, government, and national defense.”

It identifies “five fundamental lines of effort that are necessary to preserve U.S. advantages: Invest in AI Research and Development (R&D); Apply AI to National Security Missions; Train and Recruit AI Talent; Protect and Build Upon U.S. Technology Advantages; and Marshal Global AI Cooperation.”

The report explains that the commission’s work so far has focused on four major issues:

  • foreign threats to our national security in the current AI era;
  • how AI can improve the government’s ability to defend the country, cooperate
    with allies, and preserve a favorable balance of military power in the world;
  • the relationship between AI and economic competitiveness as a component of
    national security, including the strength of our scientific research community and
    our larger workforce; and
  • ethical considerations in fielding AI systems for national security purposes.

Threats posed by AI misuse, the report says, include disinformation campaigns that undermine democratic systems; erosion of privacy and civil liberties; an increase in cyber attacks; and a greater potential for catastrophic accidents.

It also lays out the potential benefits of AI for homeland defense, the Intelligence Community (IC), and the military:

  • For homeland defense, the report says, AI-enabled tools can assist with border protection, cybersecurity, protection of critical infrastructure, and natural disaster response.
  • For the Intelligence Community, “AI algorithms can sift through vast amounts of data to find patterns, detect threats, and identify correlations. AI tools can make satellite imagery, communications signals, economic indicators, social media data, and other large sources of information more intelligible. AI-enabled analysis can provide faster and more precise situational awareness that supports higher quality decision-making.”
  • On future battlefields, the military “could use AI-enabled machines, systems, and weapons to understand the battlespace more quickly; develop a common joint operating picture more rapidly; make relevant decisions faster; mount more complex multi-domain operations in contested environments; put fewer U.S. service members at risk; and protect innocent lives and reduce collateral damage.”

Of course, if those applications, however appealing, require relinquishing or even reducing human control, the controversy will be intense.