credit: Colin Clark

WASHINGTON: A few hours before the Pentagon released its first Artificial Intelligence strategy, I asked SASC Chair James Inhofe why the US military — and the US generally — appeared to be doing relatively little about it, while China has made AI the centerpiece of an outright societal realignment, complete with a master plan and huge amounts of targeted money.

“I think Russia and China are in a better position than we are at the moment on Artificial Intelligence,” the senator said straightaway.

I asked if he would press hard for more money.

Sen. James Inhofe

Answer: “To me, there are other things that need to be done first,” Sen. Inhofe said. As the senator from Oklahoma, he pointed to one of the subjects he knows best: artillery — or, in the current jargon, Long-Range Precision Fires, the US Army’s No. 1 modernization priority. (Of course, Fort Sill is in the senator’s state, home to the Army’s Field Artillery and Air Defense Artillery schools.) He pointed to recent statements by Gen. Joseph Dunford, Chairman of the Joint Chiefs, that we have lost our quantitative and qualitative edges in artillery, and by Gen. Mark Milley, Army Chief of Staff and Dunford’s probable successor, who said the US is outranged and outgunned by our adversaries. Inhofe also mentioned the readiness woes that afflict the aging F-18 fleet, among others. So, don’t expect a great push for either more money or more focus on AI from the SASC chairman.

Sydney J. Freedberg Jr. graphic from Army data

RAP: Rocket Assisted Projectile (current M549A1 or future XM1113). ERCA: Extended Range Cannon Artillery. GMLRS-ER: Guided Multiple-Launch Rocket System – Extended-Range. ATACMS: Army Tactical Missile System. PRSM: Precision Strike Missile.
SOURCE: US Army. SLRC and Hypersonic Missile ranges as reported in Army Times.

When former Deputy Defense Secretary Bob Work heard the head of Google’s parent company, Eric Schmidt, say in November 2017 that America needs a national strategy for developing Artificial Intelligence, one image sprang to his mind’s eye.

“The image that popped into my mind was of Nikita Khrushchev banging his shoe in the UN and saying, ‘We will bury you,’” Work said. “As Eric said, the US does not have a coherent strategy” for developing AI, the father of the Pentagon’s Third Offset Strategy opined.

Well, the Pentagon has released its AI strategy. There are virtually no mentions of increased funding in it. But it does make the stakes clear: “Failure to adopt AI will result in legacy systems irrelevant to the defense of our people, eroding cohesion among allies and partners, reduced access to markets that will contribute to a decline in our prosperity and standard of living, and growing challenges to societies that have been built upon individual freedoms.”

It identifies the center of gravity for AI work in the military, the Joint Artificial Intelligence Center (JAIC). And it sets some priorities:

“We will launch a set of initiatives to incorporate AI rapidly, iteratively, and responsibly to enhance military decision-making and operations across key mission areas. Examples include improving situational awareness and decision-making, increasing the safety of operating equipment, implementing predictive maintenance and supply, and streamlining business processes. We will prioritize the fielding of AI systems that augment the capabilities of our personnel by offloading tedious cognitive or physical tasks and introducing new ways of working.”

Importantly, the strategy notes the importance of ethics in developing and using AI, saying the Pentagon “will articulate its vision and guiding principles for using AI in a lawful and ethical manner to promote our values. We will consult with leaders from across academia, private industry, and the international community to advance AI ethics and safety in the military context.”

Then it sort of issues a laundry list of what AI will be used for, promising to “share our aims, ethical guidelines, and safety procedures to encourage responsible AI development and use by other nations.”

  • Increasing safety of operating equipment. AI also has the potential to enhance the safety of operating aircraft, ships, and vehicles in complex, rapidly changing situations by alerting operators to hidden dangers.
  • Implementing predictive maintenance and supply. We will use AI to predict the failure of critical parts, automate diagnostics, and plan maintenance based on data and equipment condition. Similar technology will be used to guide provisioning of spare parts and optimize inventory levels. These advances will ensure appropriate inventory levels, assist in troubleshooting, and enable more rapidly deployable and adaptable forces at reduced cost.
  • Streamlining business processes. AI will be used with the objective of reducing the time spent on highly manual, repetitive, and frequent tasks. By enabling humans to supervise automated tasks, AI has the potential to reduce the number and costs of mistakes, increase throughput and agility, and promote the allocation of DoD resources to higher-value activities and emerging mission priorities.

And the Pentagon will work on solving hard “global challenges of significant societal importance,” such as how to use AI for humanitarian assistance and disaster relief for wildfires, hurricanes, and earthquakes. “These open missions will challenge a broad community to advance the state of AI and learn how to operationalize the technologies on an integrated basis across domestic and international organizations. They will contribute to the development of thousands of new AI experts needed for public service over the next decade and spur future AI progress across multiple sectors,” the strategy promises.

Wendy Anderson of SparkCognition

In addition to the Pentagon strategy, the White House announced an Executive Order yesterday designed to coordinate AI work across the federal government. It contained few details, in contrast with the Chinese plan. One of the few former defense policymakers with real world experience in AI, Wendy Anderson, gave the Trump Administration credit for the EO, saying it “is significant.” But: “That said, the content of the EO isn’t new. Many AI thought leaders, technologists, others from the tech community, scientists, and policymakers have been making these important points for years. If there’s no implementation plan behind the EO – with details, deadlines, and funding — then it may be worse than no EO at all.”

Anderson, who was deputy chief of staff for Defense Secretary Chuck Hagel, put her finger on the one thing that is, at least in public, missing from all this: “I’d like to see the implementation plan and resources behind it as soon as possible. If we don’t want to fall behind on this game-changing technology, we need to up our game, and we need to do so now,” she said in an email. “In contrast to our EO released today, the Chinese strategy, which is full of details, deadlines, and a clear funding plan, also has engaged support and action at the very top.”

Anderson, who now works for an Austin, Texas company called SparkCognition that designs and builds AIs, asks what the US has been doing compared to the Chinese. “To date, we have mostly engaged in debates about banning AI exports. In the absence of significant US governmental AI spend and in the absence of a robustly resourced national AI strategy, we are now also attempting to limit our private companies’ ability to access capital via international sales to the world’s largest markets.”

Bottom line, for Anderson? “We are losing the money/investment race big time.”

Readers, are we missing our second Sputnik moment?