Air Force Secretary Frank Kendall. (Photo by Drew Angerer/Getty Images)

WASHINGTON — Air Force Secretary Frank Kendall said today that he has asked the service’s Scientific Advisory Board to quickly produce a study of the potential military impacts of “generative” artificial intelligence, such as the increasingly popular chatbot ChatGPT.

“I’ve asked my Scientific Advisory Board to do two things, really, with AI. One was to take a look at the generative AI technologies like ChatGPT and think about the military applications of them, to put a small team together to do that fairly quickly,” he said during an online interview with the Center for a New American Security (CNAS). “But also to put together a more permanent, AI-focused group that will look at the collection of AI technologies, quote, unquote, and help us understand them and [figure] out how to bring them in as quickly as possible.”

The board met June 15 to review progress on its ongoing studies, which are due to be wrapped up next month.

Kendall stressed that at the moment ChatGPT and other generative artificial intelligence systems, which can create entirely new text, code or images rather than just categorizing and highlighting existing ones, are not ready for primetime.

“I find limited utility in that type of AI for the military, so far. I’m looking, and we’re all looking, right? But having it write documents for you, which is what ChatGPT does? [It] is not reliable, in terms of the truthfulness of what it produces,” he said. “We got ways to go before we can rely on tools like that to do operational orders, for example.”

Former Pentagon AI officials previously expressed their concerns about generative AI to Breaking Defense, especially the current technology’s proclivity for “hallucinating” information. And in May, the Pentagon’s chief digital and artificial intelligence officer, Craig Martell, said he too was “scared to death” about the potential for AI to be used for extremely effective disinformation.

Nonetheless, Kendall said, “there is potential there for assistance, if you will, with some of the tasks that we do.”

For the most part, however, Defense Department interest in AI is focused on things like “pattern recognition, targeting, sorting through a lot of intel functions, where neural networks and machine learning can be … very helpful,” he explained. “What we’re calling AI offers now is much higher processing speeds and much more data that can be handled. So, it’s more of an incremental advance in technology than people acknowledge many times, but it has revolutionary potential in terms of the capability that it provides.”

Kendall stressed that those kinds of AI tools to assist decision-making already are being fielded in the commercial world — and inevitably will make their way into military systems.

“[T]he thing that people don’t, I think, really appreciate enough is that those technologies are happening [whether] you ask for them or not,” he said. “All of those things are going to be coming in, they’re going to be used. They’re going to give us more capability.”

US military commanders have been clamoring for such AI decision aids for several years in order to out-think adversaries, both in future all-domain warfare operations and in “gray zone” competition just below the threshold of conflict.

And in February, Martell reinvigorated the Global Information Dominance Experiment (GIDE) series, designed to flesh out the US military’s ambitious Joint All-Domain Command and Control (JADC2) concept aimed at machine-speed warfighting across the land, air, sea, space and cyber domains. The GIDE series, first held in 2021, includes a strong focus on the use of AI and machine learning technologies for tasking sensors and targeting shooters.

Thus, Kendall explained, the goal is to work through the ethical considerations while at the same time moving AI technology into the field fast.

“Humans are still going to have the role and be responsible for what actions are taken, and they have to be in the decision-making process so they can do that in a way which is also operationally efficient. And we’ve got to make sure that we apply those technologies in an ethical way, in a responsible way,” he said. “And we have to be very smart about how we write requirements, so that we provide the opportunity for those technologies to get on board and then find paths for them that address all the issues I just talked about, but get them in the hands of warfighters where they can give them an advantage.”