TECHNET CYBER 2023 — While the US military is eager to make use of generative artificial intelligence, the Pentagon’s senior-most official in charge of accelerating its AI capabilities is warning it also could become the “perfect tool” for disinformation.
“Yeah, I’m scared to death. That’s my opinion,” Craig Martell, the Defense Department’s chief digital and AI officer, said today at AFCEA’s TechNet Cyber conference in Baltimore when asked about his thoughts on generative AI.
Martell was referring specifically to generative AI language models, like ChatGPT, which he said pose a “fascinating problem”: they don’t understand context, yet people take their words as fact because the models speak authoritatively.
“Here’s my biggest fear about ChatGPT,” he said. “It has been trained to express itself in a fluent manner. It speaks fluently and authoritatively. So you believe it even when it’s wrong… And that means it is a perfect tool for disinformation… We really need tools to be able to detect when that’s happening and to be able to warn when that’s happening.
“And we don’t have those tools,” he continued. “We are behind in that fight.”
Martell, who was hired by the Defense Department last year from the private sector, has extensive AI experience under his belt. Prior to his CDAO gig, he was the head of machine learning at Lyft and Dropbox, led several AI teams at LinkedIn and was a professor at the Naval Postgraduate School for over a decade studying AI for the military.
He implored industry at the conference to build the tools necessary to ensure that information generated by all generative AI models, from language to images, is accurate.
“If you ask ChatGPT ‘Can I trust you?’ its answer is a very long ‘No,’” he said to the audience. “I’m not kidding. It says, ‘I’m a tool and I’m going to give you an answer and it’s incumbent upon you to go verify it yourself.’ So my fear about… using ChatGPT, as opposed to fears about our adversaries using it… is that we trust it too much without the providers of the service building in the right safeguards and the ability for us to validate it.”
Martell’s warning comes as Pentagon leaders are eyeing ways to use generative AI for intelligence gathering and future warfighting. On Tuesday at the conference, Lt. Gen. Robert Skinner, director of the Defense Information Systems Agency (DISA), began his keynote address with a generative AI clone of his voice delivering his opening remarks.
“Generative AI, I would offer, is probably one of the most disruptive technologies and initiatives in a very long, long time,” Skinner said after revealing his introduction was AI-generated. “Those who harness that and can understand how to best leverage it, but also how to best protect against it, are going to be the ones that have the high ground.”
When asked today to respond to Martell’s thoughts, Skinner told reporters that he’s “not scared of generative AI,” but that it’ll be a “challenge to where the innovative spirit within the Department of Defense will shine.”
Stephen Wallace, DISA’s chief technology officer, said the agency is exploring generative AI in several areas, including “back office capabilities… contract generation, data labeling.”
“The number of applications is very wide-ranging,” Wallace told reporters. “We always say that we can’t ‘people our way out of problems.’ And this is a way for us to augment our teams, make our teams better and ultimately deliver capabilities across the board.”