WASHINGTON — The National Geospatial-Intelligence Agency (NGA) this month will issue a call to industry worth more than $700 million for help training AI-driven computer vision systems — including for its sprawling Maven program — to pick out targets of interest, according to the agency’s director.
Vice Adm. Frank Whitworth told reporters on Aug. 30 that the call will be “the largest data labeling request for proposal in the US government,” and “represents a significant investment in computer vision, machine learning and AI.”
NGA gathers imagery from satellites and aircraft, analyzes it, and then disseminates the resultant geospatial intelligence (GEOINT) products (such as 3D maps) to users across the US government.
The increasing amount of GEOINT data pouring in from myriad new satellites and other sensor platforms is posing a challenge for the Intelligence Community (IC), he explained. While artificial intelligence systems will help speed analysis, AI visual models first must be taught to accurately recognize military targets and ferret out abnormal activities.
A critical issue is ensuring that AI systems can reliably distinguish friend from foe, as well as enemy soldiers from civilians, with the latter required by the international laws of war contained in the Geneva Conventions, Whitworth explained.
“There’s this issue of distinction, guaranteeing to the best of our ability … the distinction between a combatant and non-combatant, an enemy and non-enemy, and that’s hard, and I will tell you, based on my 35-plus years of experience, especially in targeting, that’s one of the hardest things we do in targeting,” he said.
“But it’s also essential in warning, because warning involves establishing a baseline that we see … of behavior or objects and where they are, and the specificity of their location and their characterization, and then citing that there’s an anomaly — that there’s something new, something different that might be troubling — and sometimes putting the breadcrumbs together to actually tell the tale of a trend that is disconcerting and might need some sort of reaction,” Whitworth added.
Data labeling helps AI computer vision systems do that, and according to NGA officials it is a key enabler for Maven. Initiated by the Defense Department in 2017 to help speed AI adoption by the military, Maven was handed over to NGA to manage in 2022 and became a program of record last November.
“Data labeling is the process where the human actually identifies the object, and then, in a way that is understandable by the model, informs the model. And so you have to actually label it in a very specific way,” Whitworth elaborated.
In other words, data labeling works somewhat like the way website users pick out the squares showing pieces of a motorcycle, or those showing birds, to prove they are “not a robot” and gain access to a page. It is a time-consuming task that requires large numbers of individuals to review each piece of data and ensure its accuracy.
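For readers unfamiliar with the mechanics, the sketch below shows what a single labeled training example might look like in a common open annotation format (COCO-style bounding boxes). The file name, class list, and coordinates are purely illustrative assumptions and do not reflect any NGA or Maven data schema.

```python
import json

# Illustrative object classes -- real programs define their own ontologies.
CATEGORIES = [
    {"id": 1, "name": "vehicle"},
    {"id": 2, "name": "aircraft"},
]

# One labeled object: a human analyst draws a box around the object in an
# image and records which class it belongs to. The bbox is [x, y, width,
# height] in pixel coordinates, following the COCO convention.
annotation = {
    "image_id": 1001,
    "category_id": 2,  # "aircraft"
    "bbox": [412.0, 230.5, 96.0, 54.0],
    "iscrowd": 0,
}

dataset = {
    "images": [
        {"id": 1001, "file_name": "scene_1001.tif", "width": 1024, "height": 1024}
    ],
    "annotations": [annotation],
    "categories": CATEGORIES,
}

# Serialized labels like these, produced at scale by human reviewers, are
# what a computer vision model is trained and evaluated against.
print(json.dumps(dataset, indent=2))
```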
As part of its quest to push the development and use of AI for GEOINT, NGA also has launched a pilot project to certify what Whitworth called “large visual models.”
“NGA is proud to announce the establishment of an accreditation pilot for GEOINT AI models for the entire national system for geospatial intelligence, otherwise known as the NSG — the Accreditation of GEOINT AI Models, A-GAIM for short,” he said.
The NSG is a term of art referring to the technology, policies, capabilities, doctrine, activities, data and communities, including civil, involved in producing GEOINT. The NSG includes the IC, the Joint Staff, the military services and combatant commands, as well as international partners, industry and academia.
The A-GAIM pilot will evaluate “the methodology and robustness of a program’s model development and test procedures,” Whitworth said.
The goal, he added, is to “expand the responsible use of GEOINT AI models and posture NGA and the GEOINT enterprise to better support the warfighter and create new intelligence insights. Accreditation will provide a standardized evaluation framework, implements risk management, promotes a responsible AI culture, enhances AI trustworthiness, accelerates AI adoption and interoperability and recognizes high quality AI while identifying areas for improvement.”
Whitworth said the hope is for the pilot to become a “pathfinder for DoD and larger IC,” so the idea is for it to be integrated with the standards DoD already has developed.
For example, he noted that NGA is working “in line with DoD guidance on ethical AI.” To this end, the agency has created a training program for coders called “GEOINT Responsible AI Training, GREAT for short.” Pilot classes were held in April and May, and the plan is to eventually make the training “broadly available” to coders across the NSG, he said.
“GREAT is tailored to developer- or user-specific challenges across the AI life cycle, and everyone taking the certification will be asked to sign a final pledge to develop or use AI responsibly,” Whitworth said.