AUSA 2023 Engineer Hour: Panel Discussion, “Army Engineers Making a Difference in Europe and the Pacific”

Jennifer Swanson, left, the deputy assistant secretary of the Army for Data, Engineering and Software, gives her remarks during the Engineer Hour Panel Discussion titled “Army Engineers Making a Difference in Europe and the Pacific,” at the Walter E. Washington Convention Center in Washington, D.C., Oct. 11, 2023. (US Army photo by Pfc. Brandon L. Perry)

WASHINGTON — The Army is looking to pilot a generative artificial intelligence program within the office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT)) in July, as part of its 500-day plan to reduce the risks of implementing AI algorithms. If carried out correctly, the pilot will use generative AI to complete often laborious and lengthy tasks such as contract writing.

The pilot program will leverage a large language model (LLM) operating in an Impact Level (IL) 5 secure cloud environment — the highest level of authorization for storing and processing controlled unclassified information (CUI). The pilot will also go through its own authority to operate (ATO) process.

“The pilot is not just about increasing our productivity, which will be great, but also what are the other things that we can do? What are the other industry tools that are out there that we might be able to leverage or add on … say, our vehicles or you know, our weapon systems,” said Jennifer Swanson, deputy assistant secretary of the Army for data, engineering and software, at Defense One’s Tech Summit today.

Unlike commercial LLMs such as ChatGPT, the Army’s model will be trained on the Army’s own data, Swanson said.

In March, the Army began a 100-day initiative, which will end June 30, to investigate methods for mitigating risks linked to AI algorithms, with a subsequent 500-day project set to commence later this summer. The LLM pilot program will be at the forefront of this latter stage, she explained.

This multi-layered approach to AI implementation within ASA(ALT) is part of a bigger initiative called “Defend AI,” Young Bang, principal deputy within the office of the ASA(ALT), said earlier this month. Bang further noted that Defend AI will require partnering with industry to develop algorithms that can be implemented into Army and DoD systems that will make up a larger defense network.

Driving the LLM pilot are three offices: the Deputy Assistant Secretary of the Army (DASA) for Data, Engineering and Software; the DASA for Procurement; and the DASA for Strategy and Acquisition Reform, with the latter working on the policy aspect of the project.

One hoped-for use of the new LLM is helping the government write contracts more quickly; however, Swanson said she doesn’t expect this to happen immediately.

“I don’t think we are going to necessarily out of the gate write contracts with it, but I think in the area of contracts and in the area of policy, I think there’s a huge return on investment for us,” Swanson said.

“But we got to pilot and test and make sure everybody’s comfortable with it first,” she added.

To ensure that the LLM gives accurate information, Swanson said it will provide citations showing where its information comes from. Additionally, as part of the service’s 100-day program, it developed a “fact-finding generative AI policy” for ASA(ALT) that currently requires a human in the loop, because “we don’t know what we’re getting right now,” she said.

She said these steps will also help humans understand how the LLM operates, essentially opening up the black box of its inner workings.

“Having the citations will be very helpful in terms of being able to fact-check it,” Swanson said.

“We’re going to train it with our own data, so that’s another big advantage. Since it’s going to be operating in an IL 5 environment, we can put our CUI data in … so we know what we’re putting in and be able to make sure that all of that aligns,” she added.

Another frequent problem with generative AI is bias. Asked how the new LLM plans to eliminate bias, Swanson said only that the program will use ethical and responsible AI tools. However, she noted that some bias may be inevitable.

“Bias is hard because sometimes people are biased, right? So it’s just, it’s one of those very, I think, fuzzy things that requires oversight, and critical thinking and human involvement to make sure that we’re not just letting the tool run wild,” Swanson said.

Though she did not reveal which company will develop the LLM, Swanson heavily emphasized that the model will not be ChatGPT.