
To field weapons quicker, Pentagon should improve testing and evaluation process: GAO

The office outlines 13 recommendations for the DoD and services that primarily revolve around revising weapon systems' test and evaluation policies to reflect “leading practices” for product development. 

An F-22 Raptor, assigned to the 199th Fighter Squadron, lands on Joint Base Pearl Harbor-Hickam, Hawaii, June 13, 2017. The F-22 Raptor is the Air Force’s 5th generation fighter aircraft. Its combination of stealth, supercruise, maneuverability and integrated avionics, coupled with improved supportability, represents an exponential leap in warfighting capabilities. The Raptor performs both air-to-air and air-to-ground missions allowing full realization of operational concepts vital to the 21st-century Air Force. (US Air Force/Tech. Sgt. Heather Redman)

WASHINGTON — The Department of Defense should take steps to improve the way it tests and evaluates big-ticket programs if it wants to successfully get weapons into troops’ hands more quickly, according to a congressional watchdog.

In a Government Accountability Office report published today, the office outlines 13 recommendations for the DoD and services that primarily revolve around revising weapon systems’ test and evaluation policies to reflect “leading practices” for product development. 

“GAO’s analysis of DOD-wide test and evaluation policies found they were not fully consistent with selected leading practices for product development as applied to test and evaluation: Involve testers early, conduct iterative testing, use digital twins and threads, and obtain user feedback iteratively,” the GAO wrote.

“Revisions should require involvement of testers in acquisition strategies; iterative approaches to testing, including use of digital twins and threads; and ongoing end user input,” it later added.

For example, the watchdog found that the Army’s development of its MV-75 Future Long-Range Assault Aircraft (FLRAA) does not include an iterative test approach. Without one, the service is “missing opportunities to provide testing that is tailored and responsive to rapid design iterations” that could ultimately get soldiers in the cockpit sooner, the report notes.

Over in the Air Force, the service’s effort to modernize the F-22 Raptor fleet through the Sustainment, Enhancement, and Evolution Program does not consider how to incorporate user feedback into an iterative testing process.

“The Air Force is missing opportunities to ensure that testing proceeds expeditiously and that the systems under test are responsive and relevant to warfighter needs,” the GAO wrote. “Such practices include advocating for tester access to digital twins and digital threads as part of acquisition strategy development, reflecting an iterative test approach in test strategies and test plans, and developing user agreements in test strategies and test plans to obtain user feedback.”


Of the 13 recommendations the GAO put forward, the DoD and the services concurred with seven, partially concurred with five, and did not concur with one.

At the top level, the secretary of defense’s office received three recommendations, with which it partially concurred. Those call for the undersecretary of defense for research and engineering and the director of operational test and evaluation to revise their weapon system test and evaluation, digital engineering, and systems engineering policies to fully reflect leading practices, including an iterative, integrated testing approach.

For its part, the Army agreed with all three recommendations directed its way, which included revising test and evaluation policy so that testing strategies and plans reflect an iterative, integrated approach enabled by digital twins and digital threads to support delivery of minimum viable products.

The rest of the recommendations followed a similar thread. The only full dissent came from the Navy on the suggestion that test plans incorporate “end user agreements that detail a process for obtaining ongoing user input and feedback.”