

The US national security establishment is cautiously embracing generative AI, so Breaking Defense decided to do an experiment.
By Lee Ferran
By fact-checking the AI through a new verification process for large language models, it's possible to reduce hallucinations from 5-10 percent to as low as 0.1 percent.
By Breaking Defense
One hoped-for use of the new LLM is helping the government draft contracts more quickly.
By Carley Welch
Intelligence analysts need to be especially cautious about artificial intelligence “hallucinations” or other false outputs, said the CIA’s Chief Technology Officer — but AI can also generate genuinely useful insights out of left field.
By Sydney J. Freedberg Jr.
“I can’t really imagine a world where something like a Joint Strike Fighter program goes to a privately traded company,” Luckey told Breaking Defense. “I can’t imagine that ever happening.”
By Aaron Mehta
Led by the Pentagon’s Chief Digital and AI Office, the task force “will assess, synchronize, and employ generative AI capabilities across the DoD, ensuring the Department remains at the forefront of cutting-edge technologies while safeguarding national security,” according to a DoD announcement.
By Jaspreet Gill
At the moment, “generative” AI systems have “limited utility” for the US military, Air Force Secretary Frank Kendall said, but could help with some tasks if applied “in an ethical way.”
By Theresa Hitchens
“Here’s my biggest fear about ChatGPT,” Craig Martell said. “It has been trained to express itself in a fluent manner. It speaks fluently and authoritatively. So you believe it even when it’s wrong… And that means it is a perfect tool for disinformation…”
By Jaspreet Gill