
Artificial intelligence graphic courtesy of Northrop Grumman.

In this Q&A, we discuss the difference between artificial intelligence (AI) and machine learning (ML), the role that each plays in Joint All Domain Command and Control (JADC2), the importance of building trust into AI, and the work that Northrop Grumman is doing in these areas.

We spoke with Vern Boyle, Vice President of Advanced Processing Solutions for Northrop Grumman’s Networked Information Solutions division, and Dr. Amanda Muller, Consulting AI Systems Engineer and Technical Fellow, who is the Responsible AI Lead for Northrop Grumman.

Breaking Defense: Let’s first define AI and ML. I understand ML to be a subset of AI that does the data crunching, trend analysis, and alert monitoring. We always refer to them as “AI and ML” as if they’re two separate things. Are they, and what are the other elements of AI besides ML that will play a role in Joint All Domain Command and Control (JADC2) and all domain operations?

Vern Boyle, Vice President of Advanced Processing Solutions for Northrop Grumman’s Networked Information Solutions division.

Boyle: AI and ML are sometimes used interchangeably, but they really are not the same thing. Machine learning is math. AI is the integration of technologies aimed at perception, reasoning, and decision making. You can use machine-learning math as part of that process, but they’re definitely not the same thing. People are using machine learning to process sensor data and to evaluate a wide variety of data types. But that by itself is not artificial intelligence.
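To make that distinction concrete, here is a minimal, hypothetical sketch (not a Northrop Grumman system; every name in it is invented for illustration). The machine-learning piece is the classifier; the "AI" is the surrounding perceive-reason-decide loop that turns the classifier's output into an action.

```python
# Hypothetical sketch: machine learning is one component (the classifier);
# the surrounding perceive -> reason -> decide loop is what makes it part
# of a larger "AI" system.
from dataclasses import dataclass

@dataclass
class Track:
    sensor_id: str
    features: list[float]  # e.g., kinematics, RF signature

def classify_track(track: Track) -> tuple[str, float]:
    """Machine-learning step: map raw features to a label and confidence.
    A real system would call a trained model; this stands in for one."""
    score = sum(track.features) / (len(track.features) or 1)
    return ("threat", 0.9) if score > 0.5 else ("neutral", 0.7)

def choose_action(label: str, confidence: float) -> str:
    """Reasoning/decision step layered on top of the ML output."""
    if label == "threat" and confidence > 0.8:
        return "alert_operator"
    if confidence < 0.6:
        return "request_more_sensing"
    return "continue_monitoring"

def perceive_reason_decide(tracks: list[Track]) -> list[str]:
    """Perceive (classify), reason, and decide for each incoming track."""
    return [choose_action(*classify_track(t)) for t in tracks]
```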

Artificial intelligence within a concept like JADC2 breaks down into three areas.

Within this system of enabling technologies for communications, networking, and processing, you could envision an AI system that could control the JADC2 infrastructure. It could perceive and reason on the best ways to move information across different platforms, nodes, and decision makers. And it could optimize the movement of that information and the configuration of the network because it’ll be very complex. We’ll be operating in contested environments where it will be difficult for a human to react and understand how to keep the network and the comm links functioning. The use of AI to control the communication and networking infrastructure is going to be one big application area.

The second major category would be with respect to the information itself. Digital sensors produce a lot of information. You’ve heard customers describe it as the Internet of Military Things (IoMT). The IoMT is going to be flooded with data that will overwhelm any human. So the second major application for AI will be to reason on that data, understand what we’re seeing in the battlespace, and then help make decisions about courses of action for both people and machines.

The third major category would be the use of AI for command and control of physical systems. This would be moving beyond sensor-data processing toward platforms that can control themselves—like driverless vehicles in commercial industry. For example, how would I have drones and aircraft make their own decisions about how to maneuver, or perform surveillance or electronic warfare functions?

These will all be important applications of AI for the broader JADC2 mission.

Breaking Defense: Isn’t number two, the flood of data and the reasoning of the data, machine learning?

Boyle: Yes, there is machine learning within that category. But beyond just using machine learning to understand the data, there are opportunities to optimize how the data gets processed and where the applications reside.

For example, the AI can understand the relationship between multiple sensors trying to observe the same or similar objects in the battlespace. It can move beyond machine learning on the raw sensor data to reasoning and decision making about which data is most valuable, which data should be fed into a given algorithm, and which data should be dropped.
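A rough sketch of that "which data is most valuable" reasoning appears below. It is hypothetical: the field names and the scoring heuristic are invented for illustration, not drawn from any Northrop Grumman system. The idea is simply to score overlapping sensor feeds and keep only the most useful ones for downstream processing.

```python
# Hypothetical sketch: score overlapping sensor feeds and keep only the
# most valuable ones; the rest are dropped or deferred.
def score_feed(feed: dict) -> float:
    # Favor fresh, high-resolution data from sensors with good geometry
    # on the object of interest (weights are arbitrary for illustration).
    freshness = 1.0 / (1.0 + feed["age_seconds"])
    return 0.5 * freshness + 0.3 * feed["resolution"] + 0.2 * feed["geometry_quality"]

def select_feeds(feeds: list[dict], budget: int) -> list[dict]:
    """Keep the top-`budget` feeds by score."""
    return sorted(feeds, key=score_feed, reverse=True)[:budget]

feeds = [
    {"sensor": "EO-1", "age_seconds": 2, "resolution": 0.9, "geometry_quality": 0.7},
    {"sensor": "SAR-3", "age_seconds": 30, "resolution": 0.8, "geometry_quality": 0.9},
    {"sensor": "IR-2", "age_seconds": 1, "resolution": 0.4, "geometry_quality": 0.3},
]
print([f["sensor"] for f in select_feeds(feeds, budget=2)])
```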

Breaking Defense: What are the greatest capability gaps facing armed forces seeking to implement JADC2?

Boyle: One of the most significant gaps right now is basic connectivity and networking. The platforms that you would like to have functioning to support JADC2 aren’t necessarily able to connect and move information effectively. This makes the DoD’s ambitions for AI and machine learning difficult to realize. It’s challenging because of legacy communications and networking systems, and there are many modern protocols within the commercial sector that have not been fully adopted into the IoMT.

It’s both a gap and a challenge. Let’s assume, though, that everyone’s connected. Now there’s an information problem. Not everybody shares their information, and it’s not described in a standard way. The ability to understand and reason on information presumes that it’s expressed in a form you can actually interpret. Those capabilities aren’t necessarily mature yet either.

There are also challenges with respect to multi-level security and the ability to share and distribute information at different classification levels. That adds a level of complexity that’s not typically present in the commercial sector.

Dr. Amanda Muller, Consulting AI Systems Engineer and Technical Fellow, and the Responsible AI Lead for Northrop Grumman.

Muller: If we don’t allow operators to establish trust in the systems that we are creating, they simply won’t be used. That’s not an option for something as complex as JADC2.

It is critically important to build trust with our operators so that they believe what the system is telling them. We need to address this because humans will never be completely out of the decision-making cycle.

Our AI systems must be built so that humans can trust them when they need to make a decision within the JADC2 architecture.

Breaking Defense: How is NGC supporting the armed forces on their path to JADC2?

Boyle: We’re focused on the communication and networking piece that we just discussed. We have many capabilities deployed now on some important platforms, and we’re working with customers to leverage what we already have, as well as new capabilities that are coming along to address gaps in comms and networking. We have a variety of internal and customer-funded initiatives to chip away at all elements of the problem.

Our broad portfolio already contains enabling technologies needed to connect the joint forces, including advanced networking, AI/ML, space, command and control systems, autonomous systems powered by collaborative autonomy, and advanced resiliency features needed to protect against emerging threats. We provide the connective tissue for military platforms, sensors, and systems to communicate with one another—enabling them to pass information and data using secure, open systems, similar to how we use the Internet and 5G in our day-to-day lives.

The DoD has stated that it must have an AI-enabled force by 2025 because speed will be the differentiator in future battles. That means: speed to understand the battlespace; speed to determine the best course of action to take in a very complex and dynamic battlespace; and speed to be able to take appropriate actions. Together, they will let the DoD more quickly execute the OODA Loop (Observe, Orient, Decide, Act).

AI and advanced, specialized processing at the tactical edge will provide a strategic information advantage. AI and edge computing are the core enabling technologies for JADC2.

JADC2 graphic courtesy of Northrop Grumman.

Breaking Defense: You mentioned the OODA loop and edge computing. Where does AI/ML happen in the OODA loop, and what’s the importance of processing at the edge for JADC2?

Boyle: It’s critical for JADC2. When customers use a term like IoMT, they’re taking the cue from the commercial sector, where you’re able to connect and access data that’s already been processed, so you can instantly get your results and the understanding you’re looking for.

In the commercial sector, though, you’re connected to a data center over a high-speed connection. The IoMT, by contrast, will be deployed out at the edge in a tactical environment; it’s not going to be connected back into a data center. Platforms will connect back into data centers at different points in time, maybe before or after a mission, or when they have opportunities for higher-speed connectivity. But, for the most part, all of that processing has to be done on the platforms themselves and through the network. It has to be done at the edge. That’s the only option.

Breaking Defense: The U.S. isn’t the only country focused on the power of AI. How will AI impact the potential Great Power conflict?

Boyle: The U.S. has maintained global, technological dominance for many decades. We have benefited from air superiority, for example, but can no longer take technological advantage for granted.

We have entered a near-peer threat environment where both China and Russia are pushing for global power. As part of its strategic plan, China has declared it will be the global leader in AI by 2030, and its investments support that claim.

China’s investment in and application of dual-use technologies like advanced processing, cybersecurity, and AI are threats to U.S. technical and cognitive dominance. The U.S. must continue to advance our capabilities in these same high-tech areas to maintain our technological lead over China.

The key difference is that China is applying AI technologies broadly throughout the country. They are using AI for surveillance and tracking of their citizens, students, and visitors. They use AI to monitor online behaviors, social interactions, and biometrics.

China has no concern about privacy rights or the ethical application of the data that AI is able to gather and share. All data is collected and used by both industry and the Chinese government to advance their goal of global, technical dominance by 2030.

Fundamental to the U.S. approach is ensuring that the Defense Department’s use of AI reflects democratic values. It is critical that we move rapidly to set the global standard for responsible and ethical AI use, and stay ahead of China and Russia as they advance toward the lowest common denominator.

The U.S., our allied partners, and all democratic-minded nations must work together to lead the development of global standards around AI and talent development.

Breaking Defense: How is Northrop Grumman addressing the ethics of AI to ensure your systems are trustworthy? You mentioned earlier the importance of trust in AI, and one of the ways that AI makes itself trustworthy is by letting the operator know when it has made a mistake. Please explain.

Muller: The DoD’s Defense Innovation Board (DIB) set out five ethical principles for military AI, ensuring that it is responsible, equitable, traceable, reliable, and governable.

Responsible means that human beings maintain responsibility for the development and use of AI. Equitable AI reduces bias through testing, selection of adequate training sets, and diverse engineering teams. Traceable means ensuring the auditability of our systems through data provenance and versioning. Reliable means creating systems that are robust to adversarial attack and operate within defined mission use cases.

The fifth principle, governable, addresses your comment about letting the operator know when the AI is wrong. Governable AI systems allow for graceful termination and human intervention when algorithms do not behave as intended. At that point, the human operator can either take over or make adjustments to the inputs, to the algorithm, or whatever needs to be done. But the human always maintains the ability to govern that AI algorithm.
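As a purely illustrative sketch of that governable behavior (assumed structure and thresholds, not any fielded Northrop Grumman design), the wrapper below keeps a human able to halt the automation at any time and defers to the operator when the algorithm's output looks out of bounds.

```python
# Hypothetical sketch of a "governable" wrapper: the human operator can
# always halt the algorithm, and the system falls back to the operator
# when the model's behavior looks out of bounds.
class GovernableController:
    def __init__(self, model, confidence_floor=0.75):
        self.model = model                    # callable: observation -> (action, confidence)
        self.confidence_floor = confidence_floor
        self.halted = False

    def operator_halt(self):
        """Graceful termination: the human can stop automation at any time."""
        self.halted = True

    def recommend(self, observation):
        if self.halted:
            return {"action": None, "handled_by": "operator"}
        action, confidence = self.model(observation)
        if confidence < self.confidence_floor:
            # Unexpected or low-confidence behavior: defer to the human
            # instead of acting autonomously.
            return {"action": None, "handled_by": "operator",
                    "reason": f"low confidence {confidence:.2f}"}
        return {"action": action, "handled_by": "ai", "confidence": confidence}
```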

‘Explainability’ is something that you see come up a lot in the research and literature. I prefer the term ‘interpretability’, which means that the human can understand and interpret what the AI is doing, determine if it’s operating correctly, and take actions if it is not.

Northrop Grumman’s adoption of these principles—something we refer to as Responsible AI—will build justified confidence in the AI systems we create. Justified confidence is about developing AI systems that are robust, reliable, and accountable, and ensuring these attributes can be verified and validated.

The National Security Commission on Artificial Intelligence’s (NSCAI) Final Report highlights emerging consensus on the principles for using AI ethically and responsibly for defense and intelligence applications. (Note: NSCAI was a temporary, independent, federal entity created by Congress in the National Defense Authorization Act for Fiscal Year 2019. It was led by former Google CEO Eric Schmidt and former Deputy Secretary of Defense Robert Work, and delivered its 756-page Final Report in March 2021, disbanding in October.)

As the NSCAI report states, if AI systems do not work as designed or are unpredictable, ‘leaders will not adopt them, operators will not use them, Congress will not fund them, and the American people will not support them.’

The power of AI is its ability to learn and adapt to changing situations. The battlefield is a dynamic environment and the side that adapts fastest gains the advantage.

As with all systems, though, AI is vulnerable to attack and failure. To truly harness the power of AI technology, developers must align with the ethical principles adopted by the DoD.

Breaking Defense: What does the future of AI development look like at Northrop Grumman?

Muller: Northrop Grumman is taking a systems engineering approach to AI development and is a conduit for pulling in university research, commercial best practices, and government expertise and oversight.

For example, Northrop Grumman has partnered with Silicon Valley startup Credo AI, which is sharing its governance platform and workflow as we apply comprehensive, relevant, and ethical AI policies to guide our own AI development. Credo AI just recently came out of stealth mode, but we have been working with them for over a year.

Their platform lets us look across the entire workflow as our AI is being developed and provide evidence that we’re adhering to our policies and principles. As new policies and principles are developed, we can pull those into the workflow and provide evidence that we’re complying with those, as well. This also gives us an adequate assessment of the risk associated with the AI that we are developing and allows us to manage that risk and determine if it is acceptable given the use cases in which we’re operating the AI.

We are also collaborating with leading commercial companies like IBM to advance AI technology, and working with universities like Carnegie Mellon, a true leader in AI development, to develop new Responsible AI best practices.

Another step the company is taking is to extend our DevSecOps process to automate and document best practices in the development, testing, deployment, and monitoring of AI software systems. In addition, training Northrop Grumman’s AI workforce in Responsible AI is critical to success because knowing how to develop AI technology is just one piece of the complex mosaic.

Together, Northrop Grumman’s secure DevSecOps practices and mission-focused employee training help to ensure appropriate use of judgment and care in responsible AI development. We strive for equitable algorithms and work to minimize the potential for unintended bias by leveraging a diverse engineering team and testing for data bias using commercial best practices, among other monitoring techniques.
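One common form of the bias testing mentioned above is checking whether a model's error rate differs sharply across groups in a labeled test set. The sketch below is a generic, hypothetical example of that practice (the threshold and data layout are assumptions, not a description of Northrop Grumman's actual tooling).

```python
# Hypothetical sketch of a simple data-bias check: compare error rates
# across groups in a labeled test set and flag large gaps for review.
from collections import defaultdict

def error_rate_by_group(examples):
    """examples: iterable of (group, predicted_label, true_label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in examples:
        totals[group] += 1
        errors[group] += int(predicted != actual)
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparities(examples, max_gap=0.10):
    """Flag the dataset/model pair for human review if the worst-to-best
    group error-rate gap exceeds `max_gap` (threshold chosen arbitrarily)."""
    rates = error_rate_by_group(examples)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "review_needed": gap > max_gap}
```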