NIST’s Gaithersburg, Md., campus. Source: NIST

WASHINGTON: New publications from the National Institute of Standards and Technology (NIST) recommend that US government agencies assume they have been or will be hacked, and implement zero-trust security principles and secure software development life cycle (SDLC) practices accordingly.

The recommendations are provided in two documents — Security Measures for ‘EO-Critical Software’ and Recommended Minimum Standards for Vendor or Developer Verification (Testing) of Software — published on July 9, aimed at creating guidance for securing software used by federal agencies. These follow the release of NIST’s definition of critical software. The definition and these accompanying recommendations were required by the administration’s cyber executive order issued in May.

The definition and security recommendations are part of what some administration officials have called an effort to “jumpstart the market for secure software.” They come after a series of significant hacks over the past year that have affected federal agencies and the defense industrial base. The purpose of these publications is to recommend actions federal agencies should take to shore up software security.

The Security Measures publication focuses on running software, while the Recommended Minimum Standards focuses on developing it.

The Security Measures publication largely entails a set of principles for zero-trust security. It notes that “all organizations should assume that a breach is going to occur or has already occurred, so access to EO-critical software must be limited at all times to only what is needed,” referring to critical software as defined by NIST. “Moreover, there must be constant monitoring for anomalous or malicious activity. Preventing breaches is still a ‘must,’ but it is also important to have robust incident detection, response, and recovery capabilities.”
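That assume-breach posture, deny access by default, grant only what is needed, and log every request so anomalous activity can be spotted, can be sketched in a few lines of code. The sketch below is purely illustrative (the class, principal, and action names are hypothetical, not drawn from the NIST publication):

```python
from dataclasses import dataclass, field

@dataclass
class CriticalSoftwarePolicy:
    """Hypothetical sketch of zero-trust access control: deny by default,
    grant only what is needed, and record every decision so that
    anomalous or malicious activity can be monitored."""
    allowed: set = field(default_factory=set)        # explicit (principal, action) grants
    audit_log: list = field(default_factory=list)    # every decision, allowed or not

    def grant(self, principal: str, action: str) -> None:
        self.allowed.add((principal, action))

    def check(self, principal: str, action: str) -> bool:
        decision = (principal, action) in self.allowed   # deny by default
        self.audit_log.append((principal, action, decision))  # constant monitoring
        return decision

policy = CriticalSoftwarePolicy()
policy.grant("backup-service", "read")
assert policy.check("backup-service", "read") is True
assert policy.check("backup-service", "delete") is False  # never granted, so denied
assert len(policy.audit_log) == 2  # both requests were recorded
```

The key design point is that the default answer is "no": access exists only where it was explicitly granted, and even denied requests leave an audit trail for incident detection.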

To that end, the publication sets out five objectives, along with a set of actionable security practices and tools federal agencies should implement to meet each objective. The five objectives are as follows:

  • Protect EO-critical software and EO-critical software platforms (the platforms on which EO-critical software runs, such as endpoints, servers, and cloud resources) from unauthorized access and usage.
  • Protect the confidentiality, integrity, and availability of data used by EO-critical software and EO-critical software platforms.
  • Identify and maintain EO-critical software platforms and the software deployed to those platforms to protect the EO-critical software from exploitation.
  • Quickly detect, respond to, and recover from threats and incidents involving EO-critical software and EO-critical software platforms.
  • Strengthen the understanding and performance of humans’ actions that foster the security of EO-critical software and EO-critical software platforms.

The publication notes that the security recommendations are not comprehensive, and the suggestions do not eliminate the need for additional measures.

The second publication seeks to provide “minimum standards recommended for verification by software vendors or developers.” Verification is a major step in the SDLC. A secure SDLC aims to instill security into every step of the traditional SDLC, which is focused on functionality.

Verification, the publication notes, “encompasses many static and active assurance techniques, tools, and related processes to identify and remediate security defects while continuously improving the methodology and supporting processes.” The publication acknowledges the difficulty of detailing catchall verification practices and so provides high-level recommendations, adding that it may be necessary for software developers to customize how they verify their code.

To this end, the publication provides six best practices for software security verification:

  • Threat modeling — The practice of evaluating software security from a potential adversary’s standpoint to anticipate and mitigate potential attack vectors.
  • Automated testing — Running verification checks consistently and repeatably, a practice often built into DevSecOps environments.
  • Code-based (static) analysis — The use of code scanners to identify potential vulnerabilities and unintended data leaks (e.g., hardcoded, unencrypted passwords).
  • Dynamic analysis — A collection of techniques designed to catch vulnerabilities by running test cases.
  • Check included software — Ensuring accompanying libraries, packages, and services are secure.
  • Fix bugs — Continuously update code to address discovered flaws, such as unintended functionality.
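To make the dynamic-analysis practice above concrete, the sketch below shows test cases being run against a small file-path helper, the kind of check that catches a directory-traversal flaw before it ships. All function and path names are hypothetical illustrations, not taken from the NIST publication:

```python
import os

def resolve_upload_path(base_dir: str, filename: str) -> str:
    """Hypothetical helper: join a user-supplied filename to a base
    directory, rejecting path-traversal attempts."""
    base = os.path.abspath(base_dir)
    candidate = os.path.normpath(os.path.join(base, filename))
    if not candidate.startswith(base + os.sep):
        raise ValueError("rejected: path escapes the base directory")
    return candidate

# Test cases of the kind dynamic analysis runs against the code:
assert resolve_upload_path("/srv/uploads", "report.txt") == "/srv/uploads/report.txt"
try:
    resolve_upload_path("/srv/uploads", "../etc/passwd")
    raise AssertionError("traversal attempt was not rejected")
except ValueError:
    pass  # the malicious input was caught by the test case
```

The benign filename resolves normally, while the `../` input is normalized to a path outside the base directory and rejected; a failing test here would surface the vulnerability during verification rather than in production.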

Now that the definition of critical software and recommended security measures have been issued, the next step is for the government to review existing Federal Acquisition Regulations to see if contract language should be amended. The EO provides one year from its issuance for this process to be completed.

Finally, based on amended FAR language, the government will effectively prevent itself from buying and using any software that meets the definition of critical but cannot satisfy the security measures.