WASHINGTON: Once considered by many to be merely a cyber buzzword, zero-trust security models are all the rage today. Ever since the NSA urged the defense sector to adopt zero trust, it’s been top of mind for security pros, from keynotes to happy hours.

Zero trust, as a concept, existed even before the term, which is a decade old now. But with remote workforces and evolving IT environments, threat actors, and cyberattacks, many believe zero trust’s time has finally come.

“I think [zero-trust security] was slowly building over years,” NIST Computer Scientist Scott Rose told me in a recent interview.

Rose and NIST colleague Oliver Borchert are co-authors of NIST Special Publication 800-207, Zero Trust Architecture (ZTA), arguably one of the best guides for cybersecurity pros. The two computer scientists focus on zero trust at NIST.

“Incidents such as the OPM data breach discovered in March 2014 made it clear to me that it is inevitable to move to ZTA,” Borchert told me in the same interview. Zero trust, with its “assume breach” mentality, “does lead to a switch in thinking and design of security.”

Rose and Borchert characterize zero trust as a “cybersecurity paradigm” and “end-to-end approach.” As the name implies, zero trust is based on “the premise that trust is never granted implicitly but must be continually evaluated.” This differs from other models — notably, perimeter security.

“Perimeter-based security models focus their security on traffic entering the network,” Borchert explains. “Once access is granted, all is fair game, and lateral movement is easier to achieve than in a ZTA environment.”

Until recently, the network perimeter could have claimed that rumors of its death were greatly exaggerated. After all, before last March, the vast majority of the global workforce still reported every day to a centralized workplace.

While it’s true that off-premises cloud hosting and mobile devices had been on the rise for years, many organizations’ IT assets were still located within the same enterprise network perimeter as the employees using them. (Note, for example, how many organizations still run on-premises versions of Microsoft Exchange.)

And then the pandemic happened. Workers went largely remote overnight. To complicate matters, every home office router, personal laptop, remote access appliance, and unattended social media account — as STRATCOM learned — became an initial threat vector in its own right, as well as a potential stepping stone into target enterprise networks. Threat actors noticed.

This presented an old cybersecurity challenge at a new scale: How do network defenders know if the person and device remotely accessing the enterprise network is actually who and what they claim to be? This question is nontrivial, as Target learned the hard way in 2013.

This is why many say identity is critical to zero-trust models. “It is important to know how resources are accessed,” Borchert notes. “And here, identity is not only related to a user, but also a device such as a printer or an IoT device, to name a few. Ignoring identity and identity verification is similar to leaving the key to one’s house on the front porch for everyone to enter.”
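To make Borchert’s point concrete, here is a minimal sketch in Python of what per-request verification of both user and device could look like. The names (AccessRequest, is_request_authenticated, and so on) are hypothetical stand-ins for illustration, not any particular vendor’s API.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_token: str   # e.g., a signed assertion from the identity provider
    device_id: str    # e.g., a device certificate thumbprint or MDM identifier
    resource: str     # the asset being requested

def is_request_authenticated(req: AccessRequest,
                             valid_tokens: set,
                             known_devices: set) -> bool:
    """Both the user AND the device must check out on every request;
    neither is trusted simply because the traffic originates 'inside'
    the network."""
    user_ok = req.user_token in valid_tokens      # stand-in for real token validation
    device_ok = req.device_id in known_devices    # stand-in for a device inventory lookup
    return user_ok and device_ok

The point of the sketch is the pairing: a valid user credential on an unknown device, or a known device presenting a stale credential, is still a denied request.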

To Borchert’s last point, recall that all second-stage SolarWinds hacks entailed compromising victims’ Microsoft Active Directory.

But identity — while “foundational” to zero trust, as Rose notes — is not the only component.

Indeed, Borchert says, “Zero trust is not one single product that one can purchase off the shelf. Zero trust is a combination of technologies combined with the approach that each access to resources might be hostile or unauthorized and needs to be verified before it can be granted.”

In other words, zero trust requires many components. Yet zero trust is also a “dynamic” security model, meaning those components continuously communicate and adapt the environment to new data in real time. How? The answer is multifaceted, but it includes two critical elements: the trust algorithm and automation.

“The trust algorithm is the ‘brain’ of the system,” Borchert explains. “The trust algorithm uses different inputs… in its evaluation. This evaluation is not a one-time event. It is performed continuously, and the outcome can change depending on policies, changes in the data provided, and context.”
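One rough way to picture that “brain” is as a scoring function over several inputs, re-run on every request. The inputs, weights, and threshold below are invented for illustration only; SP 800-207 describes the concept of a trust algorithm, not this particular scoring scheme.

def evaluate_trust(identity_confidence: float,  # 0.0-1.0, from authentication strength
                   device_posture: float,       # 0.0-1.0, from patch/compliance checks
                   behavior_score: float,       # 0.0-1.0, from behavioral analytics
                   threat_level: float,         # 0.0-1.0, from current threat intelligence
                   threshold: float = 0.7) -> bool:
    """Re-run on every access request: the same user and device can be
    granted access now and denied minutes later if any input changes."""
    base = 0.4 * identity_confidence + 0.3 * device_posture + 0.3 * behavior_score
    score = base * (1.0 - 0.5 * threat_level)   # elevated threat drags the score down
    return score >= threshold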

Borchert continues, “Automation is key, especially data such as threat analysis and behavioral data, which change over time and can affect the outcome of the algorithm.”

One challenge to this dynamic, adaptive security model: It produces a lot of data. “The volume of data that can be used to change policies is too great for human administrators to parse and act on quickly,” Rose explains. “Automation is key to digesting the necessary data and making changes to respond quickly to changes in the environment.”
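A hedged sketch of that feedback loop, building on the hypothetical evaluate_trust function above: new telemetry is merged into each session’s inputs and every active session is re-checked automatically, with access pulled as soon as the score drops below threshold.

def reevaluate_sessions(active_sessions: dict, latest_signals: dict) -> list:
    """Merge the newest threat and behavioral data into each session's
    inputs, re-run the trust algorithm, and return the session IDs whose
    access should be revoked, without a human in the loop."""
    revoked = []
    for session_id, inputs in active_sessions.items():
        updated = {**inputs, **latest_signals.get(session_id, {})}
        if not evaluate_trust(**updated):   # same trust algorithm sketched above
            revoked.append(session_id)
    return revoked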

Identity, the trust algorithm, and automation are far from the only components of a zero-trust model, nor the only challenges to implementing one, but each represents a critical element.