The Bleeding Edge: Infrastructure as Code—Digital Trust Enabler?
Author: Ed Moyle, CISSP
Date Published: 1 January 2023

On the surface, digital trust is a simple enough concept to grasp: It speaks to confidence in the digital ecosystem all around us and in the systems, applications, technologies, data and processes that support our online and other digital interactions. For example, it reflects our belief in the resilience of systems, integrity of information and data, privacy and confidentiality of information about ourselves and our loved ones, and the security of applications and data. But when we really stop to think about digital trust, we realize that there is subtle nuance to unpack, such as the connection between familiarity and trust.

In the analog world, one key underpinning of trust is experience—meaning trust is often impacted by how much experience we have with a thing. As an example, most people trust cars more than planes, even though the overwhelming quantitative evidence supports air travel as safer. Why is this? One reason is that cars are more familiar. This is actually a well-documented cognitive bias. The familiarity principle, or the mere exposure effect,1 is an inherent bias in people to perceive familiar things as "better" (i.e., safer, more comfortable, more desirable). We ascribe a perception of risk to the novel just because it is novel.

In practice, this means that our perceptions of the risk associated with new technology might be out of line with the actual, objective, empirical risk. Any new technology introduces new risk (no technology is completely risk free), but in some cases the risk factors introduced are offset by the mitigation of existing risk, such that the net effect is an overall reduction in risk to the organization, depending on factors such as usage, what is being replaced, changes to business processes and more. Will this always be the case? Of course not. Some technologies might be new enough that the attack surface is still emerging, or there might be nascent design flaws that have yet to be fully exposed (consider the early days of Wired Equivalent Privacy [WEP] as a proof point). But the overall point is that, in some cases, even though it might not feel that way to us because of the familiarity principle, empirical analysis supports the conclusion that a new technology, deployed strategically, reduces risk.

What this means, then, is that we, as the enablers of digital trust in our organizations, should look for areas where we can squeeze the most risk reduction out of the technologies we deploy. In other words, in situations where there is new technology in the offing, it is incumbent upon the practitioner both to objectively analyze the risk and to evaluate the technology for its potential in reducing risk and enabling trust. One example of this is Infrastructure as Code (IaC). When used strategically, IaC can actually help achieve digital trust outcomes. It is worthwhile to look at some circumstances in which this is true and consider how IaC can be harnessed to strengthen our trust posture.

What Is IaC?

IaC refers to technologies that allow the provisioning and configuration of workloads (usually in a cloud context) to be specified through human- and machine-readable artifacts. Technologies include HashiCorp's Terraform, AWS CloudFormation, Salt (i.e., SaltStack), Puppet, Ansible and others.

Through the use of IaC, developers and administrators can automate the provisioning and configuration management of cloud workloads and even on-premises nodes. Instead of manually provisioning, configuring and, ultimately, maintaining workloads (e.g., in virtual machines or containers), they instead author code that, when parsed by a supporting framework, automatically creates and configures the desired resource.

Why is this advantageous? In modern environments (particularly cloud), it automates provisioning and configuration, which is necessary for DevOps/DevSecOps toolchains to stage and field changes automatically. So, instead of requiring a manual step for an engineer to build out and configure test, staging and production environments, the necessary process can be almost fully automated. It also helps eliminate configuration "drift," the phenomenon whereby individual production systems develop unique "personalities" through direct administrator actions. (Drawing on the famous adage, it helps foster a "cattle not pets" mindset.) This means less time is required to debug quirks on a given node thanks to a reliable baseline configuration. And it helps ensure that documentation exists and that the documentation matches what is fielded.

So, what does IaC look like? As an example to illustrate IaC, an administrator or developer might author a declarative statement like the Terraform snippet below. Terraform is used in the examples here because it is one of the more popular approaches, but the concepts apply equally to other IaC technologies:

resource "google_compute_instance" "default" {
name = "example-vm"
machine_type = "f1-micro"
zone = "us-central1-c"
tags = ["ssh"]
metadata = {
enable-oslogin = "TRUE"
}
...


This simple example illustrates the definition of an Infrastructure-as-a-Service (IaaS) compute resource, in this case a small workload deployed into Google Cloud Platform (GCP). It should be noted, however, that this is not a complete example. Several important details have been left out for the sake of brevity, including:

  • The network interface
  • The configuration of the resource
  • The storage volume(s) it will use
  • Some critical supporting elements, such as informing Terraform of the provider to be used, which gives the framework the context it needs to understand GCP resources

While it is simplified, the example is sufficient to illustrate what IaC is for those who are not already familiar with the concept.
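
For readers who want to see what those omitted pieces look like, the following is a minimal, hypothetical sketch of a fuller definition, with the provider declaration, boot disk and network interface filled in. The project ID, image and network values are illustrative assumptions rather than a tested configuration:

terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}

provider "google" {
  project = "example-project"    # hypothetical GCP project ID
  region  = "us-central1"
}

resource "google_compute_instance" "default" {
  name         = "example-vm"
  machine_type = "f1-micro"
  zone         = "us-central1-c"
  tags         = ["ssh"]

  # Boot volume: the image and disk the instance starts from
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"    # illustrative image
    }
  }

  # Network interface: attach to the default VPC network
  network_interface {
    network = "default"
    access_config {}    # ephemeral external IP
  }

  metadata = {
    enable-oslogin = "TRUE"
  }
}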

IaC as a Pillar of Digital Trust

For those concerned with digital trust (i.e., our confidence in the technology components that enable our businesses and support the way we live), there are several very important benefits that a shift to IaC can bring about. These include:

  • Transparency in review and assessment; source of objective truth
  • Assistance creating documentary artifacts
  • Compliance and audit support

It is helpful to unpack what is meant by each potential benefit and examine how IaC can help bolster trust. It is important to note, though, that this is not intended to be an exhaustive list of all possible benefits. Specific business contexts, circumstances, usage or other details unique to an enterprise can potentially facilitate dozens (or more) of additional benefits. The items listed here are likely to apply to most usage contexts.

Transparency in Review and Assessment
IaC directly supports the ability of practitioners to conduct reviews and/or assessments of the resources and workloads described in IaC artifacts. Consider the sample used herein for the GCP IaaS resource. One does not have to look closely to know exactly what kind of workload it is (f1-micro VM), where it is located (us-central1-c), what its name is and so on. Had the full definition been included, we would know what volume(s) are attached to it, what configuration state it is in (e.g., if it is bootstrapped through a shell script or similar), network details and so forth.

For auditors, security professionals, or other technologists, the value of this should be obvious. Instead of having to interview staff, read through stale/obsolete architecture documentation, or gather information empirically through testing (e.g., vulnerability scanning, penetration testing), we can go directly to the source. Many IaC technologies even keep a running state definition that can be queried to get information about the current state. For governance, risk and privacy practitioners, the ability to view (in many cases at a glance) policy adherence all the way down to the technical level can give them a tool set to which they otherwise would not have access.
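
As a hypothetical illustration of that at-a-glance visibility, consider a Terraform definition of a GCP storage bucket (the bucket name and settings here are assumptions for the sake of the example, and the arguments shown presume a reasonably current Google provider). A reviewer can confirm the versioning and public-access controls directly from the artifact and, where the live state needs to be confirmed, can query the framework (e.g., via terraform show) rather than scanning the environment:

# A reviewer can read the control posture directly from the code.
resource "google_storage_bucket" "audit_logs" {
  name                        = "example-audit-logs"    # illustrative name
  location                    = "US"
  uniform_bucket_level_access = true          # no object-level ACL exceptions
  public_access_prevention    = "enforced"    # bucket can never be exposed publicly

  versioning {
    enabled = true    # prior object versions are retained
  }
}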

Assistance Creating Documentary Artifacts
The second benefit is the help IaC provides in creating diagrams, documentation and other derived artifacts. Rather than diagrams being created manually (and usually going stale hours or days after they are finished), IaC enables automated creation of certain types of diagrams. Going back to the Terraform example, tools such as the open source projects Blast Radius2 and InfraMap3 can be leveraged to "automagically" create diagrams based on the HCL (the configuration language used by Terraform) or the state information maintained by Terraform. Similar projects exist for other IaC technologies as well.

From a security practitioner point of view, imagine how much more quickly tasks such as application threat modeling can be accomplished when we have support for the creation of dataflow diagrams. For audit professionals, consider how much easier it is to understand the workings and interconnections between system components when we have a reliable diagram to draw upon. And, for compliance and governance professionals, keep in mind that a current and well-maintained diagram is a line-item requirement in some regulations (e.g., the Payment Card Industry Data Security Standard [PCI DSS] 1.1.2 and 1.1.3) in addition to being implicit in others.

Compliance and Audit Support
Last, IaC can directly advance regulatory compliance and assist with (third-party) audit responses. In addition to the compliance benefits potentially engendered by automated diagram creation, keep in mind that not only the actual state information (e.g., Terraform's running state, tfstate,4 in the earlier examples) but also the IaC artifacts themselves can be used as evidence to support the existence of configuration management controls, segregation of duties (SoD) (since the actual effecting of change is done through the tool rather than by human engineers) and other controls.

From the auditor or compliance practitioner perspective, again, it should be obvious why this is valuable. Instead of having to search for evidence to prove the implementation, scope and effectiveness of particular controls, we can (assuming our usage enforces those controls) produce the IaC artifacts, query state or otherwise use the IaC framework to provide the evidence.
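
As a simple, hypothetical sketch of the idea (the network, address range and ticket reference are assumptions), a firewall rule expressed as code both enforces a control and documents it. The same artifact, together with the corresponding state, can be offered to an auditor as evidence of what the control is and how it is scoped:

# Documented ingress control expressed as code; values are illustrative.
resource "google_compute_firewall" "allow_ssh_from_bastion" {
  name        = "allow-ssh-from-bastion"
  network     = "default"
  description = "SSH permitted only from the bastion subnet (change ticket CHG-12345)"

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["10.10.0.0/24"]    # bastion subnet only; no 0.0.0.0/0
  target_tags   = ["ssh"]             # matches the tag on the earlier example VM
}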

The short version of all this is that IaC can help bring trust to an environment rather than being seen as a new source of risk. Because of the way it works, it can be strategically employed to reduce risk in many cases. This is particularly true when practitioners participate early and work in lockstep with technical development and operational teams on planning how IaC is to be used. This lets stakeholders leverage the properties of IaC to help advance trust goals and risk reduction. In fact, those who see how IaC can help and become accustomed to the value it can provide might become active champions for it inside their organization, promoting it and advocating for it with colleagues to help spread its use.

Endnotes

1 Staff, "Why Do We Prefer Things That We Are Familiar With?" The Decision Lab, http://thedecisionlab.com/biases/mere-exposure-effect
2 GitHub, 28mm/Blast-Radius, http://github.com/28mm/blast-radius
3 GitHub, Cycloidio/Inframap, http://github.com/cycloidio/inframap
4 Terraform, State, http://developer.hashicorp.com/terraform/language/state

ED MOYLE | CISSP

Is currently director of Software and Systems Security for Drake Software. In his 20 years in information security, Moyle has held numerous positions including director of Thought Leadership and Research for ISACA®, Application Security Principal for Adaptive Biotechnologies, senior security strategist with Savvis, senior manager with CTG, and vice president and information security officer for Merrill Lynch Investment Managers. Moyle is co-author of Cryptographic Libraries for Developers and Practical Cybersecurity Architecture, and a frequent contributor to the information security industry as an author, public speaker, and analyst.