Type Mismatch Error
Author: Jack Freund, Ph.D., CISA, CISM, CRISC, CGEIT, CDPSE, Chief Risk Officer, Kovrr
Date Published: 4 March 2020

In programming, one common error is the type-mismatch error (different compilers call it different things). Space must be allocated to store data in variables, and this is done by declaring each variable's type; typical types include integers, real numbers, character values and Booleans. Sometimes programmers attempt to store a value in a variable of an incompatible type, such as putting a number with a fractional part into a variable that expects only an integer. The result is a type-mismatch error.
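
To make the metaphor concrete, here is a minimal sketch in Python (the function name and values are invented for illustration). A static type checker such as mypy flags the mismatched call, and the underlying operation fails at runtime as well:

```python
# A minimal sketch of a type mismatch: the annotation says the function
# expects an integer, so passing a fractional number is exactly the error
# described above.

def reserve_slots(count: int) -> list[str]:
    """Allocate storage sized for a whole number of slots."""
    return ["empty"] * count  # sequence repetition requires an int

reserve_slots(4)              # fine: the value matches the declared type

try:
    reserve_slots(4.5)        # type mismatch: a checker such as mypy flags it
except TypeError as err:      # and the repetition also fails at runtime
    print(err)                # "can't multiply sequence by non-int of type 'float'"
```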

It is often said about career management that what got you here will not get you there. As security professionals move up through the ranks into management roles, they may come into conflict with business leadership, senior management or the board of directors (BoD). When this happens, they may experience a version of “type mismatching” that necessitates enhancing their skills to get them where they need to be.

Communicating security priorities to organizational leadership is one place where misused security terminology creates a type mismatch. This typically happens when practitioners conflate common terms such as risk, threat and vulnerability. Aligning this language with its precise meanings is important for crisp, clear communication. Far too often, security teams report security control weaknesses as risk to their organization, muddying business leaders' opportunity to understand their risk posture. This type mismatch can be detrimental to personal and departmental credibility.
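
To extend the programming metaphor, imagine giving these terms distinct types so that conflating them becomes a visible error. The sketch below is purely illustrative; the class names and fields are assumptions, not a standard taxonomy:

```python
# Hypothetical sketch: modeling the vocabulary as distinct types so that a
# control weakness cannot silently be reported as a risk.
from dataclasses import dataclass

@dataclass
class Vulnerability:          # a weakness in a control or system
    control: str
    weakness: str

@dataclass
class Threat:                 # an actor or event that could exploit it
    actor: str
    action: str

@dataclass
class Risk:                   # the probable frequency and magnitude of loss
    scenario: str
    annualized_loss_usd: float

def report_to_board(item: Risk) -> str:
    return f"{item.scenario}: ~${item.annualized_loss_usd:,.0f}/yr expected loss"

# report_to_board(Vulnerability("TLS config", "weak ciphers"))  # type mismatch:
# a checker rejects this; the board asked for risk, not a control weakness
print(report_to_board(Risk("Customer data breach via web app", 1_200_000)))
```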

Another common type mismatch is using the wrong model when reporting on risk. Unfortunately, much of the security product marketplace seems determined to give practitioners looking to level up their risk communication skills precisely the wrong tools with which to do so. Take, for example, common security scanning tools that provide prioritized, risk-ranked results. They typically leverage some kind of ordinal model of priority (e.g., a Likert-style scale), despite decades of research indicating that personal biases prevent clear communication using such methods. This method of communicating priority has been called both the “illusion of communication” and, more drastically, “worse than useless,” for it can cause organizations to misallocate limited resources.
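
A small, hypothetical illustration of the problem: once ordinal labels are mapped to numbers, the arithmetic looks authoritative, but the distances between ratings are meaningless, so very different exposures can produce identical scores:

```python
# Hypothetical sketch of why ordinal ("High/Medium/Low") ratings mislead:
# mapping labels to 1-3 and averaging implies the distance between Low and
# Medium equals the distance between Medium and High, which nothing justifies.
SCALE = {"Low": 1, "Medium": 2, "High": 3}

findings_team_a = ["High", "Low"]          # one severe item, one trivial item
findings_team_b = ["Medium", "Medium"]     # two middling items

avg_a = sum(SCALE[f] for f in findings_team_a) / len(findings_team_a)
avg_b = sum(SCALE[f] for f in findings_team_b) / len(findings_team_b)

print(avg_a, avg_b)  # both print 2.0, yet the loss exposure behind a single
                     # "High" may dwarf any number of "Medium" findings
```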

Sometimes it is not the methodology itself that is the problem but the way it is used in the marketplace that causes a type mismatch. Take, for example, the Common Vulnerability Scoring System (CVSS). Whenever a new software bug is discovered, it is given a CVSS score, which is then used by virtually every scanning tool out there. Many of these tools report on the worst things they find using this scoring system, implicitly or explicitly calling the results security risk factors. However, version 3.1 of the CVSS User Guide addresses this type mismatch head on: in Section 2.1, titled “CVSS Measures Severity, not Risk,” the authors indicate that CVSS scores should be contextualized within the scope of an overall risk assessment to provide more connection to an organization's priorities.
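
As a hedged sketch of what that contextualization might look like (the weighting scheme and asset fields below are invented for illustration and are not part of CVSS, which handles context through its own temporal and environmental metrics), severity becomes one input to prioritization rather than the output:

```python
# Illustrative only: one way to use a CVSS base score as a severity *input*
# to prioritization rather than reporting it as risk. The exposure weights
# and criticality values are assumptions for the sketch.
def contextual_priority(cvss_base: float,
                        internet_facing: bool,
                        asset_criticality: float) -> float:
    """Blend severity with business context; asset_criticality in [0, 1]."""
    exposure = 1.0 if internet_facing else 0.4   # assumed exposure weights
    return cvss_base * exposure * asset_criticality

# The same 9.8 "Critical" bug ranks very differently in context:
print(contextual_priority(9.8, internet_facing=True,  asset_criticality=1.0))  # 9.8
print(contextual_priority(9.8, internet_facing=False, asset_criticality=0.2))  # ~0.78
```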

These are but a few examples of how a security and risk type mismatch can introduce problems into technology risk communication. Rectifying this involves a few meaningful steps to increase the reach of technology risk communication. First, seek to understand what the organization does. This means not just identifying the products and services offered, but linking them to critical business processes and, ultimately, to the technology stack that supports them. Being able to slice technology risk reporting by business unit and product will go a long way toward bringing technology risk closer to the business owners who are ultimately accountable for any loss events resulting from such security weaknesses.
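
A minimal sketch of what that slicing could look like, with entirely hypothetical findings and field names:

```python
# A minimal sketch (field names assumed) of tagging findings with business
# context so technology risk can be sliced by unit and product, putting
# each loss exposure in front of the owner accountable for it.
from collections import defaultdict

findings = [
    {"id": "F-101", "business_unit": "Retail",  "product": "Mobile app",  "exposure_usd": 250_000},
    {"id": "F-102", "business_unit": "Retail",  "product": "Web store",   "exposure_usd": 900_000},
    {"id": "F-103", "business_unit": "Lending", "product": "Loan portal", "exposure_usd": 400_000},
]

by_unit: dict[str, float] = defaultdict(float)
for f in findings:
    by_unit[f["business_unit"]] += f["exposure_usd"]

for unit, total in sorted(by_unit.items(), key=lambda kv: -kv[1]):
    print(f"{unit}: ${total:,.0f} at risk")   # Retail: $1,150,000 ...
```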

Next, install a “technobabble” firewall. In other words, make a sharp distinction between things that belong in the tech stack and things that belong in the risk stack. Risk communication from the tech stack needs to be translated into meaningful, narrative-based risk statements for communication throughout the organization. In the end, there will still be a list of broken things, but each item can be aligned to a narrative that bridges the gap between control deficiencies and risk.
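
One illustrative (and deliberately simplistic) way to picture such a firewall is a translation layer that refuses to pass raw jargon upward; the tags and narratives below are invented:

```python
# Hypothetical sketch of the "technobabble firewall": raw findings stay in
# the tech stack; only narrative risk statements cross into board reporting.
TECH_TO_NARRATIVE = {   # translation mapping maintained by the risk team (assumed)
    "unpatched-openssl": "Customer data breach via exploitation of internet-facing services",
    "weak-admin-mfa":    "Fraudulent wire transfer via compromised administrator account",
}

def firewall(finding_tag: str) -> str:
    # refuse to pass raw jargon upward; force translation or flag the gap
    return TECH_TO_NARRATIVE.get(
        finding_tag, f"UNTRANSLATED finding '{finding_tag}': needs a risk narrative")

print(firewall("unpatched-openssl"))
print(firewall("ssl-v3-enabled"))   # surfaces the translation gap explicitly
```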

Finally, translate technology failings into risk products that the business understands, namely quantitative, economic expressions of cyber value-at-risk. The greatest type mismatch one will ever encounter is when everyone else in the boardroom reports on their areas in terms of revenue and expense forecasts, capital expenditures and inventory management, and the security team brings a list of five items marked red with the word “High” next to them. Instead, translate that list of top risk factors into meaningful risk narratives tied to a statement of the potential losses resulting from them. There is no greater source of type mismatch than putting character values into numeric variables. Avoid problems with security terminology, bias and misused scoring systems by focusing on why your organization exists and giving leadership loss potentials expressed in financial terms to facilitate good decision-making.
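
As a purely illustrative example of such a financial expression, the following toy Monte Carlo simulation, in the spirit of quantitative models such as FAIR, turns assumed event-frequency and loss-magnitude estimates into an expected annual loss and a value-at-risk figure. Every parameter is an assumption for the sketch, not a calibrated model:

```python
# A toy Monte Carlo sketch of cyber value-at-risk. Real models are
# calibrated from data and expert estimates; these inputs are invented.
import numpy as np

rng = np.random.default_rng(7)
years = 100_000

# Assumed inputs: loss events follow a Poisson process (0.6/yr on average);
# per-event loss is lognormal with a median of ~$250k.
event_counts = rng.poisson(lam=0.6, size=years)
annual_losses = np.array([
    rng.lognormal(mean=np.log(250_000), sigma=1.2, size=n).sum()
    for n in event_counts
])

print(f"Expected annual loss: ${annual_losses.mean():,.0f}")
print(f"95th percentile (value at risk): ${np.percentile(annual_losses, 95):,.0f}")
```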

Jack Freund, Ph.D., CISA, CRISC, CISM, is director of risk science for RiskLens, Fellow of the FAIR Institute, coauthor of Measuring and Managing Information Risk, 2016 inductee into the Cybersecurity Canon, IAPP Fellow of Information Privacy, and ISACA’s 2018 John W. Lainhart IV Common Body of Knowledge Award recipient.