In combining peripheral sensors, gateways, and cloud resources, Internet of Things (IoT) applications are becoming unprecedented targets because of the number of potential attack surfaces and security vulnerabilities they contain. By Stephen Evanczuk for Mouser Electronics
A clear understanding of such threats, their likelihood, and their impact becomes more urgent as enterprises tie these applications more tightly into corporate infrastructures. Using methodical approaches to threat and risk assessments, development teams can harden security where essential or make informed decisions about acceptable risks.
The wide range of security vulnerabilities in connected systems finds expression all too frequently in news reports. Even a quick dip into the headlines shows a startling breadth of attacks, ranging from overt, massive distributed denial-of-service (DDoS) attacks to extremely covert advanced persistent threats (APTs) that linger and quietly extract valuable data or prepare for more extreme strikes.
Despite the sensationalist nature of these exploits, one of the most important lessons learned from these attacks is that the use of security mechanisms and the creation of a secure system are not the same thing.
Hackers successfully penetrate systems that are built with all manner of security mechanisms. Even the most security-conscious development team may unknowingly leave open attack surfaces in their designs.
In fact, the sheer complexity of today’s designs increases the chance of open attack surfaces, particularly with multilayered, connected systems such as IoT applications. When large numbers of programmable devices of different types connect to the cloud, end-to-end security becomes more of a statistical probability than an absolute certainty.
Each element in such an interconnected system of systems contributes not only its specific functionality but also its own set of vulnerabilities to the security equation.
By fully understanding how each vulnerability can become a threat to the overall application, an enterprise can decide if the associated risk of a successful exploit of that vulnerability will rise above the threshold of acceptance and ultimately require mitigation.
The ability to gain this level of visibility into the nature of a risk provides strategic value that cannot be overstated. At the same time, by intersecting security vulnerabilities with risk assessments, a development team can devise a tactical roadmap for developing a practical response to the nearly endless stream of threats to any connected system.
Indeed, without a more rigorous level of understanding gained through threat and risk assessments, even the most experienced development team is gambling on the security of their systems and applications.
Gaining this knowledge, however, starts with a clear understanding of the potential threats against a system, which is achievable through a well-documented threat model.
Threat models capture the specific security vulnerabilities associated with a system’s design.
Creating a threat model seems conceptually simple: Developers analyse their designs to identify security vulnerabilities that relate to each underlying component. In practice, however, threat modeling can involve much more work, research, and strategy than this simple idea suggests, and can yield far more than a list of technical security concerns.
More broadly applied, threat modeling can also identify vulnerabilities in the associated life cycle processes and overarching security policies that correlate with an IoT application.
Ultimately, what constitutes acceptable threat models can vary as widely as the IoT applications and organisations they serve. Even so, different threat models share certain characteristics, and any threat modeling methodology will follow a few common steps.
Threat modeling begins with an accurate description of the system, the so-called Target of Evaluation (TOE), associated with a specific use case, such as the operation of a utility water meter. If a threat model paints a picture of system vulnerabilities, the TOE description is the canvas.
By widening or tightening the scope of the TOE, a threat modeling team can expand or contract the focus in a threat identification process. For example, Arm’s recently released smart water-meter threat model sharply restricts its TOE, focusing only on the system’s core (Figure 1).
Of course, a TOE confined within a single subset of a larger, more complex system or application translates to a more limited ability to identify threats, assess risks, and build an effective mitigation plan.
For a complex system of systems such as an IoT application, experienced threat modelers might create a series of threat models, going from a fairly abstract description of the complete system to increasingly detailed descriptions of subsystems of particular importance or concern to the organisation.
Whatever the approach, there is no fixed rule for the level of detail needed in the TOE description. Modeling approaches that attempt to provide exhaustive details of each component may simply exhaust the participants in the process.
On the other hand, models that are too abstract are likely to hide subtle vulnerabilities or prevent the identification of vulnerabilities buried deeply in a chain of dependencies or third-party software libraries.
An effective middle ground adds detail incrementally, collecting just enough to capture all interactions that cross “trust boundaries” between the separate, unique zones of a system (Figure 2).
For example, an IoT application can comprise multiple zones linked with cloud resources, gateways, IoT terminal devices, and users. Transactions that operate across trust boundaries are particularly vulnerable to an exceptional array of attacks on transferred data, security credentials, or protocols.
Even seemingly innocuous attempts to communicate across a trust boundary can create a pathway for a “fingerprinting” attack — where hackers use known indicators contained in the system’s response to determine the system’s underlying components in preparation for more directed attacks.
Of course, an understanding of the interactions between the underlying components within each zone becomes especially important if some of those components come from third parties. For example, an IoT device that uses a third-party sensor driver could be vulnerable to threats at the driver’s boundary (Figure 3).
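The zone-and-boundary analysis above can be sketched in a few lines of code. The zone names and data flows below are hypothetical illustrations, loosely following the cloud/gateway/device/user decomposition the article describes; any flow whose endpoints sit in different zones crosses a trust boundary and deserves explicit threat enumeration:

```python
# Sketch: flagging data flows that cross trust boundaries between zones.
# Zone names and flows are illustrative assumptions, not from a real model.

ZONES = {"cloud", "gateway", "device", "user"}

# Each flow: (source zone, destination zone, description)
FLOWS = [
    ("device", "gateway", "sensor readings"),
    ("gateway", "cloud", "aggregated telemetry"),
    ("device", "device", "internal driver call"),  # stays inside one zone
    ("user", "cloud", "dashboard login"),
]

def boundary_crossings(flows):
    """Return flows whose endpoints lie in different zones; each one
    crosses a trust boundary and needs its own threat enumeration."""
    return [f for f in flows if f[0] != f[1]]

for src, dst, desc in boundary_crossings(FLOWS):
    print(f"{src} -> {dst}: {desc}")
```

A real model would attach protocols and security credentials to each flow, but even this minimal inventory makes the boundary-crossing transactions impossible to overlook.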
Though a suitably detailed description is essential for threat modeling, the identification of specific threats that connect to those details is the payoff. In the case of Arm’s water-meter threat model, the modelers provide a plain-language list of threats associated with each asset, such as firmware, measurement data, and interactions with external entities (like users, administrators, and attackers), that might touch the TOE.
For firmware, the model describes specific threats including the installation of compromised firmware, modification of the security certificates used to authenticate firmware updates, cloning, and more.
Based on the list of assets and identified vulnerabilities, development teams can evolve a set of corresponding security objectives and mitigation methods.
For example, Arm’s water-meter model concludes with a list of security requirements, including those for firmware, such as the need for a secure boot, firmware authentication, a response to a failed authentication, and others.
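The asset-to-threats-to-requirements chain described above lends itself to a simple record structure. The threat and requirement wording below paraphrases the article's firmware example; the data structure itself is an assumption about how a team might capture its model, not a format Arm prescribes:

```python
# Sketch: recording the asset -> threats -> security-requirements chain.
# Content paraphrases the article's firmware example; the structure is
# an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    threats: list = field(default_factory=list)
    requirements: list = field(default_factory=list)

firmware = Asset(
    name="firmware",
    threats=[
        "installation of compromised firmware",
        "modification of update-authentication certificates",
        "cloning",
    ],
    requirements=[
        "secure boot",
        "firmware authentication",
        "defined response to failed authentication",
    ],
)

# A quick sanity check: every asset with identified threats should
# carry at least one corresponding security requirement.
print(f"{firmware.name}: {len(firmware.threats)} threats, "
      f"{len(firmware.requirements)} requirements")
```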
In identifying potential threats, few (if any) development organisations can remain current on every threat that might apply to the detailed assets and processes included in their TOE descriptions.
The good news is that engineers can find several published sources that can help with this part of the process. Developers can use public resources such as the Common Attack Pattern Enumeration and Classification (CAPEC) list to review, from the top down, the most likely types of attacks.
Then, they can work, from the bottom up, to identify the likely targets of attack listed in the Common Weakness Enumeration (CWE) list, which describes inherent flaws in system design approaches, such as the use of hardcoded credentials.
As designers identify specific hardware or software components utilised in their designs, they can turn to the Common Vulnerabilities and Exposures (CVE) list, which lists specific software flaws or potential exploits in available hardware or software components.
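In practice, the CWE bottom-up pass amounts to annotating a bill of materials with known weakness classes and flagging anything that needs review. CWE-798 ("Use of Hard-coded Credentials") and CWE-494 ("Download of Code Without Integrity Check") are real entries on the CWE list; the component names and their weakness annotations below are hypothetical illustrations:

```python
# Sketch: triaging a component inventory against CWE weakness classes.
# The CWE IDs are real list entries; the components and annotations
# are illustrative assumptions.

COMPONENT_WEAKNESSES = {
    "sensor-driver": ["CWE-798"],   # use of hard-coded credentials
    "ota-updater": ["CWE-494"],     # code download without integrity check
    "tls-stack": [],                # no known weakness class flagged
}

def components_needing_review(inventory):
    """Return, sorted, the components annotated with at least one
    known weakness class."""
    return sorted(name for name, cwes in inventory.items() if cwes)

print(components_needing_review(COMPONENT_WEAKNESSES))
# ['ota-updater', 'sensor-driver']
```

As specific part numbers and library versions firm up, the same inventory can be cross-checked against CVE entries for those exact components.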
For risk assessments, resources such as the Common Vulnerability Scoring System (CVSS) provide a consistent approach for rating the risks associated with specific vulnerabilities.
Although a risk relates to the nature of a specific vulnerability, it also includes other factors such as the avenue (vector) used to perform the attack, the complexity of the attack required to exploit the vulnerability, and others.
For example, an attack that can be performed through a network brings considerably more risk than one that requires physical access.
Similarly, an attack that is simple to perform carries significantly more risk than an attack that is highly complex in nature.
Using a CVSS calculator, engineers can quickly account for these various contributing factors, arriving at a numeric score for the risk level associated with a particular threat or class of threats.
For Arm’s water meter, the CVSS calculator finds that the combination of factors involved in a firmware attack represents a critical risk score of 9.0 (Figure 4).
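The arithmetic behind such a score is straightforward to reproduce. The sketch below implements the CVSS v3.1 base-score equations for the common case of an unchanged scope, with metric weights taken from the v3.1 specification; the example vector at the end is illustrative and is not Arm's actual water-meter scoring:

```python
# Minimal CVSS v3.1 base-score calculator (scope-unchanged case only).
# Metric weights come from the CVSS v3.1 specification; the sample
# vector is an illustration, not Arm's scoring.
import math

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # attack vector
AC = {"L": 0.77, "H": 0.44}                          # attack complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # privileges required
UI = {"N": 0.85, "R": 0.62}                          # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x):
    """Round up to one decimal place, per the v3.1 Roundup() helper."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# A network-reachable, low-complexity attack needing no privileges or
# user interaction, with high impact across C, I, and A:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```

Dropping the attack vector to physical access or raising the attack complexity pulls the score down sharply, which is exactly the risk intuition described above.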
Because of the broad range of requirements and techniques, automated tools such as Open Web Application Security Project’s (OWASP’s) Threat Dragon Project, Mozilla’s SeaSponge, and Microsoft’s Threat Modeling Tool exist to help developers work through the modeling process.
Each uses a different threat modeling methodology, ranging from system diagramming in the Threat Dragon Project and SeaSponge to Microsoft’s detailed STRIDE approach, an acronym for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.
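A STRIDE pass amounts to sorting identified threats into the six categories and noticing which categories are empty. The mapping below assigns threats mentioned in this article to categories as illustrative judgment calls, not output from Microsoft's tool:

```python
# Sketch: bucketing identified threats into STRIDE categories and
# spotting categories with no coverage. Assignments are illustrative
# judgment calls, not tool output.

STRIDE = {
    "Spoofing": ["cloning a device identity"],
    "Tampering": ["installation of compromised firmware"],
    "Repudiation": [],
    "Information disclosure": ["fingerprinting via system responses"],
    "Denial of service": ["flooding the gateway with requests"],
    "Elevation of privilege": ["abusing a third-party driver interface"],
}

uncovered = [cat for cat, threats in STRIDE.items() if not threats]
print("Categories with no identified threats:", uncovered)
```

An empty category does not prove the system is safe in that dimension; it flags a place where the team has not yet looked.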
Though these tools are several years old and generally built for enterprise software systems, threat modeling is a broadly applicable, evergreen process that depends more on the current lists of attack vectors, weaknesses, and vulnerabilities than on specific methodologies.
Nevertheless, newer tools are now emerging that promise a tighter link between a system description and threat identification.
Despite the rapid emergence of deep learning technologies in other areas, however, significant challenges remain in applying these technologies to automated threat and risk assessments.
Even so, smart modeling and assessment tools are likely to become available soon.
In the meantime, developers can find a variety of collections that list security weaknesses, vulnerabilities, and attack patterns — so much so that all the available detail can seem overwhelming, particularly to those just starting to engage in threat modeling.
In fact, one of the excuses commonly used to avoid threat modeling is that it is simply too complicated. Rather than jumping into the full depth of details, engineers can start with a more modest approach that focuses just on the most common threats.
As of the time of this writing, OWASP is still reviewing its top ten IoT security threats for 2018, but OWASP’s earlier top ten IoT list still provides a useful starting point.
In fact, developers need to go no further than their preferred news sites to find a ready catalog of top vulnerabilities and exploits.
For organisations able to move quickly past the basics, however, these same methods can prove invaluable in addressing equally critical aspects of IoT design.
For example, systems used in machine control loops typically face associated mission-critical requirements for functional safety.
In these systems, security and functional safety are so intertwined that suitable threat models for these systems will likely need to include scenarios where weakness in security or safety can equally lead to physical risks.
In the same way, security and privacy overlap in many respects, yet weaknesses in either area can lead to the same result of a disclosure of personally identifiable information.
The effective application of threat modeling and risk assessments in complex systems goes well beyond any simple list of available options and techniques.
Like each specific system, each development organization deals with its own unique constraints and capabilities. The requirements for one system or organisation might completely miss the mark for another.
What might be the only common requirement is the need to perform threat and risk assessments in the first place. Even so, should an enterprise attempt to create a “complete” threat model and risk assessment? The short answer is no; any such attempt would inevitably fall short.
It is not possible to perfectly predict outcomes. Naturally chaotic processes in the world and the ebb and flow between system mitigations and hacker exploits ultimately derail any attempts toward perfection.
At the same time, without building the kind of security roadmap that a threat model and risk assessment provide, it is equally impossible to avoid at least some of the pitfalls and detours that lead to inevitable security breaches.