
Archive for June, 2021

I’ve only been threat modeling for about a decade. In that time, I’ve rarely seen threat modeling used as intended. Why is that and why do we threat model in the first place?

Lately everyone seems to have jumped on the threat modeling bandwagon. Every new certification wants to see a box ticked for threat modeling, so companies are scrambling to make that happen. But many times this means that companies are threat modeling after they’ve finished development. Does that even make sense? Let’s take a step back and ask the question, “Why are we doing this?”

Every development team understands unit testing. They get static analysis. Conceptually, they appreciate penetration testing, even if they don't fully understand its limitations. But when it comes to threat modeling, you get more hand waving than a cruise ship departure.

There’s no end to the number of misconceptions regarding threat modeling. Some see it as a one-time rubber stamp on the product’s security. Others believe that it must expose all known issues in order to be considered valid. Still others believe that if they contribute a component, they are exempt from threat modeling because it will be done at the system level. And so forth. The underlying problem is not that all of these ideas are wrong; it’s that people don’t know why threat modeling exists.

First, threat models are only useful if they evolve with the product’s design. Not the implementation, but the design. Why? Because the point of threat modeling is to evaluate the design. The outcome of the exercise is a set of design deficiencies. Note that this is a very different thing from what a penetration test discovers. You can have a perfect design and still find issues during penetration testing, because it can identify errors in implementation. Since the threat model uses an idealization of the system for its analysis, that isn’t possible.

Second, threat models focus us on best practices by forcing us to consider the decomposition of the system, the trust boundaries, and data flows. A threat model won’t ask if we’re using 4096-bit keys. It will ask if we have appropriate controls. Controls imply requirements, specifically, non-functional security requirements. That’s a topic for another day.
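
As a quick taste of that other-day topic, here’s a rough sketch of the controls-to-requirements idea. The control names and requirement wording below are purely illustrative, not drawn from any standard or any particular tool:

```python
# Illustrative only: each control the threat model asks about translates into a
# non-functional security requirement on the design, not a specific mechanism
# (e.g., not "use 4096-bit keys"). The wording here is hypothetical.

controls_to_requirements = {
    "confidentiality in transit": "Data crossing the trust boundary shall be encrypted.",
    "peer authentication":        "Each endpoint shall authenticate its peer before exchanging data.",
    "audit logging":              "Security-relevant events shall be recorded with an accurate timestamp.",
}

for control, requirement in controls_to_requirements.items():
    print(f"{control:28s} -> {requirement}")
```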

Third, threat models allow us to systematically and consistently consider the system. These are two very important elements. One of the questions we seldom ask properly is “How do we know we have security coverage?” Since threat modeling deals with a model of the system, the answer is: once we’ve iterated over all the data flows in all the workflows of the system. Good luck doing that with penetration testing. The second element is equally important. Threat modeling is only useful if it produces consistent, repeatable results. That means how we undertake the exercise matters. In my opinion, any moderately trained individual should be able to get the same results out of a threat model as someone of equal training. Will there be deviations? Sure. Should they be substantive? No.
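
To make “iterate over all the data flows in all the workflows” concrete, here’s a minimal sketch. It uses STRIDE’s six categories as one common per-flow checklist; the workflow and element names are made up, and none of this reflects any particular tool’s API:

```python
# Minimal sketch of coverage-by-iteration: walk every data flow in every
# workflow and ask the same questions of each. All names are hypothetical.

STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

workflows = {
    "login":  [("browser", "web_app"), ("web_app", "auth_service")],
    "report": [("web_app", "database")],
}

def enumerate_questions(workflows):
    """Yield one (workflow, flow, category) triple per question to answer."""
    for name, flows in workflows.items():
        for source, destination in flows:
            for category in STRIDE:
                yield name, (source, destination), category

questions = list(enumerate_questions(workflows))
print(f"{len(questions)} questions to answer for full coverage")  # 3 flows x 6 = 18
```

The point is that coverage becomes a counting exercise: when every (workflow, flow, category) triple has an answer, you’re done.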

I’ve always found it frustrating when people expect threat models to identify all known issues. This is both unhelpful and unnecessary. It’s unhelpful because it implies an over-tuned analysis process that probably won’t work if things change. It’s unnecessary because, by definition, known issues are already known. Especially if said issues are identified in testing, since you already have coverage there. Also, they tend to be implementation issues.

One of the difficulties in the threat modeling process is that it not only exposes design deficiencies, but also exposes a lack of design process. In a hyper-full-stack-focused world, many have forgotten that agile doesn’t mean that you just build until it works and document as implemented. That kind of sloppy engineering doesn’t fly when safety is an issue. You can be agile, but all the forms still apply. An advantage of incorporating threat modeling into an agile design process is that since your system is by definition smaller, there’s less to model. Additionally, if you’re using compositional threat modeling, things can go really quickly. It’s elementary network theory.

The final thing I’ll bring up is cost. We threat model to save time and money. There’ve been multiple studies showing that the further along the process is when you identify an issue (especially one in the underlying design), the more expensive it is to mitigate. And who doesn’t like to save money? Well, project management, I suppose; they just want to save time.

This post is a bit different from my usual link-laden ones, but I’ve been thinking about big picture stuff lately.

References

The Thinker, photo by Andrew Horne (public domain)
https://commons.wikimedia.org/wiki/File:The_Thinker,_Rodin.jpg


In my previous post, Line Upon Line: Compositional Threat Modeling, I made the case for compositional threat modeling (CTM). In this post, I’ll explore how CTM is already being used unintentionally and why we need to adopt an intentional approach.

It’s one thing to suggest that we should be intentional in our use of CTM, but quite another to assert that we’ve been using it all along. Yet that’s very much the case. Let me explain. I will, of course, incorporate a bit of personal history, music, and seamless plugs. If you stay with me on this, you’ll be treated to a cross between James Burke- and Carl Sagan-esque tale weaving.

In the early 1980s, I read Ken Thompson‘s Turing Award Lecture, Reflections on Trusting Trust. In it, Thompson describes how the C compiler itself can be compromised in what we would today describe as a supply chain attack. I never again looked at my development tools in the same way. Up until that point in time, I’d never considered that the very tools I used to build secure systems could betray me, but in that lecture, Thompson made it very clear that I’d created a model of the world that was just wrong. And this gets to the heart of why CTM is important.

When we undertake the threat modeling activity, we are reasoning on a model of the design of a system. Note that there are two levels of indirection in that statement. That’s important. Essentially, we’re reasoning on a model of a model. This is the point at which I’ll shamelessly plug Adam Shostack for his shout-out to George Box for observing, “All models are wrong but some are useful.” As to why this is an important observation, we need to consider what Alan Turing wrote twenty-five years prior in his 1951 paper The Chemical Basis of Morphogenesis.

“This model will be a simplification and an idealization, and consequently a falsification. It is to be hoped that the features retained for discussion are those of greatest importance in the present state of knowledge.”

It’s important to keep in mind that the model is a lie that we hope retains sufficient features to be useful. Problems occur when we forget that and start treating the model as though it were the actual system. To quote Alfred Korzybski, “the map is not the territory.”

In order for us to reason on the security of the design of a system, we discard most of the information regarding both the system and its design. And that’s okay, so long as the features retained for discussion are those of greatest importance in the present state of knowledge with respect to security.

Unfortunately, in the process of our relatively recent adoption of threat modeling as a security activity, we seem to have taken the approach called out in “In the Real World,” from The Alan Parsons Project’s 1985 album Stereotomy: “Don’t wanna live my life in the real world.” By this I mean that the models that we’re working with contain fewer features than needed to produce sufficiently expressive results.

This is not to say that all the threat modeling results we get on a daily basis are bad. We can keep the baby and not drink the bath water. But we’ve been leaving stuff out, and some of that stuff is important. What’s more important, though, is that the consumers of the issues identified by the threat models generally believe that we’ve completely juiced that orange.

This brings us back to Ken Thompson and compositional threat modeling. We trust our tools. We trust our operating environments. (Why I use the term operating environment and not operating system is covered in my post Does That Come in a Large? OS Scale in Threat Modeling.) We either leave the operating environment out of the model entirely or treat it as implicitly out of scope. We tend to do the same with open source and third party libraries. This is a bad thing.

You would be right to argue that including the operating environment or other underlying bits would make our threat models overly complex and hard to maintain. I wouldn’t argue that point. Instead, I would argue for treating them as a composition. Do the work to establish why you believe that you can trust them. Then and only then can you safely call them out of scope and move on to working at a higher level of abstraction. That’s the beauty of CTM. Any given threat model applies only to its own element. Once you’ve done that, you can compose the elements and deal with what’s shared between them. Typically, that’s not much. If I have two systems, A and B, and each requires an authenticated source, then when I connect them I get bi-directional source/destination authentication by composition. I trust the bits at a lower abstraction level because they have already been modeled and shown to be secure, not because I’ve taken it on faith. As the Russian proverb made famous by Ronald Reagan goes, “trust, but verify.”
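
Here’s a small sketch of that composition step, under the assumption that each element’s threat model records what it requires of its peers and what it guarantees to them. The classes and field names are hypothetical, not an established schema:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    requires: set = field(default_factory=set)    # controls it needs from its peers
    guarantees: set = field(default_factory=set)  # controls it provides to its peers

def compose(a: Element, b: Element):
    """Return any requirements on the shared boundary that the other side doesn't meet."""
    gaps = []
    for needed in a.requires - b.guarantees:
        gaps.append(f"{a.name} needs '{needed}', which {b.name} does not guarantee")
    for needed in b.requires - a.guarantees:
        gaps.append(f"{b.name} needs '{needed}', which {a.name} does not guarantee")
    return gaps

# Both A and B require an authenticated source and authenticate themselves, so the
# composed boundary needs no new work: bidirectional authentication by composition.
a = Element("A", requires={"authenticated source"}, guarantees={"authenticated source"})
b = Element("B", requires={"authenticated source"}, guarantees={"authenticated source"})
print(compose(a, b) or "boundary requirements satisfied by composition")
```

In the A/B example above, the list of gaps comes back empty, which is exactly the “not much” left to deal with after composition.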

When we apply CTM, we no longer have big bang threat models. We have manageable, composable ones. From the outside, the system presents a model with a single surface. That’s important because that’s what an entity interacting with it sees. We don’t see the database underlying a web-hosted site. We see a socket-connected protocol. And even that view is an abstraction. It is through composition that we can consider what’s really important to the activity of threat modeling: the application of controls where they’re missing but needed.

In a future post, I’ll get into more specifics as to how to apply CTM.

References

Mariana Trench image by 1840489pavan, https://commons.wikimedia.org/wiki/File:Mariana-trench.jpg


With all the recent cyber activity, there has been a renewed interest in threat modeling. Generally speaking, this is a good thing. But what should we threat model, you ask? The answer is everything. And when do we need the threat modeling to be done? Why, yesterday, of course.

We all know that everything and yesterday are unattainable goals. Still, those of us in the threat modeling community do our best with the time and material we have to work with. Leaving aside the aspect of time for a moment, let’s look deeper into the material aspect.

When I say material, I’m referring to both the scope of the threat model and what we know about it. For the longest time, organizations would play “hide the data” and call that their security solution. That might take the form of “in a secure facility,” “in a sealed box,” “using obfuscated code,” and the like. The key here is that security was a function of someone else’s moat protecting your crown jewels. Over time, we threw an ever-increasing number of resources at the creation and sophistication of those moats.

The problem with moats is that unless you never need to get from inside to outside, there’s always a well-paved path to get across it. Now you say, “we’ve got that covered, we watch everything coming and going.” Now in addition to the moat, you’ve got to invest in more and more complex gatekeepers. And the problem with gatekeepers is that they create friction. Friction expresses itself as the consumption of resources. For computers, those resources are time, processing power and storage.

When we create computer-based products, there’s a desire to make them as cheaply and as quickly as possible. Setting aside the cost aspect, let’s consider time-to-market and how it’s impacted by the aforementioned gatekeepers. Making systems work is hard enough, but once you add encryption and authentication and secure enclaves and the like, well, it gets really hard. The natural tendency is to make it work and then make it secure (if you have time left in your schedule).

So, by the time the average company engages security, one of two scenarios is in play. Scenario one: there’s a default assumption that security is someone else’s problem (moat-land). Scenario two: security can be overlaid in the remaining time and space.

The problem with the first scenario is that it’s been a flawed model since the 1980s (probably earlier). As soon as you allow people to connect things to a greater world, someone is going to connect them in a way that you didn’t expect. And someone is going to get over your moat. Then where will you be? Our entire history as a species is replete with Trojan horse tales. Why would we imagine that the technology realm would be any different?

The problem with the second scenario is that it treats security as a feature and not an emergent property of the system. Simply put, “you can’t bolt security on.” You certainly can’t secure the complex systems we now assemble on a daily basis from an assortment of open source, third-party, and proprietary bits when you try to do so after the fact.

This certainly paints a gloomy picture for security. But what, you may ask, does this have to do with threat modeling and what is compositional threat modeling? Both good questions. Let’s get into it.

Fundamentally, threat modeling is a design activity. That is, threat modeling considers a system’s design and evaluates whether sound security principles are being used. There are myriad ways to undertake this consideration, probably the most famous being Adam Shostack‘s four questions framework. This methodology cuts to the core of threat modeling. There are any number of tools available to help automate and systematize the threat modeling process. All of them presume a fully-formed system design. Now, given that the system may have been assembled from multiple sources, you may not have that fully-formed design available. What does that mean? It means that you don’t get a complete picture of the system’s design deficiencies. Now, if everyone knows this, the organization is able to make reasonable risk decisions. Many times, however, non-security individuals look at the results of a threat model and consider them the totality of possible deficiencies. This is the reality of the world.

There’s another issue with threat modeling the system in its totality after the fact. A threat model is a kind of network diagram and is subject to the same kind of combinatorial explosion. This has two negative impacts. First, it’s hard to perform a systematic analysis of the model in a timely fashion when you have hundreds of interconnections to consider. Second, when deficiencies number in the thousands, the development teams and their management are unlikely to want to even look at them. They are, after all, only possible issues.
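
A back-of-the-envelope calculation shows why this blows up. For n elements in a fully meshed model there are n(n-1)/2 potential flows, and each flow gets examined against some per-flow checklist (six categories if you use STRIDE; the multiplier here is purely illustrative):

```python
# Rough illustration of the combinatorial growth; numbers are made up.

def potential_flows(n_elements: int) -> int:
    """Fully meshed model: every pair of elements could exchange data."""
    return n_elements * (n_elements - 1) // 2

for n in (5, 20, 60):
    flows = potential_flows(n)
    findings = flows * 6  # six per-flow categories, as an illustrative multiplier
    print(f"{n:3d} elements -> up to {flows:5d} flows -> up to {findings:6d} findings to triage")
```

Sixty elements is not a large system, and it’s already in the ten-thousand-findings range that nobody will read.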

So, how can we address these diverse issues? I use a technique that I call compositional threat modeling.

Let’s consider a really simplistic example. The following diagram shows two things communicating across a trust boundary.

I could choose to threat model this system as a whole, in which case I would need to consider both the impact of the inbound data flow on me (Thing 1) and the impact of my outbound data flow on Thing 2. Alternatively, I could consider only things that impact me and treat Thing 2 as out of scope. This would yield a focused set of deficiencies applying only to Thing 1. I could then perform a complementary analysis with the focus on Thing 2. Now I have a pair that I can say forms a fully modeled system by composition.
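
Here’s a tiny sketch of that scoping exercise, assuming a hypothetical per-element filter rather than any particular tool’s notion of scope. Each focused pass keeps only the flows that terminate at the element under analysis, and the union of the passes covers the whole system:

```python
flows = [
    ("Thing 2", "Thing 1", "inbound request"),    # crosses the trust boundary
    ("Thing 1", "Thing 2", "outbound response"),  # crosses the trust boundary
]

def in_scope_for(element, flows):
    """Keep only the flows whose destination is the element under analysis."""
    return [flow for flow in flows if flow[1] == element]

model_thing_1 = in_scope_for("Thing 1", flows)  # deficiencies affecting Thing 1 only
model_thing_2 = in_scope_for("Thing 2", flows)  # the complementary analysis

composed = set(model_thing_1) | set(model_thing_2)
assert composed == set(flows)  # together, the focused models cover the whole system
```

Nothing is lost by splitting the work; the two focused models, taken together, see every flow the whole-system model would have seen.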

The advantages to this approach are numerous. The deficiencies identified are highly focused, so teams will be more likely to consider them. The methodology scales well. The threat models are more lightweight (easier to maintain). The threat modeling process can more readily accommodate a diversity of element sources and different timelines of inclusion or availability. Threat models can be created and shared in ways that do not expose organizational IP. The threat models are easier to navigate (multiple resolution views are a natural consequence).

References:

Rock strata image by Matt Affolter (QFL247), CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=14715935
