
In my previous post, Line Upon Line: Compositional Threat Modeling, I made the case for compositional threat modeling (CTM). In this post, I’ll explore how CTM is already being used unintentionally and why we need to adopt an intentional approach.

It's one thing to suggest that we should be intentional in our use of CTM, but quite another to assert that we've been using it already. Both are true. Let me explain. I will, of course, incorporate a bit of personal history, music, and shameless plugs. If you stay with me on this, you'll be treated to a cross between James Burke- and Carl Sagan-esque tale weaving.

In the early 1980s, I read Ken Thompson's Turing Award Lecture, Reflections on Trusting Trust. In it, Thompson describes how the C compiler itself can be compromised in what we would today describe as a supply chain attack. I never again looked at my development tools in the same way. Until that point, I'd never considered that the very tools I used to build secure systems could betray me, but in that lecture, Thompson made it very clear that I'd created a model of the world that was just wrong. And this gets to the heart of why CTM is important.

When we undertake the threat modeling activity, we are reasoning on a model of the design of a system. Note that there are two levels of indirection in that statement. That's important. Essentially, we're reasoning on a model of a model. This is the point at which I'll shamelessly plug Adam Shostack for his shout-out to George Box for observing, "All models are wrong but some are useful." As to why this is an important observation, we need to consider what Alan Turing wrote a quarter of a century earlier in his 1952 paper The Chemical Basis of Morphogenesis.

“This model will be a simplification and an idealization, and consequently a falsification. It is to be hoped that the features retained for discussion are those of greatest importance in the present state of knowledge.”

It's important to keep in mind that the model is a lie that we hope retains sufficient features to be useful. Problems occur when we forget that and start treating the model as though it were the actual system. To quote Alfred Korzybski, "a map is not the territory."

In order for us to reason on the security of the design of a system, we discard most of the information regarding both the system and its design. And that’s okay, so long as the features retained for discussion are those of greatest importance in the present state of knowledge with respect to security.

Unfortunately, in the process of our relatively recent adoption of threat modeling as a security activity, we seem to have taken the approach called out in In the Real World, from The Alan Parsons Project's 1985 album Stereotomy: "Don't wanna live my life in the real world." By this I mean that the models we're working with contain fewer features than are needed to produce sufficiently expressive results.

This is not to say that all the threat modeling results we get on a daily basis are bad. We can keep the baby and not drink the bath water. But we've been leaving stuff out, and some of that stuff is important. What's more important, though, is that the consumers of the issues identified by the threat models generally believe that we've completely juiced that orange.

This brings us back to Ken Thompson and compositional threat modeling. We trust our tools. We trust our operating environments. (Why I use the term operating environment and not operating system is covered in my post Does That Come in a Large? OS Scale in Threat Modeling.) We either leave the operating environment out of the model entirely or treat it as implicitly out of scope. We tend to do the same with open source and third party libraries. This is a bad thing.

You would be right to argue that including the operating environment or other underlying bits would make our threat models overly complex and hard to maintain. I wouldn't argue that point. Instead I would argue for treating them as a composition. Do the work to establish why you believe that you can trust them. Then and only then can you safely call them out of scope and move on to working at a higher level of abstraction. That's the beauty of CTM. Any given threat model is applied only to protect its element. Once you've done that, you can compose the elements and deal with what's shared between them. Typically, that's not much. If I have two systems, A and B, each of which requires an authenticated source, then when I connect them I have bi-directional source/destination authentication by composition. I trust the bits at a lower abstraction level because they have already been modeled and shown to be secure, not because I've taken it on faith. As the Russian proverb popularized by Ronald Reagan goes, "trust, but verify."
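The A-and-B example can be sketched in a few lines of purely illustrative Python. The class, function, and property names here are hypothetical, invented for this sketch, not taken from any real threat modeling tool:

```python
from dataclasses import dataclass, field


@dataclass
class Element:
    """One independently threat-modeled element (hypothetical structure)."""
    name: str
    # Security properties this element's own threat model has already verified.
    verified: set = field(default_factory=set)


def compose(a: Element, b: Element) -> set:
    """Connect two modeled elements.

    Properties verified on both sides hold across the shared surface,
    so only what is NOT in this intersection needs fresh analysis.
    """
    return a.verified & b.verified


# Both A and B require an authenticated source, so the composed link
# is bi-directionally authenticated without re-modeling either element.
thing_a = Element("A", {"authenticated-source"})
thing_b = Element("B", {"authenticated-source"})
print(compose(thing_a, thing_b))  # {'authenticated-source'}
```

The point of the intersection is the "trust, but verify" step: a property survives composition only because each element's own model has already established it.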

When we apply CTM we no longer have big bang threat models. We have manageable, composable ones. From the outside, the system presents a model with a single surface. That's important because that's what an entity interacting with it sees. We don't see the database underlying a web-hosted site. We see a socket-connected protocol. And even that view is an abstraction. It is through composition that we can consider what's really important to the activity of threat modeling: the application of controls to places where they're missing but needed.

In a future post, I’ll get into more specifics as to how to apply CTM.


With all the recent cyber activity, there has been a renewed interest in threat modeling. Generally speaking, this is a good thing. But what should we threat model, you ask? The answer is everything. And when do we need the threat modeling to be done? Why, yesterday, of course.

We all know that everything and yesterday are unattainable goals. Still, those of us in the threat modeling community do our best with the time and material we have to work with. Leaving aside the aspect of time for a moment, let’s look deeper into the material aspect.

When I say material, I'm referring to both the scope of the threat model and what we know about it. For the longest time, organizations would play "hide the data" and call that their security solution. That might take the form of "in a secure facility," "in a sealed box," "using obfuscated code," and the like. The key here is that security was a function of someone else's moat protecting your crown jewels. Over time, we threw an ever-increasing number of resources at the creation and sophistication of those moats.

The problem with moats is that unless you never need to get from inside to outside, there’s always a well-paved path to get across it. Now you say, “we’ve got that covered, we watch everything coming and going.” Now in addition to the moat, you’ve got to invest in more and more complex gatekeepers. And the problem with gatekeepers is that they create friction. Friction expresses itself as the consumption of resources. For computers, those resources are time, processing power and storage.

When we create computer-based products, there's a desire to make them at as low a cost and as quickly as possible. Setting aside the cost aspect, let's consider time-to-market and how it's impacted by the aforementioned gatekeepers. Making systems work is hard enough, but once you add encryption and authentication and secure enclaves and the like, well, it gets really hard. The natural tendency is to make it work and then make it secure (if you have time left in your schedule).

So, by the time the average company engages security, one of two scenarios is in play. Scenario one: there’s a default assumption that security is someone else’s problem (moat-land). Scenario two: security can be overlaid in the remaining time and space.

The problem with the first scenario is that it's been a flawed model since the 1980s (probably earlier). As soon as you allow people to connect things to a greater world, someone is going to connect them in a way that you didn't expect. And someone is going to get over your moat. Then where will you be? Our entire history as a species is replete with Trojan horse tales. Why would we imagine that the technology realm would be any different?

The problem with the second scenario is that it treats security as a feature and not an emergent property of the system. Simply put, "you can't bolt security on." You certainly can't secure the complex systems we now assemble on a daily basis from an assortment of open source, third-party, and proprietary bits when you try to do so after the fact.

This certainly paints a gloomy picture for security. But what, you may ask, does this have to do with threat modeling and what is compositional threat modeling? Both good questions. Let’s get into it.

Fundamentally, threat modeling is a design activity. That is, threat modeling considers a system's design and evaluates whether sound security principles are being used. There are myriad ways to undertake this consideration, probably the most famous being Adam Shostack's four questions framework. This methodology cuts to the core of threat modeling. There are any number of tools available to help automate and systematize the threat modeling process. All of them presume a fully-formed system design. Now, given that the system may have been assembled from multiple sources, you may not have that fully-formed design available. What does that mean? It means that you don't get a complete picture of the system's design deficiencies. If everyone knows this, the organization is able to make reasonable risk decisions. Many times, however, non-security individuals look at the results of a threat model and consider them the totality of possible deficiencies. This is the reality of the world.

There's another issue with threat modeling the system in its totality after the fact. A threat model is a kind of network diagram and subject to the same kind of combinatorial explosion. This has two negative impacts. First, it's hard to perform a systematic analysis of the model in a timely fashion when you have hundreds of interconnections to consider. Second, when deficiencies number in the thousands, the development teams and their management are unlikely to want to even look at them. They are, after all, only possible issues.
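To make the explosion concrete, here's a hypothetical back-of-the-envelope calculation in Python (the component counts are invented for illustration): a monolithic model must, in the worst case, consider every pairwise interconnection, while a compositional one considers each element internally plus only the seams between elements.

```python
from math import comb


def monolithic_links(n_components: int) -> int:
    """Worst case for one big model: every component may talk to every other."""
    return comb(n_components, 2)


def compositional_links(component_sizes: list[int]) -> int:
    """Model each element internally, then only the seams between elements."""
    internal = sum(comb(k, 2) for k in component_sizes)
    seams = comb(len(component_sizes), 2)
    return internal + seams


# 20 components as one big model vs. four elements of five components each.
print(monolithic_links(20))               # 190 interconnections to analyze
print(compositional_links([5, 5, 5, 5]))  # 4*10 internal + 6 seams = 46
```

The absolute numbers are invented, but the shape of the curve is the point: the monolithic count grows quadratically in the total component count, while the compositional count grows quadratically only within each (smaller) element.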

So, how can we address these diverse issues? I use a technique that I call compositional threat modeling.

Let’s consider a really simplistic example. The following diagram shows two things communicating across a trust boundary.

I could choose to threat model this system as a whole, in which case I would need to consider both the impact of the inbound data flow on me (Thing 1) and the impact of my outbound data flow on Thing 2. Alternatively, I could consider only things that impact me and treat Thing 2 as out of scope. This would yield a focused set of deficiencies applying only to Thing 1. I could then perform a complementary analysis with the focus on Thing 2. Now I have a pair that, together, form a fully modeled system by composition.
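The split can be sketched as a filter over the system's data flows. This is a hypothetical illustration with invented names: each focused model keeps only the flows that can impact its own element, and the two focused models together cover the whole system.

```python
# The two cross-boundary flows from the Thing 1 / Thing 2 diagram.
flows = [
    {"src": "Thing2", "dst": "Thing1", "crosses_boundary": True},
    {"src": "Thing1", "dst": "Thing2", "crosses_boundary": True},
]


def model_for(element: str) -> list[dict]:
    """Focused model: keep only flows that can impact this element,
    i.e. inbound flows that cross the trust boundary."""
    return [f for f in flows if f["dst"] == element and f["crosses_boundary"]]


# Each focused model sees exactly one flow...
thing1_model = model_for("Thing1")
thing2_model = model_for("Thing2")

# ...and together, by composition, they cover every flow in the system.
assert len(thing1_model) + len(thing2_model) == len(flows)
```

Each focused analysis is small enough to review in one sitting, and the composition check at the end is what lets you claim full coverage.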

The advantages to this approach are numerous. The deficiencies identified are highly focused, so teams will be more likely to consider them. The methodology scales well. The threat models are more lightweight (easier to maintain). The threat modeling process can more readily accommodate a diversity of element sources and different timelines of inclusion or availability. Threat models can be created and shared in ways that do not expose organizational IP. And the threat models are easier to navigate (multiple resolution views are a natural consequence).
