Archive for the ‘Cybersecurity’ Category

When creating a class, it’s important to have a motivating example. In my experience, people learn best when they can see an immediate application to their own work. In the area of cybersecurity, this can be difficult. Examples in this space tend to be either too esoteric (return-oriented programming) or too divorced from the domain (credit card theft).

I’ve just finished creating a two-hour software security fundamentals class for management and developers. It is intended to provide a framework for integrating security into the software development process: build it in vs. bolt it on. As I was putting the class together, the motivating example was just out of reach.

The push-back that must be overcome is the belief that a process for dealing with security issues already exists: the standard quality assurance process merely needs to be extended to include security-related testing, right?

Let’s look at that assertion for a moment.

How exactly does quality assurance work? Well, it’s based, by and large, on the flawed hypothesis model. Starting with the documentation, test cases are created to verify the assertions made therein. From there, scenarios are imagined and likewise verified. If issues (bugs) are discovered, testers attempt to generalize them; any generalizations that hold point to larger problems in the code.

Sounds good, what’s the problem?

Consider the internet joke:

A QA engineer walks into a bar. They order a beer, then 0 beers, then 999999999 beers, then a lizard, then -1 beers, then an eawlirensadk.

A customer walks into the bar and asks where the bathroom is. The bar bursts into flames, killing everyone.

That’s pretty much the problem with the flawed hypothesis model. You only verify the things you think of. If you’re only looking at how the bar serves beer, you’ll never catch issues involving other aspects of the system (here, bathroom location).
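To put the joke into code: the toy test suite below covers every order anyone hypothesized, and still says nothing about the bathroom. (The bar API is entirely made up for illustration.)

```python
# A toy "bar" plus the tests someone imagined for it -- purely illustrative.
def order_beers(count):
    if not isinstance(count, int) or count < 0:
        raise ValueError("invalid order")
    return f"{count} beer(s) served"

def test_orders():
    assert order_beers(1) == "1 beer(s) served"
    assert order_beers(0) == "0 beer(s) served"
    assert order_beers(999999999) == "999999999 beer(s) served"
    for bad_order in ("lizard", -1, "eawlirensadk"):
        try:
            order_beers(bad_order)
            assert False, "expected the order to be rejected"
        except ValueError:
            pass

# Every imagined input is verified, yet no test ever asks where the bathroom
# is -- the unanticipated workflow ships untested.
test_orders()
```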

It’s a bit extreme as a motivating example, but everyone can relate to it, which is, of course, the point.

From there, the concept of flaws vs. bugs can emerge. QA finds bugs. On a good day, these may point to flaws. So, what’s the difference? For the purposes of discussion, flaws are design defects and bugs are implementation (code) defects. By its very nature, QA does not test design, only implementation.

At this point, management asks the question: isn’t this how it’s always been? Generally speaking, yes. Long gone are the days when people used program design language (PDL) to reason about the soundness of their software. At that time, security wasn’t much of a focus.

Enter threat modeling. By its very nature, threat modeling allows us to reason about the design. Why? Because it focuses not on the documentation, but rather on the data flows and, by extension, the workflows of the system. Because we abstract ourselves from the implementation, we can reason about the system in ways that point us directly to security flaws.

To relate the impact to the real world, one has only to look at the cost to Samsung of not catching a design flaw in the Note 7 prior to release (US$17B). IBM estimates that, relative to catching an issue at the design stage, the cost is 6.5 times higher in the implementation stage, 15 times higher during testing, and 100 times higher after release.
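A quick back-of-the-envelope sketch of what those multipliers mean (the stage multipliers are the IBM figures quoted above; the $1,000 design-stage baseline is an arbitrary assumption):

```python
# Relative cost of fixing a single defect, per the IBM multipliers above.
# The design-stage baseline of $1,000 is a made-up number for illustration.
BASELINE_DESIGN_COST = 1_000  # dollars (hypothetical)

STAGE_MULTIPLIER = {
    "design": 1.0,
    "implementation": 6.5,
    "testing": 15.0,
    "post-release": 100.0,
}

for stage, multiplier in STAGE_MULTIPLIER.items():
    print(f"{stage:>15}: ${BASELINE_DESIGN_COST * multiplier:,.0f}")
```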

I’m in no way advocating the elimination of QA testing. You need both, as well as the processes we do in between, such as code reviews and static/dynamic analysis. But again, discovering issues in these later stages of development is going to be more expensive. Defense-in-depth will always give you a better result. This is true not only of security, but of the development process itself.

As I was finishing up my software security fundamentals class, the news broke regarding a high-profile technology firm that exposed the private data (images) of millions of individuals via their developer APIs. This is probably a case of failing to threat model their system. This isn’t the first time that this particular company has failed miserably in the area of security. It points out, in ways which greatly assist my efforts to get management on board, that the flawed hypothesis model is no substitute for critical analysis of the design itself.

As a system grows in complexity, it is critical to abstract out the minutiae and let the data flows point toward possible issues in the design. Threat modeling is one technique, but not the only one, that makes that possible.


I get called upon to do fairly incongruous things. One day it’ll be C++ usage recommendations. Another will find me preparing background materials for upper management. Some days, I’m prototyping. Always something new.

As of late, I’ve been bringing modern software threat modeling to the development teams. Threat modeling is one of those things that, for the most part, exists only in the realm of the mythical cybersecurity professionals. This is a sad thing. I’m doing what I can to change people’s perceptions in that regard.

Within cybersecurity, there is a saying: “You can either build it in or bolt it on.” As with mechanical systems, bolting stuff on guarantees a weak point and, usually, a lack of symmetry. From the software development standpoint, attempting to add security after the fact is usually a punishing task. It is both invasive and time-consuming.

But the bolt-on world is the natural response for those who use the flawed hypothesis model of cybersecurity analysis. The appeal of flawed hypothesis analysis lies in the fact that you can do it without much more than the finished product and its documentation. You can poke and prod the software based on possible threats that the documentation points toward. From the specific anticipated threats, one can generalize and then test. The problem is that this methodology is only as good as the documentation, intuition, and experience of those doing the analysis.

So, what’s a software development organization to do?

Enter threat modeling. Instead of lagging the development, you lead it. Instead of attacking the product, you reason about its data flow abstraction. In doing so, you learn how your design decisions impact your susceptibility to attack. From there, you can quantify the risk associated with any possible threats and make reasoned decisions as to what needs to be addressed and in what order. Pretty much the polar opposite of the “death by a thousand cuts” approach of the flawed hypothesis model.

Sounds reasonable, but how do we get there?

Let me start by saying that you don’t create a threat model. You create a whole pile of threat models. These models represent various levels of resolution into your system. While it is true that you could probably create an über threat model (one to rule them all, and such), you’d end up with the graphical equivalent of the Julia Set. What I’ve found much more manageable is a collection of models representing various aspects of a system.

Since the 1970s, we’ve had the very tool we’ll use to create our models: the data flow diagram. The really cool thing about DFDs is that they consist of just four components. In order to adapt them to threat modeling, we need to add only one more. The most important piece is the data store. After all, there’s not much to look at in a computer system that doesn’t actually handle some sort of data. We manipulate the data via processes. These agents act upon the data, which moves via flows. And finally we need external actors, because if the data just churns inside the computer, again, there’s not much of interest. That’s it. You can fully describe any system using only those four primitives.
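To make the four primitives concrete, here is a minimal sketch in Python; the class names and the login example are my own illustration, not any standard notation:

```python
from dataclasses import dataclass

# The four classic DFD primitives, sketched as plain data classes.
@dataclass(frozen=True)
class ExternalActor:   # someone or something outside the system
    name: str

@dataclass(frozen=True)
class Process:         # anything that manipulates data
    name: str

@dataclass(frozen=True)
class DataStore:       # anywhere data sits at rest
    name: str

@dataclass(frozen=True)
class DataFlow:        # data moving between two of the elements above
    name: str
    source: object
    destination: object

# A tiny example: a user logging in to a web application.
user   = ExternalActor("User")
webapp = Process("Web application")
userdb = DataStore("User database")

flows = [
    DataFlow("Credentials", user, webapp),
    DataFlow("Account lookup", webapp, userdb),
]
```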

Okay, you can describe the system, but how does this relate to threat modeling? To make the leap from DFD to threat model, we’ll need one more primitive. We need a way to designate boundaries that data flows cross. These we call threat boundaries. Not the world’s most imaginative nomenclature, but hey, it’s simple and easy to learn.

Once we have a particular DFD based on a workflow, we add boundaries where they make sense. Between the physical device and the outside world; or the application and the operating system; or the application and its libraries; or between two processes; or … (you get the idea). Again, the threat model isn’t intended to be the application. It’s an abstraction. And as Box said, “all models are wrong … but some are useful.” It helps to keep in mind what Alfred Korzybski said: “the map is not the territory.” Anyone who’s traveled on a modern transit system would never confuse the transit map for the area geography. Nod to Harry Beck.
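As an illustration of what the boundary buys you, the sketch below (elements reduced to plain strings for brevity; the boundary placement and names are hypothetical) flags exactly those flows that cross from the outside world into the application:

```python
# Flows as (source, destination, label); the boundary is the set of elements
# considered "inside".  Placement and names are hypothetical.
flows = [
    ("User", "Web application", "Credentials"),
    ("Web application", "User database", "Account lookup"),
]
inside_application = {"Web application", "User database"}

def crosses(flow, inside):
    source, destination, _label = flow
    return (source in inside) != (destination in inside)

# The flows that cross the boundary are the ones to examine for threats.
for source, destination, label in flows:
    if crosses((source, destination, label), inside_application):
        print(f"{label}: {source} -> {destination} crosses the boundary")
```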

With the boundary-enhanced DFD, we can get to work. For the particular road I travel, we reason about the threat model using a STRIDE analysis. We consider each of the elements astride (pun intended) each data flow with respect to each of the six aspects of STRIDE: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. Not all aspects apply to all combinations of our four primitives; there are tables for that. Each of these can be appraised logically. No chicken entrails required. At the end of the day, you have a collection of things you don’t have answers to. So, you bring in the subject matter experts (SMEs) to answer them. When you are done, what remains are threats.
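Those “tables for that” look roughly like the mapping below. This is the commonly published STRIDE-per-element guidance (e.g. in Adam Shostack’s Threat Modeling); treat it as a starting point rather than gospel:

```python
# Which STRIDE categories typically apply to which DFD element type.
STRIDE_PER_ELEMENT = {
    "external actor": {"Spoofing", "Repudiation"},
    "process":        {"Spoofing", "Tampering", "Repudiation",
                       "Information disclosure", "Denial of service",
                       "Elevation of privilege"},
    "data store":     {"Tampering", "Repudiation",   # repudiation mainly for logs
                       "Information disclosure", "Denial of service"},
    "data flow":      {"Tampering", "Information disclosure",
                       "Denial of service"},
}

def questions_for(element_name, element_kind):
    """Generate the questions to put to the SMEs for one element."""
    for threat in sorted(STRIDE_PER_ELEMENT[element_kind]):
        yield f"Could '{element_name}' be subject to {threat.lower()}?"

for question in questions_for("Credentials flow", "data flow"):
    print(question)
```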

Threats. Spiffy. But not all threats are equal. Not in potential for damage, or likelihood, or interest. For a goodly length of time, this was a big problem with the whole threat modeling proposition. Lots of stuff, but no objective way to triage it.

Enter the Common Vulnerability Scoring System (CVSS). This is the Veg-O-Matic of threat risk quantification. CVSS considers the means, complexity, temporality, and impact areas of a threat, and from these it computes a vulnerability score. Now you have a ranking of the most important things to consider.
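For the curious, here is a minimal sketch of the CVSS v3.1 base-score arithmetic for the simple case where Scope is Unchanged; the metric weights come from the published specification:

```python
import math

# CVSS v3.1 base score, Scope Unchanged only.  Weights per the specification.
AV  = {"network": 0.85, "adjacent": 0.62, "local": 0.55, "physical": 0.20}
AC  = {"low": 0.77, "high": 0.44}
PR  = {"none": 0.85, "low": 0.62, "high": 0.27}   # Scope Unchanged values
UI  = {"none": 0.85, "required": 0.62}
CIA = {"high": 0.56, "low": 0.22, "none": 0.0}

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # CVSS "round up to one decimal place"
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Network-reachable, low complexity, no privileges or user interaction,
# high confidentiality impact only -> 7.5
print(base_score("network", "low", "none", "none", "high", "none", "none"))
```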

For many industries, we could stop right here and use the CVSS score directly. Not so in the land of FDA regulation, for that land adds another dimension: patient safety (PS) impact. The augmented CVSS-PS ranking gives us a way to objectively rate the threats from most to least severe.
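The exact augmentation is organization-specific, but purely as an illustration, a CVSS-PS ranking might sort on a patient-safety tier first and the CVSS score second; everything in the sketch below (tiers, threats, scores) is hypothetical:

```python
# Hypothetical threats: (description, CVSS base score, patient-safety tier),
# where a higher tier means greater potential for patient harm.
threats = [
    ("Debug port left enabled",       6.8, 1),
    ("Unauthenticated dosage change", 8.1, 3),
    ("Telemetry readable in transit", 7.5, 2),
]

# Rank on patient safety first, CVSS second, most severe at the top.
for name, cvss, ps in sorted(threats, key=lambda t: (t[2], t[1]), reverse=True):
    print(f"PS tier {ps}  CVSS {cvss:>3}  {name}")
```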

Now, we can take these ranked threats and present them, complete with SME feedback, to the core and risk teams for determination and disposition.

But we’re really not done. The threat modeling process isn’t one-and-done. We take what we learn and build it into the base assumptions of future products. Once the product is built, we compare it to the original threat model to verify that the model still represents reality.

Well, that was a lot of exposition. Where’s the facilitation and teaching?

Actually, the exposition was the teaching. And the teaching was an explanation of how I go about facilitating.

When all is said and done, a threat model needs to be built in. That is, engineering owns it. The whole facilitation thing, that’s a skill. It needs to live in the teams, not in some adjunct cybersecurity group. Applying CVSS consistently takes a bit of practice, but again we’re back to facilitation.

As to actually teaching threat modeling, that takes the better part of a day. Lots of decomposition, details and diagrams. I like teaching. It’s a kind of cognitive spreading of the wealth. The same is true of facilitation, just more one-to-one.

