Archive for December 17th, 2018

When creating a class, it’s important to have a motivating example. In my experience, people learn best when they can see an immediate application to their own work. In the area of cybersecurity, this can be difficult. Examples in this space tend to be either too esoteric (return-oriented programming) or too divorced from the domain (credit card theft).

I’ve just finished up the creation of a two-hour software security fundamentals class for management and developers. This is intended to provide a framework for integrating security into the software development process. Build it in vs. bolt it on. As I was putting the class together, the motivating example was just out of reach.

The push-back that must be overcome is that there already exists a process for dealing with security issues. It’s an extension to the standard quality assurance process. This process merely needs to be extended to include security-related testing, right?

Let’s look at that assertion for a moment.

How exactly does quality assurance work? Well, it’s based, by and large, on the flawed hypothesis model. Starting with the documentation, test cases are created to verify the assertions made therein. From there, scenarios are imagined. These are likewise verified. If issues (bugs) are discovered, generalizations are attempted. Any generalizations found point to larger problems in the code.

Sounds good, what’s the problem?

Consider the internet joke:

A QA engineer walks into a bar. They order a beer, then order 0 beers, then order 999999999 beers, then order a lizard, then order -1 beers, then order an eawlirensadk.

A customer walks into the bar and asks where the bathroom is. The bar bursts into flames, killing everyone.

That’s pretty much the problem with the flawed hypothesis model. You only verify the things you think of. If you’re only looking at how the bar serves beer, you’ll never catch issues involving other aspects of the system (here, bathroom location).
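The joke can be sketched in code. This is a minimal, hypothetical example (the `Bar` class and its methods are invented for illustration): every test below is derived from the documented behavior of one function, which is exactly how the flawed hypothesis model works.

```python
# A hypothetical system under test. Only serve_beer() is documented,
# so only serve_beer() accumulates test cases.
class Bar:
    def serve_beer(self, quantity):
        """Documented: serve a non-negative integer number of beers."""
        if not isinstance(quantity, int) or quantity < 0:
            raise ValueError("invalid quantity")
        return f"{quantity} beer(s) served"

    def locate_bathroom(self):
        # Undocumented path: no tester hypothesized it would be
        # exercised, so no test case was ever written for it.
        raise RuntimeError("bar bursts into flames")

bar = Bar()

# Test cases generated from the documentation (the flawed hypothesis model):
assert bar.serve_beer(1) == "1 beer(s) served"
assert bar.serve_beer(0) == "0 beer(s) served"
assert bar.serve_beer(999999999) == "999999999 beer(s) served"
for bad in (-1, "lizard", "eawlirensadk"):
    try:
        bar.serve_beer(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass  # rejected as expected -- coverage of serve_beer() looks solid

# locate_bathroom() was never hypothesized, so it was never tested;
# calling it here would raise RuntimeError.
```

Every assertion passes, and the test suite reports complete success, while the untested path remains a standing failure.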

It’s a bit extreme as a motivating example, but everyone can relate to it, which is, of course, the point.

From there, the concept of flaws vs. bugs can emerge. QA finds bugs. On a good day, these may point to flaws. So, what’s the difference? For the purposes of discussion, flaws are design defects and bugs are implementation (code) defects. By its very nature, QA does not test design, only implementation.

At this point, management asks the question, isn’t this how it’s always been? Generally speaking, yes. Long gone are the days when people used program design language (PDL) to reason about the soundness of their software. At that time, security wasn’t much of a focus.

Enter threat modeling. By its very nature, threat modeling allows us to reason on the design. Why? Because it focuses not on the documentation, but rather on the data flows and, by extension, the workflows of the system. Because we abstract ourselves from the implementation, we can reason about the system in ways that point us directly to security flaws.
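One common way to reason over data flows is STRIDE-style review at trust boundaries. The sketch below is hypothetical (the system elements and flows are invented for illustration, not taken from any real product): enumerating flows from the design, and flagging every trust-boundary crossing for review, surfaces places to look without touching a line of implementation.

```python
# The six STRIDE threat categories.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
]

# Each data flow from the design: (source, destination, crosses_trust_boundary).
# A hypothetical photo-sharing system, sketched at the design level.
data_flows = [
    ("mobile app", "public API", True),
    ("public API", "photo store", False),
    ("third-party developer", "developer API", True),
    ("developer API", "photo store", False),
]

# Every flow that crosses a trust boundary gets a full STRIDE review.
# The design, not the implementation, tells us where to look.
flagged = [(src, dst) for src, dst, crosses in data_flows if crosses]
for src, dst in flagged:
    print(f"{src} -> {dst}: review against all {len(STRIDE)} STRIDE categories")
```

Note that the developer API shows up as a boundary crossing purely from the shape of the design, before any code exists to test.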

To relate the impact to the real world, one has only to look at the cost to Samsung of not catching a design flaw in the Note 7 prior to release (US$17B). IBM estimates that, relative to catching an issue at the design stage, the cost is 6.5 times higher in the implementation stage, 15 times higher during testing, and 100 times higher after release.
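The arithmetic behind those multipliers is worth making concrete. A small sketch, using the IBM figures quoted above with the design-stage cost normalized to 1:

```python
# Relative cost of fixing one defect, normalized to the design stage,
# per the IBM estimates cited in the text.
multipliers = {
    "design": 1.0,
    "implementation": 6.5,
    "testing": 15.0,
    "after release": 100.0,
}

for stage, m in multipliers.items():
    print(f"{stage}: {m:g}x the design-stage cost")

# A defect that slips from implementation all the way to release costs
# roughly 100 / 6.5, or about 15 times more to fix.
slip_factor = multipliers["after release"] / multipliers["implementation"]
```

The takeaway: even catching an issue one stage earlier pays for itself several times over.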

I’m in no way advocating the elimination of QA testing. You need both, as well as the processes in between, such as code reviews and static/dynamic analysis. But again, discovering issues in these later stages of development is going to be more expensive. Defense-in-depth will always give you a better result. This is true not only in security, but in the development process itself.

As I was finishing up my software security fundamentals class, the news broke regarding a high-profile technology firm that exposed the private data (images) of millions of individuals via their developer APIs. This is probably a case of failing to threat model their system. This isn’t the first time that this particular company has failed miserably in the area of security. It points out, in ways which greatly assist my efforts to get management on board, that the flawed hypothesis model is no substitute for critical analysis of the design itself.

As a system grows in complexity, it is critical to abstract out the minutiae and let the data flows point toward possible issues in the design. Threat modeling is one technique, but not the only one, that makes that possible.

