Posts Tagged ‘cybersecurity’

I’ve not been posting much as of late. Actually, it’s been quite some time.

It’s not for lack of things to write about. I’ve got about thirty books that I need to write reviews for. I have a bunch of things I’d like to write about concerning threat modeling and cybersecurity more broadly.

I’ve been putting all my creative energy over the past three and a half years into the care and feeding of the Autonomous Vehicle Cybersecurity Development Lifecycle (AVCDL). It’s been a huge undertaking. I’ve learned a great deal throughout its creation and defense. It’s been assessed twice now, against an international technical standard (ISO/SAE 21434) and a global regulatory one (UN R155). I’ve created over a thousand pages of documentation embodied in nearly a hundred different documents. I’ve created two visual languages to communicate the concepts it contains. Most recently, I’ve been working on the creation of training materials to go along with it. I’ve also given presentations on it, and a few weeks ago I was interviewed regarding the assessment process.

This brings me back to why I’m writing this post. I’ve created a new page to track all the videos I’m either in or responsible for, and wanted to give it some visibility. I’m not expecting a huge uptick in views, but it’s nice to have this information in one place.

Read Full Post »

With all the recent cyber activity, there has been a renewed interest in threat modeling. Generally speaking, this is a good thing. But what should we threat model, you ask? The answer is everything. And when do we need the threat modeling to be done? Why, yesterday, of course.

We all know that everything and yesterday are unattainable goals. Still, those of us in the threat modeling community do our best with the time and material we have to work with. Leaving aside the aspect of time for a moment, let’s look deeper into the material aspect.

When I say material, I’m referring to both the scope of the threat model and what we know about it. For the longest time organizations would play “hide the data” and call that their security solution. That might take the form of “in a secure facility,” “in a sealed box,” “using obfuscated code,” and the like. The key here is that security was a function of someone else’s moat protecting your crown jewels. Over time, we threw an ever increasing number of resources at the creation and sophistication of those moats.

The problem with moats is that unless you never need to get from inside to outside, there’s always a well-paved path to get across it. Now you say, “we’ve got that covered, we watch everything coming and going.” Now in addition to the moat, you’ve got to invest in more and more complex gatekeepers. And the problem with gatekeepers is that they create friction. Friction expresses itself as the consumption of resources. For computers, those resources are time, processing power and storage.

When we create computer-based products, there’s a desire to make them as cheaply and as quickly as possible. Setting aside the cost aspect, let’s consider time-to-market and how it’s impacted by the aforementioned gatekeepers. Making systems work is hard enough, but once you add encryption and authentication and secure enclaves and the like, well, it gets really hard. The natural tendency is to make it work and then make it secure (if you have time left in your schedule).

So, by the time the average company engages security, one of two scenarios is in play. Scenario one: there’s a default assumption that security is someone else’s problem (moat-land). Scenario two: security can be overlaid in the remaining time and space.

The problem with the first scenario is that it’s been a flawed model since the 1980s (probably earlier). As soon as you allow people to connect things to a greater world, someone is going to connect them in a way that you didn’t expect. And someone is going to get over your moat. Then where will you be? Our entire history as a species is replete with Trojan horse tales. Why would we imagine that the technology realm would be any different?

The problem with the second scenario is that it treats security as a feature and not an emergent property of the system. Simply put, “you can’t bolt security on.” You certainly can’t secure the complex systems we now assemble on a daily basis from an assortment of open source, third-party, and proprietary bits when you try to do so after the fact.

This certainly paints a gloomy picture for security. But what, you may ask, does this have to do with threat modeling and what is compositional threat modeling? Both good questions. Let’s get into it.

Fundamentally, threat modeling is a design activity. That is, threat modeling considers a system’s design and evaluates whether sound security principles are being used. There are myriad ways to undertake this consideration, probably the most famous being Adam Shostack’s four-question framework. This methodology cuts to the core of threat modeling. There are any number of tools available to help automate and systematize the threat modeling process. All of them presume a fully-formed system design. Now, given that the system may have been assembled from multiple sources, you may not have that fully-formed design available. What does that mean? It means that you don’t get a complete picture of the system’s design deficiencies. Now, if everyone knows this, the organization is able to make reasonable risk decisions. Many times, however, non-security individuals look at the results of a threat model and consider them the totality of possible deficiencies. This is the reality of the world.

There’s another issue with threat modeling the system in its totality after the fact. A threat model is a kind of network diagram and subject to the same kind of combinatorial explosion. This has two negative impacts. First, it’s hard to perform a systematic analysis of the model in a timely fashion when you have hundreds of interconnections to consider. Second, when deficiencies number in the thousands, the development teams and their management are unlikely to want to even look at them. They are, after all, only possible issues.

So, how can we address these diverse issues? I use a technique that I call compositional threat modeling.

Let’s consider a really simplistic example. The following diagram shows two things communicating across a trust boundary.

I could choose to threat model this system as a whole, in which case I would need to consider both the impact of the inbound data flow on me (Thing 1) and the impact of my outbound data flow on Thing 2. Alternatively, I could consider only things that impact me (treating Thing 2 as out of scope). This would yield a focused set of deficiencies applying only to Thing 1. I could then perform a complementary analysis with the focus on Thing 2. Now I have a pair of models that, by composition, form a fully modeled system.
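To make the idea concrete, here is a minimal sketch in Python. The names (Flow, ComponentModel, compose) are purely illustrative and not taken from any particular tool; it simply scopes each model to the flows terminating at one component and obtains the system view by composition.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass(frozen=True)
class Flow:
    """A data flow between two elements; assume each flow crosses the trust boundary."""
    source: str
    target: str


@dataclass
class ComponentModel:
    """A threat model scoped to a single component ('thing')."""
    component: str
    flows: List[Flow]
    findings: List[str] = field(default_factory=list)

    def analyze(self) -> List[str]:
        # Only reason about flows that terminate at this component;
        # everything else is out of scope for this model.
        for flow in self.flows:
            if flow.target == self.component:
                self.findings.append(
                    f"{self.component}: validate inbound data from {flow.source}")
        return self.findings


def compose(*models: ComponentModel) -> List[str]:
    """The system-level view is simply the union of the per-component findings."""
    return [finding for model in models for finding in model.findings]


# Two things communicating across a trust boundary, as in the diagram.
flows = [Flow("Thing 1", "Thing 2"), Flow("Thing 2", "Thing 1")]

thing1 = ComponentModel("Thing 1", flows)
thing2 = ComponentModel("Thing 2", flows)
thing1.analyze()
thing2.analyze()

print(compose(thing1, thing2))
```

The point is that each component model only has to reason about what touches it; the system-level picture falls out of the union, and either half can be shared without exposing the other’s internals.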

The advantages of this approach are numerous. The deficiencies identified are highly focused, so teams will be more likely to consider them. The methodology scales well. The threat models are more lightweight (easier to maintain). The threat modeling process can more readily accommodate a diversity of element sources and different timelines of inclusion or availability. Threat models can be created and shared in ways that do not expose organizational IP. The threat models are easier to navigate (multiple resolution views are a natural consequence).


Read Full Post »

Last week (17-21 August 2020) I had the pleasure of being on staff (as a trainer/facilitator) for the first joint MDIC / FDA / MITRE Medical Device Cybersecurity Threat Modeling Bootcamp. The mind behind the training material was Adam Shostack. Originally planned as in-person training, it had to shift to on-line delivery because of the pandemic.

The objectives of the bootcamp (from the MDIC site):

  • Intensive, hands-on sessions on threat modeling.
  • Learn about structured, systematic and comprehensive approach to threat modeling for engineering more secure systems from SMEs from public and private sector.
  • Learn the latest updates on medical device cybersecurity and related areas from representatives of FDA and industry.
  • Networking opportunity with SMEs from MedTech and non-MedTech sectors to learn on cybersecurity best practices that can be incorporated into the medical device industry
  • Contribute to the discussions on the development of Medical Device Threat Modeling Playbook

For anyone not familiar with Adam’s threat modeling training methodology, it is a highly interactive, small group focused training. When the training staff got together for three days in Washington, D.C. in February of 2020, this was the way we pre-flighted the bootcamp. To his credit, Adam deconstructed the material and re-envisioned it for a remote audience.

This first bootcamp had about sixty participants from across the medical device industry and included manufacturers, HDOs, and regulators. The training provided a good introduction to the concepts of threat modeling and encouraged an appreciation of the needs of development, security, management, and regulators. Instead of the typical classroom-style presentations followed by table-based group interactions, we had topic-based videos (which the participants viewed in a dedicated on-line learning system), individual assignments, whole-bootcamp presentations, and group working sessions.

Since one outcome of this bootcamp was to assist in the creation of a “playbook” for medical device threat modeling, the entire procedure was shadowed by members of the working group responsible for that effort.

So, what was my take-away as a trainer and practitioner?

Providing live distance learning is hard. The dynamic is completely different from in-person training. I’ve been taking remote live distance training classes since the proto-Coursera Machine Learning and Database classes from Stanford, nearly a decade ago. As a learner, the ability to stop the video and take notes and go back over things was invaluable. The lack of interactivity with the instructor was a drawback. This was my first experience on the other side of the screen. As a trainer and facilitator, keeping remote participants on-topic and on-schedule was challenging. Having the ability to use multiple computers (one for interaction [43″ 4K display] and another for staff side-channel discussion) was invaluable. In an in-person setting, I’d’ve had to leave the group or try and flag down another staff member, distracting from the flow.

Observationally, I think the dynamic of the participants is a bit diminished. Typically, you’d have breaks, during which participants would exchange ideas and make connections. At the end of the day, groups would have dinner together and discuss what they’d learned in greater detail.

I believe that, overall, the training was successful. My group indicated that they’d come away with a better understanding of threat modeling and a greater appreciation of the context in which the activity exists. We have another session coming up and I’m sure that it will incorporate all the lessons learned from this one. I’m looking forward to it.

The training is focused on threat modeling generally and so those not in the medical device industry would also profit from it. If you’re interested, I recommend that you visit the MDIC site linked above.

Read Full Post »

One of the common misconceptions I encounter when explaining threat modeling to people is that of operating system scale. This is one of those cases where size really does matter.

When threat modeling, there is a desire to do as little work as possible. By that I mean that you shouldn’t model the same thing multiple times. Model it once, put it in a box, and move on. It’s furniture.

We do this to allow us to focus on the stuff we’re developing and not third-party or open source bits.

When it comes to operating systems, however, I don’t have just one border to deal with as I would with, say, a vendor-provided driver. The thing we casually refer to as an operating system is actually a many-layered beast and should be treated as such. When we do, the issue of OS scale disappears in a puff of abstraction smoke.

So, what is this so far unexplained scale?

Let’s rewind a bit to the original computers. They were slow, small (computationally and with respect to storage) and, in the grand scheme of things, pretty simple. There was no operating system. The computer executed a single program. The program was responsible for all operational aspects of its existence.

As computers became more sophisticated, libraries were created to provide standardized components, allowing developers to focus on the core application and not the plumbing. Two of these libraries stand out: the mass storage and communications libraries. We would eventually refer to these as the file system and network.

When computers began expanding their scope and user base, the need for a mechanism to handle first sequential, then later multiple jobs led to the development of a scheduling, queueing and general task management suite.

By the time Unix was introduced, this task manager was surrounded by access management, program development tools, general utilities and games. Because, well, games.

For users of these systems, the OS became shorthand for “stuff we didn’t need to write.”

The odd thing is that, on the periphery, there existed a class of systems too small or too specialized to use or even require this one-stop-shopping OS. These were the embedded systems. For decades, these purpose-built computers ran one program. They included everything from thermostats to digital thermometers. (And yes, those Casio watches with the calculator built in.)

Over time, processors got a lot more powerful and a lot smaller. The combination of which made it possible to run those previously resource hungry desktop class operating systems in a little tiny box.

But what happens when you want to optimize for power and space? You strip the operating systems down to their base elements and only use the ones you need.

This is where our OS sizing comes from.

I like to broadly divide operating systems into four classes:

  • bare metal
  • static library
  • RTOS
  • desktop / server

Each of these presents unique issues when threat modeling. Let’s look at each in turn.

Bare Metal

Probably the easiest OS level to threat model is bare metal. Since there’s nothing from third-party sources, development teams should be able to easily investigate and explain how potential threats are managed.

Static Library

I consider this the most difficult level. Typically, the OS vendor provides sources that the development team builds into their system. Questions arise around OS library modification, testing specific to the target / toolchain combination, and the threat model of the OS itself. The boundaries can become really muddy. One nice thing is that the only OS elements are the ones explicitly included. Additionally, you can typically exclude aspects of the libraries you don’t use. Doing so, however, breaks the de-risking boundary, as the OS vendor probably didn’t test your pared-down version.

RTOS

An RTOS tends to be an easier level than a desktop / server one. This is because the OS has been stripped down and tuned for performance and space. As such, bits which would otherwise be lying about for an attacker to leverage are out of play. This OS type may, however, complicate modeling, since unique behaviors may present themselves.

Desktop / Server

This is the convention center of operating systems. Anything and everything that anyone has ever used or asked for may be, and probably is, available. This is generally a bad thing. On the upside, this level tends to provide sophisticated access control mechanisms. On the downside, meshing said mechanisms with other people’s systems isn’t always straightforward. In the area of configuration, since this is provided by the OS vendor, it’s pretty safe to assume that any configuration-driven custom version has been tested by the vendor.

OS and Threat Modeling

When threat modeling, I take the approach of treating the OS as a collection of services. Doing so, the issue of OS level goes away. I can visually decompose the system into logical data flows into process, file system, and network services, rather than into a generic OS object. It also lets me put OS-provided drivers on the periphery, more closely modeling the physicality of the system.

It’s important to note that this approach requires that I create multiple threat model diagrams representing various levels of data abstraction. Generally speaking, the OS is only present at the lowest level. As we move up the abstraction tree, the OS goes away and only the data flow between the entities and resources which the OS was intermediating will be present.

Let’s consider an application communicating via a custom protocol. At the lowest level, the network manages TCP/UDP traffic. We need to ensure that these are handled properly as they transit the OS network service. At the next level we have the management of the custom protocol itself. In order to model this properly, we need the network service not to be involved in the discussion. Finally, at the software level, we consider how the payload is managed (let’s presume that it’s a command protocol).
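Here is a minimal sketch of that layering (Python, with purely illustrative element names). Each abstraction level is its own small model, and the OS network service shows up only at the lowest one:

```python
# Each abstraction level is its own small model: (elements, flows). Only the
# lowest level mentions the OS network service; higher levels reason about the
# protocol and then the command payload without it. Names are illustrative.
levels = {
    "transport": {
        "elements": ["remote peer", "OS network service", "application"],
        "flows": [("remote peer", "OS network service", "TCP/UDP traffic"),
                  ("OS network service", "application", "socket reads/writes")],
    },
    "protocol": {
        "elements": ["remote peer", "protocol handler"],
        "flows": [("remote peer", "protocol handler", "framed protocol messages")],
    },
    "command": {
        "elements": ["protocol handler", "command processor"],
        "flows": [("protocol handler", "command processor", "command payload")],
    },
}

for name, model in levels.items():
    has_os = any(element.startswith("OS") for element in model["elements"])
    print(f"{name:9s} OS service present: {has_os}")
```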

Nowhere in the above example does the OS level have any impact on how the system would be modeled. By decomposing the OS into services and treating layers uniformly, we gain the ability to treat any OS like furniture. It’s there, but once you’ve established that it behaves properly, you move on.

Read Full Post »

When I was an undergraduate, I heard a story about a DEC PDP 11/70 at a nearby school that had a strange hardware mod. A toggle switch had been added by someone and wired into the backplane apparently. The switch had two settings, “magic” and “more magic.” The identity of the individual or individuals having made the mod was lost. For as long as anyone could remember, the switch had been in the “magic” position. One day, some brave soul decided to find out what happened when the “more magic” setting was selected. Upon flipping the toggle, the machine crashed. It thereafter resisted any attempts to get it to run. After a bit, they gave up, flipped the toggle back to “magic“, power cycled the machine and hoped. The machine returned to its previous state of operational happiness. One could say that they’d been trying to achieve too much of a good thing.

We might read this and come away with the idea that, well, they just should have gotten on with their work and not worried about the plumbing. That’s certainly the majority view, from what I’ve seen. But why was the switch there in the first place? If it only worked in one position, shouldn’t they have just wired things without the switch?

Let’s consider the temporal aspect: no one remembered who, when, or why, let alone what. It may well have been the case that “more magic” once actually worked. Who can say? That whole documentation thing.

When I work with project teams and individual developers, I have a habit of saying “no magic.” It comes from having heard this story. I’ll say this when working with individuals whose code I’m reviewing, teams whose architecture I’m reviewing, or leads and architects while facilitating the creation of threat models. I don’t care whether the magic manifests as constants (magic numbers) from who-knows-where or logic that’s more convoluted than a Gordian Knot. Basically, if the reason that something is being used exists without the benefit of understanding, it shouldn’t be there. I don’t care who put it there or how smart they were. Someday someone is going to come along and try to change things and it will all go south. Enough of these in a code review and it’s likely to get a single summary review comment of “no.”

How does this relate to security? Do you know what the auto-completion for “never implement your” is? I’ll let you try that on your own, just to verify. Never implement your own crypto[graphic functions]. Why? Where should I start? The math is torturous. The implementation is damn hard to do right. Did you know that you can break poorly implemented crypto via timing analysis? Even if you don’t roll your own crypto, are you using some open source library or the one from the operating system? Do you know when to use which? Are you storing your keys properly?

Magic, all of it.

Some people believe that security can be achieved by obscuring things. These also tend to be the same people who’ve never used a decompiler. You’d be amazed what can be achieved with “a lot of tape and a little patience.”

If your goal is to have software and systems that are secure, you can’t have magic. Ever.

So, when I see a company with a core philosophy of “move fast, break things,” I think, well, aren’t they going to have more attack surface than a two-pound chunk of activated carbon? Not amazingly, they did, and we are worse off because of it.

You can’t secure software-based systems unless you understand how the pieces play together. You can’t understand how the pieces play together until you understand how each piece behaves. You can’t understand how a piece behaves if it’s got magic floating around in it. It’s also important to not just glom onto a technique or technology because it’s handy or trendy. As Brian Kernighan and P.J. Plauger said, “it is dangerous to believe that blind application of any particular technique will lead to good programs[2].”

While you’re out there moving fast and tossing things over the wall, keep in mind that someone else, moving equally fast, is stitching them together with other bits. The result of which will also be tossed over another wall. And while it is true that some combinations of these bits produce interesting and quite useful results, what is the totality of their impact? At what point are we simply trusting that the pieces we’re using are not only correct and appropriate to our use, but don’t have any unintended consequences when combined in the way we have done?

You need to know that every part does the thing you intend it to do. That it does it correctly. And that it does nothing you don’t intend. Otherwise, you’re going to have problems.

I’ll close with another story. In the dim days, before people could use the Internet (big I), there were a number of networks. These were eventually interconnected, hence the name interconnected networks, or Internet for short. Anyway, back in the day (early ’80s), universities were attaching to the Internet backbone, which was in and of itself pretty normal. What was not normal was when someone accidentally mounted a chunk of the Andrew File System (AFS) onto an Internet node. It ended up mounting the entirety of AFS on the Internet. This had the unexpected side effect of making a vast number of students’ previously unprotected emails publicly available to anyone with Internet access. Mostly that meant other university students. AFS wasn’t actually designed to be connected to anything else at that time. Bit of a scandal.

Unintended consequences.


  1. Image credit: Magic Book by Colgreyis, © Creative Commons Attribution 3.0 License.
  2. Kernighan and Plauger, Software Tools, 1976, page 2, paragraph 4.

Read Full Post »

When creating a class, it’s important to have a motivating example. In my experience, people learn best when they can see an immediate application to their own work. In the area of cybersecurity, this can be difficult. Examples in this space tend to be either too esoteric (return-oriented programming) or too divorced from the domain (credit card theft).

I’ve just finished up the creation of a two-hour software security fundamentals class for management and developers. This is intended to provide a framework for integrating security into the software development process. Build it in vs. bolt it on. As I was putting the class together, the motivating example was just out of reach.

The push-back that must be overcome is that there already exists a process for dealing with security issues. It’s an extension to the standard quality assurance process. This process merely needs to be extended to include security-related testing, right?

Let’s look at that assertion for a moment.

How exactly does quality assurance work? Well, it’s based, by and large, on the flawed hypothesis model. Starting with the documentation, test cases are created to verify the assertions made therein. From there, scenarios are imagined. These are likewise verified. If issues (bugs) are discovered, generalizations are attempted. Any that are found point to larger problems in the code.

Sounds good, what’s the problem?

Consider the internet joke:

A QA engineer walks into a bar. They order a beer, then order 0 beers, then order 999999999 beers, then order a lizard, then order -1 beers, then order an eawlirensadk.

A customer walks into the bar and asks where the bathroom is. The bar bursts into flames, killing everyone.

That’s pretty much the problem with the flawed hypothesis model. You only verify the things you think of. If you’re only looking at how the bar serves beer, you’ll never catch issues involving other aspects of the system (here, bathroom location).

It’s a bit extreme as a motivating example, but everyone can relate to it, which is, of course, the point.

From there, the concept of flaws vs. bugs can emerge. QA finds bugs. On a good day, these may point to flaws. So, what’s the difference? For the purposes of discussion, flaws are design defects and bugs are implementation (code) defects. By its very nature, QA does not test design, only implementation.

At this point, management asks the question, isn’t this how it’s always been? Generally speaking, yes. Long gone are the days when people used program design language (PDL) to reason about the soundness of their software. At that time, security wasn’t much of a focus.

Enter threat modeling. By its very nature, threat modeling allows us to reason on the design. Why? Because it focuses not on the documentation, but rather on the data flows and, by extension, the workflows of the system. Because we abstract ourselves from the implementation, we can reason about the system in ways that point us directly to security flaws.

To relate the impact to the real world, one has only to look at the cost to Samsung of not catching a design flaw in the Note 7 prior to release (US$17B). IBM estimates that, relative to catching an issue at the design stage, the cost is 6.5 times higher in the implementation stage, 15 times higher during testing, and 100 times higher after release.

I’m in no way advocating the elimination of QA testing. You need both. As well as the processes we do in between, such as code reviews and static / dynamic analysis. But again, discovering issues in these stages of development is going to be more expensive. Defense-in-depth will always give you a better result. This is true not only in security, but the development process itself.

As I was finishing up my software security fundamentals class, the news broke regarding a high-profile technology firm that exposed the private data (images) of millions of individuals via their developer APIs. This is probably a case of failing to threat model their system. This isn’t the first time that this particular company has failed miserably in the area of security. It points out, in ways which greatly assist my efforts to get management on-board, that the flawed hypothesis model is no substitute for critical analysis of the design itself.

As a system grows in complexity, it is critical to abstract out the minutiae and let the data flows point toward possible issues in the design. Threat modeling is one technique, but not the only one, that makes that possible.

Read Full Post »

I get called upon to do fairly incongruous things. One day it’ll be C++ usage recommendations. Another will find me preparing background materials for upper management. Some days, I’m prototyping. Always something new.

As of late, I’ve been bringing modern software threat modeling to the development teams. Threat modeling is one of those things that, for the most part, exists only in the realm of the mythical cybersecurity professionals. This is a sad thing. I’m doing what I can to change people’s perceptions in that regard.

Within cybersecurity, there is a saying: “You can either build it in or bolt it on.” As with mechanical systems, bolting stuff on guarantees a weak point and, usually, a lack of symmetry. From the software development standpoint, attempting to add security after the fact is usually a punishing task. It is both invasive and time-consuming.

But the bolt-on world is the natural response for those who use the flawed hypothesis model of cybersecurity analysis. The appeal of the flawed hypothesis analysis lies in the fact that you can do it without much more than the finished product and its documentation. You can poke and prod the software based on possible threats that the documentation points toward. From the specific anticipated threats one can generalize and then test. The problem is that this methodology is only as good as the documentation, intuition, and experience of those doing the analysis.

So, what’s a software development organization to do?

Enter threat modeling. Instead of lagging the development, you lead it. Instead of attacking the product, you reason about its data flow abstraction. In doing so, you learn about how your design decisions impact your susceptibility to attack. From there, you can quantify the risk associated with any possible threats and make reasoned decisions as to what things need to be addressed and in what order. Pretty much the polar opposite of the “death by a thousand cuts” approach of the flawed hypothesis model.

Sounds reasonable, but how do we get there?

Let me start by saying that you don’t create a threat model. You create a whole pile of threat models. These models represent various levels of resolution into your system. While it is true that you could probably create an über threat model (one to rule them all, and such), you’d end up with the graphical equivalent of the Julia Set. What I’ve found much more manageable is a collection of models representing various aspects of a system.

Since the 1970s, we’ve had the very tool which we’ll use to create our models. That tool is the data flow diagram. The really cool thing about DFDs is that they consist of just four components. In order to adapt them to threat modeling we need add only one more. The most important piece is the data store. After all, there’s not much to look at in a computer system that doesn’t actually handle some sort of data. We manipulate the data via processes. These agents act upon the data, which moves via flows. And finally we need external actors, because if the data just churns inside the computer, again, not much interest. That’s it. You can fully describe any system using only those four primitives.

Okay, you can describe the system, but how does this relate to threat modeling? To make the leap from DFD to threat model, we’ll need one more primitive. We need a way to designate the boundaries that data flows cross. These we call threat boundaries. Not the world’s most imaginative nomenclature, but hey, it’s simple and easy to learn.

Once we have a particular DFD based on a workflow, we add boundaries where they make sense. Between the physical device and the outside world; or the application and the operating system; or the application and its libraries; or between two processes; or … (you get the idea). Again, the threat model isn’t intended to be the application. It’s an abstraction. And as Box said, “all models are wrong … but some are useful.” It helps to keep in mind what Alfred Korzybski said, “the map is not the territory.” Anyone who’s traveled on a modern transit system would never confuse the transit map for the area geography. Nod to Harry Beck.

With the boundary-enhanced DFD, we can get to work. For the particular road I travel, we reason about the threat model using a STRIDE analysis. We consider each of the elements astride (pun intended) each data flow with respect to each of the six aspects of STRIDE: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. Not all aspects apply to all combinations of our four primitives. There are tables for that. Each of these can be appraised logically. No chicken entrails required. At the end of the day, you have a collection of things you don’t have answers to. So, you bring in the subject matter experts (SMEs) to answer them. When you are done, what remains are threats.
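As a rough sketch of how mechanical this part can be (Python, with a toy diagram whose element names are purely illustrative), the commonly cited STRIDE-per-element chart maps each element kind to the aspects worth asking about, and the boundary-crossing flows drive the enumeration:

```python
from enum import Enum, auto


class Kind(Enum):
    EXTERNAL = auto()   # external actor
    PROCESS = auto()
    STORE = auto()      # data store
    FLOW = auto()       # data flow


# The commonly cited STRIDE-per-element chart: S=Spoofing, T=Tampering,
# R=Repudiation, I=Information disclosure, D=Denial of service,
# E=Elevation of privilege. (R on a data store is usually only relevant
# when the store holds audit logs.)
APPLICABLE = {
    Kind.EXTERNAL: "SR",
    Kind.PROCESS:  "STRIDE",
    Kind.STORE:    "TRID",
    Kind.FLOW:     "TID",
}

# A toy boundary-enhanced DFD: element name -> kind, plus flows marked with
# whether they cross a threat boundary. All names are illustrative.
elements = {"user": Kind.EXTERNAL, "web app": Kind.PROCESS, "database": Kind.STORE}
flows = [("user", "web app", True), ("web app", "database", False)]

# Enumerate the questions to take to the SMEs: for each boundary-crossing
# flow, consider the applicable STRIDE letters for the flow itself and for
# the elements on either side of it.
for src, dst, crosses_boundary in flows:
    if not crosses_boundary:
        continue
    for letter in APPLICABLE[Kind.FLOW]:
        print(f"flow {src} -> {dst}: consider {letter}")
    for name in (src, dst):
        for letter in APPLICABLE[elements[name]]:
            print(f"element {name}: consider {letter} (flow {src} -> {dst})")
```

The output of a pass like this is exactly the collection of open questions that goes to the SMEs.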

Threats. Spiffy. But not all threats are equal. Not in potential for damage, or likelihood, or interest. For a goodly length of time, this was a big problem with the whole threat modeling proposition. Lots of stuff, but no objective way to triage it.

Enter the Common Vulnerability Scoring System (CVSS). This is the Veg-O-Matic of threat risk quantification. CVSS considers the means, complexity, temporality and impact areas of a threat. From these it computes a vulnerability score. Now you have a ranking of what the most important things to consider are.

For many industries, we could stop right here and use the CVSS score directly. Not so in the land of FDA regulation, for that land adds another dimension: patient safety (PS) impact. The augmented CVSS-PS ranking guides us to a proper way to objectively rate the threats from most to least severe.
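As a sketch of the ranking step only (Python): the CVSS base scores below are assumed to come from a standard CVSS calculator, and the patient-safety weighting is a made-up illustration of the idea, not the actual CVSS-PS rubric.

```python
# Illustrative only: the CVSS base scores are assumed to come from a standard
# CVSS calculator, and the patient-safety (PS) adjustment is a made-up
# weighting for the sake of the example, not the actual CVSS-PS rubric.
PS_WEIGHT = {"none": 0.0, "minor": 0.5, "serious": 1.5, "catastrophic": 3.0}

threats = [
    {"name": "spoofed therapy commands", "cvss": 8.1, "ps": "catastrophic"},
    {"name": "telemetry disclosure",     "cvss": 6.5, "ps": "minor"},
    {"name": "audit log tampering",      "cvss": 5.3, "ps": "none"},
]


def cvss_ps(threat):
    # Cap at 10 so the combined score stays on the familiar 0-10 CVSS scale.
    return min(threat["cvss"] + PS_WEIGHT[threat["ps"]], 10.0)


# Rank from most to least severe before taking the list to the core / risk teams.
for t in sorted(threats, key=cvss_ps, reverse=True):
    print(f'{cvss_ps(t):4.1f}  {t["name"]} (CVSS {t["cvss"]}, PS {t["ps"]})')
```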

Now, we can take these ranked threats and present them, complete with SME feedback, to the core and risk teams for determination and disposition.

But we’re really not done. The threat modeling process isn’t one-and-done. We take what we learn and build it into the base assumptions of future products. Once the product is built, we compare it to the original threat model to verify that the model still represents reality.

Well, that was a lot of exposition. Where’s the facilitation and teaching?

Actually, the exposition was the teaching. And the teaching was an explanation of how I go about facilitating.

When all is said and done, a threat model needs to be built in. That is, engineering owns it. The whole facilitation thing, that’s a skill. It needs to live in the teams, not in some adjunct cybersecurity group. Applying CVSS consistently takes a bit of practice, but again we’re back to facilitation.

As to actually teaching threat modeling, that takes the better part of a day. Lots of decomposition, details and diagrams. I like teaching. It’s a kind of cognitive spreading of the wealth. The same is true of facilitation, just more one-to-one.

Read Full Post »