Archive for the ‘threat modeling’ Category

Last week (17-21 August 2020) I had the pleasure of being on the staff (trainer/facilitator) of the first joint MDIC / FDA / MITRE Medical Device Cybersecurity Threat Modeling Bootcamp. The mind behind the training material was Adam Shostack. Originally planned as an in-person training, the pandemic forced a shift to on-line delivery.

The objectives of the bootcamp (from the MDIC site):

  • Intensive, hands-on sessions on threat modeling.
  • Learn about structured, systematic and comprehensive approach to threat modeling for engineering more secure systems from SMEs from public and private sector.
  • Learn the latest updates on medical device cybersecurity and related areas from representatives of FDA and industry.
  • Networking opportunity with SMEs from MedTech and non-MedTech sectors to learn on cybersecurity best practices that can be incorporated into the medical device industry
  • Contribute to the discussions on the development of Medical Device Threat Modeling Playbook

For anyone not familiar with Adam’s threat modeling training methodology, it is a highly interactive, small group focused training. When the training staff got together for three days in Washington, D.C. in February of 2020, this was the way we pre-flighted the bootcamp. To his credit, Adam deconstructed the material and re-envisioned it for a remote audience.

This first bootcamp had about sixty participants from across the medical device industry, including manufacturers, HDOs and regulators. The training provided a good introduction to the concepts of threat modeling and encouraged an appreciation of the needs of development, security, management and regulators. Instead of the typical classroom-style presentation followed by table-based group interactions, we had topic-based videos which the participants viewed in a dedicated on-line learning system, individual assignments, whole-bootcamp presentations and group working sessions.

Since one outcome of this bootcamp was to assist in the creation of a “playbook” for medical device threat modeling, the entire proceeding was shadowed by members of the working group responsible for that effort.

So, what was my take-away as a trainer and practitioner?

Providing live distance learning is hard. The dynamic is completely different from in-person training. I’ve been taking remote live distance training classes since the proto-Coursera Machine Learning and Database classes from Stanford, nearly a decade ago. As a learner, the ability to stop the video and take notes and go back over things was invaluable. The lack of interactivity with the instructor was a drawback. This was my first experience on the other side of the screen. As a trainer and facilitator, keeping remote participants on-topic and on-schedule was challenging. Having the ability to use multiple computers (one for interaction [43″ 4K display] and another for staff side-channel discussion) was invaluable. In an in-person setting, I’d’ve had to leave the group or try and flag down another staff member, distracting from the flow.

Observationally, I think the dynamic of the participants is a bit diminished. Typically, you’d have breaks, during which participants would exchange ideas and make connections. At the end of the day, groups would have dinner together and discuss what they’d learned in greater detail.

I believe that, overall, the training was successful. My group indicated that they’d come away with a better understanding of threat modeling and a greater appreciation of the context in which the activity exists. We have another session coming up and I’m sure that it will incorporate all the lessons learned from this one. I’m looking forward to it.

The training is focused on threat modeling generally and so those not in the medical device industry would also profit from it. If you’re interested, I recommend that you visit the MDIC site linked above.


One of the common misconceptions I encounter when explaining threat modeling to people is that of operating system scale. This is one of those cases where size really does matter.

When threat modeling, there is a desire to do as little work as possible. By that I mean that you shouldn’t model the same thing multiple times. Model it once, put it in a box, and move on. It’s furniture.

We do this to allow us to focus on the stuff we’re developing and not third-party or open source bits.

When it comes to operating systems, however, I don’t have just one border to deal with as I would with, say, a vendor-provided driver. The thing we casually refer to as an operating system is actually a many-layered beast and should be treated as such. When we do, the issue of OS scale disappears in a puff of abstraction smoke.

So, what is this so far unexplained scale?

Let’s rewind a bit to the original computers. They were slow, small (computationally and with respect to storage) and, in the grand scheme of things, pretty simple. There was no operating system. The computer executed a single program. The program was responsible for all operational aspects of its existence.

As computers became more sophisticated, libraries were created to provide standardized components, allowing developers to focus on the core application and not the plumbing. Two of these libraries stand out: the mass storage and communications libraries. We would eventually refer to these as the file system and network.

When computers began expanding their scope and user base, the need for a mechanism to handle first sequential, then later multiple jobs led to the development of a scheduling, queueing and general task management suite.

By the time Unix was introduced, this task manager was surrounded by access management, program development tools, general utilities and games. Because, well, games.

For users of these systems, the OS became shorthand for “stuff we didn’t need to write.”

The odd thing is that, on the periphery, there existed a class of systems too small or too specialized to use or even require this one-stop-shopping OS. These were the embedded systems. For decades, these purpose-built computers ran one program. They included everything from thermostats to digital thermometers. (And yes, those Casio watches with the calculator built in.)

Over time, processors got a lot more powerful and a lot smaller. The combination of which made it possible to run those previously resource hungry desktop class operating systems in a little tiny box.

But what happens when you want to optimize for power and space? You strip the operating systems down to their base elements and only use the ones you need.
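The stripping-down described above can be sketched as simple set arithmetic. This is a hypothetical illustration; the feature names are invented for the example, not drawn from any particular OS:

```python
# Hypothetical sketch: paring a full OS feature set down to only the
# services a device actually needs. Feature names are illustrative.
full_os = {"scheduler", "filesystem", "network", "shell",
           "compiler", "games", "printing", "gui"}
needed = {"scheduler", "filesystem", "network"}

stripped = full_os & needed      # what actually ships on the device
out_of_play = full_os - needed   # bits an attacker can no longer leverage
```

The point of the sketch: everything in `out_of_play` is attack surface that simply doesn't exist on the optimized device.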

This is where our OS sizing comes from.

I like to broadly divide operating systems into four classes:

  • bare metal
  • static library
  • RTOS
  • desktop / server

Each of these presents unique issues when threat modeling. Let’s look at each in turn.

Bare Metal

Probably the easiest OS level to threat model is bare metal. Since there’s nothing from third-party sources, development teams should be able to easily investigate and explain how potential threats are managed.

Static Library

I consider this the most difficult level. Typically, the OS vendor provides sources which development builds into their system. Questions arise around OS library modification, testing specific to the target / tool chain combination, and the threat model of the OS itself. The boundaries can become really muddy. One nice thing is that the only OS elements are the ones explicitly included. Additionally, you can typically exclude aspects of the libraries you don’t use. Doing so, however, breaks the de-risking boundary, as the OS vendor probably didn’t test your pared-down version.


RTOS

An RTOS tends to be an easier level than a desktop / server one. This is because the OS has been stripped down and tuned for performance and space. As such, bits which would otherwise be lying about for an attacker to leverage are out of play. This OS type may, however, complicate modeling, as its unique behaviors can surface.

Desktop / Server

This is the convention center of operating systems. Anything and everything that anyone has ever used or asked for may be, and probably is, available. This is generally a bad thing. On the upside, this level tends to provide sophisticated access control mechanisms. On the downside, meshing said mechanisms with other people’s systems isn’t always straightforward. As for configuration, since it is provided by the OS vendor, it’s pretty safe to assume that any configuration-driven custom version has been tested by the vendor.

OS and Threat Modeling

When threat modeling, I take the approach of treating the OS as a collection of services. Doing so, the issue of OS level goes away. I can visually decompose the system into logical data flows into process, file system and network services, rather than into a generic OS object. It also lets me put OS-provided drivers on the periphery, more closely modeling the physicality of the system.
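The service decomposition can be sketched in code. This is a minimal, hypothetical model, not the API of any real threat modeling tool; the element and flow names are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical, minimal model elements -- illustrative only.
@dataclass(frozen=True)
class Element:
    name: str
    trusted: bool = False  # inside our development boundary?

@dataclass(frozen=True)
class Flow:
    source: Element
    sink: Element
    label: str

# Instead of one opaque "OS" node, model the services we actually touch.
app = Element("Custom App", trusted=True)
fs_service = Element("File System Service")
net_service = Element("Network Service")
proc_service = Element("Process Service")

flows = [
    Flow(app, fs_service, "config read/write"),
    Flow(app, net_service, "custom protocol over TCP"),
    Flow(app, proc_service, "spawn worker"),
]

# Flows crossing the trust boundary are the ones demanding analysis.
crossing = [f for f in flows if f.source.trusted != f.sink.trusted]
```

With the OS broken into services, each boundary-crossing flow gets examined on its own terms rather than being lumped into one app-to-OS arrow.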

It’s important to note that this approach requires that I create multiple threat model diagrams representing various levels of data abstraction. Generally speaking, the OS is only present at the lowest level. As we move up the abstraction tree, the OS goes away and only the data flow between the entities and resources which the OS was intermediating will be present.

Let’s consider an application communicating via a custom protocol. At the lowest level, the network manages TCP/UDP traffic. We need to ensure that these are handled properly as they transit the OS network service. At the next level we have the management of the custom protocol itself. In order to model this properly, we need for the network service to not be involved in the discussions. Finally, at the software level, we consider how the payload is managed (let’s presume that it’s a command protocol).
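The layered views just described can be captured as a simple table of which elements appear in each diagram. Again, a hypothetical sketch with invented names:

```python
# Each abstraction level lists only the elements in its diagram.
levels = {
    "transport": ["App", "OS Network Service", "Remote Peer"],  # TCP/UDP
    "protocol": ["App", "Remote Peer"],           # custom protocol, OS elided
    "payload": ["Command Handler", "Remote Peer"],  # command semantics only
}

# The OS should show up only at the lowest level; higher levels model
# the direct conversation the OS was intermediating.
os_levels = [name for name, elems in levels.items()
             if any("OS" in e for e in elems)]
```

A quick check that `os_levels` contains only the transport view confirms the diagrams respect the abstraction rule.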

Nowhere in the above example does the OS level have any impact on how the system would be modeled. By decomposing the OS into services and treating layers uniformly, we gain the ability to treat any OS like furniture. It’s there, but once you’ve established that it behaves properly, you move on.

