One of the common misconceptions I encounter when explaining threat modeling to people is that of operating system scale. This is one of those cases where size really does matter.
When threat modeling, there is a desire to do as little work as possible. By that I mean that you shouldn’t model the same thing multiple times. Model it once, put it in a box, and move on. It’s furniture.
We do this so we can focus on the stuff we’re developing and not the third-party or open source bits.
When it comes to operating systems, however, I don’t have just one border to deal with as I would with, say, a vendor-provided driver. The thing we casually refer to as an operating system is actually a many-layered beast and should be treated as such. When we do, the issue of OS scale disappears in a puff of abstraction smoke.
So, what is this so far unexplained scale?
Let’s rewind a bit to the original computers. They were slow, small (computationally and with respect to storage) and, in the grand scheme of things, pretty simple. There was no operating system. The computer executed a single program. The program was responsible for all operational aspects of its existence.
As computers became more sophisticated, libraries were created to provide standardized components, allowing developers to focus on the core application and not the plumbing. Two of these libraries stand out: the mass storage and communications libraries. We would eventually refer to these as the file system and network.
When computers began expanding their scope and user base, the need for a mechanism to handle first sequential and later multiple concurrent jobs led to the development of a scheduling, queueing, and general task management suite.
By the time Unix was introduced, this task manager was surrounded by access management, program development tools, general utilities and games. Because, well, games.
For users of these systems, the OS became shorthand for “stuff we didn’t need to write.”
The odd thing is that, on the periphery, there existed a class of systems too small or too specialized to use or even require this one-stop-shopping OS. These were the embedded systems. For decades, these purpose-built computers ran one program. They included everything from thermostats to digital thermometers. (And yes, those Casio watches with the calculator built in.)
Over time, processors got a lot more powerful and a lot smaller, a combination which made it possible to run those previously resource-hungry desktop-class operating systems in a tiny little box.
But what happens when you want to optimize for power and space? You strip the operating system down to its base elements and use only the ones you need.
This is where our OS sizing comes from.
I like to broadly divide operating systems into four classes:
- bare metal
- static library
- RTOS
- desktop / server
Each of these presents unique issues when threat modeling. Let’s look at each in turn.
Bare Metal
Probably the easiest OS level to threat model is bare metal. Since there’s nothing from third-party sources, development teams should be able to easily investigate and explain how potential threats are managed.
Static Library
I consider this the most difficult level. Typically, the OS vendor provides sources which the development team builds into their system. Questions arise around modification of the OS libraries, testing specific to the target / toolchain combination, and the threat model of the OS itself. The boundaries can become really muddy. One nice thing is that the only OS elements present are the ones explicitly included. Additionally, you can typically exclude aspects of the libraries you don’t use. Doing so, however, breaks the de-risking boundary, as the OS vendor probably didn’t test your pared-down version.
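To illustrate that last point, here’s a toy sketch in Python. A real static-library OS would express its build configuration in C headers or Kconfig-style fragments, and every feature name below is hypothetical; the point is only the set difference:

```python
# Hypothetical feature sets; no real vendor or OS library is being described.
VENDOR_TESTED_BUILD = {"tcp", "udp", "tls", "filesystem", "timers", "shell"}

# Our pared-down build: only the features we actually use.
our_build = {"tcp", "filesystem", "timers"}

# Every excluded feature moves us off the build the vendor validated, so the
# "vendor tested it" de-risking argument no longer applies as-is.
excluded = VENDOR_TESTED_BUILD - our_build
if our_build != VENDOR_TESTED_BUILD:
    print(f"untested configuration; excluded features: {sorted(excluded)}")
```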
RTOS
An RTOS tends to be an easier level to model than a desktop / server one. This is because the OS has been stripped down and tuned for performance and space. As such, bits which would otherwise be lying about for an attacker to leverage are out of play. This OS type may still present modeling issues, though, as stripped-down builds can exhibit unique behaviors.
Desktop / Server
This is the convention center of operating systems. Anything and everything that anyone has ever used or asked for may be, and probably is, available. This is generally a bad thing. On the upside, this level tends to provide sophisticated access control mechanisms. On the downside, meshing said mechanisms with other people’s systems isn’t always straightforward. As for configuration, since the mechanism is provided by the OS vendor, it’s pretty safe to assume that any configuration-driven custom version has been tested by the vendor.
OS and Threat Modeling
When threat modeling, I take the approach of treating the OS as a collection of services. Doing so, the issue of OS level goes away. I can visually decompose the system into logical data flows to process, file system, and network services rather than to a generic OS object. It also lets me put OS-provided drivers on the periphery, more closely modeling the physicality of the system.
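To make that concrete, here’s a minimal sketch in plain Python of what the decomposition looks like as data. No particular modeling tool is assumed, and every element, trust zone, and flow name is illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    trust_zone: str  # e.g. "user", "kernel", "hardware"

@dataclass(frozen=True)
class Dataflow:
    source: Element
    sink: Element
    label: str

# Instead of one opaque "Operating System" box, model its services:
app        = Element("Application",         "user")
proc_svc   = Element("Process service",     "kernel")   # scheduling, IPC
fs_svc     = Element("File system service", "kernel")
net_svc    = Element("Network service",     "kernel")
nic_driver = Element("NIC driver",          "kernel")   # OS-provided, on the periphery
nic        = Element("NIC",                 "hardware")

flows = [
    Dataflow(app, proc_svc, "spawn worker / IPC"),
    Dataflow(app, fs_svc, "read config, write logs"),
    Dataflow(app, net_svc, "custom protocol over TCP"),
    Dataflow(net_svc, nic_driver, "packets"),
    Dataflow(nic_driver, nic, "DMA / registers"),
]

# Every flow that crosses a trust zone is a boundary worth examining.
for f in flows:
    if f.source.trust_zone != f.sink.trust_zone:
        print(f"boundary: {f.source.name} -> {f.sink.name} ({f.label})")
```

Laid out this way, there’s no monolithic OS node at all, just services, and the driver sits at the edge next to the hardware it fronts.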
It’s important to note that this approach requires that I create multiple threat model diagrams representing various levels of data abstraction. Generally speaking, the OS is only present at the lowest level. As we move up the abstraction tree, the OS goes away and only the data flows between the entities and resources the OS was intermediating remain.
Let’s consider an application communicating via a custom protocol. At the lowest level, the network service manages TCP/UDP traffic. We need to ensure that these packets are handled properly as they transit the OS network service. At the next level, we have the management of the custom protocol itself. In order to model this properly, we need the network service to be out of the discussion entirely. Finally, at the software level, we consider how the payload is managed (let’s presume that it’s a command protocol).
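Expressed in the same sketch style (again, all names are hypothetical), the three diagrams reduce to something like this. Note that the OS network service appears only at the lowest level:

```python
# Hypothetical names throughout; this mirrors the example above. Higher
# levels model only the endpoints the OS was intermediating.
levels = {
    "Level 0: transport": [
        ("Application", "OS network service", "TCP/UDP traffic"),
        ("OS network service", "Remote peer", "TCP/UDP traffic"),
    ],
    "Level 1: custom protocol": [
        ("Application", "Remote peer", "custom command protocol"),
    ],
    "Level 2: payload handling": [
        ("Command parser", "Command handler", "validated commands"),
    ],
}

for level, flows in levels.items():
    print(level)
    for source, sink, label in flows:
        print(f"  {source} -> {sink}: {label}")
```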
Nowhere in the above example does the OS level have any impact on how the system would be modeled. By decomposing the OS into services and treating layers uniformly, we gain the ability to treat any OS like furniture. It’s there, but once you’ve established that it behaves properly, you move on.