
There are books that inspire. There are also people who inspire. This review is about a book that inspired a person who inspires. It also inspired that person to write a book to inspire people.

I’ve been following Simon Sinek for a bit now. For the past few years he’s been speaking about a way of looking at things that differs from the common view. He’s also written a book (releasing in October 2019) about it titled, “The Infinite Game.” This review isn’t about Simon, his videos, or upcoming book. It’s about the book that inspired him.

That book, by James Carse, is Finite and Infinite Games.

Unlike many of my recent reads, this one was a bit on the compact side, coming in at under 150 pages. The interesting thing to me approaching the book was how such a short volume had inspired Sinek to take its concept on the road.

In typical university professor fashion, Carse opens by describing the endpoints of a spectrum: games of a finite and an infinite nature. He asserts these to be just that, endpoints. We are left to conclude that there are other games within the spectrum, but none are explored. Although exploring them would have allowed for a larger volume, there really isn’t much point. If you’re the kind of person who buys this book and doesn’t appreciate that it’s been written from a philosophical / political / historical viewpoint, the additional material wouldn’t have enabled you to get any more out of it. On the other hand, if you do, you’ll have little difficulty extrapolating the spectrum and its related historical and political examples.

Once the groundwork is laid, the author spends a chapter exploring a seemingly obvious point. “No one can play a game alone.” This appeals to our sense of self. We exist in relationship to the not-self. We cluster our selves into communities in relationship to other communities and nation-states likewise.

From here, the book explores our traditional view of games. Winners and losers. We see how this binary / hierarchical viewpoint demands the existence of a time-bound context (world). For Whovians this represents a fixed point in time. Examples would be the 1962 United States World Series (baseball) champion or the victors of a battle. The interesting element of a fixed point in time is that it never changes. It is never different. It also never improves. Implicit in this view is the requirement of losers as well as accepted, well-defined rules.

In the broader world we see analogs to zero-sum games. We see companies constantly defining themselves in terms of being the number one firm in something contextualized by a specific time period. These claims tend toward ever more pointless hair-splitting. The question becomes, “to what end?”

An interesting point, which I’d not previously considered, was that finite games require an audience. That is, witnesses. These serve to validate the victor and their victory. They are also responsible for carrying the memory of the event, anchoring it in time.

It is only now, two thirds of the way into the book, that the other end of the spectrum is examined. To do so, the author has us look to nature (well actually anything not contrived by human beings).

As one might have come to conclude, the world (and by extension the universe) got along just fine without us and will probably do so again. Carse points out that, in and of itself, the world has no narrative structure; it simply is. In the same way, an infinite game is the complete opposite of a finite game. It has no fixed rules, no audience, no winners, no losers, and is not time-bounded.

So what is the point of an infinite game?

To keep playing.

Players come and go. Objectives change. But, at the end of the day, playing is its own motivation. In this view of the world, there are no enemies to be defeated, only rivals to outdo. Without rivals (other players) you aren’t playing a game. There is no attempt to reach a pinnacle, but rather to be pushed to exceed one’s own success.

In a finite game you declare victory. Once at the top of the heap, it’s all downhill. Finite games are by their very nature self-limiting. There is no incentive to excel once you’ve attained supremacy. We have seen time and again companies creating new and innovative technologies only to hold them back because they were already number one. These same companies lost that position to others not held back by past glories.

I have seen many times how, in the software world, companies have milked the “completed” product cow while refusing to invest in keeping that same cow current in terms of technology. It is only after decades of neglect that they realize they can no longer add features and that a generation or two of graduates has passed since anyone was taught how to work with the technologies used in said cash cow. And then the cow dies.

T. S. Eliot said, “Immature poets imitate; mature poets steal; …” Pablo Picasso said, “When there’s anything to steal, I steal.” Steve Jobs spoke likewise. These are views of those constantly improving their craft. Learning from and incorporating the best we see in others vs. simply attempting to exploit the weaknesses we see is a hallmark of the infinite game player. The other players of their infinite games are not opponents but rather rivals. Opponents seek our defeat. Rivals seek our respect. Opponents want their fixed point in time. Rivals desire that every day we push them to be their best.

Not long ago, someone said to me, “unicorns want to be around other unicorns.” According to Guy Kawasaki, Steve Jobs said, “A players hire A players; B players hire C players; and C players hire D players. It doesn’t take long to get to Z players. This trickle-down effect causes bozo explosions in companies.” I prefer Eric Dietrich’s thought, “A Players Don’t Hire A Players — They Partner with A Players.” We can look to the rise and fall of stack ranking in the technology world to see the negative impact of the finite game in a world where the goal is to create the future.

Upon finishing the book I was left with a sense of affirmation and sadness. I recommend this book to anyone who intends to undertake any endeavor over the long term.


One of the common misconceptions I encounter when explaining threat modeling to people is that of operating system scale. This is one of those cases where size really does matter.

When threat modeling, there is a desire to do as little work as possible. By that I mean that you shouldn’t model the same thing multiple times. Model it once, put it in a box, and move on. It’s furniture.

We do this to allow us to focus on the stuff we’re developing and not third-party or open source bits.

When it comes to operating systems, however, I don’t have just one border to deal with as I would with, say, a vendor-provided driver. The thing we casually refer to as an operating system is actually a many-layered beast and should be treated as such. When we do, the issue of OS scale disappears in a puff of abstraction smoke.

So, what is this so far unexplained scale?

Let’s rewind a bit to the original computers. They were slow, small (computationally and with respect to storage) and, in the grand scheme of things, pretty simple. There was no operating system. The computer executed a single program. The program was responsible for all operational aspects of its existence.

As computers became more sophisticated, libraries were created to provide standardized components, allowing developers to focus on the core application and not the plumbing. Two of these libraries stand out: the mass storage and communications libraries. We would eventually refer to these as the file system and network.

When computers began expanding their scope and user base, the need for a mechanism to handle first sequential, then later multiple jobs led to the development of a scheduling, queueing and general task management suite.

By the time Unix was introduced, this task manager was surrounded by access management, program development tools, general utilities and games. Because, well, games.

For users of these systems, the OS became shorthand for “stuff we didn’t need to write.”

The odd thing is that, on the periphery, there existed a class of systems too small or too specialized to use or even require this one-stop-shopping OS. These were the embedded systems. For decades, these purpose-built computers ran one program. They included everything from thermostats to digital thermometers. (And yes, those Casio watches with the calculator built in.)

Over time, processors got a lot more powerful and a lot smaller. The combination of which made it possible to run those previously resource hungry desktop class operating systems in a little tiny box.

But what happens when you want to optimize for power and space? You strip the operating systems down to their base elements and only use the ones you need.

This is where our OS sizing comes from.

I like to broadly divide operating systems into four classes:

  • bare metal
  • static library
  • RTOS
  • desktop / server

Each of these presents unique issues when threat modeling. Let’s look at each in turn.

Bare Metal

Probably the easiest OS level to threat model is bare metal. Since there’s nothing from third-party sources, development teams should be able to easily investigate and explain how potential threats are managed.

Static Library

I consider this the most difficult level. Typically, the OS vendor provides sources which the development team builds into its system. Questions arise around OS library modification, testing specific to the target / tool chain combination, and the scope of the OS threat model. The boundaries can become really muddy. One nice thing is that the only OS elements present are the ones explicitly included. Additionally, you can typically exclude aspects of the libraries you don’t use. Doing so, however, breaks the de-risking boundary, as the OS vendor probably didn’t test your pared-down version.

RTOS

An RTOS tends to be an easier level than a desktop / server one. This is because the OS has been stripped down and tuned for performance and space. As such, bits which would otherwise be lying about for an attacker to leverage are out of play. This OS type may still complicate modeling, as unique behaviors may surface.

Desktop / Server

This is the convention center of operating systems. Anything and everything that anyone has ever used or asked for may be, and probably is, available. This is generally a bad thing. On the upside, this level tends to provide sophisticated access control mechanisms. On the downside, meshing said mechanisms with other people’s systems isn’t always straightforward. In the area of configuration, as the OS is provided by the vendor, it’s pretty safe to assume that any configuration-driven custom version has been tested by the vendor.

OS and Threat Modeling

When threat modeling, I take the approach of treating the OS as a collection of services. Doing so, the issue of OS level goes away. I can visually decompose the system into logical data flows to process, file system, and network services rather than a generic OS object. It also lets me put OS-provided drivers on the periphery, more closely modeling the physicality of the system.

It’s important to note that this approach requires that I create multiple threat model diagrams representing various levels of data abstraction. Generally speaking, the OS is only present at the lowest level. As we move up the abstraction tree, the OS goes away and only the data flows between the entities and resources which the OS was intermediating remain.

Let’s consider an application communicating via a custom protocol. At the lowest level, the network service manages TCP/UDP traffic. We need to ensure that these flows are handled properly as they transit the OS network service. At the next level we have the management of the custom protocol itself. To model this properly, we need the network service to drop out of the discussion. Finally, at the software level, we consider how the payload is managed (let’s presume that it’s a command protocol).
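
To make the layering concrete, here’s a toy sketch (in Python, with element names of my own invention, not from any particular tool) of that same system captured at three levels of abstraction. Note how the OS network service appears only in the lowest layer:

```python
# Hypothetical decomposition of the custom-protocol example into three
# threat model layers. Element names are illustrative, not canonical.
layers = {
    "transport": {  # lowest level: OS network service is in scope
        "elements": ["app", "os_network_service", "nic"],
        "flows": [("app", "os_network_service", "TCP/UDP segments"),
                  ("os_network_service", "nic", "packets")],
    },
    "protocol": {   # custom protocol level: the OS drops out of the picture
        "elements": ["app", "peer"],
        "flows": [("app", "peer", "custom protocol messages")],
    },
    "payload": {    # software level: command handling only
        "elements": ["command_parser", "command_executor"],
        "flows": [("command_parser", "command_executor", "validated commands")],
    },
}

# Only the lowest layer should mention the OS at all.
for name, layer in layers.items():
    has_os = any("os_" in element for element in layer["elements"])
    print(f"{name}: OS in scope = {has_os}")
```

Running the check confirms the point of the approach: once you move above the transport layer, the OS simply isn’t an element anymore, so its “level” can’t affect the model.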

Nowhere in the above example does the OS level have any impact on how the system would be modeled. By decomposing the OS into services and treating layers uniformly, we gain the ability to treat any OS like furniture. It’s there, but once you’ve established that it behaves properly, you move on.

Originals

A clever blog post title is a wonderful thing. What’s interesting is that I couldn’t improve on the title of Adam Grant’s book “Originals.” In my previous post, I said that that book wasn’t for the individual looking for themselves. This one is, kind of.

The book begins with a quote from George Bernard Shaw,

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

or as Max Planck said,

New scientific ideas never spring from a communal body, however organized, but rather from the head of an individually inspired researcher who struggles with his problems in lonely thought and unites all his thought on one single point which is his whole world for the moment.

That’s it. 250 pages and a slew of references.

To be fair, to leave things there would trivialize the book.

There is an odd sense of wandering I got from this book. At one moment, it speaks to those who seek out the creatives. This felt like an art collector was speaking. At another, it focuses on the hardships of being a creative. Just when you think you’ve got a sense of that, there’s a shift to the economics of utilizing creatives, and then you’re thrust into the environment which produced them.

All of these chapters could have been the basis of books in their own right. At the end of the book I had the sense that I’d read a primer on the human equivalent of livestock breeding of prized but temperamental Mishima cattle.

I would recommend this book to technology leaders as a reminder of how the world advances (not improves, mind you, but advances) and how to leverage the creatives. I also recommend this book to those who swim against the current. Appreciate that creatives are looked upon by many as a rare resource to be cultivated and value-extracted. Know that although most people will never understand what drives you, these people serve to keep us growing.

It’s been quite some time since my previous book review. As a result, I have a stack of books to review. Ironically, the first book is “The Motivation Myth” by Jeff Haden.

I always felt bad for Luke Skywalker when it came to his Jedi training from Yoda. “Do or do not, there is no try.” How is someone “too old to begin the training” expected to unpack that? As someone who’s spent the better part of 40 years either learning about, implementing, or teaching others how to use bleeding-edge tech, I get the “shove them out of their comfort zone” thing. That said, I also know that when dealing with creatures who learn by metaphor, you can’t expect someone to suddenly jump from 2D to 3D and be effective or even successful.

This book is a gentle reflection on why the concept of motivation (perhaps a better word would have been inspiration) has limited application in the sphere of accomplishment.

Each chapter leads the reader to confront a different myth regarding task success. It’s not some stoke-able flame, lighted path, aspirational mumbo-jumbo, or guru-led excursion through the swamp that accomplishes tasks through you. It’s you, your hard work, and your preparation. Seneca the Younger said, “Luck is what happens when preparation meets opportunity.” Success is what happens when preparation meets execution.

At the end of the day, this book will benefit those who lack the mentors in their life to kick them in the ass every now and again. Self-doubt is inevitable, but action paralysis is not. Plan the work. Work the plan. Or as Gene Kranz (Failure is Not an Option) would say, “work the problem.” Prepare, plan, execute, repeat.

Should you read this book? If you lead or mentor others, yes. It serves as a reminder that in a world of warm fuzzies, people have by-and-large come to expect success to come from outside themselves. We need to have high expectations for ourselves and others. If you are expecting that this book will make you as an individual successful, look elsewhere.


There have been numerous times when a new technology has led to a major shift in how we thought about how computers and software should be built. We are about to see one of those shifts. At least that’s what I’ve come to believe.

Let’s pop into the Wayback and set our sights on the early ’80s. At that time computers had one processor. Hardware-based floating point was the domain of mainframes and minicomputers. Communications between computers existed only for the well-heeled. Security meant keeping your computer locked up.

Life was pretty simple. If you wanted something done, you did it yourself. When software was shared it was done via the US Postal Service on 9-track tape.

Fast forward to the early ’90s. Desktop computers were fairly common. Uniprocessors still ruled. Hardware floating point was now readily available. The internet had just been introduced. Gopher was slowly being displaced by the combination of FTP and web search engines. Security issues happened, but dealing with them was, on the whole, a black art practiced by a small number of individuals, requiring skills you had to develop yourself.

It was around this time that I was casting about for a thesis topic for my Master’s in Electrical and Computer Engineering. I took on the topic of virus-resistant computer architectures (AARDVARK). Did I mention that it was 1992? Just researching the state of the art in computer viruses was a huge task. No Google, Amazon or ACM online article search. As to the other side of the equation, the how and why of hacking, well, I’ll leave that for another time.

By the time I was done, I’d proposed a computer architecture with separate instruction and data spaces, where the application’s binary was encrypted and the key, loaded in a separate boot sequence, was stored in a secure enclave accessible only to the binary segment loader. Programs were validated at runtime. I conjectured that such a computer would be ideal for secure use and could be built within 18 months.

Everyone thought it was a great design and the school even worked with me to apply for a patent. The US Patent Office at that time didn’t get it. After five years we abandoned the effort. I was disappointed, but didn’t lose sleep over it.

Fast forward to 2012, when Apple released the iOS 6 security guidelines. Imagine my amusement when I saw echoes of AARDVARK. It’s all there: signed binaries, secure enclave, load validation. Good on them for doing it right.

Let’s step back and consider the situation. Computers are really small. They have integrated hardware floating point units and multiple processors, and now, with the advent of this generation of iPhone, hardware-based security. The internet has gone global. Google indexes everything. Open source is a thing. So, we’re good?

Not so much. The Apple iPhones are an oasis in a vast desert of security badness. Yes, IPv6 has security goodness available, but IPv4 still rules. Secure programming practices are all but non-existent. Scan and contain is the IT mantra. Threat modeling is an exercise for the academic.

This brings us to last year. Microsoft announced Azure Sphere. Application processor, dual MCUs, networking processor, security processor. All firewalled. All in the same package. The provided OS is a secured version of Linux. Each device is registered so only the manufacturer can deploy software, push updates, and collect telemetry via the Azure cloud.

There must be a catch. Well, as you know, there’s no such thing as a free burrito.

The first device created to the Azure Sphere specification is the MediaTek MT3620. And no, you can’t use it for your next laptop. The target is IoT. But there’s a lot of horsepower in there. And there’s a lot of security and communications architecture that developers won’t have to build themselves.

Microsoft is touting this as the first generation. Since they started with Linux and ARM, why wouldn’t you want something with more power for systems that have security at their core? If Microsoft approached this the way Apple has the iPhone, iPad, Apple TV, and Apple Watch, why shouldn’t we expect consumer computers that aren’t insecure?

But will I be able to use them for software development? That’s a tricky question.

When I envisioned AARDVARK, my answer was no. That architecture was designed for end-user systems like banks and the military. You can debug a Sphere device from within Visual Studio, so, maybe it’s doable. You’d need to address the issue of a non-isomorphic ownership model.

Are users willing to bind their device to a single entity? Before you say no, consider how much we’ve already put in the hands of the Googles and Facebooks of the world. Like it or not, those are platforms. As are all the gaming systems.

Regardless, I believe that we will end up with consumer compute devices based on this architecture. Until then we’ll just have to watch to see whether the IoT sector gets it and by extension the big boys.

Either way, the future is Sphere.

Like many people working in technology, every year I assemble a summary of accomplishments for my annual review. It’s always interesting to take a long view look at things. This is especially true when you’re working on things with long lifetimes and uncertain outcomes.

Let’s look at the raw numbers:

  • 73 topics researched
  • 1 intern supervised / mentored
  • 11 independent projects led
  • 2 classes created
  • 3 classes updated
  • 8 classes taught
  • 61 students taught
  • 6 books reviewed for possible internal use
  • 2 standards bodies participated in
  • 1 ISO technical study group chaired
  • 18 project teams worked with
  • 12 first-line managers worked with
  • 4 upper-level managers worked with
  • multiple outside organizations worked with
  • 126 individual code reviews participated in
  • 29 internal trainings taken
  • 2 conferences attended
  • 2 Coursera classes taken

How can I be sure about these numbers? In a word, notebooks. I’m old school that way. It’s not that I don’t use technology for notes. I use Microsoft OneNote to track various subjects and lines of thought. URLs don’t really work in notebooks (or notes to Santa). But for daily tracking of thoughts and events, being able to pick up a pen and start writing is unequaled for me. I take my notebook to meetings preferentially. Notebooks don’t get IMs in the middle of meetings.

From these notebooks come my weekly status summaries. From those come my annual status summary document. If anything, my numbers may be a little low. There are times when I neglect to write in my notebooks.

2018 was a very busy year. Lots of overlapping projects in flight. Many of which produced their first fruits. Most of them were multiple years in planning and execution, requiring effort across many teams. I love it when a plan comes together. I had the opportunity to work with a cosmic boat load of teams.

It’s a pleasure for me to be asked to create and teach classes. I learn more about the subjects. I get to help improve other peoples’ skills.

Participating in conversations with the ISO C / C++ committees is always an education for me. It doesn’t matter how long I’ve worked with a programming language; there’s always something to learn, some new view or example that will help me teach others. It’s fun.

Participating in code review is much along the same lines. It’s a discussion with the code and the developer. Done properly, everyone learns something. And through the process, you get better code.

I read a lot, but usually that’s a personal endeavor. Reading technical books for possible use by others within my organization, either for general reference or in conjunction with a class, is a different kind of reading. The scope, presumed background, and audience are all very different from me just adding another chunk into my existing world map. I need books in support of individuals who need current reference materials. Sometimes I need them in support of technology which is vastly out of date.

One thing I don’t track is all the non-dead-tree reference materials I review, summarize, and pass along in support of the research projects or management requests for information I handle on a daily basis. On one level, this is a hole in my somewhat obsessive self-tracking. On the other, tracking it would be too much of an interruption to flow. This material, at least the good stuff, gets tracked by subject in OneNote. Eventually, it’s either folded into supporting-material summaries for management or put at the end of class material sections for support and further reading.

Some of my most interesting work last year revolved around interactions with outside organizations. Bringing technologies in to lighten the load and support group efforts is always satisfying.

As to what 2019 will be like, who can say. If it’s anything like 2018 was, there will be a lot to write about in next year’s review post.


image credit: Dustin Liebenow (creative commons)


When I was an undergraduate, I heard a story about a DEC PDP 11/70 at a nearby school that had a strange hardware mod. A toggle switch had been added by someone and wired into the backplane apparently. The switch had two settings, “magic” and “more magic.” The identity of the individual or individuals having made the mod was lost. For as long as anyone could remember, the switch had been in the “magic” position. One day, some brave soul decided to find out what happened when the “more magic” setting was selected. Upon flipping the toggle, the machine crashed. It thereafter resisted any attempts to get it to run. After a bit, they gave up, flipped the toggle back to “magic“, power cycled the machine and hoped. The machine returned to its previous state of operational happiness. One could say that they’d been trying to achieve too much of a good thing.

We might read this and come away with the idea that, well, they just should have gotten on with their work and not worried about the plumbing. That’s certainly the majority view, from what I’ve seen. But why was the switch there in the first place? If it only worked in one position, shouldn’t they have just wired things without the switch?

Let’s consider the temporal aspect: no one remembered who, when, or why, let alone what. It may well be that once, “more magic” actually worked. Who can say? That whole documentation thing.

When I work with project teams and individual developers, I have a habit of saying “no magic.” It comes from having heard this story. I’ll say this when working with individuals whose code I’m reviewing, teams whose architecture I’m reviewing, or leads and architects while facilitating the creation of threat models. I don’t care whether the magic manifests as constants (magic numbers) from who-knows-where or logic that’s more convoluted than a Gordian Knot. Basically, if the reason that something is being used exists without the benefit of understanding, it shouldn’t be there. I don’t care who put it there or how smart they were. Someday someone is going to come along and try to change things and it will all go south. Enough of these in a code review and it’s likely to get a single summary review comment of “no.”
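
As a toy illustration of the magic-number flavor of this (my own made-up cache-eviction example, not code from any review), compare the two versions below. Both compute the same thing; only one survives a change of maintainer:

```python
# "Magic" version: why 86400? why 0.2? A future maintainer has no way to
# know whether these values are load-bearing or arbitrary.
def should_expire_magic(age, usage):
    return age > 86400 and usage < 0.2

# "No magic" version: the same logic, but every constant carries its reason.
SECONDS_PER_DAY = 86400        # cache entries live at most one day
LOW_USAGE_THRESHOLD = 0.2      # below a 20% hit rate, eviction is cheap

def should_expire(age_seconds, hit_rate):
    """Expire a cache entry that is both old and rarely used."""
    return age_seconds > SECONDS_PER_DAY and hit_rate < LOW_USAGE_THRESHOLD
```

The behavior is identical; the difference is that the second version can be safely changed by someone who wasn’t in the room when it was written.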

How does this relate to security? Do you know what the auto-completion for “never implement your” is? I’ll let you try that on your own, just to verify. Never implement your own crypto[graphic functions]. Why? Where should I start? The math is torturous. The implementation is damn hard to do right. Did you know that you can break poorly implemented crypto via timing analysis? Even if you don’t roll your own crypto, are you using some open source library or the one from the operating system? Do you know when to use which? Are you storing your keys properly?

Magic, all of it.

Some people believe that security can be achieved by obscuring things. These also tend to be the same people who’ve never used a decompiler. You’d be amazed what can be achieved with “a lot of tape and a little patience.”

If your goal is to have software and systems that are secure, you can’t have magic. Ever.

So, when I see a company with a core philosophy of “move fast, break things,” I think, well, aren’t they going to have more attack surface than a two-pound chunk of activated carbon? Not amazingly, they did, and we are worse off because of it.

You can’t secure software-based systems unless you understand how the pieces play together. You can’t understand how the pieces play together until you understand how each piece behaves. You can’t understand how a piece behaves if it’s got magic floating around in it. It’s also important to not just glom onto a technique or technology because it’s handy or trendy. As Brian Kernighan and P.J. Plauger said, “it is dangerous to believe that blind application of any particular technique will lead to good programs[2].”

While you’re out there moving fast and tossing things over the wall, keep in mind that someone else, moving equally fast, is stitching them together with other bits. The result of which will also be tossed over another wall. And while it is true that some combinations of these bits produce interesting and quite useful results, what is the totality of their impact? At what point are we simply trusting that the pieces we’re using are not only correct and appropriate to our use, but don’t have any unintended consequences when combined in the way we have done?

You need to know that every part does the thing you intend it to do. That it does it correctly. And that, it does nothing you don’t intend. Otherwise, you’re going to have problems.

I’ll close with another story. In the dim days, before people could use the Internet (big I), there were a number of networks. These were eventually interconnected, hence the name interconnected networks, or Internet for short. Anyway, back in the day (early ’80s), universities were attaching to the Internet backbone, which was in and of itself pretty normal. What was not normal was when someone accidentally mounted a chunk of the Andrew File System (AFS) onto an Internet node. It ended up mounting the entirety of AFS on the Internet. This had the unexpected side effect of making a vast number of students’ previously unprotected emails publicly available to anyone with Internet access. Mostly that meant other university students. AFS wasn’t actually designed to be connected to anything else at that time. Bit of a scandal.

Unintended consequences.


  1. Image credit: Magic Book by Colgreyis, © Creative Commons Attribution 3.0 License.
  2. Kernighan and Plauger, Software Tools, 1976, page 2 paragraph 4