When I was an undergraduate, I heard a story about a DEC PDP-11/70 at a nearby school that had a strange hardware mod. Someone had added a toggle switch, apparently wired into the backplane. The switch had two settings, “magic” and “more magic.” The identity of whoever had made the mod was lost. For as long as anyone could remember, the switch had been in the “magic” position. One day, some brave soul decided to find out what happened when the “more magic” setting was selected. Upon flipping the toggle, the machine crashed, and it thereafter resisted all attempts to get it running again. After a bit, they gave up, flipped the toggle back to “magic,” power cycled the machine, and hoped. The machine returned to its previous state of operational happiness. One could say that they’d been trying to achieve too much of a good thing.
We might read this and come away with the idea that, well, they just should have gotten on with their work and not worried about the plumbing. That’s certainly the majority view, from what I’ve seen. But why was the switch there in the first place? If it only worked in one position, shouldn’t they have just wired things without the switch?
Let’s consider the temporal aspect: no one remembered who, when, or why, let alone what. It may well be that “more magic” once actually worked. Who can say? That whole documentation thing.
When I work with project teams and individual developers, I have a habit of saying “no magic.” It comes from having heard this story. I’ll say it to individuals whose code I’m reviewing, teams whose architecture I’m reviewing, or leads and architects while facilitating the creation of threat models. I don’t care whether the magic manifests as constants (magic numbers) from who-knows-where or logic that’s more convoluted than a Gordian Knot. Basically, if something is in use without anyone understanding why, it shouldn’t be there. I don’t care who put it there or how smart they were. Someday someone is going to come along and try to change things, and it will all go south. Enough of these in a code review and it’s likely to get a single summary review comment of “no.”
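To make the magic-number case concrete, here’s a minimal sketch in Python. The names and the 1800-second value are hypothetical, invented purely for illustration:

```python
# Magic: a bare constant from who-knows-where.
def session_expired(idle_seconds):
    return idle_seconds > 1800  # Why 1800? Who chose it? Can it be changed?

# No magic: the value carries its reason with it.
SESSION_IDLE_TIMEOUT_SECONDS = 30 * 60  # policy: idle sessions end after 30 minutes

def session_expired_explicit(idle_seconds):
    return idle_seconds > SESSION_IDLE_TIMEOUT_SECONDS
```

In the first version, the reviewer’s question writes itself. In the second, the answer is already on the page.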
How does this relate to security? Do you know what the auto-completion for “never implement your” is? I’ll let you try that on your own, just to verify. Never implement your own crypto[graphic functions]. Why? Where should I start? The math is torturous. The implementation is damn hard to do right. Did you know that you can break poorly implemented crypto via timing analysis? Even if you don’t roll your own crypto, are you using some open source library or the one from the operating system? Do you know when to use which? Are you storing your keys properly?
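To illustrate the timing-analysis point, here’s a minimal Python sketch (the function names are hypothetical). Comparing secrets with == short-circuits at the first mismatched byte, so response times can leak how much of a guess is correct; the standard library’s hmac.compare_digest takes time independent of where the inputs differ:

```python
import hmac

# Leaky: == returns as soon as one byte differs, so an attacker who can
# measure response times learns how many leading bytes of a guess match.
def check_token_leaky(supplied: bytes, expected: bytes) -> bool:
    return supplied == expected

# Better: hmac.compare_digest compares in time independent of the
# position (or existence) of a mismatch.
def check_token(supplied: bytes, expected: bytes) -> bool:
    return hmac.compare_digest(supplied, expected)
```

Even that one safe line hides a pile of design decisions made by people who do this for a living, which is rather the point.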
Magic, all of it.
Some people believe that security can be achieved by obscuring things. These also tend to be the same people who’ve never used a decompiler. You’d be amazed what can be achieved with “a lot of tape and a little patience.”
If your goal is to have software and systems that are secure, you can’t have magic. Ever.
So, when I see a company with a core philosophy of “move fast, break things,” I think, well, aren’t they going to have more attack surface than a two-pound chunk of activated carbon? Unsurprisingly, they did, and we are all worse off because of it.
You can’t secure software-based systems unless you understand how the pieces play together. You can’t understand how the pieces play together until you understand how each piece behaves. You can’t understand how a piece behaves if it’s got magic floating around in it. It’s also important to not just glom onto a technique or technology because it’s handy or trendy. As Brian Kernighan and P.J. Plauger said, “it is dangerous to believe that blind application of any particular technique will lead to good programs.” [2]
While you’re out there moving fast and tossing things over the wall, keep in mind that someone else, moving equally fast, is stitching them together with other bits, the result of which will also be tossed over another wall. And while it is true that some combinations of these bits produce interesting and quite useful results, what is the totality of their impact? At what point are we simply trusting that the pieces we’re using are not only correct and appropriate to our use, but also have no unintended consequences when combined in the way we have combined them?
You need to know that every part does the thing you intend it to do. That it does it correctly. And that it does nothing you don’t intend. Otherwise, you’re going to have problems.
I’ll close with another story. In the dim days, before most people could use the Internet (big I), there were a number of networks. These were eventually interconnected, hence the name: interconnected networks, or Internet for short. Anyway, back in the day (early ’80s), universities were attaching to the Internet backbone, which was in and of itself pretty normal. What was not normal was when someone accidentally mounted a chunk of the Andrew File System (AFS) onto an Internet node, which ended up mounting the entirety of AFS on the Internet. This had the unexpected side effect of making a vast number of students’ previously unprotected emails publicly available to anyone with Internet access. Mostly that meant other university students. AFS wasn’t actually designed to be connected to anything else at that time. Bit of a scandal.
Unintended consequences.
- [1] Image credit: Magic Book by Colgreyis, © Creative Commons Attribution 3.0 License.
- [2] Kernighan and Plauger, Software Tools, 1976, page 2, paragraph 4.