
Posts Tagged ‘security’

There have been numerous times when a new technology has led to a major shift in how we think about how computers and software should be built. We are about to see one of those shifts. At least that’s what I’ve come to believe.

Let’s pop into the Wayback and set our sights on the early ’80s. At that time computers had one processor. Hardware-based floating point was the domain of mainframes and minicomputers. Communications between computers existed only for the well-heeled. Security meant keeping your computer locked up.

Life was pretty simple. If you wanted something done, you did it yourself. When software was shared it was done via the US Postal Service on 9-track tape.

Fast forward to the early ’90s. Desktop computers were fairly common. Uniprocessors still ruled. Hardware floating point was now readily available. The internet had just been introduced. Gopher was slowly being displaced by the combination of FTP and web search engines. Security issues were a thing that happened, but security was, on the whole, a black art practiced by a small number of individuals using skills they had to develop themselves.

It was around this time that I was casting about for a thesis topic for my Master’s in Electrical and Computer Engineering. I took on the topic of virus-resistant computer architectures (AARDVARK). Did I mention that it was 1992? Just researching the state of the art in computer viruses was a huge task. No Google, Amazon or ACM online article search. As to the other side of the equation, the how and why of hacking, well, I’ll leave that for another time.

By the time I was done, I’d proposed a computer architecture with separate instruction and data spaces, where the application’s binary was encrypted and the key, loaded in a separate boot sequence, was stored in a secure enclave accessible only to the binary segment loader. Programs were validated at runtime. I conjectured that such a computer would be ideal for secure use and could be built within 18 months.

Everyone thought it was a great design and the school even worked with me to apply for a patent. The US Patent Office at that time didn’t get it. After five years we abandoned the effort. I was disappointed, but didn’t lose sleep over it.

Fast forward to 2012 when Apple released the iOS 6 security guidelines. Imagine my amusement when I see echoes of AARDVARK. It’s all there: signed binaries, secure enclave, load validation. Good on them for doing it right.

Let’s step back and consider the situation. Computers are really small. They have integrated hardware floating point units, multi-processors and now, with the advent of this generation of iPhone, hardware-based security. The internet has gone global. Google indexes everything. Open source is a thing. So, we’re good?

Not so much. The Apple iPhones are an oasis in a vast desert of security badness. Yes, IPv6 has security goodness available, but IPv4 still rules. Secure programming practices are all but non-existent. Scan and contain is the IT mantra. Threat modeling is an exercise for the academic.

This brings us to last year. Microsoft announced Azure Sphere. Application processor, dual-MCU, networking processor, security processor. All firewalled. All in the same package. The provided OS is a secured version of Linux. Each device is registered so that only the manufacturer can deploy software, push updates and collect telemetry via the Azure cloud.

There must be a catch. Well, as you know, there’s no such thing as a free burrito.

The first device created to the Azure Sphere specification is the MediaTek MT3620. And no, you can’t use it for your next laptop. The target is IoT. But there’s a lot of horsepower in there. And there’s a lot of security and communications architecture that developers won’t have to build themselves.

Microsoft is touting this as the first generation. Since they started with Linux and ARM, why wouldn’t you want something with more power for systems that have security at their core? If Microsoft approaches this as Apple has the iPhone, iPad, Apple TV and Apple Watch, why shouldn’t we expect consumer computers that aren’t insecure?

But will I be able to use them for software development? That’s a tricky question.

When I envisioned AARDVARK, my answer was no. That architecture was designed for end-user systems like banks and the military. You can debug a Sphere device from within Visual Studio, so, maybe it’s doable. You’d need to address the issue of a non-isomorphic ownership model.

Are users willing to bind their device to a single entity? Before you say no, consider how much we’ve already put in the hands of the Googles and Facebooks of the world. Like it or not, those are platforms. As are all the gaming systems.

Regardless, I believe that we will end up with consumer compute devices based on this architecture. Until then, we’ll just have to watch and see whether the IoT sector gets it and, by extension, whether the big boys do too.

Either way, the future is Sphere.

Read Full Post »

I’ve just finished reading Code Warriors: NSA’s Codebreakers and the Secret Intelligence War Against the Soviet Union by Stephen Budiansky. It tells the story of the US National Security Agency (NSA) up through the end of the Cold War.

Given the number of dry histories of the people and agencies who deal with cryptography and spying, this book is reasonably readable. If you’re looking for a less arch and more human view of how things got to where they are, you’ll like this book.

The takeaway from the book is that the biggest hindrance in the world of security is people. People who are control freaks or don’t believe that rules apply to them, or believe that the “other side” is stupid, or are just too damn lazy to do the simple things that would avoid issues are the problem. You can’t design your way around them. If you try, you’ll only make things worse.

You can’t pretend you have the moral high ground when you’re collecting enough information to make the US National Archives, the Library of Congress, Google and Facebook look redundant. I’m not picking sides, I’m just saying that if you’ve got a hammer and you use the hammer, own up to the fact and don’t go around telling everybody and their brother that they shouldn’t use hammers and that in fact that hammers either don’t exist or are illegal (or would be if they did actually exist, which they don’t).

Along the way, you’ll be introduced to a cast of well-intentioned, clueless, brilliant and ruthless individuals. There are miscommunications, denial of responsibilities, bruised egos, moments of insight and face palm moments.

Please keep in mind Hanlon’s razor.

 

Read Full Post »

Apparently, I am the first person to complete SEI CERT’s online version of their course in C/C++ secure coding. With all my pausing the videos to take notes, it took me forty hours to get through, but it was worth the time. Actually, it’s two courses, one in security concepts and the other in secure C/C++ coding.

There are four full days’ worth of material presented. The videos are chunked into 2- to 60-minute segments, which is helpful given that the presenters are speaking to an actual class. This choice of format is both good and bad. Good in that you see real questions being asked. Bad in that, as with many classroom situations, the presenters go off into the weeds at times.

There are six exercises covering major elements (strings, integers, files, I/O, etc.). For these, web-interfaced Debian Linux VMware VMs are provided. These boot quickly and have all the tools required to perform the exercises. An hour is allotted for each, which is plenty of time. Following each exercise is a section covering the solutions. This is especially helpful as there’s no one to ask questions of during the exercises. It’s also possible to download a copy of the VM being used. Sadly, there’s no provision for Windows or native Macintosh environments.

Two books are provided (in various formats): Secure Coding in C and C++, Second Edition and The CERT® C Coding Standard, Second Edition: 98 Rules for Developing Safe, Reliable, and Secure Systems.

Generally, I found the material was well delivered. The content in the areas of (narrow) strings, integers and memory handling was exhaustive. The sections on file handling and concurrency were almost completely lacking any Windows coverage. Given the number of business and individual systems based on Windows, this is quite disappointing. There was also a bit of hand waving around the use of wide characters. Not providing complete coverage here is a true deficiency, as properly dealing with mixed string types is a reality that isn’t going to go away.

As to this being a C and C++ class, well, sort of. There are nods to C++, but like so much code out there, the C++ is tacked on the side. It really should be treated as a separate class, as is done with Java. Many recommended mitigations came with the proviso that they were not portable. I think that when you’re dealing with security, the fact that a mitigation exists outweighs concerns about its portability. These things can be abstracted to achieve portability. As someone who’s spent a fair amount of time doing cross-platform development (Unix-MacOS, MacOS-Windows, Linux-Windows), these are important issues to me.
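To make that concrete, here’s the sort of abstraction I have in mind; it’s a sketch of my own, not something from the course. Securely wiping a buffer is the classic case: the portable name stays the same while the build picks whichever non-portable mitigation happens to be available.

    /* secure_zero: wipe memory in a way the optimizer can't throw away. */
    #define __STDC_WANT_LIB_EXT1__ 1   /* ask for Annex K's memset_s if the library offers it */
    #include <stddef.h>
    #include <string.h>

    #if defined(_WIN32)
    #include <windows.h>
    static void secure_zero(void *p, size_t n) { SecureZeroMemory(p, n); }
    #elif defined(__STDC_LIB_EXT1__)
    static void secure_zero(void *p, size_t n) { memset_s(p, n, 0, n); }
    #else
    static void secure_zero(void *p, size_t n)
    {
        volatile unsigned char *vp = p;   /* volatile defeats dead-store elimination */
        while (n--) { *vp++ = 0; }
    }
    #endif

    /* Hypothetical caller: wipe a password before its buffer is released or reused. */
    void handle_password(char *pw, size_t len)
    {
        /* ... use the password ... */
        secure_zero(pw, len);
    }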

To those who argue that securing code will make it slow, I would ask what a single security compromise would cost their company in reputation and direct monetary terms.

Would I recommend the class? Yes. There are important concepts and real-world examples on display here. I challenge anyone to take this class and not be horrified by the way most C code is written.

Could I teach this material? Definitely.

Read Full Post »

I’m a big fan of notebooks. Whoever has the unenviable task of sorting through my stuff after I’ve made that final blog entry will be bored to tears with the piles of notebooks they’ll have to go through, assuming that they don’t simply rent a dumpster and chuck the whole pile. Not my problem though.

To be fair, the consistent use of notebooks to record a project’s designs, progress and failures does not make you a da Vinci. The world is replete with pen-wielding lunatics. That being said, I believe that using notebooks (actual paper ones) helps give shape to nascent ideas. The simple act of putting pen (or super high-tech stylus) to paper (be it college rule loose leaf or archival quality) forces you to think a bit more about what you’re doing. Perhaps this is why I’ve never been much of a fan of those who believe that they can create a complex piece of software without some form of design. I’m not talking space shuttle here. A simple data flow diagram, threat model and use case statement are all I ask. These are the bare minimum.

Whiteboards are all well and fine, but unless you’ve got a dedicated scribe hanging about, what happens on the whiteboard tends to stay on the whiteboard. Besides, as a clever person once said, “I couldn’t reduce it to the freshman level. That means we don’t really understand it.”

At work, my notebooks are a mix of the mundane and the technical. I track meetings, conversations, designs and random thoughts. I find this helpful when trying to make sense of all the balls in the air and the occasional ones that end up on the ground. At the end of each year, I participate in the great year-end summary exercise. For this my notebooks are invaluable. This year was no exception. What was exceptional were the numbers.

  • 193 code reviews (other people’s)
  • 81 user stories / bugs (evaluation, implementation, review)
  • 24 technical documents (papers, presentations and the like)

Once I’d finished doing my sums, I went back to my 2014 summary. Not even close. I’m not really sure why.

What did I do in 2015?

I spent a lot of time in some very specific areas:

  • writing security code
  • developing threat models
  • teaching Visual Studio to Linux developers
  • improving code quality through use of static analysis and compilers
  • coordinating open source and third-party library use

What did I learn in 2015?

  • developers expect a language to work based on the platform they learned on and not the spec (which no one ever reads)
  • people really, really don’t understand the difference between authentication and authorization
  • developers don’t see security as their problem
  • developers don’t trust static analysis (The code has never failed, so why is that change important?)
  • talking about cyclomatic complexity metrics is like trying to explain Klingon opera to a Star Wars fan

What’s up for 2016?

Who can say? Security doesn’t look like a bad guess. And it really doesn’t matter how fast machines get or how much storage they have. Programs will grow to consume all available resources.

The only certainty is that I’ll be filling another set of notebooks.

Read Full Post »

I’ll be the first to admit that I obsess over security. My internship in college dealt with Unix security. I’ve created encrypted protocols for wireless data communication. And for my master’s thesis, I created a highly virus-resistant computer architecture (AHVRC – aka Aardvark). I wrote it in 1993. I put it up on the web in 1999.

So, what to my wondering eye did appear a few days ago? None other than the latest installment of Apple’s “iOS Security” document.

Personally, I like reading Apple documentation. But then again, I read owner’s manuals. Anyway …

So, I find myself reading iOS Security and keep thinking, “that’s what I would have done.” Wait, that’s what I did do.

I was casting about for a thesis topic and my department chair noted that no one was doing anything in secure architectures. So I spent a chunk of time thinking and put a little 124-page missive together. Now, gentle reader, having taken it upon yourself to read a few pages in, you begin thinking, “this can’t be serious, it’s got animals instead of sub-systems.” True, true. The master’s level is supposed to have a certain level of awe and wonder associated with it. Boring.

Here’s a little secret. In a traditional master’s program, you devote the equivalent of three courses to the research and writing of a document (thesis). The point of the thesis and its defense is to demonstrate mastery of the discipline. The defense is done publicly. Anyone may attend. You must advertise it to the student body. Some number of professors, typically in your discipline and of your choosing, make up the group who decide if you and your work are up to snuff. Questions may be asked in any area of your studies, but primarily the discussions will revolve around your thesis. Hence its being called a defense. Once the professors have had at you, the gallery gets their shots.

You already knew that didn’t you. Well, that’s not the secret.

The secret is that the defense is conducted within the context of the thesis. They attack, but you get to build the world. Think of it as a duel. You get to choose the weapons.

Nothing warms the cockles of my heart more than to see the distinguished faculty discussing a highly technical matter in the context of dolphins, gophers and kinkajous.

I even applied (with Rose-Hulman generously funding the effort) for a patent. Had I had more patience and a more informed examiner at the USPTO, I would probably have a patent for the work.

I’m not sure if the developers at Apple ever read my thesis or referenced my patent filing. I do find the similarities in the two architectures interesting.

I hope everyone who reads this posting takes the opportunity to read both documents. Apple’s because they present the state-of-the-art in application security model implementation. Mine, because I think I’m pretty well pleased with myself about it.

Read Full Post »

Lately, we seem to be a bit over-exuberant in our desire to point fingers while running about in circles yelling “Look! Look! [insert major firm’s name here] screwed the pooch big time!”

While I believe it important to identify and correct security issues in as expedient a fashion as possible, the endless echo chamber adds nothing of value.

I’ve read the code at the center of Apple’s recent SSL security issue. Yes, it’s bad. What it isn’t is unexpected.

I’m sure the old “goto’s are the source of all kinds of badness, from security holes to acne” crowd probably have their pitchforks and torches at the ready. I, however, will not be joining them for this outing.

Hopefully, the thoughtful reader:

  • has already reviewed the code in question
  • is aware of the difference between -Wall and -Weverything
  • has read Lyon’s commentary
  • uses static analysis tools as an adjunct to [not a replacement for] code reviews
  • realizes why it is so important to fail first
  • knows that this type of thing isn’t going to change any time soon

You may be asking yourself what this “failing first” is of which I speak.

Simply put, every function/method/routine should fail first and fail fast. Far too many people are out there writing “happy path” code. This is to say their code properly handles situations where everything goes to plan. Hence, the happy bit.

This kind of code isn’t really interesting code in my opinion. Ever since my classmates in college started showing me their programs, I’ve taken great pleasure in poking holes in them. My master’s work in computer viruses was an exercise in attempting to create a design that was intended to fend off attacks. Working at GE Space and later on Visual SourceSafe gave me an appreciation of systems that did not forgive failure. As a result, when I look at a problem, I think first about what will go wrong. Not what could, but what will.

As was once said, “constants aren’t and variables won’t.” Or as Fox Mulder was wont to say, “trust no one.” Parameters will be invalid, globals will change while you’re trying to use them and other routines you call won’t do what you need them to do. This includes system routines. In the time since I began using *nix systems around 1980, I’ve seen attempts to set the time crash the system, attempts to set print output size return snarky commentary, and countless weird responses to out-of-memory conditions.

To make matters worse, many people are still fixating on making their development environment behave as though we still lived in a world where the VT100 was the newest tool in the box. We don’t want to refactor the code because the re-validation effort would be outrageous. We write our code to the least common denominator.

So, what’s a developer to do? Here is my hit list. It’s not exhaustive. It won’t end world hunger. It is entirely my opinion.

Validate All Parameters

Not some, not just pointers, not just the easy ones, but all of them.
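A small sketch of what I mean; the function, its names and its bounds are mine and purely illustrative:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Every parameter gets checked: the pointers, the sizes and the relationship between them. */
    bool copy_field(char *dst, size_t dst_size, const char *src, size_t src_len)
    {
        if (dst == NULL || src == NULL)
            return false;                       /* the easy pointer checks */
        if (dst_size == 0 || src_len >= dst_size)
            return false;                       /* the size checks people skip */

        memcpy(dst, src, src_len);
        dst[src_len] = '\0';
        return true;
    }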

Fail as Soon as You Fail

Don’t create a status variable and continuously retest it as you go down the routine, only to get to the bottom, unwind and return.
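In other words, prefer the second shape below to the first. This is a made-up example, but the pattern should look familiar:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* The shape being discouraged: one flag carried and retested all the way down. */
    bool process_slow(const char *path)
    {
        bool ok = true;
        FILE *fp = NULL;

        if (path == NULL)
            ok = false;
        if (ok) {
            fp = fopen(path, "rb");
            if (fp == NULL)
                ok = false;
        }
        if (ok) {
            /* ... the real work, wrapped in yet another test of ok ... */
        }
        if (fp != NULL)
            fclose(fp);
        return ok;
    }

    /* Failing as soon as you fail: the error leaves immediately. */
    bool process_fast(const char *path)
    {
        if (path == NULL)
            return false;

        FILE *fp = fopen(path, "rb");
        if (fp == NULL)
            return false;

        /* ... the real work ... */
        fclose(fp);
        return true;
    }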

Fail First

Since things can go wrong, test for them first. Failure code tends to be small, success code large. When you put the failure case after the success, it greatly reduces its connectedness to the related if statement. When you put it up front, you can immediately see what’s supposed to happen when things go south. Additionally, since we fail first, this can be done before you actually put in ‘operational’ code. This allows you to put together a framework for the code faster.
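As a sketch of that framework-first idea (the function and its limits are invented for illustration), every failure exit can be written, and compiling, before a single line of operational code exists:

    #include <stdbool.h>
    #include <stddef.h>

    bool send_record(const unsigned char *buf, size_t len, int socket_fd)
    {
        if (buf == NULL)
            return false;               /* failure cases are short and sit up front */
        if (len == 0 || len > 16384)
            return false;               /* example bound; use whatever your protocol allows */
        if (socket_fd < 0)
            return false;

        /* TODO: the longer success path goes here, unindented and easy to read */
        return true;
    }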

Return Something Meaningful

There are very few instances where you should create a routine that doesn’t return either a boolean or an enumeration of state. Use an error context when necessary. Don’t use a global. Ever. Ever ever. errno has got to be one of the worst ideas anyone ever had. Although errno shifted up 8 bits is right up there.
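Something along these lines; the names are illustrative, the point is an enumeration plus an explicit error context instead of a global:

    #include <stddef.h>

    typedef enum {
        PARSE_OK = 0,
        PARSE_BAD_ARGUMENT,
        PARSE_SYNTAX_ERROR
    } parse_status;

    typedef struct {
        size_t offset;          /* where in the input the problem was found */
        const char *detail;     /* a static description of what went wrong */
    } parse_error;

    /* The status comes back as the return value; the details travel in the
       caller-supplied context, not in a global that anyone can stomp on. */
    parse_status parse_digit(const char *text, int *out, parse_error *err)
    {
        if (text == NULL || out == NULL) {
            if (err != NULL) { err->offset = 0; err->detail = "null argument"; }
            return PARSE_BAD_ARGUMENT;
        }
        if (text[0] < '0' || text[0] > '9') {
            if (err != NULL) { err->offset = 0; err->detail = "expected a digit"; }
            return PARSE_SYNTAX_ERROR;
        }
        *out = text[0] - '0';
        return PARSE_OK;
    }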

Initialize All Local Variables

The compiler may indeed initialize them for you. If you’ve got a smart compiler (they really are good these days), then this won’t cost you anything. Static analyzers will tell you to do this.
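A tiny, hypothetical example of the kind of thing an analyzer will flag:

    #include <stdio.h>

    void report(int have_value)
    {
        int count = 0;              /* initialized even though it may be overwritten below */
        const char *label = NULL;   /* cheap insurance; NULL is at least testable */

        if (have_value) {
            count = 42;
            label = "answer";
        }

        /* Without those initializers, this read is undefined behavior whenever
           have_value is false; with them, it's merely a "none: 0" line of output. */
        printf("%s: %d\n", label != NULL ? label : "none", count);
    }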

Have a Single Exit Point

Please don’t whine about how it makes your code look. You’re probably going to have other issues if you don’t. Things like memory and lock management in languages like C and C++. The future generations who are trying to debug around your routine will thank you for not making them dig through the chicken entrails to figure out how they got out of the routine.
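One way to square this with failing first (a sketch of mine, not a prescription): do the parameter checks before anything is acquired, then funnel the rest through one exit so the cleanup lives in exactly one place.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    bool copy_file(const char *src_path, const char *dst_path)
    {
        if (src_path == NULL || dst_path == NULL)
            return false;               /* nothing acquired yet, so an early out is harmless */

        bool ok = false;
        FILE *src = fopen(src_path, "rb");
        FILE *dst = NULL;

        if (src != NULL) {
            dst = fopen(dst_path, "wb");
            if (dst != NULL) {
                int ch;
                while ((ch = fgetc(src)) != EOF) {
                    if (fputc(ch, dst) == EOF)
                        break;
                }
                ok = !ferror(src) && !ferror(dst);
            }
        }

        if (dst != NULL)
            fclose(dst);
        if (src != NULL)
            fclose(src);
        return ok;                      /* the single exit once resources are in play */
    }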

Scope Your Variables

Having all your locals at the top of the routine because they’re easier to find says something. It’s not a good something. Did I mention that compilers are really good? Having variables in the smallest possible scope helps them. Create a scope if you need to; there is no penalty from the code police for using braces.
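For example (deliberately trivial):

    #include <stdio.h>

    int main(void)
    {
        for (int i = 0; i < 3; ++i) {   /* i lives only inside the loop */
            printf("pass %d\n", i);
        }

        {   /* a bare pair of braces is a scope; scratch can't leak into later code */
            char scratch[32];
            snprintf(scratch, sizeof scratch, "passes: %d", 3);
            puts(scratch);
        }

        /* i and scratch are gone here; any accidental reuse becomes a compile error */
        return 0;
    }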

Scope Your Clauses

This includes if‘s, case‘s, for‘s and anything else that can have more than one statement. This would have made the Apple bug much more obvious.
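Here’s a toy reconstruction of the shape of that bug, using stand-in functions of my own rather than Apple’s actual code. The unbraced version reports success even though the second check never runs; the braced version can’t hide the stray line.

    #include <stdio.h>

    static int step_one(void) { return 0; }
    static int step_two(void) { return -1; }   /* the check that should fail */

    /* Unbraced clauses: the second goto isn't guarded by the if at all, so
       step_two() is never consulted and the function returns 0 ("success"). */
    static int verify_unbraced(void)
    {
        int err = 0;
        if ((err = step_one()) != 0)
            goto fail;
            goto fail;                          /* always executes; the indentation lies */
        if ((err = step_two()) != 0)
            goto fail;
    fail:
        return err;
    }

    /* Braced clauses: a duplicated line either sits harmlessly inside the braces
       or sticks out as obviously unreachable. */
    static int verify_braced(void)
    {
        int err = 0;
        if ((err = step_one()) != 0) {
            goto fail;
        }
        if ((err = step_two()) != 0) {
            goto fail;
        }
    fail:
        return err;
    }

    int main(void)
    {
        printf("unbraced says %d, braced says %d\n", verify_unbraced(), verify_braced());
        return 0;
    }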

Use the Appropriate Flow Control Structures

We can see that had Apple used else if’s instead of a series of separate if’s, the code would have shown the dependent sequencing and made the goto’s unnecessary. There is a time and place for goto’s, but when they are used to such an extent as in the Apple case, it indicates structural issues. There are multiple control mechanisms for a reason. Using elaborate variable manipulation when you could use a break boggles the mind.
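Using the same toy steps as the sketch above, the dependent sequence reads naturally as an else-if chain, and there’s nothing left for a stray goto to skip:

    #include <stdio.h>

    static int step_one(void) { return 0; }
    static int step_two(void) { return -1; }

    static int verify_chained(void)
    {
        int err;

        if ((err = step_one()) != 0) {
            /* fall through to the single return with step_one's error */
        } else if ((err = step_two()) != 0) {
            /* likewise for step_two */
        } else {
            /* every check passed and err is 0 */
        }
        return err;
    }

    int main(void)
    {
        printf("chained says %d\n", verify_chained());   /* -1: step_two's failure is reported */
        return 0;
    }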

Stop Pretending You’re Smarter than the Compiler

If you’re not keeping up with what the Intel, gcc or clang folks are doing with compilers, or what Intel is doing with profiling feedback, then don’t even try to pretend that you can provide more help to the compiler than it can figure out on its own.

Use a Static Analyzer

Static Analyzers have way more patience for looking at code lifetimes than a person ever will. They don’t have attitude and will readily call your baby ugly. We need that. We have a whole boatload of ugly babies.

Make Code Reviews a Required Part of the Development Process

This may be one of the hardest things to do. It’s that ugly baby thing. It’s “the code” and not “our code.” I have yet to meet a developer who would disagree that they desire to produce the best product they can. Code reviews should be about objective measures and not bike sheds.

Test it Until it Falls Over

Good developers and good testers are like orchids. Unless you treat them well, they aren’t around very long. A good tester is a good developer’s best friend. A bad developer will drive off a good tester. A bad tester can kill a project. A good tester will understand the product and its use. They are also really sadistic people who take great pleasure in tormenting the product. What they do is find the sharp bits that are protruding and point them out. The thing about developing software is that after a while you stop thinking about what it might do and only think about what it does do. I can’t count how many times I’ve heard a developer ask, “Why would you do that?”

Be Realistic

If you don’t allow for the review feedback loop, you will suffer. It takes time to make corrections. Done is only done when you’ve exercised your error paths. If you don’t do failure testing, you will suffer. If you believe that once the happy path passes that you are ready to release, you will suffer. Notice the pattern?

Read Full Post »
