Posts Tagged ‘computers’

One of the common misconceptions I encounter when explaining threat modeling to people is the issue of operating system scale. This is one of those cases where size really does matter.

When threat modeling, there is a desire to do as little work as possible. By that I mean that you shouldn’t model the same thing multiple times. Model it once, put it in a box, and move on. It’s furniture.

We do this to allow us to focus on the stuff we’re developing and not third-party or open source bits.

When it comes to operating systems, however, I don’t have just one border to deal with as I would with, say, a vendor-provided driver. The thing we casually refer to as an operating system is actually a many-layered beast and should be treated as such. When we do, the issue of OS scale disappears in a puff of abstraction smoke.

So, what is this so far unexplained scale?

Let’s rewind a bit to the original computers. They were slow, small (computationally and with respect to storage) and, in the grand scheme of things, pretty simple. There was no operating system. The computer executed a single program. The program was responsible for all operational aspects of its existence.

As computers became more sophisticated, libraries were created to provide standardized components, allowing developers to focus on the core application and not the plumbing. Two of these libraries stand out: the mass storage and communications libraries. We would eventually refer to these as the file system and network.

When computers began expanding their scope and user base, the need for a mechanism to handle first sequential, then later multiple jobs led to the development of a scheduling, queueing and general task management suite.

By the time Unix was introduced, this task manager was surrounded by access management, program development tools, general utilities and games. Because, well, games.

For users of these systems, the OS became shorthand for “stuff we didn’t need to write.”

The odd thing is that, on the periphery, there existed a class of systems too small or too specialized to use or even require this one-stop-shopping OS. These were the embedded systems. For decades, these purpose-built computers ran one program. They included everything from thermostats to digital thermometers. (And yes, those Casio watches with the calculator built in.)

Over time, processors got a lot more powerful and a lot smaller, a combination which made it possible to run those previously resource-hungry, desktop-class operating systems in a little tiny box.

But what happens when you want to optimize for power and space? You strip the operating system down to its base elements and use only the ones you need.

This is where our OS sizing comes from.

I like to broadly divide operating systems into four classes:

  • bare metal
  • static library
  • RTOS
  • desktop / server

Each of these presents unique issues when threat modeling. Let’s look at each in turn.

Bare Metal

Probably the easiest OS level to threat model is bare metal. Since there’s nothing from third-party sources, development teams should be able to easily investigate and explain how potential threats are managed.

Static Library

I consider this the most difficult level. Typically, the OS vendor provides sources which the development team builds into their system. Questions arise around OS library modification, testing specific to the target / tool chain combination, and the threat model of the OS itself. The boundaries can become really muddy. One nice thing is that the only OS elements are the ones explicitly included. Additionally, you can typically exclude aspects of the libraries you don’t use. Doing so, however, breaks the de-risking boundary, as the OS vendor probably didn’t test your pared-down version.

RTOS

An RTOS tends to be an easier level than a desktop / server one. This is because the OS has been stripped down and tuned for performance and space. As such, bits which would otherwise be lying about for an attacker to leverage are out of play. This OS type can still complicate modeling, though, as unique behaviors may surface.

Desktop / Server

This is the convention center of operating systems. Anything and everything that anyone has ever used or asked for may be, and probably is, available. This is generally a bad thing. On the upside, this level tends to provide sophisticated access control mechanisms. On the downside, meshing said mechanisms with other people’s systems isn’t always straightforward. As for configuration, since the configuration mechanism is provided by the OS vendor, it’s pretty safe to assume that any configuration-driven custom version has been tested by the vendor.

OS and Threat Modeling

When threat modeling, I take the approach of treating the OS as a collection of services. Doing so makes the issue of OS level go away. I can visually decompose the system into logical data flows to process, file system and network services, rather than to a single generic OS object. It also lets me put OS-provided drivers on the periphery, more closely modeling the physicality of the system.

It’s important to note that this approach requires that I create multiple threat model diagrams representing various levels of data abstraction. Generally speaking, the OS is only present at the lowest level. As we move up the abstraction tree, the OS goes away and only the data flows between the entities and resources which the OS was intermediating remain.

Let’s consider an application communicating via a custom protocol. At the lowest level, the network manages TCP/UDP traffic. We need to ensure that these are handled properly as they transit the OS network service. At the next level we have the management of the custom protocol itself. In order to model this properly, we need the network service to stay out of the discussion. Finally, at the software level, we consider how the payload is managed (let’s presume that it’s a command protocol).
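To make that concrete, here is a rough sketch of how those three diagram levels might be captured as data. The element names and concerns are purely illustrative, not the schema of any particular threat modeling tool:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical, tool-agnostic representation of one data flow in a DFD.
struct Flow {
    std::string source;
    std::string sink;
    std::string data;     // what crosses the boundary
    std::string concern;  // what the threat model must address here
};

int main() {
    // Level 0: OS services are explicit; drivers sit on the periphery.
    std::vector<Flow> level0 = {
        {"NIC driver", "OS network service", "TCP/UDP segments",
         "proper handling as traffic transits the network service"}};

    // Level 1: the OS has been abstracted away; only the protocol remains.
    std::vector<Flow> level1 = {
        {"Remote peer", "Protocol handler", "custom protocol frames",
         "parsing, sequencing, malformed-frame handling"}};

    // Level 2: application view; the payload is a command protocol.
    std::vector<Flow> level2 = {
        {"Protocol handler", "Command dispatcher", "commands",
         "authorization, validation, replay protection"}};

    for (const auto* level : {&level0, &level1, &level2})
        for (const auto& f : *level)
            std::cout << f.source << " -> " << f.sink
                      << " [" << f.data << "]: " << f.concern << "\n";
}
```

Note that the OS appears only in the level 0 flows; the upper levels reason purely about the protocol and the payload.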

Nowhere in the above example does the OS level have any impact on how the system would be modeled. By decomposing the OS into services and treating layers uniformly, we gain the ability to treat any OS like furniture. It’s there, but once you’ve established that it behaves properly, you move on.

Read Full Post »

There have been numerous times when a new technology has led to a major shift in how we thought about how computers and software should be built. We are about to see one of those shifts. At least that’s what I’ve come to believe.

Let’s pop into the Wayback and set our sights on the early ’80s. At that time computers had one processor. Hardware-based floating point was the domain of mainframes and minicomputers. Communications between computers existed only for the well-heeled. Security meant keeping your computer locked up.

Life was pretty simple. If you wanted something done, you did it yourself. When software was shared it was done via the US Postal Service on 9-track tape.

Fast forward to the early ’90s. Desktop computers were fairly common. Uniprocessors still ruled. Hardware floating point was now readily available. The internet had just been introduced. Gopher was slowly being displaced by the combination of FTP and web search engines. Security issues were a thing that happened, but security was, on the whole, a black art practiced by a small number of individuals and required skills that you needed to develop yourself.

It was around this time that I was casting about for a thesis topic for my Master’s in Electrical and Computer Engineering. I took on the topic of virus-resistant computer architectures (AARDVARK). Did I mention that it was 1992? Just researching the state of the art in computer viruses was a huge task. No Google, Amazon or ACM online article search. As to the other side of the equation, the how and why of hacking, well, I’ll leave that for another time.

By the time I was done, I’d proposed a computer architecture with separate instruction and data spaces, where the application’s binary was encrypted and the key, loaded in a separate boot sequence, was stored in a secure enclave accessible only to the binary segment loader. Programs were validated at runtime. I conjectured that such a computer would be ideal for secure use and could be built within 18 months.

Everyone thought it was a great design and the school even worked with me to apply for a patent. The US Patent Office at that time didn’t get it. After five years we abandoned the effort. I was disappointed, but didn’t lose sleep over it.

Fast forward to 2012, when Apple released the iOS 6 security guidelines. Imagine my amusement when I saw echoes of AARDVARK. It’s all there: signed binaries, secure enclave, load validation. Good on them for doing it right.

Let’s step back and consider the situation. Computers are really small. They have integrated hardware floating point units, multi-processors and now, with the advent of this generation of iPhone, hardware-based security. The internet has gone global. Google indexes everything. Open source is a thing. So, we’re good?

Not so much. The Apple iPhones are an oasis in a vast desert of security badness. Yes, IPv6 has security goodness available, but IPv4 still rules. Secure programming practices are all but non-existent. Scan and contain is the IT mantra. Threat modeling is an exercise for the academic.

This brings us to last year. Microsoft announced Azure Sphere. Application processor, dual-MCU, networking processor, security processor. All firewalled. All in the same package. The provided OS was a secured version of Linux. Each device is registered so only the manufacturer can deploy software, push updates and collect telemetry via the Azure cloud.

There must be a catch. Well, as you know, there’s no such thing as a free burrito.

The first device created to the Azure Sphere specification is the Mediatek MT3620. And no, you can’t use it for your next laptop. The target is IoT. But, there’s a lot of horsepower in there. And there’s a lot of security and communications architecture that developers won’t have to build themselves.

Microsoft is touting this as the first generation. Since they started with Linux and ARM, why wouldn’t you want something with more power for systems that have security at their core? If Microsoft approached this as Apple has the iPhone, iPad, AppleTV and Apple Watch, why shouldn’t we expect consumer computers that aren’t insecure?

But will I be able to use them for software development? That’s a tricky question.

When I envisioned AARDVARK, my answer was no. That architecture was designed for end-user systems like banks and the military. You can debug a Sphere device from within Visual Studio, so, maybe it’s doable. You’d need to address the issue of a non-isomorphic ownership model.

Are users willing to bind their device to a single entity? Before you say no, consider how much we’ve already put in the hands of the Googles and Facebooks of the world. Like it or not, those are platforms. As are all the gaming systems.

Regardless, I believe that we will end up with consumer compute devices based on this architecture. Until then, we’ll just have to watch to see whether the IoT sector gets it and, by extension, whether the big boys do.

Either way, the future is Sphere.

Read Full Post »

An interesting thing about having spent over thirty years taking software from one platform to another is that, time and again, I’ve had my understanding of what constitutes correct code challenged. That’s a good thing.

Sadly, many people who ply the trade of software development mistakenly believe that a compiler has the ability to warn you when your code is going to behave in ill-advised ways. Worse yet, they fall into the trap of believing either that their code is correct if it compiles without warnings or that if one compiler accepts their code then any compiler will. Unfortunately, these beliefs are the developer equivalent of a two-year-old’s lack of object permanence. These two tragedies aside, the vast majority of developers are clueless as to how static analysis can and should be used to ensure code quality.

Let’s rewind a bit and work through these.

In the beginning was the language specification. It was a bright, shiny idea given form. Lest you get the idea that these documents, venerated by compiler authors and language wonks alike, are intrinsically sane, please recall that the original Ada spec allowed minus signs in the middle of numeric literals and that 8 and 9 were perfectly acceptable octal digits in C. Now, a computer language without a compiler is fairly dull. Enter the compiler authors. These individuals, who number about 1000 in the world and of whom I’ve personally known about a dozen, are highly proficient at taking the language specification and giving it life. The way they do this is far more Pollock than Vermeer. Why? Well, a language is the embodiment of a worldview. Unlike source code control systems, which are only created when someone gets fed up with the way the current one does one particular thing to such an extent that they can’t bear living under its yoke any longer, languages come from the world of paradigms. If your pet peeve is small, you’ll probably be able to either work around it or get it added to the language (typically contingent upon who you know). If your peeve isn’t small, any attempt to modify the language you’re using will have the same result as updating the value of Planck’s constant or dropping a storm trooper platoon into the middle of a city on Vulcan.

It can’t possibly be that bad, you say. Actually, it can. And, in fact, it is. A long time ago, Apple was transitioning from its Pascal-based OS to a C-based one. The transition was fraught with byte-prefixed, null-terminated strings. The resultant code was pretty horrific. Just recently, I was working on a feature that required me to move between BSTR, wstring, COM and char strings. This is because the language wasn’t designed with the notion that strings could be more than just US English. Compare this to Swift, which is a pure Unicode language right down to the variable names.

Every compiler writer brings their own unique experiences and skill set to realizing the worldview embodied in the language specification. No two compilers will realize it the same way. Oh, they’ll be close and probably agree 90% of the time. The biggest area of difference will be in what each compiler considers important enough to warn the developer about. On one end of the scale, you could argue that if the language specification allows something, the developer is free to write the code accordingly. At the other, the compiler would report every questionable construct and usage. All compilers that I’m aware of fall somewhere in the middle.

Unfortunately, rather than biting the bullet and forcing developers to recognize their questionable and problematic coding choices, compilers have traditionally given code a pass by not enforcing warnings as errors and by having the default warning level be something uselessly low. To make matters worse, if you do choose to crank up the warning level to its highest severity, many times the operating system’s headers will fail to compile. I’ve discovered several missing macro symbols that the compiler just defaulted to 0. Not the expected behavior. Microsoft’s compilers have the sense to aggregate warnings into levels of increasing severity. gcc, however, does not. The omnibus -Wall turns out to not actually include every warning. Even -Wextra leaves stuff out. The real fun begins when the code is expected to be compiled on different operating systems. This can be a problem for developers who have only ever worked with one tool chain.
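As a small, GCC-flavored illustration (clang and MSVC draw these lines differently, and exact flag coverage varies by version), both of the issues below compile in silence under -Wall -Wextra and only surface when -Wshadow and -Wconversion are requested explicitly:

```cpp
// warnings.cpp
// Compiles silently with:            g++ -Wall -Wextra -c warnings.cpp
// Both issues are reported only with: g++ -Wall -Wextra -Wshadow -Wconversion -c warnings.cpp

double scale(double ratio, long long count) {
    int total = count;          // may silently truncate long long -> int; only -Wconversion flags it
    for (int i = 0; i < total; ++i) {
        double ratio = 0.5;     // shadows the parameter; only -Wshadow notices
        total -= static_cast<int>(ratio * i);
    }
    return ratio * total;
}
```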

So, let’s say that you realize that compilers on different platforms will focus on different issues. You’ll probably start considering trying other compilers on the same platforms. Once all those different compilers, set to their most severe, pass, your code is good, right? Not so fast. Remember, a compiler’s job is to translate the code, not analyze it. But, you do peer reviews. Tell me, how much of the code do you look at? How long do you look at it? Do you track all the variable lifetimes? Locks and unlocks? In all the code paths? Across compilation units?

Of course you don’t. That would be impossible. Impossible for a person. That’s why there are static analysis tools. My current favorite is Coverity. And yes, it costs money. If you can sell your software, you can pay for your tools.
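Here is the flavor of defect I mean. It’s a contrived sketch, but it sails through compilation and most casual reviews, while a path-sensitive analyzer reports the lock that is never released on the early-return path:

```cpp
#include <map>
#include <mutex>
#include <string>

std::mutex g_lock;
std::map<std::string, int> g_counts;

// Returns the count for 'key', creating it on first use.
// Bug: the early return leaves g_lock held. The compiler is perfectly happy;
// a path-sensitive static analyzer reports the missing unlock on that path.
// (std::lock_guard would make the bug impossible in the first place.)
int count_for(const std::string& key) {
    g_lock.lock();
    auto it = g_counts.find(key);
    if (it != g_counts.end())
        return it->second;       // <-- lock never released on this path
    g_counts[key] = 0;
    g_lock.unlock();
    return 0;
}
```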

But the fun doesn’t end there. Modern compilers can emit additional code to allow for profile-guided optimization (PGO). Simply put: instrument the code, run the code, feed the results back into the compiler. Why? Because hand-tweaking the branch predictors is an exercise in futility. Additionally, you’ll learn where the code is spending its time. And this matters because? It matters because you can waste a lot of time guessing where you need to optimize.
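For reference, the GCC spelling of that loop looks roughly like the comments below; the workload file name is just a placeholder, and clang and MSVC have their own equivalents:

```cpp
// pgo_demo.cpp -- profile-guided optimization, GCC style.
//
//   1. g++ -O2 -fprofile-generate pgo_demo.cpp -o pgo_demo   // build instrumented binary
//   2. ./pgo_demo < representative_workload.txt              // run it; *.gcda profile data is written
//   3. g++ -O2 -fprofile-use pgo_demo.cpp -o pgo_demo        // rebuild using the measured branch behavior
//
#include <iostream>
#include <string>

int main() {
    long hits = 0, total = 0;
    for (std::string line; std::getline(std::cin, line); ++total)
        if (line.find("ERROR") != std::string::npos)  // the branch the profile teaches the compiler about
            ++hits;
    std::cout << hits << " / " << total << "\n";
}
```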

Finally, there’s dynamic analysis. This is the realm of run-time leak detection. Wading through crash dumps and log files is a terrible way to spend your time.
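A minimal illustration: the compiler has nothing to say about the code below, but running it under AddressSanitizer’s leak detector (build with -fsanitize=address, where LeakSanitizer is available) or under Valgrind reports the lost allocation the moment the program exits:

```cpp
#include <cstring>

char* copy_name(const char* name) {
    char* buf = new char[std::strlen(name) + 1];  // heap allocation the caller must free
    std::strcpy(buf, name);
    return buf;
}

int main() {
    copy_name("dynamic analysis");  // return value dropped: 17 bytes leaked,
                                    // reported at exit by ASan/LeakSanitizer or Valgrind
    return 0;
}
```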

So, are you developing quality software or just hacking?

Read Full Post »

One of the interesting things that happens to me when I attend events like yesterday’s PDX Summit III is that it gets me thinking about things in a new and more connected way. For many who know me, this will be perceived to mean that, for some indeterminate length of time, I’ll be a bit more random than usual.

To misappropriate the Bard, “There are more things in heaven and earth, Horatio, than are accessible from your contact list.”

This morning I started reading Galileo’s Telescope and it got me thinking in terms of the big data / open source elements brought up at the summit. Before you injure your neck doing that head tilt puzzled look thing that dogs do, let me explain.

I have a great affinity toward data visualization. I could probably press my own olive oil with the stack of books I’ve got on the subject. So when I saw that Galileo had written a text entitled Sidereus Nuncius, my first thought was, “if you took nuncius (message) and pushed it forward into present-day English, you’d end up at announce, denounce and enounce. What if you pushed it backward in time? How about sideways toward French? If we visualized this map, what would it look like? How would we navigate it?”

I’ve always found it fascinating how speech informs thought. We live in a society where using ‘little words’ is encouraged in an effort to be more inclusive. The problem is that these ‘big words’ aren’t big for the sake of big. They encapsulate entire concepts and histories. We talk about ‘the big picture,’ ‘big data,’ and the like, but in our attempt to make it all accessible, all we seem to be doing is creating a meaningless assemblage of words and acronyms that, at the end of the day, has the precision of a ten-pound sledgehammer in an omelet shop.

What if, instead of constantly reducing our communication to the green card / red card of sports, we could point to the 21st century version of Korzybski’s Structural Differential and literally be on the same page? How would language acquisition be improved for both native and foreign languages if you could build understanding based on the natural evolution of the language’s concept basis? What would the impact on science be if we could visualize past crossover points between disciplines? How much more readily would students learn the concepts of computer science and engineering if they could put present-day abstractions into the context of past constraints rather than simply memorizing a given language, framework or operating system’s implementation?

Yeah, this is one of those posts that has no conclusion. It’s a digital scribble intended to be a jumping off point for future endeavors.

Read Full Post »

It took me a bit longer than I’d’ve liked, but finishing all the Apple WWDC 2015 videos (110-ish) in under three months is pretty satisfying.

I’m impressed at the speed with which Apple is executing the change of primary development language from Objective-C to Swift. I expected three years, but it looks like they’ll have things wrapped up in two. This is no mean feat. I’ve now experienced three core language shifts within Apple. The first was from the Apple ][ 6502 assembly to the Macintosh 68000 assembly / Pascal hybrid. The second was the move to C. This was particularly tedious for those of us attempting to keep both camps happy. You haven’t lived until you’ve dealt with byte-prefixed, null-terminated strings. With the adoption of NextStep and the BSD/Mach micro-kernel came the transition to Objective-C. I’ll admit, I made fun of Objectionable-C. By that time, I’d spent the better part of a decade using C++. A bit of snobbery on my part. Those two children of C have fundamentally different views of the world. I cut my teeth on iOS using Objective-C and appreciated its extensibility when compared with C++. But it didn’t have the base that C++ did. A billion devices later, well, that’s a different story. Now we have Swift. I believe that it represents the next generation of language. Not object-oriented or message-oriented, but protocol-oriented.

The number of sessions dedicated to tools was impressive as always. As was the quality of the presentations. Thankfully, we were spared the pain of having Apple’s French speakers presenting in English discussing graphics, with the word banana coming up so often that one would think there was a drinking game just for that session.

I’m looking forward to tinkering with the WatchOS bits. Those sessions will probably be a staple for developers.

Props go out to the Xcode developers for continuing to bring a quality product to the table. An AirPlay view for the simulators would be nice (hint, hint). The sessions dedicated to the profiling, power and optimization of code are worth watching multiple times.

As is the case with many mature elements of the operating systems, security had fewer explicit sessions. Instead, security was a pervasive theme along with privacy.

One cannot talk about this year’s sessions without mentioning the brilliant leveraging of the synthesis of scale and privacy to create ResearchKit.

The care that Apple puts into the sample code is truly inspiring. Having suffered through hundreds of pages of AOCE documentation, today’s entry into Apple development seems easy. Easy on the individual component level at least. There is now more that one would have to learn in order to create software from beginning to end with the level of quality and feature richness that the world has come to expect from applications on the Apple platforms.

Leaving the best to last, I’ll reflect on an issue that’s always bothered me with the transition strategy that Apple has used in the past. It’s not so much that I didn’t like the solution they came up with to deal with transitioning from one methodology to another. Or that I had a better answer; I didn’t. The price always seemed rather steep to me. I speak of binaries with multiple code and data resources used to allow a user to download a single image and run it anywhere. This was used in the transition from 68000-based machines to PowerPC ones and again when moving to Intel’s architecture. On iOS, we’ve seen the number of duplicate resources steadily climb as the screen geometries and densities have increased. The thing of which I speak is the double-headed axe of app thinning and on-demand resources. The ability to release an application to the store with all the bits for all the supported devices and be able to download only those that will actually be usable on a given device is tremendous. Couple that with a way to partition an application in such a way that only the resources within a user’s window of activity are present on the device and you have a substantial savings in both time and memory. Well done.

It’s been many years now since I’ve been able to attend WWDC in person and given the popularity of the conference, it’s not likely that I’ll be going any time soon. I’m content for the moment to be able to access all the content, if not the people, that someone attending would be able to. I look forward to next year’s sessions.

Read Full Post »

Over the years, I’ve gotten used to the reality that the vast majority of people who work in the technology field only do it for the money. The cab drivers who have told me they want to “get into” computers because “it’s easy money” don’t faze me. Similarly, the sea of “recruiters” who contact me spouting techno-babble get a pass for their cluelessness. As the embodiment of evil would say, they “are mercifully devoid of the ravages of intelligence.”

Every now and then though, an email comes to my inbox that begs the question, “How does this person not get fired?”

Let’s look at this tremendous work of ignorance and hypocrisy. We’ll say that it came from M at Foo (a major technology corporation). I’ve colorized the text in blue. All other styles applied to the text are original.

Let’s begin.

Hi Charles,

I recently found your profile in our database, and your background is impressive. The [Foo] Media Division will be flying several candidates in for interviews at our Seattle headquarters in April and considering you. The roles we are filling will all be located in Seattle and a full relocation package and immigration support would be provided if you are selected.

Someone did a database keyword query and it included my name. Spiffy. If only there was a single thought in the first sentence instead of two. The grammar goes downhill from there. The deluge of prepositional phrases in sentence two points to a completely disorganized mind. Once again we see multiple thoughts presented. This time, however, the author neglected the second verb. One would assume that it is “are.” Forgiving this error as a typo, one is left with the distinct impression that (1) people are ignorant as to the location of Foo and (2) they will be available to go to Seattle on short notice. Although it is nice to know that the positions will be in Seattle, the fact that immigration support “would be provided” indicates that my resume has, in fact, not been read. Additionally, following the trend established in previous sentences, multiple thoughts are present. Finally, why should I care that I would be relocated when I don’t know what the position is yet?

We are looking bring on board Senior (7+ yrs. industry experience) Software Developers with experience designing and architecting highly scalable and robust code in Java, C++ or C#.  Strong OOD skills and CS fundamentals are required. Working with big data or machine learning can be a major plus.  In addition we have roles for Principal Engineers, Software Development Managers, Software Developers in Test and Technical Program Managers. If you fall into one of these categories we offer a different interview process independent of this event and eager to support you in learning more about these roles.

It appears that the fact that the position is senior merits both bolding and underlining, lest I miss it. It also seems that what is meant by senior is up for debate. I ask you, gentle reader, why would you abbreviate years by dropping two letters only to add a period? Here we see a neglected preposition (of). I will refer back to my unread resume as the reason for my assertion that this sentence is unnecessary. Let us press on.

These ever-so-senior software developers (bold, underline) must have experience designing and architecting. I am reminded of the George Carlin sketch about the kit and caboodle. Redundant, anyone? Moving on, let’s consider “highly scalable and robust code”. I have yet to see code which is highly-scalable (note the proper use of hyphenation) that doesn’t also demand robustness. This is my opinion, but I would imagine that people would generally agree that non-robust code tends not to be very scalable. As to my languages of record, I will again refer to my seldom-read resume.

Obviously, the next sentence is of critical import as it is bolded and underlined in its entirety. Now, if anyone out there knows a developer who can architect a highly-scalable system and yet is lacking computer science fundamentals and strong object-oriented design skills, please introduce me.

Slogging along we have an obvious statement regarding a working understanding of the two biggest buzzwords in the heap today. That these can be a plus makes for a fairly nebulous statement. Is experience in these disparate areas important? Will it be part of the job?

Now we wander off into the weeds by telling me that they’re also looking to fill other positions. So, if they’re completely off the mark, not to worry?

If interested in exploring Development opportunities with us, the first step will be to complete our coding challenge ideally within the next 3 to 5 days.  If you need more time, please let me know. After the hiring manager reviews your ‘successful’ code, we’ll contact you to confirm your onsite interview where you will meet key stakeholders from the [Foo] Media team.

Back in multiple-thought land, let’s begin by ignoring the subject of the sentence. And now that you’ve bothered to read this far, here’s the catch. You have 3 to 5 days to complete a coding challenge. The plot thickens. But it’s not really 3 to 5 days. You can ask for special dispensation. It is nice to know that my code will be successful and that I will be contacted to confirm my onsite interview. But wait, we have another thought here. At the onsite interview, I’ll meet key stakeholders. For the less techno-babble encumbered, those would be the marketing and project manager.

Please click here [link removed, sorry] for the coding challenge and include your full name and email address in the tool. The application works best in Firefox or IE. There is no time limit, but if you do take breaks it counts against your completion time. Please expect the challenge to take between 10 – 90+ minutes.  The KEY is to write your absolute BEST code.  Additionally, be aware that should you be selected for interviews, you will also be asked to produce code on the white board.

Here’s a puzzling set of instructions. If they have read my resume and managed to send me an email, why is it that they need me to create an account in “the tool.” “The tool?” Seriously? I don’t recall moving to The Village.

Not so fast, now it’s “the application” and it works best in Firefox and Internet Explorer. Best? How about telling me the required browser version to keep from getting halfway into this “challenge” and having “the tool” spew like a unicorn doing the technicolor yawn.

And in a fit of verbal vomit worthy of a Willy Wonka legal contract, we are told that (1) there is no time limit, (2) the amount of time you take matters, (3) the estimated time to complete is somewhere between 10 minutes and God knows how long, and (4) [this is the big one] we are expected to write “your absolute BEST code.” And as an afterthought let’s tack on a comment about being able to produce code on a white board “should you be selected for interviews.”

Let’s think about this. Okay, you really didn’t need to, but it’s a nice way to slow down the pacing of the post.

In case you hadn’t figured it out, the fourth in this set of nonsensical requirements is what inspired my title. It comes from a scene in “Men in Black.”

James Edwards: Maybe you already answered this, but, why exactly are we here?

Zed: [noticing a recruit raising his hand] Son?

Second Lieutenant Jake Jenson: Second Lieutenant, Jake Jenson. West Point. Graduate with honors. We’re here because you are looking for the best of the best of the best, sir!

Zed: [throws Edwards a contemptible glance as Edwards laughs] What’s so funny, Edwards?

James Edwards: Boy, Captain America over here! “Best of the best of the best, sir!” “With honors.” Yeah, he’s just really excited and he has no clue why we’re here.

How do I create my best code? [aside from not intensifying absolutes] I think about the problem. Solving a problem in 10 minutes or less implies to me that the person (1) has solved the same problem so many times that they have reached the level of unconscious competence with regard to it, (2) did the first thing that came to mind, or (3) guessed. You know the best way to not create highly-scalable systems? By not thinking much about the problem.

Lastly, please send your updated resume directly to me: [M]@[foo].

Should I do this before I embark on the “challenge” or after? Who else would I send my updated resume to? And why bother restating your email address (incompletely) when I could simply reply to this email?

NOTE- If you are currently interviewing with another [Foo] group, we ask that you finish that process. In the event you are in college (at any level) or graduated within the last six months, we invite you to directly apply to positions via this link: www.[foo].com/college.

“Note” is traditionally followed by a colon. And what happened to the whole “lastly” thing? Here we have an indication that Foo’s recruiting system can’t track who’s talking to you. So much for robust. We again see that no resumes have been read here. More than that, why would this even enter into the equation of an email to someone who is expected to have 7+ years of industry experience?

Thank you for your time and look forward to receiving your code challenge response.

There can’t possibly be more, you say. Not so, dear reader. The great two-for-one sentence wrangler strikes again.

Warm regards,

[M]

At least the closing was without incident.

For a company that claims to be seeking the very best, they have a funny way of showing it. If you would like to offend the highly-educated and technically experienced developers you seek to hire, send them emails that simultaneously say that they (1) aren’t worthy of a proofread email and (2) aren’t deserving of a phone screen with a person.

After I’d read this email several times, I looked M up on LinkedIn. Their profile is private. That was a first for me with regard to an internal recruiter.

Well done Foo. Well done.

Read Full Post »