
It took me a bit longer than I’d’ve liked, but finishing all the Apple WWDC 2015 videos (110-ish) in under three months is pretty satisfying.

I’m impressed at the speed with which Apple is executing the change of primary development language from Objective-C to Swift. I expected three years, but it looks like they’ll have things wrapped up in two. This is no mean feat. I’ve now experienced three core language shifts at Apple. The first was from the Apple ][ 6502 assembly to the Macintosh 68000 assembly / Pascal hybrid. The second was the move to C. This was particularly tedious for those of us attempting to keep both camps happy. You haven’t lived until you’ve dealt with byte-prefixed, null-terminated strings. With the adoption of NextStep and the BSD/Mach micro-kernel came the transition to Objective-C. I’ll admit, I made fun of Objectionable-C. By that time, I’d spent the better part of a decade using C++. A bit of snobbery on my part. Those two children of C have fundamentally different views of the world. I cut my teeth on iOS using Objective-C and appreciated its extensibility when compared with C++. But it didn’t have the base that C++ did. A billion devices later, well, that’s a different story. Now we have Swift. I believe that it represents the next generation of language. Not object-oriented or message-oriented, but protocol-oriented.
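To make “protocol-oriented” a bit more concrete, here is a minimal sketch in the spirit of the protocol-oriented programming ideas shown at this year’s WWDC. The Describable protocol and the toy types are my own invented examples, not anything from Apple’s materials:

protocol Describable {
  var name: String { get }
}

extension Describable {
  // A protocol extension supplies shared behavior to every conforming
  // type -- no base class required.
  func describe() -> String {
    return "This is \(name)"
  }
}

struct Developer: Describable {
  let name: String
}

struct Compiler: Describable {
  let name: String
  // A conforming type may still provide its own implementation.
  func describe() -> String {
    return "\(name) turns source into object code"
  }
}

let items: [Describable] = [Developer(name: "Charles"), Compiler(name: "swiftc")]
for item in items {
  print(item.describe())
}

The behavior travels with the protocol rather than with a class hierarchy, and plain value types get to play.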

The number of sessions dedicated to tools was impressive as always. As was the quality of the presentations. Thankfully, we were spared the pain of having one of Apple’s French speakers presenting a graphics session in English with the word banana coming up so often that one would think there was a drinking game just for that session.

I’m looking forward to tinkering with the WatchOS bits. Those sessions are probably a staple for developers.

Props go out to the Xcode developers for continuing to bring a quality product to the table. An AirPlay view for the simulators would be nice (hint, hint). The sessions dedicated to the profiling, power and optimization of code are worth watching multiple times.

As is the case with many mature elements of the operating systems, security had fewer explicit sessions. Instead, security was a pervasive theme along with privacy.

One cannot talk about this year’s sessions without mentioning the brilliant synthesis of scale and privacy that was leveraged to create ResearchKit.

The care that Apple puts into the sample code is truly inspiring. Having suffered through hundreds of pages of AOCE documentation, today’s entry into Apple development seems easy. Easy at the individual component level, at least. There is now more that one would have to learn in order to create software from beginning to end with the level of quality and feature richness that the world has come to expect from applications on the Apple platforms.

Leaving the best to last, I’ll reflect on an issue that’s always bothered me with the transition strategy that Apple has used in the past. It’s not so much that I didn’t like the solution they came up with to deal with transitioning from one methodology to another. Or that I had a better answer; I didn’t. The price always seemed rather steep to me. I speak of binaries with multiple code and data resources used to allow a user to download a single image and run it anywhere. This was used in the transition from 68000-based machines to PowerPC ones and again when moving to Intel’s architecture. On iOS, we’ve seen the number of duplicate resources steadily climb as the screen geometries and densities have increased. The thing of which I speak is the double-headed axe of app thinning and on-demand resources. The ability to release an application to the store with all the bits for all the supported devices, and to have only those that will actually be usable on a given device downloaded, is tremendous. Couple that with a way to partition an application so that only the resources within a user’s window of activity are present on the device, and you have a substantial savings in both time and memory. Well done.
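As a rough illustration of the on-demand resources half of that axe, here is a minimal sketch (in current Swift) of how an app might pull down a tagged asset pack only when the user is about to need it. The tag name “level-2” is an invented example; real tags are assigned to resources in Xcode.

import Foundation

// Ask the system to fetch (or pin, if already present) everything tagged
// "level-2" in the app's asset catalog.
let request = NSBundleResourceRequest(tags: ["level-2"])
request.loadingPriority = 0.8

request.beginAccessingResources { error in
  if let error = error {
    print("Resources not available: \(error)")
    return
  }

  // The tagged resources are now on the device and can be loaded through
  // the usual bundle APIs until endAccessingResources() is called.
  // ... use the assets ...
  request.endAccessingResources()
}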

It’s been many years now since I’ve been able to attend WWDC in person and given the popularity of the conference, it’s not likely that I’ll be going any time soon. I’m content for the moment to be able to access all the content, if not the people, that someone attending would be able to. I look forward to next year’s sessions.

The future has always been a contentious place. I should know, I’ve spent most of my career there.

We’ve come a long way from the idea that the world only needed half a dozen computers [Thomas Watson, Jr.]. We now have so many computers that we managed to exhaust the 32-bit IPv4 address space. The solution embodied in IPv6 creates other issues, but that’s a topic for another post.

The interesting part of working in the computer domain is that feeling of being one step ahead of the langoliers. It can be at once exciting and terrifying. It is not for the faint-of-heart or those who believe that the need for learning stopped after their last final exam.

Lately, I’ve been watching an oddly converging divergence of ideas. One head of this hydra follows the path of the ever bigger. Bigger data sets, bigger pipes, bigger computations and unfortunately bigger OS’s. A second head constantly works toward making the whole morass vanish. I remember when I began to see fewer watches as people realized that their phone could do that. That emergency camera that the insurance company tells you to keep in your car? Answering machines? Travel alarm clocks? MP3 player? Portable DVD player? I would really hate to be in Garmin’s consumer division. Another head wants to be everywhere. It’s no longer sufficient to be that operation you could run out of a garage. Now, we have to be able to have stuff, both artifact and intangible, available everywhere. Remember when the fastest way to see a first-run Hollywood film overseas was to be on a military base? Speaking of military bases, you may have noticed that people are recognizing that security is important. The final head is fixated on why computers are this fixed assemblage of hardware. What if I really do need 20TB of memory and a 16K node mesh?

With all this “progress” going on, it’s all that the poor, beleaguered software developers can do just to keep up on one of these. But, that’s okay right? These are all unrelated. Right?

Well, we’ll get to that. For the moment, let’s see if you and I think the same about who’s doing what.

Bigger

Amazon and Google are both doing cloud, but the company I find interesting here is Microsoft. Azure takes the problem of software at scale and reduces it to some fundamental building blocks (compute, storage, database and network). Operating system? We don’t need no stinking operating system! For those of you who remember what an IBM 1130 is, you’ll love Azure. It’s like driving a TR6 on the PCH at 80 mph (you could get from LA to SF in, like, I don’t know, five-ish hours). The world is yours until you crash. [Disclaimer: I have never driven a TR6.] Want more CPUs or storage or network? Add more.

Invisible

The battery in my first mobile phone was heavier than my entire current phone. Apple’s biggest coup isn’t that it creates ever-smaller technologies. The company is the technological equivalent of Michelangelo, who famously remarked about the process of sculpting his David:

It’s simple. I just remove everything that doesn’t look like David.

When Apple introduced the iPhone, developers were all torches and pitchforks. This wasn’t how things were done. Where’s the disk? How do I see this other application’s files? (“App” was still a trending meme.) Apple took away bits that we were accustomed to, but didn’t actually need. Most of the time. Sometimes they pulled an Apple Round Mouse. But mostly they drove development in a direction understood only by those of us who have been given the task of making a wireless keyboard that can run for six months on a pair of AA batteries: how to write code that won’t make the device die in under four hours. To a large extent this was the evidence of Gates’ Law. We are now on the cusp of the Apple Watch, which promises to hide the technology behind the technology even further. As someone who uses Apple Pay on a regular basis, I’m looking forward to seeing how the Watch does.

Everywhere

This is where the divergence converges. Azure instances can change locality temporally. As a result, your customers access servers in their vicinity. The user interfaces of software written for MacOS or iThings are multi-language and multi-locale (units) capable by default. Unlike Azure, iCloud isn’t so much a platform for developers as it is a vast warehouse of data. Apple’s recent announcement of ResearchKit has already shown how much impact an everywhere technology can have.
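To show what “by default” means in practice, here is a small sketch in current Swift. The values are invented; the point is that the formatters pick up the user’s language, region and unit preferences from the current locale with no extra work from the developer:

import Foundation

// Currency, dates and units all follow the user's locale settings.
let price = NSNumber(value: 1299.99)
let currency = NumberFormatter()
currency.numberStyle = .currency
print(currency.string(from: price) ?? "")   // "$1,299.99", "1 299,99 €", ...

let when = DateFormatter()
when.dateStyle = .long
when.timeStyle = .short
print(when.string(from: Date()))            // word order and names per locale

let distance = Measurement(value: 42.195, unit: UnitLength.kilometers)
let measure = MeasurementFormatter()        // converts to the locale's units
print(measure.string(from: distance))       // "42.195 km" or "26.219 mi"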

Secure

As one who has had the distinct displeasure of pulling his company’s internet connection on 2 November 1988, I believe that security is important. My master’s thesis was focused on computer viruses. I deal with the failure of developers to apply sound security practices to open source and commercial software on an ongoing basis.

For a really long time, no one really took securing the computer all that seriously.

Now, if you look at both Microsoft and Apple, you see security being treated in a serious way. On iOS, it’s baked in. On Windows it’s half-baked. Yes, that’s a bit of snark. Security shouldn’t be an option. In iOS, if an application wants access to your contact list, it must declare that it wants to be able to access those APIs. The first time it attempts to access them, the user is prompted to allow the access. At any time, the user can simply revoke that access. Every application is sandboxed and credentials are held in a secure store. On Windows, security is governed by policy. These policies are effectively role-based. This is fine as far as it goes, but like the days of old, if you’re the wrong role at the wrong time running the wrong application (virus), you can deep-fry any system. Hence my comment about it being half-baked.
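For the iOS side, here is a minimal sketch (current Swift, Contacts framework) of what that declare-prompt-revoke flow looks like from the application’s point of view; the app also states its intent up front with a usage-description entry in its Info.plist:

import Contacts

let store = CNContactStore()

switch CNContactStore.authorizationStatus(for: .contacts) {
case .authorized:
  // Access was granted earlier; just use the store.
  break
case .notDetermined:
  // First use: the system shows the permission prompt on the app's behalf.
  store.requestAccess(for: .contacts) { granted, _ in
    if granted {
      // Read contacts here.
    } else {
      // The user said no; degrade gracefully.
    }
  }
default:
  // Denied or restricted: the user (or a policy) revoked or blocked access.
  // The app cannot re-prompt; it can only point the user at Settings.
  break
}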

Do we seriously believe that banks should be running on an operating system that isn’t built from the ground up around security?

Fixed

This final hydra head is perhaps the most interesting to me as it holds the most promise. It represents the hardware analog to Azure. Today, you may be able to configure an Azure instance, but that configuration only goes so far. Look back to the dim days (which for some reason or other were in black and white, even though we had color movies as far back as 1912). Back then, if you wanted more oomph, you ordered it (and an additional power drop). Now you are greatly constrained. Remember that 20TB system I mentioned earlier? Why can’t I get one? Because our manufacturing model is based on scale. This has been a good thing. It’s made it possible for me to have a laptop that doesn’t weigh 16 lbs with a run time of 2 hours. Isn’t that great? Ask a left-handed person sometime. As the number of actual computer manufacturers dwindles, we’re seeing more white box systems cropping up. These are being used to create the application clouds. But at a time when power is real money, how much are we wasting in resources to access the interesting bits of these boxes? More and more we see the use of storage arrays. All well and good. So, where are the processor arrays? The graphics arrays? What if I need 12 x 5K monitors? The people who crack this nut will make a great number of people very happy.

The Future Won’t be Brought to Us by AT&T

Once AT&T was the go-to place for the future of the future. Not any more. The future is far bigger than anyone imagined it to be and certainly far larger than any one company is capable of providing.

The question is, how do we identify the people who are ready to not only build that future, but to build it out?

I believe that you can learn a lot about people through the things they fill their heads with. Over time the mechanisms for this process have grown in number and availability. Once, people would travel great distances seeking out teachers. Of course, if you were powerful enough you could have them come to you. Once we managed to get the teaching down in permanent (well, mostly) form, you didn’t actually need to bother with the whole physical presence thing. Still, the need for a scribe made this practical only for the silly rich. By the time we get to the 20th century, the super clever public library idea meant that you could recommend a book to someone without running the risk that their dog would eat the only copy for a thousand miles. The mid 20th century added audio and, by its end, video to the menu. With the advent of the internet, we could not only find materials to borrow from libraries with ridiculous ease, but could reserve them and get an email when they were ready for pickup. Then came the Kindle, Zinio, Netflix and the iTunes store. Now if someone is ingesting some bit of knowledge, in all likelihood, you can too (and within minutes).

So, what’s with the Burke-ian prologue?

Well, I was reading The New Yorker’s article The Shape of Things to Come, about Jony Ive and the future of Apple. Among the bits of past, present and impact was a fascinating detail: Ive had been watching Moon Machines [iTunes Store]. I hadn’t expected that.

I’ve always been a bit of a space wonk, so I was interested just on the face of it. What I found fascinating was that a person born in England in 1967, best known for early-21st-century industrial design, saw something of interest in a series dedicated to the United States’ Apollo program.

Having now watched the series, there are things that jump out at me. As with every time I take in something that’s been recommended (if a person with the time constraints of an SVP at Apple mentions seeing value in spending the time, that’s a hint one would be ill-advised to ignore), I strive to understand how it relates to the person, their work and their goals. Ive’s comment speaks volumes.

… like the Apollo program, the creation of Apple products required “invention after invention after invention that you would never be conscious of, but that was necessary to do something that was new.”

The Apollo program was a tech start-up writ large. The goal was abstract; the timetables unyielding; the cost astronomical (literally); the toll on people and their relationships severe. In the end, the successes were ascendant and the failures devastating. The six episodes take on the major aspects which had to work together in order to assure the success of the program.

The lessons of Apollo are applicable to endeavors in science, business, politics and design. Issues of control, quality, planning, communication and contingency are laid bare. As are their failures. Of particular distinction are the moments of crisis. Unlike anything before or since, we have documentation of and visibility into the people who stepped up to lead their teams and the processes through which they overcame them.

In an era of ever-increasing abstraction and the misplaced belief that you don’t actually need to understand how things work in order to produce something of quality, Moon Machines provides timely lessons. The quality of the end product begins with the confluence of domain and technology, not the application of one to the other. The speed and manner of disposing of problems during a crisis depends greatly on the team’s depth of understanding of two questions: What do I have? and What do I need? As well as the understanding of how to get to the latter using the former.

In the end, I have a greater admiration of those involved in Apollo thanks to a comment by Jony Ive.

How does the new series stack up to the old?


I watched the original Cosmos series when it premiered. Like many, I was captivated by the way in which Dr. Sagan told a story. It was made all the better because the story was actually true. He took us on a survey of the Universe. Small to enormous, past to future, Sagan walked us in the footsteps of man’s discovery of the world around him. He also didn’t shy away from the topic of the earth’s limited resources and the impact of the way in which we were extracting, utilizing and disposing of them. That series and James Burke’s Connections set the standard in my mind for how science and history could be presented to a wide audience.

So, when I heard that Dr. Tyson (the man who drove the getaway car) would be hosting Cosmos: The Next Generation, I was excited. I’d seen bits of his “Great Courses” class The Inexplicable Universe: Unsolved Mysteries and thought it was interesting. The production quality was a bit wonky, but I attributed that to it being a class.

I watched Cosmos: A SpaceTime Odyssey. Twice. The first time broadcast and the second via iTunes. The science was great. The images from space were stunning. The message of planetary stewardship carried an even greater urgency. And yet, I found myself not really being all that moved. Not like the original. And that bothered me.

It bothered me because I couldn’t quantify what it was that I didn’t like. Finally I realized that it was the ‘reenactments’ that were bothering me. The original series went to great lengths to stage the reenactments. The new series used stylized animation. For me the result was that these abstracted the events being depicted. It came across as though you were being told a story instead of being a witness to the event. The net effect of which is that your experience is more akin to sitting in a movie theater watching a cartoon about Robin Hood vs. standing feet from charging horses in a jousting match put on by the Society for Creative Anachronism. One could absolutely argue that neither one is real. But I would then ask, which one has a greater impact? If I set up a lab experiment with lenses and prisms, I know it has more reality than images in a book or animation on a tablet.

There’s talk of a second season (without Dr. Tyson). If it does come to pass, I hope they will consider using people instead of paint. In a world where we’ve replaced doing science with watching it, every little bit helps.


I hope that everyone takes the time to watch both the old and new Cosmos. Getting teens to watch it would be good too.

It’s been five months since Apple’s Worldwide Developer Conference ended. This year the WWDC iOS application had the session videos available sooner than ever. I’d been watching them during my commute to and from Portland’s downtown. Well, this past week, I finished watching the last one. And with 107 videos, it’s Apple’s version of Netflix putting up a few years of Doctor Who episodes for people to gorge on.

Truth be told, I would have finished a month ago. So, what was the hold-up? iOS 8 was released. In doing so, it revealed that some change in the OS caused the WWDC app to crash when you tried to view a video. Very sad. The really unfortunate part is that I was only four videos away from having watched them all.

Well, I’m happy to report that the WWDC app is now working fine and I’d encourage anyone who’s doing OSX or iOS development to take advantage of them. There is a tremendous amount of information on the latest tools, technologies and techniques. Swift and Metal feature prominently.

Now, I’m not saying that these are all Jobsian quality presentations. As much as I appreciate that software development is a global endeavor, whoever decided that someone with a pronounced French accent should be littering his presentation with the word banana should be forced to watch that session from beginning to end. Subtitles would also have helped some sessions.

All-in-all these presentations are well crafted and delivered. Apple also continues its tradition of providing substantive sample code. Nothing is worse than code snippets that don’t convey the true flavor of using an API as you would in practice. Happily gone are the days when the AOCE (Apple Open Collaboration Environment) documentation was 1200 pages of paper with no index. Not the best way to pick up a new technology.

So if you need a break from watching the original run of The Tomorrow People, the WWDC sessions are there for you.

In the mid-90s I was introduced to the phrase “Compile it. Link it. Ship it. Debug it.”

Yeah, it’s sadly funny, but what’s that got to do with QA not being a speed bump?

Once we developed software, then we had waterfall, then agile, then test-driven development. Next? Who knows. Here’s a clue. We’ll still be developing software.

In a world of would-be Sherlock Holmeses, we are in desperate need of more John Watsons. And like the classic pair, the end product is much better when they are working in a tightly coupled fashion. And by that I mean both geographically and temporally. Let’s look at some common models of developer/tester interaction that are designed to fail.

Remote Test

There are some who believe that the phone/IM suffices as a mode of communication when developing complex software. Unfortunately, the scale of misunderstandings is proportional to the distance between the developer and the tester. “Frog protection” anyone? Nothing brings a developer back to reality from Nerdvana faster than someone standing in front of them waiting for them to explain exactly what it is that the software is supposed to be doing. Out of sight, out of mind.

Software First, Test Plan Later

This is especially interesting when QA has the responsibility of saying that the story associated with the feature is working. Let’s make it even more fun by saying that we can count story points when the developer says they’re done, but we won’t actually get QA sign-off until the next sprint. Technical debt-ville.

Unit Tests Mean We Can QA Later

Everyone who believes that they can proofread their own documents, stop reading now. The hallmark of a great tester is that the first thing that comes to mind when they look at your software is the last thing on yours: how can I break this? Developers struggle to get the “happy” path working. Where do most problems come up? The “sad” path. If people do get to write unit tests, and the code is structured to allow unit tests, it’s more likely than not that only the happy path will be tested. After all, it’s QA’s job to test what doesn’t work, right?
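A small, invented illustration of that bias (the function and tests below are hypothetical, not from any real project): a suite that stops at the happy path tells you very little about how the code fails.

import XCTest

// Hypothetical function under test: parse a person's age from text.
func parseAge(_ text: String) -> Int? {
  guard let value = Int(text), (0...130).contains(value) else { return nil }
  return value
}

class AgeParsingTests: XCTestCase {
  // The happy path: the test most developers write, and too often the only one.
  func testValidAge() {
    XCTAssertEqual(parseAge("42"), 42)
  }

  // The sad paths: the inputs a tester reaches for first.
  func testRejectsBadInput() {
    XCTAssertNil(parseAge("forty-two"))
    XCTAssertNil(parseAge(""))
    XCTAssertNil(parseAge("-3"))
    XCTAssertNil(parseAge("1000"))
  }
}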

Testers as Second Class Citizens

Holmes may be brilliant, but Watson is the proxy for the rest of humanity. You know, the ones who don’t keep heads in their refrigerator. The ones who we expect to, you know, pay money for the software. Not everyone reads Shakespeare in the original Klingon. Testers keep the “pizza under the door” crowd honest. Am I being a bit harsh on developers? Perhaps, just a wee bit.

Beta Test as Test

Throwing it over the wall taken to the extreme. Well, not quite the Google “it’s not a product, it’s a beta” extreme, but GMail does pretty much set the goal post for that one. Guess what: when a potential customer gets hold of a beta, ninety percent of them will treat it as though it’s a complete product. Do you want your bank using your “beta as test” software? Would you use a beta compiler for your grandmother’s pacemaker? How about a drone? The close cousin to this is the weenie move of calling your software 0.9 for a decade and thinking that somehow insulates you from your commitment issues. Tossing your software out to the world without having the courage to “own it” is just lame. Doing so because you can’t be bothered to do the work is unprofessional. “Lost your company’s data? You can’t complain. It’s beta software, after all.”

It’s Time to Start Treating Testers Like the Partners We All Desperately Need

I’ve had the privilege to work with some very gifted individuals who time after time brought code, that I was sure was solid, to its metaphorical knees. They helped me to explain the complex in human terms. Treat them well and they will improve both the end product and the process. Remember, QA is not a speed bump, a nuisance to be endured. It is the whetstone that keeps the knife keen.

 

Update 2015-12-21: Apple released Swift source code. I’ve updated my sample again to reflect what I learned.

Update 2015-09-27: It’s been a year and much has happened with Swift. Please see my latest post on Swift command line input for current code.


Recently I’ve been watching Stanford intro CS classes. I like to see how they present the fundamental concepts and techniques of programming. This got me thinking about those missing bits of Swift that would allow me to actually write a command line-based application. [see my previous post Swift: Second Things First] Having these bits would allow me to teach Swift as a first language without having to teach the abstractions and interfaces required to properly develop for a graphical interface. I’m not much into attempting to teach in a way that breaks the “go from strength to strength” methodology. If you’re going to teach me to sing, it’s a whole lot easier if I don’t have to learn how to spin plates at the same time.

So, I spent some time and created a simple set of routines that, when added to a Swift command line application, allow you to get and put strings, integers and floats. Not exactly rocket science, which raises the question, “Why didn’t Apple do it?” Well, since I’m not Apple, I have no idea.

Here, without further comment, is the content of the file I wrote. That it is not the best Swift code, I have no doubt. If you can make it better, cool. And Apple, if you read this, please make something sensible of it.

Update: I’ve manually wrapped a few lines as WordPress is clipping.

//
//  swift_intput_routines.swift
//  swift input test
//
//  Created by Charles Wilson on 9/27/14.
//  Copyright (c) 2014 Charles Wilson.
// Permission is granted to use and modify so long as attribution is made.
//

import Foundation

func putString (_ outputString : NSString = "")
{
  if outputString.length >= 1
  {
    NSFileHandle.fileHandleWithStandardOutput().writeData(
               outputString.dataUsingEncoding(NSUTF8StringEncoding)!)
  }
}

func getString (_ prompt : NSString = "") -> NSString
{
  if prompt.length >= 1
  {
    putString(prompt)
  }

  var inputString : NSString = ""
  let data        : NSData?  = NSFileHandle.fileHandleWithStandardInput().availableData

  if ( data != nil && data!.length > 0 )
  {
    inputString = NSString(data: data!, encoding: NSUTF8StringEncoding)!
    inputString = inputString.substringToIndex(inputString.length - 1)
  }

  return inputString
}

func getInteger (_ prompt : NSString = "") -> Int
{
  if prompt.length >= 1
  {
    putString(prompt)
  }

  var inputValue : Int = 0
  let inputString = getString()

  inputValue = inputString.integerValue

  return inputValue
}

func getFloat (_ prompt : NSString = "") -> Float
{
  if prompt.length >= 1
  {
    putString(prompt)
  }

  var inputValue : Float = 0.0
  let inputString = getString()

  inputValue = inputString.floatValue

  return inputValue
}

And here’s a little test program that uses it.


//
//  main.swift
//  swift input test
//
//  Created by Charles Wilson on 9/27/14.
//  Copyright (c) 2014 Charles Wilson. All rights reserved.
//

import Foundation

var name = getString("What is your name? ")

if name.length == 0
{
  name = "George"

  putString("That's not much of a name. I'll call you '\(name)'\n")
}
else
{
  putString("Your name is '\(name)'\n")
}

let age = getInteger("How old are you \(name)? ")

putString("You are \(age) years old\n")

let number = getFloat("Enter a number with a decimal point in it: ")

putString("\(number) is a nice number")

putString("\n\n")
putString("bye\n")

You’re probably wondering why I don’t use print(). Well, print() doesn’t flush stdout’s buffer. And, I really like to enter data on the same line as the prompt. And for those of you who say that you can use print() in Xcode’s output window, I’ll remind you that a simulator isn’t the target device.
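For what it’s worth, on later toolchains a same-line prompt can be approximated with the standard library plus an explicit flush. A minimal sketch, assuming a Swift with readLine() available:

import Foundation

// Print the prompt without a newline, flush so it appears immediately,
// then block for a line of input (readLine strips the trailing newline).
func prompt(_ message: String) -> String {
  print(message, terminator: "")
  fflush(stdout)
  return readLine() ?? ""
}

let name = prompt("What is your name? ")
print("Hello, \(name)")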

“But, wait. You didn’t comment the code.” No, I didn’t. By the time a student has enough understanding of Cocoa to compose the I/O routines provided above, the comments would be unnecessary.

So, there you have it. The typical second program that you’d ask a student to write.

I’ve been looking at Swift for about a month now. My first thought when I see a new language is,

How would I teach this language to someone new to programming?

After spending countless hours dealing with the little things in a language that contribute to the lack of patience developers have with non-developers, I still hold out hope that we will have learned from our past and can create a language which will enable those of us who toil in darkness to get out a bit more.

Back in the day, when a new language emerged from the primordial soup, it was accompanied by a language description. This document ensures that, should the creator(s) of the language be run down by a rampaging gaggle of salvage-yard geese mysteriously loosed by the supporters of the favorite language du jour, the language will still be available to the six people already using it. [Just like Apple’s linker that was written in Oberon. But that’s a story for another day.] Lest you are under the mistaken impression that this document is the be-all and end-all of the language, I would refer the gentle reader to the first Ada spec, which indicated that a unary minus could be present in the middle of a digit string.

Sometime later, a language manual would appear. If you are very lucky, this will be written by a teacher (Pascal User Manual and Report). Alternately, it may be written by a practitioner known for their ability to create concise code like awk (The C Programming Language). If you’re really lucky, the author may be both a teacher and a practitioner whose book printings required the deforestation of small Pacific islands (The C++ Programming Language). Regardless of the provenance, only those who live on the bleeding edge or college (more recently high school) students embrace these tomes of wonderment.

Assuming that the language becomes popular enough to catch the attention of people other than the full-stack crowd, a book may appear whose clarity will ensure that it is longer lived than the inevitable dummies book. This rare collection includes A FORTRAN Coloring Book, Basic BASIC, and Programming in C.

So far, Apple has released three documents on Swift. The first was the language reference. The second details Swift-ObjectiveC interoperability. The latest is the Swift standard library reference.

This year’s WWDC included seven Swift-specific sessions and eight others that referred to it. This level of coverage is quite impressive, but then again, they’ve been working on the language for about four years.

Enough background already, how would I teach Swift as a first programming language?

Unfortunately, right now, today, I can’t. You can tinker with Swift in playgrounds. You can integrate Swift and ObjectiveC. You can create swift-based iOS or OSX applications. What you can’t do is write a CLI program that is pure Swift.

Look at any programming language instructional methodology. What’s the second thing they teach? The second? Yes, the second. The first, since K&R, is hello, world.

The second thing that you have people do is prompt for their name and say hello back to them. Output is important, but without input, programs are pretty boring. I’m not ignoring the vast and glorious mound of ObjectiveC, and by extension C and C++, code that’s accessible to Swift, but that’s not the same as being able to create the same things in Swift.

Generally, I find Swift a compelling language, but today it’s not a first language. I’m hoping that Apple will correct this deficiency in the not too distant future.


So, that was the post. It’s now a week later. Why is it still sitting unpublished? Well, I just wasn’t happy with my conclusion. Having had a bit of a mull, I’ve not changed my mind but I believe that I need to revisit my basic assumption as to what constitutes the baseline for teaching a first computer language.

The idea is that to teach a person how to program, you should have as little magic at play as possible. What is magic? Elaborate command invocations, for one. Just being able to use the word invocation should be enough of a clue. Requiring the construction of things that have nothing to do with the actual language is another. This is probably the aspect that I have the most difficulty with.

“I’ll teach you how to program, but first you’ll need to lash together the user interface.” That would be all well and fine except that print is provided. Why don’t I need to provide a mechanism for stuff to go out if I need to provide one for stuff that comes in?

So, where do we start? The advantage of the pre-GUI age was that there was one true interface to the computer. The way we thought about our programs was dictated by the programming language we used. For a long time after the GUI was introduced we tried to treat our interfaces as extensions of our programs rather than partner environments. Even after we decided that there was sufficient power to run multiple applications at once, we were still mucking about with low-memory globals.

Trying to make the UI an independent entity took the idea of an abstraction penalty to new heights. The things that worked didn’t scale. And, in general, the things that scaled didn’t work. We won’t even talk about speed. Or fragile base classes … I’ll leave GUI evolution posts for another time.

Suffice it to say that the bones of many developers were used to pave the smooth road on which today’s applications travel to get from creator to customer. Somewhere in the process, we went from being a bunch of villages connected by trails to a planet full of complexity and wonder.

So now, we think about desktop, embedded systems, mobile devices, web, distributed systems, databases and games in radically different ways. These ‘once computers’ are now ‘delivery platforms.’ In order to create a product that aims to make use of (or be available on) multiple of these, it is necessary to perform the equivalent of running a restaurant where the staff are all expert in what they do, but each speaking their own language. To complicate matters, sometimes they want to use the same tools in the kitchen (usually the knives) and the customers tend to fight over getting the ‘best’ table.

If I teach someone C, C++, Java, lisp, PHP, python, or [insert language here], I don’t have to teach them the UI language of the system at the same time. With Swift I do. Is this going to complicate things? Probably. Will it take longer? If I want to be sure that they realize that this UI metaphor isn’t ‘the one true’ metaphor, absolutely.

I believe that Swift has a lot of potential. I would hate to see it restricted to being used only in the context of ‘Developing for Apple devices with Swift.’

Over the years, I’ve gotten used to the reality that the vast majority of people who work in the technology field only do it for the money. The cab drivers who have told me they want to “get into” computers because “it’s easy money” don’t faze me. Similarly, the sea of “recruiters” who contact me spouting techno-babble get a pass for their cluelessness. As the embodiment of evil would say, they “are mercifully devoid of the ravages of intelligence.”

Every now and then though, an email comes to my inbox that raises the question, “How does this person not get fired?”

Let’s look at this tremendous work of ignorance and hypocrisy. We’ll say that it came from M at Foo (a major technology corporation). I’ve colorized the text in blue. All other styles applied to the text are original.

Let’s begin.

Hi Charles,

I recently found your profile in our database, and your background is impressive. The [Foo] Media Division will be flying several candidates in for interviews at our Seattle headquarters in April and considering you. The roles we are filling will all be located in Seattle and a full relocation package and immigration support would be provided if you are selected.

Someone did a database keyword query and it included my name. Spiffy. If only there were a single thought in the first sentence instead of two. The grammar goes downhill from there. The deluge of prepositional phrases in sentence two points to a completely disorganized mind. Once again we see multiple thoughts presented. This time, however, the author neglected the second verb. One would assume that it is “are.” Forgiving this error as a typo, one is left with the distinct impression that (1) people are ignorant as to the location of Foo and (2) they will be available to go to Seattle on short notice. Although it is nice to know that the positions will be in Seattle, the fact that immigration support “would be provided” indicates that my resume has, in fact, not been read. Additionally, following the trend established in previous sentences, multiple thoughts are present. Finally, why should I care that I would be relocated when I don’t know what the position is yet?

We are looking bring on board Senior (7+ yrs. industry experience) Software Developers with experience designing and architecting highly scalable and robust code in Java, C++ or C#.  Strong OOD skills and CS fundamentals are required. Working with big data or machine learning can be a major plus.  In addition we have roles for Principal Engineers, Software Development Managers, Software Developers in Test and Technical Program Managers. If you fall into one of these categories we offer a different interview process independent of this event and eager to support you in learning more about these roles.

It appears that the fact that the position is senior merits both bolding and underlining, lest I miss it. It also seems that what is meant by senior is up for debate. I ask you, gentle reader, why would you abbreviate years by dropping two letters only to add a period? Here we see a neglected preposition (of). I will refer back to my unread resume as the reason for my assertion that this sentence is unnecessary. Let us press on.

These ever-so-senior software developers (bold, underline) must have experience designing and architecting. I am reminded of the George Carlin sketch about the kit and caboodle. Redundant, anyone? Moving on, let’s consider “highly scalable and robust code”. I have yet to see code which is highly-scalable (note the proper use of hyphenation) that did not also need to be robust. This is my opinion, but I would imagine that people would generally agree that non-robust code tends not to be very scalable. As to my languages of record, I will again refer to my seldom-read resume.

Obviously, the next sentence is of critical import as it is bolded and underlined in its entirety. Now, if anyone out there knows a developer who can architect a highly-scalable system and yet is lacking computer science fundamentals and strong object-oriented design skills, please introduce me.

Slogging along we have an obvious statement regarding a working understanding of the two biggest buzzwords in the heap today. That these can be a plus makes for a fairly nebulous statement. Is experience in these disparate areas important? Will it be part of the job?

Now we wander off into the weeds by telling me that they’re also looking to fill other positions. So, if they’re completely off the mark, not to worry?

If interested in exploring Development opportunities with us, the first step will be to complete our coding challenge ideally within the next 3 to 5 days.  If you need more time, please let me know. After the hiring manager reviews your ‘successful’ code, we’ll contact you to confirm your onsite interview where you will meet key stakeholders from the [Foo] Media team.

Back in multiple-thought land, let’s begin by ignoring the subject of the sentence. And now that you’ve bothered to read this far, here’s the catch. You have 3 to 5 days to complete a coding challenge. The plot thickens. But it’s not really 3 to 5 days. You can ask for special dispensation. It is nice to know that my code will be successful and that I will be contacted to confirm my onsite interview. But wait, we have another thought here. At the onsite interview, I’ll meet key stakeholders. For the less techno-babble encumbered, those would be the marketing and project managers.

Please click here [link removed, sorry] for the coding challenge and include your full name and email address in the tool. The application works best in Firefox or IE. There is no time limit, but if you do take breaks it counts against your completion time. Please expect the challenge to take between 10 – 90+ minutes.  The KEY is to write your absolute BEST code.  Additionally, be aware that should you be selected for interviews, you will also be asked to produce code on the white board.

Here’s a puzzling set of instructions. If they have read my resume and managed to send me an email, why is it that they need me to create an account in “the tool.” “The tool?” Seriously? I don’t recall moving to The Village.

Not so fast, now it’s “the application” and it works best in Firefox and Internet Explorer. Best? How about telling me the required browser version to keep from getting halfway into this “challenge” and having “the tool” spew like a unicorn doing the technicolor yawn.

And in a fit of verbal vomit worthy of a Willy Wonka legal contract, we are told that (1) there is no time limit, (2) the amount of time you take matters, (3) the estimated time to complete is somewhere between 10 minutes and God knows how long, and (4) [this is the big one] we are expected to write “your absolute BEST code.” And as an afterthought let’s tack on a comment about being able to produce code on a white board “should you be selected for interviews.”

Let’s think about this. Okay, you really didn’t need to, but it’s a nice way to slow down the pacing of the post.

In case you hadn’t figured it out, the fourth in this set of nonsensical requirements is what inspired my title. It comes from a scene in “Men in Black.”

James Edwards: Maybe you already answered this, but, why exactly are we here?

Zed: [noticing a recruit raising his hand] Son?

Second Lieutenant Jake Jenson: Second Lieutenant, Jake Jenson. West Point. Graduate with honors. We’re here because you are looking for the best of the best of the best, sir!

Zed: [throws Edwards a contemptible glance as Edwards laughs] What’s so funny, Edwards?

James Edwards: Boy, Captain America over here! “Best of the best of the best, sir!” “With honors.” Yeah, he’s just really excited and he has no clue why we’re here.

How do I create my best code? [aside from not intensifying absolutes] I think about the problem. Solving a problem in 10 minutes or less implies to me that the person (1) has solved the same problem so many times that they have reached the level of unconscious competence with regard to it, (2) did the first thing that came to mind, or (3) guessed. You know the best way to not create highly-scalable systems? By not thinking much about the problem.

Lastly, please send your updated resume directly to me: [M]@[foo].

Should I do this before I embark on the “challenge” or after? Who else would I send my updated resume to? And why bother restating your email address (incompletely) when I could simply reply to this email?

NOTE- If you are currently interviewing with another [Foo] group, we ask that you finish that process. In the event you are in college (at any level) or graduated within the last six months, we invite you to directly apply to positions via this link: www.[foo].com/college.

“Note” should be followed by a colon. And what happened with the whole lastly thing? Here we have an indication that Foo’s recruiting system can’t track who’s talking to you. So much for robust. We again see that no resumes have been read here. More than that, why would this even enter into the equation of an email to someone who is expected to have 7+ years of industry experience?

Thank you for your time and look forward to receiving your code challenge response.

There can’t possibly be more, you say. Not so, dear reader. The great two-for-one sentence wrangler strikes again.

Warm regards,

[M]

At least the closing was without incident.

For a company that claims to be seeking the very best, they have a funny way of showing it. If you would like to offend the highly-educated and technically experienced developers you seek to hire, send them emails that simultaneously say that they (1) aren’t worthy of a proofread email and (2) aren’t deserving of a phone screen with a person.

After I’d read this email several times, I looked M up on LinkedIn. Their profile is private. That was a first for me with regard to an internal recruiter.

Well done Foo. Well done.

I’ll be the first to admit that I obsess over security. My internship in college dealt with Unix security. I’ve created encrypted protocols for wireless data communication. And for my master’s thesis, I created a highly virus-resistant computer architecture (AHVRC – aka Aardvark). I wrote it in 1993. I put it up on the web in 1999.

So, what to my wondering eye did appear a few days ago? None other than the latest installment of Apple’s “iOS Security” document.

Personally, I like reading Apple documentation. But then again, I read owner’s manuals. Anyway …

So, I find myself reading iOS Security and keep thinking, “that’s what I would have done.” Wait, that’s what I did do.

I was casting about for a thesis topic and my department chair noted that no one was doing anything in secure architectures. So I spent a chunk of time thinking and put a little 124-page missive together. Now, gentle reader, having taken it upon yourself to read a few pages in, you begin thinking, “this can’t be serious, it’s got animals instead of sub-systems.” True, true. The master’s level is supposed to have a certain level of awe and wonder associated with it. Boring. Here’s a little secret. In a traditional master’s program, you devote the equivalent of three courses to the research and writing of a document (thesis). The point of the thesis and its defense is to demonstrate mastery of the discipline. The defense is done publicly. Anyone may attend. You must advertise it to the student body. Some number of professors, typically in your discipline and of your choosing, make up the group who decide if you and your work are up to snuff. Questions may be asked in any area of your studies, but primarily the discussions will revolve around your thesis. Hence its being called a defense. Once the professors have had at you, the gallery gets their shots.

You already knew that didn’t you. Well, that’s not the secret.

The secret is that the defense is conducted within the context of the thesis. They attack, but you get to build the world. Think of it as a duel. You get to choose the weapons.

Nothing warms the cockles of my heart more than to see the distinguished faculty discussing a highly technical matter in the context of dolphins, gophers and kinkajous.

I even applied for a patent (with Rose-Hulman generously funding the filing). Had I had more patience and a more informed examiner at the USPTO, I would probably have a patent for the work.

I’m not sure if the developers at Apple ever read my thesis or referenced my patent filing. I do find the similarities in the two architectures interesting.

I hope everyone who reads this posting takes the opportunity to read both documents. Apple’s because they present the state-of-the-art in application security model implementation. Mine, because I think I’m pretty well pleased with myself about it.