
Archive for May, 2016

One of the things I’ve always liked about Apple is the way they revisit and revise their tools. Although there is comfort in logging on to any *nix system and knowing that ed or vi or gcc will be there, I believe that the lack of investment in the standard tools does more harm than the familiarity does good. It’s the technical equivalent of keeping a rotary gang switch to change channels on your flat-screen TV because that’s what your grandparents were used to and they still need to use the TV. You know what? They managed to adapt to a wireless remote and 500 channels.

Swift is probably the most visible of today’s Apple tool changes. As we fast approach the release of Swift 3.0, it’s a good time to check in and see what the future of software development for Apple products is going to look like. This is one of those growth-and-pain stories.

From the language standpoint, Swift 3.0 brings a greater level of consistency in behavior and style. New developers will have fewer exceptions to keep track of than in other languages. For those following the latest Stanford iOS class on iTunes U (CS193P) and thinking that when they’re done they’ll have learned Swift: well, sorry, but in a few months you’ll need to update all your code.
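
To give a flavor of the churn involved, here are a few representative Swift 2.2 to 3.0 renames (these particular examples are my own illustration, not drawn from the class or the article):

import Foundation

// Swift 2.2 spellings, shown as comments for comparison:
//   let shout = "swift".uppercaseString
//   var nums = [3, 1, 2]; nums.sortInPlace()
//   let now = NSDate()

// The same operations spelled the Swift 3.0 way:
let shout = "swift".uppercased()
var nums = [3, 1, 2]
nums.sort()
let now = Date()
print(shout, nums, now)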

One of the biggest changes in Swift 3.0 is the way that the Apple OS-specific libraries are handled. In the same way that C++ and Python aren’t bound to a particular operating system, neither is Swift. You don’t need to be on MacOS or iOS to use Swift. The problem is that most applications that interact with people need to be bound to some kind of operating environment. On Windows that’s typically done by using .Net. With Java, you’ve got a laundry list of environments (EE, SE, ME). For Swift, you’re binding to Apple’s plenitude of support libraries. All of which have, for the past decade or so, been based on Objective-C. The great thing about new languages is that they incorporate lessons learned. The devil of them is that vast body of existing code they need to interface to.

In the case of MacOS and iOS, the libraries make a lot of sense if you’re writing code in 2004. We’ve learned a lot about languages and frameworks and operating systems since then. But, as with all things, there are only so many hours in the day. Inevitably, you build a core around the language and, if you’re nice, create bridges (shims) to the external bits. If you’re Apple, you’re usually particularly nice and create toll-free bridges (ones built for you). If you’re Microsoft, you have the joy of multiple layers of conversions to look forward to. Eventually, as the language evolves, you get to the point where the rules you gave people for how the bridging worked have to be brought into line with the language itself. That’s where we are today with Swift 3.0.

So, by now, you’re saying to yourself that this must be really bad to merit such a long preamble. Sort of. I came across a nifty article on what’s new in Swift 3.0 and decided to download the May 9th Swift trunk [see update below] and see for myself. There’s also a video from the same author, who does a good job of covering the changes. Remember my earlier reference to the Stanford class? I’ve been following it myself, so I thought, “let’s see how what’s being taught tracks.” For my first attempt I chose a small app that uses gestures to tinker with a face drawn to the screen. Now, as those of you familiar with iOS will be aware, there are two ways of binding the UI to code: you can either write it in code or use Interface Builder. The thing about using IB is that it handles a bunch of tedious details for you. It also hides stuff and is the source of weird errors if you screw things up.

In implementing this app, I of course used IB. And you know what? I crashed and burned. After installing the new tool chain and restarting Xcode, my modest app showed a variety of errors stemming from the changes in Swift 3.0, all of which were easily corrected thanks to the very clear error messages and suggested remediation provided. The app launched and drew the baseline image, but any gesture failed miserably. Maybe it was the project. I created a new project with nothing but a raw view, added a tap recognizer in IB, and wired it up to the view with a print diagnostic. It happily built, but also crashed on tap.

2016-05-26 15:16:31.150 f2[21677:736529] -[f2.ViewController tap:]:
  unrecognized selector sent to instance 0x7ffe59d31eb0
2016-05-26 15:16:31.156 f2[21677:736529] *** Terminating app due to
  uncaught exception 'NSInvalidArgumentException', reason:
  '-[f2.ViewController tap:]: unrecognized selector sent to instance
  0x7ffe59d31eb0'

Creating the action in code worked. To be fair, this tool chain is coming from the Swift site and not Apple. I then checked to see what happened if I tossed in a vanilla button. Same problem. That wouldn’t do. The answer turned out to be held in the AppDelegate. As it happens, the new tool chain generated a warning for each of the standard boilerplate functions.

/Users/charleswilson/Documents/projects/2016_CS193P/FaceIt/FaceIt/
  AppDelegate.swift:17:10: Instance method
  'application(application:didFinishLaunchingWithOptions:)' nearly matches
  optional requirement 'application(_:didFinishLaunchingWithOptions:)' of
  protocol 'UIApplicationDelegate'

The provided code reads:

func application(_ application: UIApplication,
  didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?)
  -> Bool

It seems that the functions need additional decoration. The warning notes that the function nearly matches. Allowing Fix-It to do its thing yielded:

@objc(application:didFinishLaunchingWithOptions:)
  func application(_ application: UIApplication,
  didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?)
  -> Bool

Apparently, additional decoration is required to bridge properly. So I went back to the recognizer added through IB in the standard way. Here’s the action.

@IBAction private func tap(sender: UITapGestureRecognizer)

Once the @objc decoration is added, all is well.

@IBAction @objc(tap:) private func tap(sender: UITapGestureRecognizer)
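
For comparison, here is roughly what the all-in-code wiring that worked from the start looks like (a minimal sketch with names of my own choosing, not code from the project). Because #selector is resolved by the compiler, there’s no string-based selector for the bridge to get wrong.

import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // The recognizer and its action are bound entirely in code; no IB connection involved.
        let recognizer = UITapGestureRecognizer(target: self,
                                                action: #selector(tap(sender:)))
        view.addGestureRecognizer(recognizer)
    }

    @objc func tap(sender: UITapGestureRecognizer) {
        print("tapped")
    }
}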

Not a big deal to make the correction once you realize what’s required. Hopefully, others will take this as a heads-up and not get bogged down as they update their code.

Overall, I have a high degree of confidence in Swift 3.0; the changes make the language better. So, go get it yourself and see the future of Apple software development.

2016-06-05 Update

I pulled the May 31 Swift trunk with the same results. I’d imagine that Apple will clean this up really soon as WWDC is just around the corner.

2016-06-09 Update

I pulled the June 6 Swift trunk and got the same results. Cutting it close for WWDC.


An interesting thing about having spent over thirty years taking software from one platform to another is that, time and again, I’ve had my understanding of what constitutes correct code challenged. That’s a good thing.

Sadly, many people who ply the trade of software development mistakenly believe that a compiler has the ability to warn you when your code is going to behave in ill-advised ways. Worse yet, they fall into the trap of believing either that their code is correct if it compiles without warnings, or that if one compiler accepts their code then any compiler will. Unfortunately, these beliefs are the developer equivalent of a two-year-old’s lack of object permanence. These two tragedies aside, the vast majority of developers are clueless as to how static analysis can and should be used to ensure code quality.

Let’s rewind a bit and work through these.

In the beginning was the language specification. It was a bright, shiny idea given form. Lest you get the idea that these documents, venerated by compiler authors and language wonks alike, are intrinsically sane, please recall that the original Ada spec allowed minus signs in the middle of numeric literals and that 8 and 9 were perfectly acceptable octal digits in C. Now, a computer language without a compiler is fairly dull. Enter the compiler authors. These individuals, who number about 1000 in the world and of whom I’ve personally known about a dozen, are highly proficient at taking the language specification and giving it life. The way they do this is far more Pollock than Vermeer. Why? Well, a language is the embodiment of a worldview. Unlike source code control systems, which only get created when someone is so fed up with one particular behavior of the current one that they can’t bear living under its yoke any longer, languages come from the world of paradigms. Why does that matter? If your pet peeve is small, you’ll probably be able to either work around it or get it added to the language (typically contingent upon who you know). If your peeve isn’t small, any attempt to modify the language you’re using will have the same result as updating the value of Planck’s constant or dropping a storm trooper platoon into the middle of a city on Vulcan.

It can’t possibly be that bad, you say. Actually, it can. And, in fact, it is. A long time ago, Apple was transitioning from its Pascal-based OS to a C-based one. The transition was fraught with byte-prefixed, null-terminated strings, and the resultant code was pretty horrific. Just recently, I was working on a feature that required me to move between BSTR, wstring, COM and char strings. This is because the language wasn’t designed with the notion that strings could be more than just US English. Compare this to Swift, which is a pure Unicode language right down to the variable names.
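
A trivial illustration of my own (not from the work described above): identifiers, string literals and interpolation are all Unicode-clean in Swift.

// No BSTR/wstring/char juggling required; the source itself is Unicode.
let π = 3.14159
let grüße = "Grüße, 世界 👋"
print("\(grüße), π ≈ \(π)")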

Every compiler writer brings their own unique experiences and skill set to realizing the worldview embodied in the language specification. No two compilers will realize it the same way. Oh, they’ll be close and probably agree 90% of the time. The biggest area of difference will be in what each compiler considers important enough to warn the developer about. On one end of the scale, you could argue that if the language specification allows something, the developer is free to write the code accordingly. At the other, the compiler would report every questionable construct and usage. All compilers that I’m aware of fall somewhere in the middle.

Unfortunately, rather than biting the bullet and forcing developers to recognize their questionable and problematic coding choices, compilers have traditionally given code a pass by not enforcing warnings as errors and by setting the default warning level to something uselessly low. To make matters worse, if you do choose to crank the warning level up to its highest severity, the operating system’s headers will often fail to compile. I’ve discovered several missing macro symbols that the compiler just defaulted to 0. Not the expected behavior. Microsoft’s compiler has the sense to aggregate warnings into levels of increasing severity; gcc, however, does not. The omnibus -Wall turns out to not actually include every warning. Even -Wextra leaves stuff out. The real fun begins when the code is expected to be compiled on different operating systems. This can be a problem for developers who have only ever worked with one tool chain.

So, let’s say that you realize that compilers on different platforms will focus on different issues. You’ll probably start considering trying other compilers on the same platforms. Once your code passes all those different compilers at their most severe settings, it’s good, right? Not so fast. Remember, a compiler’s job is to translate the code, not analyze it. But, you do peer reviews. Tell me, how much of the code do you look at? How long do you look at it? Do you track all the variable lifetimes? Locks and unlocks? In all the code paths? Across compilation units?

Of course you don’t. That would be impossible. Impossible for a person. That’s why there are static analysis tools. My current favorite is Coverity. And yes, it costs money. If you can sell your software, you can pay for your tools.
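
To make that concrete, here’s a toy Swift example of my own construction showing the kind of path-sensitive defect that slips past a reviewer skimming the happy path, but that static analysis tools are built to report:

import Foundation

let accountLock = NSLock()
var balance = 100

func withdraw(_ amount: Int) -> Bool {
    accountLock.lock()
    guard amount <= balance else {
        return false            // bug: this path returns with the lock still held
    }
    balance -= amount
    accountLock.unlock()
    return true
}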

But the fun doesn’t end there. Modern compilers can emit additional code to allow for profile-guided optimization (PGO). Simply put: instrument the code, run the code, and feed the results back into the compiler. Why? Because hand-tweaking branch prediction is an exercise in futility. Additionally, you’ll learn where the code is spending its time. And this matters because? It matters because you can waste a lot of time guessing where you need to optimize.

Finally, there’s dynamic analysis. This is the realm of run-time leak detection. Wading through crash dumps and log files is a terrible way to spend your time.
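
As a toy example (again my own), here is the classic Swift reference cycle: nothing about it looks wrong in review, it compiles cleanly, and it only shows up when a run-time tool such as the Leaks instrument watches the program actually run.

class Parent { var child: Child? }
class Child  { var parent: Parent? }     // fix: declare this as `weak var parent: Parent?`

func makeCycle() {
    let parent = Parent()
    let child = Child()
    parent.child = child
    child.parent = parent
}                                        // both objects outlive this call; that's the leak
makeCycle()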

So, are you developing quality software or just hacking?


There’s an old computer joke. “There are 10 kinds of people. Those who count in binary and those who don’t.”

I think there are two kinds of software developers: those who constantly reexamine their skill set and those who like working on antique cars. Mind you, I have nothing against antique cars. A good chunk of my personal library is dedicated to computer history. I love reading about how we got to where we are, and I’ve been privileged to have been an observer to some of that history. That being said, ours is a discipline where answers are only as good as the half-life of their questions.

What’s the lifetime of a question, you ask? It depends. If the question is “What is the length of the diagonal of a right triangle given the lengths of the other two sides?”, pretty damn long. On the other hand, if it’s “What is the best way to teach people how to write a program?”, well, tick-tock.

So, what does this have to do with shelfware? As it happens, a fair number of the projects I’ve worked on have become shelfware. Sometimes it was tech that the world just wasn’t ready for. Other times, it was a case of the tank shell and the mosquito: way too big a hammer for the problem. (The bigger the hammer, the greater the resources needed to create and eventually wield it.) There are those times when you’ve created a solution without a problem, or when, by the time you’ve got your solution, the problem has either been solved another way, thank you very much, or simply no longer exists. (Remember how everyone got all wound up about Y2K?) Finally, some projects simply run out of time.

The interesting thing is that no one has as their goal the creation of shelfware. But, you know what? Shelfware happens. So, what is one to do? My favorite exercise in such cases is the post-mortem. As with all things, I ask my two questions: “What do I have?” and “What do I need?” What I have is easy to answer; it’s one of the above cases. What I need to determine is how it got there and what I can do to prevent the same sequence of events from occurring again.

Part of what comes from this analysis helps me improve my situational awareness. Contrary to popular belief, it is not enough to be good at what you do and expect that your insight, code or experience will suffice. Software is like an elaborate dining experience. If you can’t get the right dishes out to the right people in the right order at the right time, there will be hell to pay. By the same token, if your team isn’t able to fire on all cylinders, you’re bound to either lose the race or crash and burn. Eventually. Nothing is more pathetic than a software team whose sole purpose is to sustain a dead product. But I’ll leave zombieware for another day.

Situational awareness is all well and good, but it’s external. The more important thing for me is to see what I can do to up my personal game. This is not to say that I wait for a train wreck to do this, only that, unlike my usual mental fitness endeavors (consuming all the Apple WWDC and other conference sessions, Coursera classes, etc.) and scanning the horizon for interesting directions the industry is moving toward, I like to take stock of my core skill set.

It takes skill to drive a nail with one or two hammer strikes, but it’s far more efficient to use a pneumatic nail gun. More fun too. And you don’t need the same level of skill. The same is true of software development. In the early days of both C and C++, there were no libraries. Everything was handcrafted in every single program. The number of times any given problem was solved, to varying degrees of success and quality, is staggering. People created problem-specific solutions which were then expanded upon for other projects. As usually happens, these accreted unrelated and inseparable bits. By the time an opportunity to refactor the code base into some semblance of modernity comes along, it is usually deemed too great a risk. So, while you’re becoming the expert in dinosaur care and feeding, the next generation of developers has been tinkering away with the latest and greatest.

I know, I know, the fundamentals, you say. Okay. So, how about you show me your linear and logarithmic interpolation skills? No? You used Matlab, perhaps? Hmm. Well then, how about explaining why 8.3 file names are storage efficient? Surely balancing binary trees, then. But wait, don’t you get that from maps in C++ and just about every other modern language? The finer details of memory allocation and pointers? Have you ever talked to a Java or Python developer? Binary tree common ancestors? Understanding how to implement that must be important. Seriously? In thirty-two years of developing software full time, I have never needed to re-balance a binary tree or find a common ancestor, except for interview questions. Which probably makes me unqualified to create the next great search algorithm. And that’s okay. All these answers reached their half-life long ago. Well, maybe not Tango trees. But they’re still a solution looking for a practical problem.

Over time, all these problems have had their solutions codified into either the language or the language’s standard library. This is a good thing. Map/reduce was once new and shiny; now it’s built into Apple’s Swift language. I would much rather the people who live to create the best sorting algorithm do so. That way everyone benefits. I don’t want to have to care whether my data is partially sorted, whether I have fifty or fifty million items, or whether I have strings or structures. I want to sort my data in order to do something else with it, not because sorting it amuses me. By the same token, no one in their right mind would even consider creating a GUI from the ground up today. I’ve personally had to do this more than once. It’s a boatload of work and not portable. Why should data structures and their manipulation be any different? After all, we create abstractions so that we can handle problems of greater complexity.
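
To ground that in Swift 3 syntax (a throwaway example of my own), the standard library already hands you map, reduce and sort, so none of this needs to be hand-rolled:

let scores = [72, 95, 61, 88]
let curved = scores.map { $0 + 5 }       // map
let total  = curved.reduce(0, +)         // reduce
let ranked = curved.sorted(by: >)        // sort, highest first
print(curved, total, ranked)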

So, if I’m not looking at these, then what?

Currently, three things have my attention. The first is Apple’s Swift. I’ve been watching this language develop over the past two years into a nearly complete solution for application creation. The second is C++11/14/17. I started working with C++ in the mid ’90s, and it has managed to keep pace with developments in the processor and compiler spaces. Anyone who is still writing code as if nothing has changed since C++03 really needs to look at C++14. Between auto, atomics and the memory handling changes, C++ continues to bring its A game. Finally, and most importantly, I’ve been looking at secure coding. Having recently taken the US CERT secure coding course, I was surprised at the level of code exploitation possible because of coding patterns that go back decades. If OpenSSL has issues being exposed on a near-weekly basis, should we expect that other major pieces of our infrastructure are not equally at risk?

To remain relevant in the world of software development, one must not only be competent in the current use of technology, but also be actively learning and incorporating advances in the materials (languages and frameworks), methodologies (paradigms) and tools (editors, compilers, IDEs, etc.). Not to mention other people’s code.

Everything can and should be learned from,  even shelfware. It’s merely experience without exposure.
