I’m big on education, think Swift is a great language, and believe games can be a practical way to motivate learning. So, how did I put this into practice?

What’s My Motivation?

During my career, I’ve had the opportunity to teach programming and software development (two distinctly different things) to both teens and adults. One thing that’s always struck me is the disjoint nature of the material. Not in terms of the subject matter, but rather with respect to the examples being used. In learning a spoken language, you don’t abandon one part of speech as you acquire another. Learning is cumulative and iterative: as we learn, we revise our approach.

In teaching programming, we seem to be so focused on being focused that we divorce ourselves from the actual processes that go on when we solve real-world problems. In the past few years, I’ve noticed that people are producing programming language courses reduced to five-minute info-bites. Here’s the thing: software development is a long-form practice.

Early Insight

I put together my first programming curriculum in 1981, when I was an instructor at Computer Camp, Inc. in Santa Barbara, California. The students were teens, and the problem in my mind was motivation. Unlike adults, most of the teens I’ve taught over the years don’t approach programming from experience. They have a beginner’s mind. This is both good and bad for a teacher. The good is that they don’t have bad habits yet. If properly taught, they will think in the language. The bad is that we, as experienced developers, have come to see programming languages as a collection of “computer language components” and not as a methodology for solving problems as expressed in a specific syntax. As a result, the vast majority of software written today is the spoken-language equivalent of a transliteration. All the words are there, and a native speaker could probably make sense of it, but they would suffer greatly.

In 1982, I found myself tasked with teaching an advanced BASIC programming class. It was then that I hit upon the idea of a dungeon crawler. The students were interested from the outset. They appreciated that everything they were spending their precious time on was leading to the outcome. They looked at the language as a means to solve problems and not as a way to take a solution from another language and reapply it.

So, now I understood that it was possible to motivate and teach people how to think in a programming language. Could I leverage this understanding?

Teaching Revisited

In 2008, I had the opportunity to teach electrical engineers C++ and SystemC. These were individuals whose software development experience was grounded in C programming. Their code, and indeed their approach to software development, was procedural, as one might expect. In order to learn SystemC, they first had to learn C++ (the language SystemC is written in). After working with the materials we had been using, I felt strongly that we weren’t motivating an appreciation and understanding of object orientation. I had the opportunity to participate in the creation of an entirely new C++ curriculum. From the beginning, it introduced object orientation. There is an interesting shift that takes place when the responsibility for the data moves from all the code that touches it to the objects that manage it.

The Stanford Way

I’ve been watching Stanford’s iOS development course (CS193P) since it was first made available. It has undergone an interesting evolution over the past decade. Initially, it taught Objective-C development and iOS programming. This included pure (non-GUI) Objective-C and test driven development. With the fundamentals in place, the model-view-controller paradigm was taught as the foundation of iOS development. Then the class shifted into the standard piece-part methodology we see everywhere, albeit with a distinctly iOS bent.

Over the years, both the pure language and test-driven development aspects went away. These were relegated to reading material. Objective-C was supplanted by Swift. More sophisticated areas were covered as the iPhone evolved. By the end of the course, students can build complex apps, but it feels like they are learning APIs rather than the language. Then again, what can you do in 10 weeks? Would people actually pay for a college course to learn Swift and then another for iOS development?

Enter Wumpus

About five years ago, someone asked me to teach them how to make iPhone games. They had no software development experience and little desire for the traditional approach of learning via classes or books. They understood the ins and outs of game play and had a keen sense of what made a game playable.

The process that followed was the condensation of forty years of writing code and developing software. Today, when we work with just about any OS API, we have to deal with a context. But how do you motivate the very idea of a context? How do you teach people to work effectively with the net result of over fifty years of software development practices without simply expecting them to accept that this is the way it is? You can easily create an animation, but what is happening behind the scenes? Being able to understand and explore these questions is what will determine whether someone will be capable of working beyond the software equivalent of writing pulp fiction.

In the end, I settled on teaching software development through the very old game of Hunt the Wumpus. This game appeared in the original Unix distributions. It has simple rules, a bit of action and some random elements, and is, on the whole, able to be understood by a nine-year-old. Its implementation can be used to demonstrate multi-dimensional arrays, randomization, object-orientation, internationalization, error handling, data visualization and testing.

As this was before Swift, it was implemented in Objective-C.
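
Although the original was Objective-C, here is a rough Swift sketch (my own illustration, not the app’s actual code) of the kind of model the game needs: the classic 20-room cave as a two-dimensional array of exits, with a few hazards scattered at random.

import Foundation

// One common wiring of the 20-room cave: each row lists the three rooms
// reachable from that room (zero-based indices).
let exits: [[Int]] = [
    [1, 4, 7],    [0, 2, 9],    [1, 3, 11],   [2, 4, 13],   [0, 3, 5],
    [4, 6, 14],   [5, 7, 16],   [0, 6, 8],    [7, 9, 17],   [1, 8, 10],
    [9, 11, 18],  [2, 10, 12],  [11, 13, 19], [3, 12, 14],  [5, 13, 15],
    [14, 16, 19], [6, 15, 17],  [8, 16, 18],  [10, 17, 19], [12, 15, 18]
]

// Scatter three hazards into distinct, randomly chosen rooms.
var hazardRooms = Set<Int>()
while hazardRooms.count < 3 {
    hazardRooms.insert(Int(arc4random_uniform(UInt32(exits.count))))
}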

Personally, I used my implementation of Wumpus to experiment with iOS. Specifically, I was tinkering with storyboards in Xcode. I wanted to see if it was possible to implement the user interface of Wumpus entirely using scenes representing the rooms within the game. This is, of course, a horrific abuse of the scene concept and is the equivalent of unfolding an array of objects into individual routines. It did, in fact, work. And I would not ever recommend that the technique be used for production code.

Enter Swift

Two years ago, Apple announced Swift. Immediately, I started working with it. Like many languages before it, Swift incorporated lessons learned. In the case of Swift, many lessons were learned. You can look at my earlier posts to see my past musings on the language.

In May, I found myself with sufficient time on my hands to undertake a rewrite of Wumpus in the soon-to-be-released (now just released) Swift 3. Concurrently, iOS 10 was to come out and would be supported by Xcode 8. Changes all around. My initial Wumpus model was readily brought over from Objective-C. Over time, I realized that many of the things in that implementation could be folded down to a single line of Swift code. Swift wasn’t an extension of an older language. In fact, as the language evolved from version 1 to 3, many elements initially present were removed or replaced. Today’s Swift is much more consistent as a result.

I knew the pieces of the user interface that would be required and set about recreating them, this time in a sane fashion. Once this was done, I began the process of connecting the view to the controller layer and eventually the model, all the while adopting the Swift 3 and iOS 10 idioms.

At this point, I had a playable version of Wumpus. There was a main scene that took you to the rules or the game. The rules were a static chunk of text. The credits were static attributed text. You could navigate the maze and be moved (scene with alert) or die (scene with alert). Shooting came in and initially used a scrolling picker with the room numbers. Dull stuff.

Just Add … Everything

Now came the interesting bits. The iOS-specific bits.

It’d be dull to cover this in detail, so here’s a rough sequence.

  • 30+ background images
  • danger annunciator images
  • tint overlay to gray scale backgrounds
  • ambient sound across scenes (looped soundtrack)
  • incidental sounds within scenes (looped for danger and one-shot for events [moved, died])
  • added settings controls for all audio volumes
  • asset catalog used for both image and sound management (simplified access)
  • rebuilt settings using a table with dynamically constructed cells with action handlers
  • saved statistics using a class-based archiver (see the sketch after this list)
  • rebuilt statistics using dynamic data generation from the statistics data
  • segues and segue unwinding (navigation control)
  • timers (scene auto-transition from title scene)
  • tap gestures (eliminating navigation buttons)
  • replaced static rules text with chunked pages and swipe gestures
  • custom font (Kalam)
  • parallax (titles, danger annunciators and event imagery)
  • dynamically constructed attributed text (credits)
  • endless scrolling text loop (credits)
  • dynamically constructed tables from plist data (statistics field names)
  • static collection view replacing lame picker interface (shoot scene)
  • app analytics (Firebase)
  • ad support (AdMob)
  • JSON processing (credits source import)
  • core data (credits attributed string construction)
  • built to work with both iOS 9.3 and 10.0 (core data had a major change)
  • social network (Facebook / Twitter) posting
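
To give a flavor of one item from that list, here is a hedged sketch of the class-based archiver approach used for the saved statistics. The type and key names are hypothetical stand-ins, not the app’s actual names.

import Foundation

// Hypothetical statistics type persisted via NSCoding / NSKeyedArchiver.
class GameStatistics: NSObject, NSCoding {
    var gamesPlayed = 0
    var gamesWon = 0

    override init() {
        super.init()
    }

    required init?(coder aDecoder: NSCoder) {
        gamesPlayed = aDecoder.decodeInteger(forKey: "gamesPlayed")
        gamesWon = aDecoder.decodeInteger(forKey: "gamesWon")
    }

    func encode(with aCoder: NSCoder) {
        aCoder.encode(gamesPlayed, forKey: "gamesPlayed")
        aCoder.encode(gamesWon, forKey: "gamesWon")
    }
}

// Archive to, and restore from, a file in the Documents directory.
let statsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("statistics.archive")

let stats = GameStatistics()
stats.gamesPlayed = 10

if NSKeyedArchiver.archiveRootObject(stats, toFile: statsURL.path) {
    let restored = NSKeyedUnarchiver.unarchiveObject(withFile: statsURL.path) as? GameStatistics
    print("restored games played:", restored?.gamesPlayed ?? 0)
}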

Testing, Testing

An important part of creating an iPhone application is being able to ship it. But before that you should really test it. A lot. Really.

To do that you need to do the dance of getting certificates and creating an app instance. With these you can push builds to Apple’s servers where they can be accessed by internal testers (all builds) and external testers (specific builds, after review [sort of]). Then comes the great fun of prodding the testers.

Collateral Damage

It’s been tested. All the features (for this release) are present. And it’s time to ship, right? Actually, no. You can’t ship an app without creating a bucket and a half of collateral images (screenshots) for the App Store. There’s also the small matter of the web site that will support the app. And no self-respecting app would go up without a gameplay video.

About those images. You technically only need one set at the highest screen geometry. The others will be generated by scaling. But you’ve gone to all the trouble of adopting an adaptive user interface so things look reasonable on all the various screen geometries, so not generating imagery for every size would just be lazy. Happily, all of these can be generated from simulator screen captures. Imagine having to round up half a dozen devices just to do screen caps. Did I mention that video? Well, you can’t capture video from the simulator. So, for those of you who look at my app on the store, there’s just the one from my current iPhone.

I do keep referring to Wumpus as an iPhone app. Well, it is. I designed it for portrait-only. Now, this doesn’t prevent you from putting it on an iPad. The problem is that Apple has never updated the screen size used for iPhone apps on an iPad. It’s this pointlessly scrunched-up screen size. It looked brain-dead. So, I went back and tweaked the layout to be less egregious. It’s not pretty, but why are you running it on an iPad in the first place?

Can I go now?

What could possibly be left to do?

  • specification of age rating
  • description for the store
  • verification that you own or have license for all the bits you’re using
  • text for alerts presented to the user, if certain features are used

About that whole licensing point. Wumpus uses a lot of images and audio tracks. They all need to be acknowledged properly. That was a driving factor in using Core Data to track them. All the ones I used were either public domain or minimally encumbered. The biggest problem I had was not finding them, but selecting from among them.

And yes, now it’s ready to ship.

Ship It

So, about two weeks ago, I submitted Wumpus for review. Well, I tried to. Apple will only review apps built against finalized OS libraries. Wait. Wait. So I added a few more bits to fill the time. On Monday, 12 September 2016, I was able to submit Wumpus for review, after a brief diversion of trying to find out how to answer new privacy questions related to the use of Firebase and AdMob. Then came the wait. Did I forget something? Was there some horrible error condition lurking, waiting for the mystical Apple auto application checkers to detect? Would the review be delayed by more relevant applications (honestly, that’s just about every other app)? Nah, it was all good.

On Wednesday 14 September 2016, I got an auto-generated email informing me that my app was available for sale. Pretty anti-climactic really. If you have an iPhone/iPad, you can download it today. The related web site is also online.

And?

So where’s the tie-back to teaching programming and software engineering? That was the point, right? Absolutely. I’m not done. Although Wumpus represents an interesting résumé piece and I’ll be extending it with additional technologies (such as the web and Apple Watch), my takeaway is an example that I know I can use to teach both Swift and iPhone development. Like all good stories, this one leaves me wanting more.

iOS 10 ships on September 13th. Apple has changed their generated code for Core Data. It doesn’t compile if you set your deployment target to iOS 9.3. What’s a developer to do?

For a very long time now, Apple has been proud, and deservedly so, that their users overwhelmingly keep up to date with the latest releases of their operating systems. As such, Apple’s messaging to developers is to support the current and previous OS versions. Most of the time, this isn’t a big deal. Sometimes, it is. With the latest changes related to Core Data, it’s a problem.

If you create a new Xcode project that will be using Core Data, you get some useful code bits in the AppDelegate. In Xcode 8, you get the following (comments removed for brevity):

lazy var persistentContainer: NSPersistentContainer = {
    let container = NSPersistentContainer(name: "coreme")
    container.loadPersistentStores(completionHandler: { (storeDescription, error) in
        if let error = error as NSError? {
            fatalError("Unresolved error \(error), \(error.userInfo)")
        }
    })
    return container
}()

func saveContext () {
    let context = persistentContainer.viewContext
    if context.hasChanges {
        do {
            try context.save()
        } catch {
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }
}

This works great if you’re only going to build for iOS 10, but if you try to set your deployment target to iOS 9.3 as recommended, you’ll get the following error:

AppDelegate.swift:49:35: 'NSPersistentContainer' is only available on iOS 10.0 or newer

Well, that’s not good.

As users of Core Data know, operations using it use a database context. You can see it being used in the saveContext function above. Here we see that the context is a member of the persistentContainer. Since this doesn’t exist in iOS 9.3, we’ll have to get it some other way.

Let’s look at what used to be supplied with Xcode 7 (updated to Swift 3):

lazy var applicationDocumentsDirectory: URL = {
    let urls = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    return urls[urls.count-1]
}()

lazy var managedObjectModel: NSManagedObjectModel = {
    let modelURL = Bundle.main.url(forResource: "bfoo", withExtension: "momd")!
    return NSManagedObjectModel(contentsOf: modelURL)!
}()

lazy var persistentStoreCoordinator: NSPersistentStoreCoordinator =
{
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: self.managedObjectModel)
    let url = self.applicationDocumentsDirectory.appendingPathComponent("SingleViewCoreData.sqlite")
    
    do {
        try coordinator.addPersistentStore(ofType: NSSQLiteStoreType, configurationName: nil, at: url, options: nil)
    } catch {
        let dict : [String : Any] = [NSLocalizedDescriptionKey        : "Failed to initialize the application's saved data" as NSString,
                                     NSLocalizedFailureReasonErrorKey : "There was an error creating or loading the application's saved data." as NSString,
                                     NSUnderlyingErrorKey             : error as NSError]
        let wrappedError = NSError(domain: "YOUR_ERROR_DOMAIN", code: 9999, userInfo: dict)
        print("Unresolved error \(wrappedError), \(wrappedError.userInfo)")
        abort()
    }
    
    return coordinator
}()

lazy var managedObjectContext: NSManagedObjectContext = {
    let coordinator = self.persistentStoreCoordinator
    var managedObjectContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    managedObjectContext.persistentStoreCoordinator = coordinator
    return managedObjectContext
}()

I’ve left out the saveContext function, as its only distinction is where it gets the context from. Above, we see that the context is a separate variable.

Now, without a doubt, the iOS 10 code is much simpler, but if we want to do the right thing and support both the current and previous versions, we need a bit of a mash-up. It’s a bit tedious, but you only have to build it once (hint, hint, Apple). Here’s what I’ve come up with.

// MARK: - utility routines
lazy var applicationDocumentsDirectory: URL = {
    let urls = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    return urls[urls.count-1]
}()

// MARK: - Core Data stack (generic)
lazy var managedObjectModel: NSManagedObjectModel = {
    let modelURL = Bundle.main.url(forResource: modelName, withExtension: modelExtension)!
    return NSManagedObjectModel(contentsOf: modelURL)!
}()

lazy var persistentStoreCoordinator: NSPersistentStoreCoordinator = {
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: self.managedObjectModel)
    let url = self.applicationDocumentsDirectory.appendingPathComponent(modelName).appendingPathExtension(sqliteExtension)

    do {
        try coordinator.addPersistentStore(ofType: NSSQLiteStoreType, configurationName: nil, at: url, options: nil)
    } catch {
        let dict : [String : Any] = [NSLocalizedDescriptionKey        : "Failed to initialize the application's saved data" as NSString,
                                     NSLocalizedFailureReasonErrorKey : "There was an error creating or loading the application's saved data." as NSString,
                                     NSUnderlyingErrorKey             : error as NSError]
        
        let wrappedError = NSError(domain: "YOUR_ERROR_DOMAIN", code: 9999, userInfo: dict)
        fatalError("Unresolved error \(wrappedError), \(wrappedError.userInfo)")
    }
    
    return coordinator
}()

// MARK: - Core Data stack (iOS 9)
@available(iOS 9.0, *)
lazy var managedObjectContext: NSManagedObjectContext = {
    var managedObjectContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)    
    managedObjectContext.persistentStoreCoordinator = self.persistentStoreCoordinator    
    return managedObjectContext
}()

// MARK: - Core Data stack (iOS 10)
@available(iOS 10.0, *)
lazy var persistentContainer: NSPersistentContainer = {
    let container = NSPersistentContainer(name: creditsModelName)    
    container.loadPersistentStores(completionHandler: {
        (storeDescription, error) in
            if let error = error as NSError?
            {
                fatalError("Unresolved error \(error), \(error.userInfo)")
            }
        }
    )
    
    return container
}()

// MARK: - Core Data context
lazy var databaseContext : NSManagedObjectContext = {
    if #available(iOS 10.0, *) {
        return self.persistentContainer.viewContext
    } else {
        return self.managedObjectContext
    }
}()

// MARK: - Core Data save
func saveContext () {
    do {
        if databaseContext.hasChanges {
            try databaseContext.save()
        }
    } catch {
        let nserror = error as NSError
        
        fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
    }
}

I’ve introduced databaseContext as the context variable to be used throughout. I think it’s clearer, and now saveContext doesn’t have to care which OS you’re targeting. With this change in place, you can safely target 9.3 and still build for 10.0.
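
For completeness, here is a hedged sketch of how code elsewhere in the app can use databaseContext without caring which stack sits underneath. It assumes it lives alongside the declarations above (with CoreData imported), and the "Credit" entity name is a hypothetical stand-in, not necessarily what the app uses.

func loadCredits() -> [NSManagedObject] {
    // Works identically on iOS 9.3 and 10.0: databaseContext hides the difference.
    let request = NSFetchRequest<NSManagedObject>(entityName: "Credit")

    do {
        return try databaseContext.fetch(request)
    } catch {
        print("Unresolved fetch error \(error)")
        return []
    }
}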

One of the challenges in learning a new language, be it human or computer, is learning to think in the language. Learning Swift is no different.

I’ve been spending the past several months working on the first iOS (iPhone) app that I’ll be releasing through the Apple app store. I’m writing it entirely in Swift 3, the latest version of the language. Yesterday, I was looking over the code to see if there were any places where I’d been thinking in a non-Swift fashion. This moment of reflection was triggered by my viewing of a very recently released class in Swift from one of the major online education sites. I was disappointed that the instructor was using an idiom that I knew was easily simplified in Swift. My disappointment made me wonder if I had similar issues.

The following is actual code from my upcoming app. I had originally created its core in C as an extensible example for teaching that language. The setup is as follows:

  • the map is an array of connections from each room
  • the number of connections from each room is fixed
  • we are explicitly tracking the number of a given type of hazard

We want a function that returns true if there is a hazard in one of the connecting rooms. Here’s the original code:

var hazardNear = false
        
if hazardCount > 0
{
    for nextRoom in 0 ..< numberOfConnections
    {
        if hazardRooms.contains(exits[room][nextRoom])
        {
            hazardNear = true
                    
            break
        }
    }
}
        
return hazardNear

Pretty conventional C-esque stuff. The first thing that jumped out at me was the reliance on knowing the number of connections. Swift arrays are true collections. Let’s treat them as such.

var hazardNear = false
        
if hazardCount > 0
{
    for nextRoom in exits[room]
    {
        if hazardRooms.contains(nextRoom)
        {
            hazardNear = true
                    
            break
        }
    }
}
        
return hazardNear

When viewed this way, it’s obvious that we’re just filtering the collection based on a condition. So, just do that.

var hazardNear = false
        
if hazardCount > 0
{
    hazardNear = exits[room].filter{hazardRooms.contains($0)}.count != 0
}
        
return hazardNear

We’re filtering the array of rooms for ones that contain the hazard and checking for a non-zero count. But this is still a bit clunky. Let’s examine what we’re really asking.

var hazardNear = false
        
if hazardCount > 0
{
    hazardNear = !exits[room].filter{hazardRooms.contains($0)}.isEmpty
}
        
return hazardNear

It’s far more clear to simply ask if the filtered array is non-empty directly. What’s left is one final simplification.

return hazardCount > 0 && !exits[room].filter{hazardRooms.contains($0)}.isEmpty

One might ask whether having an explicit count is necessary, as the hazard array has a count of its own. True. But the hazard array is built based on the number of hazards, so I still need the count to be around.
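
One further nudge in the same direction, though it wasn’t part of the original exercise: Swift’s contains(where:) asks the question directly, without building an intermediate filtered array.

return hazardCount > 0 && exits[room].contains(where: { hazardRooms.contains($0) })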

Overall, I’m fairly pleased with the results of the exercise. In the end, the code is much clearer in intent.

In the dim days, dinosaurs roamed the Earth. Okay, not that far back.

In the American West of the late 19th century, if you wanted the latest technological wonder needed to make your life easier, there was one place you’d go … the blacksmith. Why? Well, the smith had both the equipment and the skills to produce just about anything out of metal. (Our expectations of technology were a bit lower by today’s standards.)

On the face of it, the things you needed to set yourself up as a blacksmith were fairly minimal.

  • Fire
  • Metal
  • Water
  • Tongs
  • Hammers
  • Chisels
  • Anvil

Of course, all the tools in the world are pointless without the skills needed to use them effectively and efficiently.

Of the items on the list, the outlier is the anvil. It is the foundational piece of equipment second only to fire in importance.

Interestingly, this fairly humble object is used in the production of stunning pieces of metalwork.

For some time after the arrival of Europeans in North America, anvils were shipped from the Old World. Eventually, the infrastructure was sufficient to allow the production of anvils locally, bringing the cost down significantly. Still, they were expensive. Likewise, if you wanted a blacksmith, you brought one over.

To become a blacksmith, you apprenticed with a blacksmith. At the end of your apprenticeship, you would be given either money or tools. With your tools and skills and after purchasing your own anvil, you could now go and ply your trade making everything from spoons to fences to lanterns.

For me this history has a special significance. When I started on my journey of working with software development, computers were million dollar investments. Access to them was restricted to a few highly trained individuals. I had the opportunity to study under one of those who was present at the beginning of the computer era. I watched the development of the computer from big iron to something you wear on your wrist.

Throughout, there has always been the computer and keyboard with which I craft software. Long gone are the days of stacks of dollar-bill-sized, hole-ridden cards. Now my anvil could be lost in a stack of magazines. Strangely, that US$20 anvil of 1850 would cost about US$700 in 2016, about what you’d pay for a run-of-the-mill laptop or the iPad I’m composing this post on.

Like that anvil, I can take my tools with me anywhere. Like the blacksmith, the quality of my work and the words of my employers and colleagues speak for me. Likewise, my tools and skills need constant sharpening and care.

I have seen many fads come and go. Throughout, the basic skills are always applicable. When teaching, I try to emphasize that there are no shortcuts to mastery. There seems to be a great willingness to build software today by assembling large chunks of other people’s work. While this may lead to a functional product, it does not make you a better developer. To become a better developer, you must understand how the pieces work. I’m not saying that you must build everything yourself, merely that, given time, you could. This is the distinction between a technician and a technologist. The former can be trained in short order by the latter.

It seems that in an age when we want everything on a global scale, we believe that the quality of the craftsman is a price that must be paid. In reality, scale need not sacrifice quality. The missing piece of the equation is time. The old adage of “fast, cheap, good: pick two” is quite real. Quality takes time. If you’re seeking quality, keep your eye out for those who have their own anvil.

One of the nice things about taking months or even years to finish reading a book is that it gives me plenty of time to reflect on it. Sure, I could binge read, but the material is far less sticky. The longer I spend with a book, the more connections I can make. “Failure Is Not An Option,” which I just completed a few days ago, is a good example of this.

The book itself is a retelling of the history of the manned space program from the point of view of someone intimately familiar with the events. At first blush, it would appear to have nothing to do with the process of software development. The Apple Pencil I’m using at the moment probably has more computing power than anything available at the time. So, what is the connection?

It is absolutely true that when dealing with matters of human life, failure is not an option. When you look at the history of American manned space flight from the outside, the only thing you see is the ultimate expression of this. You see successful execution, even in the case of unforeseen situations. And that’s the key: it’s from the outside. Success was not achieved because perfect people perfectly executed perfect plans using perfect equipment under perfect conditions. Yes, there were times when the execution, equipment or conditions were perfect. Most of the time, success was ensured in spite of less-than-perfect conditions. For me, that’s the story.

During the space program, failure was a mandate. When you exist in a world of constrained resources, the best way to ensure that you will be successful is to see how you respond when the resources that you depend upon are not available. But wouldn’t you always bring everything you needed and have it on hand? One would hope so, but let’s say, for the sake of argument, that you didn’t. What then? If something bad happens, how much time do you have to get things back on track before it’s game over? Well, that depends on what happened, when it happened and how important it was.

Okay, sure, fine, but what did they do and what does it have to do with software development?

In the case of the space program, failure was ensured before the mission: every possible failure they could conceive of. Teams of people had the sole job of inducing failures during mission simulations. We’re not talking SimCity or Second Life here. These simulations were conducted on physical hardware as identical to the real thing as possible. The difference was that the user interface was driven not by physical sensors, but by computers. Crews were made to experience failure over and over until their responses were second nature. When you think about it, this is no different from any physical endeavor. The more you train yourself to respond to situational changes, the better your performance becomes.

As of late, I believe that many of the problems with software and hardware have come about because of a focus merely on the “happy path”: the idea that we should worry first about making it work and then later about failure conditions. Unfortunately, once management sees something that “works,” they are unlikely to allocate time to break things. The situation has been made worse by the attitude that software will be tested in beta.

As a result, not only is inferior-quality software being produced, but the software is being used in environments which expose its data to exfiltration. We see this in malware infecting point-of-sale systems. We see it in OpenSSL and other open source code: software being widely adopted under the assumption that someone else must be testing it. Entire generations argue for speed over safety.

So, why has this attitude taken hold? Is it harder to design for failure first? Not really. In fact, it makes unit testing easier, as it builds in the failure paths before the actual implementation. But doesn’t this make the code slower? Not inherently. Between the use of modern tool chains and profile-guided optimization, the cost of fail first, fail fast is minimal. Why don’t we do privilege segregation? For the same reason we don’t apply the principles of MVC, MVVM or VIPER. People sit in front of a screen and type. They use the excuse of doing “agile” development on short cycles for not properly planning. Now, I have nothing against short cycles, but to use them properly, you have to accept that not every problem can be evaluated, solved, implemented and tested in 2 or 3 weeks. Some things are hard. Some things can only be accomplished by specific individuals. Some tasks have dependencies on outside resources. Software isn’t building IKEA furniture. There are high levels of indeterminacy that can and do crop up.
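
To make that concrete, here is a hedged sketch in Swift of writing the failure path, and its test, before the happy path. The MazeLoader type and its error are hypothetical, invented purely for illustration.

import XCTest

// A hypothetical loader whose failure path exists from day one.
enum MazeError: Error {
    case malformedMap
}

struct MazeLoader {
    func load(from json: String) throws -> [[Int]] {
        // Real parsing would go here; the point is that the error path is
        // designed, implemented and tested before the happy path is filled in.
        throw MazeError.malformedMap
    }
}

class MazeLoaderFailureTests: XCTestCase {
    func testMalformedMapIsRejected() {
        // The failure-first test: bad input must be rejected, not limped past.
        XCTAssertThrowsError(try MazeLoader().load(from: "not json")) { error in
            XCTAssertTrue(error is MazeError)
        }
    }
}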

Neither can we pretend that the software we create lives in isolation or that security is the responsibility of the user. Holding onto data simply because it would make the developer’s job easier during development is not enough of a reason. Not using OS provided encrypted data storage because it makes it harder to debug is simply lame. Transmitting data in the clear or without access control just invites both data exfiltration and command-and-control injection.

Here’s the thing. If you want to succeed, you must fail. You must fail first and you must fail fast. Failure helps to characterize the system. It helps in the creation of documentation. It helps validate the design. Failure gives you a better understanding of the problem space. In short, failure is not an option; it is a requirement.

One of the things I’ve always liked about Apple is the way they revisit and revise their tools. Although there is comfort in logging on to any *nix system and knowing that ed or vi or gcc will be there, I believe that the lack of investment in standard tools hurts more than the assurance that anyone who has written code since 1980 will still feel at home helps. This is the technical equivalent of having a rotary gang switch to change channels on your flat-screen TV because that’s what your grandparents were used to and they still need to use the TV. You know what? They managed to adapt to there being a wireless remote and 500 channels.

Swift is probably the most visible of today’s Apple tool changes. As we fast approach the release of Swift 3.0, it’s good to check in and see what the future of software development of Apple products is going to be like. This is one of those growth and pain stories.

From the language standpoint, Swift 3.0 brings a greater level of consistency in terms of behavior and style. Compared to other languages, new developers will have fewer exceptions to keep track of. For those following the latest Stanford iOS class on iTunes U (CS193P) and thinking that when they’re done they’ll have learned Swift: well, sorry, but in a few months you’ll need to update all your code.

One of the biggest changes in Swift 3.0 is the way that the Apple OS-specific libraries are handled. In the same way that C++ and Python aren’t bound to a particular operating system, neither is Swift. You don’t need to be on MacOS or iOS to use Swift. The problem is that most applications that interact with people need to be bound to some kind of operating environment. On Windows that’s typically done by using .Net. With Java, you’ve got a laundry list of environments (EE, SE, ME). For Swift, you’re binding to Apple’s plenitude of support libraries. All of which have, for the past decade or so, been based on Objective-C. The great thing about new languages is that they incorporate lessons learned. The devil of them is that vast body of existing code they need to interface to.

In the case of MacOS and iOS, the libraries make a lot of sense if you’re writing code in 2004. We’ve learned a lot about languages and frameworks and operating systems since then. But, as with all things, there are only so many hours in the day. Inevitably, you build a core around the language and, if you’re nice, create bridges (shims) to the external bits. If you’re Apple, you’re usually particularly nice and create toll-free bridges (ones built for you). If you’re Microsoft, you have the joy of multiple layers of conversions to look forward to. Eventually, as the language evolves, you get to the point where the rules you gave people as to how the bridging worked no longer align with the language. That’s where we are today with Swift 3.0.
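
As a small, hedged illustration of the kind of bridging in question (my own example, not taken from Apple’s documentation): in Swift 3, Foundation’s Objective-C types and their Swift counterparts convert with explicit casts, and the rules for when and how that happens are exactly what has been shifting.

import Foundation

// Swift's String and Objective-C's NSString bridge with explicit casts in Swift 3.
let swiftString = "Hunt the Wumpus"
let objcString: NSString = swiftString as NSString
let roundTripped: String = objcString as String

// Several Foundation reference types gained Swift value-type counterparts
// (NSDate -> Date, NSData -> Data, and so on) with the same cast-based bridging.
let now = Date()
let legacyDate: NSDate = now as NSDate

print(roundTripped, legacyDate)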

So, by now, you’re saying to yourself that this must be really bad to merit such a long preamble. Sort of. I came across a nifty article on what’s new in Swift 3.0 and decided to download the May 9th Swift trunk [see update below] and see for myself. There’s a nifty video from the same person. He does a good job of covering the changes. Remember my earlier reference to the Stanford class? I’ve been following it myself. I thought, “let’s see how what’s being taught tracks.” For my first attempt I chose a small app that uses gestures to tinker with a face drawn to the screen. Now as those of you familiar with iOS will be aware, there are two ways of implementing UI binding to code. You can either write it in code or use Interface Builder. The thing about using IB is that it handles a bunch of tedious details for you. It also hides stuff and is the source of weird errors if you screw things up.

In implementing this app, I of course used IB. And you know what? I crashed and burned. After installing the new tool chain and restarting Xcode, my modest app showed a variety of errors stemming from the changes in Swift 3.0, all of which were easily corrected thanks to the very clear error messages and suggested remediations provided. The app launched and drew the baseline image, but any gesture failed miserably. Maybe it was the project. I created a new project with nothing but a raw view, added a tap recognizer in IB and wired it up to the view with a print diagnostic. It happily built, but also crashed on tap.

2016-05-26 15:16:31.150 f2[21677:736529] -[f2.ViewController tap:]:
  unrecognized selector sent to instance 0x7ffe59d31eb0
2016-05-26 15:16:31.156 f2[21677:736529] *** Terminating app due to
  uncaught exception 'NSInvalidArgumentException', reason:
  '-[f2.ViewController tap:]: unrecognized selector sent to instance
  0x7ffe59d31eb0'

Creating the action in code worked. To be fair, this tool chain is coming from the Swift site and not Apple. I then checked to see what happened if I tossed in a vanilla button. Same problem. That wouldn’t do. The answer turned out to be held in the AppDelegate. As it happens, the new tool chain generated a warning for each of the standard boilerplate functions.

/Users/charleswilson/Documents/projects/2016_CS193P/FaceIt/FaceIt/
  AppDelegate.swift:17:10: Instance method
  'application(application:didFinishLaunchingWithOptions:)' nearly matches
  optional requirement 'application(_:didFinishLaunchingWithOptions:)' of
  protocol 'UIApplicationDelegate'

The provided code reads:

func application(_ application: UIApplication,
  didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?)
  -> Bool

It seems that the functions need additional decoration. The warning notes that the function nearly matches. Allowing Fix-it to do its thing yielded:

@objc(application:didFinishLaunchingWithOptions:)
  func application(_ application: UIApplication,
  didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?)
  -> Bool

Apparently, additional decoration is required to bridge properly. So I went back to the recognizer added through IB in the standard way. Here’s the action.

@IBAction private func tap(sender: UITapGestureRecognizer)

Once the objc decoration is added, all is well.

@IBAction @objc(tap:) private func tap(sender: UITapGestureRecognizer)

Not a big deal to make the correction once you realize what’s required. Hopefully, others will take this as a heads-up and not get bogged down as they update their code.

Overall, I have a high degree of confidence in Swift 3.0; the changes make the language better. So, go get it yourself and see the future of Apple software development.

2016-06-05 Update

I pulled the May 31 Swift trunk with the same results. I’d imagine that Apple will clean this up really soon as WWDC is just around the corner.

2016-06-09 Update

I pulled the June 6 Swift trunk and got the same results. Cutting it close for WWDC.

An interesting thing about having spent over thirty years taking software from one platform to another is that, time and again, I’ve had my understanding of what constitutes correct code challenged. That’s a good thing.

Sadly, many people who ply the trade of software development mistakenly believe that a compiler has the ability to warn you when your code is going to behave in ill-advised ways. Worse yet, they fall into the trap of believing either that their code is correct if it compiles without warnings, or that if one compiler accepts their code then any compiler will. Unfortunately, these beliefs are the developer equivalent of a two-year-old’s lack of object permanence. These two tragedies aside, the vast majority of developers are clueless as to how static analysis can and should be used to ensure code quality.

Let’s rewind a bit and work through these.

In the beginning was the language specification. It was a bright, shiny idea given form. Lest you get the idea that these documents, venerated by compiler authors and language wonks alike, are intrinsically sane, please recall that the original Ada spec allowed minus signs in the middle of numeric literals and that 8 and 9 were perfectly acceptable octal digits in early C. Now, a computer language without a compiler is fairly dull. Enter the compiler authors. These individuals, who number about 1000 in the world and of whom I’ve personally known about a dozen, are highly proficient at taking the language specification and giving it life. The way they do this is far more Pollock than Vermeer. Why? Well, a language is the embodiment of a worldview. Unlike source code control systems, which are only created when someone gets fed up with the way the current one does one particular thing to such an extent that they can’t bear living under its yoke any longer, languages come from the world of paradigms. If your pet peeve is small, you’ll probably be able to either work around it or get it added to the language (typically contingent upon who you know). If your peeve isn’t small, any attempt to modify the language you’re using will have the same result as updating the value of Planck’s constant or dropping a storm trooper platoon into the middle of a city on Vulcan.

It can’t possibly be that bad, you say. Actually, it can. And, in fact, it is. A long time ago, Apple was transitioning from its Pascal-based OS to a C-based one. The transition was fraught with byte-prefixed, null-terminated strings. The resultant code was pretty horrific. Just recently, I was working on a feature that required me to move between BSTR, wstring, COM and char strings. This is because the language wasn’t designed with the notion that strings could be more than just US English. Compare this to Swift, which is a pure Unicode language right down to the variable names.

Every compiler writer brings their own unique experiences and skill set to realizing the worldview embodied in the language specification. No two compilers will realize it the same way. Oh, they’ll be close and probably agree 90% of the time. The biggest area of difference will be in what each compiler considers important enough to warn the developer about. On one end of the scale, you could argue that if the language specification allows something, then the developer is free to write the code accordingly. At the other, the compiler would report every questionable construct and usage. All compilers that I’m aware of fall somewhere in the middle.

Unfortunately, rather than biting the bullet and forcing developers to recognize their questionable and problematic coding choices, compilers have traditionally given code a pass by not enforcing warnings as errors and by having the default warning level be something uselessly low. To make matters worse, if you do choose to crank up the warning level to its highest severity, many times the operating system’s headers will fail to compile. I’ve discovered several missing macro symbols that the compiler just defaulted to 0. Not the expected behavior. Microsoft’s compilers have the sense to aggregate warnings into levels of increasing severity. gcc, however, does not. The omnibus -Wall turns out to not actually include every warning. Even -Wextra leaves stuff out. The real fun begins when the code is expected to be compiled on different operating systems. This can be a problem for developers who have only ever worked with one tool chain.

So, let’s say that you realize that compilers on different platforms will focus on different issues. You’ll probably start trying other compilers on the same platform. Once all those different compilers, set to their most severe, pass your code, it’s good, right? Not so fast. Remember, a compiler’s job is to translate the code, not analyze it. But, you do peer reviews. Tell me, how much of the code do you look at? How long do you look at it? Do you track all the variable lifetimes? Locks and unlocks? In all the code paths? Across compilation units?

Of course you don’t. That would be impossible. Impossible for a person. That’s why there are static analysis tools. My current favorite is Coverity. And yes, it costs money. If you can sell your software, you can pay for your tools.

But the fun doesn’t end there. Modern compilers can emit additional code to allow for profile-guided optimization (PGO). Simply put: instrument the code, run the code, feed the results back into the compiler. Why? Because hand-tweaking the branch predictors is an exercise in futility. Additionally, you’ll learn where the code is spending its time. And why does this matter? It matters because you can waste a lot of time guessing where you need to optimize.

Finally, there’s dynamic analysis. This is the realm of run-time leak detection. Wading through crash dumps and log files is a terrible way to spend your time.

So, are you developing quality software or just hacking?
