Archive for the ‘Computers’ Category

An interesting thing about having spent over thirty years taking software from one platform to another is that, time and again, I’ve had my understanding of what constitutes correct code challenged. That’s a good thing.

Sadly, many people who ply the trade of software development mistakenly believe that a compiler has the ability to warn you when your code is going to behave in ill-advised ways. Worse yet, they fall into the trap of believing either that their code is correct if it compiles without warnings, or that if one compiler accepts their code then any compiler will. Unfortunately, these beliefs are the developer equivalent of a two-year-old’s lack of object permanence. These two tragedies aside, the vast majority of developers are clueless as to how static analysis can and should be used to ensure code quality.

Let’s rewind a bit and work through these.

In the beginning was the language specification. It was a bright, shiny idea given form. Lest you get the idea that these documents, venerated by compiler authors and language wonks alike, are intrinsically sane, please recall that the original Ada spec allowed minus signs in the middle of numeric literals and that 8 and 9 were perfectly acceptable octal digits in C. Now, a computer language without a compiler is fairly dull. Enter the compiler authors. These individuals, who number about 1000 in the world and of whom I’ve personally known about a dozen, are highly proficient at taking the language specification and giving it life. The way they do this is far more Pollock than Vermeer. Why? Well, a language is the embodiment of a worldview. Unlike source code control systems, which only get created when someone becomes so fed up with one particular behavior of the system they’re using that they can’t bear living under its yoke any longer, languages come from the world of paradigms. Why does that matter? If your pet peeve is small, you’ll probably be able to either work around it or get it added to the language (typically contingent upon who you know). If your peeve isn’t small, any attempt to modify the language you’re using will have the same result as updating the value of Planck’s constant or dropping a storm trooper platoon into the middle of a city on Vulcan.

It can’t possibly be that bad, you say. Actually, it can. And, in fact, it is. A long time ago, Apple was transitioning from its Pascal-based OS to a C-based one. The transition was fraught with byte-prefixed, null-terminated strings. The resultant code was pretty horrific. Just recently, I was working on a feature that required me to move between BSTR, wstring, COM and char strings. This is because the language wasn’t designed with the notion that strings could be more than just US English. Compare this to Swift, which is a pure Unicode language right down to the variable names.
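To make that concrete, here’s a trivial sketch (Swift 2 era or later); both the identifiers and the string literals are full Unicode:

let π = 3.14159
let 名前 = "渡辺"
print("Здравствуйте, \(名前)")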

Every compiler writer brings their own unique experiences and skill set to realizing the worldview embodied in the language specification. No two compilers will realize it the same way. Oh, they’ll be close and probably agree 90% of the time. The biggest area of difference will be in what each compiler considers important enough to warn the developer about. At one end of the scale, you could argue that if the language specification allows something, then the developer is free to write the code accordingly. At the other, the compiler would report every questionable construct and usage. All compilers that I’m aware of fall somewhere in the middle.

Unfortunately, rather than biting the bullet and forcing developers to recognize their questionable and problematic coding choices, compilers have traditionally given code a pass by not enforcing warnings as errors and by having the default warning level be something uselessly low. To make matters worse, if you do choose to crank the warning level up to its highest severity, the operating system’s headers will often fail to compile. I’ve discovered several missing macro symbols that the compiler just defaulted to 0. Not the expected behavior. Microsoft’s compilers have the sense to aggregate warnings into levels of increasing severity. gcc, however, does not. The omnibus -Wall turns out to not actually include every warning. Even -Wextra leaves stuff out. The real fun begins when the code is expected to be compiled on different operating systems. This can be a problem for developers who have only ever worked with one tool chain.

So, let’s say that you realize that compilers on different platforms will focus on different issues. You’ll probably start considering trying other compilers on the same platforms. Once all those different compilers, set to their most severe settings, pass your code, it’s good, right? Not so fast. Remember, a compiler’s job is to translate the code, not analyze it. But, you do peer reviews. Tell me, how much of the code do you look at? How long do you look at it? Do you track all the variable lifetimes? Locks and unlocks? In all the code paths? Across compilation units?

Of course you don’t. That would be impossible. Impossible for a person. That’s why there are static analysis tools. My current favorite is Coverity. And yes, it costs money. If you can sell your software, you can pay for your tools.

But the fun doesn’t end there. Modern compilers can emit additional code to allow for profile guided optimization (PGO). Simply put, instrument the code, run the code, feed the results back into the compiler. Why? Because hand tweaking the branch predictors is an exercise in futility. Additionally, you’ll learn where the code is spending its time. And this matters because? It matters because you can waste a lot of time guessing where you need to optimize.

Finally, there’s dynamic analysis. This is the realm of run-time leak detection. Wading through crash dumps and log files is a terrible way to spend your time.

So, are you developing quality software or just hacking?

Read Full Post »

There’s an old computer joke. “There are 10 kinds of people. Those who count in binary and those who don’t.”

I think there are two kinds of software developers: those who constantly reexamine their skill set and those who like working on antique cars. Mind you, I have nothing against antique cars. A good chunk of my personal library is dedicated to computer history. I love reading about how we got to where we are. I’ve been privileged to have been an observer to some of that history. That being said, ours is a discipline where answers are only as good as the half-life of their questions.

What’s the lifetime of a question, you ask? It depends. If the question is “What is the length of the diagonal of a right triangle given the lengths of the other two sides?”, pretty damn long. On the other hand, if it’s “What is the best way to teach people how to write a program?”, well, tick-tock.

So, what does this have to do with shelfware? As it happens, a fair number of the projects I’ve worked on have become shelfware. Sometimes it was tech that the world just wasn’t ready for. Other times, it was a case of the tank shell and the mosquito: way too big a hammer for the problem. (The bigger the hammer, the greater the resources needed to create and eventually wield it.) There are those times when you’ve created a solution without a problem, or when, by the time you’ve got your solution, the problem has either been solved another way, thank you very much, or simply no longer exists. (Remember how everyone got all wound up about Y2K?) Finally, some projects simply run out of time.

The interesting thing is that no one has the creation of shelfware as their goal. But, you know what? Shelfware happens. So, what is one to do? My favorite exercise in such cases is the post mortem. As with all things, I ask my two questions: “What do I have?” and “What do I need?” What I have is easy to answer. It’s one of the above cases. What I need to determine is how it got there and what I can do to prevent the same sequence of events from occurring again.

Part of what comes from this analysis helps me improve my situational awareness. Contrary to popular belief, it is not enough to be good at what you do and expect that your insight, code or experience will suffice. Software is like an elaborate dining experience. If you can’t get the right dishes out to the right people in the right order at the right time, there will be hell to pay. By the same token, if your team isn’t able to fire on all cylinders, you’re bound to either lose the race or crash and burn. Eventually. Nothing is more pathetic than a software team whose sole purpose is to sustain a dead product. But I’ll leave zombieware for another day.

Situational awareness is all well and good, but it’s an external thing. The more important thing for me is to see what I can do to up my personal game. This is not to say that I wait for a train wreck to do this, only that unlike my usual mental fitness endeavors (consuming all the Apple WWDC and other conference sessions, Coursera classes, etc.) and scanning the horizon for interesting directions that the industry is moving toward, I like to take stock of my core skill set.

It takes skill to drive a nail with one or two hammer strikes, but it’s far more efficient to use a pneumatic nail gun. More fun too. And you don’t need the same level of skill. The same is true of software development. In the early days of both C and C++, there were no libraries. Everything was handcrafted in every single program. The number of times any given problem was solved, to varying degrees of success and quality, is staggering. People created problem-specific solutions which were then expanded upon for other projects. As usually occurs, they accreted unrelated and inseparable bits. By the time an opportunity to refactor the code base into some semblance of modernity comes along, it is usually deemed too great a risk. So, while you’re becoming the expert in dinosaur care and feeding, the next generation of developers has been tinkering away with the latest and greatest.

I know, I know, the fundamentals, you say. Okay. So, how about you show me your linear and logarithmic interpolation skills? No? You used Matlab perhaps? Hmm. Well then, how about explaining why 8.3 file names are storage efficient? Surely balancing binary trees, then. But wait, don’t you get that from maps in C++ and just about every other modern language? The finer details of memory allocation and pointers? Have you ever talked to a Java or Python developer? Binary tree common ancestors? Understanding how to implement that must be important. Seriously? In thirty-two years of developing software full time, I have never needed to re-balance a binary tree or find a common ancestor, except for interview questions. Which probably makes me unqualified to create the next great search algorithm. And that’s okay. All these answers reached their half-life long ago. Well, maybe not Tango trees. But they’re still a solution looking for a practical problem.
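For the record, and purely as a sketch, this is the sort of thing I mean by linear interpolation, in Swift 2 syntax rather than the hand-rolled C of yesteryear:

// Linear interpolation between a and b for t in [0, 1]
func lerp (a : Double, _ b : Double, _ t : Double) -> Double
{
    return a + (b - a) * t
}

let quarter = lerp(10.0, 20.0, 0.25)    // 12.5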

Over time, all these problems have had their solutions codified into either the language or the language’s standard library. This is a good thing. Map / reduce was once new and shiny. Now it’s built into Apple’s Swift language. I would much rather people who live to create the best sorting algorithm do so. That way everyone benefits. I don’t want to have to care if my data is partially sorted, whether I have fifty or fifty million items, if I have strings or structures. I want to sort my data in order to do something else with it, not because sorting it amuses me. By the same token, no one in their right mind would even consider creating a GUI from the ground up today. I’ve personally had to do this more than once. It’s a boat load of work and not portable. Why should data structures and their manipulation be any different? After all, we create abstractions so that we can handle problems of greater complexity.
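A quick sketch of what that codification buys you (Swift 2 syntax, trivial made-up data):

let values  = [42, 7, 19, 3]

let doubled = values.map { $0 * 2 }          // [84, 14, 38, 6]
let total   = doubled.reduce(0, combine: +)  // 142
let ordered = values.sort()                  // [3, 7, 19, 42]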

So, if I’m not looking at these, then what?

Currently, three things have my attention. The first is Apple’s Swift. I’ve been watching this language develop over the past two years into a nearly complete solution for application creation. The second is C++11/14/17. I started working with C++ in the mid ’90s and it has managed to keep pace with developments in the processor and compiler spaces. Anyone who is still writing code as if nothing has changed since C++03 really needs to look at C++14. Between auto, atomics and the memory handling changes, C++ continues to bring its A game. Finally, and most importantly, I’ve been looking at secure coding. Having recently taken the US CERT secure coding course, I was surprised at the level of code exploitation possible because of patterns of coding that go back decades. If OpenSSL has issues being exposed on a near weekly basis, why would we expect that other major pieces of our infrastructure are not equally at risk?

To remain relevant in the world of software development, one must not only be competent in the current use of technology, but also be actively learning and incorporating advances in the materials (languages / frameworks), methodologies (paradigms) and tools (editors, compilers, IDEs, etc.). Not to mention other people’s code.

Everything can and should be learned from, even shelfware. It’s merely experience without exposure.

Read Full Post »

Apparently, I am the first person to complete SEI CERT’s online version of their course in C/C++ secure coding. With all my pausing the videos to take notes, it took me forty hours to get through, but it was worth the time. Actually, it’s two courses, one in security concepts and the other in secure C/C++ coding.

There are four full days’ worth of material presented. The videos are chunked into 2 – 60 minute segments, which is helpful, as the presenters are speaking to an actual class. This choice of format is both good and bad. Good in that you see real questions being asked. Bad in that, as with many classroom situations, the presenters go off into the weeds at times.

There are six exercises covering the major elements (strings, integers, files, I/O, etc.). For these, web-interfaced Debian Linux VMware VMs are provided. These boot quickly and have all the tools required to perform the exercises. An hour is allotted for each, which is plenty of time. Following each is a section covering the solutions. This is especially helpful as there’s no one to ask questions of during the exercises. It’s also possible to download a copy of the VM being used. Sadly, there is no provision for Windows or native Macintosh environments.

Two books are provided (in various formats): Secure Coding in C and C++, Second Edition and The CERT® C Coding Standard, Second Edition: 98 Rules for Developing Safe, Reliable, and Secure Systems.

Generally, I found the material well delivered. The content in the areas of (narrow) strings, integers and memory handling was exhaustive. The sections on file handling and concurrency were almost completely lacking any Windows coverage. Given the number of Windows-based systems used by businesses and individuals, this is quite disappointing. There was also a bit of hand waving around the use of wide characters. Not providing complete coverage here is a true deficiency, as properly dealing with mixed string types is a reality that isn’t going to go away.

As to this being a C and C++ class, well, sort of. There are nods to C++, but like so much code out there, the C++ is tacked on the side. It really should be treated as a separate class, as is done with Java. Many recommended mitigations came with the proviso that they were not portable. I think that when you’re dealing with security, the fact that a mitigation exists outweighs its portability. These things can be abstracted to achieve portability. As someone who’s spent a fair amount of time doing cross-platform development (Unix-MacOS, MacOS-Windows, Linux-Windows), these are important issues to me.

To those who argue that securing code will make it slow, I would ask what a single security compromise would cost their company in reputation and direct monetary terms.

Would I recommend the class? Yes. There are important concepts and real-world examples on display here. I challenge anyone to take this class and not be horrified by the way most C code is written.

Could I teach this material? Definitely.

Read Full Post »

HackerRank has become a popular showcase for software developers. Its many participants have the ability to choose their language. Recently Apple’s Swift language has been added.

This is all well and wonderful but, as I’ve written about in previous posts, non-UI oriented use of Swift has not been a priority. It’s not impossible, but you have to scrounge for the bits you need to use it.

As those who have used HackerRank know, non-UI is the way of things. So, here are a few bits that make it possible to use Swift on HackerRank.

Note: In production code, one should always check when unwrapping variables. In HackerRank, input will be well-behaved and the focus is on the internal workings rather than the input mechanism.
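For contrast, a production-leaning version of the first read below might look something like this sketch, which bails out instead of force-unwrapping:

guard let line = readLine(stripNewline: true), value = Int(line) else
{
    fatalError("expected an integer on stdin")
}

print(value)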

Reading a string from the input stream

let string_value = readLine(stripNewline: true)!

Reading a single number from the input stream

let numeric_value = Int(readLine(stripNewline: true)!)!

Reading a string array from the input stream

let input = readLine(stripNewline: true)!   // read the whole line first

let stringArray = input.characters
                       .split {$0 == " "}
                       .map (String.init)

Reading a numeric array from the input stream

let input = readLine(stripNewline: true)!

let numericArray = input.characters
                        .split {$0 == " "}
                        .map (String.init)
                        .map { Int($0)! }

Reading mixed values from the input stream

let input = readLine(stripNewline: true)!

let stringArray = input.characters
                       .split {$0 == " "}
                       .map (String.init)

let iValue1 = Int(stringArray[0])!
let sValue  = stringArray[1]
let iValue2 = Int(stringArray[2])!

Writing single values to the output stream

print(foo)

Writing values in string to the output stream

print ("There are \(found) or \(expected) things")

Writing array values to the output stream

print (array.map { String($0) }.joinWithSeparator(" "))

Why not use extensions, you may ask? You can do that if you like. I thought it would be better here to show the minimal elements. And generally, HackerRank is very limited in its input and output aspects.
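If you do prefer the extension route, a minimal sketch (hypothetical names, same Swift 2 APIs as above) might look like this:

extension String
{
    // Split a whitespace-separated line into its fields
    var fields : [String]
    {
        return self.characters.split { $0 == " " }.map(String.init)
    }
}

let parts   = readLine(stripNewline: true)!.fields
let numbers = parts.map { Int($0)! }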

This should cover all the usual cases for input and output to enable you to use Swift with HackerRank. So, the next time you’re using HackerRank and you don’t see Swift as an option, be sure to ask the author to add it.

Read Full Post »

This is my third post on the Swift command line. In the first, I wrote on how to do command line input with Swift. In the second, I cleaned up the code after a year of tinkering with Swift. In this third installment, I get rid of more Objective-C.

Almost a month ago, Apple released Swift into the open source community. Pretty spiffy that. So, being the language wonk that I am, I bopped over to the Swift site and downloaded the sources. After a bit of scrounging, I discovered that there existed a standard library call to read a line from the input stream. I was a bit disappointed that I’d not seen this talked about anywhere, but that’s life.

So, I’ve updated my code sample and present it below. I do some forced unwrapping as the point is to highlight the process. Be sure to code defensively.

I’ll probably come back to this one more time once I’ve had a chance to look more into the Swift standard library so I can eliminate my untidy Objective-C dependent putString() routine.
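If I had to guess, the standard library’s print with an empty terminator would get me most of the way there. A sketch, untested against the listing below:

// A Foundation-free putString, assuming print's terminator parameter suffices
func putString (outputString : String = "")
{
    if !outputString.isEmpty
    {
        print(outputString, terminator: "")
    }
}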

//
//  swift_intput_routines.swift
//
//  Created by Charles Wilson on 9/27/2014.
//  Revised by Charles Wilson on 9/27/2015.
//  Revised by Charles Wilson on 12/21/2015.
//
//  Copyright (c) 2014 – 2015 Charles Wilson. All rights reserved.
//
//  Permission is granted to use and modify so long as attribution is made.
//

import Foundation

func putString (outputString : String = "")
{
    if !outputString.isEmpty
    {
        NSFileHandle.fileHandleWithStandardOutput().writeData((outputString as NSString).dataUsingEncoding(NSUTF8StringEncoding)!)
    }
}

func getString (prompt : String = "") -> String?
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }

    return readLine(stripNewline: true)
}

func getCharacter () -> Character
{
    let inputValue     : UInt32      = UInt32(getchar())
    var inputCharacter : Character

    inputCharacter = Character(UnicodeScalar(inputValue))

    return inputCharacter
}

func getInteger (prompt : String = "") -> Int
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }

    return (getString()! as NSString).integerValue
}

func getFloat (prompt : String = "") -> Float
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }

    return (getString()! as NSString).floatValue
}

Here’s the main to exercise it.

//
//  main.swift
//
//  Created by Charles Wilson on 9/27/2014.
//  Revised by Charles Wilson on 9/27/2015.
//  Revised by Charles Wilson on 12/21/2015.
//
//  Copyright (c) 2014 - 2015 Charles Wilson. All rights reserved.
//
//  Permission is granted to use and modify so long as attribution is made.
//

import Foundation

var name = getString("What is your name? ")!

if name.isEmpty
{
    name = "George"
    
    putString("That's not much of a name. I'll call you '\(name)'\n")
}
else
{
    putString("Your name is '\(name)'\n")
}

let age = getInteger("How old are you \(name)? ")

putString("You are \(age) years old\n")

// get a value without a prompt
let number = getInteger()

putString("\(number) is a nice number\n")

let floating = getFloat("Enter a floating point number: ")

putString("\(floating) works for me\n")

var c : Character

putString("Type stuff. Enter ^ when done.\n")

let sentinel : Character = Character(UnicodeScalar("^"))
var in_c     : Character

repeat
{
    in_c = getCharacter()
}
while ( in_c != sentinel )

putString("\n\n")
putString("bye\n")

Read Full Post »

I’m a big fan of notebooks. Whoever has the unenviable task of sorting through my stuff after I’ve made that final blog entry will be bored to tears with the piles of notebooks they’ll have to go through, assuming that they don’t simply rent a dumpster and chuck the whole pile. Not my problem though.

To be fair, the consistent use of notebooks to record a project’s designs, progress and failures does not make you a da Vinci. The world is replete with pen wielding lunatics. That being said, I believe that using notebooks (actual paper ones) helps give shape to nascent ideas. The simple act of putting pen (or super high-tech stylus) to paper (be it college rule loose leaf or archival quality) forces you to think a bit more about what you’re doing. Perhaps this is why I’ve never been much of a fan of those who believe that they can create a complex piece of software without some form of design. I’m not talking space shuttle here. A simple data flow diagram, threat model and use case statement is all I ask (the basics). These are the bare minimum.

Whiteboards are all well and fine, but unless you’ve got a dedicated scribe hanging about, what happens on the whiteboard tends to stay on the whiteboard. Besides, as a clever person once said, “I couldn’t reduce it to the freshman level. That means we don’t really understand it.”

At work, my notebooks are a mix of the mundane and the technical. I track meetings, conversations, designs and random thoughts. I find this helpful when trying to make sense of all the balls in the air and the occasional ones that end up on the ground. At the end of each year, I participate in the great year-end summary exercise. For this, my notebooks are invaluable. This year was no exception. What was exceptional were the numbers.

  • 193 code reviews (other people’s)
  • 81 user stories / bugs (evaluation, implementation, review)
  • 24 technical documents (papers, presentations and the like)

Once I’d finished doing my sums, I went back to my 2014 summary. Not even close. I’m not really sure why.

What did I do in 2015?

I spent a lot of time in some very specific areas:

  • writing security code
  • developing threat models
  • teaching Visual Studio to Linux developers
  • improving code quality through use of static analysis and compilers
  • coordinating open source and third-party library use

What did I learn in 2015?

  • developers expect a language to work based on the platform they learned on and not the spec (which no one ever reads)
  • people really, really don’t understand the difference between authentication and authorization
  • developers don’t see security as their problem
  • developers don’t trust static analysis (The code has never failed, so why is that change important?)
  • talking about cyclomatic complexity metrics is like trying to explain Klingon opera to a Star Wars fan

What’s up for 2016?

Who can say? Security doesn’t look like a bad guess. And it really doesn’t matter how fast machines get or how much storage they have. Programs will grow to consume all available resources.

The only certainty is that I’ll be filling another set of notebooks.

Read Full Post »

One of the interesting things that happens to me when I attend events like yesterday’s PDX Summit III is that it gets me thinking about things in a new and more connected way. For many who know me, this will be perceived to mean that for some indeterminate length of time I’ll be a bit more random than usual.

To misappropriate the Bard, “There are more things in heaven and earth, Horatio, than are accessible from your contact list.”

This morning I started reading Galileo’s Telescope and it got me thinking in terms of the big data / open source elements brought up at the summit. Before you injure your neck doing that head tilt puzzled look thing that dogs do, let me explain.

I have a great affinity toward data visualization. I could probably press my own olive oil with the stack of books I’ve got on the subject. So when I saw that Galileo had written a text entitled Sidereus Nuncius, my first thought was, “If you took nuncius (message) and pushed it forward into present-day English, you’d end up at announce, denounce and enounce. What if you pushed it backward in time? How about sideways toward French? If we visualized this map, what would it look like? How would we navigate it?”

I’ve always found it fascinating how speech informs thought. We live in a society where using ‘little words’ is encouraged in an effort to be more inclusive. The problem is that these ‘big words’ aren’t big for the sake of being big. They encapsulate entire concepts and histories. We talk about ‘the big picture,’ ‘big data,’ and the like, but in our attempt to make it all accessible, all we seem to be doing is creating a meaningless assemblage of words and acronyms that, at the end of the day, have the precision of a ten pound sledgehammer in an omelet shop.

What if, instead of constantly reducing our communication to the green card / red card of sports, we could point to the 21st century version of Korzybski’s Structural Differential and literally be on the same page? How would language acquisition be improved, for both native and foreign languages, if you could build understanding based on the natural evolution of the language’s concept basis? What would the impact on science be if we could visualize past crossover points between disciplines? How much more readily would students learn the concepts of computer science and engineering if they could put present day abstractions into the context of past constraints rather than simply memorizing a given language, framework or operating system’s implementation?

Yeah, this is one of those posts that has no conclusion. It’s a digital scribble intended to be a jumping off point for future endeavors.

Read Full Post »

Update: Apple released the Swift source. My update based on what I learned is here.

A bit over a year ago, I wrote a post about how I wanted to do command line input in Apple’s then new Swift language.

The post was pretty well received and still gets a hit a week.

On the heels of finishing the WWDC 2015 session videos, I decided to start tinkering a bit more with the language. Toward this end, I’ve been re-implementing old command line games from the 1970s that were written in BASIC. This forced me to actually go back to my command line input code. Dutifully allowing Xcode to update it to Swift 2 syntax, I noticed that it relied way too much on Objective-C bits. I also didn’t like the way it was handling the translation from string to numeric. So, I’ve updated it.

Yes, I should just put it up on GitHub. Maybe later. For now, just cut and paste it.

//
//  swift_intput_routines.swift
//
//  Created by Charles Wilson on 9/27/2014.
//  Revised by Charles Wilson on 9/26/2015.
//
//  Copyright (c) 2014 - 2015 Charles Wilson. All rights reserved.
//
//  Permission is granted to use and modify so long as attribution is made.
//

import Foundation

func putString (outputString : String = "")
{
    if !outputString.isEmpty
    {
        NSFileHandle.fileHandleWithStandardOutput().writeData((outputString as NSString).dataUsingEncoding(NSUTF8StringEncoding)!)
    }
}

func getString (prompt : String = "") -> String
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }
    
    var inputString : NSString = ""
    let data        : NSData?  = NSFileHandle.fileHandleWithStandardInput().availableData
    
    if ( data != nil )
    {
        inputString = NSString(data: data!, encoding: NSUTF8StringEncoding)!
        inputString = inputString.substringToIndex(inputString.length - 1)
    }
    
    return String(inputString)
}

func getCharacter () -> Character
{
    let inputValue     : UInt32      = UInt32(getchar())
    var inputCharacter : Character
    
    inputCharacter = Character(UnicodeScalar(inputValue))
    
    return inputCharacter
}

func getInteger (prompt : String = "") -> Int
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }
    
    return (getString() as NSString).integerValue
}

func getFloat (prompt : String = "") -> Float
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }
    
    return (getString() as NSString).floatValue
}

I also updated the test routine as I wasn’t checking the float bits.

//
//  main.swift
//
//  Created by Charles Wilson on 9/27/2014.
//  Revised by Charles Wilson on 9/26/2015.
//
//  Copyright (c) 2014 - 2015 Charles Wilson. All rights reserved.
//
//  Permission is granted to use and modify so long as attribution is made.
//

import Foundation

var name = getString("What is your name? ")

if name.isEmpty
{
    name = "George"
    
    putString("That's not much of a name. I'll call you '\(name)'\n")
}
else
{
    putString("Your name is '\(name)'\n")
}

let age = getInteger("How old are you \(name)? ")

putString("You are \(age) years old\n")

let number = getInteger()

putString("\(number) is a nice number\n")

let floating = getFloat("Enter a floating point number: ")

putString("\(floating) works for me\n")

var c : Character

putString("Type stuff. Enter ^ when done.\n")

let sentinel : Character = Character(UnicodeScalar("^"))
var in_c     : Character

repeat
{
    in_c = getCharacter()
}
while ( in_c != sentinel )

putString("\n\n")
putString("bye\n")

Read Full Post »

It took me a bit longer than I’d’ve liked, but finishing all the Apple WWDC 2015 videos (110-ish) in under three months is pretty satisfying.

I’m impressed at the speed with which Apple is executing the change of primary development language from Objective-C to Swift. I expected three years, but it looks like they’ll have things wrapped up in two. This is no mean feat. I’ve now experienced three core language shifts within Apple. The first was from the Apple ][ 6502 assembly to the Macintosh 68000 assembly / Pascal hybrid. The second was the move to C. This was particularly tedious for those of us attempting to keep both camps happy. You haven’t lived until you’ve dealt with byte-prefixed, null-terminated strings. With the adoption of NextStep and the BSD/Mach micro-kernel came the transition to Objective-C. I’ll admit, I made fun of Objectionable-C. By that time, I’d spent the better part of a decade using C++. A bit of snobbery on my part. Those two children of C have fundamentally different views of the world. I cut my teeth on iOS using Objective-C and appreciated its extensibility when compared with C++. But, it didn’t have the base that C++ did. A billion devices later, well, that’s a different story. Now we have Swift. I believe that it represents the next generation of language. Not object-oriented or message-oriented, but protocol-oriented.
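To give a flavor of what protocol-oriented means in practice, here’s a tiny sketch (hypothetical types) using the protocol extensions introduced in Swift 2:

protocol Describable
{
    var name : String { get }
}

extension Describable
{
    // Default behavior lives on the protocol, not in a base class
    func describe () -> String
    {
        return "This is \(name)"
    }
}

struct Widget : Describable
{
    let name : String
}

print(Widget(name: "a widget").describe())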

The number of sessions dedicated to tools was impressive as always. As was the quality of the presentations. Thankfully, we were spared the pain of having Apple’s French speakers presenting in English on graphics, with the word banana coming up so often that one would think there was a drinking game just for that session.

I’m looking forward to tinkering with the WatchOS bits. Those sessions are probably a staple of developers.

Props go out to the Xcode developers for continuing to bring a quality product to the table. An AirPlay view for the simulators would be nice (hint, hint). The sessions dedicated to the profiling, power and optimization of code are worth watching multiple times.

As is the case with many mature elements of the operating systems, security had fewer explicit sessions. Instead, security was a pervasive theme along with privacy.

One cannot talk about this year’s sessions without mentioning the brilliant leveraging of the synthesis of scale and privacy to create ResearchKit.

The care that Apple puts into the sample code is truly inspiring. Having suffered through hundreds of pages of AOCE documentation, today’s entry into Apple development seems easy. Easy on the individual component level at least. There is now more that one would have to learn in order to create software from beginning to end with the level of quality and feature richness that the world has come to expect from applications on the Apple platforms.

Leaving the best to last, I’ll reflect on an issue that’s always bothered me with the transition strategy that Apple has used in the past. It’s not so much that I didn’t like the solution they came up with to deal with transitioning from one methodology to another. Or that I had a better answer; I didn’t. The price always seemed rather steep to me. I speak of binaries with multiple code and data resources used to allow a user to download a single image and run it anywhere. This was used in the transition from 68000-based machines to PowerPC ones and again when moving to Intel’s architecture. On iOS, we’ve seen the number of duplicate resources steadily climb as the screen geometries and densities have increased. The thing of which I speak is the double-headed axe of app thinning and on-demand resources. The ability to release an application to the store with all the bits for all the supported devices and be able to download only those that will actually be usable on a given device is tremendous. Couple that with a way to partition an application in such a way that only the resources within a user’s window of activity are present on the device and you have a substantial savings in both time and memory. Well done.

It’s been many years now since I’ve been able to attend WWDC in person and given the popularity of the conference, it’s not likely that I’ll be going any time soon. I’m content for the moment to be able to access all the content, if not the people, that someone attending would be able to. I look forward to next year’s sessions.

Read Full Post »

The future has always been a contentious place. I should know, I’ve spent most of my career there.

We’ve come a long way from the idea that the world only needed half a dozen computers [Thomas Watson, Jr.]. We now have so many computers that we’ve managed to exhaust the 32-bit IPv4 address space. The solution embodied in IPv6 creates other issues, but that’s a topic for another post.

The interesting part of working in the computer domain is that feeling of being one step ahead of the langoliers. It can be at once exciting and terrifying. It is not for the faint-of-heart or those who believe that the need for learning stopped after their last final exam.

Lately, I’ve been watching an oddly converging divergence of ideas. One head of this hydra follows the path of the ever bigger. Bigger data sets, bigger pipes, bigger computations and, unfortunately, bigger OS’s. A second head constantly works toward making the whole morass vanish. I remember when I began to see fewer watches as people realized that their phone could do that. That emergency camera that the insurance company tells you to keep in your car? Answering machines? Travel alarm clocks? MP3 players? Portable DVD players? I would really hate to be in Garmin’s consumer division. Another head wants to be everywhere. It’s no longer sufficient to be that operation you could run out of a garage. Now, we have to have stuff, both artifact and intangible, available everywhere. Remember when the fastest way to see a first-run Hollywood film overseas was to be on a military base? Speaking of military bases, you may have noticed that people are recognizing that security is important. The final head is fixated on why computers are this fixed assemblage of hardware. What if I really do need 20TB of memory and a 16K node mesh?

With all this “progress” going on, it’s all that the poor, beleaguered software developers can do just to keep up with one of these. But, that’s okay, right? These are all unrelated. Right?

Well, we’ll get to that. For the moment, let’s see if you and I think the same about who’s doing what.

Bigger

Amazon and Google are both doing cloud, but the company I find interesting here is Microsoft. Azure takes the problem of software at scale and reduces it to some fundamental building blocks (compute, storage, database and network). Operating system? We don’t need no stinking operating system! For those of you who remember what an IBM 1130 is, you’ll love Azure. It’s like driving a TR6 on the PCH at 80 mph (you could get from LA to SF in, like, I don’t know, five-ish hours). The world is yours until you crash. [Disclaimer: I have never driven a TR6.] Want more CPUs or storage or network? Add more.

Invisible

The battery in my first mobile is heavier than my current phone. Apple’s biggest coup isn’t that it creates ever smaller technologies. They represent the technological equivalent of Michelangelo, who famously remarked about the process of sculpting his David:

It’s simple. I just remove everything that doesn’t look like David.

When Apple introduced the iPhone, developers were all torches and pitchforks. This wasn’t how things were done. Where’s the disk? How do I see this other application’s files (“app” was still a trending meme)? Apple took away bits that we were accustomed to, but didn’t actually need. Most of the time. Sometimes they pulled an Apple Round Mouse. But mostly they drove development in a direction that only those of us who have been given the task of making a wireless keyboard that can run for six months on a pair of AA batteries understood: how to write code that wouldn’t make the device die in under four hours. To a large extent this was the evidence of Gates’ Law. We are now on the cusp of the Apple Watch, which promises to hide the technology behind the technology even further. As someone who uses Apple Pay on a regular basis, I’m looking forward to seeing how the Watch does.

Everywhere

This is where the divergence converges. Azure instances can change locality temporally. As a result, your customers access servers in their vicinity. The user interfaces of software written for MacOS or iThings are multi-language and multi-locality (units) capable by default. Unlike Azure, iCloud isn’t so much a platform for developers as it is a vast warehouse of data. Apple’s recent announcement of ResearchKit has already shown how much impact an everywhere technology can have.

Secure

As one who has had the distinct displeasure of pulling his company’s internet connection on 2 November 1988, I believe that security is important. My master’s thesis was focused on computer viruses. I deal with the failure of developers to apply sound security practices to open source and commercial software on an ongoing basis.

For a really long time, no one really took securing the computer all that seriously.

Now, if you look at both Microsoft and Apple, you see security being taken seriously. On iOS, it’s baked in. On Windows, it’s half-baked. Yes, that’s a bit of snark. Security shouldn’t be an option. In iOS, if an application wants access to your contact list, it must declare that it wants to be able to access those APIs. The first time it attempts to access them, the user is prompted to allow the access. At any time, the user can simply revoke that access. Every application is sandboxed and credentials are held in a secure store. On Windows, security is governed by policy. These policies are effectively role-based. This is fine as far as it goes, but like the days of old, if you’re in the wrong role at the wrong time running the wrong application (virus), you can deep fry any system. Hence my comment about it being half-baked.
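The shape of that declaration-plus-prompt flow, sketched against the iOS 9 Contacts framework (error handling and Info.plist plumbing omitted):

import Contacts

// The first call triggers the system prompt; the user can revoke access later in Settings
CNContactStore().requestAccessForEntityType(.Contacts) { granted, error in

    if granted
    {
        // safe to read contacts here
    }
}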

Do we seriously believe that banks should be running on an operating system that isn’t built from the ground up around security?

Fixed

This final hydra head is perhaps the most interesting to me as it holds the most promise. It represents the hardware analog to Azure. Today, you may be able to configure an Azure instance, but that configuration only goes so far. Look back to the dim days (which for some reason or other were in black and white, even though we had color movies as far back as 1912). Back then, if you wanted more oomph, you ordered it (and an additional power drop). Now you are greatly constrained. Remember that 20TB system I mentioned earlier? Why can’t I get one? Because our manufacturing model is based on scale. This has been a good thing. It’s made it possible for me to have a laptop that doesn’t weigh 16 lbs with a run time of 2 hours. Isn’t that great? Ask a left-handed person sometime. As the number of actual computer manufacturers dwindles, we’re seeing more white box systems cropping up. These are being used to create the application clouds. But at a time when power is real money, how much are we wasting in resources to access the interesting bits of these boxes? More and more we see the use of storage arrays. All well and good. So, where are the processor arrays? The graphics arrays? What if I need 12 x 5K monitors? The people who crack this nut will make a great number of people very happy.

The Future Won’t be Brought to Us by AT&T

Once AT&T was the go-to place for the future of the future. Not any more. The future is far bigger than anyone imagined it to be and certainly far larger than any one company is capable of providing.

The question is, how do we identify the people who are ready to not only build that future, but to build it out?

Read Full Post »
