Archive for the ‘Apple’ Category

There have been numerous times when a new technology has led to a major shift in how we think about building computers and software. We are about to see one of those shifts. At least that’s what I’ve come to believe.

Let’s pop into the Wayback and set our sights on the early ’80s. At that time computers had one processor. Hardware-based floating point was the domain of mainframes and minicomputers. Communications between computers existed only for the well-heeled. Security meant keeping your computer locked up.

Life was pretty simple. If you wanted something done, you did it yourself. When software was shared it was done via the US Postal Service on 9-track tape.

Fast forward to the early ’90s. Desktop computers were fairly common. Uniprocessors still ruled. Hardware floating point was now readily available. The internet had just been introduced. Gopher was slowly being displaced by the combination of FTP and web search engines. Security issues happened, but security was, on the whole, a black art practiced by a small number of individuals, requiring skills that you needed to develop yourself.

It was around this time that I was casting about for a thesis topic for my Master’s in Electrical and Computer Engineering. I took on the topic of virus-resistant computer architectures (AARDVARK). Did I mention that it was 1992? Just researching the state of the art in computer viruses was a huge task. No Google, Amazon or ACM online article search. As to the other side of the equation, the how and why of hacking, well, I’ll leave that for another time.

By the time I was done, I’d proposed a computer architecture with separate instruction and data spaces, where the application’s binary was encrypted and the key, loaded during a separate boot sequence, was stored in a secure enclave accessible only to the binary segment loader. Programs were validated at runtime. I conjectured that such a computer would be ideal for secure use and could be built within 18 months.

Everyone thought it was a great design and the school even worked with me to apply for a patent. The US Patent Office at that time didn’t get it. After five years we abandoned the effort. I was disappointed, but didn’t lose sleep over it.

Fast forward to 2012 when Apple released the iOS 6 security guidelines. Imagine my amusement when I saw echoes of AARDVARK. It’s all there: signed binaries, secure enclave, load validation. Good on them for doing it right.

Let’s step back and consider the situation. Computers are really small. They have integrated hardware floating point units, multiple processors and now, with the advent of this generation of iPhone, hardware-based security. The internet has gone global. Google indexes everything. Open source is a thing. So, we’re good?

Not so much. The Apple iPhones are an oasis in a vast desert of security badness. Yes, IPv6 has security goodness available, but IPv4 still rules. Secure programming practices are all but non-existent. Scan and contain is the IT mantra. Threat modeling is an exercise for the academic.

This brings us to last year. Microsoft announced Azure Sphere. Application processor, dual-MCU, networking processor, security processor. All firewalled. All in the same package. The provided OS was a secured version of Linux. Each device is registered so only the manufacturer can deploy software, push updates and collect telemetry via the Azure cloud.

There must be a catch. Well, as you know, there’s no such thing as a free burrito.

The first device created to the Azure Sphere specification is the Mediatek MT3620. And no, you can’t use it for your next laptop. The target is IoT. But, there’s a lot of horsepower in there. And there’s a lot of security and communications architecture that developers won’t have to build themselves.

Microsoft is touting this as the first generation. Since they started with Linux and ARM, why wouldn’t you want something with more power for systems that have security at their core? If Microsoft approached this as Apple has the iPhone, iPad, AppleTV and Apple Watch, why shouldn’t we expect consumer computers that aren’t insecure?

But will I be able to use them for software development? That’s a tricky question.

When I envisioned AARDVARK, my answer was no. That architecture was designed for end-user systems like banks and the military. You can debug a Sphere device from within Visual Studio, so, maybe it’s doable. You’d need to address the issue of a non-isomorphic ownership model.

Are users willing to bind their device to a single entity? Before you say no, consider how much we’ve already put in the hands of the Googles and Facebooks of the world. Like it or not, those are platforms. As are all the gaming systems.

Regardless, I believe that we will end up with consumer compute devices based on this architecture. Until then we’ll just have to watch to see whether the IoT sector gets it and by extension the big boys.

Either way, the future is Sphere.

Read Full Post »

The book Creative Selection by Ken Kocienda was recommended to me. This is unusual in that I’m typically the one recommending books to others.

This book follows the creation of the iPhone as seen through the eyes of the author, who was a software developer at Apple at the time. There are many interesting aspects to his story, to his interactions with the movers and shakers within Apple including eventually with Steve Jobs, and to the creative dynamic that made the iPhone possible at all.

Over the years, I’ve had the opportunity to participate in the world of Apple’s alpha and beta hardware, as well as that of many other technology firms as an engineer at firms working with Apple. I’ve also been on the other side of the fence, designing, developing and managing hardware and software, and their associated beta programs. It’s a challenging environment.

I’ve also had the opportunity of presenting my work to those holding the levers of power who had, shall we say, a less than gentle manner of showing disapproval of those things not meeting their standard of quality. There is an interesting combination of exhilaration and dread surrounding such presentations.

Elaborating on the details of the book in this post wouldn’t serve to encourage the would-be reader. If you are a follower of the history of Apple or technology, you’ll find things that will expand your view of the period and the dynamics surrounding it. You should also come away with a bit of insight into how Jobs looked at features. At least within the scope of the work that Kocienda presented him.

As a first outing, Kocienda does a reasonable job of painting a picture of the time and place that was Apple before they changed the world of mobile phones. The book is illustrated, which is unusual in a day and age of contemporaneous photography. I can’t imagine there not being numerous photos of the proto-iPhone during its gestation. The pace of the book is decent and he conveys a sense of presence in the narrative. I felt that it could have been tighter, although that would have reduced the already roughly 200 pages of prose even further.

All-in-all, the book is well supported by references and is approachable by those not of the software tribe. You should be able to dispatch this one in a few hours. It’ll be joining my other volumes on the history of technology of this period.

 

Read Full Post »

A quick check of my iOS app Wumpus Hunter Neo-classic shows me that I’ve got some updates to make before iOS 11 releases. There have been some changes to the way the UI handles scene transitions. The app itself seems to be otherwise behaving itself.

Read Full Post »

One of the challenges in learning a new language, be it human or computer, is learning to think in the language. Learning Swift is no different.

I’ve been spending the past several months working on the first iOS (iPhone) app that I’ll be releasing through the Apple app store. I’m writing it entirely in Swift 3, the latest version of the language. Yesterday, I was looking over the code to see if there were any places where I’d been thinking in a non-Swift fashion. This moment of reflection was triggered by my viewing of a very recently released class in Swift from one of the major online education sites. I was disappointed that the instructor was using an idiom that I knew was easily simplified in Swift. My disappointment made me wonder if I had similar issues.

The following is actual code from my upcoming app. I had originally created its core in C as an extensible example for teaching that language. The setup is as follows:

  • the map is an array of connections from each room
  • the number of connections from each room is fixed
  • we are explicitly tracking the number of a given type of hazard

We want a function that returns true if there is a hazard in one of the connecting rooms. Here’s the original code:

var hazardNear = false
        
if hazardCount > 0
{
    for nextRoom in 0 ..< numberOfConnections
    {
        if hazardRooms.contains(exits[room][nextRoom])
        {
            hazardNear = true
                    
            break
        }
    }
}
        
return hazardNear

Pretty conventional C-esque stuff. The first thing that jumped out at me was the reliance on knowing the number of connections. Swift arrays are true collections. Let’s treat them as such.

var hazardNear = false
        
if hazardCount > 0
{
    for nextRoom in exits[room]
    {
        if hazardRooms.contains(nextRoom)
        {
            hazardNear = true
                    
            break
        }
    }
}
        
return hazardNear

When viewed this way, it’s obvious that we’re just filtering the collection based on a condition. So, just do that.

var hazardNear = false
        
if hazardCount > 0
{
    hazardNear = exits[room].filter{hazardRooms.contains($0)}.count != 0
}
        
return hazardNear

We’re filtering the array of rooms for ones that contain the hazard and checking for a non-zero count. But this is still a bit clunky. Let’s examine what we’re really asking.

var hazardNear = false
        
if hazardCount > 0
{
    hazardNear = !exits[room].filter{hazardRooms.contains($0)}.isEmpty
}
        
return hazardNear

It’s far clearer to simply ask directly whether the filtered array is non-empty. What’s left is one final simplification.

return hazardCount > 0 && !exits[room].filter{hazardRooms.contains($0)}.isEmpty

One might ask whether having an explicit count is necessary, since the hazard array has a count of its own. True. But the hazard array is built based on the number of hazards, so I still need that value to be around.
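As an aside, the same test can be written with contains(where:), which stops at the first match rather than building a filtered array. Here’s a sketch with the surrounding function made explicit; the signature and parameter list are my own framing, since only the body appears in the app.

// Hypothetical wrapper; the names mirror the snippets above.
func isHazardNear(room: Int, exits: [[Int]], hazardRooms: [Int], hazardCount: Int) -> Bool
{
    // contains(where:) short-circuits on the first hazard found,
    // so no intermediate array is built the way filter does.
    return hazardCount > 0 && exits[room].contains(where: { hazardRooms.contains($0) })
}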

Overall, I’m fairly pleased with the results of the exercise. In the end, the code is much clearer in intent.

Read Full Post »

One of the things I’ve always liked about Apple is the way they revisit and revise their tools. Although there is comfort in logging on to any *nix system and knowing that ed or vi or gcc will be there, I believe that the lack of investment in standard tools hurts more than the comfort of knowing that anyone who has written code since 1980 will still be at home helps. This is the technical equivalent of having a rotary gang switch to change channels on your flat-screen TV because that’s what your grandparents were used to and they still need to use the TV. You know what? They managed to adapt to there being a wireless remote and 500 channels.

Swift is probably the most visible of today’s Apple tool changes. As we fast approach the release of Swift 3.0, it’s good to check in and see what the future of software development of Apple products is going to be like. This is one of those growth and pain stories.

From the language standpoint, Swift 3.0 brings a greater level of consistency in terms of behavior and style. Compared to other languages, new developers will have fewer exceptions to keep track of. For those following the latest Stanford iOS class on iTunes U (CS193P) and thinking that when they’re done they’ll have learned Swift: well, sorry, but in a few months you’ll need to update all your code.

One of the biggest changes in Swift 3.0 is the way that the Apple OS-specific libraries are handled. In the same way that C++ and Python aren’t bound to a particular operating system, neither is Swift. You don’t need to be on MacOS or iOS to use Swift. The problem is that most applications that interact with people need to be bound to some kind of operating environment. On Windows that’s typically done by using .Net. With Java, you’ve got a laundry list of environments (EE, SE, ME). For Swift, you’re binding to Apple’s plenitude of support libraries. All of which have, for the past decade or so, been based on Objective-C. The great thing about new languages is that they incorporate lessons learned. The devil of them is that vast body of existing code they need to interface to.

In the case of MacOS and iOS, the libraries make a lot of sense if you’re writing code in 2004. We’ve learned a lot about languages and frameworks and operating systems since then. But, as with all things, there are only so many hours in the day. Inevitably, you build a core around the language and, if you’re nice, create bridges (shims) to the external bits. If you’re Apple, you’re usually particularly nice and create toll-free bridges (ones built for you). If you’re Microsoft, you have the joy of multiple layers of conversions to look forward to. Eventually, as the language evolves, you get to the point where the rules you gave people about how the bridging worked no longer align with the language. That’s where we are today with Swift 3.0.

So, by now, you’re saying to yourself that this must be really bad to merit such a long preamble. Sort of. I came across a nifty article on what’s new in Swift 3.0 and decided to download the May 9th Swift trunk [see update below] and see for myself. There’s a nifty video from the same person. He does a good job of covering the changes. Remember my earlier reference to the Stanford class? I’ve been following it myself. I thought, “let’s see how what’s being taught tracks.” For my first attempt I chose a small app that uses gestures to tinker with a face drawn to the screen. Now as those of you familiar with iOS will be aware, there are two ways of implementing UI binding to code. You can either write it in code or use Interface Builder. The thing about using IB is that it handles a bunch of tedious details for you. It also hides stuff and is the source of weird errors if you screw things up.

In implementing this app, I of course used IB. And you know what? I crashed and burned. After installing the new tool chain and restarting Xcode, my modest app showed a variety of errors stemming from the changes in Swift 3.0, all of which were easily corrected thanks to the very clear error messages and suggested remediation provided. The app launched and drew the baseline image, but any gesture failed miserably. Maybe it was the project. I created a new project with nothing but a raw view, added a tap recognizer in IB and wired it up to the view with a print diagnostic. It happily built, but also crashed on tap.

2016-05-26 15:16:31.150 f2[21677:736529] -[f2.ViewController tap:]:
  unrecognized selector sent to instance 0x7ffe59d31eb0
2016-05-26 15:16:31.156 f2[21677:736529] *** Terminating app due to
  uncaught exception 'NSInvalidArgumentException', reason:
  '-[f2.ViewController tap:]: unrecognized selector sent to instance
  0x7ffe59d31eb0'

Creating the action in code worked. To be fair, this tool chain is coming from the Swift site and not Apple. I then checked to see what happened if I tossed in a vanilla button. Same problem. That wouldn’t do. The answer turned out to be held in the AppDelegate. As it happens, the new tool chain generated a warning for each of the standard boilerplate functions.

/Users/charleswilson/Documents/projects/2016_CS193P/FaceIt/FaceIt/
  AppDelegate.swift:17:10: Instance method
  'application(application:didFinishLaunchingWithOptions:)' nearly matches
  optional requirement 'application(_:didFinishLaunchingWithOptions:)' of
  protocol 'UIApplicationDelegate'

The provided code reads:

func application(_ application: UIApplication,
  didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?)
  -> Bool

It seems that the functions need additional decoration. The warning notes that the function nearly matches. Allowing FixIt to do its thing yielded:

@objc(application:didFinishLaunchingWithOptions:)
  func application(_ application: UIApplication,
  didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?)
  -> Bool

Apparently, additional decoration is required to bridge properly. So I went back to the standard IB-added recognizer. Here’s the action.

@IBAction private func tap(sender: UITapGestureRecognizer)

Once the objc decoration is added, all is well.

@IBAction @objc(tap:) private func tap(sender: UITapGestureRecognizer)

Not a big deal to make the correction once you realize what’s required. Hopefully, others will take this as a heads-up and not get bogged down as they update their code.
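For reference, here is roughly what the code-only wiring mentioned earlier looks like. This is a sketch; the class name and the print diagnostic are illustrative rather than lifted from the project above.

import UIKit

class TapViewController: UIViewController
{
    override func viewDidLoad()
    {
        super.viewDidLoad()

        // Wire the recognizer up in code rather than in IB.
        let recognizer = UITapGestureRecognizer(target: self,
                                                action: #selector(TapViewController.tap(_:)))
        view.addGestureRecognizer(recognizer)
    }

    // The explicit @objc selector name mirrors the IB fix shown above.
    @objc(tap:) func tap(_ sender: UITapGestureRecognizer)
    {
        print("tap")
    }
}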

Overall, I have a high degree of confidence in Swift 3.0; the changes make the language better. So, go get it yourself and see the future of Apple software development.

2016-06-05 Update

I pulled the May 31 Swift trunk with the same results. I’d imagine that Apple will clean this up really soon as WWDC is just around the corner.

2016-06-09 Update

I pulled the June 6 Swift trunk and got the same results. Cutting it close for WWDC.

Read Full Post »

HackerRank has become a popular showcase for software developers. Its many participants have the ability to choose their language. Recently Apple’s Swift language has been added.

This is all well and wonderful, but as I’ve written about in previous posts, non-UI oriented use of Swift has not been a priority. It’s not impossible, but you have to scrounge for the bits you need to use it.

As those who have used HackerRank know, non-UI is the way of things. So, here are a few bits that make it possible to use Swift on HackerRank.

Note: In production code, one should always check when unwrapping variables. In HackerRank, input will be well-behaved and the focus is on the internal workings rather than the input mechanism.
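For contrast, a defensive version of a numeric read might look something like the following sketch (the names are illustrative); the snippets that follow skip these checks deliberately.

// Read an integer without force unwrapping.
if let line = readLine(stripNewline: true)
{
    if let numeric_value = Int(line)
    {
        // use numeric_value here
        print(numeric_value)
    }
}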

Reading a string from the input stream

let string_value = readLine(stripNewline: true)!

Reading a single number from the input stream

let numeric_value = Int(readLine(stripNewline: true)!)!

Reading a string array from the input stream

let stringArray = input.characters
                       .split {$0 == " "}
                       .map (String.init)

Reading a numeric array from the input stream

let numericArray = input.characters
                        .split {$0 == " "}
                        .map (String.init)
                        .map { Int($0)! }

Reading mixed values from the input stream

let stringArray = input.characters
                       .split {$0 == " "}
                       .map (String.init)

let iValue1 = Int(stringArray[0])!
let sValue  = stringArray[1]
let iValue2 = Int(stringArray[2])!

Writing single values to the output stream

print(foo)

Writing values in string to the output stream

print ("There are \(found) or \(expected) things")

Writing array values to the output stream

print (array.map { String($0) }.joinWithSeparator(" "))

Why not use extensions, you may ask. You can do that if you like. I thought it would be better here to show the minimal elements. And generally, HackerRank is very limited in its input and output aspects.
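Putting a few of the pieces above together, a typical solution skeleton might look like this. The problem itself is made up for illustration: the first line holds a count, the second line holds that many integers, and we print their sum.

// Hypothetical end-to-end example assembled from the snippets above.
let count = Int(readLine(stripNewline: true)!)!

let values = readLine(stripNewline: true)!
                 .characters
                 .split {$0 == " "}
                 .map (String.init)
                 .map { Int($0)! }

// prefix(count) guards against any trailing junk on the line.
let total = values.prefix(count).reduce(0, combine: +)

print(total)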

This should cover all the usual cases for input and output to enable you to use Swift with HackerRank. So, the next time you’re using HackerRank and you don’t see Swift as an option, be sure to ask the author to add it.

Read Full Post »

This is my third post on the Swift command line. In the first, I wrote on how to do command line input with Swift. In the second, I cleaned up the code after a year of tinkering with Swift. In this third installment, I get rid of more Objective-C.

Almost a month ago, Apple released Swift into the open source community. Pretty spiffy that. So, being the language wonk that I am, I bopped over to the Swift site and downloaded the sources. After a bit of scrounging, I discovered that there existed a standard library call to read a line from the input stream. I was a bit disappointed that I’d not seen this talked about anywhere, but that’s life.

So, I’ve updated my code sample and present it below. I do some forced unwrapping as the point is to highlight the process. Be sure to code defensively.

I’ll probably come back to this one more time once I’ve had a chance to look more into the Swift standard library so I can eliminate my untidy Objective-C dependent putString() routine.
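One possibility worth exploring on that front (a sketch only, not what the code below uses) is to lean on print’s terminator parameter, which avoids Foundation entirely:

// A possible Objective-C-free putString: print with an empty
// terminator suppresses the trailing newline. Not used below.
func putString (outputString : String = "")
{
    if !outputString.isEmpty
    {
        print(outputString, terminator: "")
    }
}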

//
//  swift_intput_routines.swift
//
//  Created by Charles Wilson on 9/27/2014.
//  Revised by Charles Wilson on 9/27/2015.
//  Revised by Charles Wilson on 12/21/2015.
//
//  Copyright (c) 2014 – 2015 Charles Wilson. All rights reserved.
//
//  Permission is granted to use and modify so long as attribution is made.
//

import Foundation

func putString (outputString : String = "")
{
    if !outputString.isEmpty
    {
        NSFileHandle.fileHandleWithStandardOutput().writeData((outputString as NSString).dataUsingEncoding(NSUTF8StringEncoding)!)
    }
}

func getString (prompt : String = "") -> String?
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }

    return readLine(stripNewline: true)
}

func getCharacter () -> Character
{
    let inputValue     : UInt32      = UInt32(getchar())
    var inputCharacter : Character

    inputCharacter = Character(UnicodeScalar(inputValue))

    return inputCharacter
}

func getInteger (prompt : String = "") -> Int
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }

    return (getString()! as NSString).integerValue
}

func getFloat (prompt : String = "") -> Float
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }

    return (getString()! as NSString).floatValue
}

Here’s the main to exercise it.

//
//  main.swift
//
//  Created by Charles Wilson on 9/27/2014.
//  Revised by Charles Wilson on 9/27/2015.
//  Revised by Charles Wilson on 12/21/2015.
//
//  Copyright (c) 2014 - 2015 Charles Wilson. All rights reserved.
//
//  Permission is granted to use and modify so long as attribution is made.
//

import Foundation

var name = getString("What is your name? ")!

if name.isEmpty
{
    name = "George"
    
    putString("That's not much of a name. I'll call you '\(name)'\n")
}
else
{
    putString("Your name is '\(name)'\n")
}

let age = getInteger("How old are you \(name)? ")

putString("You are \(age) years old\n")

// get a value without a prompt
let number = getInteger()

putString("\(number) is a nice number\n")

let floating = getFloat("Enter a floating point number: ")

putString("\(floating) works for me\n")

var c : Character

putString("Type stuff. Enter ^ when done.\n")

let sentinel : Character = Character(UnicodeScalar("^"))
var in_c     : Character

repeat
{
    in_c = getCharacter()
}
while ( in_c != sentinel )

putString("\n\n")
putString("bye\n")

Read Full Post »

Update: Apple released the Swift source. My update based on what I learned is here.

A bit over a year ago, I wrote a post about how I wanted to do command line input in Apple’s then new Swift language.

The post was pretty well received and still gets a hit a week.

On the heels of finishing the WWDC 2015 session videos, I decided to start tinkering a bit more with the language. Toward this end, I’ve been re-implementing old command line games from the 1970s that were written in BASIC. This forced me to actually go back to my command line input code. Dutifully allowing Xcode to update it to Swift 2 syntax, I noticed that it relied way too much on Objective-C bits. I also didn’t like the way it was handling the translation from string to numeric. So, I’ve updated it.

Yes, I should just put it up on GitHub. Maybe later. For now, just cut and paste it.

//
//  swift_intput_routines.swift
//
//  Created by Charles Wilson on 9/27/2014.
//  Revised by Charles Wilson on 9/26/2015.
//
//  Copyright (c) 2014 - 2015 Charles Wilson. All rights reserved.
//
//  Permission is granted to use and modify so long as attribution is made.
//

import Foundation

func putString (outputString : String = "")
{
    if !outputString.isEmpty
    {
        NSFileHandle.fileHandleWithStandardOutput().writeData((outputString as NSString).dataUsingEncoding(NSUTF8StringEncoding)!)
    }
}

func getString (prompt : String = "") -> String
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }
    
    var inputString : NSString = ""
    let data        : NSData?  = NSFileHandle.fileHandleWithStandardInput().availableData
    
    if ( data != nil )
    {
        inputString = NSString(data: data!, encoding: NSUTF8StringEncoding)!
        inputString = inputString.substringToIndex(inputString.length - 1)
    }
    
    return String(inputString)
}

func getCharacter () -> Character
{
    let inputValue     : UInt32      = UInt32(getchar())
    var inputCharacter : Character
    
    inputCharacter = Character(UnicodeScalar(inputValue))
    
    return inputCharacter
}

func getInteger (prompt : String = "") -> Int
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }
    
    return (getString() as NSString).integerValue
}

func getFloat (prompt : String = "") -> Float
{
    if !prompt.isEmpty
    {
        putString(prompt)
    }
    
    return (getString() as NSString).floatValue
}

I also updated the test routine, as I wasn’t checking the float bits.

//
//  main.swift
//
//  Created by Charles Wilson on 9/27/2014.
//  Revised by Charles Wilson on 9/26/2015.
//
//  Copyright (c) 2014 - 2015 Charles Wilson. All rights reserved.
//
//  Permission is granted to use and modify so long as attribution is made.
//

import Foundation

var name = getString("What is your name? ")

if name.isEmpty
{
    name = "George"
    
    putString("That's not much of a name. I'll call you '\(name)'\n")
}
else
{
    putString("Your name is '\(name)'\n")
}

let age = getInteger("How old are you \(name)? ")

putString("You are \(age) years old\n")

let number = getInteger()

putString("\(number) is a nice number\n")

let floating = getFloat("Enter a floating point number: ")

putString("\(floating) works for me\n")

var c : Character

putString("Type stuff. Enter ^ when done.\n")

let sentinel : Character = Character(UnicodeScalar("^"))
var in_c     : Character

repeat
{
    in_c = getCharacter()
}
while ( in_c != sentinel )

putString("\n\n")
putString("bye\n")

Read Full Post »

It took me a bit longer than I’d’ve liked, but finishing all the Apple WWDC 2015 videos (110-ish) in under three months is pretty satisfying.

I’m impressed at the speed with which Apple is executing the change of primary development language from Objective-C to Swift. I expected three years, but it looks like they’ll have things wrapped up in two. This is no mean feat. I’ve now experienced three core language shifts within Apple. The first was from the Apple ][ 6502 assembly to the Macintosh 68000 assembly / Pascal hybrid. The second was the move to C. This was particularly tedious for those of us attempting to keep both camps happy. You haven’t lived until you’ve dealt with byte-prefixed, null-terminated strings. With the adoption of NeXTSTEP and the BSD/Mach micro-kernel came the transition to Objective-C. I’ll admit, I made fun of Objectionable-C. By that time, I’d spent the better part of a decade using C++. A bit of snobbery on my part. Those two children of C have fundamentally different views of the world. I cut my teeth on iOS using Objective-C and appreciated its extensibility when compared with C++. But it didn’t have the base that C++ did. A billion devices later, well, that’s a different story. Now we have Swift. I believe that it represents the next generation of language. Not object-oriented or message-oriented, but protocol-oriented.

The number of sessions dedicated to tools was impressive as always, as was the quality of the presentations. Thankfully, we were spared the pain of having Apple’s French speakers presenting graphics sessions in English with the word banana coming up so often that one would think there was a drinking game just for that session.

I’m looking forward to tinkering with the WatchOS bits. Those sessions are probably a staple of developers.

Props go out to the Xcode developers for continuing to bring a quality product to the table. An AirPlay view for the simulators would be nice (hint, hint). The sessions dedicated to the profiling, power and optimization of code are worth watching multiple times.

As is the case with many mature elements of the operating systems, security had fewer explicit sessions. Instead, security was a pervasive theme along with privacy.

One cannot talk about this year’s sessions without mentioning the brilliant leveraging of the synthesis of scale and privacy to create ResearchKit.

The care that Apple puts into the sample code is truly inspiring. Having suffered through hundreds of pages of AOCE documentation, today’s entry into Apple development seems easy. Easy on the individual component level, at least. There is now more that one would have to learn in order to create software from beginning to end with the level of quality and feature richness that the world has come to expect from applications on the Apple platforms.

Leaving the best to last, I’ll reflect on an issue that’s always bothered me with the transition strategy that Apple has used in the past. It’s not so much that I didn’t like the solution they came up with to deal with transitioning from one methodology to another. Or that I had a better answer; I didn’t. The price always seemed rather steep to me. I speak of binaries with multiple code and data resources used to allow a user to download a single image and run it anywhere. This was used in the transition from 68000-based machines to PowerPC ones and again when moving to Intel’s architecture. On iOS, we’ve seen the number of duplicate resources steadily climb as the screen geometries and densities have increased. The thing of which I speak is the double-headed axe of app thinning and on-demand resources. The ability to release an application to the store with all the bits for all the supported devices and be able to download only those that will actually be usable on a given device is tremendous. Couple that with a way to partition an application in such a way that only the resources within a user’s window of activity are present on the device and you have a substantial savings in both time and memory. Well done.

It’s been many years now since I’ve been able to attend WWDC in person and given the popularity of the conference, it’s not likely that I’ll be going any time soon. I’m content for the moment to be able to access all the content, if not the people, that someone attending would be able to. I look forward to next year’s sessions.

Read Full Post »

The future has always been a contentious place. I should know, I’ve spent most of my career there.

We’ve come a long way from the idea that the world only needed half a dozen computers [Thomas Watson, Jr.]. We now have so many computers that we managed to exhaust the 32-bit IPv4 address space. The solution embodied in IPv6 creates other issues, but that’s a topic for another post.

The interesting part of working in the computer domain is that feeling of being one step ahead of the langoliers. It can be at once exciting and terrifying. It is not for the faint-of-heart or those who believe that the need for learning stopped after their last final exam.

Lately, I’ve been watching an oddly converging divergence of ideas. One head of this hydra follows the path of the ever bigger. Bigger data sets, bigger pipes, bigger computations and unfortunately bigger OS’s. A second head constantly works toward making the whole morass vanish. I remember when I began to see fewer watches as people realized that their phone could do that. That emergency camera that the insurance company tells you to keep in your car? Answering machines? Travel alarm clocks? MP3 player? Portable DVD player? I would really hate to be in Garmin’s consumer division. Another head wants to be everywhere. It’s no longer sufficient to be that operation you could run out of a garage. Now, we have to be able to have stuff, both artifact and intangible, available everywhere. Remember when the fastest way to see a first-run Hollywood film overseas was to be on a military base? Speaking of military bases, you may have noticed that people are recognizing that security is important. The final head is fixated on why computers are this fixed assemblage of hardware. What if I really do need 20TB of memory and a 16K node mesh?

With all this “progress” going on, it’s all that the poor, beleaguered software developers can do just to keep up on one of these. But, that’s okay right? These are all unrelated. Right?

Well, we’ll get to that. For the moment, let’s see if you and I think the same about who’s doing what.

Bigger

Amazon and Google are both doing cloud, but the company I find interesting here is Microsoft. Azure takes the problem of software at scale and reduces it to some fundamental building blocks (compute, storage, database and network). Operating system? We don’t need no stinking operating system! For those of you who remember what an IBM 1130 is, you’ll love Azure. It’s like driving a TR6 on the PCH at 80 mph (you could get from LA to SF in, like, I don’t know, five-ish hours). The world is yours until you crash. [Disclaimer: I have never driven a TR6.] Want more CPUs or storage or network? Add more.

Invisible

The battery in my first mobile is heavier than my current phone. Apple’s biggest coup isn’t that it creates ever smaller technologies. They represent the technological equivalent of Michelangelo, who famously remarked about the process of sculpting his David:

It’s simple. I just remove everything that doesn’t look like David.

When Apple introduced the iPhone, developers were all torches and pitchforks. This wasn’t how things were done. Where’s the disk? How do I see this other application’s files (app was still a trending meme)? Apple took away bits that we were accustomed to, but didn’t actually need. Most of the time. Sometimes they pulled an Apple Round Mouse. But mostly they drove development in a direction that only those of us who have been given the task of making a wireless keyboard that can run for six months on a pair of AA batteries understood: how to write code that wouldn’t make the device die in under four hours. To a large extent this was the evidence of Gates’ Law. We are now on the cusp of the Apple Watch, which promises to hide the technology behind the technology even further. As someone who uses Apple Pay on a regular basis, I’m looking forward to seeing how the Watch does.

Everywhere

This is where the divergence converges. Azure instances can change locality temporally. As a result, your customers access servers in their vicinity. The user interfaces of software written for MacOS or iThings are multi-language and multi-locality (units) capable by default. Unlike Azure, iCloud isn’t so much a platform for developers as it is a vast warehouse of data. Apple’s recent announcement of ResearchKit has already shown how much impact an everywhere technology can have.

Secure

As one who has had the distinct displeasure of pulling his company’s internet connection on 2 November 1988, I believe that security is important. My master’s thesis was focused on computer viruses. I deal with the failure of developers to apply sound security practices to open source and commercial software on an ongoing basis.

For a really long time, no one really took securing the computer all that seriously.

Now, if you look at both Microsoft and Apple, you see security being treated in a serious way. On iOS, it’s baked in. On Windows, it’s half-baked. Yes, that’s a bit of snark. Security shouldn’t be an option. In iOS, if an application wants access to your contact list, it must declare that it wants to be able to access those APIs. The first time it attempts to access them, the user is prompted to allow the access. At any time, the user can simply revoke that access. Every application is sandboxed and credentials are held in a secure store. On Windows, security is governed by policy. These policies are effectively role-based. This is fine as far as it goes, but like the days of old, if you’re the wrong role at the wrong time running the wrong application (virus), you can deep fry any system. Hence my comment about it being half-baked.
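To make that concrete, here is a rough sketch of what asking for contact access looks like on iOS using the Contacts framework; the handling code is illustrative.

import Contacts

// The app must also declare NSContactsUsageDescription in its Info.plist.
// The first call triggers the system prompt; the user can revoke access
// later in Settings without the app having any say in the matter.
let store = CNContactStore()

store.requestAccess(for: .contacts) { granted, error in
    if granted
    {
        // read contacts here
    }
    else
    {
        // degrade gracefully; access was denied or revoked
    }
}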

Do we seriously believe that banks should be running on an operating system that isn’t built from the ground up around security?

Fixed

This final hydra head is perhaps the most interesting to me as it holds the most promise. It represents the hardware analog to Azure. Today, you may be able to configure an Azure instance, but that configuration only goes so far. Look back to the dim days (which for some reason or other were in black and white, even though we had color movies as far back as 1912). Back then, if you wanted more oomph, you ordered it (and an additional power drop). Now you are greatly constrained. Remember that 20TB system I mentioned earlier? Why can’t I get one? Because our manufacturing model is based on scale. This has been a good thing. It’s made it possible for me to have a laptop that doesn’t weigh 16 lbs with a run time of 2 hours. Isn’t that great? Ask a left-handed person sometime. As the number of actual computer manufacturers dwindles, we’re seeing more white box systems cropping up. These are being used to create the application clouds. But at a time when power is real money, how much are we wasting in resources to access the interesting bits of these boxes? More and more we see the use of storage arrays. All well and good. So, where are the processor arrays? The graphics arrays? What if I need 12 x 5K monitors? The people who crack this nut will make a great number of people very happy.

The Future Won’t be Brought to Us by AT&T

Once AT&T was the go-to place for the future of the future. Not any more. The future is far bigger than anyone imagined it to be and certainly far larger than any one company is capable of providing.

The question is, how do we identify the people who are ready to not only build that future, but to build it out?

Read Full Post »
