
It took me a bit longer than I’d’ve liked, but finishing all the Apple WWDC 2015 videos (110-ish) in under three months is pretty satisfying.

I’m impressed at the speed with which Apple is executing the change of primary development language from Objective-C to Swift. I expected three years, but it looks like they’ll have things wrapped up in two. This is no mean feat. I’ve now experienced three core language shifts within Apple. The first was from the Apple ][ 6502 assembly to the Macintosh 68000 assembly / Pascal hybrid. The second was the move to C. This was particularly tedious for those of us attempting to keep both camps happy. You haven’t lived until you’ve dealt with byte-prefixed, null-terminated strings. With the adoption of NeXTSTEP and the BSD/Mach micro-kernel came the transition to Objective-C. I’ll admit, I made fun of it as Objectionable-C. By that time, I’d spent the better part of a decade using C++. A bit of snobbery on my part. Those two children of C have fundamentally different views of the world. I cut my teeth on iOS using Objective-C and appreciated its extensibility when compared with C++. But it didn’t have the base that C++ did. A billion devices later, well, that’s a different story. Now we have Swift. I believe that it represents the next generation of language. Not object-oriented or message-oriented, but protocol-oriented.
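To give a feel for what “protocol-oriented” means in practice, here is a minimal Swift sketch of the pattern Apple highlighted at WWDC 2015: default behavior lives in a protocol extension, and any conforming type picks it up for free. The names are purely illustrative, not from any Apple sample.

```swift
// A protocol declares the requirements...
protocol Describable {
    var name: String { get }
}

// ...and an extension supplies shared default behavior.
extension Describable {
    func describe() -> String {
        return "This is \(name)"
    }
}

// Any conforming type, value or reference, gets describe() for free.
struct Device: Describable {
    let name: String
}

print(Device(name: "Apple Watch").describe())   // "This is Apple Watch"
```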

The number of sessions dedicated to tools was impressive as always, as was the quality of the presentations. Thankfully, we were spared the pain of Apple’s French speakers presenting in English on graphics, with the word banana coming up so often that one would think there was a drinking game just for that session.

I’m looking forward to tinkering with the watchOS bits. Those sessions will probably become a staple for developers.

Props go out to the Xcode developers for continuing to bring a quality product to the table. An AirPlay view for the simulators would be nice (hint, hint). The sessions dedicated to profiling, power, and optimization of code are worth watching multiple times.

As is the case with many mature elements of the operating systems, security had fewer explicit sessions. Instead, security was a pervasive theme along with privacy.

One cannot talk about this year’s sessions without mentioning the brilliant leveraging of scale and privacy to create ResearchKit.

The care that Apple puts into the sample code is truly inspiring. Having suffered through hundreds of pages of AOCE documentation, today’s entry into Apple development seems easy. Easy at the individual component level, at least. There is now far more that one has to learn in order to create software from beginning to end with the level of quality and feature richness that the world has come to expect from applications on the Apple platforms.

Leaving the best to last, I’ll reflect on an issue that’s always bothered me with the transition strategy Apple has used in the past. It’s not so much that I didn’t like the solution they came up with for moving from one methodology to another, or that I had a better answer; I didn’t. The price always seemed rather steep to me. I speak of binaries with multiple code and data resources used to allow a user to download a single image and run it anywhere. This was used in the transition from 68000-based machines to PowerPC ones and again when moving to Intel’s architecture. On iOS, we’ve seen the number of duplicate resources steadily climb as the screen geometries and densities have increased. The thing of which I speak is the double-headed axe of app thinning and on-demand resources. The ability to release an application to the store with all the bits for all the supported devices, and to have only those that will actually be usable on a given device downloaded, is tremendous. Couple that with a way to partition an application so that only the resources within a user’s window of activity are present on the device and you have a substantial savings in both time and memory. Well done.
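To make the on-demand resources idea concrete, here is a minimal Swift sketch using NSBundleResourceRequest, the API Apple introduced for this at WWDC 2015. The tag name is hypothetical and error handling is kept to a bare minimum.

```swift
import Foundation

// Hypothetical tag; assets marked "level-2" in the asset catalog are fetched
// only when this request is made, not at install time.
let request = NSBundleResourceRequest(tags: ["level-2"])
request.beginAccessingResources { error in
    if let error = error {
        print("Resources unavailable: \(error)")
        return
    }
    // The tagged resources are now available through Bundle.main as usual.
    // ... use the assets ...
    request.endAccessingResources()
}
```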

It’s been many years now since I’ve been able to attend WWDC in person and given the popularity of the conference, it’s not likely that I’ll be going any time soon. I’m content for the moment to be able to access all the content, if not the people, that someone attending would be able to. I look forward to next year’s sessions.


The future has always been a contentious place. I should know, I’ve spent most of my career there.

We’ve come a long way from the idea that the world only needed half a dozen computers [Thomas Watson, Jr.]. We now have so many computers that we managed to exhaust the 32-bit IPv4 address space. The solution embodied in IPv6 creates other issues, but that’s a topic for another post.

The interesting part of working in the computer domain is that feeling of being one step ahead of the langoliers. It can be at once exciting and terrifying. It is not for the faint-of-heart or those who believe that the need for learning stopped after their last final exam.

Lately, I’ve been watching an oddly converging divergence of ideas. One head of this hydra follows the path of the ever bigger. Bigger data sets, bigger pipes, bigger computations and, unfortunately, bigger OSes. A second head constantly works toward making the whole morass vanish. I remember when I began to see fewer watches as people realized that their phone could do that. That emergency camera that the insurance company tells you to keep in your car? Answering machines? Travel alarm clocks? MP3 players? Portable DVD players? I would really hate to be in Garmin’s consumer division. Another head wants to be everywhere. It’s no longer sufficient to be that operation you could run out of a garage. Now, we have to have stuff, both tangible and intangible, available everywhere. Remember when the fastest way to see a first-run Hollywood film overseas was to be on a military base? Speaking of military bases, you may have noticed that people are recognizing that security is important. The final head is fixated on why computers are this fixed assemblage of hardware. What if I really do need 20TB of memory and a 16K-node mesh?

With all this “progress” going on, it’s all that the poor, beleaguered software developers can do just to keep up with one of these. But that’s okay, right? These are all unrelated. Right?

Well, we’ll get to that. For the moment, let’s see if you and I think the same about who’s doing what.

Bigger

Amazon and Google are both doing cloud, but the company I find interesting here is Microsoft. Azure takes the problem of software at scale and reduces it to some fundamental building blocks (compute, storage, database and network). Operating system? We don’t need no stinking operating system! For those of you who remember what an IBM 1130 is, you’ll love Azure. It’s like driving a TR6 on the PCH at 80 mph (you could get from LA to SF in, I don’t know, five-ish hours). The world is yours until you crash. [Disclaimer: I have never driven a TR6.] Want more CPUs or storage or network? Add more.

Invisible

The battery in my first mobile was heavier than my current phone. Apple’s biggest coup isn’t that it creates ever smaller technologies. It represents the technological equivalent of Michelangelo, who famously remarked about the process of sculpting his David:

It’s simple. I just remove everything that doesn’t look like David.

When Apple introduced the iPhone, developers were all torches and pitchforks. This wasn’t how things were done. Where’s the disk? How do I see this other application’s files (“app” was still a trending meme)? Apple took away bits that we were accustomed to but didn’t actually need. Most of the time. Sometimes they pulled an Apple Round Mouse. But mostly they drove development in a direction that only those of us who have been given the task of making a wireless keyboard run for six months on a pair of AA batteries understood: how to write code that wouldn’t make the device die in under four hours. To a large extent this was the evidence of Gates’ Law. We are now on the cusp of the Apple Watch, which promises to hide the technology behind the technology even further. As someone who uses Apple Pay on a regular basis, I’m looking forward to seeing how the Watch does.

Everywhere

This is where the divergences converge. Azure instances can change locality temporally; as a result, your customers access servers in their vicinity. The user interfaces of software written for MacOS or the iThings are capable of handling multiple languages and locales (units included) by default. Unlike Azure, iCloud isn’t so much a platform for developers as it is a vast warehouse of data. Apple’s recent announcement of ResearchKit has already shown how much impact an everywhere technology can have.
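As a small, hedged illustration of that localized-by-default point, here is a Swift sketch; the "greeting" key is hypothetical and assumes a matching entry in the app’s Localizable.strings, while number formatting simply follows the user’s locale.

```swift
import Foundation

// Hypothetical key; the translated value comes from Localizable.strings.
let greeting = NSLocalizedString("greeting", comment: "Shown on launch")

// Numbers (and, by extension, units) format themselves for the user's locale.
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.locale = Locale.current
let distance = formatter.string(from: NSNumber(value: 1234.5)) ?? ""

print("\(greeting): \(distance)")
```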

Secure

As one who has had the distinct displeasure of pulling his company’s internet connection on 2 November 1988, I believe that security is important. My master’s thesis was focused on computer viruses. I deal with the failure of developers to apply sound security practices to open source and commercial software on an ongoing basis.

For a really long time, no one really took securing the computer all that seriously.

Now, if you look at both Microsoft and Apple, you see security being taken seriously. On iOS, it’s baked in. On Windows, it’s half-baked. Yes, that’s a bit of snark. Security shouldn’t be an option. In iOS, if an application wants access to your contact list, it must declare that it wants to be able to access those APIs. The first time it attempts to access them, the user is prompted to allow the access. At any time, the user can simply revoke that access. Every application is sandboxed, and credentials are held in a secure store. On Windows, security is governed by policy. These policies are effectively role-based. This is fine as far as it goes, but as in the days of old, if you’re the wrong role at the wrong time running the wrong application (virus), you can deep-fry any system. Hence my comment about it being half-baked.
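To make that concrete, here is a minimal Swift sketch of the contact-list case using the Contacts framework introduced alongside iOS 9; the system, not the app, presents the prompt and honors any later revocation. Error handling is kept to a bare minimum.

```swift
import Contacts

// Requesting access triggers the one-time system prompt; the user's answer
// (and any later change in Settings) is enforced by the OS, not the app.
let store = CNContactStore()
store.requestAccess(for: .contacts) { granted, error in
    guard granted else {
        print("Contacts access denied: \(String(describing: error))")
        return
    }
    // Access granted; contact queries can proceed here.
}
```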

Do we seriously believe that banks should be running on an operating system that isn’t built from the ground up around security?

Fixed

This final hydra head is perhaps the most interesting to me, as it holds the most promise. It represents the hardware analog to Azure. Today, you may be able to configure an Azure instance, but that configuration only goes so far. Look back to the dim days (which for some reason or other were in black and white, even though we had color movies as far back as 1912): back then, if you wanted more oomph, you ordered it (and an additional power drop). Now you are greatly constrained. Remember that 20TB system I mentioned earlier? Why can’t I get one? Because our manufacturing model is based on scale. This has been a good thing. It’s made it possible for me to have a laptop that doesn’t weigh 16 lbs with a run time of 2 hours. Isn’t that great? Ask a left-handed person sometime. As the number of actual computer manufacturers dwindles, we’re seeing more white-box systems cropping up. These are being used to create the application clouds. But at a time when power is real money, how much are we wasting in resources to access the interesting bits of these boxes? More and more we see the use of storage arrays. All well and good. So, where are the processor arrays? The graphics arrays? What if I need 12 x 5K monitors? The people who crack this nut will make a great number of people very happy.

The Future Won’t be Brought to Us by AT&T

Once AT&T was the go-to place for the future of the future. Not any more. The future is far bigger than anyone imagined it to be and certainly far larger than any one company is capable of providing.

The question is, how do we identify the people who are ready to not only build that future, but to build it out?

