I’ve been looking at Swift for about a month now. My first thought when I see a new language is,
How would I teach this language to someone new to programming?
After spending countless hours dealing with the little things in a language that contribute to the lack of patience developers have with non-developers, I still hold out hope that we will have learned from our past and can create a language which will enable those of us who toil in darkness to get out a bit more.
Back in the day, when a new language emerged from the primordial soup, it was accompanied by a language description. This document ensures that, should the creator(s) of the language be run down by a rampaging gaggle of salvage yard geese mysteriously loosed by supporters of the favorite language du jour, the language will still be available to the six people already using it. [Just like Apple’s linker that was written in Oberon. But that’s a story for another day.] Lest you are under the mistaken impression that this document is the be-all and end-all of the language, I would refer the gentle reader to the first Ada spec, which indicated that a unary minus could be present in the middle of a digit string.
Sometime later, a language manual would appear. If you are very lucky, this will be written by a teacher (Pascal User Manual and Report). Alternatively, it may be written by a practitioner known for the ability to create concise code like awk (The C Programming Language). If you’re really lucky, the author may be both a teacher and a practitioner whose book printings required the deforestation of small Pacific islands (The C++ Programming Language). Regardless of the provenance, only those who live on the bleeding edge or college (more recently, high school) students embrace these tomes of wonderment.
Assuming that the language becomes popular enough to catch the attention of people other than the full-stack crowd, a book may appear whose clarity will ensure that it is longer lived than the inevitable dummies book. This rare collection includes A FORTRAN Coloring Book, Basic BASIC, and Programming in C.
So far, Apple has released three documents on Swift. The first was the language reference. The second details Swift and Objective-C interoperability. The latest is the Swift standard library reference.
This year’s WWDC included seven Swift-specific sessions and eight others that referred to it. This level of coverage is quite impressive, but then again, they’ve been working on the language for about four years.
Enough background already: how would I teach Swift as a first programming language?
Unfortunately, right now, today, I can’t. You can tinker with Swift in playgrounds. You can integrate Swift and Objective-C. You can create Swift-based iOS or OS X applications. What you can’t do is write a CLI program that is pure Swift.
Look at any programming language instructional methodology. What’s the second thing they teach? The second? Yes, the second. The first, since K&R, is hello, world.
The second thing that you have people do is prompt for their name and say hello back to them. Output is important, but without input, programs are pretty boring. I’m not ignoring the vast and glorious mound of Objective-C and, by extension, C and C++ code that’s accessible to Swift, but that’s not the same as being able to create the same things in Swift.
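To make that concrete, here is a rough sketch of those first two programs as they look in the Swift of this writing. The ‘hello, world’ half is pure Swift; the ‘say hello back’ half has no pure-Swift way to read standard input, so it reaches into Foundation’s NSFileHandle, which is exactly the bridging I’m talking about.

    import Foundation

    // First program: pure Swift output.
    println("hello, world")

    // Second program: prompt for a name and echo it back.
    // There is no pure-Swift way to read standard input here, so this
    // sketch leans on Foundation (NSFileHandle) to get at stdin.
    println("What's your name?")
    let input = NSFileHandle.fileHandleWithStandardInput().availableData
    if let name = NSString(data: input, encoding: NSUTF8StringEncoding)?
        .stringByTrimmingCharactersInSet(NSCharacterSet.whitespaceAndNewlineCharacterSet()) {
        println("hello, \(name)")
    }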
Generally, I find Swift a compelling language, but today it’s not a first language. I’m hoping that Apple will correct this deficiency in the not too distant future.
So, that was the post. It’s now a week later. Why is it still sitting unpublished? Well, I just wasn’t happy with my conclusion. Having had a bit of a mull, I’ve not changed my mind but I believe that I need to revisit my basic assumption as to what constitutes the baseline for teaching a first computer language.
The idea is that to teach a person how to program, you should have as little magic in play as possible. What is magic? Elaborate command invocations, for one. Just being able to use the word invocation should be enough of a clue. Requiring the construction of things that have nothing to do with the actual language is another. This is probably the aspect that I have the most difficulty with.
“I’ll teach you how to program, but first you’ll need to lash together the user interface.” That would be all well and good except that print is provided. Why don’t I need to provide a mechanism for stuff to go out if I need to provide one for stuff that comes in?
So, where do we start? The advantage of the pre-GUI age was that there was one true interface to the computer. The way we thought about our programs was dictated by the programming language we used. For a long time after the GUI was introduced we tried to treat our interfaces as extensions of our programs rather than partner environments. Even after we decided that there was sufficient power to run multiple applications at once, we were still mucking about with low-memory globals.
Trying to make the UI an independent entity took the idea of an abstraction penalty to new heights. The things that worked didn’t scale. And, in general, the things that scaled didn’t work. We won’t even talk about speed. Or fragile base classes … I’ll leave GUI evolution posts for another time.
Suffice it to say that the bones of many developers were used to pave the smooth road on which today’s applications travel to get from creator to customer. Somewhere in the process, we went from being a bunch of villages connected by trails to a planet full of complexity and wonder.
So now, we think about desktop, embedded systems, mobile devices, web, distributed systems, databases and games in radically different ways. These ‘once computers’ are now ‘delivery platforms.’ In order to create a product that aims to make use of (or be available on) several of these, it is necessary to perform the equivalent of running a restaurant where the staff are all expert in what they do, but each speaks their own language. To complicate matters, sometimes they want to use the same tools in the kitchen (usually the knives) and the customers tend to fight over getting the ‘best’ table.
If I teach someone C, C++, Java, Lisp, PHP, Python, or [insert language here], I don’t have to teach them the UI language of the system at the same time. With Swift I do. Is this going to complicate things? Probably. Will it take longer? If I want to be sure that they realize that this UI metaphor isn’t ‘the one true’ metaphor, absolutely.
I believe that Swift has a lot of potential. I would hate to see it restricted to being used only in the context of ‘Developing for Apple devices with Swift.’