
Archive for the ‘Education’ Category

Apparently, I am the first person to complete SEI CERT’s online version of their course in C/C++ secure coding. With all my pausing the videos to take notes, it took me forty hours to get through, but it was worth the time. Actually, it’s two courses, one in security concepts and the other in secure C/C++ coding.

There are four full days’ worth of material presented. The videos are chunked into segments of 2 to 60 minutes, which is helpful, as the presenters are speaking to an actual class. This choice of format is both good and bad. Good in that you see real questions being asked. Bad in that, as with many classroom situations, the presenters go off into the weeds at times.

There are six exercises covering major elements (strings, integers, files, I/O, etc.). For these, web-interfaced Debian Linux VMware VMs are provided. These boot quickly and have all the tools required to perform the exercises. An hour is allotted for each, which is plenty of time. Following each exercise is a section covering the solutions. This is especially helpful, as there’s no one to ask questions of during the exercises. It’s also possible to download a copy of the VM being used. Sadly, there’s no provision for Windows or native Macintosh environments.

Two books are provided (in various formats): Secure Coding in C and C++, Second Edition and The CERT® C Coding Standard, Second Edition: 98 Rules for Developing Safe, Reliable, and Secure Systems.

Generally, I found the material well delivered. The content in the areas of (narrow) strings, integers and memory handling was exhaustive. The sections on file handling and concurrency were almost completely lacking any Windows coverage. Given the number of Windows-based systems used by businesses and individuals, this is quite disappointing. There was also a bit of hand waving around the use of wide characters. Not providing complete coverage here is a true deficiency, as properly dealing with mixed string types is a reality that isn’t going to go away.

As to this being a C and C++ class, well, sort of. There are nods to C++, but like so much code out there, the C++ is tacked on the side. It really should be treated as a separate class, as is done with Java. Many recommended mitigations came with the proviso that they were not portable. I think that when you’re dealing with security, the fact that a mitigation exists outweighs its portability. These things can be abstracted to achieve portability. As someone who’s spent a fair amount of time doing cross-platform development (Unix-MacOS, MacOS-Windows, Linux-Windows), these are important issues to me.

To those who argue that securing code will make it slow, I would ask what a single security compromise would cost their company in reputation and direct monetary terms.

Would I recommend the class? Yes. There are important concepts and real-world examples on display here. I challenge anyone to take this class and not be horrified by the way most C code is written.

Could I teach this material? Definitely.

Read Full Post »

One of the interesting things that happens to me when I attend events like yesterday’s PDX Summit III is that it gets me thinking about things in a new and more connected way. For many who know me, this will be perceived to mean that for some indeterminate length of time I’ll be a bit more random than usual.

To misappropriate the Bard, “There are more things in heaven and earth, Horatio, than are accessible from your contact list.”

This morning I started reading Galileo’s Telescope and it got me thinking about the big data / open source elements brought up at the summit. Before you injure your neck doing that head-tilt puzzled-look thing that dogs do, let me explain.

I have a great affinity for data visualization. I could probably press my own olive oil with the stack of books I’ve got on the subject. So when I saw that Galileo had written a text entitled Sidereus Nuncius, my first thought was, “If you took nuncius (message) and pushed it forward into present-day English, you’d end up at announce, denounce and enounce. What if you pushed it backward in time? How about sideways, toward French? If we visualized this map, what would it look like? How would we navigate it?”

I’ve always found it fascinating how speech informs thought. We live in a society where using ‘little words’ is encouraged in an effort to be more inclusive. The problem is that ‘big words’ aren’t big for the sake of being big. They encapsulate entire concepts and histories. We talk about ‘the big picture,’ ‘big data,’ and the like, but in our attempt to make it all accessible, all we seem to be doing is creating a meaningless assemblage of words and acronyms that, at the end of the day, have the precision of a ten-pound sledgehammer in an omelet shop.

What if, instead of constantly reducing our communication to the green card / red card of sports, we could point to the 21st-century version of Korzybski’s Structural Differential and literally be on the same page? How would language acquisition be improved, for both native and foreign languages, if you could build understanding based on the natural evolution of the language’s concept basis? What would the impact on science be if we could visualize past crossover points between disciplines? How much more readily would students learn the concepts of computer science and engineering if they could put present-day abstractions into the context of past constraints rather than simply memorizing a given language, framework or operating system’s implementation?

Yeah, this is one of those posts that has no conclusion. It’s a digital scribble intended to be a jumping off point for future endeavors.

Read Full Post »

It took me a bit longer than I’d’ve liked, but finishing all the Apple WWDC 2015 videos (110-ish) in under three months is pretty satisfying.

I’m impressed at the speed with which Apple is executing the change of primary development language from Objective-C to Swift. I expected three years, but it looks like they’ll have things wrapped up in two. This is no mean feat. I’ve now experienced three core language shifts within Apple. The first was from the Apple ][ 6502 assembly to the Macintosh 68000 assembly / Pascal hybrid. The second was the move to C. This was particularly tedious for those of us attempting to keep both camps happy. You haven’t lived until you’ve dealt with byte-prefixed, null-terminated strings. With the adoption of NeXTStep and the BSD/Mach micro-kernel came the transition to Objective-C. I’ll admit, I made fun of Objectionable-C. By that time, I’d spent the better part of a decade using C++. A bit of snobbery on my part. Those two children of C have fundamentally different views of the world. I cut my teeth on iOS using Objective-C and appreciated its extensibility when compared with C++. But it didn’t have the base that C++ did. A billion devices later, well, that’s a different story. Now we have Swift. I believe that it represents the next generation of language. Not object-oriented or message-oriented, but protocol-oriented.
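For anyone who hasn’t met the term, here’s a minimal sketch of what protocol-oriented means in practice (current Swift syntax, with names of my own invention): behavior is attached to a protocol via an extension and shared by any conforming type, no base class in sight.

protocol Greeter
{
  var name : String { get }
}

extension Greeter
{
  // Default implementation supplied by the extension;
  // every conforming type gets greet() for free.
  func greet () -> String
  {
    return "Hello, \(name)!"
  }
}

// Value types conform just as readily as classes.
struct Teacher : Greeter
{
  let name : String
}

print(Teacher(name: "Ada").greet())   // prints "Hello, Ada!"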

The number of sessions dedicated to tools was impressive as always. As was the quality of the presentations. Thankfully, we were spared the pain of having Apple’s French speakers presenting in English on graphics, with the word banana coming up so often that one would think there was a drinking game just for that session.

I’m looking forward to tinkering with the WatchOS bits. Those sessions will probably become a staple for developers.

Props go out to the Xcode developers for continuing to bring a quality product to the table. An AirPlay view for the simulators would be nice (hint, hint). The sessions dedicated to the profiling, power and optimization of code are worth watching multiple times.

As is the case with many mature elements of the operating systems, security had fewer explicit sessions. Instead, security was a pervasive theme along with privacy.

One cannot talk about this year’s sessions without mentioning the brilliant leveraging of the synthesis of scale and privacy to create ResearchKit.

The care that Apple puts into the sample code is truly inspiring. Having suffered through hundreds of pages of AOCE documentation, today’s entry into Apple development seems easy. Easy at the individual component level, at least. There is now more that one has to learn in order to create software from beginning to end with the level of quality and feature richness that the world has come to expect from applications on the Apple platforms.

Leaving the best for last, I’ll reflect on an issue that’s always bothered me about the transition strategies Apple has used in the past. It’s not so much that I didn’t like the solution they came up with for transitioning from one methodology to another. Or that I had a better answer; I didn’t. The price always seemed rather steep to me. I speak of binaries with multiple code and data resources used to allow a user to download a single image and run it anywhere. This was used in the transition from 68000-based machines to PowerPC ones, and again when moving to Intel’s architecture. On iOS, we’ve seen the number of duplicate resources steadily climb as screen geometries and densities have increased. The thing of which I speak is the double-headed axe of app thinning and on-demand resources. The ability to release an application to the store with all the bits for all the supported devices, and to download only those that will actually be usable on a given device, is tremendous. Couple that with a way to partition an application such that only the resources within a user’s window of activity are present on the device, and you have a substantial savings in both time and memory. Well done.
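To make the on-demand half concrete, here’s a minimal sketch of how an iOS app asks the system for tagged resources and hands them back when done. The tag name is hypothetical; tags are assigned to resources in Xcode.

import Foundation

// "level-2-assets" is a hypothetical tag applied to resources in Xcode.
let request = NSBundleResourceRequest(tags: ["level-2-assets"])

request.beginAccessingResources
{ error in
  if let error = error
  {
    print("Could not fetch resources: \(error)")
    return
  }

  // The tagged resources are now reachable through Bundle.main,
  // just as if they had shipped inside the app binary.
  // ... use them ...

  // Tell the system we're finished; it may purge them to reclaim space.
  request.endAccessingResources()
}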

It’s been many years now since I’ve been able to attend WWDC in person and given the popularity of the conference, it’s not likely that I’ll be going any time soon. I’m content for the moment to be able to access all the content, if not the people, that someone attending would be able to. I look forward to next year’s sessions.

Read Full Post »

How does the new series stack up to the old?


I watched the original Cosmos series when it premiered. Like many, I was captivated by the way in which Dr. Sagan told a story. It was made all the better because the story was actually true. He took us on a survey of the Universe. Small to enormous, past to future, Sagan walked us in the footsteps of man’s discovery of the world around him. He also didn’t shy away from the topic of the earth’s limited resources and the impact of the way in which we were extracting, utilizing and disposing of them. That series and James Burke’s Connections set the standard in my mind for how science and history could be presented to a wide audience.

So, when I heard that Dr. Tyson (the man who drove the getaway car) would be hosting Cosmos: The Next Generation, I was excited. I’d seen bits of his “Great Courses” class The Inexplicable Universe: Unsolved Mysteries and thought it was interesting. The production quality was a bit wonky, but I attributed that to it being a class.

I watched Cosmos: A SpaceTime Odyssey. Twice. The first time broadcast and the second via iTunes. The science was great. The images from space were stunning. The message of planetary stewardship carried an even greater urgency. And yet, I found myself not really being all that moved. Not like the original. And that bothered me.

It bothered me because I couldn’t quantify what it was that I didn’t like. Finally I realized that it was the ‘reenactments’ that were bothering me. The original series went to great lengths to stage its reenactments. The new series used stylized animation. For me, the result was that these abstracted the events being depicted. It came across as though you were being told a story instead of being a witness to the event. The net effect is that the experience is more akin to sitting in a movie theater watching a cartoon about Robin Hood vs. standing feet from charging horses in a jousting match put on by the Society for Creative Anachronism. One could absolutely argue that neither one is real. But I would then ask, which one has the greater impact? If I set up a lab experiment with lenses and prisms, I know it has more reality than images in a book or an animation on a tablet.

There’s talk of a second season (without Dr. Tyson). If it does come to pass, I hope they will consider using people instead of paint. In a world where we’ve replaced doing science with watching it, every little bit helps.


I hope that everyone takes the time to watch both the old and new Cosmos. Getting teens to watch it would be good too.

Read Full Post »

Update 2015-12-21: Apple released Swift source code. I’ve updated my sample again to reflect what I learned.

Update 2015-09-27: It’s been a year and much has happened with Swift. Please see my latest post on Swift command line input for current code.


Recently I’ve been watching Stanford intro CS classes. I like to see how they present the fundamental concepts and techniques of programming. This got me thinking about those missing bits of Swift that would allow me to actually write a command line-based application. [see my previous post Swift: Second Things First] Having these bits would allow me to teach Swift as a first language without having to teach the abstractions and interfaces required to properly develop for a graphical interface. I’m not much into attempting to teach in a way that breaks the “go from strength to strength” methodology. If you’re going to teach me to sing, it’s a whole lot easier if I don’t have to learn how to spin plates at the same time.

So, I spent some time and created a simple set of routines that, when added to a Swift command line application, allow you to get and put strings, integers and floats. Not exactly rocket science, which raises the question, “Why didn’t Apple do it?” Well, since I’m not Apple, I have no idea.

Here, without further comment, is the content of the file I wrote. That it is not the best Swift code, I have no doubt. If you can make it better, cool. And Apple, if you read this, please make something sensible of it.

Update: I’ve manually wrapped a few lines as WordPress is clipping.

//
//  swift_input_routines.swift
//  swift input test
//
//  Created by Charles Wilson on 9/27/14.
//  Copyright (c) 2014 Charles Wilson.
// Permission is granted to use and modify so long as attribution is made.
//

import Foundation

func putString (_ outputString : NSString = "")
{
  if outputString.length >= 1
  {
    NSFileHandle.fileHandleWithStandardOutput().writeData(
               outputString.dataUsingEncoding(NSUTF8StringEncoding)!)
  }
}

func getString (_ prompt : NSString = "") -> NSString
{
  if prompt.length >= 1
  {
    putString(prompt)
  }

  var inputString : NSString = ""
  let data        : NSData?  = NSFileHandle.fileHandleWithStandardInput().availableData

  if ( data != nil && data!.length > 0 )
  {
    inputString = NSString(data: data!, encoding: NSUTF8StringEncoding)!
    inputString = inputString.substringToIndex(inputString.length - 1)
  }

  return inputString
}

func getInteger (_ prompt : NSString = "") -> Int
{
  if prompt.length >= 1
  {
    putString(prompt)
  }

  var inputValue : Int = 0
  let inputString = getString()

  inputValue = inputString.integerValue

  return inputValue
}

func getFloat (_ prompt : NSString = "") -> Float
{
  if prompt.length >= 1
  {
    putString(prompt)
  }

  var inputValue : Float = 0.0
  let inputString = getString()

  inputValue = inputString.floatValue

  return inputValue
}

And here’s a little test program that uses it.


//
//  main.swift
//  swift input test
//
//  Created by Charles Wilson on 9/27/14.
//  Copyright (c) 2014 Charles Wilson. All rights reserved.
//

import Foundation

var name = getString("What is your name? ")

if name.length == 0
{
  name = "George"

  putString("That's not much of a name. I'll call you '\(name)'\n")
}
else
{
  putString("Your name is '\(name)'\n")
}

let age = getInteger("How old are you \(name)? ")

putString("You are \(age) years old\n")

let number = getFloat("Enter a number with a decimal point in it: ")

putString("\(number) is a nice number")

putString("\n\n")
putString("bye\n")

You’re probably wondering why I don’t use print(). Well, print() doesn’t flush stdout’s buffer, and I really like to enter data on the same line as the prompt. And for those of you who say that you can use print() in Xcode’s output window, I’ll remind you that a simulator isn’t the target device.
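As a present-day footnote (see the updates above): later Swift releases added readLine(), so a minimal version of the prompt-on-the-same-line trick needs only an explicit flush. A sketch, assuming a recent Swift toolchain:

import Foundation

// Write the prompt without a newline, then flush so it appears
// before the program blocks waiting for input.
print("What is your name? ", terminator: "")
fflush(stdout)

let name = readLine() ?? "George"
print("Your name is '\(name)'")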

“But, wait. You didn’t comment the code.” No, I didn’t. By the time a student has enough understanding of Cocoa to compose the I/O routines provided above, the comments would be unnecessary.

So, there you have it. The typical second program that you’d ask a student to write.

Read Full Post »

I’ve been looking at Swift for about a month now. My first thought when I see a new language is,

How would I teach this language to someone new to programming?

After spending countless hours dealing with the little things in a language that contribute to the lack of patience developers have with non-developers, I still hold out hope that we will have learned from our past and can create a language which will enable those of us who toil in darkness to get out a bit more.

Back in the day, when a new language emerged from the primordial soup, it was accompanied by a language description. This document ensures that, should the creator(s) of the language be run down by a rampaging gaggle of salvage-yard geese mysteriously loosed by the supporters of the favorite language du jour, the language will still be available to the six people already using it. [Just like Apple’s linker that was written in Oberon. But that’s a story for another day.] Lest you be under the mistaken impression that this document is the be-all and end-all of the language, I would refer the gentle reader to the first Ada spec, which indicated that a unary minus could be present in the middle of a digit string.

Sometime later, a language manual would appear. If you are very lucky, this will be written by a teacher (Pascal User Manual and Report). Alternately, it may be written by a practitioner known for the ability to create concise code, like awk (The C Programming Language). If you’re really lucky, the author may be both a teacher and a practitioner whose book printings required the deforestation of small Pacific islands (The C++ Programming Language). Regardless of the provenance, only those who live on the bleeding edge or college (more recently high school) students embrace these tomes of wonderment.

Assuming that the language becomes popular enough to catch the attention of people other than the full-stack crowd, a book may appear whose clarity will ensure that it is longer lived than the inevitable dummies book. This rare collection includes A FORTRAN Coloring Book, Basic BASIC, and Programming in C.

So far, Apple has released three documents on Swift. The first was the language reference. The second details Swift–Objective-C interoperability. The latest is the Swift standard library reference.

This year’s WWDC included seven Swift-specific sessions and eight others that referred to it. This level of coverage is quite impressive, but then again, they’ve been working on the language for about four years.

Enough background already: how would I teach Swift as a first programming language?

Unfortunately, right now, today, I can’t. You can tinker with Swift in playgrounds. You can integrate Swift and Objective-C. You can create Swift-based iOS or OS X applications. What you can’t do is write a CLI program that is pure Swift.

Look at any programming language instructional methodology. What’s the second thing they teach? The second? Yes, the second. The first, since K&R, is hello, world.

The second thing that you have people do is prompt for their name and say hello back to them. Output is important, but without input, programs are pretty boring. I’m not ignoring the vast and glorious mound of Objective-C (and, by extension, C and C++) code that’s accessible to Swift, but that’s not the same as being able to create the same things in Swift.

Generally, I find Swift a compelling language, but today it’s not a first language. I’m hoping that Apple will correct this deficiency in the not-too-distant future.


So, that was the post. It’s now a week later. Why is it still sitting unpublished? Well, I just wasn’t happy with my conclusion. Having had a bit of a mull, I’ve not changed my mind but I believe that I need to revisit my basic assumption as to what constitutes the baseline for teaching a first computer language.

The idea is that to teach a person how to program, you should have as little magic in play as possible. What is magic? Elaborate command invocations, for one. Just being able to use the word ‘invocation’ should be enough of a clue. Requiring the construction of things that have nothing to do with the actual language is another. This is probably the aspect with which I have the most difficulty.

“I’ll teach you how to program, but first you’ll need to lash together the user interface.” That would be all well and fine except that print is provided. Why don’t I need to provide a mechanism for stuff to go out if I need to provide one for stuff that comes in?

So, where do we start? The advantage of the pre-GUI age was that there was one true interface to the computer. The way we thought about our programs was dictated by the programming language we used. For a long time after the GUI was introduced we tried to treat our interfaces as extensions of our programs rather than partner environments. Even after we decided that there was sufficient power to run multiple applications at once, we were still mucking about with low-memory globals.

Trying to make the UI an independent entity took the idea of an abstraction penalty to new heights. The things that worked didn’t scale. And, in general, the things that scaled didn’t work. We won’t even talk about speed. Or fragile base classes … I’ll leave GUI evolution posts for another time.

Suffice it to say that the bones of many developers were used to pave the smooth road on which today’s applications travel to get from creator to customer. Somewhere in the process, we went from being a bunch of villages connected by trails to a planet full of complexity and wonder.

So now, we think about desktop, embedded systems, mobile devices, web, distributed systems, databases and games in radically different ways. These ‘once computers’ are now ‘delivery platforms.’ In order to create a product that aims to make use of (or be available on) several of these, it is necessary to perform the equivalent of running a restaurant where the staff are all expert at what they do, but each speaks a different language. To complicate matters, sometimes they want to use the same tools in the kitchen (usually the knives), and the customers tend to fight over getting the ‘best’ table.

If I teach someone C, C++, Java, lisp, PHP, python, or [insert language here], I don’t have to teach them the UI language of the system at the same time. With Swift I do. Is this going to complicate things? Probably. Will it take longer? If I want to be sure that they realize that this UI metaphor isn’t ‘the one true’ metaphor, absolutely.

I believe that Swift has a lot of potential. I would hate to see it restricted to being used only in the context of ‘Developing for Apple devices with Swift.’

Read Full Post »

I’ve completed yet another of the volumes languishing on my bookshelf. This one is The IQ Answer: Maximizing Your Child’s Potential by Frank Lawlis. The book addresses the issue of how to enable children who have attention disorders or learning disabilities to achieve.

The book is fairly introspective. Very workbook-esque.

Read Full Post »
