Archive for the ‘Education’ Category

Last week (17-21 August 2020) I had the pleasure of serving as staff (trainer/facilitator) for the first joint MDIC / FDA / MITRE Medical Device Cybersecurity Threat Modeling Bootcamp. The mind behind the training material was Adam Shostack. Originally planned as an in-person training, it was forced on-line by the pandemic.

The objectives of the bootcamp (from the MDIC site):

  • Intensive, hands-on sessions on threat modeling.
  • Learn about structured, systematic and comprehensive approach to threat modeling for engineering more secure systems from SMEs from public and private sector.
  • Learn the latest updates on medical device cybersecurity and related areas from representatives of FDA and industry.
  • Networking opportunity with SMEs from MedTech and non-MedTech sectors to learn on cybersecurity best practices that can be incorporated into the medical device industry
  • Contribute to the discussions on the development of Medical Device Threat Modeling Playbook

For anyone not familiar with Adam’s threat modeling training methodology, it is a highly interactive, small group focused training. When the training staff got together for three days in Washington, D.C. in February of 2020, this was the way we pre-flighted the bootcamp. To his credit, Adam deconstructed the material and re-envisioned it for a remote audience.

This first bootcamp had about sixty participants from across the medical device industry, including manufacturers, HDOs and regulators. The training provided a good introduction to the concepts of threat modeling and encouraged an appreciation of the needs of development, security, management and regulators. Instead of the typical classroom-style presentation followed by table-based group interactions, we had topic-based videos which the participants viewed in a dedicated on-line learning system, individual assignments, whole-bootcamp presentations and group working sessions.

Since one outcome of this bootcamp was to assist in the creation of a “playbook” for medical device threat modeling, the entire process was shadowed by members of the working group responsible for that effort.

So, what was my take-away as a trainer and practitioner?

Providing live distance learning is hard. The dynamic is completely different from in-person training. I’ve been taking remote live distance training classes since the proto-Coursera Machine Learning and Database classes from Stanford, nearly a decade ago. As a learner, the ability to stop the video and take notes and go back over things was invaluable. The lack of interactivity with the instructor was a drawback. This was my first experience on the other side of the screen. As a trainer and facilitator, keeping remote participants on-topic and on-schedule was challenging. Having the ability to use multiple computers (one for interaction [43″ 4K display] and another for staff side-channel discussion) was invaluable. In an in-person setting, I’d’ve had to leave the group or try and flag down another staff member, distracting from the flow.

Observationally, I think the dynamic of the participants is a bit diminished. Typically, you’d have breaks, during which participants would exchange ideas and make connections. At the end of the day, groups would have dinner together and discuss what they’d learned in greater detail.

I believe that, overall, the training was successful. My group indicated that they’d come away with a better understanding of threat modeling and a greater appreciation of the context in which the activity exists. We have another session coming up and I’m sure that it will incorporate all the lessons learned from this one. I’m looking forward to it.

The training is focused on threat modeling generally and so those not in the medical device industry would also profit from it. If you’re interested, I recommend that you visit the MDIC site linked above.

Read Full Post »

I get called upon to do fairly incongruous things. One day it’ll be C++ usage recommendations. Another will find me preparing background materials for upper management. Some days, I’m prototyping. Always something new.

As of late, I’ve been bringing modern software threat modeling to the development teams. Threat modeling is one of those things that, for the most part, exists only in the realm of the mythical cybersecurity professionals. This is a sad thing. I’m doing what I can to change people’s perceptions in that regard.

Within cybersecurity, there is a saying: “You can either build it in or bolt it on.” As with mechanical systems, bolting stuff on guarantees a weak point and, usually, a lack of symmetry. From the software development standpoint, attempting to add security after the fact is usually a punishing task. It is both invasive and time-consuming.

But the bolt-on world is the natural response of those who use the flaw hypothesis model of cybersecurity analysis. The appeal of flaw hypothesis analysis lies in the fact that you can do it without much more than the finished product and its documentation. You can poke and prod the software based on possible threats that the documentation points toward. From the specific anticipated threats one can generalize and then test. The problem is that this methodology is only as good as the documentation, intuition, and experience of those doing the analysis.

So, what’s a software development organization to do?

Enter threat modeling. Instead of lagging the development, you lead it. Instead of attacking the product, you reason about its data flow abstraction. In doing so, you learn about how your design decisions impact your susceptibility to attack. From there, you can quantify the risk associated with any possible threats and make reasoned decisions as to what things need to be addressed and in what order. Pretty much the polar opposite of the “death by a thousand cuts” approach of the flaw hypothesis model.

Sounds reasonable, but how do we get there?

Let me start by saying that you don’t create a threat model. You create a whole pile of threat models. These models represent various levels of resolution into your system. While it is true that you could probably create an über threat model (one to rule them all, and such), you’d end up with the graphical equivalent of the Julia Set. What I’ve found much more manageable is a collection of models representing various aspects of a system.

Since the 1970s, we’ve had the very tool which we’ll use to create our models. That tool is the data flow diagram. The really cool thing about DFDs is that they consist of just four components. In order to adapt them to threat modeling we need add only one more. The most important piece is the data store. After all, there’s not much to look at in a computer system that doesn’t actually handle some sort of data. We manipulate the data via processes. These agents act upon the data, which moves via flows. And finally we need external actors, because if the data just churns inside the computer, again, not much interest. That’s it. You can fully describe any system using only those four primitives.

Okay, you can describe the system, but how does this relate to threat modeling? To make the leap from DFD to threat model, we’ll need one more primitive. We need a way to designate boundaries that data flows cross. These we call threat boundaries. Not the world’s most imaginative nomenclature, but hey, it’s simple and easy to learn.

Once we have a particular DFD based on a workflow, we add boundaries where they make sense. Between the physical device and the outside world; or the application and operating system; or application and its libraries; or between two processes; or … (you get the idea). Again the threat model isn’t intended to be the application. It’s an abstraction. And as Box said, “all models are wrong … but some are useful.” It helps to keep in mind what Alfred Korzybski said, “the map is not the territory.” Anyone who’s traveled on a modern transit system would never confuse the transit map for the area geography. Nod to Harry Beck.

With the boundary-enhanced DFD, we can get to work. For the particular road I travel, we reason about the threat model using a STRIDE analysis. We consider each of the elements astride (pun) each data flow with respect to each of the six aspects of STRIDE: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. Not all aspects apply to all combinations of our four primitives. There are tables for that. Each of these can be appraised logically. No chicken entrails required. At the end of the day, you have a collection of things you don’t have answers to. So, you bring in the subject matter experts (SMEs) to answer them. When you are done, what remains are threats.
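The tables mentioned above are the STRIDE-per-element charts. As a rough first-pass sketch (the element names and category assignments below paraphrase the commonly published chart and may differ from the table your organization uses), the mechanical part of the walk-through can be expressed like this:

```python
# A rough sketch of a STRIDE-per-element first pass. The category
# assignments paraphrase the commonly published chart and may differ
# from the table your organization uses.
STRIDE_PER_ELEMENT = {
    "external_entity": {"Spoofing", "Repudiation"},
    "process": {"Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service",
                "Elevation of Privilege"},
    "data_flow": {"Tampering", "Information Disclosure", "Denial of Service"},
    "data_store": {"Tampering", "Information Disclosure", "Denial of Service"},
}

def candidate_threats(dfd_elements):
    """Yield (element, STRIDE category) pairs to put in front of the SMEs."""
    for name, kind in dfd_elements:
        for category in sorted(STRIDE_PER_ELEMENT[kind]):
            yield name, category

# A tiny, hypothetical boundary-crossing slice of a DFD.
elements = [("mobile app", "process"),
            ("BLE link", "data_flow"),
            ("device firmware", "process")]
for element, category in candidate_threats(elements):
    print(f"{element}: consider {category}")
```

The point isn’t the code; it’s that the first pass is mechanical enough that the interesting work is answering the questions it generates.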

Threats. Spiffy. But not all threats are equal. Not in potential for damage, or likelihood, or interest. For a goodly length of time, this was a big problem with the whole threat modeling proposition. Lots of stuff, but no objective way to triage it.

Enter the Common Vulnerability Scoring System (CVSS). This is the Veg-O-Matic of threat risk quantification. CVSS considers the means, complexity, temporality and impact areas of a threat. From these it computes a vulnerability score. Now you have a ranking of what the most important things to consider are.

For many industries, we could stop right here and use the CVSS score directly. Not so in the land of FDA regulation, for that land adds another dimension: patient safety (PS) impact. The augmented CVSS-PS ranking guides us to a proper way to objectively rate the threats from most to least severe.
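Purely as an illustration (the patient-safety weights, field names and threat entries below are invented for this sketch; they are not FDA guidance and not the actual CVSS-PS rubric), the ranking step might be mocked up like so:

```python
# Illustrative only: rank threats by CVSS base score plus an invented
# patient-safety (PS) modifier. Use your organization's actual rubric.
from dataclasses import dataclass

PS_WEIGHT = {"none": 0.0, "minor": 1.0, "serious": 2.5, "critical": 4.0}

@dataclass
class Threat:
    name: str
    cvss_base: float      # 0.0 - 10.0, from a CVSS calculator
    patient_safety: str   # one of PS_WEIGHT's keys

    @property
    def rank_score(self) -> float:
        # Cap at 10 so the combined score stays on a familiar scale.
        return min(self.cvss_base + PS_WEIGHT[self.patient_safety], 10.0)

threats = [
    Threat("Unauthenticated BLE command injection", 8.1, "critical"),
    Threat("Verbose logging leaks device serial", 3.7, "none"),
    Threat("Firmware update lacks signature check", 7.4, "serious"),
]

for t in sorted(threats, key=lambda t: t.rank_score, reverse=True):
    print(f"{t.rank_score:4.1f}  {t.name}")
```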

Now, we can take these ranked threats and present them, complete with SME feedback, to the core and risk teams for determination and disposition.

But we’re really not done. The threat modeling process isn’t one-and-done. We take what we learn and build it into the base assumptions of future products. Once the product is built, we compare it to the original threat model to verify that the model still represents reality.

Well, that was a lot of exposition. Where’s the facilitation and teaching?

Actually, the exposition was the teaching. And the teaching was an explanation of how I go about facilitating.

When all is said and done, a threat model needs to be built in. That is, engineering owns it. The whole facilitation thing, that’s a skill. It needs to live in the teams, not in some adjunct cybersecurity group. Applying CVSS consistently takes a bit of practice, but again we’re back to facilitation.

As to actually teaching threat modeling, that takes the better part of a day. Lots of decomposition, details and diagrams. I like teaching. It’s a kind of cognitive spreading of the wealth. The same is true of facilitation, just more one-to-one.

Read Full Post »

Sometimes you can spend years trying to find a book that you can recommend to someone who’s asked you a question. My latest read, The Software Craftsman: Professionalism, Pragmatism, Pride, is one such book. A recent volume in the Robert C. Martin book series, this volume by Sandro Mancuso is not what it appears to be. And that is a good thing.

When you look at the other books in the Martin series (Working Effectively with Legacy Code, Agile Estimating and Planning, Clean Code, The Clean Coder, Clean Architecture, …) you see topics decomposed and methodologies expressed by which the title’s subject is achieved. That’s not what you get with The Software Craftsman. In my case, that was a very fortunate turn of events.

This is not to say that the journey of the software craftsman is not discussed. It is and in a reasonable amount of detail. But an equal amount of time is given to the ecosystem within which the craftsman practices. These parts of the book are not for the consumption of the craftsman or aspirant, but for the owners of the firms who employ (or should employ) them.

The book does well in describing the trials and tribulations of a member of the craft; from the point where they realized that they aspired to more than the dichotomy of coder / architect; to the creation of the volume itself. It lays bare this false dichotomy within the broader context of the entire point of software development. That being to produce value to the customer and income to the creator. Within that context, there is the easy path of whatever works and the hard path of building a thing that not only does what it is supposed to, but does it in a way which is both high quality and highly maintainable.

At its core, this is a book about philosophy. In a landscape of Google and go; and compile it, link it, ship it, debug it; this is a thoughtful volume. It makes a point that I’ve never seen in print: that the individual software developer is responsible for their own career development. Not their manager, not their company, but they themselves are responsible. Heady stuff this.

As to the remainder of the book’s material, it’s more a wake up call to upper management. There you’ll find discussion of recruiting, hiring, retaining, shaping change and showing ROI. I know of very few who could look at this volume and come away unmoved.

It might be the separation of authority and responsibility, the hire for what we needed yesterday, the CYA so we get our bonus, or the factory worker mentality encouraged by so many firms today. If you can read this book and not get something out of it, you’re part of the problem.

Truly quality software is designed, built, and tested by passionate individuals working together toward the creation of something which will well serve the customer. Everything else is just code. Any 10-year-old can be taught to write code. I know, I’ve done it. Do you want your life’s critical systems to be built by 10-year-olds? Of course not, that’s a ridiculous question. How about people who are just doing it because they make a better than average day’s wage?

I hope you’re intrigued. At the very least, I hope you’ll reflect on your own views of the responsibilities of a software developer. At fewer than 250 pages, you can read this book in one or two sittings, but reading the book is only the starting point.

 

Read Full Post »

I spend much of my time these days doing long-term strategic research and planning. Part of that time is spent identifying areas where technology training is warranted. The ways and means I use to create and present training materials have been developed through years of trial and error. In the midst of one particular line of research into a non-training-related area, I found Building an Innovative Learning Organization by Russell Sarder.

The book is relatively short, about 220 pages, but in many ways, you really don’t need more than that to cover the concepts of training. While it’s true that it would take far more to cover all aspects of training, from organizational buy-in, to facilities, to choice of materials, to length of courses, etc., those are details. And the details are as pointless as ornaments without a tree if you don’t have the fundamentals in place. That’s where this book shines.

Yes, there are all the requisite elements of a business-oriented book (voices from industry, outcomes of research, anecdotes, and the like). Not to mention the mound of acronyms tossed in for good measure. But, I expect those. This book asserts that learning should be a systemic attribute of any thriving company. As such, learning must be part of the culture of the company for it to be successful. You cannot slap training on the side and expect that you will have any serious ROI to the company. It would be like thinking that buying Girl Scout cookies or Boy Scout popcorn has a substantive impact on the members of either organization. Yes, it does provide financial support for programs, but it’s not “the program.”

Training needs leaders, resources, people interested in learning, and a purpose (lest we forget why we do training in the first place).

Training has a structure and that structure is not one-size-fits-all. People have varying modalities of learning. Even the best material won’t work well for everyone. This is where that whole (materials, time, place, etc.) details thing comes into play. But, again, the focus of the book is to lay out the challenges and considerations, not specifics.

Finally, you need to see that training produces results. This can be fiendishly difficult to measure, so it’s vitally important to set expectations before doing the training. Being happy is not considered a valid measure of ROI for the company.

As mentioned earlier, the book is replete with references, and for those who create training material, or even those who want to create an environment within their company where training can be effective, it is a good starting point. For those who have been involved in training for some time, the book can serve as a reference that can be used to educate management in the scope, cost and investment (they’re different) necessary to create a learning environment that will have long-term benefits.

Overall, a decent read. I found the interviews with CLOs (chief learning officers) incisive. As with all organization-level things, there are no easy answers. And you do get what you pay for. You’ll dispatch this book in a few hours and then find yourself going back over it later.

 

Read Full Post »

I just finished reading Walter Isaacson’s biography of Leonardo da Vinci. As with his previous biographies, this one is just as well-researched and presented.

Leonardo is one of those one-name people. Long before Prince, Madonna or Usher (funny how they’re all designations of some sort) the name da Vinci was attributed to only one individual. All this fuss over a distracted, self-willed person who started far more things than he finished.

Yes, he was a prima donna. Yes, he tended to tinker over a thing far after the commissioner expected it to be complete. Yes, he was distracted by, well, just about anything. But, what a mind.

Best of all, he just didn’t seem to give a damn. About expectations, glory or money. Which is not to say that he didn’t care about comfort. He liked pretty things (and pretty people). But one gets the distinct impression that what he really wanted was to have a patron (patron sounds so much nicer than sugar daddy) who would appreciate the quality and not quantity of his work. He wanted the freedom to explore the universe (such as it was in the late 15th and early 16th century).

He loved pageantry. He loved to learn, to teach and to collaborate. He was a self-promoter who appears to have been not really up to the task.

He was always stretching, always reaching beyond his understanding. He was always reinventing himself.

People would commission him based on his past works. Many times this led nowhere for them. With da Vinci, they should have been thinking about what he might do rather than what he had done. How poorly would he fare in today’s world where people are hired to essentially give repeat performances. This being especially true in the technology sector. Kind of like wanting to visit an exotic land where you stay at Holiday Inn, eat at McDonalds and everyone speaks English. Or in perhaps more relevant terms, you invest in a startup that’s going to change the world with a guaranteed return and no risk.

Leonardo was the definition of the deep bench. It wasn’t until the end of his life that he found in the king of France a person who got that you don’t “hire” a Leonardo for what he does, but rather for who he is and how he changes those around him. I find quite disappointing the expectation people have of being assured an immediate return at a cut rate. This is the measure of mediocrity in both the individual and business worlds. People want to be given a fish and have no patience to learn how to fish themselves. How much better would the world be if we sought out and nurtured those capable of creating a multiplier effect?

He was also very human. He could be unreliable and ill-tempered. His relationship with his relatives was the stuff of reality television.

Isaacson does an excellent job of putting meat on the bones of this icon of creativity.

I’ve read quite a few treatments of da Vinci’s life and this one is by far the best. So many seem to be intended to ride the tide of get-genius-quick that is so pervasive today. Nothing like everyone being above average. He didn’t become the man-cum-icon overnight. He became who we know over a lifetime, with the attendant work. As Isaacson noted, he is seen as a genius rather than a craftsman because, but certainly not solely, of his habit of not releasing his work until it was perfected. Granted, for most people this would be attributed more to OCD than genius (and with good reason).

Isaacson’s narrative style is engaging and I hope that someone takes the time to translate it to a long-form, visual format.

Overall, I came away with the sense that da Vinci was a real person who inhabited a real world. I can’t say I’d’ve liked to have lived there, but it’d’ve been fun to visit.

Read Full Post »

I’ve been reading Isaacson’s da Vinci biography (that’s another post) and thinking about metaphors, analogies, teaching and learning.

Teaching is hard. The world is a complex place, so that’s to be expected. Learning is hard, although many people expect it to be easy. I mean, really, like, you can just Google things.

Well, really, not so much.

For me teaching is all about the group and the motivating example. Humans learn best by metaphor, going from the known to the unknown. Kind of like having one foot firmly planted on the lip of the hot tub and testing the temperature with the other. Just jumping in might work. Not something to rely on though. If you give people a framework they can relate to, it affords them a place from which to extend what they know.

On my high school senior physics class final was a problem that began, “A rock explodes into three pieces …”. Really? Why? It’s been a lifetime since that event and yet the premise of the problem still sticks with me. During my undergraduate studies, I had a physics professor whose motivating examples were based on James Bond situations. As contrived as physics problems tend to be in order to tease out a self-contained use of some specialized equation, at least contextualizing things via James Bond gave them a veneer of reason. Mostly. Sort of.

During my graduate studies, I dropped a class in neural networks because the professor presented the material in such an abstract fashion that I couldn’t anchor it. It wasn’t until I took Andrew Ng’s first Machine Learning class on Coursera (one of the first two courses offered) that neural networks actually made sense to me. He presented the material in the context of real-world use cases.

I’m not saying that everything can be learned by simply having a good story. If you work with computer software long enough, you’ll have to confront numbers represented in binary, octal or hexadecimal. You’ll just have to memorize the conversions. The same is true for operator precedence.

Let’s look at learning for a minute, lest everyone think that I’ve forgotten it.

In order to learn something, assuming that it’s not rote memorization, you must accept the framework within which it exists. Unless you can do that, things won’t stick. You will forever be condemned to Google-it hell. I can usually tell the people who will have difficulty learning a programming language when they complain that it’s not like the language they’re used to. As I like to say, “you can program C in any language.” Some people never get past that point. And we all suffer because of it.

I’m not limiting this to C-based languages. The interpreted world has more than their fair share of people still programming BASIC in any language. I like to think of them as the Python without classes crowd. I’m not sure where the whole “classes are bad” mentality came from, but it seems to have a strong following.

For a less software example, consider using a word processor. Do you still type two spaces after the period? Unless you’re using a typewriter, all you’re doing is messing up the formatting software (technically, the hyphenation and justification (H&J) system). Try this experiment. Take a word processing document and look at how it formats the text with both a single and a double space. This becomes especially evident when full-justifying paragraphs.

All well and wonderful, but what about the pretzels?

Yeah, about those. It struck me that this whole teaching / learning thing can be likened to making pretzels. You know the big, soft, knotted, salt-covered ones. Consider the dough as the learner, the salt and shape the material to be learned and the kitchen equipment the methodology. The cook is the teacher. If the dough is frozen or dried out, it can’t be shaped. This is a refusal to accept the rules of the material. If the equipment is inadequate or the cook lacks an understanding of how to use it, the results will be inconsistent. Likewise, if the cook doesn’t understand how to handle the dough or when to apply the salt, things will probably not be the best. It is only when all three elements are brought together properly that the expected outcome is achieved consistently.

In the realm of teaching this means that the teacher needs to be able to create a motivating example and framework that works for the learners. This changes over time. Just as the world changes. The teacher should be always looking for signs that a student is frozen and be ready with additional material they may more readily relate to. The most difficult cases are the dried out students. They see no need to learn the new material and are at best taking up air. At worst, they are disruptive. These individuals should be given to understand that their presence is optional and that others should be allowed to learn.

Finally, as a teacher, always, always be looking for what you can learn from the students. The world is bigger than your little pretzel shop.

Read Full Post »

When I was asked to create and teach a Python class, I had to ask myself, “where is the starting point for this language?”

When it comes to computer languages, I like them logical, powerful, compact and fast. The language currently at the top of my list is Swift. When it comes to longevity, C++ wins hands down. Python is neither compact nor fast. It is, however, very popular. It’s also very flexible.

The C language has so many children that it’s easy to use analogs. What about Python?

I’ve taken intro CS courses from Harvard, Rice and Stanford, all of which use Python. They all teach C programming in Python, in my opinion. I get where they’re coming from. You’ve spent years using FORTRAN, then Pascal, then Java.

Happily, my first language was FORTRAN. You think you need to build all your own data structures if you use C? Consider yourself lucky. But that’s a story for another day. My second language was APL. Go ahead, try to teach APL the way you teach C. Knock yourself out. A bit later I picked up BASIC, which after FORTRAN was trivial. Next came Forth. That took a bit to wrap my procedural head around. My experience with APL had taught me that it’s perfectly fine to focus on the data and not the process. Forth’s focus on the stack is strangely intriguing. The fact that it lives on in UEFI and PostScript is a testament to the fact that there is value in that view of the world.

So my approach was to start with data representation. In Python the world is all objects and references. But for some reason, people don’t want to approach the language from that standpoint. They like to talk about how easy it is to write ‘hello, world’ programs or how it’s more readable than Perl. Aside from APL, I don’t know anything that’s less readable than Perl. Except maybe TECO macros.

Now that I had a place to start, life should be a hop, skip and a jump to classes, yes? Not so fast, pilgrim. It’s easy to explain that everything is an object and that integers are a sub-class of rationals. It’s easy to explain that 0 takes 12 bytes of storage. You can even justify the lack of a character object. But I think you do a true disservice if you don’t address the fact that strings are Unicode-based. I’ve dealt with enough internationalization issues to know that glossing over this would shortchange the student. I probably spent more time working on the string section of my class than any other.

The reason? It’s one thing to present information that raises obvious questions like, “so you’ve told me that there are multiple ways to represent the same grapheme and that these strings will have different code units, but what am I supposed to do about it?” Or “how am I supposed to sleep knowing that a Vai digit four isn’t the same as an Arabic numeral 4?” You might as well throw them under a bus if you honestly believe that you’ve discharged your duty as a teacher by telling the students that there are land mines out there and that they should bring an umbrella. You’ve essentially just given them a compelling reason to never use the language.
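A small demonstration I lean on for that discussion (a minimal sketch; the example strings are arbitrary):

```python
# Two spellings of "café": precomposed é versus e + combining acute.
# They render identically but compare unequal until normalized.
import unicodedata

nfc = "caf\u00e9"       # U+00E9 LATIN SMALL LETTER E WITH ACUTE
nfd = "cafe\u0301"      # 'e' followed by U+0301 COMBINING ACUTE ACCENT

print(nfc == nfd)                                # False
print(len(nfc), len(nfd))                        # 4 5
print(unicodedata.normalize("NFC", nfd) == nfc)  # True

# And the digit trap: a Vai digit four is not an ASCII '4', even though
# both report a numeric value of 4.
vai_four = "\uA624"     # U+A624 VAI DIGIT FOUR
print(vai_four == "4")                           # False
print(unicodedata.digit(vai_four))               # 4
```

None of this answers “what am I supposed to do about it?” by itself, but it does make the question concrete enough to discuss normalization and comparison policies.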

Once all the ‘core’ data types are out of the way, it’s time for some core data structures like list, tuple, and namespace.

Functions, generators, and lambdas come next. These are relatively straightforward. The trick to generators is to show the equivalent implementation in C++. Yes, they are different beasts, but you can get close enough. Similarly with lambdas.
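The C++ side of that comparison lives on slides; here is a minimal sketch of the Python half, with an invented squares() generator standing in for the lab example:

```python
# A generator produces values lazily; the equivalent list is eager.
def squares(limit):
    """Yield the square numbers below limit, one at a time."""
    n = 0
    while n * n < limit:
        yield n * n
        n += 1

eager = [n * n for n in range(10) if n * n < 50]  # built all at once
lazy = squares(50)                                # nothing computed yet

print(eager)        # [0, 1, 4, 9, 16, 25, 36, 49]
print(next(lazy))   # 0
print(list(lazy))   # [1, 4, 9, 16, 25, 36, 49] - resumes where it left off

# Lambdas get a similar side-by-side: a small anonymous function used
# where a named def would be overkill.
print(sorted(["pear", "fig", "banana"], key=lambda s: len(s)))
```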

Now, you’re in a position where classes can be reasonably explained. A point to mention here is that back before embarking on an exposition of data types, keywords and variable naming conventions were addressed. Now, for most students, this goes in one ear and out the other. Having arrived at classes, all those naming conventions come back like an overeager Sheltie wanting to play. Ignore them at your peril. Enumerations are also introduced here.

For many, all the object bits will now come into focus. This is a good thing. I’ve never liked the approach where students are taught how to use bits and pieces of libraries without the foundation to understand what’s actually happening. They end up with a false sense of accomplishment and may never seek to build an accurate model of the language’s world. They’re like Jeff Goldblum’s character in ‘The Fly’, having no clue how things actually work since he just specified what he wanted a given part to do and plugged the parts together. We all know how that worked out for him. It also emphasizes the importance of debugging your system.

I’ve never understood why some people shy away from classes in Python. They act as though organization is an inherently evil thing. They also probably have all their laundry in two piles in the middle of their bedroom (one dirty, one clean).

Classes point us to resource management, but we can’t do that discussion until input / output is covered. That gives us the idea of using classes (file streams) and their methods to process data. Here’s where string formatting and core data conversion come in.
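A minimal sketch of where that lands (the file name and record layout are invented for the example):

```python
# Resource management via a context manager: the file object is a class
# whose __exit__ closes the stream even if an error occurs.
readings = [("2020-08-17", 98.6), ("2020-08-18", 99.1)]

with open("readings.txt", "w", encoding="utf-8") as out:
    for day, temp in readings:
        out.write(f"{day},{temp:.1f}\n")       # string formatting

with open("readings.txt", encoding="utf-8") as src:
    for line in src:
        day, temp = line.rstrip("\n").split(",")
        print(day, float(temp))                # core data conversion
```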

Up until now, exceptions have been alluded to. Now there’s enough structure to not only address them, but to give meaningful examples of their use.

At this point, you’re done with the core language. So we’ll address unit testing. We’ve done bits of this along the way since the introduction of classes, but now we can talk about the unit test library.
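A minimal sketch of that discussion (the clamp() function under test is a stand-in made up for the example; the point is the TestCase structure, the assertions and the runner):

```python
import unittest

def clamp(value, low, high):
    """Return value limited to the closed range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampTests(unittest.TestCase):
    def test_inside_range_is_unchanged(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_values_are_limited(self):
        self.assertEqual(clamp(-3, 0, 10), 0)
        self.assertEqual(clamp(42, 0, 10), 10)

    def test_bad_bounds_raise(self):
        with self.assertRaises(ValueError):
            clamp(1, 10, 0)

if __name__ == "__main__":
    unittest.main()
```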

What remains are sections on sequence and associative containers. An important aspect of this is teaching how to select the appropriate container. Yes, you can use list and tuple alone, but there are better things to do with your life than reinventing the wheel. Technical interviews insisting that people be able to balance binary trees notwithstanding.
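A small sketch of the container-selection argument (the sizes and records are arbitrary):

```python
# Membership tests: a list scans every element, a set hashes.
import timeit

haystack_list = list(range(100_000))
haystack_set = set(haystack_list)

print(timeit.timeit(lambda: 99_999 in haystack_list, number=200))
print(timeit.timeit(lambda: 99_999 in haystack_set, number=200))

# Associative lookups: a dict keyed by what you search on beats
# rescanning a list of tuples every time.
devices = [(1, "pump"), (2, "monitor")]
by_id = {device_id: name for device_id, name in devices}
print(by_id.get(2, "unknown"))   # monitor
```

For a handful of items the difference is noise; at scale it dominates, and that’s usually all the motivation students need.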

Finally, a brief introduction to the standard library. Before Googling, how about being aware of what’s already in the box.

You’ll note a distinct absence of web browsers, GUI applications, client-server systems, etc. Just the language here.

Would my class make you a Python expert? By no means. As with all things, you become proficient through years of study and practice. It is my hope that my Python class would give you a good start on that journey.

Read Full Post »

The thing about teaching a class is that it can’t actually be done. You can only teach an individual.

I’ve been teaching since I was in middle school. Hard to believe, but true. That effort, to teach my younger sister (younger by six years) how to do addition, was an utter failure. My next major outing was to create a one-week segment for my 12th grade physics class on black holes, including a test. I believe that one fared better, although I don’t believe my endeavor to expose my classmates to then cutting edge cosmology was necessarily appreciated.

Throughout college, I was a TA and grader for various CS classes. I spent a summer as the Nature and Conservation director at a Boy Scout camp and two summers teaching programming to teens. By the time I entered the professional arena, I knew teaching (tech transfer) was in my DNA.

A decade ago, I worked for a company where teaching C++ was part of the job.

Fast forward to my current position. I’ve had the opportunity to create and present Modern C++ (C++14) training within my company. This has come in two flavors, one to jumpstart C developers into C++ and one to bring C++98/03 developers up to speed on the state of the language. Both classes have about 15 hours worth of material.

The first challenge in teaching modern C++ is that of linearization. C++ has a wonderful breadth. Unfortunately, it can be challenging to present the material in such a way as to be both meaningful and at the same time not resort to appeals to Oz-ian “pay no attention to the man behind the curtain.” My success in this area I attribute to years of exposure to the materials of James Burke.

The second, far more interesting challenge, is hitting that Goldilocks zone where everyone is learning. Even when teaching C++ to C developers, there will be those who immediately take to its conceptual frameworks and there will be those who probably never will. It would be easy to cater to the former and simply write off the latter as Luddites. Alternately, one could obsess on the latter group and end up boring the former to tears. A fundamental balance can be achieved by using labs which build upon a coherent problem and lead the student to embrace ever more abstract aspects of the language.

In the case of my modern C++ for C++98/03 developers class, I take an entirely different approach. With them I use a progression from changes in the language, to important elements of the standard library, to useful Boost bits and finally to the contributions made by the GSL. Within this progression, I give attention to each feature or class using a presentation / discussion format. Unlike the jumpstart class, I can’t use the labs to modulate the pace of the class. Each group I teach will progress at their own pace. (I limit my class size to about 20.) In this advanced class, I also find myself researching answers to specific, real-world issues that the students are encountering. I then fold these results back into the materials I present.

As with any modern company, there is a mix of platforms under development. This has necessitated my doing a bit of bounds checking to be sure that the materials I present will work in a Visual C++ / gcc / VxWorks world. With the advanced class, I present not only the modern (C++14) methods (with a bit of C++17 previews), but also the pre-C++11 mechanisms as not everyone has the luxury of constantly upgrading their tool chains.

Overall, it has been an enjoyable experience. One I’m sure I’ll be repeating in the future.

Note: As a nod to an interesting Stanford professor (Mehran Sahami) and in the voice of Starfire, I have taken up the habit of “the throwing of the candy.”

Read Full Post »

I’m big on education, think Swift is a great language, and believe games can be a practical way to motivate learning. So, how did I put this into practice?

What’s My Motivation?

During my career, I’ve had the opportunity to teach programming and software development (two distinctly different things) to both teens and adults. One thing that’s always struck me is the disjoint nature of the material. Not in terms of the subject matter, but rather with respect to the examples being used. In learning a spoken language, you don’t abandon a part of speech as you acquire another. Learning is cumulative. As we learn, we revise our approach.

In teaching programming, we seem to be so focused on being focused, that we divorce ourselves from the actual processes that go on when we solve real-world problems. In the past few years, I’ve noticed that people are producing programming language courses reduced to five minute info-bites. Here’s the thing, software development is a long-form practice.

Early Insight

I put together my first programming curriculum in 1981 when I was an instructor at Computer Camp, Inc. in Santa Barbara, California. The students were teens and the problem in my mind was motivation. Unlike adults, most of the teens I’ve taught over the years don’t approach programming from experience. They have a beginner’s mind. This is both good and bad for a teacher. The good is that they don’t have bad habits yet. If properly taught, they will think in the language. The bad is that we, as experienced developers, have come to see programming languages as a collection of “computer language components” and not as a methodology for solving problems as expressed in a specific syntax. As a result, the vast majority of software written today is the equivalent of a transliteration. All the words are there and a native speaker could probably make sense of it, but they would suffer greatly.

In 1982, I found myself tasked with teaching an advanced BASIC programming class. It was then that I hit upon the idea of a dungeon crawler. The students were interested from the outset. They appreciated that everything they were spending their precious time on was leading to the outcome. They looked at the language as a means to solve problems and not a way to take a solution from another language and reapply it.

So, now I understood that it was possible to motivate and teach people how to think in a programming language. Could I leverage this understanding?

Teaching Revisited

In 2008 I had the opportunity to teach electrical engineers C++ and SystemC. These were individuals whose software development experience was grounded in C programming. Their code and, indeed, approach to software development was procedural, as one might expect. In order to teach them SystemC, people must first learn C++ (the language SystemC is written in). After working with the materials we had been using, I felt strongly that we weren’t motivating an appreciation and understanding of object orientation. I had the opportunity to participate in the creation of an entirely new C++ curriculum. From the beginning it introduced object orientation. There is an interesting shift that takes place when the responsibility for the data shifts from all the code that touches it to objects that manage it.

The Stanford Way

I’ve been watching Stanford’s iOS development course (CS193P) since it was first made available. It has undergone an interesting evolution over the past decade. Initially, it taught Objective-C development and iOS programming. This included pure (non-GUI) Objective-C and test driven development. With the fundamentals in place, the model-view-controller paradigm was taught as the foundation of iOS development. Then the class shifted into the standard piece-part methodology we see everywhere, albeit with a distinctly iOS bent.

Over the years, both the pure language and test driven development aspects went away. These were relegated to reading material. Objective-C was supplanted by Swift. More sophisticated areas were covered as the iPhone evolved. By the end of the course, students can build complex apps. But it feels like people are learning APIs rather than the language. But what can you do in 10 weeks? Would people actually pay for a college course to learn Swift and then another for iOS development?

Enter Wumpus

About five years ago, someone asked me to teach them how to make iPhone games. They had no software development experience and little desire for the traditional approach of learning via classes or books. They understood the ins and outs of game play and had a keen sense of what made a game playable.

The process that followed was the condensation of forty years of writing code and developing software. Today, when we work with just about any OS API, we have to deal with a context. But how do you motivate the very idea of the context? How do you teach people to work effectively with the net result of over fifty years of software development practices without simply expecting them to accept that this is the way it is? You can easily create an animation, but what is happening behind the scenes? Being able to understand and explore these questions is what will determine if someone will be capable of working beyond the software equivalent of writing pulp fiction.

In the end, I settled on teaching software development through the very old game of Hunt the Wumpus. This game appeared in the original Unix distributions. It has simple rules, a bit of action, some random elements and is, on the whole, able to be understood by a nine-year-old. Its implementation can be used to demonstrate multi-dimensional arrays, randomization, object-orientation, internationalization, error handling, data visualization and testing.

As this was before Swift, it was implemented in Objective-C.

Personally, I used my implementation of Wumpus to experiment with iOS. Specifically, I was tinkering with storyboards in Xcode. I wanted to see if it was possible to implement the user interface of Wumpus entirely using scenes representing the rooms within the game. This is, of course, a horrific abuse of the scene concept and is the equivalent of unfolding an array of objects into individual routines. It did, in fact, work. And I would not ever recommend that the technique be used for production code.

Enter Swift

Two years ago Apple announced Swift. Immediately, I started working with it. Like many languages before it, Swift incorporated lessons learned. In the case of Swift, many lessons were learned. You can look at my earlier posts to see my past musings on the language.

In May, I found myself with sufficient time on my hands to undertake a rewrite of Wumpus in the soon to be released (now just released) Swift 3. Concurrently, iOS 10 was to come out and would be supported by Xcode 8. Changes all around. My initial Wumpus model was readily brought over from Objective-C. Over time I realized that many of the things in that implementation could be completely folded down to a single line of Swift code. Swift wasn’t an extension of an older language. In fact, as the language evolved from version 1 to 3, many elements initially present were removed or replaced. Today’s Swift is much more consistent as a result.

I knew the pieces of the user interface that would be required and set about recreating them, this time in a sane fashion. Once this was done, I began the process of connecting the view to the controller layer and eventually the model. All the while, adopting the Swift 3 and iOS 10 idioms.

At this point, I had a playable version of Wumpus. There was a main scene that took you to the rules or the game. The rules were a static chunk of text. The credits were static attributed text. You could navigate the maze and be moved (scene with alert) or die (scene with alert). Shooting came in and initially used a scrolling picker with the room numbers. Dull stuff.

Just Add … Everything

Now came the interesting bits. The iOS-specific bits.

It’d be dull to cover this in detail, so here’s a rough sequence.

  • 30+ background images
  • danger annunciator images
  • tint overlay to gray scale backgrounds
  • ambient sound across scenes (looped soundtrack)
  • incidental sounds within scenes (looped for danger and one-shot for events [moved, died])
  • added settings controls for all audio volumes
  • asset catalog used for both image and sound management (simplified access)
  • rebuilt settings using a table with dynamically constructed cells with action handlers
  • saved statistics using class-based archiver
  • rebuilt statistics using dynamic data generation from the statistics data
  • segues and segue unwinding (navigation control)
  • timers (scene auto-transition from title scene)
  • tap gestures (eliminating navigation buttons)
  • replaced static rules text with chunked pages and swipe gestures
  • custom font (Kalam)
  • parallax (titles, danger annunciators and event imagery)
  • dynamically constructed attributed text (credits)
  • endless scrolling text loop (credits)
  • dynamically constructed tables from plist data (statistics field names)
  • static collection view replacing lame picker interface (shoot scene)
  • app analytics (Firebase)
  • ad support (AdMob)
  • JSON processing (credits source import)
  • core data (credits attributed string construction)
  • built to work with both iOS 9.3 and 10.0 (core data had a major change)
  • social network (Facebook / Twitter) posting

Testing, Testing

An important part of creating an iPhone application is being able to ship it. But before that you should really test it. A lot. Really.

To do that you need to do the dance of getting certificates and creating an app instance. With these you can push builds to Apple’s servers where they can be accessed by internal testers (all builds) and external testers (specific builds, after review [sort of]). Then comes the great fun of prodding the testers.

Collateral Damage

It’s been tested. All the features (for this release) are present. And it’s time to ship, right? Actually, no. You can’t ship an app without creating a bucket and a half  of collateral images (screenshots) for the app store. There’s also the small matter of the web site that will support the app. And no self-respecting app would go up without a game play video.

About those images. You technically only need one set at the highest screen geometry. The others will be generated by scaling. Now, you’ve gone to all the trouble of adopting an adaptive user interface so things look reasonable on all the various screen geometries, so not generating imagery for every size would just be lazy. Happily, all these can be generated from simulator screen capture. Imagine having to round up half a dozen devices just to do screen caps. Did I mention that video? Well, you can’t video capture from the simulator. So, for those of you who look at my app on the store, there’s just the one from my current iPhone.

I do keep referring to Wumpus as an iPhone app. Well, it is. I designed it for portrait-only. Now this doesn’t prevent you from putting it on an iPad. The problem is that Apple has never updated the screen size used for iPhone apps on an iPad. It’s this pointlessly scrunched up screen size. It looked brain dead. So, I went back and tweaked the layout to be less egregious. It’s not pretty, but why are you running it on an iPad in the first place?

Can I go now?

What could possibly be left to do?

  • specification of age rating
  • description for the store
  • verification that you own or have license for all the bits you’re using
  • text for alerts presented to the user, if certain features are used

About that whole licensing point. Wumpus uses a lot of images and audio tracks. They all need to be acknowledged properly. That was a driving factor in using Core Data to track them. All the ones I used were either public domain or minimally encumbered. The biggest problem I had was not finding them, but selecting from among them.

And yes, now it’s ready to ship.

Ship It

So, about two weeks ago, I submitted Wumpus for review. Well, I tried to. Apple will only review apps built against finalized OS libraries. Wait. Wait. So I added a few more bits to fill the time. On Monday 12 September 2016, I was able to submit Wumpus for review, after a brief diversion of trying to find out how to answer new privacy questions related to the use of Firebase and AdMob. Then came the wait. Did I forget something? Was there some horrible error condition lurking, waiting for the mystical Apple auto application checkers to detect it? Would the review be delayed by more relevant applications (honestly, that’s just about every other app)? Nah, it was all good.

On Wednesday 14 September 2016, I got an auto-generated email informing me that my app was available for sale. Pretty anti-climactic really. If you have an iPhone/iPad, you can download it today. The related web site is also online.

And?

So where’s the tie-back to the teaching programming / software engineering? That was the point, right? Absolutely. I’m not done. Although Wumpus represents an interesting résumé piece and I’ll be extending it with additional technologies (such as web, Apple Watch), my take away is an example that I know I can use to teach both Swift and iPhone development. Like all good stories, this one leaves me wanting more.

Read Full Post »

Apparently, I am the first person to complete SEI CERT’s online version of their course in C/C++ secure coding. With all my pausing the videos to take notes, it took me forty hours to get through, but it was worth the time. Actually, it’s two courses, one in security concepts and the other in secure C/C++ coding.

There are four full days’ worth of material presented. The videos are chunked into 2- to 60-minute segments, which is helpful as the presenters are speaking to an actual class. This choice of format is both good and bad. Good in that you see real questions being asked. Bad in that, as with many classroom situations, the presenters go off into the weeds at times.

There are six exercises covering major elements (strings, integers, files, I/O, etc.). For these, web-interfaced Debian Linux VMware VMs are provided. These boot quickly and have all the tools required to perform the exercises. An hour is allotted for each, which is plenty of time. Following each is a section covering the solutions. This is especially helpful as there’s no one to ask questions of during the exercises. It’s also possible to download a copy of the VM being used. Sadly, there is no provision for Windows or native Macintosh environments.

Two books are provided (in various formats): Secure Coding in C and C++, Second Edition and The CERT® C Coding Standard, Second Edition: 98 Rules for Developing Safe, Reliable, and Secure Systems.

Generally, I found the material was well delivered. The content in the areas of (narrow) strings, integers and memory handling was exhaustive. The sections on file handling and concurrency were almost completely lacking any Windows coverage. Given the number of business and individual systems based on Windows, this is quite disappointing. There was also a bit of hand waving around the use of wide characters. Not providing complete coverage here is a true deficiency, as properly dealing with mixed string types is a reality that isn’t going to go away.

As to this being a C and C++ class, well, sort of. There are nods to C++, but like so much code out there, the C++ is tacked on the side. It really should be treated as a separate class, as is done with Java. Many recommended mitigations came with the proviso that they were not portable. I think that when you’re dealing with security, the fact that there exists a mitigation outweighs its portability. These things can be abstracted to achieve portability. As someone who’s spent a fair amount of time doing cross-platform development (Unix-MacOS, MacOS-Windows, Linux-Windows), these are important issues to me.

To those who argue that securing code will make it slow, I would ask what a single security compromise would cost their company in reputation and direct monetary terms.

Would I recommend the class? Yes. There are important concepts and real-world examples on display here. I challenge anyone to take this class and not be horrified by the way most C code is written.

Could I teach this material? Definitely.

Read Full Post »
