Posts Tagged ‘education’

Last week (17-21 August 2020) I had the pleasure of serving as staff (trainer/facilitator) for the first joint MDIC / FDA / MITRE Medical Device Cybersecurity Threat Modeling Bootcamp. The mind behind the training material was Adam Shostack. The bootcamp was originally planned as in-person training, but the pandemic forced a shift to on-line delivery.

The objectives of the bootcamp (from the MDIC site):

  • Intensive, hands-on sessions on threat modeling.
  • Learn about a structured, systematic and comprehensive approach to threat modeling for engineering more secure systems from SMEs from the public and private sectors.
  • Learn the latest updates on medical device cybersecurity and related areas from representatives of the FDA and industry.
  • A networking opportunity with SMEs from MedTech and non-MedTech sectors to learn about cybersecurity best practices that can be incorporated into the medical device industry.
  • Contribute to the discussions on the development of the Medical Device Threat Modeling Playbook.

For anyone not familiar with Adam’s threat modeling training methodology, it is highly interactive, small-group-focused training. It is the way we pre-flighted the bootcamp when the training staff got together for three days in Washington, D.C. in February of 2020. To his credit, Adam deconstructed the material and re-envisioned it for a remote audience.

This first bootcamp had about sixty participants from across the medical device industry, including manufacturers, HDOs and regulators. The training provided a good introduction to the concepts of threat modeling and encouraged an appreciation of the needs of development, security, management and regulators. Instead of the typical classroom-style presentation followed by table-based group interactions, we had topic-based videos which the participants viewed in a dedicated on-line learning system, individual assignments, presentations to the entire bootcamp, and group working sessions.

Since one outcome of this bootcamp was to assist in the creation of a “playbook” for medical device threat modeling, the entire process was shadowed by members of the working group responsible for that effort.

So, what was my take-away as a trainer and practitioner?

Providing live distance learning is hard. The dynamic is completely different from in-person training. I’ve been taking remote live distance training classes since the proto-Coursera Machine Learning and Database classes from Stanford, nearly a decade ago. As a learner, the ability to stop the video and take notes and go back over things was invaluable. The lack of interactivity with the instructor was a drawback. This was my first experience on the other side of the screen. As a trainer and facilitator, keeping remote participants on-topic and on-schedule was challenging. Having the ability to use multiple computers (one for interaction [43″ 4K display] and another for staff side-channel discussion) was invaluable. In an in-person setting, I’d’ve had to leave the group or try and flag down another staff member, distracting from the flow.

Observationally, I think the dynamic among the participants was a bit diminished. Typically, you’d have breaks, during which participants would exchange ideas and make connections. At the end of the day, groups would have dinner together and discuss what they’d learned in greater detail.

I believe that, overall, the training was successful. My group indicated that they’d come away with a better understanding of threat modeling and a greater appreciation of the context in which the activity exists. We have another session coming up and I’m sure that it will incorporate all the lessons learned from this one. I’m looking forward to it.

The training is focused on threat modeling generally and so those not in the medical device industry would also profit from it. If you’re interested, I recommend that you visit the MDIC site linked above.

Read Full Post »

I’ve been reading Isaacson’s da Vinci biography (that’s another post) and thinking about metaphors, analogies, teaching and learning.

Teaching is hard. The world is a complex place, so that’s to be expected. Learning is hard, although many people expect it to be easy. I mean, really, like, you can just Google things.

Well, really, not so much.

For me teaching is all about the group and the motivating example. Humans learn best by metaphor, going from the known to the unknown. Kind of like having one foot firmly planted on the lip of the hot tub and testing the temperature with the other. Just jumping in might work. Not something to rely on though. If you give people a framework they can relate to, it affords them a place from which to extend what they know.

My high school senior physics final included a problem that began, “A rock explodes into three pieces …”. Really? Why? It’s been a lifetime since that event and yet the premise of the problem still sticks with me. During my undergraduate studies, I had a physics professor whose motivating examples were based on James Bond situations. As contrived as physics problems tend to be in order to tease out a self-contained use of some specialized equation, at least contextualizing things via James Bond gave them a veneer of reason. Mostly. Sort of.

During my graduate studies, I dropped a class in neural networks because the professor presented the material in such an abstract fashion that I couldn’t anchor it. It wasn’t until I took Andrew Ng’s first Machine Learning class on Coursera (one of the first two courses offered) that neural networks actually made sense to me. He presented the material in the context of real-world use cases.

I’m not saying that everything can be learned simply by having a good story. If you work with computer software long enough, you’ll have to confront numbers represented in binary, octal or hexadecimal. You’ll just have to memorize the conversions. The same is true for operator precedence.
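To make that concrete, here’s a minimal sketch in Swift (Swift only because it’s the language of the later posts here) of one value written in each base. The compiler converts the literals for you, but reading them fluently still takes memorization:

```swift
// One value, three notations.
let decimal = 42
let binary  = 0b101010   // binary literal
let octal   = 0o52       // octal literal
let hex     = 0x2A       // hexadecimal literal

print(decimal == binary && binary == octal && octal == hex)  // true

// Going the other way: render a number in a given base.
print(String(42, radix: 2))   // "101010"
print(String(42, radix: 16))  // "2a"
```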

Let’s look at learning for a minute, lest everyone think that I’ve forgotten it.

In order to learn something, assuming that it’s not rote memorization, you must accept the framework within which it exists. Unless you can do that, things won’t stick. You will forever be condemned to “Google it” hell. I can usually tell who will have difficulty learning a programming language when they complain that it’s not like the language they’re used to. As I like to say, “you can program C in any language.” Some people never get past that point. And we all suffer because of it.
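A hypothetical Swift illustration of what I mean. Both fragments compile and both compute the same sum; only the second is thinking in the language:

```swift
let prices = [1.25, 2.50, 3.75]

// "C in any language": index arithmetic and a mutable accumulator.
var total = 0.0
var i = 0
while i < prices.count {
    total += prices[i]
    i += 1
}

// Thinking in the language: state the intent, skip the bookkeeping.
let idiomaticTotal = prices.reduce(0, +)
```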

I’m not limiting this to C-based languages. The interpreted world has more than its fair share of people still programming BASIC in any language. I like to think of them as the Python-without-classes crowd. I’m not sure where the whole “classes are bad” mentality came from, but it seems to have a strong following.

For a less software-centric example, consider using a word processor. Do you still type two spaces after the period? Unless you’re using a typewriter, all you’re doing is messing up the formatting software (technically, the hyphenation and justification (H&J) system). Try this experiment. Take a word processing document and look at how it formats the text with both a single and a double space. This becomes especially evident when full-justifying paragraphs.

All well and wonderful, but what about the pretzels?

Yeah, about those. It struck me that this whole teaching / learning thing can be likened to making pretzels. You know, the big, soft, knotted, salt-covered ones. Consider the dough as the learner, the salt and shape as the material to be learned, and the kitchen equipment as the methodology. The cook is the teacher. If the dough is frozen or dried out, it can’t be shaped. This is a refusal to accept the rules of the material. If the equipment is inadequate or the cook lacks an understanding of how to use it, the results will be inconsistent. Likewise, if the cook doesn’t understand how to handle the dough or when to apply the salt, things will probably not be the best. It is only when all three elements are brought together properly that the expected outcome is achieved consistently.

In the realm of teaching, this means that the teacher needs to be able to create a motivating example and framework that works for the learners. This changes over time, just as the world changes. The teacher should always be looking for signs that a student is frozen and be ready with additional material they may more readily relate to. The most difficult cases are the dried-out students. They see no need to learn the new material and are at best taking up air. At worst, they are disruptive. These individuals should be given to understand that their presence is optional and that others should be allowed to learn.

Finally, as a teacher, always, always be looking for what you can learn from the students. The world is bigger than your little pretzel shop.

Read Full Post »

The early years of computing were like a Renaissance dance, lots of people who somehow managed to dance with each other at least once. A Mind at Play: How Claude Shannon Invented the Information Age gives us yet another place to stand and watch that dance.

Claude Shannon is one of those people who fundamentally changed the way we look at the world. The problem with fundamental change is that we tend to be on one side or the other of it. Today we speak of information theory as though it’s as obvious a concept as making paper. Kind of the same way we obsess over software developers being able to write code to sort numbers or reverse linked lists. At some point, the fundamental reality of the existence of high-quality libraries and data structures will make these queries as relevant as requiring people to explain a tape sort. But I digress.

He was a researcher, tinkerer, teacher, juggler, and for all appearances didn’t seem attached to labels. He had Vannevar Bush looking out for him. As an MIT professor, he had Danny Hillis and Ivan Sutherland, among others, as doctoral students. He worked with Alan Turing during World War II. And the box-switch-thing that turns itself off. That was him.

Reading the book, you get a sense of possibilities explored. So often people either dismiss or defer possibilities. He literally had a basement full of them. If only he’d known Ron Popeil, every home might have a few of them.

I don’t know how well he would fare in the world today. In his time, Bell Labs basically paid to have him around. He had cachet. He also helped focus people’s ideas. He brought this sensibility to MIT with him as a professor. We get so terribly wrapped up in being hyper-specialized, in knowing the what but not the why. Too often we come across the proverbial Gordian knot and turn away. People are either unwilling to try or, believing themselves to be special, simply act as though the problem does not exist. (Treating people poorly and flaunting violations of the law fall into this category.) Few people are willing to question the fundamentals. What do you need? What do you have?

The interesting people are those who solve problems and help other people solve problems, not by merely telling them what the answer is, but by enabling them to see that solutions can come from places that aren’t necessarily rooted in the past ways of doing things.

In our day and age, when we focus on special skills and special languages and special hardware, it would behoove us to remember that there is no best skill or language or hardware. There is only the universe of problems. It is far more valuable to be able to help others see the shape of the solution than to be an individual capable of providing an answer to a well-defined question whose value will in time expire.

Read Full Post »

I’m big on education, think Swift is a great language, and believe games can be a practical way to motivate learning. So, how did I put this into practice?

What’s My Motivation?

During my career, I’ve had the opportunity to teach programming and software development (two distinctly different things) to both teens and adults. One thing that’s always struck me is the disjoint nature of the material. Not in terms of the subject matter, but rather with respect to the examples being used. In learning a spoken language, you don’t abandon one part of speech as you acquire another. Learning is cumulative. As we learn, we revise our approach.

In teaching programming, we seem to be so focused on being focused that we divorce ourselves from the actual processes that go on when we solve real-world problems. In the past few years, I’ve noticed that people are producing programming language courses reduced to five-minute info-bites. Here’s the thing: software development is a long-form practice.

Early Insight

I put together my first programming curriculum in 1981 when I was an instructor at Computer Camp, Inc. in Santa Barbara, California. The students were teens and the problem in my mind was motivation. Unlike adults, most of the teens I’ve taught over the years don’t approach programming from experience. They have a beginner’s mind. This is both good and bad for a teacher. The good is that they don’t have bad habits yet. If properly taught, they will think in the language. The bad is that we, as experienced developers, have come to see programming languages as a collection of “computer language components” and not as a methodology for solving problems as expressed in a specific syntax. As a result, the vast majority of software written today is the equivalent of transliterated speech. All the words are there and a native speaker could probably make sense of it, but they would suffer greatly.

In 1982, I found myself tasked with teaching an advanced BASIC programming class. It was then that I hit upon the idea of a dungeon crawler. The students were interested from the outset. They appreciated that everything they were spending their precious time on was leading to the outcome. They looked at the language as a means to solve problems and not a way to take a solution from another language and reapply it.

So, now I understood that it was possible to motivate and teach people how to think in a programming language. Could I leverage this understanding?

Teaching Revisited

In 2008 I had the opportunity to teach electrical engineers C++ and SystemC. These were individuals whose software development experience was grounded in C programming. Their code and, indeed, approach to software development was procedural, as one might expect. In order to teach them SystemC, one must first teach C++ (the language SystemC is written in). After working with the materials we had been using, I felt strongly that we weren’t motivating an appreciation and understanding of object orientation. I had the opportunity to participate in the creation of an entirely new C++ curriculum. From the beginning it introduced object orientation. There is an interesting shift that takes place when the responsibility for the data shifts from all the code that touches it to objects that manage it.
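That shift is easier to show than to describe. Here’s a minimal sketch (in Swift rather than C++, since that’s where this series of posts ends up; the names are hypothetical):

```swift
// Procedural habit: the data is a bare record and every caller pokes at it.
struct MeterReading { var counts: [Int] }

func addCount(_ reading: inout MeterReading, _ value: Int) {
    reading.counts.append(value)   // any code anywhere can also do this
}

// Object orientation: the type owns its data and enforces its invariants.
final class Meter {
    private var counts: [Int] = []   // nobody outside can corrupt this

    func add(_ value: Int) {
        precondition(value >= 0, "counts are non-negative")
        counts.append(value)
    }

    var total: Int { counts.reduce(0, +) }
}
```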

The Stanford Way

I’ve been watching Stanford’s iOS development course (CS193P) since it was first made available. It has undergone an interesting evolution over the past decade. Initially, it taught Objective-C development and iOS programming. This included pure (non-GUI) Objective-C and test driven development. With the fundamentals in place, the model-view-controller paradigm was taught as the foundation of iOS development. Then the class shifted into the standard piece-part methodology we see everywhere, albeit with a distinctly iOS bent.

Over the years, both the pure-language and test-driven-development aspects went away. These were relegated to reading material. Objective-C was supplanted by Swift. More sophisticated areas were covered as the iPhone evolved. By the end of the course, students can build complex apps. But it feels like people are learning APIs rather than the language. Then again, what can you do in 10 weeks? Would people actually pay for a college course to learn Swift and then another for iOS development?

Enter Wumpus

About five years ago, someone asked me to teach them how to make iPhone games. They had no software development experience and little desire for the traditional approach of learning via classes or books. They understood the ins and outs of game play and had a keen sense of what made a game playable.

The process that followed was the condensation of forty years of writing code and developing software. Today, when we work with just about any OS API, we have to deal with a context. But how do you motivate the very idea of the context? How do you teach people to work effectively with the net result of over fifty years of software development practices without expecting them to simply accept that this is the way it is? You can easily create an animation, but what is happening behind the scenes? Being able to understand and explore these questions is what will determine whether someone will be capable of working beyond the software equivalent of writing pulp fiction.

In the end, I settled on teaching software development through the very old game of Hunt the Wumpus. This game appeared in the original Unix distributions. It has simple rules, a bit of action, some random elements and is, on the whole, able to be understood by a nine-year-old. Its implementation can be used to demonstrate multi-dimensional arrays, randomization, object orientation, internationalization, error handling, data visualization and testing.
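To give a flavor of the first few of those lessons, here is a minimal sketch in current Swift; the names are mine for illustration, not from my actual implementation. The cave is the classic dodecahedron: twenty rooms, three tunnels each.

```swift
struct Cave {
    // The dodecahedral map: rooms 0...19, each with exactly three tunnels
    // (the multi-dimensional array lesson).
    static let tunnels: [[Int]] = [
        [1, 4, 7],   [0, 2, 9],   [1, 3, 11],  [2, 4, 13],  [0, 3, 5],
        [4, 6, 14],  [5, 7, 16],  [0, 6, 8],   [7, 9, 17],  [1, 8, 10],
        [9, 11, 18], [2, 10, 12], [11, 13, 19], [3, 12, 14], [5, 13, 15],
        [14, 16, 19], [6, 15, 17], [8, 16, 18], [10, 17, 19], [12, 15, 18]
    ]

    // The randomization lesson: scatter the wumpus and two pits.
    // (A real implementation would also keep the hazards apart.)
    var wumpusRoom = Int.random(in: 0..<20)
    var pitRooms = Set((0..<20).shuffled().prefix(2))

    func neighbors(of room: Int) -> [Int] { Cave.tunnels[room] }

    // The beginnings of the game loop: is danger one tunnel away?
    func dangerNearby(_ room: Int) -> Bool {
        neighbors(of: room).contains { $0 == wumpusRoom || pitRooms.contains($0) }
    }
}
```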

As this was before Swift, it was implemented in Objective-C.

Personally, I used my implementation of Wumpus to experiment with iOS. Specifically, I was tinkering with storyboards in Xcode. I wanted to see if it was possible to implement the user interface of Wumpus entirely using scenes representing the rooms within the game. This is, of course, a horrific abuse of the scene concept and is the equivalent of unfolding an array of objects into individual routines. It did, in fact, work. And I would not ever recommend that the technique be used for production code.

Enter Swift

Two years ago Apple announced Swift. Immediately, I started working with it. Like many languages before it, Swift incorporated lessons learned. In the case of Swift, many lessons were learned. You can look at my earlier posts to see my past musings on the language.

In May, I found myself with sufficient time on my hands to undertake a rewrite of Wumpus in the soon-to-be-released (now just released) Swift 3. Concurrently, iOS 10 was to come out and would be supported by Xcode 8. Changes all around. My initial Wumpus model was readily brought over from Objective-C. Over time I realized that many of the things in that implementation could be completely folded down to a single line of Swift code. Swift wasn’t an extension of an older language. In fact, as the language evolved from version 1 to 3, many elements initially present were removed or replaced. Today’s Swift is much more consistent as a result.
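A hypothetical before-and-after, just to give the flavor (this isn’t the actual game code): the Objective-C-era model answered “is there a hazard next door?” with a loop and a mutable flag; in Swift the same question folds down to one line.

```swift
let neighbors = [4, 7, 12]          // rooms one tunnel away
let hazards: Set<Int> = [7, 19]     // rooms holding a pit or the wumpus

// The loop-and-flag version, straight out of the older mindset:
var dangerNearby = false
for room in neighbors where hazards.contains(room) {
    dangerNearby = true
}

// The same test, folded down to a single line of Swift:
let dangerNearbyOneLiner = !hazards.isDisjoint(with: neighbors)
```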

I knew the pieces of the user interface that would be required and set about recreating them. This time in a sane fashion. Once this was done, I began the process of connecting the view to the controller layer and eventually the model, all the while adopting the Swift 3 and iOS 10 idioms.

At this point, I had a playable version of Wumpus. There was a main scene that took you to the rules or the game. The rules were a static chunk of text. The credits were static attributed text. You could navigate the maze and be moved (scene with alert) or die (scene with alert). Shooting came in and initially used a scrolling picker with the room numbers. Dull stuff.

Just Add … Everything

Now came the interesting bits. The iOS-specific bits.

It’d be dull to cover this in detail, so here’s a rough sequence.

  • 30+ background images
  • danger annunciator images
  • tint overlay to gray scale backgrounds
  • ambient sound across scenes (looped soundtrack)
  • incidental sounds within scenes (looped for danger and one-shot for events [moved, died])
  • added settings controls for all audio volumes
  • asset catalog used for both image and sound management (simplified access)
  • rebuilt settings using a table with dynamically constructed cells with action handlers
  • saved statistics using class-based archiver
  • rebuilt statistics using dynamic data generation from the statistics data
  • segues and segue unwinding (navigation control)
  • timers (scene auto-transition from title scene; see the sketch after this list)
  • tap gestures (eliminating navigation buttons)
  • replaced static rules text with chunked pages and swipe gestures
  • custom font (Kalam)
  • parallax (titles, danger annunciators and event imagery)
  • dynamically constructed attributed text (credits)
  • endless scrolling text loop (credits)
  • dynamically constructed tables from plist data (statistics field names)
  • static collection view replacing lame picker interface (shoot scene)
  • app analytics (Firebase)
  • ad support (AdMob)
  • JSON processing (credits source import)
  • core data (credits attributed string construction)
  • built to work with both iOS 9.3 and 10.0 (core data had a major change)
  • social network (Facebook / Twitter) posting
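
For the timer item above, here is roughly the sort of thing involved, assuming a UIKit storyboard app; the class name and segue identifier are made up for illustration:

```swift
import UIKit

final class TitleViewController: UIViewController {
    private var autoAdvance: Timer?

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // After five seconds of inactivity, move along to the game scene.
        autoAdvance = Timer.scheduledTimer(withTimeInterval: 5.0, repeats: false) { [weak self] _ in
            self?.performSegue(withIdentifier: "ShowGame", sender: nil)  // hypothetical segue
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        autoAdvance?.invalidate()   // don't fire after the user has navigated away
    }
}
```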

Testing, Testing

An important part of creating an iPhone application is being able to ship it. But before that you should really test it. A lot. Really.

To do that you need to do the dance of getting certificates and creating an app instance. With these you can push builds to Apple’s servers where they can be accessed by internal testers (all builds) and external testers (specific builds, after review [sort of]). Then comes the great fun of prodding the testers.

Collateral Damage

It’s been tested. All the features (for this release) are present. And it’s time to ship, right? Actually, no. You can’t ship an app without creating a bucket and a half of collateral images (screenshots) for the app store. There’s also the small matter of the web site that will support the app. And no self-respecting app would go up without a game play video.

About those images. Technically, you only need one set at the highest screen geometry. The others will be generated by scaling. But you’ve gone to all the trouble of adopting an adaptive user interface so things look reasonable on all the various screen geometries, so not generating imagery for every size would just be lazy. Happily, all these can be generated from simulator screen captures. Imagine having to round up half a dozen devices just to do screen caps. Did I mention that video? Well, you can’t capture video from the simulator. So, for those of you who look at my app on the store, there’s just the one from my current iPhone.

I do keep referring to Wumpus as an iPhone app. Well, it is. I designed it for portrait-only. Now this doesn’t prevent you from putting it on an iPad. The problem is that Apple has never updated the screen size used for iPhone apps on an iPad. It’s this pointlessly scrunched-up screen size. It looked brain dead. So, I went back and tweaked the layout to be less egregious. It’s not pretty, but why are you running it on an iPad in the first place?

Can I go now?

What could possibly be left to do?

  • specification of age rating
  • description for the store
  • verification that you own or have license for all the bits you’re using
  • text for alerts presented to the user, if certain features are used

About that whole licensing point. Wumpus uses a lot of images and audio tracks. They all need to be acknowledged properly. That was a driving factor in using Core Data to track them. All the ones I used were either public domain or minimally encumbered. The biggest problem I had was not finding them, but selecting from among them.

And yes, now it’s ready to ship.

Ship It

So, about two weeks ago, I submitted Wumpus for review. Well, I tried to. Apple will only review apps built against finalized OS libraries. Wait. Wait. So I added a few more bits to fill the time. On Monday 12 September 2016, I was able to submit Wumpus for review, after a brief diversion of trying to find out how to answer new privacy questions related to the use of Firebase and AdMob. Then came the wait. Did I forget something? Was there some horrible error condition lurking, waiting for the mystical Apple auto application checkers to detect? Would the review be delayed by more relevant applications (honestly, that’s just about every other app)? Nah, it was all good.

On Wednesday 14 September 2016, I got an auto-generated email informing me that my app was available for sale. Pretty anti-climactic really. If you have an iPhone/iPad, you can download it today. The related web site is also online.

And?

So where’s the tie-back to teaching programming / software engineering? That was the point, right? Absolutely. I’m not done. Although Wumpus represents an interesting résumé piece and I’ll be extending it with additional technologies (such as the web and Apple Watch), my takeaway is an example that I know I can use to teach both Swift and iPhone development. Like all good stories, this one leaves me wanting more.

Read Full Post »

One of the interesting things that happens to me when I attend events like yesterday’s PDX Summit III is that I start thinking about things in a new and more connected way. For many who know me, this will be perceived to mean that for some indeterminate length of time I’ll be a bit more random than usual.

To misappropriate the Bard, “There are more things in heaven and earth, Horatio, than are accessible from your contact list.”

This morning I started reading Galileo’s Telescope and it got me thinking in terms of the big data / open source elements brought up at the summit. Before you injure your neck doing that head tilt puzzled look thing that dogs do, let me explain.

I have a great affinity toward data visualization. I could probably press my own olive oil with the stack of books I’ve got on the subject. So when I saw that Galileo had written a text entitled Sidereus Nuncius, my first thought was, “If you took nuncius (message) and pushed it forward into present-day English, you’d end up at announce, denounce and enounce. What if you pushed it backward in time? How about sideways, toward French? If we visualized this map, what would it look like? How would we navigate it?”

I’ve always found it fascinating how speech informs thought. We live in a society where using ‘little words’ is encouraged in an effort to be more inclusive. The problem is that these ‘big words’ aren’t big for the sake of big. They encapsulate entire concepts and histories. We talk about ‘the big picture,’ ‘big data,’ and the like, but in our attempt to make it all accessible, all we seem to be doing is creating a meaningless assemblage of words and acronyms that, at the end of the day, have the precision of a ten-pound sledgehammer in an omelet shop.

What if, instead of constantly reducing our communication to the green card / red card of sports, we could point to the 21st-century version of Korzybski’s Structural Differential and literally be on the same page? How would language acquisition be improved for both native and foreign languages if you could build understanding based on the natural evolution of the language’s concept basis? What would the impact on science be if we could visualize past crossover points between disciplines? How much more readily would students learn the concepts of computer science and engineering if they could put present-day abstractions into the context of past constraints rather than simply memorizing a given language, framework or operating system’s implementation?

Yeah, this is one of those posts that has no conclusion. It’s a digital scribble intended to be a jumping off point for future endeavors.

Read Full Post »

How does the new series stack up to the old?


I watched the original Cosmos series when it premiered. Like many, I was captivated by the way in which Dr. Sagan told a story. It was made all the better because the story was actually true. He took us on a survey of the Universe. Small to enormous, past to future, Sagan walked us in the footsteps of man’s discovery of the world around him. He also didn’t shy away from the topic of the earth’s limited resources and the impact man was having in the way in which we were extracting, utilizing and disposing of them. That series and James Burke’s Connections set the standard in my mind for how science and history could be presented to a wide audience.

So, when I heard that Dr. Tyson (the man who drove the getaway car) would be hosting Cosmos: The Next Generation, I was excited. I’d seen bits of his “Great Courses” class The Inexplicable Universe: Unsolved Mysteries and thought it was interesting. The production quality was a bit wonky, but I attributed that to it being a class.

I watched Cosmos: A SpaceTime Odyssey. Twice. The first time broadcast and the second via iTunes. The science was great. The images from space were stunning. The message of planetary stewardship carried an even greater urgency. And yet, I found myself not really being all that moved. Not like the original. And that bothered me.

It bothered me because I couldn’t quantify what it was that I didn’t like. Finally I realized that it was the ‘reenactments’ that were bothering me. The original series went to great lengths to stage the reenactments. The new series used stylized animation. For me, the result was that these abstracted the events being depicted. It came across as though you were being told a story instead of being a witness to the event. The net effect is that your experience is more akin to sitting in a movie theater watching a cartoon about Robin Hood vs. standing feet from charging horses in a jousting match put on by the Society for Creative Anachronism. One could absolutely argue that neither one is real. But I would then ask, which one has a greater impact? If I set up a lab experiment with lenses and prisms, I know it has more reality than images in a book or an animation on a tablet.

There’s talk of a second season (without Dr. Tyson). If it does come to pass, I hope they will consider using people instead of paint. In a world where we’ve replaced doing science with watching it, every little bit helps.


I hope that everyone takes the time to watch both the old and new Cosmos. Getting teens to watch it would be good too.

Read Full Post »

I’ve completed yet another of the volumes languishing on my bookshelf. This one is The IQ Answer: Maximizing Your Child’s Potential by Frank Lawlis. The book addresses the issue of how to enable children who have attention disorders or learning disabilities to achieve.

The book is fairly introspective. Very workbook-esque.

Read Full Post »