
When I was an undergraduate, I heard a story about a DEC PDP 11/70 at a nearby school that had a strange hardware mod. A toggle switch had been added by someone and apparently wired into the backplane. The switch had two settings, “magic” and “more magic.” The identity of the individual or individuals who made the mod was lost. For as long as anyone could remember, the switch had been in the “magic” position. One day, some brave soul decided to find out what happened when the “more magic” setting was selected. Upon flipping the toggle, the machine crashed. It thereafter resisted any attempts to get it to run. After a bit, they gave up, flipped the toggle back to “magic”, power cycled the machine, and hoped. The machine returned to its previous state of operational happiness. One could say that they’d been trying to achieve too much of a good thing.

We might read this and come away with the idea that, well, they just should have gotten on with their work and not worried about the plumbing. That’s certainly the majority view, from what I’ve seen. But why was the switch there in the first place? If it only worked in one position, shouldn’t they have just wired things without the switch?

Let’s consider the temporal aspect: no one remembered who, when, or why, let alone what. It may well have been the case that “more magic” once actually worked. Who can say? That whole documentation thing.

When I work with project teams and individual developers, I have a habit of saying “no magic.” It comes from having heard this story. I’ll say this when working with individuals whose code I’m reviewing, teams whose architecture I’m reviewing, or leads and architects while facilitating the creation of threat models. I don’t care whether the magic manifests as constants (magic numbers) from who-knows-where or logic that’s more convoluted than a Gordian Knot. Basically, if the reason that something is being used exists without the benefit of understanding, it shouldn’t be there. I don’t care who put it there or how smart they were. Someday someone is going to come along and try to change things and it will all go south. Enough of these in a code review and it’s likely to get a single summary review comment of “no.”
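To make “no magic” concrete, here’s a hypothetical before-and-after sketch in Python. The checksum function and its constants are invented for illustration; the point is the difference in what a future maintainer can understand.

```python
# "Magic" version: unexplained constants that someone will eventually break.
def checksum_magic(data: bytes) -> int:
    h = 0
    for b in data:
        h = (h * 31 + b) % 65521  # why 31? why 65521? nobody remembers
    return h

# Same logic with the magic named and explained.
HASH_MULTIPLIER = 31   # small prime; cheap to compute, spreads byte values well
ADLER_MODULUS = 65521  # largest prime below 2**16 (the modulus Adler-32 uses)

def checksum_documented(data: bytes) -> int:
    h = 0
    for b in data:
        h = (h * HASH_MULTIPLIER + b) % ADLER_MODULUS
    return h
```

Both versions compute exactly the same value; the difference is that the next person to touch the second one knows why those numbers were chosen.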

How does this relate to security? Do you know what the auto-completion for “never implement your” is? I’ll let you try that on your own, just to verify. Never implement your own crypto[graphic functions]. Why? Where should I start? The math is torturous. The implementation is damn hard to do right. Did you know that you can break poorly implemented crypto via timing analysis? Even if you don’t roll your own crypto, are you using some open source library or the one from the operating system? Do you know when to use which? Are you storing your keys properly?
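As one small illustration of how hand-rolled crypto goes wrong, consider comparing a secret token against user-supplied input. A naive byte-by-byte compare returns at the first mismatch, so its running time leaks how much of the secret an attacker has guessed. The standard library already provides a constant-time alternative. A sketch (function names are mine):

```python
import hmac

# Naive comparison: bails out at the first mismatched byte, so the elapsed
# time reveals how many leading bytes of the secret were guessed correctly.
def insecure_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

# Constant-time comparison from the standard library: examines every byte
# regardless of where the first mismatch occurs.
def secure_equal(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)
```

Same answer, very different leakage. This is the kind of subtlety that makes “never implement your own crypto” good advice.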

Magic, all of it.

Some people believe that security can be achieved by obscuring things. These also tend to be the same people who’ve never used a decompiler. You’d be amazed what can be achieved with “a lot of tape and a little patience.”

If your goal is to have software and systems that are secure, you can’t have magic. Ever.

So, when I see a company with a core philosophy of “move fast, break things,” I think, well, aren’t they going to have more attack surface than a two-pound chunk of activated carbon? Unsurprisingly, they did, and we are worse off because of it.

You can’t secure software-based systems unless you understand how the pieces play together. You can’t understand how the pieces play together until you understand how each piece behaves. You can’t understand how a piece behaves if it’s got magic floating around in it. It’s also important to not just glom onto a technique or technology because it’s handy or trendy. As Brian Kernighan and P.J. Plauger said, “it is dangerous to believe that blind application of any particular technique will lead to good programs[2].”

While you’re out there moving fast and tossing things over the wall, keep in mind that someone else, moving equally fast, is stitching them together with other bits. The result of which will also be tossed over another wall. And while it is true that some combinations of these bits produce interesting and quite useful results, what is the totality of their impact? At what point are we simply trusting that the pieces we’re using are not only correct and appropriate to our use, but don’t have any unintended consequences when combined in the way we have done?

You need to know that every part does the thing you intend it to do. That it does it correctly. And that it does nothing you don’t intend. Otherwise, you’re going to have problems.

I’ll close with another story. In the dim days, before people could use the Internet (big I), there were a number of networks. These were eventually interconnected, hence the name interconnected networks, or Internet for short. Anyway, back in the day (early ’80s), universities were attaching to the Internet backbone, which was in and of itself pretty normal. What was not normal was when someone accidentally mounted a chunk of the Andrew File System (AFS) onto an Internet node. It ended up mounting the entirety of AFS on the Internet. This had the unexpected side effect of making a vast number of students’ previously unprotected emails publicly available to anyone with Internet access. Mostly that meant other university students. AFS wasn’t actually designed to be connected to anything else at that time. Bit of a scandal.

Unintended consequences.


  1. Image credit: Magic Book by Colgreyis, © Creative Commons Attribution 3.0 License.
  2. Kernighan and Plauger, Software Tools, 1976, page 2 paragraph 4

When creating a class, it’s important to have a motivating example. In my experience, people learn best when they can see an immediate application to their own work. In the area of cybersecurity, this can be difficult. Examples in this space tend to be either too esoteric (return-oriented programming) or too divorced from the domain (credit card theft).

I’ve just finished up the creation of a two hour software security fundamentals class for management and developers. This is intended to provide a framework for integrating security into the software development process. Build it in vs. bolt it on. As I was putting the class together, the motivating example was just out of reach.

The push-back that must be overcome is that there already exists a process for dealing with security issues. It’s an extension to the standard quality assurance process. This process merely needs to be extended to include security-related testing, right?

Let’s look at that assertion for a moment.

How exactly does quality assurance work? Well, it’s based, by and large, on the flawed hypothesis model. Starting with the documentation, test cases are created to verify the assertions made therein. From there, scenarios are imagined. These are likewise verified. If issues (bugs) are discovered, generalizations are attempted. Any generalizations that hold point to larger problems in the code.

Sounds good, what’s the problem?

Consider the internet joke:

A QA engineer walks into a bar. They order a beer, then order 0 beers, then order 999999999 beers, then order a lizard, then order -1 beers, then order a eawlirensadk.

A customer walks into the bar and asks where the bathroom is. The bar bursts into flames, killing everyone.

That’s pretty much the problem with the flawed hypothesis model. You only verify the things you think of. If you’re only looking at how the bar serves beer, you’ll never catch issues involving other aspects of the system (here, bathroom location).

It’s a bit extreme as a motivating example, but everyone can relate to it, which is, of course, the point.

From there, the concept of flaws vs. bugs can emerge. QA finds bugs. On a good day, these may point to flaws. So, what’s the difference? For the purposes of discussion, flaws are design defects and bugs are implementation (code) defects. By its very nature, QA does not test design, only implementation.

At this point, management asks the question, isn’t this how it’s always been? Generally speaking, yes. Long gone are the days when people used program design language (PDL) to reason about the soundness of their software. At that time, security wasn’t much of a focus.

Enter threat modeling. By its very nature, threat modeling allows us to reason on the design. Why? Because it focuses not on the documentation, but rather the data flows and, by extension, the workflows of the system. Because we abstract ourselves from the implementation, we can reason about the system in ways that point us directly to security flaws.

To relate the impact to the real world, one has only to look at the cost to Samsung of not catching a design flaw in the Note 7 prior to release (US$17B). IBM estimates that, relative to catching an issue at the design stage, the cost is 6.5 times higher in the implementation stage, 15 times higher during testing, and 100 times higher after release.

I’m in no way advocating the elimination of QA testing. You need both. As well as the processes we do in between, such as code reviews and static / dynamic analysis. But again, discovering issues in these stages of development is going to be more expensive. Defense-in-depth will always give you a better result. This is true not only in security, but the development process itself.

As I was finishing up my software security fundamentals class, the news broke regarding a high-profile technology firm that exposed the private data (images) of millions of individuals via their developer APIs. This is probably a case of failing to threat model their system. This isn’t the first time that this particular company has failed miserably in the area of security. It points out, in ways which greatly assist my efforts to get management on-board, that the flawed hypothesis model is no substitute for critical analysis of the design itself.

As a system grows in complexity, it is critical to abstract out the minutiae and let the data flows point toward possible issues in the design. Threat modeling is one technique, but not the only one, that makes that possible.

I get called upon to do fairly incongruous things. One day it’ll be C++ usage recommendations. Another will find me preparing background materials for upper management. Some days, I’m prototyping. Always something new.

As of late, I’ve been bringing modern software threat modeling to the development teams. Threat modeling is one of those things that, for the most part, exists only in the realm of the mythical cybersecurity professionals. This is a sad thing. I’m doing what I can to change people’s perceptions in that regard.

Within cybersecurity, there is a saying. “You can either build it in or bolt it on.” As with mechanical systems, bolting stuff on guarantees a weak point and usually a lack of symmetry. From the software development standpoint, attempting to add security after the fact is usually a punishing task. It is both invasive and time-consuming.

But the bolt-on world is the natural response for those who use the flawed hypothesis model of cybersecurity analysis. The appeal of the flawed hypothesis analysis lies in the fact that you can do it without much more than the finished product and its documentation. You can poke and prod the software based on possible threats that the documentation points toward. From the specific anticipated threats, one can generalize and then test. The problem is that this methodology is only as good as the documentation, intuition, and experience of those doing the analysis.

So, what’s a software development organization to do?

Enter threat modeling. Instead of lagging the development, you lead it. Instead of attacking the product, you reason about its data flow abstraction. In doing so, you learn about how your design decisions impact your susceptibility to attack. From there, you can quantify the risk associated with any possible threats and make reasoned decisions as to what things need to be addressed and in what order. Pretty much the polar opposite of the “death by a thousand cuts” approach of the flawed hypothesis model.

Sounds reasonable, but how do we get there?

Let me start by saying that you don’t create a threat model. You create a whole pile of threat models. These models represent various levels of resolution into your system. While it is true that you could probably create an über threat model (one to rule them all, and such), you’d end up with the graphical equivalent of the Julia Set. What I’ve found much more manageable is a collection of models representing various aspects of a system.

Since the 1970s, we’ve had the very tool we’ll use to create our models: the data flow diagram. The really cool thing about DFDs is that they consist of just four components. To adapt them to threat modeling, we need to add only one more. The most important piece is the data store. After all, there’s not much to look at in a computer system that doesn’t actually handle some sort of data. We manipulate the data via processes. These agents act upon the data, which moves via flows. And finally, we need external actors, because if the data just churns inside the computer, again, not much of interest. That’s it. You can fully describe any system using only those four primitives.

Okay, you can describe the system, but how does this relate to threat modeling? To make the leap from DFD to threat model, we’ll need one more primitive. We need a way to designate boundaries that data flows cross. These we call threat boundaries. Not the world’s most imaginative nomenclature, but hey, it’s simple and easy to learn.
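Those primitives are simple enough to sketch directly. Here’s a minimal, hypothetical Python rendering of the four DFD elements plus a boundary-crossing flag on flows; the names and structure are my own, not from any particular modeling tool:

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str

class ExternalActor(Element): pass  # people or systems outside our control
class Process(Element): pass        # code that transforms data
class DataStore(Element): pass      # data at rest

@dataclass
class Flow:
    source: Element
    sink: Element
    crosses_boundary: bool = False  # does this flow cross a threat boundary?

# A toy model: a user talks to a web app, which reads an accounts database.
user = ExternalActor("user")
app = Process("web app")
db = DataStore("accounts db")

flows = [
    Flow(user, app, crosses_boundary=True),  # internet -> application
    Flow(app, db),                           # inside the trust boundary
]
```

The flows that cross a boundary are exactly the places where the STRIDE questions later in the process earn their keep.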

Once we have a particular DFD based on a workflow, we add boundaries where they make sense. Between the physical device and the outside world; or the application and operating system; or application and its libraries; or between two processes; or … (you get the idea). Again the threat model isn’t intended to be the application. It’s an abstraction. And as Box said, “all models are wrong … but some are useful.” It helps to keep in mind what Alfred Korzybski said, “the map is not the territory.” Anyone who’s traveled on a modern transit system would never confuse the transit map for the area geography. Nod to Harry Beck.

With the boundary-enhanced DFD, we can get to work. For the particular road I travel, we reason about the threat model using a STRIDE analysis. We consider each of the elements astride (pun) each data flow with respect to each of the six aspects of STRIDE: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. Not all aspects apply to all combinations of our four primitives. There are tables for that. Each of these can be appraised logically. No chicken entrails required. At the end of the day, you have a collection of things you don’t have answers to. So, you bring in the subject matter experts (SMEs) to answer them. When you are done what remains are threats.
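Those “tables for that” can be sketched as a simple mapping. The per-element applicability below follows the commonly published STRIDE-per-element guidance; treat it as illustrative, since methodologies vary:

```python
# Which STRIDE categories are usually considered for each DFD element type.
STRIDE = ("Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege")

APPLICABLE = {
    "external actor": {"Spoofing", "Repudiation"},
    "process":        set(STRIDE),  # processes attract all six
    "data store":     {"Tampering", "Repudiation",
                       "Information disclosure", "Denial of service"},
    "data flow":      {"Tampering", "Information disclosure",
                       "Denial of service"},
}

def questions_for(element_type: str):
    """Yield the STRIDE questions to put to the SMEs for one element."""
    for threat in STRIDE:
        if threat in APPLICABLE[element_type]:
            yield f"Can an attacker achieve {threat.lower()} against this {element_type}?"
```

Walking every element through its applicable categories is what turns a diagram into the list of open questions you take to the SMEs.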

Threats. Spiffy. But not all threats are equal. Not in potential for damage, or likelihood, or interest. For a goodly length of time, this was a big problem with the whole threat modeling proposition. Lots of stuff, but no objective way to triage it.

Enter the Common Vulnerability Scoring System (CVSS). This is the Veg-O-Matic of threat risk quantification. CVSS considers the means, complexity, temporality and impact areas of a threat. From these it computes a vulnerability score. Now you have a ranking of what the most important things to consider are.
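To make the scoring concrete, here is a sketch of the CVSS v3.1 base-score arithmetic for the common Scope: Unchanged case. The metric weights come from the published specification, but this is an illustration, not a validated scoring tool:

```python
import math

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # attack vector
AC = {"L": 0.77, "H": 0.44}                         # attack complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # privileges (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                         # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # confidentiality/integrity/availability

def roundup(x: float) -> float:
    """CVSS rounds up to one decimal place."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Network-reachable, low complexity, no privileges or interaction required,
# high impact across the board: the classic "critical" profile.
score = base_score("N", "L", "N", "N", "H", "H", "H")
```

Once every threat has a number, triage stops being an argument and starts being a sort.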

For many industries, we could stop right here and use the CVSS score directly. Not so in the land of FDA regulation. That land adds another dimension: patient safety (PS) impact. The augmented CVSS-PS ranking guides us to a proper way to objectively rate the threats from most to least severe.

Now, we can take these ranked threats and present them, complete with SME feedback, to the core and risk teams for determination and disposition.

But we’re really not done. The threat modeling process isn’t one-and-done. We take what we learn and build it into the base assumptions of future products. Once the product is built, we compare it to the original threat model to verify that the model still represents reality.

Well, that was a lot of exposition. Where’s the facilitation and teaching?

Actually, the exposition was the teaching. And the teaching was an explanation of how I go about facilitating.

When all is said and done, a threat model needs to be built in. That is, engineering owns it. The whole facilitation thing, that’s a skill. It needs to live in the teams, not in some adjunct cybersecurity group. Applying CVSS consistently takes a bit of practice, but again we’re back to facilitation.

As to actually teaching threat modeling, that takes the better part of a day. Lots of decomposition, details and diagrams. I like teaching. It’s a kind of cognitive spreading of the wealth. The same is true of facilitation, just more one-to-one.

The book Creative Selection by Ken Kocienda was recommended to me. This is unusual in that I’m typically the one recommending books to others.

This book follows the creation of the iPhone as seen through the eyes of the author, who was a software developer at Apple at the time. There are many interesting aspects to his story, to his interactions with the movers and shakers within Apple including eventually with Steve Jobs, and to the creative dynamic that made the iPhone possible at all.

Over the years, I’ve had the opportunity to participate in the world of Apple’s alpha and beta hardware, as well as that of many other technology firms as an engineer at firms working with Apple. I’ve also been on the other side of the fence, designing, developing and managing hardware and software, and their associated beta programs. It’s a challenging environment.

I’ve also had the opportunity of presenting my work to those holding the levers of power who had, shall we say, a less than gentle manner of showing disapproval with those things not meeting their standard of quality. There is an interesting combination of exhilaration and dread surrounding such presentations.

Elaborating on the details of the book in this post wouldn’t serve to encourage the would-be reader. If you are a follower of the history of Apple or technology, you’ll find things that will expand your view of the period and the dynamics surrounding it. You should also come away with a bit of insight into how Jobs looked at features. At least within the scope of the work that Kocienda presented him.

As a first outing, Kocienda does a reasonable job of painting a picture of the time and place that was Apple before they changed the world of mobile phones. The book is illustrated, which is unusual in a day and age of contemporaneous photography. I can’t imagine there not being numerous photos of the proto-iPhone during its gestation. The pace of the book is decent and he conveys a sense of presence in the narrative. I felt that it could have been tighter, although that would have reduced the already roughly 200 pages of prose even further.

All-in-all, the book is well supported by references and is approachable by those not of the software tribe. You should be able to dispatch this one in a few hours. It’ll be joining my other volumes on the history of technology of this period.

 

As someone in the technology sector, on a fairly constant basis I get asked the grown-up equivalent of “what do you want to be when you grow up.” This is, of course, “where do you see yourself in N years.”

Now, most of the time, this is a question with all the gravity of “nice day, isn’t it?” Sometime, however, the inquiry is sincere. And my answer is provided with the same weight as the question.

And, for reference, my answer hasn’t changed all that much since I was about seven. Happily, the way that I answer has become a bit more sophisticated. My end game position is that of CTO (Chief Technology Officer).

Many of my contemporaries have gone the route of management. This is cool with me. You shouldn’t be doing engineering and science if you don’t have it in you. By in you, I intend the sense given by The Oracle in The Matrix when she told Neo that you know that you’re the one when you feel it “balls to bones.” Seriously. There are far easier ways to make a decent living than the constant demands and uncertainty that comes along with the endeavor of technological advancement. Hell, forget advancement, just using technology is a hard slog.

For me, working with technology and constantly expanding the reach of my understanding within that sphere is one of my core drives.

So like anything else I’ve ever set as a goal, I researched this thing I’ve set my sights on.

Let’s unwrap what I understand today.

It’s relatively new

As C-suite positions go, the CTO is really young. Only the CISO (Chief Information Security Officer) position is newer. As you’d imagine, it’s not like there weren’t technology companies before CTOs roamed the Earth. Before it was recognized that the technology landscape was changing so quickly, and on such a continual basis, that a board-level position focusing exclusively on the implications of such change was needed, technology was the domain of either the CIO (Chief Information Officer) or the CEO (Chief Executive Officer).

There was a realization that technology falls into two broad categories: present and future. You can think of these as tactical (product development) and strategic (futures research). Investopedia says that a CTO “examines the short and long term needs of an organization, and utilizes capital to make investments designed to help the organization reach its objectives … [the CTO] is the highest technology executive position within a company and leads the technology or engineering department.”

This division of labor is not unlike the way that Computer Science became an independent discipline. It too is dual-rooted. There were schools where the computer (singular) was managed by the Math department and those where it was managed by the Electrical Engineering department. You can tell the difference in the focus of the curriculum. It will be either theoretical (math) or applied (engineering) in nature.

It’s not only one

The position of CTO is in no way one-size-fits-all. Presently, it’s possible to identify four distinct sub-species of CTO. This diversity reflects the nature of the companies and how technology fits into their culture and mission.

We can identify these four by where they fall on the spectrum described by the amount of business change and the percentage of products and services based on information.

 

CTO quadrants

As can be seen, these are four very different animals. This is why you would expect the CTO from a relatively stable business in the manufacturing sector like GE (big thinker) to be very different from one at a business experiencing near constant change and highly-dependent on information in its products like Facebook (visionary). Neither of those would look anything like the stable business, high-dependency Apple (external-facing) or high change, low-dependency AT&T (information manager).

The Infrastructure Manager

CTO quadrant - infrastructure manager

 

Typically seen in companies with low dependency on information-related technologies, but with business models experiencing a large amount of change (technology change impacting how the business is run), the Infrastructure Manager CTO reports to the CIO and is responsible for addressing how to build out and leverage technology to reduce cost and encourage technology adoption across business units in order to gain efficiencies.

The Big Thinker

 

CTO quadrants - big thinker

The Big Thinker CTO is the response to never-ending growth of things utilizing information technology. We see this type in companies with stable business models and a relatively low dependence on information as a part of their products. We see their focus on strategic initiatives such as:

  • Advanced technology
  • Competitive analysis
  • Technology assessment
  • Prototyping
  • Planning
  • Setting architectural standards

These CTOs answer to the CEO and peer with the CIO. Here we have a division of the IT and engineering departments. They act as change agents, typically having a relatively small, elite staff. They are influencers rather than controllers.

The Visionary and Operations Manager

 

CTO quadrant - visionary

In companies in the throes of business change (increased technology complexity) and highly dependent upon information in their products and services, the CTO will be the Visionary and Operations Manager type. Answering to the CEO, this is the prime mover of the company. Their responsibilities are all encompassing. They drive business strategies and exploit new technologies and then implement those same technologies throughout the business and product groups. We see the CIO reporting to the CTO in this view of the world.

The External-facing Technologist

 

CTO quadrant - external technologist

Information-driven companies with stable business models will tend to have the External-facing Technologist CTO. As with the Big Thinker type, this CTO peers with the CIO, both answering to the CEO. Here the focus is on identifying new technologies, exploiting them, and evangelizing them both inside and outside the organization.

Areas of Impact

If we visualize the areas of impact for the four type, we can see the natural focus areas for each.

Areas of impact: infrastructure manager, big thinker, visionary, external technologist.

Observations

Greg Brockman, Stripe’s CTO, said that other CTO’s “viewed themselves as the facilitators of the technology organization. Sometimes this was about connecting senior engineers. Sometimes it was mentoring. … I realized the most important thing to do was to empower our engineers to make big changes and improvements.”

“It’s not a simple job to understand all the technology out there,” says Unisys Corp’s global CTO Fred Dillman. “Today the pace of change is so much faster, and businesses are becoming more and more dependent on technology. So the CTO is being asked to be the real expert in technology and understanding what technologies will affect the business in the future and help determine when and where to invest.”

My fit

So, where do I see myself in all this? Tricky question.

Honestly, it varies. As the Version Control Systems Architect at Metrowerks, I was evangelizing source code control. At The Altamira Group, I rocked the visionary thing. Most of my CTO-esque activities have fallen into the Big Thinker bucket. Researching futures and educating engineers and management is where I spend the bulk of my time.

Roger Smith noted that “[t]he significant role of technology in strategic business decisions has created the need for executives who understand technology and recognize profitable applications to products, services and processes. Many companies have addressed this need through the appointment of a chief technology officer (CTO) whose responsibilities include:

  • monitoring new technologies and assessing their potential to become new products and services
  • overseeing the selection of research projects to ensure that they have the potential to add value to the company
  • providing reliable technical assessments of potential mergers and acquisitions
  • explaining company products and future plans to the trade media
  • participating in government, academic and industry groups where there are opportunities to promote the company’s reputation and to capture valuable data

Integrating these technology-based activities into the corporate strategy requires that the CTO nurture effective relationships with key people throughout the company. These include the CEO, members of the executive committee, chief scientists, research laboratory directors, and marketing leaders.”

Regardless of the specific needs of the organization, I’ll continue to strive to provide the best information in a timely fashion to those who need it.


I always find myself impressed at how nation-states and their leaders exhibit repeating patterns of behavior. This is expertly explored through space, time and scale in John Lewis Gaddis’s latest book, On Grand Strategy.

Dovetailing beautifully into my previous post’s assertion that I am an experiential gestaltist, Gaddis’ work takes us from Persia to Greece, China, Spain, England, France, Russia and the Americas. The book deconstructs battles and their attendant strategies, the motivations of their commanders, and the moods of the peoples involved.

From the outset, Gaddis presents us with the metaphor he will return to time and again. That of the fox and the hedgehog. These represent the approaches of alert outward-directed probing with stealth and of unwavering belief and inward-directed defense of that belief.

He shows that time and again battles are lost because leaders lack the ability to see changes in the situation before them. This may manifest in populations simply abandoning territory, as was the case in both Xerxes’ attack on Athens and Napoleon’s on Moscow. Forcing your attacker to extend their supply lines should give pause to any commander, and yet, time after time we see overconfidence leading to defeat.

We see how Elizabeth skillfully balances force and guile to turn a seemingly weak position with respect to the attacking forces of Spain’s (God will make it work out) Philip. Like Xerxes, Philip believes that his forces cannot fail. Less so because of their intrinsic numerical advantage and more because of his steadfast belief in his divine mission. His confidence extended to failing to provide adequate direction to his various forces and ignoring losses due to bad weather. Elizabeth, on the other hand, patiently and judiciously used her limited resources.

The British colonies in North America are examined and we see the interplay between the colonials and the empire. As the United States are forming, the choice to kick the can of addressing slavery down the proverbial road of history is on full display as they draft their Declaration of Independence and Constitution. We jump to the American Civil War where leaders are struggling with the consequences of being at once a nation based on democratic ideals and yet built on slavery. They were very well aware that the monarchies of Europe still looked on them as an untenable aberration. A hypocritical one at that.

And we see into the churn that formed the backdrops of both World Wars. Also, how England worked to engage the United States and how others tried to prevent its engagement.

Throughout it all, we are presented with profiles of leaders who are either able or unable to navigate the ambiguities of the realities before them. There are those without a compass, unable to achieve goals because there are none. There are those whose compass is trusted to the exclusion of the terrain. They find themselves, like those today blindly following navigation apps, driving off cliffs and into lakes. Knowing where you are going is important, but if you fail to allow for the terrain and weather conditions, you will not do well.

On the whole, the book provides us a valuable mirror. It is amazingly timely given that we are in a period where our leaders seem again poised to engage in actions demonstrating that they have failed to either study or learn from the teachings of Sun Tzu, Thucydides, Augustine and Machiavelli. Their message could be described as: success is found in following the middle way, embracing certainty of mission, preparation, and proper respect for the fluid nature of engagement.

I’ve just completed Robert Wright‘s latest book, Why Buddhism Is True. For me, the attraction was the subtitle: The Science and Philosophy of Meditation and Enlightenment.

Reviewing a book on philosophy is like trying to explain your existential motivations to a dolphin. You know that they’re really smart, but you’re never really sure that they get anything out of the discourse. That being said, I present my attempt. Hopefully, it will be minimally head-tilt inducing.

By way of background, I count myself among the Buddhist community. This tells you about as much as if I’d said that I work with computers. Yup, me and hundreds of millions of others work with computers. It tells you nothing about the form, function, depth of involvement, etc. Hence my choice of the word community. There is no single locus within Buddhism. Even whether it is a religion, a philosophy, or both is a point of discussion. On this, I point back to the subtitle’s attraction for me.

The reason I can even attempt a review is that the book takes a practical (as in practice) view of the topic. As an engineer, I appreciate the quantifiable. On this point, the book does not disappoint.

If I had to re-title the book I would name it Meditation: What’s in it for Me? Why? Because in a world where people barely make it past headlines, it pretty much covers the core of the discussion. The problem with this title is that it leaves out all the interesting bits that get you from introduction to summary. Sort of like renaming Bill and Ted’s Excellent Adventure to “be excellent to each other.”

The author is a journalist, a professor of philosophy, and a practitioner of the Theravāda (specifically Vipassanā) school of Buddhism. I follow the Mahayana (specifically Zen) school. For the requisite pun, you could say that the distinction between the two is all or nothing.

Let me say up front that I am not a Buddhist scholar. I can’t read Sanskrit, Pali, or even Kanji to save my life. As such, many of the names and terms-of-art within the Buddhist world make my brain hurt. I can’t pronounce them. I can’t remember them. But I’ve gotten to the point where I recognize them in context. As an experiential gestaltist, I strive to integrate everything; in the process, the source wrapper is often discarded. This book accomplishes that unwrapping and, although it does use terms from the source languages (mercifully translated), presents them in approachable language.

Using this approach of going from the known to the unknown, Wright covers the methodological process of meditation and its effects as he has experienced them. He also relates the Buddhist underpinnings of the whys and wherefores of meditation as seen by various schools.

Next, he explores the various working models of consciousness used within the psychological community. From there he harmonizes the two.

In the final chapters, he brings us back to the big question areas of universality and enlightenment. He finishes by answering the questions of the tangible worth of meditation and whether being at one leads to a grey existence.

I won’t spoil the ending for those of you who like to see endings for themselves. If you have an interest in the interplay between meditation, psychology and Buddhist thought, you will find this to be an interesting read.

Sometimes you can spend years trying to find a book that you can recommend to someone who’s asked you a question. My latest read, The Software Craftsman: Professionalism, Pragmatism, Pride, is one such book. A recent volume in the Robert C. Martin book series, this volume by Sandro Mancuso is not what it appears to be. And that is a good thing.

When you look at the other books in the Martin series (Working Effectively with Legacy Code, Agile Estimating and Planning, Clean Code, The Clean Coder, Clean Architecture, …) you see topics decomposed and methodologies expressed by which the title’s subject is achieved. That’s not what you get with The Software Craftsman. In my case, that was a very fortunate turn of events.

This is not to say that the journey of the software craftsman is not discussed. It is and in a reasonable amount of detail. But an equal amount of time is given to the ecosystem within which the craftsman practices. These parts of the book are not for the consumption of the craftsman or aspirant, but for the owners of the firms who employ (or should employ) them.

The book does well in describing the trials and tribulations of a member of the craft, from the point where they realized that they aspired to more than the dichotomy of coder / architect, to the creation of the volume itself. It lays bare this false dichotomy within the broader context of the entire point of software development: to produce value for the customer and income for the creator. Within that context, there is the easy path of whatever works and the hard path of building a thing that not only does what it is supposed to, but does it in a way that is both high quality and highly maintainable.

At its core, this is a book about philosophy. In a landscape of Google and go; and compile it, link it, ship it, debug it; this is a thoughtful volume. It makes a point that I’ve never seen in print: that the individual software developer is responsible for their own career development. Not their manager, not their company, but they themselves are responsible. Heady stuff this.

As to the remainder of the book’s material, it’s more a wake-up call to upper management. There you’ll find discussion of recruiting, hiring, retaining, shaping change, and showing ROI. I know of very few who could look at this volume and come away unmoved.

It might be the separation of authority and responsibility, the hire for what we needed yesterday, the CYA so we get our bonus, or the factory worker mentality encouraged by so many firms today. If you can read this book and not get something out of it, you’re part of the problem.

Truly quality software is designed, built, and tested by passionate individuals working together toward the creation of something that will well serve the customer. Everything else is just code. Any 10-year-old can be taught to write code. I know, I’ve done it. Do you want your life’s critical systems to be built by 10-year-olds? Of course not, that’s a ridiculous question. How about people who are just doing it because they make a better than average day’s wage?

I hope you’re intrigued. At the very least, I hope you’ll reflect on your own views of the responsibilities of a software developer. At fewer than 250 pages, you can read this book in one or two sittings, but reading the book is only the starting point.


The book, The Character of a Leader: A Handbook for the Young Leader, is an odd beast. There aren’t many non-fiction books I’ve read where the author uses a nom de plume. According to the Amazon description, the author Donald Alexander is an executive officer within the United States intelligence community. Presuming this to be true, their desire is to provide a foundation for aspiring leaders and not their own aggrandizement. I say aspiring here because a leader isn’t a title or rank, but rather a state or behavioral characteristic. Leaders can at the same time be led. They are also in a constant state of self-education.

The author argues that a leader is grounded in a set of core characteristics and beliefs about themselves and others. This position is opposed to those who believe that one can be an effective leader and hold that there are no absolutes with regard to attitudes and actions (moral relativism).

Given the book’s short length (about 120 pages of main text), it struck me as unusual that the introduction was about 15 pages long. Why not simply incorporate it into the body of the work? My view is that this device allows the author to create the questions that the main text then answers. In a way, it is as though a student approaches a teacher and, in asking questions, inspires the teacher to assemble a lesson for all their students. I look at it this way because that’s what I have done in similar circumstances. It’s not usually the case that people coming to me with questions realize that their questions are of import to others, but it is the obligation of those of us who people approach with such questions to “spread the wealth.” Noblesse oblige, if you will.

The book is divided into sections covering a working definition of leadership, leadership and character, leadership traits, expectations, becoming a leader, and the fundamental obstacle to leading (tribalism). It concludes with a call for leading with integrity.

No one who has been in a position of leadership will be surprised at either the structure or brevity of this book. You could put the totality of the facts conveyed onto a business card (I’d’ve said index card, but no one knows what an index card is anymore). But just like a PowerPoint, you don’t need to write every word you’ll speak on the slides (they’re not really slides anymore either). This book is a touchstone. For those newly recognized leaders, this book is a cross between a travelogue and a cautionary tale. For the former, the inclusion of additional material would simply be superfluous. For the latter, it might convey the idea that the actions of a leader are paint-by-number, whereas in reality they are very much free-hand.

There are numerous quotes by and about leaders from various periods in history. These both build the case for the author’s assertion that character is essential to being a leader and provide jumping off points for further exploration of specific aspects of leadership.

I am impressed by the tightness of the narrative and the compelling argument made by the author. They strike me as one of those individuals I would very much enjoy learning from and working alongside.

I spend much of my time these days doing long-term strategic research and planning. Part of that time is spent identifying areas where technology training is warranted. The ways and means I use to create and present training materials have been developed through years of trial and error. In the midst of one particular line of research into a non-training-related area, I found Building an Innovative Learning Organization by Russell Sarder.

The book is relatively short, about 220 pages, but in many ways, you really don’t need more than that to cover the concepts of training. While it’s true that it would take far more to cover all aspects of training, from organizational buy-in, to facilities, to choice of materials, to length of courses, etc., those are details. And the details are as pointless as ornaments without a tree if you don’t have the fundamentals in place. That’s where this book shines.

Yes, there are all the requisite elements of a business-oriented book (voices from industry, outcomes of research, anecdotes, and the like). Not to mention the mound of acronyms tossed in for good measure. But, I expect those. This book asserts that learning should be a systemic attribute of any thriving company. As such, learning must be part of the culture of the company for it to be successful. You cannot slap training on the side and expect that you will have any serious ROI to the company. It would be like thinking that buying Girl Scout cookies or Boy Scout popcorn has a substantive impact on the members of either organization. Yes, it does provide financial support for programs, but it’s not “the program.”

Training needs leaders, resources, people interested in learning, and a purpose (lest we forget why we do training in the first place).

Training has a structure, and that structure is not one-size-fits-all. People have varying modalities of learning. Even the best material won’t work well for everyone. This is where that whole (materials, time, place, etc.) details thing comes into play. But, again, the focus of the book is to lay out the challenges and considerations, not specifics.

Finally, you need to see that training produces results. This can be fiendishly difficult to measure, so it’s vitally important to set expectations before doing the training. Being happy is not considered a valid measure of ROI for the company.

As mentioned earlier, the book is replete with references, and for those who create training material, or even those who simply want to create an environment within their company where training can be effective, it is a good starting point. For those who have been involved in training for some time, the book can serve as a reference that can be used to educate management in the scope, cost, and investment (they’re different) necessary to create a learning environment that will have long-term benefits.

Overall, a decent read. I found the interviews with CLOs (chief learning officers) incisive. As with all organization-level things, there are no easy answers. And you do get what you pay for. You’ll dispatch this book in a few hours and then find yourself going back over it later.
