Archive for the ‘Future’ Category

There are books that inspire. There are also people who inspire. This review is about a book that inspired a person who inspires. It also inspired that person to write a book to inspire people.

I’ve been following Simon Sinek for a while now. For the past few years he’s been speaking about a way of looking at things that differs from the common view. He’s also written a book about it, “The Infinite Game” (releasing in October 2019). This review isn’t about Simon, his videos, or his upcoming book. It’s about the book that inspired him.

That book, by James Carse, is Finite and Infinite Games.

Unlike many of my recent reads, this one was a bit on the compact side, coming in at under 150 pages. What interested me going in was how such a short volume had inspired Sinek to take its concept on the road.

In typical university-professor fashion, Carse opens by describing the endpoints of a spectrum: games of a finite nature and games of an infinite nature. He asserts these to be just that, endpoints. We are left to conclude that there are other games within the spectrum, but none are explored. Exploring them would have made for a larger volume, but there really isn’t much point. If you’re the kind of person who buys this book without appreciating that it’s written from a philosophical, political, and historical viewpoint, the additional material wouldn’t have helped you get any more out of it. If you do appreciate that, you’ll have little difficulty extrapolating the spectrum and its related historical and political examples.

Once the groundwork is laid, the author spends a chapter exploring a seemingly obvious point: “No one can play a game alone.” This appeals to our sense of self. We exist in relationship to the not-self. We cluster ourselves into communities in relationship to other communities, and nation-states do likewise.

From here, the book explores our traditional view of games: winners and losers. We see how this binary, hierarchical viewpoint demands the existence of a time-bound context (a world). For Whovians, this represents a fixed point in time. Examples would be the 1962 World Series (baseball) champion or the victors of a battle. The interesting thing about any fixed point in time is that it never changes. It is never different. It also never improves. Implicit in this view is the requirement of losers, as well as accepted, well-defined rules.

In the broader world we see analogs to zero-sum games. We see companies constantly defining themselves as the number one firm in something, contextualized by a specific time period. These claims tend toward increasingly pointless hair-splitting. The question becomes, “to what end?”

An interesting point, which I’d not previously considered, is that finite games require an audience; that is, witnesses. These serve to validate the victor and their victory. They are also responsible for carrying the memory of the event, anchoring it in time.

It is only now, two-thirds of the way into the book, that the other end of the spectrum is examined. To do so, the author has us look to nature (well, actually, to anything not contrived by human beings).

As one might have come to conclude, the world (and by extension the universe) got along just fine without us and will probably do so again. Carse points out that in and of itself, the world has no narrative structure; it simply is. In the same way, an infinite game is the complete opposite of a finite game. It has no fixed rules, no audience, no winners, no losers, and it is not time-bound.

So what is the point of an infinite game?

To keep playing.

Players come and go. Objectives change. But, at the end of the day, playing is its own motivation. In this view of the world, there are no enemies to be defeated, only rivals to outdo. Without rivals (other players) you aren’t playing a game. There is no attempt to reach a pinnacle, but rather to be pushed to exceed one’s own successes.

In a finite game you declare victory. Once at the top of the heap, it’s all downhill. Finite games are by their very nature self-limiting. There is no incentive to excel once you’ve attained supremacy. We have seen, time and again, companies create new and innovative technologies only to hold them back because they were already number one. These same companies lost that position to others not held back by past glories.

I have seen many times in the software world how companies milk the “completed” product cow while refusing to invest in keeping that same cow current in terms of technology. Only after decades of neglect do they realize that they can no longer add features, and that a generation or two of graduates have passed through since anyone was taught how to work with the technologies used in said cash cow. And then the cow dies.

T. S. Eliot said, “Immature poets imitate; mature poets steal; …” Pablo Picasso said, “When there’s anything to steal, I steal.” Steve Jobs spoke likewise. These are the views of those constantly improving their craft. Learning from and incorporating the best we see in others, rather than simply attempting to exploit the weaknesses we see, is a hallmark of the infinite game player. The other players of their infinite games are not opponents but rivals. Opponents seek our defeat. Rivals seek our respect. Opponents want their fixed point in time. Rivals desire that every day we push them to be their best.

Not long ago, someone said to me, “unicorns want to be around other unicorns.” According to Guy Kawasaki, Steve Jobs said, “A players hire A players; B players hire C players; and C players hire D players. It doesn’t take long to get to Z players. This trickle-down effect causes bozo explosions in companies.” I prefer Erik Dietrich’s take: “A Players Don’t Hire A Players — They Partner with A Players.” We can look to the rise and fall of stack ranking in the technology world to see the negative impact of the finite game in a world where the goal is to create the future.

Upon finishing the book I was left with a sense of affirmation and sadness. I recommend this book to anyone who intends to undertake any endeavor over the long term.

Read Full Post »

There have been numerous times when a new technology has led to a major shift in how we think computers and software should be built. We are about to see one of those shifts. At least that’s what I’ve come to believe.

Let’s pop into the Wayback Machine and set our sights on the early ’80s. At that time computers had one processor. Hardware-based floating point was the domain of mainframes and minicomputers. Communications between computers existed only for the well-heeled. Security meant keeping your computer locked up.

Life was pretty simple. If you wanted something done, you did it yourself. When software was shared it was done via the US Postal Service on 9-track tape.

Fast forward to the early ’90s. Desktop computers were fairly common. Uniprocessors still ruled. Hardware floating point was now readily available. The internet had just been introduced. Gopher was slowly being displaced by the combination of FTP and web search engines. Security issues happened, but security was, on the whole, a black art practiced by a small number of individuals, requiring skills that you needed to develop yourself.

It was around this time that I was casting about for a thesis topic for my Master’s in Electrical and Computer Engineering. I took on the topic of virus-resistant computer architectures (AARDVARK). Did I mention that it was 1992? Just researching the state of the art in computer viruses was a huge task. No Google, Amazon or ACM online article search. As to the other side of the equation, the how and why of hacking, well, I’ll leave that for another time.

By the time I was done, I’d proposed a computer architecture with separate instruction and data spaces, in which the application’s binary was encrypted and the key, loaded in a separate boot sequence, was stored in a secure enclave accessible only to the binary segment loader. Programs were validated at runtime. I conjectured that such a computer would be ideal for secure use and could be built within 18 months.
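
To make the load-time validation idea concrete, here is a minimal sketch in C++. The names (Enclave, load_segment) are my own illustration, not AARDVARK’s actual design, and a toy FNV-1a digest stands in for a real cryptographic signature scheme; a real loader would verify something like an ECDSA signature against a key held in hardware.

```cpp
// Sketch of load-time binary validation in the spirit of AARDVARK and iOS
// signed binaries. Illustrative only: the toy FNV-1a digest stands in for a
// real signature scheme, and Enclave stands in for hardware that application
// code can never read.
#include <cstdint>
#include <iostream>
#include <vector>

// Toy digest (FNV-1a). A real system would use SHA-256 plus a signature.
uint64_t digest(const std::vector<uint8_t>& bytes) {
    uint64_t h = 0xcbf29ce484222325ull;
    for (uint8_t b : bytes) { h ^= b; h *= 0x100000001b3ull; }
    return h;
}

// The "secure enclave": in hardware, this value would be provisioned during a
// separate boot sequence and be readable only by the segment loader.
struct Enclave {
    uint64_t expected;
    bool validate(const std::vector<uint8_t>& segment) const {
        return digest(segment) == expected;
    }
};

// The segment loader refuses to map anything that fails validation.
bool load_segment(const Enclave& enclave, const std::vector<uint8_t>& segment) {
    if (!enclave.validate(segment)) {
        std::cerr << "segment failed validation; refusing to load\n";
        return false;
    }
    // ...map into the separate instruction space and schedule...
    return true;
}

int main() {
    std::vector<uint8_t> binary = {0xDE, 0xAD, 0xBE, 0xEF};
    Enclave enclave{digest(binary)};   // provisioned at boot
    load_segment(enclave, binary);     // loads
    binary[0] ^= 0xFF;                 // simulate tampering
    load_segment(enclave, binary);     // rejected
}
```

The point of keeping the expected value in dedicated hardware is that even a fully compromised application has no way to forge it.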

Everyone thought it was a great design and the school even worked with me to apply for a patent. The US Patent Office at that time didn’t get it. After five years we abandoned the effort. I was disappointed, but didn’t lose sleep over it.

Fast forward to 2012, when Apple released the iOS 6 security guidelines. Imagine my amusement when I saw echoes of AARDVARK. It’s all there: signed binaries, secure enclave, load validation. Good on them for doing it right.

Let’s step back and consider the situation. Computers are really small. They have integrated hardware floating-point units, multiple processors, and now, with the advent of this generation of iPhone, hardware-based security. The internet has gone global. Google indexes everything. Open source is a thing. So, we’re good?

Not so much. Apple’s iPhones are an oasis in a vast desert of security badness. Yes, IPv6 has security goodness available, but IPv4 still rules. Secure programming practices are all but non-existent. Scan-and-contain is the IT mantra. Threat modeling is an exercise for the academic.

This brings us to last year, when Microsoft announced Azure Sphere. Application processor, dual MCUs, networking processor, security processor. All firewalled. All in the same package. The provided OS is a secured version of Linux. Each device is registered so that only the manufacturer can deploy software, push updates, and collect telemetry via the Azure cloud.

There must be a catch. Well, as you know, there’s no such thing as a free burrito.

The first device created to the Azure Sphere specification is the MediaTek MT3620. And no, you can’t use it for your next laptop. The target is IoT. But there’s a lot of horsepower in there, and a lot of security and communications architecture that developers won’t have to build themselves.

Microsoft is touting this as the first generation. Since they started with Linux and ARM, why wouldn’t you want something with more power for systems that have security at their core? If Microsoft approached this as Apple has the iPhone, iPad, AppleTV, and Apple Watch, why shouldn’t we expect consumer computers that aren’t insecure?

But will I be able to use them for software development? That’s a tricky question.

When I envisioned AARDVARK, my answer was no. That architecture was designed for end-user systems like banks and the military. You can debug a Sphere device from within Visual Studio, so, maybe it’s doable. You’d need to address the issue of a non-isomorphic ownership model.

Are users willing to bind their device to a single entity? Before you say no, consider how much we’ve already put in the hands of the Googles and Facebooks of the world. Like it or not, those are platforms. As are all the gaming systems.

Regardless, I believe that we will end up with consumer compute devices based on this architecture. Until then, we’ll just have to watch to see whether the IoT sector, and by extension the big boys, get it.

Either way, the future is Sphere.

Read Full Post »

There’s been a lot of churn lately over the price of Bitcoin. There’s also been much talk about uses (commercial and otherwise) for the blockchain technology that underlies it. I’ve taken an interest in the subject as of late, and I found a promising source of information in Michael J. Casey and Paul Vigna’s book The Truth Machine.

If you’re looking for a book on how to write blockchain software, this book isn’t going to be of much interest to you. If you’re interested in the history of the technology, its applications, and its societal impacts, you will find it to be a solid read.

Bitcoin, Ethereum, and the like are discussed with enough depth to give non-developers a good sense of the use cases. This is, after all, the point of a technology; otherwise you’ve got yourself shelfware. All the major players are discussed, running the gamut from crypto-anarchists to Wall Street bankers. I find the dynamic of these two extremes battling over this technology fascinating.

One definitely comes away with the sense that the technology is still very much a work in progress. Personally, I find it unfortunate that the public sees it as another get-rich-quick methodology. This loops back to the volatility of the coin markets.

Another, and arguably more important, takeaway is that the systems currently in place do not scale (or at least have yet to be proven to scale). That Bitcoin can process six transactions per second in a world where Visa processes ten thousand is quite telling.

The underlying problem of who’s in charge comes to mind. Without some form of human governance, poor programming will result in bad actors taking advantage of the system. Nowhere does the book address the issue of longevity. The thing about paper (or animal hide, for that matter) is that it has a permanence that has proven itself outside of our advances in storage technology. If we did move to a blockchain-based system of ownership tracking, what happens when another, better one comes along? What will the impact of quantum computing be in a world where proof of work is proportional to CPU expended? What happens when people decide that it’s easier to steal the resources used to mine the coin?
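
As an aside, the “work proportional to CPU expended” point is easy to see in code. Below is a toy proof-of-work loop in C++; the FNV-1a hash and the difficulty scheme are stand-ins of my own (Bitcoin actually uses double SHA-256 over a block header), but the economics are the same: each extra bit of difficulty doubles the expected number of attempts.

```cpp
// Toy proof-of-work: find a nonce whose hash has N leading zero bits.
// Illustrative only; Bitcoin uses double SHA-256, not FNV-1a.
#include <cstdint>
#include <iostream>
#include <string>

uint64_t fnv1a(const std::string& s) {
    uint64_t h = 0xcbf29ce484222325ull;
    for (unsigned char c : s) { h ^= c; h *= 0x100000001b3ull; }
    return h;
}

// Expected attempts grow as 2^difficulty_bits, so the "work" in proof of
// work is CPU time by construction.
uint64_t mine(const std::string& block, int difficulty_bits) {
    for (uint64_t nonce = 0;; ++nonce) {
        uint64_t h = fnv1a(block + std::to_string(nonce));
        if ((h >> (64 - difficulty_bits)) == 0) return nonce;
    }
}

int main() {
    for (int bits = 8; bits <= 20; bits += 4)
        std::cout << "difficulty " << bits << " bits -> nonce "
                  << mine("block-data", bits) << "\n";
}
```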

I only have a few issues with the book.

First, for a book on a complex technological subject, I expect extra fact checking. I noticed two rookie mistakes in this department. ASIC is defined as ‘Application-Specific Integrated Chips’ when it should be ‘Application-Specific Integrated Circuit.’ (It also doesn’t require quotes.) In the realm of techno-history, the authors attribute public-key cryptography to Whitfield Diffie and Martin Hellman. This is, of course, incorrect. That distinction belongs to James H. Ellis (at least until the NSA owns up to when they started using it). Although his work was once classified, it has been publicly recognized for some time now.

Second, the book wanders off the path of techno-history and into the realm of conspiracy theory and political opinion when the authors introduce ‘speculation’ of hacking in the MH370 incident and spend the bulk of the last chapter on political opinion.

On the whole, I found the book to be a very good and current survey of the landscape of blockchain technology and its history.

Read Full Post »

The Chrysanthemum and The Sword is an exploration of what makes the Japanese tick, at least from the standpoint of an early 20th-century scholar. Ruth Benedict was one of that era’s foremost anthropologists.

I could go into the interesting discussion of how the American and Japanese cultures are compared and contrasted. Or how she describes the Japanese approach to Buddhism as being free of non-corporeal entanglements. I’ll leave those to the earnest reader.

What made the greatest impression on me was the lengthy and detailed exploration of on (恩) and giri (義理). These can broadly be thought of as debt and obligation. In the West, we have a fixation on equivalent exchange. We like to pretend that there is nothing which cannot be fully bought and paid for. The Japanese fully recognize that this is not the case and have built a society around the concept of overlapping obligations.

One concept that I have always found difficult to explain is shogimu (諸義務), the obligation to one’s teacher or mentor. This is always an asymmetrical relationship; the apprentice has nothing to offer in exchange comparable to what they are given. In the West, we, as they say, “just take the money and run.” This is especially true of the attitude of Googling for the answers. People have come to believe that they can monetize the collected knowledge of the world without cost to themselves. The problem comes not to this first generation of internet users, but to the second, and it is fully realized in the third. The problem is: who supplies the knowledge?

One of the big complaints of the pre-internet era was how big companies hoarded knowledge, making it available only at a premium. Should it not be free to all? Between the (relatively speaking) small band of developers who contributed to open source, the internet, and Google, we find ourselves in a place where you don’t need a master to monetize. And so, rather than having to pay for software as a function of complexity and craftsmanship, we tolerate mediocre software because it’s free (with ads, or in exchange for our personal information).

Meanwhile, that high-quality software we wouldn’t pay for when it was $400 with a year of updates, we will pay for when it’s a $120 annual subscription to a cloud service. Except that now, if we stop paying, we lose access.

But back to the thorny question. Do we really believe that the Google-for-code and I-built-it-all-from-other-people’s-stuff crowd are going to give back to the community? What will happen when those who made all these goodies possible and available retire? If we take the C++ community as an example, there are hundreds supporting millions. This doesn’t scale.

Answers? Nope.

Thoughts? Some.

Personally, I’ve got a bucket full of on, so I should get back to it.

Read Full Post »

The future has always been a contentious place. I should know, I’ve spent most of my career there.

We’ve come a long way from the idea that the world only needed half a dozen computers [Thomas Watson, Jr.]. We now have so many computers that we’ve managed to exhaust the 32-bit IPv4 address space (all 4.3 billion addresses of it). The solution embodied in IPv6 creates other issues, but that’s a topic for another post.

The interesting part of working in the computer domain is that feeling of being one step ahead of the langoliers. It can be at once exciting and terrifying. It is not for the faint-of-heart or those who believe that the need for learning stopped after their last final exam.

Lately, I’ve been watching an oddly converging divergence of ideas.

One head of this hydra follows the path of the ever bigger: bigger data sets, bigger pipes, bigger computations and, unfortunately, bigger OS’s.

A second head constantly works toward making the whole morass vanish. I remember when I began to see fewer watches as people realized that their phone could do that. That emergency camera the insurance company tells you to keep in your car? Answering machines? Travel alarm clocks? MP3 players? Portable DVD players? I would really hate to be in Garmin’s consumer division.

Another head wants to be everywhere. It’s no longer sufficient to be that operation you could run out of a garage. Now we have to have stuff, both tangible and intangible, available everywhere. Remember when the fastest way to see a first-run Hollywood film overseas was to be on a military base? Speaking of military bases, you may have noticed that people are recognizing that security is important.

The final head is fixated on why computers are a fixed assemblage of hardware. What if I really do need 20TB of memory and a 16K-node mesh?

With all this “progress” going on, it’s all the poor, beleaguered software developers can do just to keep up with one of these. But that’s okay, right? These are all unrelated. Right?

Well, we’ll get to that. For the moment, let’s see if you and I think the same about who’s doing what.

Bigger

Amazon and Google are both doing cloud, but the company I find interesting here is Microsoft. Azure takes the problem of software at scale and reduces it to some fundamental building blocks (compute, storage, database, and network). Operating system? We don’t need no stinking operating system! For those of you who remember what an IBM 1130 is, you’ll love Azure. It’s like driving a TR6 on the PCH at 80 mph (you could get from LA to SF in, I don’t know, five-ish hours). The world is yours until you crash. [Disclaimer: I have never driven a TR6.] Want more CPUs or storage or network? Add more.

Invisible

The battery in my first mobile phone was heavier than my current phone. Apple’s biggest coup isn’t creating ever smaller technologies. The company is the technological equivalent of Michelangelo, who famously remarked about the process of sculpting his David:

It’s simple. I just remove everything that doesn’t look like David.

When Apple introduced the iPhone, developers were all torches and pitchforks. This wasn’t how things were done. Where’s the disk? How do I see this other application’s files (“app” was still a trending meme)? Apple took away bits that we were accustomed to but didn’t actually need. Most of the time. Sometimes they pulled an Apple Round Mouse. But mostly they drove development in a direction understood only by those of us who have been given the task of making a wireless keyboard that can run for six months on a pair of AA batteries: how to write code that wouldn’t make the device die in under four hours. To a large extent this was the evidence of Gates’ Law. We are now on the cusp of the Apple Watch, which promises to hide the technology behind the technology even further. As someone who uses Apple Pay on a regular basis, I’m looking forward to seeing how the Watch does.

Everywhere

This is where the divergence converges. Azure instances can change locality over time; as a result, your customers access servers in their vicinity. The user interfaces of software written for MacOS or iThings are multi-language and multi-locale (units) capable by default. Unlike Azure, iCloud isn’t so much a platform for developers as it is a vast warehouse of data. Apple’s recent announcement of ResearchKit has already shown how much impact an everywhere technology can have.

Secure

As one who has had the distinct displeasure of pulling his company’s internet connection on 2 November 1988, I believe that security is important. My master’s thesis was focused on computer viruses. I deal with the failure of developers to apply sound security practices to open source and commercial software on an ongoing basis.

For a really long time, no one really took securing the computer all that seriously.

Now, if you look at both Microsoft and Apple, you see them taking security seriously. On iOS, it’s baked in. On Windows, it’s half-baked. Yes, that’s a bit of snark. Security shouldn’t be an option. In iOS, if an application wants access to your contact list, it must declare that it wants to be able to access those APIs. The first time it attempts to access them, the user is prompted to allow the access. At any time, the user can simply revoke that access. Every application is sandboxed, and credentials are held in a secure store. On Windows, security is governed by policy. These policies are effectively role-based. This is fine as far as it goes, but, like in the days of old, if you’re the wrong role at the wrong time running the wrong application (virus), you can deep-fry any system. Hence my comment about it being half-baked.
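
To illustrate the difference, here is a small C++ sketch of the two models. The types and names are my own invention, not Apple’s or Microsoft’s actual APIs: per-app capability grants that a user can revoke individually, versus role-based permissions that anything running under the role inherits.

```cpp
// Sketch: capability-style per-app grants (iOS-like) vs role-based policy
// (Windows-like). Types are invented for illustration.
#include <iostream>
#include <map>
#include <set>
#include <string>

// iOS-style: each app declares the capabilities it wants; the user grants or
// revokes them individually, and checks happen per app, per capability.
struct CapabilityStore {
    std::map<std::string, std::set<std::string>> grants; // app -> capabilities
    bool allowed(const std::string& app, const std::string& cap) const {
        auto it = grants.find(app);
        return it != grants.end() && it->second.count(cap) > 0;
    }
    void revoke(const std::string& app, const std::string& cap) {
        grants[app].erase(cap);
    }
};

// Role-based: permissions attach to the role, so any program running under a
// role inherits everything that role can do -- the "wrong application at the
// wrong time" problem described above.
struct RolePolicy {
    std::map<std::string, std::set<std::string>> role_perms;
    bool allowed(const std::string& role, const std::string& perm) const {
        auto it = role_perms.find(role);
        return it != role_perms.end() && it->second.count(perm) > 0;
    }
};

int main() {
    CapabilityStore caps;
    caps.grants["photo_app"] = {"contacts"};
    std::cout << caps.allowed("photo_app", "contacts") << "\n"; // 1
    caps.revoke("photo_app", "contacts");
    std::cout << caps.allowed("photo_app", "contacts") << "\n"; // 0

    RolePolicy policy;
    policy.role_perms["admin"] = {"contacts", "install"};
    // Any process running as admin -- including a virus -- gets both.
    std::cout << policy.allowed("admin", "install") << "\n";    // 1
}
```

The practical difference shows up in revocation: the capability model can strip one app of one permission, while the role model can only reshape the role for everything running under it.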

Do we seriously believe that banks should be running on an operating system that isn’t built from the ground up around security?

Fixed

This final hydra head is perhaps the most interesting to me, as it holds the most promise. It represents the hardware analog to Azure. Today, you may be able to configure an Azure instance, but that configuration only goes so far.

Look back to the dim days (which for some reason or other were in black and white, even though we had color movies as far back as 1912). Back then, if you wanted more oomph, you ordered it (and an additional power drop). Now you are greatly constrained. Remember that 20TB system I mentioned earlier? Why can’t I get one? Because our manufacturing model is based on scale. This has been a good thing. It’s made it possible for me to have a laptop that doesn’t weigh 16 lbs with a run time of 2 hours. Isn’t that great? Ask a left-handed person sometime.

As the number of actual computer manufacturers dwindles, we’re seeing more white-box systems cropping up. These are being used to create the application clouds. But at a time when power is real money, how much are we wasting in resources to access the interesting bits of these boxes? More and more we see the use of storage arrays. All well and good. So, where are the processor arrays? The graphics arrays? What if I need 12 x 5K monitors? The people who crack this nut will make a great number of people very happy.

The Future Won’t be Brought to Us by AT&T

Once, AT&T was the go-to place for the future of the future. Not anymore. The future is far bigger than anyone imagined it to be, and certainly far larger than any one company is capable of providing.

The question is, how do we identify the people who are ready to not only build that future, but to build it out?

Read Full Post »
