Archive for the ‘Computers’ Category

I believe that you can learn a lot about people through the things they fill their heads with. Over time the mechanisms for this process have grown in number and availability. Once people would travel great distances seeking out teachers. Of course, if you were powerful enough, you could have them come to you. Once we managed to get the teaching down in permanent (well, mostly) form, you didn’t actually need to bother with the whole physical presence thing. Still, the need for a scribe made this practical only for the silly rich. By the time we get to the 20th century, the super clever public library idea meant that you could recommend a book to someone without running the risk that their dog would eat the only copy for a thousand miles. The mid 20th century added audio and, by the end, video to the menu. With the advent of the internet, we could not only find materials to borrow from libraries with ridiculous ease, but could reserve them and get an email when they were ready for pickup. Then came the Kindle, Zinio, Netflix and the iTunes store. Now if someone is ingesting some bit of knowledge, in all likelihood, you can too (and within minutes).

So, what’s with the Burke-ian prologue?

Well, I was reading The New Yorker‘s article The Shape of Things to Come, about Jony Ive and the future of Apple. Among the bits of past, present and impact was a fascinating detail: Ive had been watching Moon Machines [iTunes Store]. Not what I expected.

I’ve always been a bit of a space wonk, so I was interested just on the face of it. What I found fascinating was that a person born in 1967 in England, best known for early 21st century industrial design, saw something of interest in a series dedicated to the United States’ Apollo program.

Having now watched the series, there are things that jump out at me. As with every time I take in something that’s been recommended (if a person with the time constraints of an SVP at Apple mentions that they found something worth their time, that’s a hint one would be ill-advised to ignore), I strive to understand how it relates to the person, their work and goals. Ive’s comment speaks volumes.

… like the Apollo program, the creation of Apple products required “invention after invention after invention that you would never be conscious of, but that was necessary to do something that was new.”

The Apollo program was a tech start-up writ large. The goal was abstract; the timetables unyielding; the cost astronomical (literally); the toll on people and their relationships severe. In the end, the successes were ascendant and the failures devastating. The six episodes take on the major aspects which had to work together in order to assure the success of the program.

The lessons of Apollo are applicable to endeavors in science, business, politics and design. Issues of control, quality, planning, communication and contingency are laid bare. As are their failures. Of particular distinction are the moments of crisis. Unlike anything before or since, we have documentation of and visibility into the people who stepped up to lead their teams, and the processes through which they overcame those crises.

In an era of ever-increasing abstraction and the misplaced belief that you don’t actually need to understand how things work in order to produce something of quality, Moon Machines provides timely lessons. The quality of the end product begins with the confluence of domain and technology, not the application of one to the other. The speed and manner of disposing of problems during a crisis depend greatly on the depth of the team’s understanding of two questions: What do I have? and What do I need? As well as the understanding of how to get to the latter using the former.

In the end, I have a greater admiration of those involved in Apollo thanks to a comment by Jony Ive.

Read Full Post »

It’s been five months since Apple’s Worldwide Developer Conference ended. This year the WWDC iOS application had the session videos available sooner than ever. I’d been watching them during my commute to and from Portland’s downtown. Well, this past week, I finished watching the last one. And with 107 videos, it’s Apple’s version of Netflix putting up a few years of Doctor Who episodes for people to gorge on.

Truth be told, I would have finished a month ago. So, what was the hold-up? iOS 8 was released. In doing so, it revealed that some change in the OS caused the WWDC app to crash when you tried to view a video. Very sad. The really unfortunate part is that I was only four videos away from having watched them all.

Well, I’m happy to report that the WWDC app is now working fine and I’d encourage anyone who’s doing OS X or iOS development to take advantage of the videos. There is a tremendous amount of information on the latest tools, technologies and techniques. Swift and Metal feature prominently.

Now, I’m not saying that these are all Jobsian quality presentations. As much as I appreciate that software development is a global endeavor, whoever decided that someone with a pronounced French accent should be littering his presentation with the word banana should be forced to watch that session from beginning to end. Subtitles would also have helped some sessions.

All-in-all these presentations are well crafted and delivered. Apple also continues its tradition of providing substantive sample code. Nothing is worse than code snippets that don’t convey the true flavor of using an API as you would in practice. Happily gone are the days when the AOCE (Apple Open Collaboration Environment) documentation was 1200 pages of paper with no index. Not the best way to pick up a new technology.

So if you need a break from watching the original run of The Tomorrow People, the WWDC sessions are there for you.

Read Full Post »

In the mid-90s I was introduced to the phrase “Compile it. Link it. Ship it. Debug it.”

Yeah, it’s sadly funny, but what’s that got to do with QA not being a speed bump?

First we just developed software; then we had waterfall, then agile, then test-driven development. Next? Who knows. Here’s a clue: we’ll still be developing software.

In a world of would-be Sherlock Holmeses, we are in desperate need of more John Watsons. And like the classic pair, the end product is much better when they work in a tightly coupled fashion. And by that I mean both geographically and temporally. Let’s look at some common models of developer/tester interaction that are designed to fail.

Remote Test

There are some who believe that the phone/IM suffices as a mode of communication when developing complex software. Unfortunately, the scale of the misunderstandings grows in proportion to the distance between the developer and the tester. “Frog protection” anyone? Nothing brings a developer back to reality from Nerdvana faster than someone standing in front of them waiting for them to explain exactly what it is that the software is supposed to be doing. Out of sight, out of mind.

Software First, Test Plan Later

This is especially interesting when QA has the responsibility of saying that the story associated with the feature is working. Let’s make it even more fun by saying that we can count story points when the developer says they’re done, but we won’t actually get QA sign-off until the next sprint. Technical debt-ville.

Unit Tests Mean We Can QA Later

Everyone who believes that they can proofread their own documents, stop reading now. The hallmark of a great tester is that the first thing that comes to their mind when they look at your software is the last thing on yours: how can I break this? Developers struggle to get the “happy” path working. Where do most problems come up? The “sad” path. If people do get to write unit tests, and the code is structured to allow unit tests, it’s more likely than not that only the happy path will be tested. After all, it’s QA’s job to test what doesn’t work, right?

Testers as Second Class Citizens

Holmes may be brilliant, but Watson is the proxy for the rest of humanity. You know, the ones who don’t keep heads in their refrigerator. The ones who we expect to, you know, pay money for the software. Not everyone reads Shakespeare in the original Klingon. Testers keep the “pizza under the door” crowd honest. Am I being a bit harsh on developers? Perhaps, just a wee bit.

Beta Test as Test

Throwing it over the wall taken to the extreme. Well, not quite the Google “it’s not a product, it’s a beta” extreme, but GMail does pretty much set the goal post for that one. Guess what: when a potential customer gets hold of a beta, ninety percent of them will treat it as though it’s a complete product. Do you want your bank using your “beta as test” software? Would you use a beta compiler for your grandmother’s pacemaker? How about a drone? The close cousin to this is the weenie move of calling your software 0.9 for a decade and thinking that somehow insulates you from your commitment issues. Tossing your software out to the world without having the courage to “own it” is just lame. Doing so because you can’t be bothered to do the work is unprofessional. “Lost your company’s data? You can’t complain. It’s beta software, after all.”

It’s Time to Start Treating Testers Like the Partners We All Desperately Need

I’ve had the privilege to work with some very gifted individuals who time after time brought code, that I was sure was solid, to its metaphorical knees. They helped me to explain the complex in human terms. Treat them well and they will improve both the end product and the process. Remember, QA is not a speed bump, a nuisance to be endured. It is the whetstone that keeps the knife keen.

 

Read Full Post »

Update 2015-12-21: Apple released Swift source code. I’ve updated my sample again to reflect what I learned.

Update 2015-09-27: It’s been a year and much has happened with Swift. Please see my latest post on Swift command line input for current code.


Recently I’ve been watching Stanford intro CS classes. I like to see how they present the fundamental concepts and techniques of programming. This got me thinking about those missing bits of Swift that would allow me to actually write a command line-based application. [see my previous post Swift: Second Things First] Having these bits would allow me to teach Swift as a first language without having to teach the abstractions and interfaces required to properly develop for a graphical interface. I’m not much into attempting to teach in a way that breaks the “go from strength to strength” methodology. If you’re going to teach me to sing, it’s a whole lot easier if I don’t have to learn how to spin plates at the same time.

So, I spent some time and created a simple set of routines that when added to a Swift command line application allow you to get and put strings, integers and floats. Not exactly rocket science, which begs the question, “Why didn’t Apple do it?” Well, since I’m not Apple, I have no idea.

Here, without further comment, is the content of the file I wrote. That it is not the best Swift code, I have no doubt. If you can make it better, cool. And Apple, if you read this, please make something sensible of it.

Update: I’ve manually wrapped a few lines as WordPress is clipping.

//
//  swift_input_routines.swift
//  swift input test
//
//  Created by Charles Wilson on 9/27/14.
//  Copyright (c) 2014 Charles Wilson.
// Permission is granted to use and modify so long as attribution is made.
//

import Foundation

func putString (_ outputString : NSString = "")
{
  if outputString.length >= 1
  {
    NSFileHandle.fileHandleWithStandardOutput().writeData(
               outputString.dataUsingEncoding(NSUTF8StringEncoding)!)
  }
}

func getString (_ prompt : NSString = "") -> NSString
{
  if prompt.length >= 1
  {
    putString(prompt)
  }

  var inputString : NSString = ""
  let data        : NSData?  = NSFileHandle.fileHandleWithStandardInput().availableData

  if ( data != nil )
  {
    inputString = NSString(data: data!, encoding: NSUTF8StringEncoding)!
    inputString = inputString.substringToIndex(inputString.length - 1)
  }

  return inputString
}

func getInteger (_ prompt : NSString = "") -> Int
{
  if prompt.length >= 1
  {
    putString(prompt)
  }

  var inputValue : Int = 0
  let inputString = getString()

  inputValue = inputString.integerValue

  return inputValue
}

func getFloat (_ prompt : NSString = "") -> Float
{
  if prompt.length >= 1
  {
    putString(prompt)
  }

  var inputValue : Float = 0.0
  let inputString = getString()

  inputValue = inputString.floatValue

  return inputValue
}

And here’s a little test program that uses it.


//
//  main.swift
//  swift input test
//
//  Created by Charles Wilson on 9/27/14.
//  Copyright (c) 2014 Charles Wilson. All rights reserved.
//

import Foundation

var name = getString("What is your name? ")

if name.length == 0
{
  name = "George"

  putString("That's not much of a name. I'll call you '\(name)'\n")
}
else
{
  putString("Your name is '\(name)'\n")
}

let age = getInteger("How old are you \(name)? ")

putString("You are \(age) years old\n")

let number = getFloat("Enter a number with a decimal point in it: ")

putString("\(number) is a nice number")

putString("\n\n")
putString("bye\n")

You’re probably wondering why I don’t use print(). Well, print() doesn’t flush stdout’s buffer. And, I really like to enter data on the same line as the prompt. And for those of you who say that you can use print() in Xcode’s output window, I’ll remind you that a simulator isn’t the target device.
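For what it’s worth, later Swift releases (well after this post) grew a readLine() function; a minimal modern sketch of the same prompt-then-read idea, with an explicit flush so the prompt is visible before input blocks, might look like this (the names here are current Foundation API, not the 2014 code above):

```swift
import Foundation

// Write the prompt bytes straight to stdout, flush anything print()
// may have left buffered, then block on readLine(). readLine()
// postdates the code in this post.
func promptedLine(_ prompt: String) -> String {
    if let data = prompt.data(using: .utf8) {
        FileHandle.standardOutput.write(data)
    }
    fflush(stdout) // make earlier print() output visible too
    return readLine() ?? "" // nil at end-of-input
}
```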

“But, wait. You didn’t comment the code.” No, I didn’t. By the time a student has enough understanding of Cocoa to compose the I/O routines provided above, the comments would be unnecessary.

So, there you have it. The typical second program that you’d ask a student to write.

Read Full Post »

Over the years, I’ve gotten used to the reality that the vast majority of people who work in the technology field only do it for the money. The cab drivers who have told me they want to “get into” computers because “it’s easy money” don’t faze me. Similarly, the sea of “recruiters” who contact me spouting techno-babble get a pass for their cluelessness. As the embodiment of evil would say, they “are mercifully devoid of the ravages of intelligence.”

Every now and then though, an email comes to my inbox that begs the question, “How does this person not get fired?”

Let’s look at this tremendous work of ignorance and hypocrisy. We’ll say that it came from M at Foo (a major technology corporation). I’ve colorized the text in blue. All other styles applied to the text are original.

Let’s begin.

Hi Charles,

I recently found your profile in our database, and your background is impressive. The [Foo] Media Division will be flying several candidates in for interviews at our Seattle headquarters in April and considering you. The roles we are filling will all be located in Seattle and a full relocation package and immigration support would be provided if you are selected.

Someone did a database keyword query and it included my name. Spiffy. If only there were a single thought in the first sentence instead of two. The grammar goes downhill from there. The deluge of prepositional phrases in sentence two points to a completely disorganized mind. Once again we see multiple thoughts presented. This time, however, the author neglected the second verb. One would assume that it is “are.” Forgiving this error as a typo, one is left with the distinct impression that (1) people are ignorant as to the location of Foo and (2) they will be available to go to Seattle on short notice. Although it is nice to know that the positions will be in Seattle, the fact that immigration support “would be provided” indicates that my resume has, in fact, not been read. Additionally, following the trend established in previous sentences, multiple thoughts are present. Finally, why should I care that I would be relocated when I don’t know what the position is yet?

We are looking bring on board Senior (7+ yrs. industry experience) Software Developers with experience designing and architecting highly scalable and robust code in Java, C++ or C#.  Strong OOD skills and CS fundamentals are required. Working with big data or machine learning can be a major plus.  In addition we have roles for Principal Engineers, Software Development Managers, Software Developers in Test and Technical Program Managers. If you fall into one of these categories we offer a different interview process independent of this event and eager to support you in learning more about these roles.

It appears that the fact that the position is senior merits both bolding and underlining, lest I miss it. It also seems that what is meant by senior is up for debate. I ask you, gentle reader, why would you abbreviate years by dropping two letters only to add a period? Here we see a neglected preposition (of). I will refer back to my unread resume as the reason for my assertion that this sentence is unnecessary. Let us press on.

These ever-so-senior software developers (bold, underline) must have experience designing and architecting. I am reminded of the George Carlin sketch about the kit and caboodle. Redundant, anyone? Moving on, let’s consider “highly scalable and robust code”. Code which is highly-scalable (note the proper use of hyphenation) generally demands that it also be robust. This is my opinion, but I would imagine that people would generally agree that non-robust code tends not to be very scalable. As to my languages of record, I will again refer to my seldom-read resume.

Obviously, the next sentence is of critical import as it is bolded and underlined in its entirety. Now, if anyone out there knows a developer who can architect a highly-scalable system and yet is lacking computer science fundamentals and strong object-oriented design skills, please introduce me.

Slogging along we have an obvious statement regarding a working understanding of the two biggest buzzwords in the heap today. That these can be a plus makes for a fairly nebulous statement. Is experience in these disparate areas important? Will it be part of the job?

Now we wander off into the weeds by telling me that they’re also looking to fill other positions. So, if they’re completely off the mark, not to worry?

If interested in exploring Development opportunities with us, the first step will be to complete our coding challenge ideally within the next 3 to 5 days.  If you need more time, please let me know. After the hiring manager reviews your ‘successful’ code, we’ll contact you to confirm your onsite interview where you will meet key stakeholders from the [Foo] Media team.

Back in multiple-thought land, let’s begin by ignoring the missing subject of the sentence. And now that you’ve bothered to read this far, here’s the catch. You have 3 to 5 days to complete a coding challenge. The plot thickens. But it’s not really 3 to 5 days. You can ask for special dispensation. It is nice to know that my code will be successful and that I will be contacted to confirm my onsite interview. But wait, we have another thought here. At the onsite interview, I’ll meet key stakeholders. For the less techno-babble-encumbered, those would be the marketing and project managers.

Please click here [link removed, sorry] for the coding challenge and include your full name and email address in the tool. The application works best in Firefox or IE. There is no time limit, but if you do take breaks it counts against your completion time. Please expect the challenge to take between 10 – 90+ minutes.  The KEY is to write your absolute BEST code.  Additionally, be aware that should you be selected for interviews, you will also be asked to produce code on the white board.

Here’s a puzzling set of instructions. If they have read my resume and managed to send me an email, why is it that they need me to create an account in “the tool.” “The tool?” Seriously? I don’t recall moving to The Village.

Not so fast, now it’s “the application” and it works best in Firefox and Internet Explorer. Best? How about telling me the required browser version to keep me from getting halfway into this “challenge” and having “the tool” spew like a unicorn doing the technicolor yawn.

And in a fit of verbal vomit worthy of a Willy Wonka legal contract, we are told that (1) there is no time limit, (2) the amount of time you take matters, (3) the estimated time to complete is somewhere between 10 minutes and God knows how long, and (4) [this is the big one] we are expected to write “your absolute BEST code.” And as an afterthought let’s tack on a comment about being able to produce code on a white board “should you be selected for interviews.”

Let’s think about this. Okay, you really didn’t need to, but it’s a nice way to slow down the pacing of the post.

In case you hadn’t figured it out, the fourth in this set of nonsensical requirements is what inspired my title. It comes from a scene in “Men in Black.”

James Edwards: Maybe you already answered this, but, why exactly are we here?

Zed: [noticing a recruit raising his hand] Son?

Second Lieutenant Jake Jenson: Second Lieutenant, Jake Jenson. West Point. Graduate with honors. We’re here because you are looking for the best of the best of the best, sir!

Zed: [throws Edwards a contemptuous glance as Edwards laughs] What’s so funny, Edwards?

James Edwards: Boy, Captain America over here! “Best of the best of the best, sir!” “With honors.” Yeah, he’s just really excited and he has no clue why we’re here.

How do I create my best code? [aside from not intensifying absolutes] I think about the problem. Solving a problem in 10 minutes or less implies to me that the person (1) has solved the same problem so many times that they have reached the level of unconscious competence with regard to it, (2) did the first thing that came to mind, or (3) guessed. You know the best way to not create highly-scalable systems? By not thinking much about the problem.

Lastly, please send your updated resume directly to me: [M]@[foo].

Should I do this before I embark on the “challenge” or after? Who else would I send my updated resume to? And why bother restating your email address (incompletely) when I could simply reply to this email?

NOTE- If you are currently interviewing with another [Foo] group, we ask that you finish that process. In the event you are in college (at any level) or graduated within the last six months, we invite you to directly apply to positions via this link: www.[foo].com/college.

“NOTE” is followed by a dash rather than a colon. And what happened with the whole lastly thing? Here we have an indication that Foo’s recruiting system can’t track who’s talking to you. So much for robust. We again see that no resumes have been read here. More than that, why would this even enter into the equation of an email to someone who is expected to have 7+ years of industry experience?

Thank you for your time and look forward to receiving your code challenge response.

There can’t possibly be more, you say. Not so, dear reader. The great two-for-one sentence wrangler strikes again.

Warm regards,

[M]

At least the closing was without incident.

For a company that claims to be seeking the very best, they have a funny way of showing it. If you would like to offend the highly-educated and technically experienced developers you seek to hire, send them emails that simultaneously say that they (1) aren’t worthy of a proofread email and (2) aren’t deserving of a phone screen with a person.

After I’d read this email several times, I looked M up on LinkedIn. Their profile is private. That was a first for me with regard to an internal recruiter.

Well done Foo. Well done.

Read Full Post »

I’ll be the first to admit that I obsess over security. My internship in college dealt with Unix security. I’ve created encrypted protocols for wireless data communication. And for my master’s thesis, I created a highly virus-resistant computer architecture (AHVRC – aka Aardvark). I wrote it in 1993. I put it up on the web in 1999.

So, what to my wondering eye did appear a few days ago? None other than the latest installment of Apple’s “iOS Security” document.

Personally, I like reading Apple documentation. But then again, I read owner’s manuals. Anyway …

So, I find myself reading iOS Security and keep thinking, “that’s what I would have done.” Wait, that’s what I did do.

I was casting about for a thesis topic and my department chair noted that no one was doing anything in secure architectures. So I spent a chunk of time thinking and put a little 124-page missive together. Now, gentle reader, you, having taken it upon yourself to read a few pages in, begin thinking, “this can’t be serious; it’s got animals instead of sub-systems.” True, true. The master level is supposed to have a certain level of awe and wonder associated with it. Boring. Here’s a little secret. In a traditional master’s program, you devote the equivalent of three courses to the research and writing of a document (thesis). The point of the thesis and its defense is to demonstrate mastery of the discipline. The defense is done publicly. Anyone may attend. You must advertise it to the student body. Some number of professors, typically in your discipline and of your choosing, make up the group who decide if you and your work are up to snuff. Questions may be asked in any area of your studies, but primarily the discussions will revolve around your thesis. Hence its being called a defense. Once the professors have had at you, the gallery gets their shots.

You already knew that, didn’t you? Well, that’s not the secret.

The secret is that the defense is conducted within the context of the thesis. They attack, but you get to build the world. Think of it as a duel. You get to choose the weapons.

Nothing warms the cockles of my heart more than to see the distinguished faculty discussing a highly technical matter in the context of dolphins, gophers and kinkajous.

I even applied for a patent (with Rose-Hulman generously funding the filing). Had I had more patience and a more informed examiner at the USPTO, I would probably have a patent for the work.

I’m not sure if the developers at Apple ever read my thesis or referenced my patent filing. I do find the similarities in the two architectures interesting.

I hope everyone who reads this posting takes the opportunity to read both documents. Apple’s because they present the state-of-the-art in application security model implementation. Mine, because I think I’m pretty well pleased with myself about it.

Read Full Post »

Lately, we seem to be a bit over-exuberant in our desire to point fingers while running about in circles yelling “Look! Look! [insert major firm’s name here] screwed the pooch big time!”

While I believe it important to identify and correct security issues in as expedient a fashion as possible, the endless echo chamber adds nothing of value.

I’ve read the code at the center of Apple’s recent SSL security issue. Yes, it’s bad. What it isn’t is unexpected.

I’m sure the old “goto’s are the source of all kinds of badness, from security holes to acne” crowd probably have their pitchforks and torches at the ready. I, however, will not be joining them for this outing.

Hopefully, the thoughtful reader has already:

  • reviewed the code in question
  • learned the difference between -Wall and -Weverything
  • read Lyon’s commentary
  • adopted static analysis tools as an adjunct to [not a replacement for] code reviews
  • realized why it is so important to fail first
  • accepted that this type of thing isn’t going to change any time soon

You may be asking yourself, “What is this failing first of which he speaks?”

Simply put, every function/method/routine should fail first and fail fast. Far too many people are out there writing “happy path” code. This is to say their code properly handles situations where everything goes to plan. Hence, the happy bit.

This kind of code isn’t really interesting code in my opinion. Ever since my classmates in college started showing me their programs, I’ve taken great pleasure in poking holes in them. My master’s work in computer viruses was an exercise in attempting to create a design that was intended to fend off attacks. Working at GE Space and later on Visual SourceSafe gave me an appreciation of systems that did not forgive failure. As a result, when I look at a problem, I think first about what will go wrong. Not what could, but what will. As was once said, “constants aren’t and variables won’t.” Or as Fox Mulder was wont to say, “trust no one.” Parameters will be invalid, globals will change while you’re trying to use them and other routines you call won’t do what you need them to do. This includes system routines. In the time since I began using *nix systems around 1980, I’ve seen attempts to set the time crash the system, attempts to set print output size return snarky commentary, and countless weird responses to out of memory conditions. To make matters worse, many people are still fixating on making their development environment behave as though we still lived in a world where the VT100 was the newest tool in the box. We don’t want to refactor the code because the re-validation effort would be outrageous.  We write our code to the least common denominator.

So, what’s a developer to do? Here is my hit list. It’s not exhaustive. It won’t end world hunger. It is entirely my opinion.

Validate All Parameters

Not some, not just pointers, not just the easy ones, but all of them.
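As a sketch of what that means in practice, here is a small Swift routine (the function and its bounds checks are invented for illustration) in which every argument is validated before any work happens:

```swift
// Hypothetical routine: every parameter is validated up front,
// not just the convenient ones, before the string is touched.
func substring(_ s: String, from start: Int, length: Int) -> String? {
    guard start >= 0, length >= 0, start + length <= s.count else {
        return nil // reject bad input immediately
    }
    let begin = s.index(s.startIndex, offsetBy: start)
    let end = s.index(begin, offsetBy: length)
    return String(s[begin..<end])
}
```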

Fail as Soon as You Fail

Don’t create a variable and continuously retest it as you go down the routine, get to the bottom, unwind and return.

Fail First

Since things can go wrong, test for them first. Failure code tends to be small, success code large. When you put the failure case after the success, it greatly reduces its connectedness to the related if statement. When you put it up front, you can immediately see what’s supposed to happen when things go south. Additionally, since we fail first, this can be done before you actually put in ‘operational’ code. This allows you to put together a framework for the code faster.
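Swift’s guard statement (added to the language after this post was written) bakes this shape in directly; a small illustrative sketch, with names invented for the example:

```swift
// Each precondition is tested, and fails, before any 'operational'
// code runs; the success path is the last, unguarded line.
func parseAge(_ input: String?) -> Int? {
    guard let text = input, !text.isEmpty else { return nil } // fail first
    guard let age = Int(text), (0...150).contains(age) else { return nil }
    return age
}
```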

Return Something Meaningful

There are very few instances where you should create a routine that doesn’t return either a boolean or an enumeration of state. Use an error context when necessary. Don’t use a global. Ever. Ever ever. errno has got to be one of the worst ideas anyone ever had. Although errno shifted up 8 bits is right up there.
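In Swift terms, “an enumeration of state” might be sketched like this (the cases and names are invented for the example, not from any real API):

```swift
import Foundation

// Return a state enumeration instead of stashing failure in an
// errno-style global; the caller must handle every case.
enum ReadResult: Equatable {
    case success(String)
    case emptyInput
    case badEncoding
}

func decode(_ bytes: [UInt8]) -> ReadResult {
    guard !bytes.isEmpty else { return .emptyInput }
    guard let text = String(bytes: bytes, encoding: .utf8) else {
        return .badEncoding
    }
    return .success(text)
}
```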

Initialize All Local Variables

The compiler may indeed initialize them for you. If you’ve got a smart compiler (they really are good these days), then this won’t cost you anything. Static analyzers will tell you to do this.

Have a Single Exit Point

Please don’t whine about how it makes your code look. You’re probably going to have other issues if you don’t. Things like memory and lock management in languages like C and C++. The future generations who are trying to debug around your routine will thank you for not making them dig through the chicken entrails to figure out how they got out of the routine.

Scope Your Variables

Having all your locals at the top of the routine because they’re easier to find says something. It’s not a good something. Did I mention that compilers were really good? Having variables in the smallest possible scope helps them. Create a scope if you need to; there is no penalty from the code police for using braces.

Scope Your Clauses

This includes if‘s, case‘s, for‘s and anything else that can have more than one statement. This would have made the Apple bug much more obvious.

Use the Appropriate Flow Control Structures

We can see that had Apple used else if’s instead of a series of separate if’s the code would have shown the dependent sequencing and made the goto’s unnecessary. There is a time and place for goto’s, but when they are used to such an extent as in the Apple case, it indicates structural issues. There are multiple control mechanisms for a reason. Using elaborate variable manipulation when you could use a break boggles the mind.
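The difference is easy to see in miniature. The steps below are hypothetical stand-ins for the chained verification checks, sketched in Swift rather than the original C; the else-if ladder makes the dependent sequencing explicit with no goto in sight:

```swift
// An else-if chain encodes "each step depends on the previous one
// succeeding" directly in the control structure.
func verify(_ a: Bool, _ b: Bool, _ c: Bool) -> String {
    if !a {
        return "step A failed"
    } else if !b {
        return "step B failed"
    } else if !c {
        return "step C failed"
    } else {
        return "verified"
    }
}
```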

Stop Pretending You’re Smarter than the Compiler

If you’re not keeping up with what the Intel, gcc or clang folks are doing with compilers, or what Intel is doing with profiling feedback, then don’t even try to pretend that you can provide more help to the compiler than it can figure out on its own.

Use a Static Analyzer

Static Analyzers have way more patience for looking at code lifetimes than a person ever will. They don’t have attitude and will readily call your baby ugly. We need that. We have a whole boatload of ugly babies.

Make Code Reviews a Required Part of the Development Process

This may be one of the hardest things to do. It’s that ugly baby thing. It’s “the code” and not “our code.” I have yet to meet a developer who would disagree that they desire to produce the best product they can. Code reviews should be about objective measures and not bike sheds.

Test it Until it Falls Over

Good developers and good testers are like orchids. Unless you treat them well, they aren’t around very long. A good tester is a good developer’s best friend. A bad developer will drive off a good tester. A bad tester can kill a project. A good tester will understand the product and its use. They are also really sadistic people who take great pleasure in tormenting the product. What they do is find the sharp bits that are protruding and point them out. The thing about developing software is that after a while you stop thinking about what it might do and only think about what it does do. I can’t count how many times I’ve heard a developer ask, “Why would you do that?”

Be Realistic

If you don’t allow for the review feedback loop, you will suffer. It takes time to make corrections. Done is only done when you’ve exercised your error paths. If you don’t do failure testing, you will suffer. If you believe that once the happy path passes that you are ready to release, you will suffer. Notice the pattern?

Read Full Post »

I’ve been reading James Hogan’s works since 1980. That being said, this is the first non-fiction book of his that I’ve read.

It also took the longest to finish.

Mind Matters: Exploring the World of Artificial Intelligence is an interesting walk through the history of AI.

As someone who’s never believed in hard AI, I found the book to be a humbling reminder of the limitations of technology.

It would be really nice to have a cybermind out there to find better cat videos, but then again I don’t have a flying car yet either.

 

Read Full Post »
