Imagine you’re about to launch the biggest strategic program in your company’s history. Who would you put in charge of it if you had your pick:

  • The best project manager currently in your employ?
  • The best project manager you could recruit using your usual internal and contract recruiters?
  • NASA’s David Lavery, the Mars Science Laboratory program executive?

Me too.

Why has NASA been able to deliver such a stunning stream of successful projects … from the Spirit and Opportunity Mars rovers, to Cassini, to Chandra, to Odyssey, to, most recently, Curiosity?

Answer: It decided to learn from its failures rather than running from them by “holding people accountable” (ManagementSpeak for “finding a convenient scapegoat while making sure we never find out what’s really going on”).

In the late 1990s, the Mars Climate Orbiter, Polar Lander, and Deep Space 2 missions all went wrong due to easily prevented technical flubs. In the Climate Orbiter’s case, for example, one piece of software produced its results in English units, while the software that consumed those results assumed they were metric. The resulting navigation error sent the spacecraft into the Martian atmosphere at too steep an angle.
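To make that flub concrete, here’s a minimal sketch (not NASA’s actual ground or flight software; the function names and numbers are invented) of how a unit mismatch slips through when values cross an interface as bare numbers:

```python
# Toy illustration only (not NASA's code). A bare float can't say what units it's in.
LBF_S_TO_N_S = 4.448222  # conversion factor: pound-force seconds -> newton seconds

def reported_impulse_lbf_s() -> float:
    """One team's software reports thruster impulse in pound-force seconds."""
    return 100.0

def update_trajectory(impulse_n_s: float) -> float:
    """Another team's software assumes the impulse it receives is in newton seconds."""
    return impulse_n_s  # ...trajectory math elided...

# The silent failure: a lbf*s value handed to code expecting N*s, off by a factor of ~4.45.
wrong = update_trajectory(reported_impulse_lbf_s())
right = update_trajectory(reported_impulse_lbf_s() * LBF_S_TO_N_S)
print(wrong, right)
```

Putting the units in the names (or, better, in the types) is cheap insurance; the expensive part is having no review or test that catches the mismatch before it matters.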

In hold-’em-accountable corporate cultures, company executives would do their best to find out who screwed up and fire the sorry SOBs before they could do any more damage.

They would, that is, ensure that a new set of sorry SOBs would screw up future projects in different but parallel ways.

NASA, to its credit, performed thorough post-mortem analyses instead. The first is titled, “Mars Climate Orbiter Mishap Investigation Board Phase I Report”; the second is titled, “Report on the Loss of the Mars Polar Lander and Deep Space 2 Missions.”

While much in these reports is so deeply technical that only the nerdiest engineers would appreciate it, everyone involved in project management, sponsorship or governance in any organization should take the time to read Section 3 of the Polar Lander report, because that’s the section that discusses the management failures that underlay the missions’ technical mistakes.

The deep root cause turned out to be funding and schedule pressure: NASA tried to get these missions done quickly and on the cheap … too quickly and too cheaply. That pressure led to a number of project management failures, the most important of which were:

  • Inadequate staffing: Not only was the project team too small, but critical JPL experts weren’t part of the team – the project didn’t have sufficient depth of highly knowledgeable staff.
  • Excessive overtime: 60-hour weeks were habitual and 80-hour weeks were common for extended periods of time, because there just weren’t enough people on the team to get the job done on schedule any other way.
  • Insufficient communication and collaboration: Inadequate staffing and excessively long work weeks inevitably led to team members working heads-down and with blinders on. They weren’t in a position to help each other out, check each other’s work, or otherwise function as a team.
  • No system testing and validation: The project relied on analysis and modeling alone instead of end-to-end testing.

NASA isn’t alone in having projects fail, although unlike, say, an automobile manufacturer that releases a design flaw into production, NASA can’t issue a recall to fix the problem.

Still, just as recalls are more expensive than fixing a flaw during the design process (but less expensive than plowing into Mars at high speeds instead of landing on it), it’s a pretty good bet that your company would profit from managing its projects better, too.

So learn from NASA’s mistakes, just as NASA did. It’s hard to deny that avoiding the project management mistakes listed above makes a difference, given the string of successes that has followed.

Even more important than learning from NASA’s mistakes, though, is adopting its technique for learning from its mistakes. NASA invested heavily in an independent review, didn’t duck its findings, and implemented serious changes in its procedures as a result.

I keep reading opinionators who extol the efficiency of private enterprise as compared to how gummint agencies do things. And yet, while I know of quite a few failed projects in private enterprise, I’ve yet to read of any companies that undertook equivalent attempts to understand how their management practices contributed to the failures.

It is, in the end, the difference between taking responsibility and holding people accountable. Because when executives hold people accountable, what they’re really doing is failing to take responsibility – not only for hiring and retaining the people they now need to hold accountable, but, far more important, for creating the circumstances that led to the failure and allowing them to persist.

It’s a great way to lose good employees while retaining bad managers.

If it’s a project, plan it.

Seems obvious, doesn’t it?

But it isn’t. I was reminded of this in a recent project postmortem. While the project wasn’t a catastrophe, it was pretty ragged, or so I was told. And when I asked to see the project charter and schedule, and the project manager explained it was such a simple project that he hadn’t bothered to create them, the root cause swam into focus.

The usual thought process about project plans is this: the bigger the project, the bigger the plan. It isn’t a bad thought process, either.

What it is is misleading, because it implies that the amount of planning needed as projects increase or decrease in size is described by a straight line, when in fact it’s best described by an S-curve.

The curve flattens out at the far right because as a project (or multi-project initiative or multi-initiative program) continues to increase in size, there comes a point of diminishing returns, beyond which the project manager is simply micromanaging.

In other words, while it can make sense for the level of task granularity to go beyond “Get dressed” to “Put on your shoes,” and even from there to “Tie your shoes,” going beyond “Tie your shoes” to “Grasp left shoelace between left thumb and left forefinger; then grasp right shoelace between right thumb and right forefinger” means you have the wrong people on the project team.

At the far left, where projects are small and simple? The curve also flattens out, well above zero. There is an irreducible minimum level of planning you should do for even the smallest of projects, for three reasons: (1) Without a plan you can’t be sure you fully understand what’s actually needed; (2) even with a small project, everyone on the team needs to know what they’re responsible for and when it’s due; and (3) you don’t know it’s that small until after you plan it.
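As a toy illustration (the specific numbers are invented, not drawn from any real project), here is that S-curve in code: planning effort never drops below an irreducible floor for tiny projects, and flattens out at the point of diminishing returns for huge ones.

```python
import math

def planning_effort(project_size: float,
                    floor: float = 2.0,          # irreducible minimum (say, person-days of planning)
                    added_range: float = 120.0,  # how much more planning the biggest projects warrant
                    midpoint: float = 50.0,      # project size where effort climbs fastest
                    steepness: float = 0.1) -> float:
    """Toy S-curve: flattens near `floor` on the left, near `floor + added_range` on the right."""
    return floor + added_range / (1.0 + math.exp(-steepness * (project_size - midpoint)))

for size in (1, 10, 50, 100, 200):
    print(f"size {size:>3} -> planning effort {planning_effort(size):.1f}")
```

The exact shape is beside the point; what matters is that the left end of the curve flattens well above zero.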

The project in question didn’t suffer from the first issue. It was a software upgrade. Everyone did understand its point and what it was supposed to accomplish (upgrade the software without disrupting the business).

This doesn’t make the issue irrelevant. It means the project manager got lucky. As it happens, there was no desire to look for business advantages from new features the upgrade provided, but there could have been. The project manager should have asked, and would have, had he created a charter.

The second issue? You bet. The project in question got messy during testing. (Yes, it can happen … Trust me!)

The reason it got messy was that (another situation you’ve never heard of before) there was no test plan, so everyone involved in testing just banged away on the keyboard until they couldn’t think of anything else to try.

Which was fine until the time came to cut over to the new system. That’s when the team member responsible for testing got the jitters, and started to think of new tests to run. And more new tests. And more …

The project manager was understandably unhappy with Testing Guy, but his unhappiness was misplaced. As it turned out, Testing Guy wasn’t a software quality assurance professional, and had no background to understand that there were such things as formal test plans, let alone what a formal test plan looks like.

Which would have been okay had the list of project deliverables included a test plan, or if the task list had included a task labeled “create test plan.” But it didn’t include a test plan on the deliverables list because there was no list of project deliverables, because there was no project charter. There was no appropriately labeled task because there was no project schedule either, because the project was too small to need either one.
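For what it’s worth, a formal test plan doesn’t have to be an elaborate artifact. Here’s a minimal, hypothetical skeleton (the field names and test cases are invented for illustration) of the deliverable a charter would have named:

```python
# Hypothetical minimal test plan for a software upgrade; details are illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    case_id: str
    description: str
    steps: List[str]
    expected_result: str
    owner: str
    passed: Optional[bool] = None  # None means the test hasn't been run yet

test_plan = [
    TestCase("TC-01", "Log in after the upgrade",
             ["Open the application", "Log in as a standard user"],
             "User reaches the home screen with no errors", "Testing Guy"),
    TestCase("TC-02", "Month-end report matches pre-upgrade output",
             ["Run the month-end report", "Compare to the archived pre-upgrade copy"],
             "Reports are identical", "Testing Guy"),
]

not_yet_run = [t.case_id for t in test_plan if t.passed is None]
print("Cut-over gate: tests not yet run:", not_yet_run)
```

With something like this, “testing is done” means “every case has run and passed,” not “nobody can think of anything else to try,” and the pre-cut-over jitters have nowhere to take hold.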

That leaves the third and most obvious reason that even the smallest project needs a charter and schedule: Until you have them, you don’t know the project is small enough that it doesn’t need a charter and schedule.

Unless you have an exceptionally well-developed beezer, you can’t determine a project’s size by smell. What you need is a plan — to know what you’re supposed to produce, what it’s for, what it will take to build, and when it’s due. You might then find the project really was too small to plan.

But I doubt it. Even small projects are hard. And the smaller you assume they are, the harder they’ll turn out to be.