“Could you please stop saying there’s no such thing as an IT project?” a reader politely asked. “When I have switches/routers/servers that are out of support from their vendors and need to be replaced with no business changes I have to call these IT projects.”

I get this a lot, and understand why folks on the IT infrastructure side of things might find the phrase irritating.

And I agree that projects related to IT infrastructure, properly executed, result in no visible business change.

But (you did know “but” was hanging in the air, didn’t you?) … but, these projects actually do result in significant business change.

It’s risk prevention. These projects reduce the likelihood of bad things happening to the business: bad things like not being able to license and run software that’s essential to operating the business, or to purchase and use hardware that’s compatible with strategic applications, and so on.

It’s important for business executives to recognize this category of business change project, if for no other reason than that none of us want a recurrence of what happened to IT’s reputation following our successful prevention of Y2K fiascoes. Remember? Everyone outside IT decided nothing important or interesting had happened, and that’s if they didn’t conclude we were just making the whole thing up.

Successful prevention is, we discovered, indistinguishable from the absence of risk. So we need to put a spotlight on the business risks we’re preventing so everyone recognizes our successes when we have them.

Not to mention the need for everyone to be willing to fund them.

Which leads to a quick segue into IT architecture, which, depending on the exact framework and source, divides the IT stack into information systems architecture (subdivided into applications and data) and technology architecture (subdivided into platforms, infrastructure, and facilities).
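If it helps to picture the layering, here’s one way to sketch it as a data structure. The layer names come from the paragraph above; the example entries in each sub-layer are mine, purely illustrative, and different frameworks slice the stack differently:

```python
# Illustrative only -- the layer names follow the column; the example entries
# in each sub-layer are hypothetical.
it_stack = {
    "information systems architecture": {
        "applications": ["ERP", "CRM"],
        "data": ["customer master", "order history"],
    },
    "technology architecture": {
        "platforms": ["operating systems", "DBMS", "virtualization", "development environments"],
        "infrastructure": ["switches", "routers", "other networking gear"],
        "facilities": ["data centers", "power", "cooling"],
    },
}
```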

Switches and routers, along with everything else related to networking, are infrastructure. With the exception of performance engineering, infrastructure changes ought to be invisible to everyone other than the IT infrastructure team responsible for their care and feeding.

Servers, though, belong to the platform sub-layer, along with operating systems, virtualization technology, development environments, database management systems … all of the stuff needed to build, integrate, and run the applications that are so highly visible to the rest of the business.

The teams responsible for platform updates know from painful experience that while in theory layered architectures insulate business users from platform changes, in fact it often turns out that:

  • Code written for one version of a development environment won’t run in the new version.
  • The vendors of licensed COTS applications haven’t finished adapting their software to make it compatible with the latest OS or DBMS version.
  • Especially in the case of cloud migrations, which frequently lead to platform, infrastructure, and facilities changes, performance engineering becomes a major challenge. And as everyone who has ever worked in IT infrastructure management knows, poor application performance is terribly, terribly visible to the business.

Et cetera.

Not that these platform update challenges are always problems. They can also be opportunities for clearing out the applications underbrush. Part of the protocol for platform updates is making sure all application “owners” (really, stewards) aren’t just informed of the change but are actively involved in the regression testing and remediation needed to make sure the platform change doesn’t break anything.

The opportunity: If nobody steps up as the steward for a particular application, retiring it shouldn’t be a problem.
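To make that concrete, here’s a minimal sketch, assuming a simple application inventory that records each application’s steward. The inventory and names are hypothetical; in real life this data would come from a CMDB or application portfolio tool:

```python
# Hypothetical application inventory -- in practice this comes from a CMDB
# or application portfolio management tool.
app_inventory = [
    {"name": "order entry", "steward": "A. Steward"},
    {"name": "legacy reporting tool", "steward": None},
]

# Applications nobody claims get flagged as retirement candidates instead of
# being scheduled for regression testing and remediation.
retirement_candidates = [app["name"] for app in app_inventory if not app["steward"]]
print(retirement_candidates)  # ['legacy reporting tool']
```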

On a related topic, regular readers will recall the only IT infrastructure metric that matters is the Invisibility Index. Its logic: Nobody notices the IT infrastructure unless and until something goes wrong.

Invisibility = success. Being noticed = failure.

Something else regular readers will recognize is that Total Cost of Ownership (TCO) is a dreadful metric, violating at least three of the 7 C’s of good metrics. TCO isn’t consistent, complete, or on a continuum: It doesn’t always go one way when things improve and the other when they deteriorate; it measures costs but not benefits; and it has no defined scale, so there’s no way to determine whether a given product’s TCO is good or bad.

But perhaps we should introduce a related metric. Call it TCI — the Total Cost of Invisibility. It’s how much of its operating budget a business needs to devote so those responsible for the IT infrastructure can continue to keep it invisible.
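Back-of-the-envelope, the arithmetic is simple. The figures below are invented for illustration, not benchmarks:

```python
# Hypothetical figures -- TCI is just the share of the operating budget spent
# keeping the IT infrastructure invisible (refreshes, patching, capacity, and so on).
operating_budget = 250_000_000
keep_it_invisible_spend = 5_000_000

tci = keep_it_invisible_spend / operating_budget
print(f"TCI: {tci:.1%}")  # TCI: 2.0%
```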

They’ll keep it invisible by running what aren’t IT projects, but are quite technical nonetheless.

All IT organizations test. Some test software before it’s put into production. The rest test it by putting it into production.

Testing before deployment is widely regarded as “best practice.” This phrase, as defined here, translates to “the minimum standard of basic professionalism.”

Which brings us to organizational change management (OCM), something else all organizations do, but only some do prior to deployment.

There is, you’ll recall, no such thing as an IT project, a drum I’ll continue to beat up to and beyond the anticipated publication date of There’s No Such Thing as an IT Project sometime in September of this year.

Which brings us to a self-evident difference between testing, aka software quality assurance (SQA), and OCM: SQA is about the software; OCM is about the business change that needs the new software.

As we (Dave Kaiser and I) point out in the upcoming book, organizational changes mostly fall into three major buckets: process, user experience, and decision-making. Process change illustrates the SQA parallel well.

Probably the most common process change goal is cost reduction, and more specifically reducing the incremental cost of processing one more unit.
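If a worked example helps, here’s the arithmetic with invented numbers, assuming a simple fixed-overhead-plus-per-unit cost structure:

```python
# Hypothetical cost structure for a process: fixed overhead plus a per-unit cost.
fixed_cost = 100_000   # annual cost of running the process regardless of volume
unit_cost = 4.00       # labor and materials to process one item

def total_cost(units):
    return fixed_cost + unit_cost * units

# The incremental cost of processing one more unit -- the number process change
# usually aims to shrink.
incremental_cost = total_cost(10_001) - total_cost(10_000)
print(incremental_cost)  # 4.0
```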

As a practical matter, cost reduction usually means layoffs, especially in companies that aren’t rapidly growing. For those that are growing rapidly, it means employees involved in the process will have to handle their share of item processing more quickly.

In a word, employees will have to increase their productivity.

Some unenlightened managers still think the famous I Love Lucy chocolate factory episode illustrates the right way to accomplish this increase. But for the most part even the least sophisticated management understands that doing things the exact same way only faster rapidly reaches the point of diminishing returns.

Serious process change generally results in different, and probably fewer, distinct tasks in the process flow, performed by fewer employees because there are fewer tasks and because those that remain will be more highly automated.

Which brings us back to OCM and when it happens in the deployment sequence.

Managers don’t need a whole lot of OCM know-how to understand the need to re-train employees. But many still blow it, teaching employees how to operate the new software: Click here and this happens; click there and that happens.

Training shouldn’t be about how to operate software at all. It should be about how employees should do their changed jobs using the new software.

But training is just the starting point. What’s often also lost in translation are all the other organizational changes employees have to adjust to at the same time. Three among many:

> Realignments: Employees often find themselves reporting to new managers. This, in turn, usually leads to a severe case of heads-down-ism until employees figure out whether spotlighting problems in the new process flow will be welcomed, or whether a new manager’s style runs more along the lines of messenger-shooting.

> Metrics: New processes often come with new process optimization goals, which in turn should mean new process metrics, but too often don’t.

The first rule of business metrics is that you get what you measure — that’s the risk you take. So if a company changes a process without changing its metrics, employees will do their best to continue using the old process, as this is what’s being measured.

> Colleagues: Some work that had been performed by employees in a different city, building, floor, or cubicle down the hall (and, oh by the way, these folks used to know each other by name) might now be performed by total strangers who live in a different country and time zone and speak a different native language.

Just adapting to different accents can be challenging enough. Add cultural and time-zone differences to the mix, make everyone involved unknown to each other, and the opportunity for process traffic jams increases, not by increments but by multiples.

No matter what the intended change, for it to be successful all these factors, and others, will have to be addressed.

Change leaders can address them before instituting the change, helping the organization and everyone in it prepare. Or, they can leave it up to everyone to muddle through.

Muddling through does have one advantage: Change leaders can blame anything and everything that goes wrong on change resistance.

Given a choice between effective planning and blaming the victims … well, it’s hardly even a choice, is it?