Nuance is dead. May it rest in peace.

The headline: “PC shipments crater and tablets are the bogeymen” (Woody Leonhard, InfoWorld, 10/10/2013).

First, let me assure you, Woody is a bright guy and I have a lot of respect for him. And second, writers don’t always write their own headlines. In any event, here’s what “crater” means: Worldwide, PC shipments are down around 8 percent, depending on whose numbers you believe. In the U.S., PC shipments are up … up … a couple of percent.

Hardly the stuff of cratering.

The more interesting factoids in the story (and as I work for Dell I’m not an entirely disinterested party, although I have no connection to the product side of the company): The major manufacturers (HP, Lenovo, Dell) saw slight growth in their sales worldwide. Lenovo and Toshiba saw double-digit growth in the U.S. Yes, that’s right, growth … also not the stuff of cratering.

An interesting sidelight is that Apple’s U.S. sales took a sizable hit (more than 10%).

The truly fascinating bit was that Acer and Asus saw their sales plummet — down around 25% from last year. That really is cratering.

How to interpret these numbers? It’s probably much as pointed out here a few weeks back (“The post-PC era isn’t post-PC. It’s PC plus,” 9/16/2013). What cratered was consumer demand for PCs. Acer, Asus, and even Apple have a strong consumer orientation. Enterprise demand is, most likely, stable — enough from new ventures and replacement units to continue to support the major manufacturers.

It isn’t as attention-getting as “crater,” though.

As long as I’m quibbling with my friends at InfoWorld, I might as well take issue with another recent piece they ran: “The end of the CIO as we know it — and IT feels fine” (Galen Gruman, 10/11/2013). I’m afraid it got a lot wrong, starting with when the CIO title first appeared and what drove it. It wasn’t, as the article claimed, 15 years ago, driven by Y2K combined with the rise of ERP and eCommerce.

As evidence … why would any of those change “Director of Electronic Data Processing” to “Chief Information Officer”?

No, the CIO title came into existence twice that long ago, in the early 1980s. The driver: A newly introduced technology for mainframe computers called the database management system.

DBMS licenses were expensive. Very expensive in the context of what companies were already spending on their mainframe systems. The real, tangible cost-justification for spending the additional money was that it increased programmer productivity. Which it did. (Disagree? Imagine having to program without one.)
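If the imagining is hard, here’s a minimal, admittedly anachronistic sketch: the same lookup with and without a database engine. Every name, record, and file layout in it is invented for illustration, and nobody in the early 1980s was writing Python; the point is the difference in effort, not the tooling.

```python
# Hypothetical example: find one region's customers, biggest balance first.
import sqlite3

# With a DBMS: declare what you want; the engine does the work.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT, region TEXT, balance REAL)")
db.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [("Acme", "East", 1200.0), ("Bravo", "West", 300.0), ("Cargo", "East", 50.0)],
)
print(db.execute(
    "SELECT name, balance FROM customers WHERE region = ? ORDER BY balance DESC",
    ("East",),
).fetchall())

# Without one: hand-roll the record layout, the scan, and the sort,
# and rewrite all of it whenever the file format changes.
flat_file = ["Acme|East|1200.0", "Bravo|West|300.0", "Cargo|East|50.0"]
matches = []
for line in flat_file:
    name, region, balance = line.split("|")
    if region == "East":
        matches.append((name, float(balance)))
matches.sort(key=lambda rec: rec[1], reverse=True)
print(matches)
```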

Except that, as anyone who’s tried it knows, programmer productivity is excruciatingly hard to measure, which means proving the tangible benefits of the new technology would have been excruciatingly hard.

So IBM’s marketing department came up with a new concept: The primary value EDP provided wasn’t increased employee productivity, as we’d been claiming until then. That was secondary. The big value was the information itself and what companies could do with it to improve decision-making. What, you thought this was new with data warehouses, data mining, and big data?

Whether you agree or disagree with the concept, the title “Chief Information Officer” flowed directly out of this idea — that information is where the big value is.

The concept’s legitimacy is questionable, by the way. Among its drawbacks: It elevates the importance of management decision-making above the value of actual work. But that’s a different, and very long, diatribe.

Anyway, Galen’s piece is one of many appearing these days that predict we’ve entered the end-times for the CIO, and probably for corporate IT as well. Read any of these pieces. Squint at them sideways and you can predict the outcome of following their advice: A proliferation of “islands of automation,” because when companies push IT into business departments, nobody will be willing to pay for integration, let alone have the skills to handle it, let alone the authority.

And the advice-followers will see significant fortification of political silos. The reason? A big chunk of spending that used to be strategic (or at least enterprise in scope) will now be tactical or, more accurately, departmental.

Siloed.

For a very long time … since, in fact, the advent of the DBMS … IT’s most important role has been integration. Maybe if CIO had stood for “Chief Integration Officer” we wouldn’t need to have this little chat.

* * *

Speaking of the 9/16 column, which talked about how IT’s role is expanding while its budget isn’t, I’m embarrassed. At the end I promised a follow-up that talked about what CIOs can do to survive the experience. But my vacation distracted me. Sorry. Next week for sure.

– Bob

Enterprise technical architecture management (ETAM) is a topic I’ve probably written too much about already. That’s the price you pay for not paying for your reading material.

Let’s start here: Even atheist programmers know how God was able to create the entire universe in only six days: He, she, or it (KJR takes no position on deistic gender) didn’t have an installed base to worry about.

Same coin, opposite side: Clayton Christensen, in his milestone book The Innovator’s Dilemma, recommends that any company wanting to launch a new venture outside its current comfort zone incubate it as an entirely separate business, in a different location, with different infrastructure, different success metrics … different everything.

Entirely irrelevant to this discussion, but it just occurred to me and I’m feeling impulsive today (see “price you pay for not paying,” above): A common mistake in mergers-and-acquisitions circles is ignoring the obverse of this point. When large corporations acquire small entrepreneurships, they often move them over to the large-corporate systems and infrastructure. It’s a seemingly logical move — a step toward achieving the acquisition’s so-called synergy targets.

It’s a great example of the “great theory, but …” syndrome. This really should work out best for everyone. But what happens all too often is that the acquiring company’s information technology, designed to scale up to mega-proportions, doesn’t scale down very well.

The result: The once highly profitable entrepreneurship, now loaded down with chargebacks paid to the mothership for overkill systems that cost two or more times what it was used to spending for whatever-it-is, becomes an unprofitable subsidiary.

All in the name of an improved enterprise technical architecture.
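To make the arithmetic concrete, here’s a sketch with made-up numbers. Every figure is invented; the only thing taken from the column is the “two or more times” multiplier.

```python
# Illustrative numbers only; nothing here comes from a real acquisition.
operating_profit = 500_000   # the entrepreneurship's pre-acquisition profit
it_spend_before = 600_000    # what it used to spend on its own systems
chargeback_multiple = 2.0    # mothership systems cost "two or more times" that

extra_it_cost = it_spend_before * (chargeback_multiple - 1)
profit_after = operating_profit - extra_it_cost

print(f"Profit before the acquisition: ${operating_profit:,.0f}")
print(f"Profit after chargebacks:      ${profit_after:,.0f}")
# Profit before the acquisition: $500,000
# Profit after chargebacks:      $-100,000
```

Same revenue, same customers, same products. The only thing that changed was the infrastructure bill.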

Waddaya know? I guess it isn’t entirely irrelevant to this discussion, because the second great law of management is the first great law of enterprise technical architecture management (ETAM): Form follows function. In context, it means the most elegant, highly integrated, cleanly designed architecture is the wrong architecture if it imposes an unaffordable burden on the business.

Enterprise technical architecture management (ETAM)

Companies have two choices for building, integrating, enhancing, and maintaining their applications portfolio, along with the information repositories and underlying platforms and infrastructure that support it. The first is for every project team to face the world as if it were God and the universe hadn’t yet been created. Lacking both omniscience and the budget to do anything else, though, what they and all the other teams operating in the same isolation contribute to will look more like a big pile of stuff than a clean, well-organized system.

That’s the first choice. The second is to make every project team responsible for fitting its work into the existing set of structures as cleanly and elegantly as possible, which is to say, for every project team to be responsible for technical architecture management.

It goes further. For any number of reasons, starting with companies choosing option #1 because it’s cheaper and ending with mergers and acquisitions, leaders in most businesses wake up one morning to discover that, whatever the cause, the information technology they rely on has become a big pile of stuff. (If you want a more refined version of “big pile of stuff,” see “9 warning signs of bad IT architecture,” InfoWorld, 5/24/2012.)

Now they have three choices. They can (1) shrug and tell IT that gee, that’s too bad, and we expect you to get the job done anyway, “the job” defined as “get projects done quickly and cheaply even though the architecture mess makes that impossible.”

Or, they can (2) write IT an enormous check to charter an expensive, multi-year clean-up-the-mess program, during which IT won’t be able to accomplish much of anything else, because everyone who might help the business is fully committed to the clean-up effort.

That leaves (3) nibbling away at the problem: Defining what “good” means (what architecturally sound solutions look like) and requiring every software change effort to clean up a bit of the existing mess while making sure not to add to it. A sketch of how that check might look appears below.
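Here’s one made-up example of what that could look like in practice: a check, run as part of every software change, that flags work that would add to the mess. All the component names are invented; nothing here is a real tool or a prescription, just the shape of the idea.

```python
# A minimal sketch of "define what good means, then hold every change to it."
# The deprecated-to-preferred mapping is hypothetical; a real shop would
# maintain its own and wire a check like this into its delivery methodology.
DEPRECATED = {
    "legacy_billing": "billing_service",    # the mess, and its replacement
    "flatfile_export": "warehouse_export",
}

def review_change(new_dependencies: set[str]) -> list[str]:
    """Flag proposed dependencies that would add to the architecture mess."""
    findings = []
    for dep in sorted(new_dependencies):
        if dep in DEPRECATED:
            findings.append(f"{dep} is deprecated; use {DEPRECATED[dep]} instead")
    return findings

# A change that nibbles away at the mess passes cleanly ...
print(review_change({"billing_service"}))        # []
# ... while one that would add to it gets flagged before it ships.
print(review_change({"legacy_billing", "auth"}))
# ['legacy_billing is deprecated; use billing_service instead']
```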

Oh, by the way, the nibbling-away-at-it option looks exactly like the avoiding-the-problem-in-the-first-place option, except that with the latter there’s no mess to nibble away at. In both cases, the enterprise gets to a clean architecture and stays there, because every technology-related project does its part to make sure of it.

In case it isn’t obvious by now, knowing how to nibble away at the problem on a project-by-project basis without adding to the mess is one of IT’s 18 critical success factors.

Only “ETAM integrated into delivery methodologies” sounds a lot more impressive, doesn’t it?