Did we give up too easily?
We used to automate processes. Back in the early days, before we called ourselves Information Technology, Information Systems, or Management Information Systems, that’s what we did.
We didn’t optimize processes, orchestrate them, or develop metrics to assess their performance. We obliterated them, replacing them with computer programs that called on humans only when the computer needed an input or decision it couldn’t handle on its own.
To be fair, we didn’t always do this all that well. Some of these automations sped up business processes that contributed little business value beyond keeping some people busy who otherwise might have caused all sorts of mischief.
There were also quite a few cases where the program paved a metaphorical cow path, faithfully reproducing in automated form every process step that had previously been executed by a bored human being, even if these steps had nothing at all in common with what an engineer might construct if given the goal and a blank sheet of paper.
But even with these flaws, IT’s predecessors delivered orders-of-magnitude improvements in business efficiency. And then Something happened, and seemingly overnight IT became the biggest bottleneck to business improvement.
My theory is that Something = Methodology, but perhaps I’m being unfair. I’m not, after all, a business historian, so while I lived through some of the mayhem, my personal mayhem experience isn’t a statistically significant random sample.
Based on my personal experience, direct and second-hand through colleagues, here’s the way process automation happened back in the good old days we neocodgers see in the warm glow of imperfect memory:
Someone from the business would drop by EDP (electronic data processing, IT’s ancient forebear), sit on the corner of a programmer’s desk, and ask, “Can you get the computer to do x?”
After a short discussion the answer was either yes or no, and if it was no, a longer discussion usually led to a useful alternative the computer could handle.
The programmer would go off and program for a week or so and call the business person back to review the results and suggest course corrections. In not all that long the computer had automated whatever it was the business person wanted automated.
Usually, in the interim, other notions occurred to the business person, who, while reviewing how the solution to the initial request was progressing, would ask, “Can you also get the computer to do y?”
Over a span of a few years these solutions to business problems accumulated, turning into the big legacy systems many businesses still rely on.
If we’d only had the wit to call what we were doing a methodology and label it Agile.
Had we done so we might have avoided much of the discredit IT suffered during the waterfall wasteland years that, starting in the early 1980s, transformed us from the department that automated stuff, generating enormous tangible business benefits, into the Department of Failed Projects.
For that matter, had we continued in our quest to automate the bejeezus out of things in our naively Agile sort of way, disciplines such as Lean and Six Sigma might never have achieved their current level of prominence.
Not that Lean and Six Sigma are terrible ideas. In the right contexts they can lead to solid business improvement.
What they’ve turned into for some businesses, though, are Strategic Programs, and for some practitioners, religions. For these devoted adherents they’re the answer to every question, before the questions are even asked.
What they’ve also turned into is a sort of IT-less shadow IT — a way to improve business processes without any need to involve IT, and, more important, without having to ask the Department of Failed Projects to deliver very much.
Let’s imagine the executive team at some enlightened (or misguided — your call) company reads the above and, convinced, calls to ask how best to return IT to its process automation roots. What would a modern version look like?
Mostly, it would treat each process as a black box that turns inputs into outputs. IT’s job would be to understand what the inputs and outputs are, and to develop, through a combination of inference and listening to the experts, an algorithm that reliably turns the defined inputs into the desired outputs.
That’s it — the entire methodology in one paragraph, understanding that “algorithm” can hide a lot of complexity in its four syllables.
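For the concretely minded, the black-box framing can be sketched in a few lines of code. Everything here — the invoice fields, the approval thresholds, the names — is a hypothetical illustration of the idea, not anything prescribed above: define the inputs, define the outputs, and put the algorithm in between, calling on humans only for the exceptions.

```python
# A sketch of "process as black box": defined inputs in, desired outputs out,
# with the algorithm in the middle. All specifics here are invented examples.

from dataclasses import dataclass

@dataclass
class Invoice:                 # the defined inputs
    amount: float
    vendor_approved: bool
    has_purchase_order: bool

@dataclass
class Decision:                # the desired outputs
    pay: bool
    reason: str

def process_invoice(inv: Invoice) -> Decision:
    """The 'algorithm' hiding in those four syllables: it turns inputs into
    outputs, and escalates to a human only when it can't decide on its own."""
    if not inv.vendor_approved:
        return Decision(pay=False, reason="escalate: unknown vendor")
    if inv.amount > 10_000 and not inv.has_purchase_order:
        return Decision(pay=False, reason="escalate: large invoice, no PO")
    return Decision(pay=True, reason="auto-approved")

print(process_invoice(Invoice(500.0, True, False)))
# → Decision(pay=True, reason='auto-approved')
```

The point of the sketch isn’t the rules themselves; it’s that the inputs and outputs are pinned down first, and the complexity lives entirely inside the function.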
Might this work? Beats me. It’s an idea I’m just starting to play with. Next week I might strenuously disagree with this week’s me.
Don’t hesitate to weigh in.