
Lack-of-analysis paralysis


The most significant challenge in communicating a new idea is convincing people it isn’t exactly the same as a superficially similar, older idea they’ve already embraced.

A classic example: During the introduction of object-oriented technology, many warhorse programmers were sure they’d been doing that stuff all along in COBOL. They then went on to write 10,000-line C++ objects with no encapsulated logic.
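To see what “no encapsulated logic” means in practice, here’s a minimal sketch of the contrast, in Java rather than C++ for brevity, and entirely my own illustration rather than anyone’s actual production code:

```java
// Anti-pattern: a "class" that is really a COBOL record in disguise.
// The data is public, and the logic that governs it lives somewhere
// else, scattered through thousands of lines of procedural code.
class AccountRecord {
    public double balance;
    public boolean frozen;
}

// Encapsulated version: the rules travel with the data they govern.
class Account {
    private double balance;
    private boolean frozen;

    public void withdraw(double amount) {
        if (frozen) {
            throw new IllegalStateException("account is frozen");
        }
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("invalid withdrawal amount");
        }
        balance -= amount;
    }
}
```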

A similar, current example: The object-competent developers who are convinced there’s no difference between services and objects. (Although in their defense, clear, concise explanations of the differences are scarce. The best I’ve found, with help, is an IBM paper titled Service-oriented modeling and architecture by Ali Arsanjani. Recommended.)
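Since clear explanations are scarce, here’s a minimal Java sketch of the core distinction as I read it; the names are my own inventions, not the paper’s:

```java
import java.util.ArrayList;
import java.util.List;

// Objects: fine-grained, stateful, identity-based. You hold a reference
// to one particular Order and converse with it through many small calls.
class Order {
    private final List<String> skus = new ArrayList<>();

    void addItem(String sku) { skus.add(sku); }
    int itemCount() { return skus.size(); }
}

// The messages a service exchanges: each request describes a complete
// unit of work, so no conversational state has to survive between calls.
record OrderRequest(String customerId, List<String> skus) {}
record OrderConfirmation(String orderId) {}

// Services: coarse-grained, stateless contracts organized around a
// business function rather than the lifecycle of an in-memory object.
interface OrderService {
    OrderConfirmation placeOrder(OrderRequest request);
}
```

The point isn’t syntax. The service contract is designed around a business function and a self-describing message; the object is designed around identity and fine-grained interaction.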

The same equating of old with new seems to be at work with Steven Spear’s contention, made in his new book Chasing the Rabbit, that continuously improved understanding matters more than continuously improved processes.

Last week’s column discussed how post mortems fit into this subject. My correspondence since suggests that many readers who appear to agree haven’t yet recognized the distinction between the familiar kind of post mortem and the new one.

The usual approach to post mortems, also called “debriefing sessions,” is to make decisions: What worked well and should be preserved, what didn’t work well and should be discontinued, and what new ideas should be tried.

At IT Catalysts we’ve promoted this approach for years, and it has proven quite useful. Based on Spear’s book, though, we’re rethinking it: the type of post mortem proposed last week, drawn from his research, is quite different. Its goal is an improved understanding of How Things Work. Without this, decision-makers are guessing … trusting their guts rather than modeling an improved system of operation.
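One way to see the difference is in what each kind of post mortem produces. A minimal sketch, in my own framing rather than Spear’s:

```java
import java.util.List;

// Decision-oriented post mortem: the output is a set of verdicts.
record DecisionReview(List<String> keepDoing,
                      List<String> stopDoing,
                      List<String> tryNext) {}

// Understanding-oriented post mortem: the output is a revised model of
// How Things Work ... from which verdicts can be derived, and against
// which the results of the next change can be predicted.
record UnderstandingReview(String whatWeExpected,
                           String whatActuallyHappened,
                           String revisedModelOfHowThingsWork) {}
```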

This emphasis on deep understanding runs counter to mainstream American culture. We aren’t a society that tends to value deep understanding. We value decisiveness and get impatient with what we’re pleased to call analysis paralysis.

And yet, a deep understanding of How Things Work is an investment in speed.

Consider three popular business decision-making loops: Colonel John Boyd’s OODA (Observe, Orient, Decide, Act), Deming and Shewhart’s PDSA (Plan, Do, Study, Act) and Six Sigma’s DMAIC (Define, Measure, Analyze, Improve, Control). All three depend on the ability to integrate new information into an existing framework of understanding (Observe and Orient in OODA; Study and Act in PDSA; Measure, Analyze and Improve in DMAIC).

Of the three, only OODA is explicit regarding the value of speed: those whose OODA loops are faster tend to win, because a faster loop confuses the opponent, creating more opportunities for mistakes.

Yet even OODA depends on the quality of analysis as well as its cycle time. OODA’s “Decide” is the creation of a decision hypothesis, and its “Act” constitutes a test of that hypothesis. With only a shallow understanding of How Things Work, the decision hypothesis will be little more than a dressed-up guess, at which point OODA practitioners either lose time to arguing or make fast, bad decisions that puzzle their opponents more than confuse them.
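In code form, the dependency looks something like this … a minimal sketch, with the shape of the loop taken from Boyd’s definitions and the type names being my own:

```java
// A bare-bones OODA cycle. Speed matters, but notice where quality
// enters: decide() can only be as good as the model orient() maintains.
interface OodaActor<O, M, H, R> {
    O observe();                  // gather new information
    M orient(M model, O latest);  // integrate it into the working model
    H decide(M model);            // form a decision hypothesis
    R act(H hypothesis);          // acting is the test of that hypothesis

    default M cycle(M model) {
        M updated = orient(model, observe());
        act(decide(updated));     // with a shallow model, this tests a dressed-up guess
        return updated;
    }
}
```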

Take it home to running an IT organization. A deep understanding of How Things Work improves everything IT does. For example:

  • Developers with deep knowledge … of how the business operates, and of its supporting systems … can address new business challenges in a tiny fraction of the time required by even the most efficient of formal methodologies, because most of the outputs of business analysis, systems analysis and application engineering will already exist as a conceptual model in the developer’s mind.
  • Managers who have deep knowledge of the people who report to them … their individual strengths, aptitudes and career direction, and who works best with whom … will assign responsibilities far more efficiently, and to far better effect, than those who don’t value this knowledge.
  • CIOs who have deep knowledge of the company executives they work with … their organizational goals, personal aspirations, need for power or recognition, and political interrelationships … will be far more effective in gaining the time, attention and resources IT needs to get its job done properly.
  • IT organizations whose members have a shared understanding of everything that has to happen to properly deliver and manage the enterprise’s information technology will be in a position to do so efficiently and collaboratively instead of wasting time and energy by being at cross purposes.

Investments in deep knowledge are similar to investments in infrastructure. Both add significant overhead to the organization. Both constrain it, too, focusing its energies into known, predictable, highly scalable ways of doing business.

Comparing the two, knowledge has one advantage.

It’s more versatile.

Comments (5)

  • As a government organization, we don’t conduct post-mortems, because no one farther up the line (all the way to the President, Congress, and voters) really cares about them.

    OTOH, our little 4-person development group conducts post-mortems on various architectural approaches. The main reason we are still on LAMP (Perl) is that we tried Java and C++ and saw no additional ease on the design end, nor did we see, in our own attempts or anywhere in the trade mags, any real speed improvement from using objects.

    The post-mortem we came up with reflected what several teachers (of either approach) have said: you have Data and you have Processes. If you want to run all your data through processes, as in COBOL or Perl, that’s possible, and changes to your data will bring a predictable set of problems.

    Same deal if you want to run everything from the point of view of Data ‘objects’ using Java or C++. You still need generic processes (call them interfaces if you wish), and they create a different set of problems that takes an equal amount of time and energy to work around.

    The post-mortem allows us to pick out who is talking about the real issues and who is just taking an irrelevant religious stand on language use without comprehending what the true issues are: business processes and user interfaces.

  • “continuously improved understanding matters more than continuously improved processes.”

    I’m not sure I buy that. The process is what produces the product … which is what the customer cares about. Both are required for product improvement – improved understanding will usually be a precursor to improved process (though accidents do happen).

    “At IT Catalysts we’ve promoted this approach for years and it has proven quite useful. Based on Spear’s book, though, we’re rethinking it”

    Are you sure this isn’t a change in name only? Humans, particularly the more intelligent ones, can assimilate new knowledge effortlessly as they observe their surroundings. Perhaps your organization is already achieving a higher level of understanding during each engagement and bringing that forth as improvements are suggested during your post-mortems? Of course, if you start explicitly talking about what new understandings you have reached, that knowledge will be shared more thoroughly than if you just share the conclusions that the individuals reached based on their new knowledge.

    Chris

    • I’m pretty sure they’re different. A process is “how we do things around here.” The underlying conceptual model of how the system works is “why our process is organized the way it is,” and is also, and more importantly, what allows an organization to predict the results of a process change.

      • Hi Bob,

        The process of obtaining “deep knowledge” is often referred to as “double-loop learning”. Argyris first proposed the idea in 1976, but it’s percolated into a lot of other areas.

        Double-loop learning occurs when participants explicitly examine and experiment with their ‘theories of action’. Essentially this just means that we all believe we understand how the world works, and that the processes we implement will align with this world view.

        Double-loop learning calls for the explicit and continuous re-examination of this worldview.

        For example, the change from the Taylorist view that everyone could be treated as a ‘cog in a machine’ to the more humanist view of knowledge workers today comes as a result of double-loop learning.

  • This concept of really “knowing” is something I encountered in a seminar at KMWorld in 2007. We got an extremely shortened class in SenseMaking, a process practiced by Cognitive Edge. It emphasized deep understanding, usually by having youngsters follow the old-timers around, gathering stories of How Things Worked.

    They also gather anecdotes from people to make sense of a culture or problem(s) at a workplace. It’s really interesting stuff.

    One thing I’m seeing in software companies is the increasing separation of Support and Development. I know that programmers are kept away from customers for a reason – nobody wants customers bugging them about every little bug. However, Support will be handicapped in solving problems quickly, because all they know is what the programmers told them to ask, and Development only hears what Support thinks it’s seeing when the answers don’t fit the expected parameters.

    That’s all a very long way of saying if you don’t know how the program should work, you won’t know what’s broken.
