Understanding comes first.
Yes, yes, I know. It sounds like one of the Seven Habits. I’d be happier if it were one of the seven virtues, seven wonders of the world, or even one of the seven dwarves.
It’s okay. Covey’s authorship and my insane jealousy of his success notwithstanding, understanding should come first. So credit where it’s due, and anyway, this is a completely different context. While as a matter of both good manners and wisdom it is a good idea to understand what the other feller is trying to say before you start to pick it apart, it has nothing to do with this week’s subject.
This week we’re talking about documenting stuff, writing about stuff, designing stuff, and stuff like that.
Starting with this admittedly trivial aspect of the subject: If you find yourself using “thing” and “stuff” a lot in your writing (and “You know what I mean” in your speech), there’s a decent chance you haven’t thought your subject through. Thing and stuff are vague generalities whose use should be reserved for the most general cases only. Otherwise you can always find a more precise word or phrase that helps readers home in on what you’re talking about.
But that’s more symptom than anything else. I’m talking about the admirable but ultimately misplaced focus many analysts and designers have to get the documentation right. Not that getting it wrong is better, understand. It’s that …
An illustration: Imagine you’re documenting a business process, as is required for various business certifications that start with “ISO,” as well as being insisted upon by one or two maturity model variants. You get the experts into a room, ask what triggers the process in question, ask “and then what happens?” over and over again, and use the answers to build a comprehensive flow chart composed of a few hundred boxes connected by appropriate arrows, one that requires a large-format printer to render.
You’ve accurately documented the process, which is useful. The shortcoming: While you’ve documented the process, you don’t understand it.
In part, it’s a forest/trees problem — excessive detail can obscure the essentials. As we’re using process analysis as our exemplar (and I’m constructing a straw man to flail at), imagine that instead of setting a goal of “documenting the process” we made our goal understanding it instead. What would we have done differently?
First, we’d have started by listing the process’s outputs. They’re the essence of what matters. Everything else is just a means of producing them; any other means that yields the same outputs is equally valid.
Next, the inputs — the raw materials the process transforms into its outputs.
Following that … and this won’t be surprising to regular readers … are the organization’s priorities with respect to process optimization. Organizing a process to (for example) maximize flexibility can lead to a very different design than optimizing for, say, a low defect rate (Chapter 3 of Bare Bones Change Management provides a reasonably complete account of process optimization parameters and their trade-offs).
Now is it time for the flow chart? Sorta. Now is the time for flow charts that follow guidelines along the lines of what the Rational Unified Process advises for developing use cases: If you have more than about seven steps in your process description, you need to re-think it.
Which is often four steps too many, as a very large number of business processes have only three steps to describe: Collect information -> Update databases -> Create process outputs.
Simplistic? Not really, although it is an awfully simple account. Its value is in encouraging this question: Is there a simpler way to collect all the information and use it to update the database?
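The three-step skeleton above is concrete enough to sketch in code. This is a minimal illustration, not a real system — every name in it is a hypothetical stand-in — but it shows how little structure “Collect information -> Update databases -> Create process outputs” actually requires:

```python
# Hypothetical sketch of the three-step process skeleton:
# collect information -> update databases -> create process outputs.

def collect_information(sources):
    """Gather raw inputs by calling each source's fetch function."""
    return {name: fetch() for name, fetch in sources.items()}

def update_databases(db, record):
    """Fold the collected record into the 'database' (a dict here)."""
    db.update(record)
    return db

def create_outputs(db):
    """Produce the process outputs from the updated database."""
    return sorted(db.items())

# The whole process is just the composition of the three steps.
sources = {"orders": lambda: 3, "returns": lambda: 1}
db = {}
outputs = create_outputs(update_databases(db, collect_information(sources)))
```

The point of writing it this small is the question it invites: each of the three functions is a seam where you can ask whether there’s a simpler way to do that step.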
Because the next step is to drill down into each step to chart the process flow inside it, also adhering to the seven-or-so step guideline. Three layers is almost always enough detail; I’ve never seen a process that’s needed more than four (I’ll save you the math — that’s enough room to describe 2,401 process steps).
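The layer arithmetic is easy to check: with about seven steps per chart, and each step drilling down into its own seven-step chart, capacity grows as seven to the power of the number of layers. A one-liner confirms the figure above (the branching factor of seven is taken from the guideline, not a hard rule):

```python
# Capacity of a layered process description: at ~7 steps per chart,
# each of which can drill down into its own ~7-step chart, a
# decomposition with N layers can describe 7 ** N bottom-level steps.

def step_capacity(layers, steps_per_chart=7):
    return steps_per_chart ** layers

print(step_capacity(3))  # three layers: 343 steps
print(step_capacity(4))  # four layers: 2,401 steps, the figure cited above
```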
But if you follow these guidelines without making understanding the point, all you’ll have accomplished is to document the process differently. Everything I’ve described here is just a means to that end — a way to facilitate understanding.
Not that understanding, all by itself, will do you much good either. But it’s a prerequisite to what you do need.
No, not love, no matter what Sergeant Pepper’s Lonely Hearts Club Band sings, and no matter how pleasant the experience.
What you need, in the world of business, at least, are insights. You get them by understanding something deeply enough to visualize it.
Which is one reason I wish we could stop calling the stuff “documentation.” It might describe what the stuff is, but it misses what matters … what it’s for.
I don’t think I understand what it is you’re trying to say 😉
Sounds as if when we have a meeting of all the involved people, we’re supposed to figure out processes that would be better for the company based on business cases.
But doing that would have to be done in lieu of producing a gigundamous process chart that provably demonstrates each participant’s understanding to upper management.
RE: examples of reams of information that are (nearly) useless because they are too detailed
99% of a typical UNIX man page tells you in great detail what the command does but doesn’t describe typical usage, i.e., how to use the command to do something useful. The 1% that is useful (to me as a fairly novice user) is the examples that work.
The same is typically true of code with comments on every line – the comments usually tell you what the line does but don’t tell you why it has to be done.
Some might say a typical dictionary is another good example. The OED would be (was?) an exception.
I’m a process change trainer, and I think Bob created a straw man. I don’t know of anyone who creates a process map without an eye toward outputs or requirements. Sometimes we can skip heavy mapping and get to requirements quickly. In a teamwork/kaizen workshop I teach regularly, we take a small task that an individual performs, one that doesn’t have a lot of rules about it, one that sometimes crosses into other departments, and we ask why it has to be done at all… what are the outputs? And if it must be done, is there a way to do it that’s more efficient? An example was a cumbersome process of gathering up the distributed copies of POs from the receiving department. It annoyed everybody and no one did what they were “supposed” to do. After understanding that the useful outputs were really already collected in ERP and the paperwork didn’t need to be distributed in the first place, we got to work getting permission to not print or distribute copies, and the changes reverberated up to management.
Other times a process is cumbersome and critical enough that we map the existing process first because we are going to take a few weeks to understand exactly who needs what. Quoting is a common example. You don’t just try a new streamlined quotation process to get quotes to customers 50% faster, and discover three months later that updating quotes with minor changes requires reentering all the data because the old quote can’t be retrieved easily (I’m inventing a scenario since I don’t know of anybody who hasn’t checked outputs against requirements thoroughly). That sort of half-baked solution creates a lot of embarrassment and finger-pointing, and it might keep everyone from trying something new in the future.
And that would be the real failure.
Jay … It isn’t about not understanding the outputs and inputs. That’s important mainly to make sure everyone agrees on the basics – what a process is for.
My beef is with process engineering or optimization efforts that don’t clearly identify what the process needs to be optimized for. I’ve seen painful cases in which, for example, consultants optimized a process for cycle time and achieved dramatic improvements, never even measuring throughput, which, as it happened, was far more critical.
And sadly, they’d cut that in half.