“Proof” is a tricky concept. “Proof of concept” is, if anything, an even trickier concept.

Before we go on, a warning: If you buy into what follows and try to promote the ideas, you’ll gain a reputation (or add to your existing one) as a persnickety pain in the keister. You might be better off just going with the flow, without worrying about all those nasty details that can be the difference between a successful implementation and money poured down a rat hole.

Still with me? Don’t say I didn’t warn you.

Mathematicians and geometricians more or less invented the notion of proof. They start with explicit assumptions, including a class of assumptions that are the rules of logic. They rigorously apply logic to their assumptions to create, one at a time, proved statements they can then rely on to take their proof to the next stage in the sequence (so long as their assumptions hold).

It’s like FORTH programming, where you combine defined words to create new defined words (and if you knew that, you have, like me, joined the Geezing Geek Club).

Wouldn’t it be lovely if you could prove business concepts through pure logic? But you can’t.

Businesses are, among other things, collections of processes and practices. And business processes and practices have to handle not only the mainstream set of inputs but also a wide variety of exceptions. How employees handle many of those exceptions isn’t documented, because documenting them is impractical. There are too many of them, and none happens often enough to be worth the time to document once an employee figures out what to do with one and moves on.

Which is just one reason proofs of concept are necessary in the first place. Add the rest (listing them is left as an exercise for the reader) and you get to the result: Logic can only get you so far. Then you need evidence.

Beyond mathematicians and geometricians is the next level of professional provers: researchers in the hard sciences. They develop hypotheses in much the same way mathematicians develop proofs, except that, because scientists deal with the physical universe, they have to accept that their assumptions might not always hold, and that there are almost always too many variables that might affect a phenomenon to include them all in a formal mathematical model.

Which is why they have to test their hypotheses through observation and experiments.

What scientists and philosophers of science figured out a long time ago, though, is that they can never prove anything through observation and experimentation, if for no other reason (and there are actually lots of other reasons) than that the next time they do the exact same thing, something different might happen.

So all good researchers understand that the best they can ever do is fail to disprove a proposition. Subject it to enough different tests and have it not fail any of them and, over time, they start to have confidence in it.

Which is why scientists have great confidence in Einstein’s theories of relativity and the Darwin/Mendel/Fisher theory of evolution by natural selection, and, for that matter, an increasing level of confidence in the theory of anthropogenic climate change: Each makes predictions about what scientists should find if the theory is true; so far, when scientists have looked, they’ve found what the theory predicts … or they’ve found something that doesn’t completely invalidate the theory but does call for modification or elaboration, after which the modified, elaborated theory is what’s subjected to future testing.

But they’re never certain, and good scientists never say they’ve ever proved anything. They say they’ve tested it thoroughly and it’s held up.

Think you’ll ever have the opportunity to test your business concepts this thoroughly?

Nope. At best you’ll be allowed to conduct one or two so-called proofs of concept, which are, by the above standards, far, far short of proof. Your average “proof” of concept is really nothing more than an attempt to disprove the simplest and easiest application of whatever the concept is to your business.

It’s a bit (but just a bit) like strapping a jet engine to the back of a cart and adding a big parachute. You might win a drag race with it, but you certainly haven’t proven the concept of jet-powered automobiles.

You’ve barely scratched the surface of testing it. If you have any more confidence than that, you’ll almost certainly find yourself in the middle of a fiery crash later on.

Understanding comes first.

Yes, yes, I know. It sounds like one of the Seven Habits. I’d be happier if it were one of the seven virtues, seven wonders of the world, or even one of the seven dwarves.

It’s okay. Covey’s authorship and my insane jealousy of his success notwithstanding, understanding should come first. So credit where it’s due, and anyway, this is a completely different context. While, as a matter of both good manners and wisdom, it’s a good idea to understand what the other feller is trying to say before you start to pick it apart, that has nothing to do with this week’s subject.

This week we’re talking about documenting stuff, writing about stuff, designing stuff, and stuff like that.

Starting with this admittedly trivial aspect of the subject: If you find yourself using “thing” and “stuff” a lot in your writing (and “you know what I mean” in your speech), there’s a decent chance you haven’t thought your subject through. “Thing” and “stuff” are vague generalities whose use should be reserved for only the most general cases. Otherwise you can always find a more precise word or phrase that helps readers home in on what you’re talking about.

But that’s more symptom than anything else. I’m talking about the admirable but ultimately misplaced focus many analysts and designers have on getting the documentation right. Not that getting it wrong is better, understand. It’s that …

An illustration: Imagine you’re documenting a business process, as is required for various business certifications that start with “ISO,” as well as being insisted upon by one or two maturity-model variants. You get the experts into a room, ask what triggers the process in question, ask “and then what happens?” over and over again, and use the answers to build a comprehensive flow chart composed of a few hundred boxes connected by appropriate arrows, one that requires a large-format printer to render.

You’ve accurately documented the process, which is useful. The shortcoming: While you’ve documented the process, you don’t understand it.

In part, it’s a forest/trees problem — excessive detail can obscure the essentials. As we’re using process analysis as our exemplar (and I’m constructing a strawman to flail at), imagine that instead of setting a goal of “documenting the process” we made our goal understanding it instead. What would we have done differently?

First, we’d have started by listing the process’s outputs. They’re the essence of what matters. Everything else is just the means of producing them; any other means that produces them is equally valid.

Next, the inputs — the raw materials the process transforms into its outputs.

Following that … and this won’t be surprising to regular readers … are the organization’s priorities with respect to process optimization. Organizing a process to (for example) maximize flexibility can lead to a very different design than optimizing for, say, a low defect rate (Chapter 3 of Bare Bones Change Management provides a reasonably complete account of process optimization parameters and their trade-offs).
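
To make those three questions concrete, here’s a minimal sketch in Python. The class, its field names, and the example process are all invented for illustration; they aren’t a standard anyone prescribes.

```python
from dataclasses import dataclass


@dataclass
class ProcessProfile:
    """The understanding you need before anyone draws a flow chart."""
    name: str
    outputs: list[str]                  # the essence: what the process exists to produce
    inputs: list[str]                   # the raw materials it transforms into those outputs
    optimization_priorities: list[str]  # e.g. flexibility vs. low defect rate, in rank order


# A hypothetical example, just to show the shape of the answers:
fulfillment = ProcessProfile(
    name="Order fulfillment",
    outputs=["shipped order", "customer invoice"],
    inputs=["customer order", "inventory status", "shipping rates"],
    optimization_priorities=["cycle time", "low defect rate", "flexibility"],
)
```

Nothing about this is code anyone would ship. It’s just a way to force the three questions into the open before a single box gets drawn.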

Now is it time for the flow chart? Sorta. Now is the time for flow charts that follow guidelines along the lines of what the Rational Unified Process advises for developing use cases: If you have more than about seven steps in your process description, you need to re-think your process description.

Which is often four steps too many, as a very large number of business processes have only three steps to describe: Collect information -> Update databases -> Create process outputs.

Simplistic? Not really, although it is an awfully simple account. Its value is in encouraging this question: Is there a simpler way to collect all the information and use it to update the database?

Because the next step is to drill down into each step and chart the process flow inside it, also adhering to the seven-or-so-step guideline. Three layers is almost always enough detail; I’ve never seen a process that’s needed more than four (I’ll save you the math: seven steps per layer, four layers deep, is enough room to describe 2,401 finest-grained process steps).
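
If it helps to picture the layering, here’s a rough sketch in Python. The process, every step name, and the checker itself are invented for illustration; the limits are just the guidelines above, not anyone’s published standard.

```python
# A nested process description: each step maps to its (optional) sub-steps.
# Every process and step name here is invented for illustration.
MAX_STEPS_PER_LAYER = 7  # the RUP-style guideline: more than about seven means re-think
MAX_LAYERS = 4           # three layers is almost always enough; four at the outside

process = {
    "Collect information": {
        "Receive customer order": {},
        "Validate order details": {},
        "Check inventory": {},
    },
    "Update databases": {
        "Record the order": {},
        "Reserve inventory": {},
    },
    "Create process outputs": {
        "Ship the order": {},
        "Send the invoice": {},
    },
}


def check(description: dict, layer: int = 1) -> None:
    """Complain when any layer breaks the seven-steps or four-layers guideline."""
    if layer > MAX_LAYERS:
        raise ValueError("More than four layers: time to re-think the description")
    if len(description) > MAX_STEPS_PER_LAYER:
        raise ValueError("More than about seven steps in one layer: time to re-think")
    for sub_steps in description.values():
        if sub_steps:
            check(sub_steps, layer + 1)


check(process)  # passes: every layer stays within the guidelines
print(7 ** 4)   # 2401: the finest-grained steps that seven per layer, four layers deep, can hold
```

The point of the check isn’t enforcement. It’s the reminder that when one layer sprawls past seven-ish steps, the fix is to re-think the description, not to buy a bigger printer.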

But if you follow these guidelines without making understanding the point, all you’ll have accomplished is to document the process differently. Everything I’ve described here is just a means to that end — a way to facilitate understanding.

Not that understanding, all by itself, will do you much good either. But it’s a prerequisite to what you do need.

No, not love, no matter what Sergeant Pepper’s Lonely Hearts Club Band sings, and no matter how pleasant the experience.

What you need, in the world of business, at least, are insights. You get them by understanding something deeply enough to visualize it.

Which is one reason I wish we could stop calling the stuff “documentation.” It might describe what the stuff is, but it misses what matters … what it’s for.