Five or six years ago, I talked about using telecommunications for direct marketing at the Direct Marketing to Business conference.

Since I knew less than my audience about direct marketing to business, I chose the only sensible alternative: I talked about what I did know – in this case, the role of telecommunications in customer service.

The tricky part in a presentation like this is convincing the audience your subject and their interests coincide. “Think of marketing as the top of a funnel,” I told them. “Product quality and customer service are the other end. So long as sales and marketing pour new customers into the top faster than the rest of your company lets them drain out, your business will grow.”

The rest of the speech talked about giving customers direct dial-up access to order-entry and order-status systems, Electronic Data Interchange (EDI), fax-on-demand, and enhancing call centers with computer-telephony integration (CTI). The overall message: providing these exceptional service offerings adds an edge to a company’s customer retention efforts, and since your most likely next sale comes from the customer who just bought from you, marketing needs to get heavily involved in service.

Five or six years ago these ideas were new enough that I wasn’t entirely sure they were anything more than podiumware. Now they’re not just commonplace, they’re yesterday’s news. Everyone knows the importance of service, and the value of retention has been quantified: retaining a current customer is typically worth about five times as much as acquiring a new one.

Some highly oversimplified history: around the mid-’70s, product was king, American product quality was awful, and Japan, having adopted Total Quality Management (TQM) practices, kicked America’s economic rear end.

It took a decade, but America eventually improved its product quality to a point where quality no longer won the pot – it was just the ante that let you play. Service became the new differentiator – hence my speech.

You can use a differentiator to gain market share or support higher margins. Japan used quality at a competitive price to gain market share in every industry it attacked. The old WordPerfect used service for the same purpose. Audi now touts both product quality and service as part of its premium image, supporting higher prices.

(The technology industry is something of a paradox in this respect – we pay ever-higher fees for ever-poorer levels of service. Disagree if you like, but first consider that this one subject keeps Ed Foster busy full-time … and his Gripe Line feature elsewhere in these pages doesn’t suffer from repetition.)

To stay ahead of the pack when it comes to providing service, companies have invested heavily in technology. In addition to EDI and fax-on-demand, we’ve pressed imaging, workflow, and computer-telephony integration into use, along with the World Wide Web, which has superseded more primitive means of providing direct access to our systems. Has the investment paid off?

Interesting question. Since our competitors have made the same investments we have, we may all have wasted our money. More likely, our measure of success is keeping our heads above water in the ocean of competition in which we’re all immersed. Such are the hazards of finding suitable performance measures.

As happened with quality, service has reached a point of diminishing returns as a differentiator, becoming just another part of the ante. The ante, of course, is important, and the systems you provide that support your company’s TQM and service efforts remain vital to your company’s future.

They’re vital. They just aren’t going to be enough anymore.

As CIO (or future CIO), you need to stay ahead of the rest of the company in thinking about how to apply technology to reshape your company’s strategy. Since it’s the nature of technology to provide or enhance capabilities, you need to anticipate the capabilities your company will need to lead the pack. After you’ve perfected product quality and surrounded your products with high-quality service, what do you do next? Look for another differentiator, of course.

That’s what we’re going to explore next week.

I spent five long years studying the behavior of electric fish in graduate school before becoming a professional programmer in the world of commerce. Not a day of my fish research was wasted – I’ve reused nearly everything I learned in graduate school in my business career.

You’re probably expecting a segue from reuse to object technology. Nope. Not this week. We’re going to apply part of the philosophy of science to your day-to-day business decision-making.

My colleagues and I had many a fine discussion about which theories had scientific value and which ones provided bull-fodder when mixed with a few mugs o’ brew. The short version: only theories that have both explanatory and predictive power are scientifically useful, because theories that explain but don’t predict can’t be tested.

Businesses deal with theories all the time. To their misfortune, businesses have only one way to test a theory: try it and see what happens. Sadly, the results still don’t tell us much. Businesses want to make money, not test theories, so they don’t apply the kinds of experimental and statistical controls that lead to confidence in the results. One perfectly valid business theory may be associated with marketplace failure (perhaps due to poor execution), while another – a really stupid idea – ends up looking brilliant because the company that followed it did enough other things right to thrive.

While business theories are rarely as certain as, say, the laws of thermodynamics, they’re often good enough to be worth using – provided they’re useful and not just interesting. And useful means they provide guidance when you’re making decisions.

And that takes us to last week’s column on the difference between client/server computing and distributed processing. “Client/server,” you’ll recall, refers to a software partitioning model that separates applications into independent, communicating modules. The test of client/server isn’t where the modules execute, it’s their separation and independence.

Distributed processing is a hardware and network architecture in which multiple, physically independent computers cooperate to accomplish a processing task.

You can certainly implement client/server computing on a distributed architecture – they go together naturally – but they’re not the same thing.
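
To make the distinction concrete, here’s a minimal sketch – in Python, using the standard library’s XML-RPC modules and a hypothetical order_status function; nothing here comes from a real system – of two independent modules communicating through a defined interface:

    import threading
    import xmlrpc.client
    from xmlrpc.server import SimpleXMLRPCServer

    # Server module: business logic only, no presentation code.
    def order_status(order_id):
        """Stand-in for a real order-status lookup."""
        return {"order_id": order_id, "status": "shipped"}

    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(order_status, "order_status")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client module: knows only the server's interface, not its internals.
    proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    print(proxy.order_status(42))

Run as-is, both modules execute on one machine: client/server computing with no distributed processing. Point the proxy’s URL at a second machine and the same two modules become a distributed system – the software partitioning doesn’t change.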

While writing the column, I could almost hear some readers saying, “Oh, come on. That’s just semantics.” But the distinction matters. In other words (we’re there!), we’re dealing with a useful business theory.

One of our teams used it recently while helping a client sort through some product claims. One vendor touted its “paper-thin client” – it uses the X Window System on the desktop – as one of its desirable design features. A thin-client design was just what we were looking for, because we wanted to reuse a lot of the core system’s business and integration logic in new front-end applications.

Looking at the product more closely, we discovered something wonderful. The vendor hadn’t implemented a thin client at all. It had built a fat client that mixed presentation and business processing together, but executed it on the server. Its system used paper-thin desktops, not paper-thin clients.
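
Here’s the shape of what we found, sketched in Python with hypothetical stand-in functions – none of this is the vendor’s actual code. Notice that where a module runs appears nowhere in it; the design difference is entirely in how the logic is partitioned:

    def show(msg):            # stand-in for all presentation code
        print(msg)

    def credit_ok(customer):  # stand-in business rule
        return customer != "deadbeat"

    # Fat client: presentation and business logic tangled in one module.
    # Run it on a desktop or – as this vendor did, via X – on the server;
    # either way, no new front end can reuse the business rules without
    # dragging the screen handling along.
    def fat_client_place_order(customer, item):
        show(f"Order form for {customer}")   # presentation
        if credit_ok(customer):              # business rule, buried here
            show(f"Confirmed: {item}")       # presentation again
        else:
            show("Order rejected")

    # Thin client: presentation only. The business rules live behind a
    # server interface that any new front end can call directly.
    class OrderServer:        # stand-in for the real back end
        def place_order(self, customer, item):
            return "confirmed" if credit_ok(customer) else "rejected"

    def thin_client_place_order(customer, item, server):
        show(f"Order form for {customer}")
        show(f"Order {server.place_order(customer, item)}")

    fat_client_place_order("Acme", "widgets")
    thin_client_place_order("Acme", "widgets", OrderServer())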

Thin desktops may be just what you’re looking for. They reduce the cost of system management (fewer desktop software installations) and can give you highly portable applications. They come at a price, though: an impoverished interface and a much higher processor load on the server, to name two.

We weren’t looking for thin desktops. We wanted to reuse a lot of the application logic built into the system, and that meant disqualifying this particular product.

Take a minute to think about some of the claims you’ve read about the network computer (NC). Ever hear someone refer to it as a thin-client architecture? I have, but it isn’t any kind of client architecture. It’s a distributed computing architecture. Whether the applications you run on an NC use thin clients, fat clients, or just terminal emulators depends on how you or the vendor partition the application logic and where you execute the various modules that make up the application.

Think the distinction between distributed processing and client/server computing is “just a theory” or “just semantics”? Think again: it’s central to your technical architecture.