Back in the early 1980s, when the world was young, I had more hair, and Lotus 1-2-3 ran just about as fast on a 286 processor in character mode as Excel does now on a Pentium under Windows/NT, the toughest auditing problem in a big spreadsheet was tracking down circular references. (Okay, I admit it … I caught myself saying to a friend the other day, about rap music, “It all sounds the same to me.” I’m starting to geeze.)

Anyway, you’ll recall that with a circular reference, a cell’s formula refers, directly or indirectly, to its own value, which means the spreadsheet never settles on a result: every recalculation changes it. The simplest example: the formula =A1+1 in cell A1.
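A quick sketch of why that formula never settles down. This simulates a spreadsheet's iterative recalculation (the function name and pass-counting are my own illustration, not how Excel or 1-2-3 actually implements it):

```python
# Simulate iterative recalculation of a sheet where cell A1 holds "=A1+1".
# Each pass re-evaluates the formula against the cell's previous value,
# so the "answer" depends entirely on how many passes you allow.

def recalculate(a1_value, passes):
    """Apply the formula =A1+1 once per recalculation pass."""
    for _ in range(passes):
        a1_value = a1_value + 1
    return a1_value

print(recalculate(0, 1))    # one pass: 1
print(recalculate(0, 100))  # a hundred passes: 100 -- still climbing
```

Run it with any number of passes you like; the cell never converges, which is why spreadsheets flag circular references instead of silently recalculating forever.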

In some situations, for example simulations and formulas that require recursion, circular references can be useful. Most of the time, though, they give you meaningless results.

Here’s another example of a circular reference: “Client/server systems cost more than mainframe systems.”

You’ve heard this repeatedly from authoritative sources in the industry that charge so much for advice they must be right. Except they’re demonstrably wrong, because this calculation involves a circular reference. Don’t believe me?

How many times have you heard this semi-true statement: “The mainframe won’t go away, but its role will change. It will become the biggest server on your network.”

Okay, let’s see. Client/server systems cost more than mainframe systems, and mainframes will be servers in client/server systems. Put these two facts together and you’ve proved that client/server systems that use mainframes as servers cost more than themselves!

Ain’t logic a wonderful thing?

Want more proof? I’m glad you asked. Imagine a small system … it has ten tables, five concurrent users, no more than a thousand records in its biggest master table, and handles about one transaction a minute. Which will be more expensive to develop and maintain: a mainframe, COBOL/CICS/3278 system, or something you build in Access, Paradox, Visual Basic or Delphi on a small PC network? Hint: the word “COBOL” shouldn’t appear in your answer.

Now imagine an airline reservation system: millions of records in its main tables, thousands of transactions per second, and oodles of concurrent users. (An oodle is somewhere between a whole lot and a bazillion.) This time, yer basic COBOL/CICS/3278 system wins.

If you’re so inclined, imagine a graph. The y-axis is cost, the x-axis is a composite size/complexity index. When x is small, non-traditional architectures cost much less than their mainframe/terminal-host equivalents. The line representing mainframe/terminal-host costs increases with size more slowly than the line for non-mainframe systems.
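That graph reduces to two straight lines and a break-even point. Here's a toy version with made-up intercepts and slopes (purely illustrative, not real cost figures):

```python
# Toy version of the cost graph: total cost as a linear function of a
# size/complexity index x. All numbers below are invented for illustration.

MAINFRAME_FIXED = 500.0   # high up-front cost (hypothetical units)
MAINFRAME_SLOPE = 2.0     # cost grows slowly with size

PC_NETWORK_FIXED = 50.0   # low up-front cost
PC_NETWORK_SLOPE = 8.0    # cost grows quickly with size

def mainframe_cost(x):
    return MAINFRAME_FIXED + MAINFRAME_SLOPE * x

def pc_network_cost(x):
    return PC_NETWORK_FIXED + PC_NETWORK_SLOPE * x

# The lines cross where the two costs are equal:
#   fixed1 + slope1 * x = fixed2 + slope2 * x
crossover = (MAINFRAME_FIXED - PC_NETWORK_FIXED) / (PC_NETWORK_SLOPE - MAINFRAME_SLOPE)
print(crossover)  # 75.0 in these units: below it the PC network wins, above it the mainframe does
```

Swap in your own intercepts and slopes and the crossover moves, which is the whole point: where the lines cross depends on your circumstances, not on anybody's slogan.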

Somewhere, the lines cross: “client/server” systems – actually, systems built with a multitier application partitioning model, non-mainframe servers, and GUIs – sometimes cost more, sometimes cost less than “mainframe” systems (those built with a single-tier application partitioning model, mainframes as the host, and a character-mode user interface).

This isn’t a one-size-fits-all situation: the optimal architecture depends on circumstances. Big batch jobs require big mainframes. Kludge together a solution based on too-small servers and your costs go up. Talk about blinding revelations.

And the circumstances are far more complicated than most pundits (but not yours truly) would have you believe: it’s a mix ’n’ match situation. “Client/server” vs “mainframe” tangles a bunch of separate threads. Next week we’ll explore this subject in detail, untangling these threads and using them to cross-stitch a portrait stunning in its beauty and complexity. (Help! The metaphor police are after me!)

Strictly speaking, the term “client/server” refers to an application partitioning model. In client/server applications, developers separate applications into multiple, independent, communicating processes. They can all run on a mainframe. They can all run on a single personal computer. They can run on a server, or some can run on each.

Of course, we’ve been doing this, in limited fashion, since the invention of operating systems and subroutine libraries. Which means the opposite of “client/server” isn’t “mainframe”.

It’s “bad programming”.

If I were starting a business today, I wouldn’t dream of using Microsoft Office.

No, Corel hasn’t bribed me. This is simple economics. Corel has priced its office suite so much lower than Microsoft has priced Office that I can’t imagine an incremental benefit to MS Office big enough to cover the spread.

Very few of InfoWorld’s readers are opening a business today, but many of you have MS Office ’97 staring you in the face. There’s no concurrent-use licensing, its file formats aren’t backward compatible, and it costs a lot of money compared to its competition.

Besides, many of you have expressed strong interest in the NC, which means you’re open to non-Microsoft applications. If you’re going to make your move, now’s the time.

Last I heard, IS budgets were still pretty tight, so the savings should be pretty interesting to a frugal CIO. “Except,” I can hear some of you thinking (I have mentioned my telepathic abilities, haven’t I?) “software is just a small part of the total cost of PC ownership. So this will just be nibbling around the margins.”

The numbers you hear bandied about on this subject are ridiculous. If you really believe each PC costs you eleven grand a year, put out an RFP. Any number of outsourcers will be delighted to provision your desktops for at least 20% less than that (I’d make the offer myself, but I don’t like lines that long).

So here’s what I’m going to do. Over the next few months, as a bizarre hobby, I’m going to put a better model together. InfoWorld’s Features and Test Labs may join me … we’re still working out the details. The point of this model won’t be the “real” costs. The point will be to help you probe the issues you face managing this resource, so you can make realistic decisions.

Here are the key elements this kind of model needs:

Fixed and Variable Components: Any useful financial analysis separates costs into those that don’t vary with volume and those that do. Analysis often benefits from separating “semi-variable” costs – those that increase in large steps at certain volume increments – from purely variable costs.

Partitioning of Benefit-driven Expenses and Overhead: Some expenses increase as employees take greater advantage of technology. Training is a good example: the higher your training costs, the more different applications employees know how to use, which means they’re gaining more value from their systems. On the other paw, you have to spend some money before the employee ever turns the system on. Include network connections in this category. Understanding the difference is critical.

Separation of Non-Technical Issues: I’m heartily bored with hearing how much time goes into managing hard disk space, doing backups, and all the related drivel. Take away the PC and employees will spend more time than that handling and filing paper, cleaning out filing cabinets when they get full, and wasting all the other time that gets used with non-technical alternatives. If we include these costs at all it will be as negative numbers, crediting the time saved from doing things the old ways.

Recognition of the Real World: PC cost models pretend we work rigid schedules at hourly rates, so that every hour spent in non-value-adding activities like training is lost to the organization.

It’s a great theory that ignores our day-to-day experience. When most of us take a vacation, our coworkers absorb some of our work and the rest piles up until we get back. We deal with it when we return, and our vacation (or training class, or whatever) costs our employers effectively nothing.
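The fixed/variable/semi-variable split described above is easy to sketch. Here's a minimal model with invented figures (the per-PC cost, the one-support-tech-per-50-PCs rule, and all the dollar amounts are hypothetical placeholders, not measured costs):

```python
import math

# A minimal cost model separating fixed, variable, and semi-variable
# components. Every number here is made up for illustration.

FIXED_COST = 10000.0        # doesn't vary with the number of PCs
COST_PER_PC = 400.0         # purely variable: scales with each PC
SUPPORT_TECH_COST = 5000.0  # semi-variable: one tech per block of 50 PCs
PCS_PER_TECH = 50

def total_cost(num_pcs):
    """Fixed cost + per-PC cost + staffing that steps up in increments."""
    variable = COST_PER_PC * num_pcs
    semi_variable = SUPPORT_TECH_COST * math.ceil(num_pcs / PCS_PER_TECH)
    return FIXED_COST + variable + semi_variable

print(total_cost(49))  # 10000 + 19600 + 5000  = 34600.0
print(total_cost(51))  # 10000 + 20400 + 10000 = 40400.0 -- note the jump
```

Going from 49 PCs to 51 adds only $800 of purely variable cost but $5,000 of semi-variable cost, which is exactly why lumping the two together produces misleading per-PC averages.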

Still believe each PC costs you $11K per year? Call me … I’m selling shares in a bridge crossing the East River, and I’m just sure you’ll want a piece of the action.