According to Bernard Grun’s awesome The Timetables of History, King Herod died in the year 4 B.C. Christ was probably born that year, if not earlier, since the census that sent Joseph and Mary on their trip to Bethlehem took place while Herod still reigned.

All of you who still get riled up about when the millennium really starts should refocus your energy on fixing how we number years. Since the millennium began 2,000 years after the year “-4” (1997 A.D., if I’m doing my sums right), today’s column is really in the Oct. 26, 2002 edition of InfoWorld. The future is now.

Late in the real year 2000 I ran my first column trashing the Network Computer. Several years of marketing nonsense have muddied definitions almost beyond repair, so let’s try to restore some clarity to the situation: The NC, as defined by Larry Ellison, who coined the term, is a networked device that can execute Java code, connected to servers from which it downloads Java applications for local execution.
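To put Ellison’s definition in concrete terms, here’s a minimal Java sketch of the download-and-execute-locally pattern. The server URL and application class are hypothetical, and I’m assuming the downloaded class implements Runnable; the point is only to show where the code runs.

```java
import java.net.URL;
import java.net.URLClassLoader;

// A sketch of the NC pattern: pull a class down from a server and run
// it on the local JVM. The server URL and class name are hypothetical.
public class NcLauncher {
    public static void main(String[] args) throws Exception {
        URL appServer = new URL("http://apps.example.com/classes/");
        try (URLClassLoader loader = new URLClassLoader(new URL[] { appServer })) {
            // The download happens here...
            Class<?> app = loader.loadClass("com.example.SpreadsheetApp");
            // ...but all execution happens on the desktop, not the server.
            Runnable instance = (Runnable) app.getDeclaredConstructor().newInstance();
            instance.run();
        }
    }
}
```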

If local storage were expensive and bandwidth cheap, the NC would make lots of sense. As it is, the whole attraction of the NC depends on two assumptions.

The first is that Microsoft will continue to shirk its DLL obligations. If you haven’t figured this out yet, Microsoft either created DLL hell deliberately or is so awesomely incompetent that our language lacks the words to describe its ineptitude. If Microsoft were to require registration of all DLLs and publication of their exact specifications, a new version of a DLL could never silently change its behavior, and DLL hell would be gone forever.

Of course, so would Microsoft’s ability to break competitors’ applications by publishing new versions of DLLs, which is why I’ve concluded that this is the result of malfeasance rather than incompetence.

That’s one assumption. The second is the mirror of the first — it assumes someone would register all Java applications and applets, requiring that they all have fixed, published specifications. Otherwise we’d simply trade DLL hell for applet hell, and the sole advantage claimed for NCs — reduced cost of ownership — would vanish from the equation. Nobody has taken this essential step, and it’s pretty late in the game for the NC’s proponents to figure it out.
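For the record, the registration both assumptions call for isn’t complicated. Here’s a minimal sketch, assuming a hypothetical registry keyed by component name and version: once a version’s specification is published, it can never change. The same mechanism would serve for DLLs and applets alike.

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of a component registry that freezes each published
// name/version pair. All names here are hypothetical.
public class ComponentRegistry {
    // Maps "name:version" to a fingerprint of its published specification.
    private final Map<String, String> published = new HashMap<>();

    public void register(String name, String version, String specFingerprint) {
        String key = name + ":" + version;
        String existing = published.get(key);
        if (existing == null) {
            published.put(key, specFingerprint); // first publication freezes the spec
        } else if (!existing.equals(specFingerprint)) {
            // Same version, different behavior: the very thing that causes DLL hell.
            throw new IllegalStateException(
                key + " is already published with a different specification");
        }
        // Re-registering an identical spec is a harmless no-op.
    }
}
```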

The IS Survival Guide didn’t have the clout to kill the NC — not in the real year 2000, not now.

Oracle has that clout. Since Oracle invented the concept of the NC, when it abandons the idea it’s safe to declare the NC completely dead. And the new version of Oracle’s ERP suite makes it clear the company has lost interest.

The new version, according to Oracle, is exciting because it migrates functionality back to big, centrally managed servers. It abandons client/server computing for a browser-based interface with all logic executing on the Web server.
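For concreteness, “all logic executing on the Web server” looks something like this minimal sketch, written against the standard Java servlet API. The order-status lookup is a hypothetical stand-in for real ERP logic; the browser receives finished HTML, never code.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A sketch of the browser-based model: business logic runs on the
// web server, and the desktop only renders the HTML it is handed.
public class OrderStatusServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String orderId = req.getParameter("order");
        String status = lookUpStatus(orderId); // logic executes here, on the server

        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        // The desktop receives finished presentation, not code to run.
        out.println("<html><body>Order " + orderId + ": " + status + "</body></html>");
    }

    private String lookUpStatus(String orderId) {
        return "SHIPPED"; // hypothetical stand-in for a database query
    }
}
```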

If you like the idea of substituting WAN reliability and performance for replication, and you think Oracle has built a rich enough GUI into a browser, go ahead and buy it. Don’t, however, fall for the mistaken idea that if something is browser-based then it fits the NC model.

The point of the NC is for code — downloaded or cached — to execute on the desktop. The point of a browser is to provide an intelligent GUI presentation for code executing on a server. Yes, the browser can provide a home for Java applets, but Oracle’s centralized architecture isn’t suited for doing much on the desktop. That takes bandwidth, which means local, not centralized, servers.

Defenders of Oracle and the NC will rightly point out that the NC can host a browser, so Oracle’s ERP suite is compatible with an NC. That’s true, but no more relevant than the ability of the PC architecture to run Java Virtual Machines (JVMs) for hosting applications intended for NCs.

What’s relevant is that Oracle’s ERP team ignored the NC architecture in building this release. It isn’t built around Java applications downloading to desktop JVMs for execution.

And if Oracle’s ERP team ignores the NC architecture, who exactly is supposed to pay attention to it?

A CIO of my acquaintance once described his priorities for the new megasystem his developers were busily constructing. “What I want is the database,” he said. Waving his hands disparagingly, he added, “I don’t much care about the applications that feed it.” Guess what? His system never got built, and he didn’t last two years in his job.

We’re continuing to develop the technical architecture section of our integrated IS plan. This week we home in on the information layer. The techniques for managing this layer are well understood. Instead, let’s elevate the discussion (elevate in the sense that snipers like high places). Our goal this week is to place information in the proper context. We begin by avoiding the mistakes of the aforementioned CIO, realizing that although information is the center of our universe, applications drive the business.

We made a horrible mistake when we changed our name from electronic data processing (EDP) to management information systems (MIS) back in the early ’80s.

When we were EDP, we did something valuable: we processed data, and as we did so, we automated manual processes.

MIS managed information. Even worse, we declared that our purpose was providing information to managers. Helping employees do useful work became a byproduct.

We would be much better off calling ourselves “process automation systems.” We got off track because of database management technology. With the advent of the DBMS, we changed how we designed systems. We put information at the center of design. Information, we realized, is more stable than the programs that make use of it.

Next we figured out that because information is the heart of our designs, it must be at the heart of the enterprise. So far so good, but then we left the halls of reason and jumped to the notion that it’s information, not processing, that delivers the most value.

Take a fresh, hard look at this. IS delivers the bulk of its value through process improvement: lower unit costs, reduced cycle time, and increased accuracy.

This is just as well. If we really do think information is the point of it all, our efforts are way out of whack with the company as a whole. About 80 percent of an average company’s information is unstructured. (I’ve run across this estimate several times, and it passes the “feels right” test, too.) It’s text, voice, and pictures. A simple-minded feller might figure that if information is the point of it all, and 80 percent of all information is unstructured, well then 80 percent of our efforts should be devoted to the management of unstructured information. They aren’t, of course. Eighty percent of our efforts go to managing alphanumeric data – the kind we know how to process. Telephone systems and personal computers – the technologies that handle unstructured information – have been the poor stepchildren of IS, not because we couldn’t manage the information, but because we couldn’t process it.

We’ve been able to get away with this so far. No longer, though. Maybe it’s the influence of e-mail and the World Wide Web, but companies are waking up to this deficiency.

Want to know your future? Look at a modern call center. It records every conversation digitally, along with every screen visited during the call. It’s indexed and ready for online retrieval to help call center management assess individual performance. It’s also available for computing sophisticated performance statistics.

Today these systems are closed and proprietary. Tomorrow they’ll store everything in the same document management system that will store scanned images and word processing documents, all linked through a common index. That’s just one example of how you will have to manage and process information in the near future.
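What might that common index look like? Here’s one minimal sketch; every name and URI in it is hypothetical, but the structure, one entry per interaction linking recording, screens, and documents, is the point.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A sketch of a common index over unstructured information: one entry
// per customer interaction, linking the digitized call recording, the
// screens visited during the call, and related documents.
public class InteractionIndex {
    public static final class Entry {
        final String callId;
        final String recordingUri;         // digitized conversation
        final List<String> screensVisited; // screens captured during the call
        final List<String> documentUris;   // scanned images, word processing docs

        Entry(String callId, String recordingUri,
              List<String> screensVisited, List<String> documentUris) {
            this.callId = callId;
            this.recordingUri = recordingUri;
            this.screensVisited = screensVisited;
            this.documentUris = documentUris;
        }
    }

    private final Map<String, Entry> byCallId = new HashMap<>();

    public void add(Entry e) {
        byCallId.put(e.callId, e);
    }

    // Online retrieval: everything about one call, in one lookup.
    public Entry lookup(String callId) {
        return byCallId.get(callId);
    }
}
```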

Thought your information layer was in good shape? Maybe for today, but you have some serious planning to do.