According to Bernard Grun’s awesome The Timetables of History, King Herod died in the year 4 B.C. Christ was probably born that same year, if not earlier, since it was Herod’s call for a census that sent Joseph and Mary on their trip to Bethlehem.

All of you who still get riled up about when the millennium really starts should refocus your energy on fixing how we number years. Since the millennium began 2001 years after the year “-4” — 1997 AD if I’m doing my sums right — today’s column is really in the Oct. 26, 2002 edition of InfoWorld. The future is now.

Late in the real year 2000 I ran my first column trashing the Network Computer. Several years of marketing nonsense have muddied definitions almost beyond repair, so let’s try to restore some clarity: The NC, as defined by Larry Ellison, who coined the term, is a networked device that executes Java code, connected to servers from which it downloads Java applications to run locally.

If local storage were expensive and bandwidth cheap, the NC would have made lots of sense. As it is, the whole attraction of the NC depended on two assumptions.

The first is that Microsoft will continue to shirk its DLL obligations. If you haven’t figured this out yet, Microsoft either created DLL hell deliberately or is so awesomely incompetent that our language lacks the words to describe its ineptitude. If Microsoft were to require registration of all DLLs and publication of their exact specifications, a new version of a DLL could not silently change functionality, and DLL hell would be gone forever.

Of course, so would Microsoft’s ability to break competitors’ applications by publishing new versions of DLLs, which is why I’ve concluded this is the result of malfeasance rather than incompetence.

That’s one assumption. The second is the mirror of the first — it assumes someone would register all Java applications and applets, requiring that they all have fixed, published specifications. Otherwise we’d simply trade DLL hell for applet hell, and the sole advantage claimed for NCs — reduced cost of ownership — would vanish from the equation. Nobody has taken this essential step, and it’s pretty late in the game for the NC’s proponents to figure it out.
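The cure described above, for DLLs and applets alike, amounts to a registry in which every published version of a shared component has a frozen, published interface. Here is a minimal sketch of that idea in Python; the class, the component names, and the signature strings are all my own invention for illustration, not any real Windows or Java mechanism:

```python
# Hypothetical sketch: a registry of shared components (DLLs or applets)
# in which every published version's interface specification is frozen.
# All names here are illustrative assumptions, not a real API.

class InterfaceFrozenError(Exception):
    """Raised when a registration would change a published interface."""

class ComponentRegistry:
    def __init__(self):
        # (name, version) -> published interface specification
        self._published = {}

    def register(self, name, version, interface):
        key = (name, version)
        if key in self._published:
            if self._published[key] != interface:
                # The fix for DLL hell: an existing version's published
                # spec can never change; ship a new version number instead.
                raise InterfaceFrozenError(f"{name} v{version} is frozen")
            return  # re-registering an identical spec is harmless
        self._published[key] = interface

    def lookup(self, name, version):
        return self._published[(name, version)]

registry = ComponentRegistry()
registry.register("msvcrt", "6.0", {"printf": "(fmt, *args) -> int"})

change_refused = False
try:
    # A vendor tries to alter v6.0's behavior in place...
    registry.register("msvcrt", "6.0", {"printf": "(fmt) -> None"})
except InterfaceFrozenError:
    change_refused = True  # ...and is refused

# Shipping the changed interface as a new version is fine; old callers
# keep loading v6.0 and never break.
registry.register("msvcrt", "7.0", {"printf": "(fmt) -> None"})
```

Under a scheme like this, the only way to change behavior is to publish a new version alongside the old one, so existing applications keep working.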

The IS Survival Guide didn’t have the clout to kill the NC — not in the real year 2000, not now.

Oracle has that clout. Since Oracle invented the concept of the NC, when it abandons the idea it’s safe to declare the NC completely dead. And the new version of Oracle’s ERP suite makes it clear the company has lost interest.

The new version, according to Oracle, is exciting because it migrates functionality back to big, centrally managed servers. It abandons client/server computing for a browser-based interface with all logic executing on the Web server.

If you like the idea of substituting WAN reliability and performance for replication, and you think Oracle has built a rich enough GUI into a browser, go ahead and buy it. Don’t, however, fall for the mistaken idea that if something is browser-based then it fits the NC model.

The point of the NC is for code — downloaded or cached — to execute on the desktop. The point of a browser is to provide an intelligent GUI presentation for code executing on a server. Yes, the browser can provide a home for Java applets, but Oracle’s centralized architecture isn’t suited for doing much on the desktop. That takes bandwidth, which means local, not centralized, servers.

Defenders of Oracle and the NC will rightly point out that the NC can host a browser, so Oracle’s ERP suite is compatible with an NC. That’s true, but no more relevant than the ability of the PC architecture to run Java Virtual Machines (JVMs) for hosting applications intended for NCs.

What’s relevant is that Oracle’s ERP team ignored the NC architecture in building this release. It isn’t built around Java applications downloading to desktop JVMs for execution.

And if Oracle’s ERP team ignores the NC architecture, who exactly is supposed to pay attention to it?

Electric fish are fascinating critters (at least to me; regular readers will remember I spent years studying these suckers). One of their more remarkable features is their electric organ, the gadget they use to generate electricity. It started out as a muscle. Aeons of evolution eliminated its ability to contract while increasing the amount of electricity its cells generate.
That’s how evolution works — it grabs whatever is convenient and adapts it for whatever use is called for.

We do a lot of this in IS as well, continually adapting and evolving our legacy systems, databases, and computing platforms to whatever new requirements pop up. And this is a good thing to do.

Eventually, though, we find ourselves in evolutionary dead-ends, where our adaptations, kludges, shortcuts, and patches turn into barriers that prevent further change. Mother Nature handles this situation through extinction. You’d probably prefer a different strategy.

The alternative to evolution is design, and design is what distinguishes architecture from gluing a bunch of stuff together wherever it happens to fit. In this, the final article in our series on technical architecture (hey, don’t cry!), we deal with design.

Architects, whether designing IS infrastructures or office buildings, have to be both technically and artistically inclined. So far we’ve talked about the analytical, technical aspects of architecture.

Good designs, though, are as much a matter of art — aesthetics — as of technical prowess. Aesthetics pays off, because ugly designs turn into unreliable, clunky, nasty implementations. It can’t be logically proven, but it’s so nonetheless.

Defining aesthetics is more or less impossible, because aesthetics is mostly a matter of taste. You still need to make consistent design decisions, though. To do so, develop a set of clear, consistent principles designers can use as a starting point. And to develop good design principles, you need to understand the important design issues.

A design issue is any technical problem you need to solve or computing function you need to deliver on a regular basis. For example, physical connectivity is a design issue. You have several design principles to choose from: one connection per end-user device, with network gateways to resources as needed; multiple end-user connections (for example, network, modem, and desktop TAPI); or ad hoc decisions made as seems appropriate for each situation.

I’m a big fan of a single connection to the desktop and doing everything through the network, but that may not be the right solution for you.

Take another design issue: how to handle data redundancy. You have several design principles to choose from. You can modify your systems to eliminate redundancy; define master/slave relationships among your data stores and periodically resynchronize everything; build technology that propagates update transactions to all redundant data, keeping everything synchronized in real time; or live with the mess and not worry about the redundancy. Pick one and don’t lose your nerve.
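The third option above, propagating every update transaction to all redundant copies, can be sketched in a few lines. This is an illustration only; the store names and the class are invented, and real propagation would of course involve transactions, queues, and failure handling:

```python
# Illustrative sketch of one redundancy principle from the list above:
# fan every update transaction out to all redundant copies so they stay
# synchronized. Plain dicts stand in for the redundant data stores, and
# the store names are assumptions made up for the example.

class UpdatePropagator:
    def __init__(self, stores):
        self.stores = stores  # each store holds its own redundant copy

    def apply(self, key, value):
        # One logical write becomes one physical write per redundant copy.
        for store in self.stores:
            store[key] = value

billing = {}
crm = {}
warehouse = {}

propagator = UpdatePropagator([billing, crm, warehouse])
propagator.apply("cust-1042", {"name": "Acme Corp", "terms": "net 30"})
```

After the single `apply` call, all three redundant stores hold the same customer record, which is the whole point of the principle.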

In fact, for each design issue the most important thing is to pick just one design principle and live with it — and also, make sure your design principles are consistent with each other.

Which brings us to standards. Many design principles establish the need for a standard. In fact, every standard you establish should stem from a design principle — otherwise you don’t need it. For example, you may establish a design principle that all data will be stored in a single mainframe RDBMS, accessed through standard ODBC calls. This principle calls for selection of a standard ODBC-compliant RDBMS.
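The "one standard data-access path" principle above can be sketched as a thin wrapper that every application must go through. In this hedged example, Python's built-in sqlite3 module stands in for an ODBC-compliant RDBMS, and the wrapper name is my own invention:

```python
# Sketch of the design principle above: all data access funnels through
# one standard call interface. sqlite3 is a stand-in here for an
# ODBC-compliant RDBMS; StandardDataAccess is a hypothetical name.
import sqlite3

class StandardDataAccess:
    """The single sanctioned path to the corporate data store."""

    def __init__(self, dsn=":memory:"):
        self.conn = sqlite3.connect(dsn)

    def execute(self, sql, params=()):
        # Every application issues queries only through this call, so
        # swapping the back-end RDBMS touches one module, not many.
        cur = self.conn.execute(sql, params)
        self.conn.commit()
        return cur.fetchall()

dao = StandardDataAccess()
dao.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
dao.execute("INSERT INTO orders VALUES (?, ?)", (1, 99.50))
rows = dao.execute("SELECT amount FROM orders WHERE id = ?", (1,))
```

The payoff of the principle is in the comment: if the standard RDBMS changes, only this one module changes with it.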

Now the really hard part: Your design principles are important, but they aren’t religion. You’ll sometimes have to make pragmatic decisions that violate a design principle or standard. Figure a way to mitigate the impact, and do what’s right for the business.

Your company’s strategic, tactical, and infrastructural goals drive the applications, information, and ultimately the computing platforms you provide. These define the design issues you need to resolve, which in turn cause you to select a set of consistent design principles.

It’s these design principles that lead you to choose specific technical standards.

Otherwise, your standards are just red tape.