Will this be the year of the fat network?

Regular IS Survivalists will recall this term. Introduced last year as a more accurate alternative to “thin client” (which originally referred to the presentation module of a well-designed n-tier application, but which has, through misuse, lost all of its meaning), fat network architectures come in three flavors:

  • Windows terminals, in which client modules that range from slender to obese execute on servers, relegating desktop systems to managing keystrokes, mouse clicks, and the display.
  • Network computers (NCs), which download applications stored on servers to the desktop for execution.
  • Applications that use a browser as their user interface platform, either in vanilla form or enhanced through the use of some kind of plug-in.

I call them fat network architectures because the only thing they have in common is the need for bigger, more reliable servers and (usually) more network bandwidth than applications designed to be stored on and executed from local hard drives.

Fat network architectures (do the translation in your head every time you read “thin client”) won’t hit the jackpot this year … or if they do, something is seriously wrong with how IS sets its priorities. Here’s why:

First of all, fat network architectures place a premium on sturdy, high-performance distributed systems. Judging from my conversations with several specialists in the enterprise systems management discipline, the state of distributed systems management in most data centers is deficient. Even if moving to a fat network architecture is your top priority, you’ll do it next year — this year, you’ll focus on stabilizing your servers and managing them better.

Secondly, most CEOs won’t give you the time, budget and resources for a major architectural change this year. They just spent a king’s ransom on infrastructure in the form of Y2K remediation, often delaying new business initiatives as a result. Now, they need you to focus on these business initiatives, and the new applications and application integration needed to support them. With Y2K, the CIO dictated the company’s IS priorities. Now it’s time for a different balance.

Yes, you can build the new stuff using one of the three fat network architectures. This strategy is less risky than wholesale conversion anyway, so more power to you. If you choose a fat network technique that replaces PCs with something else, though, make sure you first inventory the applications used by affected employees to do their jobs today. Otherwise, you could accidentally cripple them when you roll out the new stuff.

Oh, and don’t forget to fortify your distributed systems.

The third reason this won’t be the year of the fat network is perhaps the most intriguing. The personal computer is more than a “client” these days (remembering that in the device sense, “client” means “device hosting client processes” just as “server” means “device hosting server processes”). The personal computer is now a server as well — it has become a personal information hub, storing and synchronizing with an expanding array of information appliances. Examples?

  • The personal digital assistant (PDA) is the most visible information appliance these days. Millions of employees now rely on one for their calendar, task management, and address book where they used to rely on a paper day planner.
  • MP3 players will find uses beyond downloading music. Whether for “talking books” or executive briefings, information managed by the PC and downloaded to an MP3 player can be a productive alternative to the dangerous practice of conducting business via cell phone while commuting.
  • Speaking of books, how about electronic ones? The PC becomes the device that manages the user’s library, downloading books of current interest into the electronic book reader as needed. Start thinking about how your business could use electronic books instead of printed paper — the possibilities are stupendous.

Years of bushwah about “total cost of ownership” have conditioned a lot of IS managers to think of the PC as a liability to be minimized. In reality, it’s exactly the opposite. It’s an asset, ready and waiting to be leveraged.

Somehow, we printed, “The money saved the dwarfs that spent on remediation.”

I was explaining why we used two-byte year fields in the first place, and meant to point out that because storage cost so much when we wrote our legacy systems, the money saved was much greater than what we spent to fix the problem.

Instead, an extra “the” put me in Middle Earth, doing my Gandalf impression. (To be honest, I like the printed version better than the original, for reasons I can’t begin to explain.) My apologies if I accidentally offended any low-altitude readers.

A legacy system is like any other corporate asset. You invest in it, you maintain it, and you maximize the returns you get from it. With all the money you (and the dwarfs) spent remediating your legacy systems, you had better get some extra leverage from them, don’t you think?

One way to do this is web-to-host integration — products that take functionality from your legacy systems and make it available through a browser. As with so many other subjects in information technology, though, the obvious solutions are often short-sighted.

Go back to the basics — the need for IS to manage technical architecture. Technical architecture management is a core discipline for IS these days, as described in this space over a year ago (from 8/17/98 through 10/5/98). You’ll recall that your technical architecture consists of three layers: Application, Information, and Platform. Business value comes from the Application layer. Applications make use of Information (which includes both databases and unstructured data) and run on the Platform layer (which includes, among other items, hardware, networks, operating software, and database management systems).

Let’s approach web-to-host integration from an architectural perspective — a process that begins by describing the application-layer functionality you need and then traces a path for its implementation that maintains as much simplicity and design elegance as possible in the lower layers of the architecture. What functionality are you looking for?

You may be looking for a less-costly way to deliver 3270 screens to your end-users, figuring you can simplify your platform layer by leveraging your intranet. If so, congratulations — you’re taking an architectural view.

On the other hand, you may be trying to export legacy functionality to your company’s web site — perhaps to encourage customer self-service, or as part of a supply-chain optimization effort.

If so … you’re about to make a big mess.

Your goal is to export mainframe functionality via the Web. Beyond any shadow of a doubt, though, you’ll need to export that functionality through other channels as well: through interactive voice response (IVR), through your call center, possibly through tools customized for your direct sales force …

Get the idea? Once you start thinking you need a web-to-host integration solution, you’ll start thinking you need a separate host integration solution for every other category of application. That’s bad enough.

What happens next is worse. Since your Web site, call center, and IVR system will have separate host connections, they’ll also have their own, separate, possibly inconsistent host integration logic.

Host integration logic belongs in the mid-tier, where all applications (Web, call center, IVR, back-office) can use it. That’s what Enterprise Application Integration (EAI) systems are for. The case for EAI is compelling: the alternative is to create a spiderweb of separate interfaces between each legacy system and every front-end application. Do this, and managing the impact of legacy system maintenance becomes a polynomial nightmare.
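To put rough numbers on that nightmare (a made-up illustration, not figures from any particular shop): with five legacy systems and four delivery channels, point-to-point integration can mean as many as 5 × 4 = 20 separate interfaces to build, and to retest every time a legacy system changes. A shared mid-tier needs 5 + 4 = 9 adapters. In code terms, the mid-tier boils down to one service interface that every front end calls. Here is a minimal Java sketch, with every name (CustomerService, CicsCustomerAdapter, Customer) invented purely for illustration:

    // One mid-tier service interface; the Web site, IVR, and call center all call this,
    // never the host directly.
    interface CustomerService {
        Customer findCustomer(String customerId);
        void updateAddress(String customerId, String newAddress);
    }

    // The one place that knows how to talk to the legacy host (screen navigation,
    // message queue, whatever your integration tool provides).
    class CicsCustomerAdapter implements CustomerService {
        public Customer findCustomer(String customerId) {
            // Run the host inquiry transaction and map its fields into an object.
            return new Customer(customerId, "name from host", "address from host");
        }
        public void updateAddress(String customerId, String newAddress) {
            // Translate the change into the host transaction that persists it.
        }
    }

    // Simple value object every channel shares.
    class Customer {
        final String id, name, address;
        Customer(String id, String name, String address) {
            this.id = id; this.name = name; this.address = address;
        }
    }

When a legacy system changes, its adapter changes; the Web site, IVR scripts, and call-center application keep calling the same interface.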

EAI isn’t simple. It is, in fact, a very difficult technology to implement well. In a first-class EAI implementation you have to figure out how to use host wrappers and integration tools to map your legacy systems into a well-designed object class hierarchy, both to present them and to “persist” any updates.
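What a host wrapper amounts to, roughly, is a class that presents one legacy transaction as an object and knows how to persist changes back through that transaction. Here is a hedged sketch along the same lines as the one above, with invented names and placeholder host calls rather than any real EAI product’s API:

    // Hypothetical wrapper around a legacy account inquiry/update transaction pair.
    class AccountWrapper {
        private final String accountNumber;

        AccountWrapper(String accountNumber) {
            this.accountNumber = accountNumber;
        }

        // "Present": run the inquiry transaction and map its fixed-format fields
        // into an object the rest of the mid-tier can use.
        Account load() {
            String balanceField = "0000012345";   // placeholder for the host's response field
            return new Account(accountNumber, Long.parseLong(balanceField));
        }

        // "Persist": translate the object back into the host's update transaction format.
        void save(Account account) {
            String balanceField = String.format("%010d", account.balance);
            // ...send the update transaction with the reformatted field.
        }
    }

    class Account {
        final String number;
        final long balance;
        Account(String number, long balance) { this.number = number; this.balance = balance; }
    }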

A good EAI implementation is very hard. The alternative, though, isn’t hard … it is, in the long run, completely unworkable.