Draw a circle.

Surround it with several more circles. Connect the new circles to the one in the center with lines.

This is your systems architecture. Before you read on, label each of the circles. I’ll wait.

OK, done? You almost certainly chose one of the following two labeling schemes: You either tagged the central circle with some synonym for mainframe, host, or server and the outside circles as terminals, PCs, or network computers; or you called the central circle a personal computer and the outside circles mainframe, server, minicomputer, World Wide Web, and other resources you can access through your PC.

If you chose the mainframe-centric labels, chances are you like the fat-network architectures now fashionable under the misleading “thin client” moniker. If, like me, you put the end-user in the middle of modern computing architectures, using a PC to draw on whatever resources are currently needed, you probably worry about the whole fat-network approach to systems design.

Fat-network systems come in two basic flavors: Windows terminals and “Webware” interfaces. What’s interesting about these two flavors is that they have exactly nothing in common except that both have been misrepresented as thin clients.

The Windows terminal approach, whether sold by Citrix, Microsoft, or any of the dozen or so remote-control vendors, is remarkable primarily because of how unremarkable it is. It ignores the application architecture entirely, instead providing an alternative for deploying the same old stuff. The term “architecture” really shouldn’t be applied to a Windows terminal solution at all, since all it does is extend the keyboard and screen across a network. It does, however, put the host firmly in the middle, since with Windows terminals IS tightly controls the resources available to end-users.

Webware is more intriguing. When you design applications around Webware, you have at your disposal browsers, JavaScript, downloaded Java applets and applications, Java servlets and server-based applications, active server pages, Notes/Domino applications, Perl scripts, and enough other choices to delay any development project for a year while you sort them all out.

A Webware architecture means some code gets downloaded for desktop execution while other code executes on the server. The only option not available to you when designing Webware is storing software (other than a “thin” 50MB or so browser) on local hard drives. In other words, you never use the fastest and cheapest storage you have. That makes sense … if you have decided to put the host in the middle.

Take a moment to go beyond cost of ownership to deal with the more interesting benefit of ownership, and you’ll discover that GUI applications installed on the desktop and Webware-based applications aren’t mutually exclusive alternatives. You’ll want to use n-tier, thin-client (in the true, skinny presentation layer sense), probably object-based architectures to build both.
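
To make “skinny presentation layer” concrete, here’s a minimal sketch of my own, not a prescription: the QuoteService name and the pricing rule are invented for illustration. The business rule lives in one tier, and the desktop GUI and the Webware page are both thin wrappers that do nothing but format its answer.

```java
// A sketch of my own, not a prescription: the QuoteService name and the
// pricing rule are invented for illustration.
public class ThinPresentationDemo {

    // Business-logic tier: the one place the pricing rule lives.
    interface QuoteService {
        double priceFor(String product, int quantity);
    }

    static class StandardQuoteService implements QuoteService {
        public double priceFor(String product, int quantity) {
            double unitPrice = 9.95;                      // stand-in for a real price lookup
            double total = unitPrice * quantity;
            return quantity >= 100 ? total * 0.9 : total; // volume discount, purely illustrative
        }
    }

    // Desktop presentation: a thin wrapper that only formats the answer.
    static String desktopView(QuoteService svc, String product, int qty) {
        return String.format("Quote for %s: $%.2f", product, svc.priceFor(product, qty));
    }

    // Webware presentation: another thin wrapper, emitting HTML instead of a widget.
    static String webView(QuoteService svc, String product, int qty) {
        return "<p>Quote for " + product + ": $" + svc.priceFor(product, qty) + "</p>";
    }

    public static void main(String[] args) {
        QuoteService svc = new StandardQuoteService();
        System.out.println(desktopView(svc, "widgets", 120));
        System.out.println(webView(svc, "widgets", 120));
    }
}
```

Either front end can be added or replaced later without anyone touching the pricing rule, which is the whole point of keeping the presentation layer skinny.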

As a general rule you’ll use desktop GUIs for high-use applications targeted to a known desktop platform … for example, whenever you’re building or selecting software to be used by employees as part of their core job functions.

When you do, follow the rules for easy, stable installations: Don’t touch the registry, don’t put anything in the Windows folder or any of its subfolders, test builds for memory leaks, and deploy the full application in a scale-model test environment before implementing it in production. Design every module for portability and manageability, too.

When you don’t know what end-users will run on their desktops, or when you need to deploy an application for occasional use, go for Webware. Here, the increased functionality of a desktop-installed GUI no longer outweighs the benefit of easier deployment.

When you’re building for Web access, you have no control over bandwidth, so force developers to test applications on lowest-common-denominator (another misused term; they’re actually greatest-common-factor) systems. And … remember that testing and manageability thing? Building on Webware doesn’t eliminate the need for any of that.

When you’re building (or buying) for deployment on your intranet, don’t forget: Fatten up that network.

Putting the end-user in the middle doesn’t mean you want the data center to seem remote.

Having seen The Phantom Menace over the weekend, our admin, formerly of the real estate persuasion, dropped by to chat. The climactic scene, she said, told her she’d been working for us too long. (Warning: If you haven’t seen it yet, skip ahead, because I’m about to reveal the ending.) After Anakin Skywalker blew up the Trade Federation’s spaceship in the nick o’ time, thereby deactivating the entire ‘droid army (a plot twist that caught yours truly completely off-guard), her only thought was “server’s down.”

“Hmmm,” I thought to myself. “I’m sure glad the Trade Federation’s CIO fell for the thin-client hype when designing the ‘droid army. And I just knew fat-network architecture came from the dark side of the Force!”

Over lunch the same day I talked to another colleague who’s a yachtsman (OK, he owns a boat). He told me about a guy who sells gas to everyone in his marina. The guy had to hire an assistant to pump the gas. Why? Seems his company installed a new computer system built on a fat-network architecture, with results as predictable as The Phantom Menace’s ending: Each transaction now takes several minutes to complete. Until he hired the assistant, the line of boats grew long and their owners became increasingly impatient.

Want to bet that with a fatter client, response time would have been good enough to avoid the cost of that assistant?

Nearly everything you read about thin clients is bogus. If you want to keep things straight, keep a single fact in mind and you won’t go wrong: “clients” and “servers” are software processes, not hardware devices. Clients request services, servers provide them. The devices we call servers are really computers running operating systems designed to host server processes.
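
If it helps to see that in code, here’s a bare-bones sketch with invented names: both halves are ordinary software processes, and in this demo both run on the same machine. The one that listens and answers is the server; the one that asks is the client.

```java
// A bare-bones sketch with invented names: "client" and "server" are roles
// played by software processes, and here both run on the same machine.
import java.io.*;
import java.net.*;

public class EchoDemo {

    // The server process: it provides a service (echoing a line of text).
    static void runServer(int port) throws IOException {
        try (ServerSocket listener = new ServerSocket(port);
             Socket conn = listener.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
             PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
            out.println("echo: " + in.readLine());
        }
    }

    // The client process: it requests the service.
    static String runClient(String host, int port, String message) throws IOException {
        try (Socket conn = new Socket(host, port);
             PrintWriter out = new PrintWriter(conn.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            out.println(message);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        new Thread(() -> {
            try { runServer(9090); } catch (IOException ignored) { }
        }).start();
        Thread.sleep(200);                                // crude wait for the listener; demo only
        System.out.println(runClient("localhost", 9090, "hello"));
    }
}
```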

Remember that clients and servers are software, and you won’t write monolithic applications, install them on Web servers so they’re accessible through a browser, and congratulate yourself on implementing a thin-client solution even though your business logic is inaccessible to anything but that browser front end.

You also won’t fall for the “software on the desktop is bad; software on the server is good” claptrap that led to the hiring of the gas-pumping assistant and the fall of the ‘droid army. Read any article promoting this philosophy and you’ll find a common design priority: Minimizing Total Cost of Ownership (TCO).

As an IS manager you generally won’t design software, but you will establish priorities for those who do. Is your most important one minimizing cost, or is it maximizing end-user effectiveness? Remember, form follows function, so when your software architects design to minimize cost you’ll get a very different result than when they design to maximize end-user effectiveness.

Want to maximize end-user effectiveness? Isolate business logic and integration logic into modules that are separate from presentation logic and callable whether they’re installed on a local hard drive or on a network server. That way, software design doesn’t dictate run-time deployment.
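
Here’s one hedged way to sketch that separation; the OrderRules interface and the rules.location property are invented for illustration. Presentation code programs against an interface, and a run-time setting, not the design, decides whether the module behind it lives on the local disk or across the network.

```java
// A minimal sketch: the OrderRules interface and the rules.location property
// are invented for illustration, not taken from any product.
import java.util.function.Supplier;

public class DeploymentChoiceDemo {

    // Business/integration logic, isolated from presentation.
    interface OrderRules {
        boolean approve(double orderTotal);
    }

    // Deployed on the local hard drive.
    static class LocalOrderRules implements OrderRules {
        public boolean approve(double orderTotal) { return orderTotal <= 10_000; }
    }

    // Stand-in for a module reached across the network (RMI, HTTP, and so on).
    static class RemoteOrderRules implements OrderRules {
        public boolean approve(double orderTotal) { return orderTotal <= 10_000; }
    }

    // Presentation code never knows which deployment it got.
    static void checkout(OrderRules rules, double total) {
        System.out.println(rules.approve(total) ? "approved" : "needs review");
    }

    public static void main(String[] args) {
        // Run-time deployment decision, read from configuration rather than baked into the design.
        Supplier<OrderRules> factory =
                "local".equals(System.getProperty("rules.location", "local"))
                        ? LocalOrderRules::new
                        : RemoteOrderRules::new;
        checkout(factory.get(), 8_500);
    }
}
```

Swap the stand-in RemoteOrderRules for a real network proxy and the presentation layer never notices; that’s what it means for design not to dictate run-time deployment.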

Want to maximize end-user effectiveness? Since response time is probably the single most important part of “user-friendly,” install each module based on frequency of use rather than on IS convenience. Put the most frequently used modules on local hard drives, moderately used modules on centrally managed LAN-attached servers, and infrequently used modules on centrally located WAN-attached servers.
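
As a rough sketch of that placement rule (the usage metric and the thresholds are invented; tune them to your own shop), the decision reduces to a simple lookup:

```java
// A rough sketch only: usesPerDay and the thresholds are invented; tune them to your shop.
public class ModulePlacement {

    enum Location { LOCAL_DISK, LAN_SERVER, WAN_SERVER }

    static Location placementFor(double usesPerDay) {
        if (usesPerDay >= 10) return Location.LOCAL_DISK; // high use: fastest, cheapest storage
        if (usesPerDay >= 1)  return Location.LAN_SERVER; // moderate use: centrally managed LAN
        return Location.WAN_SERVER;                       // occasional use: central WAN server
    }

    public static void main(String[] args) {
        System.out.println("order entry: " + placementFor(50));
        System.out.println("month-end close: " + placementFor(0.05));
    }
}
```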

Want to maximize end-user effectiveness? Design solutions so that for critical applications desktops can continue to function (albeit in “degraded mode”) when the network or a server is down.
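
A minimal sketch of degraded mode, assuming a hypothetical pricing lookup (the method names and fallback behavior are mine, not a standard): try the server first, and when the network or the server is down, answer from a local cache and queue the work to reconcile later.

```java
// A sketch under stated assumptions: the method names and the fallback
// behavior are invented for illustration.
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;

public class DegradedModeDemo {

    static final Deque<String> pendingSync = new ArrayDeque<>(); // replayed when the server returns

    // Stand-in for the real server call; here it simulates an outage.
    static double priceFromServer(String product) throws IOException {
        throw new IOException("server unreachable");
    }

    // Last known good data kept on the local hard drive.
    static double priceFromLocalCache(String product) {
        return 9.95;
    }

    static double lookupPrice(String product) {
        try {
            return priceFromServer(product);        // normal mode
        } catch (IOException serverDown) {
            pendingSync.add(product);               // queue the work to reconcile later
            return priceFromLocalCache(product);    // degraded mode, but the desktop keeps working
        }
    }

    public static void main(String[] args) {
        System.out.println("price: " + lookupPrice("widgets"));
        System.out.println("queued for sync: " + pendingSync.size());
    }
}
```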

Keep in mind that with fat-network designs, when a server crashes everyone is down. If an application relies on three physical servers, each delivering 99% reliability, the application will only be up about 97% of the time (0.99 × 0.99 × 0.99 ≈ 0.97), which works out to more than an hour of downtime in a 40-hour work week, so designing for server unavailability is important. (In contrast, when a desktop crashes a single individual is down … and a three-finger salute will fix things.)
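
The arithmetic behind that 97% figure generalizes: when every one of n servers must be up and failures are independent, the availabilities multiply, so each server you add to the critical path chips away at uptime.

```latex
% Independent failures assumed; all n servers must be up for the application to be up.
A_{\mathrm{app}} = \prod_{i=1}^{n} A_i
                 = 0.99 \times 0.99 \times 0.99 \approx 0.970
\quad\Rightarrow\quad
\text{downtime} \approx 3\% \approx 1.2 \text{ hours per 40-hour week}
```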

Remember: Clients should always be thin. Whether you deploy a fat desktop or a fat network is an entirely separate question.