Imitation, I’m told, is the sincerest form of flattery. Given the occasional criticism of Gartner in this space over the years, I was especially flattered by Gartner Senior Analyst Nikos Drakos. At the Gartner Europe Spring Symposium/ITxpo 2001, Drakos made use of the technology life cycle (Hype, Disillusionment, Application) first described in this column back in May 1996. Referring to peer-to-peer computing, Drakos reportedly said that it’s just approaching the top of the hype cycle, but that after passing through the inevitable subsequent phase of disillusionment (which will take until 2004), it will have great potential in the business environment.
Attribution would have been nice, but you can’t have everything. In any event, flattering as it was, Drakos’ prediction is both shaky and funny.
The funny part requires historical perspective. Remember the early days of client/server computing? Part of its attraction was its ability to move processing to the desktop. That way, as the number of end-users increased, so did the available computing power.
Our understanding was pretty fuzzy back then, of course. Distributed processing relates to the platform layer of technical architecture. Client/server is an application-layer issue, and may or may not relocate processing tasks. But that’s okay — a lot of people in IT are still pretty fuzzy about this distinction. Which explains at least some of the confusion connected to advocacy of what’s usually called “thin-client” (really, fat-network) computing architectures.
The logic behind fat-network computing is the allegedly high cost of distributing applications to the desktop and managing them there. It “just makes sense” to centralize everything on bigger servers that are professionally managed in the data center, according to this line of thinking, although why that makes more sense than taking advantage of desktop computing cycles to run the distributable parts of an application is rarely articulated.
Which is okay, because some thin-client architectures do run the distributable parts of an application on the desktop: they’re downloaded on demand from a server in the form of Java applets. Or the applets can load from a cache on the local hard drive and are downloaded only when a new version is available on the server.
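For the curious, the cache-or-download logic is nothing exotic. Here’s a rough sketch in Java of how a client might handle it (the class name, the version.txt file, and the app.jar URL are all invented for illustration, not taken from any particular product):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    // Hypothetical sketch, not the internals of any real thin-client product.
    public class CachedCodeLoader {
        // Load application code from a local cache, downloading it from the
        // server only when the cache doesn't already hold the current version.
        public static ClassLoader load(String serverBase, Path cacheDir) throws IOException {
            // Ask the server what the current version is.
            String remoteVersion;
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(serverBase + "/version.txt").openStream()))) {
                remoteVersion = in.readLine().trim();
            }

            // Cache miss (or stale cache): fetch the new version once.
            Path cachedJar = cacheDir.resolve("app-" + remoteVersion + ".jar");
            if (!Files.exists(cachedJar)) {
                Files.createDirectories(cacheDir);
                try (InputStream jar = new URL(serverBase + "/app.jar").openStream()) {
                    Files.copy(jar, cachedJar, StandardCopyOption.REPLACE_EXISTING);
                }
            }

            // Either way, the code ends up running from the desktop's hard drive.
            return URLClassLoader.newInstance(new URL[] { cachedJar.toUri().toURL() });
        }
    }

Whatever you call it, the pattern is the same: ask the server for the current version, download it only if the local cache doesn’t have it, and run it on the desktop.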
Which apparently is very different from Windows automatically downloading updated versions of DLLs when new versions are available on a server — that’s a fat-client technique. I guess the client is fat when software is installed instead of cached. It’s kind of hard to tell.
And it just got harder, because the very same pundits who explained why fat-network computing “just made sense” while client/server did not are now explaining that peer-to-peer computing is in your future. Why? Because it makes use of all that wasted computing power available on the desktop, of course! It just makes sense. Maybe it does, but it sure is amazing how pundits who were sneering at “fat clients” just a year ago (and probably still are) now extol the virtues of what is just another version of fat-client computing.
Napster made peer-to-peer computing fashionable again. Never mind that most Napster downloads came from a small number of big honkin’ machines on the Napster network. Those big honkin’ machines weren’t labeled “server” on the network diagram, after all, and there are a lot of pundits in this industry who are on constant alert for the Next Big Thing.
I say, don’t wait until 2004. If peer-to-peer has such great business potential, take advantage of it right now. Make your end-users’ hard drives shareable.
Okay, maybe that isn’t peer-to-peer’s potential. Maybe it’s making use of all those “wasted” computing cycles on the desktop. That has nothing to do with what Napster was about, of course, but it does have something to do with a screen saver the folks at SETI are using to distribute some of their processing to volunteers around the Internet.
Sound attractive? Maybe, but the major bottlenecks of most business applications are I/O and running the user interface, not calculations. So let’s use peer-to-peer for those. For starters, we can, with sophisticated DBMS software, distribute our terabyte databases across the hard drives of all of our desktop PCs.
Or maybe not. Because what happens when employees turn off their desktops before leaving for the night?
That leaves the user interface. Great idea! It’s so great, in fact, that it’s already been done. We moved the user interface to the desktop in the earliest days of client/server computing.