“After the temple was destroyed, prophecy was given to fools.” – The Talmud, brought to my attention by Dov Trietsch
Carr-toonish engineering
If someone does something that’s patently ridiculous, but manages to draw enough attention that it generates a lot of discussion, has that person performed a valuable service or just wasted our time?
But enough about Paris Hilton, Kiefer Sutherland and Lindsay Lohan. In our own industry we can ask a similar question about Nicholas Carr, who, as mentioned last week, has predicted that the “technical aspect of IT” (which in Carr’s world is IT infrastructure management) will move to the Internet, which will become the CPU-cycle-provisioning equivalent of an electrical power plant.
With the technical part gone, handling the non-technical remainder (I bet you didn’t know application design, development and integration are non-technical undertakings) won’t require a separate in-house IT organization any more. Instead, these functions will become a mixture of Software as a Service (SaaS) applications and business-department-developed code that runs on the utility computing infrastructure.
Sometimes, fault-finding isn’t the best way to evaluate a new idea. A superior alternative is to be helpful and positive — to figure out how to make it work (for a historical example, see “Inhaling network computers,” KJR, 1/13/1997).
What will be required for IT to go away?
First of all, let’s assume Carr isn’t simply “predicting” the success of IT infrastructure outsourcing, as I contended last week — that he’s serious about utility computing in the electrical generation sense.
In the deregulated electrical power industry, generation companies pump 60-cycle current onto the grid, metering how much they provide. End-customers draw electricity off the grid. Their local provider acts as a broker, buying current from the low-cost provider and metering consumption.
This is, by the way, how you can buy wind-generated electricity if you prefer. You don’t really get the exact electrons pushed onto the grid by a wind farm. You simply instruct your broker to buy enough of them to satisfy your consumption. The rest is just balancing the books.
For utility computing to work, we would need a similar metering and billing infrastructure. We’d need a lot more, too. For example:
- Web 3.0: We will need a grid computing architecture that runs applications wherever CPU cycles happen to be cheapest (with suitable metering and billing — see above, and the broker sketch that follows this list).
- Virtualization: This will have to be perfect, so that the CPU cycles you buy can run your applications no matter what operating system they were written for.
- Quality of Service: Different applications need different levels of performance. Buyers will need a way to specify how fast their cycles have to be, and without the help of those pesky engineers who would be housed in an IT department if it hadn’t been disbanded.
- AI-based data design: With professional, centralized IT evaporated into the business, which will be building whatever custom applications remain, there will no longer be an organizational home for data designers. The only alternative is technology smart enough to handle this little engineering chore.
- Automated, self-tuning pre-fetch: Last week’s column demonstrated the impact of latency in the communications channel on linked SaaS-supplied systems — the speed of light slows table joins to a crawl. This is fixable, so long as systems are smart enough to automatically pre-fetch records before needing them (see the sketch after this list). Every SaaS vendor will have to provide this facility automatically, since businesses will no longer employ engineers able to manually performance-tune system linkages.
- New security paradigm: Sorry about the use of “paradigm.” It fits here. You’ll be running all of your applications on public infrastructure, on the wrong side of the firewall (which — good news! — you’ll no longer need). Think it will be hard for someone with ingenuity to plant a Trojan that monitors the cycles and siphons off every business secret you no longer have?
- AI-based data warehouse design: Let’s assume for the sake of argument that the Carr-envisioned future happens as he predicts. You will still want to mine all of your data, in spite of it being schmeared out across the SaaS landscape. I see two choices. The first, almost unimaginable, is an efficient, distributed, virtual data warehouse, reminiscent of the sea shell collection Steven Wright keeps scattered on beaches all over the world. The alternative is the same data warehouse technology we’ve grown accustomed to. Except we don’t have IT anymore, so we’ll need an AI design technology to pull it together for us, performance optimized and ready for analysis.
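To make the metering-and-billing point concrete, here’s a minimal sketch of the broker logic, in Python, with entirely hypothetical provider names and prices. The broker does for cycles what your electricity broker does for electrons: send the work wherever it’s cheapest right now, record a metered charge, and balance the books at the end of the month.

```python
from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    price_per_cpu_hour: float  # hypothetical spot price, in dollars

@dataclass
class Meter:
    """Records consumption so the broker can bill you later."""
    charges: list = field(default_factory=list)

    def record(self, provider: Provider, cpu_hours: float) -> None:
        self.charges.append((provider.name, cpu_hours,
                             cpu_hours * provider.price_per_cpu_hour))

    def total(self) -> float:
        return sum(cost for _, _, cost in self.charges)

def dispatch(job_cpu_hours: float, providers: list, meter: Meter) -> str:
    """Run the job wherever CPU cycles happen to be cheapest right now."""
    cheapest = min(providers, key=lambda p: p.price_per_cpu_hour)
    meter.record(cheapest, job_cpu_hours)
    return cheapest.name

# A hypothetical market: three generation companies pumping cycles onto the grid.
grid = [Provider("ACME Cycles", 0.12),
        Provider("CheapCPU", 0.09),
        Provider("WindWatts Compute", 0.15)]
meter = Meter()
dispatch(8.0, grid, meter)   # lands on CheapCPU
print(f"This month's bill: ${meter.total():.2f}")
```

The hard part, of course, isn’t the twenty lines above; it’s getting every vendor on the grid to agree on what a metered CPU-hour even means.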
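The latency arithmetic behind the pre-fetch bullet is easy to check. At, say, 50 milliseconds per round trip between two SaaS vendors, a join that fetches 10,000 rows one at a time burns more than eight minutes in the network alone; pre-fetching the same rows in a single request pays that latency once. Here’s a minimal sketch in Python; the vendor API is hypothetical, so the round trip is simulated with a sleep:

```python
import time

ROUND_TRIP = 0.05  # assume a 50 ms WAN round trip between two SaaS vendors

def fetch_one(key):
    """Naive linkage: one network round trip per joined row."""
    time.sleep(ROUND_TRIP)          # simulated latency
    return {"key": key, "value": f"row-{key}"}

def fetch_batch(keys):
    """Pre-fetch: one round trip returns every row we know we'll need."""
    time.sleep(ROUND_TRIP)          # still pay the latency, but only once
    return [{"key": k, "value": f"row-{k}"} for k in keys]

keys = list(range(100))

start = time.perf_counter()
naive = [fetch_one(k) for k in keys]       # ~100 x 50 ms = ~5 s
print(f"row-at-a-time join: {time.perf_counter() - start:.1f} s")

start = time.perf_counter()
prefetched = fetch_batch(keys)             # ~1 x 50 ms
print(f"pre-fetched join:   {time.perf_counter() - start:.2f} s")
```

Making that batching happen automatically, self-tuned, with no engineer deciding which rows will be needed, is the part Carr’s future takes on faith.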
Look far enough into the future and all of this is possible. Heck — look far enough and broadcast power is possible.
Now, look ahead just far enough that you’re at the end of any useful business planning horizon. You’ll reach a very different conclusion.