I wuz hacked! Call the police!

Oh, wait. There aren’t any.

It’s funny, in a not-at-all-amusing sort of way, how we’re able to maintain mental models of the world we know aren’t true until someone rubs our noses in the disconnect. The hack job on the KJR archives is a perfect example.

My mental model of the world: Someone commits a crime. The victim calls the police, who investigate and, more often than not, catch and prosecute the perpetrator.

How it really works: If we want enough police for my mental model to work, we’d better open our checkbooks wide to our local taxing authorities, because whether the subject is stolen bicycles or hacked blogs, the local gendarmes aren’t going to do anything more than take our names and express their sympathy unless we hire a lot more of them.

KJR is hosted by a cloud vendor. It provides a nice little toolkit for building websites, along with the tools needed to install a WordPress instance, which means it qualifies as a platform-as-a-service (PaaS) vendor.

And as I always knew (only this time it was personal), putting your infrastructure in the cloud doesn’t make it someone else’s problem. I had to figure out how to clean up the mess, how to secure my archives better, and how to get Google to take down its warnings.

* * *

Whether you’re a cloud evangelist, skeptic, or somewhere in between, read “Why PaaS? Dev, test, staging, no waiting” (Andrew Oliver, InfoWorld, 5/30/2013).

The business case for the cloud has been foggy since the technology’s inception. It’s rested on “advantages” like cloud spending coming out of the OpEx rather than the CapEx budget. This never made any sense, and it makes especially little sense at a time when, in the aggregate, the business community is sitting on large cash reserves because most companies don’t need more employees and can’t figure out any better place to put the money than stock buy-backs or the bond market.

Nor is cloud computing necessarily cheaper than its data-center equivalents. It’s more flexible, but not cheaper: Cloud computing lets you add and shed capacity as you need it.

What Oliver adds to the discussion is a special dimension of capacity management, namely that for most businesses, the capacity needed for development and (especially) test environments is occasional rather than fixed.

Also: In traditional computing environments, managing development and test environments is, shall we say, a non-trivial task.

But with cloud providers you can spin up copies of your production environment quickly and relatively affordably, because you pay for the capacity only when you need it.
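To put rough numbers on it, here’s the arithmetic in a few lines of Python. Every figure is made up for illustration; substitute your own provider’s price list:

```python
# Back-of-the-envelope comparison: dedicated vs. on-demand test capacity.
# All figures below are hypothetical, for illustration only.

dedicated_monthly_cost = 2_000.00   # hypothetical: servers, power, admin time
on_demand_hourly_rate = 3.50        # hypothetical: per-hour price of a prod-sized copy
test_hours_per_month = 80           # hypothetical: testing is occasional, not constant

on_demand_monthly_cost = on_demand_hourly_rate * test_hours_per_month

print(f"Dedicated test environment:  ${dedicated_monthly_cost:,.2f}/month")
print(f"On-demand test environment:  ${on_demand_monthly_cost:,.2f}/month")
# The gap closes as test_hours_per_month approaches a full month (about 730
# hours); occasional use is what makes the rental model pay off.
```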

And, done right, you don’t have to migrate test to production – just switch a DNS entry and test becomes production. Then you spin down your old production environment and go on your merry way.
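Here’s one way that cutover might look, as a minimal sketch. I’m assuming AWS Route 53 and the boto3 library here; the zone ID and hostnames are placeholders, not anything KJR actually runs:

```python
# Promote a tested environment to production by repointing a DNS record.
# Minimal sketch assuming AWS Route 53 via boto3; identifiers are placeholders.
import boto3

route53 = boto3.client("route53")

def promote_test_to_production(zone_id: str, record_name: str, new_target: str) -> None:
    """Point the production hostname at the environment that just passed testing."""
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Comment": "Promote tested environment to production",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "CNAME",
                    "TTL": 60,  # keep the TTL short so the switch propagates quickly
                    "ResourceRecords": [{"Value": new_target}],
                },
            }],
        },
    )

# Hypothetical usage:
# promote_test_to_production("Z0EXAMPLE", "www.example.com.", "tested-env.example.net.")
```

One practical note: lower the record’s TTL well before the cutover. Otherwise clients keep resolving to the old environment until their cached entries expire.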

What’s particularly likable about this aspect of the cloud business case is that it fits the KJR model of the world: Most of the time, what matters aren’t lofty, vague, strategic visions. What makes one business successful while another fails depends, more often than not, on the basic blocking and tackling. And in IT there isn’t a lot of blocking and tackling that’s more basic than change control.

* * *

Back when the earth was young and I had more of a use for barbers, there was this innovative product called the personal computer. It changed things. A lot.

But something it didn’t change at all was the business need for mainframe computers. What PCs did for businesses was to take on tasks mainframes had a hard time scaling down for, like electronic spreadsheets and word processing.

But PCs also opened the door for distributed computing, which did reduce the business need for mainframe computers. The progression was PCs, to PC-oriented LANs, to LANs connecting PCs to servers and minicomputers, to “the network is the computer.”

I’m starting to wonder whether we have a similar dynamic going on right now with mobile computing and the cloud.

See, mobile computing, properly understood, is more than “we need to support smartphones and tablets.” It’s a matter of making a company’s computing resources available to everyone who needs them and should have access to them, wherever they are and on whatever device they’re using, in a presentation well-suited to that device.

So I wonder whether mobile computing is opening the door for the cloud in much the way the PC opened the door for distributed systems, because one of the cloud’s virtues is that it can deliver your applications wherever they’re needed.

Yes, an analogy isn’t the same thing as being the same thing (creds to The Economist), but exploring analogies can provide useful insights.

Think this is one of them?

Xcel Energy has asked the Minnesota Public Utilities Commission to approve a 10% rate increase. This matters to everyone interested in cloud computing (I think). Here’s why: A major reason for the request is falling demand for electricity.

The connection isn’t clear?

Early in the days of cloud computing, Nicholas Carr’s ridiculous-but-nonetheless-highly-influential The Big Switch: Rewiring the World, from Edison to Google (W. W. Norton, 2008) proposed strong parallels between the evolution of electrical power generation and the coming evolution of information technology provisioning.

While it was mostly ridiculous (see “Carr-ied away,” Keep the Joint Running, 2/4/2008), power generation and computing-over-the-internet do have one common characteristic: when lots of customers are able to share the use of large, centrally owned, commodity resources, economies of scale drive down costs.

It’s a great theory. It rests, however, on a number of assumptions, some of which have already been subjected to real-world testing by electrical utilities. For example:

Assumption #1 — providers can get infrastructure for less: Electrical utilities can build and operate power plants more cheaply than consumers or businesses. It’s true, except when it isn’t: Some manufacturers, for example, own their own hydroelectric plants because that’s more economical than buying power, and some consumers are installing solar panels on their roofs, providing a significant fraction of their total need for electricity.

It’s the same in the cloud, only more so, because the raw cost of computing infrastructure is so low, and margins are so thin, that most companies can buy the same stuff cloud vendors rely on at pretty much the same price. A similar equation applies to managing it all.

Assumption #2 — uncorrelated demand: Start with scalability and flexibility. Cloud providers invest in fixed costs so as to decrease incremental costs. That’s called scalability — hardly a new concept in IT. When scalability is all IT needs, the economics of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) don’t work, for the reasons covered under Assumption #1.

But in many circumstances, what businesses need isn’t scalability, it’s flexibility — the ability to add and shed capacity as the processing load varies. The reason highly scalable cloud providers can sell flexibility to their customers is that they rely on different customers needing the same resources at different times, averaging each other out. While individually their demand varies, in the aggregate demand is predictable.

This only works, though, when customer demand is uncorrelated — when customers’ individual peaks don’t all arrive at the same time.

But for a lot of companies, variation in demand is very predictable, the result of having seasonal businesses. The holiday season, for example, affects lots of companies exactly the same way. Their computing demand is correlated, very much parallel to what power companies face in the summer, when everyone runs their air conditioners at the same time.
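A toy simulation makes the difference concrete. The distributions below are invented for illustration, not drawn from any provider’s data:

```python
# Compare the peak capacity a provider needs when customer loads are
# uncorrelated vs. correlated. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_customers, n_hours = 1_000, 8_760   # a year of hourly demand

# Uncorrelated: each customer's load varies independently around a mean of 1.0
independent = rng.gamma(shape=4.0, scale=0.25, size=(n_customers, n_hours))

# Correlated: the same individual noise, plus a seasonal swing everyone shares
season = 1.0 + 0.5 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_hours))
correlated = independent * season

for label, loads in (("uncorrelated", independent), ("correlated", correlated)):
    total = loads.sum(axis=0)              # provider-wide demand, hour by hour
    headroom = total.max() / total.mean()  # peak capacity relative to average use
    print(f"{label}: peak/average = {headroom:.2f}")
```

In this made-up portfolio, the uncorrelated customers need only a few percent of headroom above average demand; add the shared seasonal swing and the required headroom balloons to roughly half again the average. That headroom is idle capacity, and idle capacity erodes the economies of scale.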

Except that power companies can handle peaks by buying electricity from each other and from independent generation companies. Cloud providers can’t. They need enough infrastructure to handle correlated peak loads, reducing their economies of scale. How much? The industry is too immature for us to know the answer yet, which brings us to …

Assumption #3 — Growth: Cloud computing doesn’t just shift the cost of infrastructure to providers. It shifts risk as well, namely the risk of excess capacity.

Call it the dark side of scalability: when the incremental cost of processing an increase in volume is small, the incremental savings from processing a decrease in volume are just as small.
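A simple cost model shows just how lopsided this is. The dollar figures are hypothetical:

```python
# Why a drop in volume barely reduces a provider's costs: fixed costs dominate.
# Dollar figures are hypothetical, for illustration only.

fixed_cost = 900_000.00         # hypothetical: data centers, staff, debt service
variable_cost_per_unit = 0.02   # hypothetical: power and bandwidth per unit of work

def monthly_cost(units: int) -> float:
    return fixed_cost + variable_cost_per_unit * units

before, after = 10_000_000, 8_000_000   # a 20% drop in demand
saved = monthly_cost(before) - monthly_cost(after)
print(f"A 20% volume drop saves ${saved:,.0f} on a ${monthly_cost(before):,.0f} "
      f"cost base ({saved / monthly_cost(before):.1%})")
```

In this sketch, losing 20% of the volume trims costs by less than 4%, because the fixed costs don’t budge.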

Welcome to Xcel Energy’s world.

Imagine a cloud provider whose demand starts to fall. Their fixed costs don’t change, just as Xcel still has to maintain its power plants, even when their capacity isn’t needed.

Unlike Xcel, cloud providers don’t need a PUC’s permission to raise their rates. They need the marketplace’s permission.

It’s a lose-lose choice. They either lose money by keeping their rates competitive, or enter a death spiral by raising their rates enough to be profitable, leading to customer defections, leading to more excess capacity, leading to a need to raise rates even more.
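If you want to watch the spiral turn, here’s a toy model of the feedback loop. The churn rule and the numbers are invented; only the direction of the dynamics matters:

```python
# Toy model of the rate-increase death spiral. Starting numbers and the churn
# rule are hypothetical; the point is the feedback, not the figures.

fixed_cost = 900_000.00   # hypothetical annual fixed costs
customers = 9_000         # hypothetical base, after demand has already slipped
baseline_price = 90.00    # hypothetical price the market considers competitive

for year in range(1, 6):
    price = fixed_cost / customers  # raise rates enough to cover fixed costs
    churn = max(0.5 * (price / baseline_price - 1.0), 0.0)  # defections grow with the premium
    customers = int(customers * (1.0 - churn))
    print(f"Year {year}: price ${price:,.2f}, customers remaining {customers:,}")
```

Each round of rate increases drives away customers, which spreads the fixed costs across fewer of them, which forces the next increase.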

Even the biggest providers are vulnerable. Maybe more so, because commodity businesses have razor-thin margins to begin with, and the biggest providers will have the biggest infrastructure investments.

So to the extent you migrate critical applications to IaaS or PaaS providers, make sure they’re fully portable. And add the steps needed to move them to a different provider to your business continuity plan.

Just in case.