I don’t get it.

I just read Lucas Carlson’s excellent overview of microservices architecture in InfoWorld. If you want an introduction to the subject you could do far worse, although I confess, it appears microservices architecture violates what I consider to be one of the fundamentals of good architecture. (It’s contest time: The first winning guess, defined as the one I agree with the most, will receive a hearty public virtual handshake from yours truly.)

My concern isn’t about the architectural value of microservices vs its predecessors. It’s that by focusing so much attention on it, IT ignores what it spends most of its time and effort doing.

Microservices, and DevOps, to which it’s tied at the hip, and almost all variants of Agile, to which DevOps is tied at the ankle, and Waterfall, whose deficiencies are what have led to Agile’s popularity, all focus on application development.

WAKE UP!!!!! IT only develops applications when it has no choice. Internal IT mostly buys when it can and only builds when it has to. Knowing how to design, engineer and build microservices won’t help you implement SAP, Salesforce, or Workday, to pick three examples out of a hat. Kanban and Scrum might be a bit more helpful, but not all that much. The reasons range from obvious to abstruse.

On the obvious end of the continuum, when you build your own solutions you have complete control of the application and information architecture. When you buy solutions you have no control over either.

Sure, you can require a microservices foundation in your RFPs. Good luck with that: The best you can successfully insist on is full access to functionality via a RESTful (or SOAPy, or JSON-and-the-Argonauts) API.

Halfway between obvious and abstruse lies the difference in cadence between programming and configuration, and its practical consequences.

Peel away a few layers of any Agile onion and you’ll find a hidden assumption about the ratio between the time and effort needed to specify functionality … to write an average-complexity user story … and the time needed to program and test it. The hidden assumption is that programming takes a lot longer than specification. It’s a valid assumption when you’re writing Angular, or PHP, or Python, or C# code.

It’s less valid when you’re using a COTS package’s built-in configuration tools, which are designed to let you tweak what the package does with maximum efficiency and minimum risk that the tweak will blow up production. The specify-to-build ratio is much closer to 1 than when a team is developing software from scratch, which means Scrum, with its user-story writing and splitting, backlog management, and sprint planning, imposes more overhead than needed.
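To make the ratio argument concrete, here’s a back-of-the-envelope sketch. The hour figures are invented, purely for illustration; the point is how the same process ceremony looks when the “build” shrinks from weeks of coding to an afternoon of configuration.

```python
# Back-of-the-envelope sketch of the specify-to-build ratio argument.
# All hour figures are hypothetical, chosen only to illustrate the point.

def ceremony_share(spec_hours, build_hours, ceremony_hours):
    """Fraction of total effort consumed by process ceremony
    (story writing and splitting, backlog management, sprint planning)."""
    return ceremony_hours / (spec_hours + build_hours + ceremony_hours)

# Custom development: specifying a story takes far less time than building it.
custom = ceremony_share(spec_hours=4, build_hours=40, ceremony_hours=6)

# COTS configuration: the "build" is a few hours of tweaking built-in settings,
# so the same ceremony is a much bigger slice of the total.
config = ceremony_share(spec_hours=4, build_hours=6, ceremony_hours=6)

print(f"Ceremony overhead, custom development: {custom:.0%}")   # ~12%
print(f"Ceremony overhead, COTS configuration: {config:.0%}")   # ~38%
```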

And that ignores the question of whether each affected business area would be more effective adopting the process built into the COTS package than spending any time and effort adapting the package to the processes the area uses at the moment.

At the full-abstruse end of the continuum lies the challenge of systems integration, waiting in the weeds to nail your unwary implementation teams.

To understand the problem, go back to Edgar Codd: his famous “twelve” rules for relational databases (there are thirteen of them; his numbering starts at zero) and the data normalization discipline that grew out of his relational model. Codd’s framework for data normalization is still the touchstone for IT frameworks and methodologies of all kinds, and just about all of them come up short in comparison.

Compare the process we go through to design a relational database with the process we go through to integrate and synchronize the data fields that overlap among the multiple COTS and SaaS packages your average enterprise needs to get everything done that needs to get done.

As a veteran of the software wars explained to me a long time ago, software is just an opinion. Which means that if you have three different packages that manage employee data, you bought three conflicting opinions of what’s important to know about employees and how to represent it.

Which in turn means synchronizing employee data among these packages isn’t as simple as “create a metadata map” sounds when you write the phrase on a PowerPoint slide.
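For instance (and every field name below is invented; nothing comes from any actual HRIS, payroll, or CRM product), here’s roughly what three “opinions” about the same employee can look like, and how far a simple field map actually gets you:

```python
# A minimal sketch of the "three opinions" problem. All field names, codes,
# and structures are made up for illustration; none comes from a real product.

hris_record = {                      # Opinion #1: the HR system of record
    "emp_id": "E-1001",
    "legal_name": "Patricia Smith",
    "status": "A",                   # A = active, T = terminated
    "dept_code": 4400,
}

payroll_record = {                   # Opinion #2: the payroll package
    "employee_number": 1001,         # an integer, no "E-" prefix
    "first_name": "Pat",
    "last_name": "Smith",
    "pay_status": "ACTIVE",          # an entirely different enumeration
}

crm_record = {                       # Opinion #3: a CRM that also tracks internal users
    "user": "psmith@example.com",    # no employee ID at all; keyed by email
    "display_name": "Pat Smith",
    "active": True,
}

# A naive "metadata map" handles the easy renames...
field_map = {"employee_number": "emp_id", "pay_status": "status"}

# ...but it can't decide, on its own, whose name is authoritative, how "ACTIVE"
# maps onto "A" and "T", how to match a record keyed by email to one keyed by
# an ID, or what happens when the three systems simply disagree. Those are
# reconciliation decisions (differing opinions), not mapping entries.
```

The field names aren’t the point; the missing reconciliation rules are.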

To the best of my knowledge, nobody has yet created an integration architecture design methodology.

Which shouldn’t be all that surprising: Creating one would mean creating a way to reconcile differing opinions.

And that’s a shame, because a methodology for resolving disagreements would have a lot more uses than just systems integration.

“Haven’t you read Amazon’s and Microsoft’s recent press releases on this?”

This was in response to a challenge to the “save money” argument for migrating applications to the public cloud.

I understand just as well as the next feller that press releases serve a valid purpose (what’s the feminine of “feller” anyway?). When a company has something important to announce, press releases are the more-than-140-character explanation of what’s going on.

Still, there’s a difference between facts (“We’re changing our pricing model”) and smoke (“You’ll save big money”). I say smoke because:

First and foremost, Fortune 500-size corporations that can’t negotiate pricing for servers and storage comparable to what Amazon and Microsoft pay for the gear they use to run AWS and Azure just aren’t trying very hard. They have access to the same technology management tools, practices, and talent, too.

Second: Smart companies are building their new applications using cloud-native architectures — SOA and microservices orientation; multitenancy; DevOps-friendly tool chains that automate everything other than actual coding, and so forth (“and so forth” being ManagementSpeak for “I’m pretty sure there’s more to know, but I don’t know it myself”).

But migrating to cloud-native architectures that shift easily to public or hybrid clouds is quite different from migrating applications designed for data-center deployment. And it’s the latter that are supposed to save all the money.

Sure, applications coded from non-SOA, non-microservices, non-multi-tenant designs can probably be recompiled in an IaaS environment. But once they’ve been recompiled they’ll probably need significant investments in performance engineering to get them to a point where they aren’t unacceptably sluggish.

Oh, one more thing: Moving an application to the cloud means stretching whatever technologies are used for application and data integration through the firewall and public network that now separate public-cloud-hosted applications from those that have yet to be migrated.

Based on my admittedly high-level-only understanding, not all enterprise service buses can maintain high levels of performance when, instead of moving transactions around at wire or backplane speeds, they’re limited to public-network bandwidths and latencies.
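Here’s a rough, purely illustrative bit of arithmetic (no benchmarks, just invented round-trip times and call counts) showing why an integration style that hums along inside the data center can crawl across the public internet:

```python
# Rough arithmetic, not a benchmark: why a chatty interface that was fine on a
# data-center backplane can crawl over a public network. The round-trip times
# and call counts below are invented for illustration.

calls_per_transaction = 50      # a chatty interface making many small calls

lan_round_trip_ms = 0.5         # same rack or same data center
wan_round_trip_ms = 40.0        # across the public internet between clouds

lan_total_ms = calls_per_transaction * lan_round_trip_ms    # 25 ms
wan_total_ms = calls_per_transaction * wan_round_trip_ms    # 2,000 ms

print(f"Inside the data center: {lan_total_ms:.0f} ms of latency per business transaction")
print(f"Over the public network: {wan_total_ms:,.0f} ms of latency per business transaction")
# Same integration logic, same message sizes: two seconds of pure latency added
# per transaction before bandwidth constraints even enter the picture.
```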

Complicating integration performance even more is the need to integrate applications hosted in multiple, geographically dispersed data centers, as would be the case when, for example, a company migrates CRM to Salesforce, internal development to Azure, and financials and other ERP applications to Oracle Cloud.

For many IT organizations, integration is enterprise architecture’s orphan stepchild. Lots of companies have yet to replace their bespoke interface tangle with any engineered interface architecture.

So lifting and shifting isn’t as simple as lifting and then shifting, any more than moving a house is as simple as jacking it up, putting it on a truck, and hauling it to the new address. Although integration might not be as fraught as the house now lying at the bottom of Lake Superior.

Which isn’t to say there’s no legitimate reason to migrate to the cloud. (Non-double-negative version: There are circumstances for which migrating applications to the cloud makes a great deal of sense.) Here are three circumstances I’m personally confident of, and I’d be delighted to hear of more:

> Startups and small entrepreneurial businesses that lack the negotiating power to drive deep technology discounts, and that will benefit from needing a much smaller permanent, full-time IT workforce.

> Applications that have wide swings in workload, whether because of seasonal peaks, event-driven spikes, or other drivers, with the result that capacity has to be added and shed rapidly (see the sketch after this list).

> A mobile workforce or user base that needs access to the application in question from a large number of uncontrolled locations.
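On the wide-swings point, here’s a sketch with invented numbers, meant only to show the shape of the arithmetic, not anyone’s actual pricing:

```python
# Illustrative only: invented numbers showing why wide workload swings favor
# rented, elastic capacity over owned, fixed capacity.

peak_servers = 200        # capacity needed during the seasonal or event-driven spike
average_servers = 30      # capacity needed the rest of the year
spike_weeks = 4           # how long the peak lasts, out of 52 weeks

# Own the gear: you pay for peak capacity all year round.
owned_server_weeks = peak_servers * 52                        # 10,400

# Rent elastically: pay for the average most of the year and the peak briefly.
rented_server_weeks = (average_servers * (52 - spike_weeks)
                       + peak_servers * spike_weeks)          # 2,240

print(f"Owned, fixed capacity: {owned_server_weeks:,} server-weeks per year")
print(f"Rented, elastic capacity: {rented_server_weeks:,} server-weeks per year")
# The per-unit price of rented capacity can be considerably higher than owned
# and still come out ahead when the swing is this wide.
```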

At least, this was the situation the last time I took a serious look at it.

But this isn’t a column about the cloud. It’s about the same subject as last week’s KJR: How to avoid making decisions based on belief, prejudice, and denial. The opening anecdote shows how easy it is to succumb to confirmation bias: If you want to believe, even vendor press releases count as evidence.

In that vein, here’s a question to ponder: Why is it that, after centuries of success for the scientific method, most people (including many scientists) so often operate from positions of high certainty and low evidence?

The answer is, I think, that uncertainty causes anxiety. And people don’t like feeling anxious.

Evidence is the honest cure for uncertainty. But collecting and evaluating evidence is hard and often tedious work — not a particularly popular formula.

Isaac Asimov once started a Q&A session by saying, “I can answer any question, so long as you’ll accept ‘I don’t know’ as an answer.”

If Dr. Asimov was comfortable not knowing stuff, the rest of us should be at least as comfortable.

I think.