Let’s see if we can pull this all together.

In recent weeks we’ve talked about teams and team dynamics. We’ve talked about the too-often perverse relationship between knowledge and certainty. We’ve talked about culture and how its self-reinforcing nature can result in appalling behavior just as it can help bring out the best in people.

Teams, as described here from time to time, are groups of people who trust each other and are aligned to a common purpose.

Toss in some additional reflection and discussions with various correspondents over the past few weeks and it’s clear that while trust and alignment are important team-ness ingredients, they aren’t the whole recipe.

Another ingredient is interdependence. In the world of sports, members of baseball, football, and basketball teams depend on each other move by move to get the job done. Golfers competing in the Ryder Cup, in contrast, do root for each other, but don’t nudge the ball when nobody’s looking. Likewise tennis players in the Davis Cup, who presumably don’t use mirrors to try to blind members of opposing teams from the stands.

The world of business can be even more extreme: Many companies pit members of the so-called “sales team” against each other in the quest to receive the sales incentives that only go to the top 10% of producers.

And some business leaders still buy into the old MBO (management by objectives) method of setting management goals, ensuring that each manager will do whatever it takes to achieve his or her objectives, even when that comes at the expense of other members of the “management team” trying to achieve theirs.

Does this mean the “sales team” and “management team” are only teams in scare quotes?

Not entirely, because of another ingredient of team-ness. That’s affinity – a shared sense of identity that’s independent of both trust and purpose. Independent, that is, except for a desire to beat other, competing groups.

Which gets us to culture. Shared identity can be and often is independent of trust and purpose. It’s never independent of culture.

Here in KJR-land our working definition of culture is how we do things around here. It’s the informal, unwritten rules the affinity group … the tribe … enforces far more strictly and ruthlessly than HR enforces any of what’s spelled out in the company’s policies and procedures.

Identity politics … tribalism, that is … isn’t limited to politics.

Because if it were, how would you explain soccer riots?

It’s time to connect all this theory to your workaday responsibilities as an IT manager.

As the golden rule of engineering is form follows function, start with what you want. I imagine that in most situations, most of the time, you want the men and women who work in your organization to accomplish important results.

Most of the time, they’ll accomplish those results more effectively through teamwork than by working in isolation. So you need to encourage team-building in the trust-and-alignment sense.

But like it or not, achieving trust and alignment is hard work that requires constant, steady leadership. That’s in contrast to achieving an us vs them tribal sense of identity, complete with unwritten rules governing how we do things around here. You’ll get that in spite of your best efforts to prevent it.

What you can do, sometimes, if you’re lucky and the wind is blowing in the right direction, is to channel your employees’ natural tendency to form up into rival tribes, so tribal and team identities coincide, or at least overlap heavily.

It isn’t a perfect solution by any means. Yes, project teams that have a strong sense of tribal identity will work harder and collaborate better internally than employees assigned to a project whose sense of team identity is limited to trust and alignment to a common purpose.

But that same sense of tribal identity will make the team less likely to collaborate with other teams they think of as the them to their own us.

Is there anything you can do to limit the extent to which the tribes take over?

There is. You can keep projects short, so project-based tribes disband before their tribalism starts to dominate the cultural landscape. And, you can populate new project teams cross-functionally, redefining us and them frequently enough to break down tribal animosities faster than new ones can form.

Or, you can do what most managers seem to do: Hope for the best, complementing hope with an occasional lecture about how we’re all on the same team.

That’ll work well.

“Haven’t you read Amazon’s and Microsoft’s recent press releases on this?”

This was in response to a challenge to the “save money” argument for migrating applications to the public cloud.

I understand just as well as the next feller that press releases serve a valid purpose (what’s the feminine of “feller,” anyway?). When a company has something important to announce, a press release is the longer-than-140-characters explanation of what’s going on.

That said, there’s a difference between facts (“We’re changing our pricing model”) and smoke (“You’ll save big money”). I say smoke because:

First and foremost, Fortune 500-size corporations that can’t negotiate pricing for servers and storage comparable to what Amazon and Microsoft pay for the gear they use to run AWS and Azure just aren’t trying very hard. They have access to the same technology management tools, practices, and talent, too.

Second: Smart companies are building their new applications using cloud-native architectures — SOA and microservices orientation; multitenancy; DevOps-friendly tool chains that automate everything other than actual coding, and so forth (“and so forth” being ManagementSpeak for “I’m pretty sure there’s more to know, but I don’t know it myself”).
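Since a lot hinges on what “cloud-native” means in practice, here’s a minimal, hypothetical sketch of just the multitenancy piece (the names and data are invented for illustration, not taken from any real system): a cloud-native service scopes every query to a tenant, while a traditional data-center application typically assumes it has the database to itself.

```python
# Minimal, hypothetical sketch of multitenancy. A cloud-native service
# scopes every read and write to a tenant; a single-tenant application
# built for the data center usually assumes it owns the whole database.

from dataclasses import dataclass

@dataclass
class Order:
    tenant_id: str
    order_id: str
    total: float

# Invented sample data standing in for a shared, tenant-keyed data store.
ORDERS = [
    Order("acme", "A-1", 120.0),
    Order("acme", "A-2", 75.5),
    Order("globex", "G-1", 300.0),
]

def orders_for_tenant(tenant_id: str) -> list[Order]:
    # Every query carries the tenant key -- the design decision that lets
    # one deployment serve many customers side by side.
    return [o for o in ORDERS if o.tenant_id == tenant_id]

if __name__ == "__main__":
    print(orders_for_tenant("acme"))  # only acme's orders, never globex's
```

Retrofitting that tenant key into an application that never had it is rearchitecting, not migrating, which is where the next distinction comes in.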

But moving applications that were built on cloud-native architectures, and so shift easily to public or hybrid clouds, is quite different from migrating applications designed for data-center deployment. And it’s the latter that are supposed to save all the money.

Sure, applications built from non-SOA, non-microservices, non-multitenant designs can probably be recompiled in an IaaS environment. But once they’ve been recompiled they’ll probably need significant investments in performance engineering to get them to the point where they aren’t unacceptably sluggish.

Oh, one more thing: moving an application to the cloud means stretching whatever technologies are used for application and data integration across the firewall and public network that now separate public-cloud-hosted applications from those that have yet to be migrated.

Based on my admittedly high-level understanding, not all enterprise service buses can sustain high levels of performance when, instead of moving transactions around at wire or backplane speeds, they’re limited to public-network bandwidths and latencies.
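To put some illustrative numbers on the bandwidth-and-latency point, here’s a back-of-the-envelope calculation. The call count and round-trip times below are assumptions I’ve picked to show the shape of the problem, not measurements of any particular service bus or network.

```python
# Back-of-the-envelope arithmetic (illustrative assumptions, not measurements):
# why integration that was fine inside the data center can crawl once each
# hop crosses a firewall and a public network.

calls_per_transaction = 40      # a "chatty" integration: many small calls (assumed)
lan_round_trip_ms = 0.5         # same-data-center round trip (assumed)
wan_round_trip_ms = 40.0        # public-network round trip (assumed)

lan_overhead_ms = calls_per_transaction * lan_round_trip_ms
wan_overhead_ms = calls_per_transaction * wan_round_trip_ms

print(f"Integration overhead inside the data center: {lan_overhead_ms:.0f} ms per transaction")
print(f"Integration overhead across the public network: {wan_overhead_ms:.0f} ms per transaction")
# Roughly 20 ms becomes roughly 1,600 ms -- an 80x increase before any
# bandwidth limits or firewall processing enter the picture.
```

The exact numbers don’t matter; the multiplier does, and it gets worse as the integrations get chattier.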

Complicating integration performance even more is the need to integrate applications hosted in multiple, geographically dispersed data centers, as would be the case when, for example, a company moves CRM to Salesforce, internal development to Azure, and financials and other ERP applications to Oracle Cloud.

For many IT organizations, integration is enterprise architecture’s orphan stepchild. Lots of companies have yet to replace their bespoke interface tangle with any engineered interface architecture.

So lifting and shifting isn’t as simple as lifting and then shifting, any more than moving a house is as simple as jacking it up, putting it on a truck, and hauling it to the new address. Although integration might not be as fraught as the house now lying at the bottom of Lake Superior.

Which isn’t to say there’s no legitimate reason to migrate to the cloud. (Non-double-negative version: There are circumstances for which migrating applications to the cloud makes a great deal of sense.) Here are three circumstances I’m personally confident of, and I’d be delighted to hear of more:

> Startups and small businesses that lack the negotiating power to drive deep technology discounts, and that benefit from needing a much smaller permanent, full-time IT workforce.

> Applications with wide swings in workload, whether from seasonal peaks, event-driven spikes, or other drivers, with the result that capacity has to be added and shed rapidly (a back-of-the-envelope sketch of the economics follows this list).

> A mobile workforce or user base that needs access to the application in question from a large number of uncontrolled locations.
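Here’s the back-of-the-envelope sketch promised in the second bullet. Every number in it is an assumption chosen for illustration; plug in your own figures and the comparison can easily flip.

```python
# Back-of-the-envelope comparison (all numbers are illustrative assumptions):
# why workloads with wide swings favor pay-per-use capacity over owning
# enough gear to handle the peak.

peak_servers = 100                      # capacity needed during the spike (assumed)
average_servers = 15                    # capacity actually used over the year (assumed)
owned_cost_per_server_year = 3_000.0    # fully loaded annual cost per owned server (assumed)
cloud_cost_per_server_year = 4_500.0    # equivalent on-demand cost, higher per unit (assumed)

own_for_the_peak = peak_servers * owned_cost_per_server_year
pay_for_average_use = average_servers * cloud_cost_per_server_year

print(f"Own enough for the peak:  ${own_for_the_peak:,.0f} per year")
print(f"Pay only for average use: ${pay_for_average_use:,.0f} per year")
# $300,000 vs. $67,500 under these assumptions. The steadier the workload,
# the faster that advantage evaporates.
```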

At least, this was the situation the last time I took a serious look at it.

But this isn’t a column about the cloud. It’s about the same subject as last week’s KJR: How to avoid making decisions based on belief, prejudice, and denial. The opening anecdote shows how easy it is to succumb to confirmation bias: If you want to believe, even vendor press releases count as evidence.

In that vein, here’s a question to ponder: Why is it that, after centuries of success for the scientific method, most people (including many scientists) so often operate from positions of high certainty and low evidence?

The answer is, I think, that uncertainty causes anxiety. And people don’t like feeling anxious.

Unearned certainty relieves that anxiety right away. Collecting and evaluating evidence, by contrast, is hard and often tedious work. Not a particularly popular formula.

Isaac Asimov once started a Q&A session by saying, “I can answer any question, so long as you’ll accept ‘I don’t know’ as an answer.”

If Dr. Asimov was comfortable not knowing stuff, the rest of us should be at least as comfortable.

I think.