If you want a perfect example of the entitlement mentality, look no further than the Las Vegas casinos, where counting cards while playing blackjack gets you ejected from the game.

The casinos consider themselves so entitled to a statistically guaranteed profit that skillful play breaks the rules.

Card counting is risk management, a discipline that divides responses to random, unfortunate events into four categories:

  • Prevention (also known as avoidance), reducing the likelihood that a risk will actually happen.
  • Mitigation, limiting the damage it would do if it does.
  • Insurance, spending now so that if it happens, you’ll recoup your financial loss.
  • Acceptance, hoping the risk doesn’t become real, and taking your lumps if it does.

Card counting supports both prevention and mitigation — prevention because card counters can better predict whether accepting another card will bust their hand and whether the dealer’s play will bust the house’s hand; mitigation because when the cards remaining in the deck tilt the odds toward the house, they bet less, reducing their losses.

The prevent/mitigate/insure/accept framework is just the ticket for risks you’re able to spot. Sadly, though, we human beings just aren’t all that good at spotting them, especially for plans we deeply want to succeed. One of the most important reasons, described in Daniel Kahneman’s must-read book, Thinking, Fast and Slow (seriously … you must read it), is “overconfident optimism” — the tendency most people have to tell themselves and each other persuasive-sounding stories rather than basing decisions on objective evidence.

And so, over and over again, plans fail.

As you’ll recall from last week’s column, Success = aI + bE + cL, where a, b, and c are weighting factors, and I, E, and L stand for idea, execution, and luck. Success comes from some combination of a workable idea, strong execution, and good luck.

But what does “good luck” really mean? It has two components. One is that risks you failed to anticipate didn’t turn into reality — bad things that might have happened didn’t happen, like, for example, everyone on the project team coming down with the flu.

The other is that factors beyond your control that affect your success turned out well … for example, here in Minnesota we had an unseasonably warm and dry winter, which means the state spent far less for both snow removal and heating government offices than it had budgeted. This good luck contributed to a surplus — financial success due entirely to L.

Risk management is the discipline of reducing “c” and increasing “b” — of moving as much of the impact of random chance as possible out of the realm of Luck and into the realm of Execution.
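The reweighting described above can be sketched in a few lines of code. This is purely illustrative: the weights and scores below are invented numbers, not anything from the column, and the point is only to show how shrinking c and growing b changes where an outcome comes from.

```python
# Illustrative sketch of the column's Success = aI + bE + cL formula.
# All weights and scores here are invented for illustration only.

def success(idea, execution, luck, a=1.0, b=1.0, c=1.0):
    """Weighted sum of idea quality (I), execution strength (E), and luck (L)."""
    return a * idea + b * execution + c * luck

# Before risk management: the outcome leans heavily on luck (large c).
before = success(idea=0.7, execution=0.6, luck=0.5, a=1.0, b=1.0, c=2.0)

# After risk management: contingency plans turn random events into handled
# events, shifting weight from luck (c) to execution (b).
after = success(idea=0.7, execution=0.6, luck=0.5, a=1.0, b=2.0, c=1.0)

print(before, after)
```

With these made-up numbers the totals barely move; what changes is that the second plan’s result depends mostly on something the team controls.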

Say, for example, company leadership decides it’s time to take you seriously about building information technology into the company’s products … integrating the technology-supported service dimension of the business into what it sells, supporting higher prices and margins.

You work with one of the company’s product managers, putting specifics behind the concept, building it all out, and promoting the daylights out of the result (if I published KJR on YouTube we’d cut to a montage of meetings, people sketching on whiteboards, engineers sitting in front of CAD screens, and marketeers sketching on storyboards).

The result is a fiasco. Why? Because while nobody realized it until after the product launch, the whole concept depended on good luck on several fronts, and it didn’t turn up.

How could the company have recognized these risks instead of ignoring them? One alternative: a technique Kahneman relates, invented by his colleague, Gary Klein, called the premortem session. How it works: A group of individuals knowledgeable about the decision … not all of them have to be stakeholders … answer this simple question: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.”

Compare the premortem to the usual, dry risk-planning session, where everyone brainstorms a dreary list of possible risks, after which the project manager drafts contingency plans, all of which are rubber-stamped, except for the ones that involve spending, which are rejected out of hand.

The premortem replaces that dreary brainstorm with imaginative storytelling … a far better way for people to figure out and explain what really might go wrong. Sometimes a premortem will kill a superficially plausible but actually unworkable idea. That’s a good thing, and something conventional risk management never does.

More often, rather than killing a bad idea it will improve execution for a good one.

It transforms risk management from an afterthought into an integral part of the planning effort.

Crowdsourced prediction is the Next Big Thing.

It’s a shoo-in, because it combines last year’s Next Big Thing — the Cloud — with ridiculing experts, a perennial crowd-pleaser.

Crowdsourced prediction is based on a simple premise — that crowds are wiser than experts. Take InTrade, which lets people bet on such matters as which Republican presidential candidate will become the nominee (yes, it’s really just an on-line bookie). Those who place their faith in markets insist that on-line betting on these outcomes delivers more accurate results than the experts do. Since in many domains experts predict more poorly than random chance, this is plausible. For example:

2009 counted as a good year for actively managed mutual funds. According to “‘Active’ Did Better in ’09,” (Annelena Lobb, 1/6/2010, The Wall Street Journal), their performance improved markedly that year — almost half outperformed simple index funds.

Pure random chance would have resulted in exactly half underperforming. In other words, even in a good year the experts do worse relying on their expertise than they would by relying on a Ouija board.

These aren’t stupid people, and they do know their subject. How could they do such a bad job?

My best guess: Business success entails luck as well as skill. Because investment experts can’t predict luck, they ignore it, substituting patterns they perceive that aren’t actually there.

These substituted patterns are convenient narratives, not empirically tested theories, which means they’re more likely to be wrong than right.

The issue goes well beyond stock-pickers. Many other sorts of experts also rely on unsubstantiated narratives to support their predictions — among them political commentators and, here in the field of information technology, market analysts.

In my expert opinion, of course.

Which is why Crowdsourcing is the new savior of the predictions business. And yet, if the Crowd makes a prediction that’s awesomely accurate today, how can it change tomorrow? InTrade’s predictions, for example, seem to change on a daily basis.

Might the purported accuracy of crowdsourcing be nothing more than circular logic: accurate because we define “accurate” as “what the crowd is saying”?

The study I’ve never found, which would answer this question quite well, is of horse racing.

If crowdsourcing works as advertised, were we to tabulate the results of all horse races we’d find that exactly one-third of all horses that ran at 2:1 odds won (odds of 2:1 against imply a one-in-three chance of winning). Otherwise, the so-called wisdom of crowds is just another in a long line of appealing narratives that have nothing at all to support them beyond their natural appeal.
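The tabulation proposed above is a straightforward calibration check, and it can be sketched mechanically. The race records below are invented for illustration; a real test would use actual track results.

```python
# Calibration check: do horses win at the rate their betting odds imply?
# The race records below are invented for illustration only.
from fractions import Fraction
from collections import defaultdict

def implied_probability(odds_against):
    """Fractional odds of N:1 against imply a win probability of 1/(N+1)."""
    return Fraction(1, odds_against + 1)

# Each record: (odds against the horse, whether it won).
races = [
    (2, True), (2, False), (2, False),                          # 2:1 shots
    (4, True), (4, False), (4, False), (4, False), (4, False),  # 4:1 shots
]

# Tabulate starts and wins per odds bracket.
starts = defaultdict(int)
wins = defaultdict(int)
for odds, won in races:
    starts[odds] += 1
    wins[odds] += won

for odds in sorted(starts):
    actual = Fraction(wins[odds], starts[odds])
    implied = implied_probability(odds)
    print(f"{odds}:1 shots: won {actual} of starts; odds implied {implied}")
```

If the crowd is well calibrated, the actual and implied fractions match across every odds bracket; systematic gaps would mean the crowd’s “wisdom” is just another appealing narrative.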

As crowdsourcing depends on Cloudsourcing, let’s move on, to a prediction: “Why aren’t we in the Cloud?” will supersede “Why aren’t our factories in China?” as the most-often asked rhetorical question in business.

It’s time, because those who have been involved in making Cloud-based computing work have started to figure out that its economics are just as situation-dependent as those of offshoring.

<Digression> Whether they offshored manufacturing or programming, business decision-makers focused on raw price more than the whole picture of total cost, plus risk, plus the increased complexity of managing operations halfway around the world, with all the attendant differences in language, culture, public policy, and simple clock time. It was all about cheap labor.

That’s the case even though, when it comes to manufacturing, it appears that direct labor contributes astonishingly little to the cost of manufactured items (in the case of automobiles, roughly 10%). As for software development, cost is rarely as important as such factors as reliable on-time delivery, code quality, and fit to function.

Which is why offshoring ended up disappointing its clients far more often than you likely read in the cheerleading articles that dominate the business press.</Digression>

Tally up the Cloud’s direct costs and the decision to go there is far from no-brainer territory, especially for companies big enough to need such niceties as identity management (Active Directory or an alternative) and server-managed print queues.

The Cloud shines brightest (there’s a visual!) when processing loads are unpredictable and highly variable. That’s when its ability to add and shed capacity more or less on demand is hugely advantageous.

For small and mid-sized companies, add the economies of scale that come from making technology management Someone Else’s Problem. For new, growth-oriented companies, also add cost avoidance, from not having to build a data center.

So here’s some advice you can use (finally!): Be prepared. When asked “Why aren’t we in the Cloud?” answer, “We are, wherever it makes financial and strategic sense.”

Or, if it doesn’t, answer, “We’re on the lookout for opportunities. So far, much to our surprise, we’d have to spend more to go there.”