Warning: If you’re planning to watch any Marvel Universe movies but somehow just haven’t gotten around to it, plot spoilers follow. Then again, if you haven’t already watched any of these movies you probably never will, which will make what follows just a bit less compelling. Such are the hazards of building an intellectual edifice on a pop culture foundation.

I have a weakness for superhero movies. I also have a weakness for chewing on Hey, Waitasec! questions that don’t occur to me until a few days later.

Questions like why, in the first Avengers movie, the entire U.S. Air Force never bothered to show up during the whole battle for New York City.

But never mind that. We can chalk it up to dramatic license, because had a squadron or two of advanced jet fighters equipped with heat-seeking missiles joined in, this would have just cramped our superheroes’ style(s).

Black Panther doesn’t get off so easily.

Oh, don’t be like that. My gripe: The entire plot centers on the most technologically advanced country on the planet, Wakanda, relying on a governance model built on hereditary monarchy complemented by trial by combat.

What could possibly go wrong?

Plenty could, and in the movie it does. What fixes it? If you’re thinking it’s everyone in Wakanda saying, “Hey, waitasec! Shouldn’t we be convening a constitutional convention?” you’d be wrong. It ends up getting fixed by a second trial by combat, with everyone in Wakanda perfectly willing to follow the lead of a bullying psychopath should he win round two as well.

He doesn’t — the good guy wins this one, luckily enough — but really, this is a terrible way for a nation to decide who is going to lead it.

What does this have to do with you and your leadership responsibilities?

Well, maybe it’s a stretch, but some executives do seem to admire the trial-by-combat approach to choosing who gets to decide what, and how. They encourage inter-manager rivalries on the grounds that this leads to more energy and initiative.

Which it does. That the energy and initiative are largely wasted doesn’t seem to matter very much.

Less of a stretch is something fundamental in any organization, from the board of directors on down: Figuring out how to choose the right person to put in charge of each area of responsibility.

The lesson from Black Panther? Strip away the plot and specific characters and you come to this: The tests through which Wakanda chooses its leader have nothing at all to do with the tests its leader has to deal with once in office.

Well, in the movie it sorta does, because the leader doesn’t lead all that much. He acts like those fighting alongside him, only better. Yes, he’s inspirational, but no, he doesn’t seem to think much in terms of strategy, tactics, and logistics.

Or, more broadly, that leaders of any organization need to think in terms of … well, in terms of the eight tasks of leadership.

Anyway, when choosing the leaders who report to you, don’t make this mistake. Too many times, executives outsmart themselves when choosing managers. An unstructured conversation built around “These are the challenges you’re going to face if I put you in the job. How would you go about facing them?” would do the job far better, and far more naturally.

But enough carping about Black Panther. Let’s carp about Avengers: Age of Ultron instead, and more specifically, about how much better things would have turned out had Tony Stark understood a core principle of application development: You always test software. Testing it before you put it in production is better.

I mean seriously: Launching a full-fledged, self-motivated AI into PROD … in this case, a real-world environment in which it had full access to a wide range of destructive weaponry … without first examining its behavior in TEST? Seriously?

Now to be fair, had Tony Stark followed standard testing protocols and ITIL-style change management, the movie would have been horrifically dull.

But since there was a movie, and in it you can see what happens with insufficient testing protocols, maybe this would be a good time to review your own testing methods … not only when you deploy software, but also when you deploy new processes and practices that affect how Real Paying Customers do business with your business.
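In case it helps to make the principle concrete, here’s a minimal sketch of what gating PROD on behavior observed in TEST can look like. Everything in it is hypothetical: the Agent stub, run_in_sandbox, and the scenarios are invented stand-ins for whatever your real TEST environment would exercise, not a prescription.

```python
# A minimal, hypothetical sketch of "you always test software; testing it
# before production is better." Agent, run_in_sandbox, and the scenarios
# below are invented stand-ins, not anyone's real API.

class Agent:
    """Stub for the system under test (your own Ultron, presumably smaller)."""

    def respond(self, scenario: str) -> str:
        # Real decision logic would live here; this stub just illustrates
        # the kind of behavior worth checking before launch.
        return "comply" if "shutdown" in scenario else "proceed"


def run_in_sandbox(agent: Agent, scenario: str) -> str:
    """Exercise the agent in an isolated TEST environment, with no access
    to production systems (or, say, destructive weaponry)."""
    return agent.respond(scenario)


def test_agent_honors_shutdown() -> None:
    # The behavior you'd most want confirmed before promotion to PROD.
    assert run_in_sandbox(Agent(), "operator requests shutdown") == "comply"


def test_routine_requests_proceed() -> None:
    assert run_in_sandbox(Agent(), "routine request") == "proceed"


if __name__ == "__main__":
    test_agent_honors_shutdown()
    test_routine_requests_proceed()
    print("TEST passed. Only now does deploying to PROD become a conversation.")
```

The specific checks don’t matter; what matters is that promotion to production waits on observed behavior in TEST rather than on optimism.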

I’m on vacation this week, so I’ll leave you to finish it. Your homework assignment: In the Comments, post your Hey, Waitasec! analysis of Captain America: Civil War.

And yes, plot spoilers are encouraged.

Once upon a time I worked with a company whose numbers were, so far as I could tell, unreliable.

Not unreliable as in a rounding error. Not unreliable as in having to place asterisks in the annual report.

Unreliable as in a billion dollars a month in unaudited transactions being posted to the general ledger through improvised patch programs that gathered data from an ancient legacy system in which the “source of truth” rotated among three different databases.

Our client’s executive team assured us their financial reporting was squeaky clean. The employees we interviewed who were closer to the action, in contrast, predicted a future need for significant, embarrassing, and high-impact balance-sheet corrections.

Assuming you consider multiple billions of dollars to be significant and embarrassing, not to mention high impact, a few years later the employees were proven right.

How do these things happen? It’s more complicated than you might think. A number of factors are in play, none easy to overcome. Among them:

Confirmation bias: We all tend to accept without question information that reinforces our preferences and biases, while nit-picking to death sources that contradict them. Overcoming this — a critical step in creating a culture of honest inquiry — starts with the CEO and board of directors, and requires vigilant self-awareness. If you need an example of why leading by example matters, and how leader behavior drives the business culture, look no further.

Ponzi-ness: Ponzi schemes — where investment managers use new investors’ money to pay off earlier investors instead of using it to, well, invest — often don’t start out as fraudulent enterprises launched by nefarious actors.

My informal sampling suggests something quite different: Most begin with an investment manager making an honest if overly risky bet. Then, rather than fessing up to the investors whose investments have shrunk, they find new investors, putting the new funds into even riskier bets in the hope of generating enough return to pay everyone off and start with a clean slate.

It’s when that attempt fails that Ponzi-ness begins.

Middle managers aren’t immune to this sort of behavior. It’s how my former client got into trouble. A manager sponsored the effort to replace the creaky legacy system. Part of the business case was that this would replace a cumbersome, expensive, and error-prone month-end process with one more streamlined and efficient.

When the legacy replacement didn’t happen on schedule, the manager was still on the hook for the business case, leading him to turn off the maintenance spigot — hence the need for improvised transaction-posting programs.

Delivering pretend benefits by increasing risk is the essence of Ponzi-ness.

View altitude and failed organizational listening: Management knows how the business is supposed to work. They are, in general, several steps removed from how it actually works, depending on lower-level managers to keep them informed, who rely on front-line supervisors to do the same, who in turn rely on the employees who report to them to make sure (that is, to provide the illusion) that they know What’s Going On Out There.

Executives enjoy the view from 100,000 feet; middle managers from 50,000. Smart ones recognize their views are at best incomplete and probably inaccurate, so they establish multiple methods of “organizational listening” to compensate.

Those who skip levels to direct the action are, rightly, called micromanagers. And yet, everyone below them in the management hierarchy has a personal incentive to keep bad news and their manager as far apart as they can. The solution is to recognize the difference between expressing interest in What’s Going On Out There and needing to direct it.

Managers should listen to everyone they can, but instruct only those who report to them directly.

Holding people accountable: As discussed in this space numerous times and detailed in Leading IT, managers who have to hold people accountable have hired the wrong people. The right people are those who take responsibility. Managers never have to hold them accountable because they handle that little chore themselves.

But those who have bought into the hold ’em accountable mantra effectively block the flow of What They Need to Know, because why on earth would anyone risk telling them?

If something is amiss in an organization, someone in it knows that something is wrong, and usually knows what to do about it.

What they too often lack is an audience that wants to know about the problem, and, as a consequence, an audience with any interest in the solution.