
Public/private sector inversion


Think of this as KJR’s pledge week.

No, I’m not asking for donations. I’m asking for your time and attention before you let yourself read this week’s missive.

Specifically, when I decide what to write about each week I’m doing too much guessing based on too little information. I’m asking you to let me know what you’d like me to cover this year in KJR, and, almost as important, what I should be writing about that your non-KJR-subscribing colleagues would find compelling.

One more thing: My ManagementSpeak inventory is running low. After 23 years of one a week it’s entirely possible there just isn’t all that much more bafflegab to translate. So if you don’t hear something that deserves the ManagementSpeak treatment, send me your favorite quotes instead.

I will ask you to apply one filter to these: As with all things KJR, I’m looking for what’s off the beaten path — dictums that haven’t yet been widely discovered but deserve to be read by a discerning audience.

Okay, that’s enough Pledge Minute. Back to this week’s KJR.

—————————————-

The private sector has discovered data. Where a decade ago, business leaders were encouraged to trust their guts, they’re now encouraged to trust their data scientists.

It’s role reversal. Back then, governmental policy-making was heavily data-driven. As Michael Lewis (no relation) explains in The Fifth Risk, an extensive, intensive, and essential responsibility of many cabinet-level agencies is collecting highly valuable data and managing the databases that contain them.

Just in time for businesses to invest heavily in data collection, management, analytics, and interpretation, the federal government is shifting much of its policy-making to a more instinctive approach, and in doing so is shifting budget and resources elsewhere.

Here at the Keep the Joint Running Institute, the Joints in question are organizations of all sizes and shapes; our charter is how to keep them running. As a general rule we (and that’s a royal we) stay away from political matters. Politicized matters? In bounds whenever they’re relevant.

And so (you were wondering when a point might emerge, weren’t you?) as your organization, for all the right reasons, embraces data-driven decision-making, here are a few cautionary notes you and your colleagues might find helpful:

Culture before tools: If you’re a longtime subscriber you’re familiar with the idea that when trying to institutionalize data-driven decision-making, a “culture of honest inquiry” is a prerequisite for success. In case you aren’t, the principle (but not its achievement) is simple: Everyone involved wants to discover the right answer to each question, not to prove themselves right.

Solving for the number: A culture of honest inquiry is what enlightened leaders strive for. While still on the journey, though, be on the lookout for someone using these new, powerful analytical tools to manipulate filters, choices of statistical techniques, and thresholds to support their pre-determined preferred result — for ammunition, not illumination.
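
If it helps to make that risk concrete, here’s a minimal, hypothetical sketch (invented data, arbitrary segment labels; assumes NumPy and SciPy are available) of how filter-shopping can manufacture a finding from pure noise:

```python
# "Solving for the number": with no real effect anywhere in the data,
# shopping through enough arbitrary filters will eventually turn up a
# "statistically significant" result by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
outcome = rng.normal(size=n)            # pure noise: there is no effect to find
segment = rng.integers(0, 20, size=n)   # 20 arbitrary ways to slice the data

for s in range(20):
    inside = outcome[segment == s]
    outside = outcome[segment != s]
    _, p = stats.ttest_ind(inside, outside)
    if p < 0.05:
        print(f"Filter {s}: p = {p:.3f}  <- 'proof' of the preferred answer")
```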

GIGO: “Garbage In, Garbage Out” was widely recognized back when IT was known as the Data Processing department. That Big Data lets organizations collect and manipulate bigger piles of garbage than before changes nothing: Before you release your data lakes into the watershed, make sure your data scientists assess data quality and provide appropriate cautions as to their use.
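
As a hedged illustration of what such an assessment might cover (assuming the extract arrives as a pandas DataFrame named df; the column checks and the 20% threshold below are placeholders, not recommendations):

```python
# A minimal data-quality report: enough to attach cautions to a dataset
# before anyone downstream treats it as trustworthy.
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize per-column quality so cautions can travel with the data."""
    return pd.DataFrame({
        "missing_pct": df.isna().mean() * 100,   # how much is simply absent
        "distinct_values": df.nunique(),         # one distinct value = a useless column
        "dtype": df.dtypes.astype(str),          # catches numbers stored as text
    })

# Usage sketch: flag columns that are mostly garbage before publishing.
# report = data_quality_report(df)
# suspect = report[report["missing_pct"] > 20].index.tolist()
```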

Ease vs Importance: When it comes to data, some attributes are easier to measure than others. Even professional researchers can fall prey to the resulting fallacy — that if something is hard to measure it must not matter — without the blind spot ever quite reaching the threshold of consciousness.

Interpolation is safer than extrapolation: Imagine a regression analysis that yields a statistically significant correlation. And imagine that, in your dataset, the lowest value of x is $20 and the highest value is $200. Predicting the outcome of spending $40 is a pretty safe bet.

Predicting the outcome of spending $10 or $300? Not safe at all. Straight lines don’t stay straight forever. They usually bend. All you know is where the line doesn’t bend: inside the range of your data. You have no idea where it does bend, or in which direction.
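
A minimal sketch of that caution, using made-up spend-and-response numbers spanning $20 to $200 (NumPy assumed):

```python
import numpy as np

# Hypothetical observations: spend ranges from $20 to $200.
spend = np.array([20, 50, 80, 110, 140, 170, 200], dtype=float)
response = np.array([3.1, 5.8, 8.2, 10.9, 13.2, 15.8, 18.1])

slope, intercept = np.polyfit(spend, response, deg=1)   # fit the straight line

def predict(x):
    return slope * x + intercept

print(predict(40))    # interpolation: inside the observed range, a reasonably safe bet
print(predict(300))   # extrapolation: the line may bend out here; treat with suspicion
print(predict(10))    # the same problem below the observed range
```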

Machine guts need skepticism, too: Much of machine learning depends on neural networks. It’s the nature of neural networks that they can’t explain their reasoning — mostly, they’re just very sophisticated correlation finders. They’re useful in that they can plow through a lot more data than their human counterparts. But what they find are still correlations, which means they don’t imply causation.

But of course, to the unwary among us, they do.
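
To make the caution tangible, here’s a small, hypothetical demonstration (random data, NumPy assumed) that a correlation finder turned loose on enough meaningless features will “discover” something:

```python
import numpy as np

# 1,000 meaningless features, one meaningless target: nothing causes anything here.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 1000))
y = rng.normal(size=100)

# A brute-force correlation finder -- roughly what we're trusting, writ small.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
best = int(np.argmax(np.abs(r)))
print(f"Feature {best} 'predicts' the target with r = {r[best]:.2f}")  # impressive-looking, meaningless
```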

Courage: Take a timid business — more accurately, a business made timid by business leaders who consider avoiding risks to be the pinnacle of business priorities. Now add data and analytics to the mix.

What human data scientists and their AI machine-learning brethren do is spot potentially useful patterns in the data. These patterns will sometimes suggest profitable actions.

When all is said and done, when a pattern like this, along with the potentially profitable actions, is put in front of a timid business leader, much more will be said than done.

It’s unfortunate but not uncommon: Taking action is inherently unsafe, an insight that’s true as far as it goes.

What it misses: Playing it safe is usually even more of a risk, as competitors constantly search for ways to take your customers away from you.

Play it too safe and not only won’t you take customers away from them, you’ll fail to give your own customers a reason to stay.

Comments (15)

  • Yet another excellent column.

    But, a troubling weakness I see for the future is that IT management becomes more and more the only organizational resource that has a chance of really understanding, and generally guiding, the organization through increasingly technically vague but critical waters.

    So, I’d like to see some columns this year on the different tool sets one can use to understand the diversity of others. After watching what’s been going on in Washington these last 12 years, where people keep presenting facts but expect different results from their audience, it occurs to me that IT, like our politicians, needs to learn new sets of skills and tactics.

    To have integrity, these new skills and tactics should come from a better and diversified understanding of people. So, yes, this includes Kathy Kolbe’s theories of different styles of doing things, but I’m sure there are other models to consider presenting, as well.

    My fear is that expecting other organizational officers to really understand new technologies like data mining without an IT director having additional tools of understanding at his or her disposal is just asking too much of all concerned.

  • Adding to the comment from Bob in a way.

    IT is radically changing with the cloud. The obvious part is that critical pieces of infrastructure are moving to large external service providers. The less obvious is the drain on core IT skills such as database management, system performance, and even application stability. With the cloud offering services, organizations appear to be losing some critical internal knowledge and finding those who work outside of silos is becoming harder.

    SaaS has created further challenges where IT’s role is more confusing and project risks are increasing. Organizations struggle with the one-size-fits-all model of many SaaS providers, and vendor lock-in is a larger issue with core data no longer being close to home (if you are lucky it is an API away).

    Lots to think about.

  • I’m always reminded that “there are lies, damned lies, and statistics.” Trusting your gut is fine provided you don’t have indigestion.

  • Quote: A corporation is like a cesspool–the largest chunks always rise to the top. Laurence J. Peter, Peter’s People.

    Not sure what to write about but I just read about how diet soda messes up your metabolism because of the disconnect between the taste of sweetness and the amount of calories the body actually receives. Not sure how that regulatory thing works, but it’s working over millions of individual bacteria and sounds a lot like how corporations get off track.

    Another interesting tidbit from the story was that diet soda was developed in the 1950s so diabetics could still drink soda safely. I find that hilarious since use of diet soda leads to diabetes apparently. In a similar vein, I read the history of opium and found that morphine was invented to deal with opium addiction and heroin was invented to deal with morphine addiction.

    Expectations versus results.

    • Your point is valuable, but some issues:
      1. Morphine is a natural substance: the most abundant opioid in opium latex. Extracting it is simple: hydrochloric acid solubilizes it and sodium carbonate precipitates it from solution. The advantage of morphine vs. opium latex is that it can be injected (subcutaneously, IV), which is useful in situations where the patient cannot swallow and when immediate action is needed.

      2. The bit about diet soda messing up your metabolism is based on a study by an Israeli group which administered saccharin, sucralose and aspartame to mice, and claimed that these altered the gut microbial flora and induced glucose intolerance (nature.com/articles/nature13793). The problem is that aspartame is completely hydrolyzed in the small intestine, at least in humans (it’s a dipeptide formed of 2 amino acids), and not enough would remain to travel onward to the large intestine to influence the bacteria that live there. No one’s replicated this study.

  • Speaking of quotes: I have an aphorism that I wrote based on my own experience as someone who stumbled onto the possibilities of PC software in the ’80s, then learned to write my own, inspired more often than not by an inner voice ranting, “There’s GOT to be an easier way to do this!”

    So the quote below is mine, I tell you, mine! Copyright © Jim Carls! but you are free to use it with attribution (although I can’t imagine other people have not thought of it). I finally used it in the introduction to the “Shortcuts” section of my user manual (http://www.ffez.com/help/design/):

    “If necessity is the mother of invention, then laziness is surely the father.”

  • Along with interpolation being more likely to be accurate than extrapolation, my statistics professor presented convincing evidence (at least I was convinced) that addition and multiplication are safer (less likely to be way off) than subtraction or division. Tiny errors in addition and multiplication tend to cancel each other out; in subtraction and division they tend to exacerbate each other. Another trap is when a tiny number is to be multiplied by a large number. The answer is very hard to predict; I was involved in two separate multi-million dollar lawsuits over that failure to predict anywhere near accurately.
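
    A quick numeric sketch of the cancellation point, using made-up measurements that are each off by only a fraction of a percent:

    ```python
    # Hypothetical measurements, each within ~0.15% of the truth.
    a_true, b_true = 1000.0, 998.0
    a_meas, b_meas = 1001.0, 996.5

    sum_err = abs((a_meas + b_meas) - (a_true + b_true)) / (a_true + b_true)
    diff_err = abs((a_meas - b_meas) - (a_true - b_true)) / (a_true - b_true)

    print(f"relative error of the sum:        {sum_err:.2%}")   # stays tiny
    print(f"relative error of the difference: {diff_err:.2%}")  # blows up
    ```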

  • As far as future topics go, I’m afraid I’m not the source for you. I paroled myself from IT about 10 years ago but keep reading your column because it’s still informative and useful. Thanks for your many years of publishing!

    Regarding today’s column, I would have added one more point: Data vs. Information. It’s sort of hinted at inside one or two of the others, as is the correlation vs. causation issue, but I think it’s important enough to be a factor of its own.

    This is too long for a quote, but it’s a thought I’ve put forth for years and if you find a use for it you’re welcome. “Business is like a three-legged stool. The seat is the business and the three legs are the customers, the employees, and the shareholders. You can trim the legs unevenly for a while, but eventually the whole stool falls over (collapses) if they’re too far out of whack.” Of course, it’s usually the employee and customer legs that keep getting trimmed; maybe that will change somewhat now that the self-appointed guardians of the economic world, the Business Roundtable, have recognized there’s more to life than short-term shareholder value.

  • Hi Bob,

    My company sees Agile as a panacea these days. Of course, there are many misconceptions about what it is and how it should be used.

    Some are even saying that with the new agile approaches, we can dispense with things like understanding the business case for a project and releasing software that is past beta.

    I am going to be working on a project that has very high expectations. I want to temper some of those expectations. I have always liked the “good, fast, or cheap: pick two” framework.

    In today’s agile, big data, machine learning world, does that still apply? I know that people are going to be pushing back hard if I even suggest that the laws of physics still apply, but do they? Or has something changed?

    I appreciate your thoughts.

    Regards,
    Tony

    • Tony …

      Thanks for the suggestion. I’ll take on the challenge in an upcoming column. In the meantime, at the risk of sounding like all I want to do is flog book sales, you might want to consider getting hold of a copy of There’s No Such Thing as an IT Project. The chapter on “Fixing Agile” might provide some useful insights for you and your organization.

      One more thought in advance of the column: You might consider expanding your “laws of physics” perspective to encompass the KJR six-dimensional alternative: Fixed costs, incremental costs, cycle time, throughput, quality, and excellence. Of these you get to pick three. You might find this useful for contrasting what Agile and Waterfall are supposed to optimize.

  • As to suggestions for column subjects: I find the advice to managers on issues of managing people rather than projects to be the most useful. I still refer to very old columns of yours dealing with not stopping subordinates from making mistakes (including the applicant who needed to refer to a written list of OSes she’d worked with), piercing the fiction of the internal customer, what your employees should be protected from, why “subordinates” isn’t a bad word, and others.

    Also, I think Tony K’s suggestion for some guidance on over-use of Agile would be great. I’m being told to start managing infrastructure projects using Agile. At the same time we’re being told that the idea of documentation is obsolete since everything is subject to constant change. Well, that may work for some things, but when I have a network in a country where I have no IT staff, trying to fix that network without documentation is an … effort.

  • Bob- excellent and insightful article, as always. Just one minor issue:

    Neural networks (and deep learning, a fancy term for huge, multilayer neural networks made possible by advances in hardware) are only one form of machine learning, and you’re right, providing explanations has always been a problem with them: printing a matrix of numerical weights is not meaningful in terms of the problem domain.

    Google is doing some work on this, but I don’t believe that there’s been any breakthrough as yet. Because of this, you may not know that there’s a fatal flaw in the learning process until someone discovers it by chance, as when Facebook’s face-recognition algorithm was found to be inaccurate for Blacks. (This may be due to 2 factors: not enough Black faces in the training set, and inadequate customization of the contrast-enhancement/edge-detection pre-processing to skin tone. It’s known that in traditional pre-digital studio photography, dark skin tones require much brighter ambient light, or the features appear indistinct.)

    However, other machine learning techniques – notably those based on statistical methods (e.g., linear/logistic regression and the related Support Vector Machines; Naive Bayes, used in most spam filters, and its sequential extension, Hidden Markov models; Decision Trees; and the Apriori technique used by supermarkets in deciding where to place items in the store to facilitate shopper convenience) – have always been fully explainable, because the outputs of the statistical model (e.g., regression coefficients) are more readily translatable to the problem’s characteristics.

    In fact, when ML is employed in domains where the ML might have to be either vetted by domain experts or defended when challenged in court (e.g., determining credit-worthiness), an explainable statistical ML method is preferred over Deep Learning even if it gives modestly inferior results.
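
    As a hedged sketch of that explainability (synthetic data, invented feature names; assumes scikit-learn is installed):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    features = ["income", "debt_ratio", "late_payments"]   # hypothetical credit-screening inputs
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # synthetic labels

    model = LogisticRegression().fit(X, y)
    for name, coef in zip(features, model.coef_[0]):
        print(f"{name:>15}: {coef:+.2f}")   # sign and magnitude map directly to the domain
    ```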
