In case you missed the news, Israeli scientists have taught a goldfish how to drive.

Well, not exactly. They placed it in a bowl with various sensors and actuators, and it correlated its initially random movements to which movements moved it toward food.

The goldfish, that is, figured out how to drive the way DeepMind figured out how to win at Atari games.

This is the technology – machine-learning AI – that proponents advocate using for business decision-making.

I say we should turn business decision-making over to goldfish, not machine-learning AIs. They cost less and ask for nothing except food flakes and an occasional aquarium cleaning. They’ll even reproduce, creating new business decision-makers far more cheaply than any manufactured neural network.

And with what we’re learning about epigenetic heritability, it’s even possible their offspring will be pre-trained when they hatch.

It’s just the future we’ve all dreamed of: If we have jobs at all we’ll find ourselves studying ichthyology to get better at “managing up.” Meanwhile, our various piscine overseers will vie for the best corner koi ponds.

Which brings us to a subject I can’t believe I haven’t written about before: the Human/Machine Relationship Index, or HMRI, which Scott Lee and I introduced in The Cognitive Enterprise (Meghan-Kiffer Press, 2015). It’s a metric useful for planning where and how to incorporate artificial intelligence technologies, including but not limited to machine learning, into the enterprise.

The HMRI ranges from +2 to -2. The more positive the number, the more humans remain in control.

And no, just because somewhere back in the technology’s history a programmer was involved, that doesn’t mean the HMRI = +2. The HMRI describes the technology in action, not in development. To give you a sense of how it works:

+2: Humans are in charge. Examples: industrial robots, da Vinci surgical robots.

+1: Humans can choose to obey or ignore the technology. Examples: GPS navigation, cruise control.

0: Technology provides information and other capabilities to humans. Examples: traditional information systems, like ERP and CRM suites.

-1: Humans must obey. Machines tell humans what they must do. Examples: automated call distributors, business process automation.

-2: All humans within the AI’s domain must obey. Machines set their own agenda, decide what’s needed to achieve it, and, if humans are needed, tell them what to do and when to do it. Potential examples: AI-based medical diagnostics and prescribed therapies, AIs added to boards of directors, Skynet.
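For readers who think in code, the five-level scale can be sketched as a simple enumeration. The level names and the portfolio-averaging helper below are my own illustrative inventions, not part of the HMRI as defined in The Cognitive Enterprise:

```python
from enum import IntEnum

class HMRI(IntEnum):
    """Human/Machine Relationship Index: positive values mean humans
    remain in control; negative values mean the machine does."""
    HUMANS_IN_CHARGE = 2      # e.g. industrial robots, da Vinci surgical robots
    HUMANS_MAY_IGNORE = 1     # e.g. GPS navigation, cruise control
    TECH_INFORMS = 0          # e.g. ERP and CRM suites
    HUMANS_MUST_OBEY = -1     # e.g. automated call distributors
    MACHINES_SET_AGENDA = -2  # e.g. Skynet

def collective_hmri(scores):
    """Hypothetical helper: a portfolio's collective HMRI as the
    mean of its individual systems' scores."""
    return sum(scores) / len(scores)
```

Scoring a portfolio of, say, a surgical robot (+2) and a CRM suite (0) would yield a collective HMRI of +1.0 under this (again, illustrative) averaging convention.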

A lot of what I’ve read over the years regarding AI’s potential in the enterprise talks about freeing up humans to “do what humans do best.”

The theory, if I might use the term “theory” in its “please believe this utterly preposterous propaganda” sense, is that humans are intrinsically better than machines with respect to some sorts of capabilities. Common examples are judgment, innovation, and the ability to deal with exceptions.

But judgment is exactly what machine learning’s proponents are working hard to get machines to do – to find patterns in masses of data that will help business leaders prevent the bad judgment of employees they don’t, if we’re being honest with each other, trust very much.

As for innovation, what fraction of the workforce is encouraged to innovate, is in a position to do so, and can make their innovations real? The answer: almost none, because even if an employee comes up with an innovative idea, there’s no budget to support it, no time in their schedule to work on it, and lots of political infighting to navigate.

That leaves exceptions. But the most acceptable way of handling exceptions is to massage them into a form the established business processes … now executed by automation … can handle. Oh, well.

Bob’s last word: Back in the 20th century I contrasted mainframe and personal computing systems architectures: Mainframe architectures place technology at the core and human beings at the periphery, feeding and caring for it so it keeps on keeping on. Personal computing, in contrast, puts a human being in the middle and serves as a gateway to a universe of resources.

Machine learning is a replay. We can either put machines at the heart of things, relegating to humans only what machines can’t master, or we can think in terms of computer-enhanced humanity – something we experience every day with GPS and Wikipedia.

Yes, computer-enhanced humanity is messier. But given a choice, I’d like our collective HMRI to be a positive number.

Bob’s sales pitch: CIO.com is running the most recent addition to my IT 101 series. It’s titled “The savvy CIO’s secret weapon: Your IT team.”

When you buy your next smartphone, what features will you compare to make your decision?

So far as I can tell, the dominant contrasts appear to lie in their cameras – cutting-edge smartphones have, in addition to their main camera, separate wide-angle, telephoto, and front-facing (selfie) cameras.

Add to that a bunch of different image manipulation features and you have a complex comparison.

Next question: When you buy your next point-and-shoot camera, what features will you compare to make your decision?

Answer: Most likely you won’t buy your next point-and-shoot camera. You’ll upgrade your smartphone instead, and for higher-end photography you’ll buy a DSLR.

As a category, point-and-shoot cameras are, along with Monty Python’s famous Norwegian Blue parrot, on the way to joining the choir invisible. Steadily improving smartphone cameras are rapidly extinguishing them, just as steadily improving digital cameras extinguished those that used film.

Which will get us to this week’s topic in a moment, right after we ask ourselves whether better product management on Nikon, Canon, or Olympus’s part might have staved off this category calamity.

The answer: Probably not. Product management is an in-category discipline, which is why Canon’s product line dominates the DSLR marketplace but doesn’t provide OEM componentry for any smartphone. More broadly, it’s why the major camera companies didn’t add telephone-oriented features to their point-and-shoot cameras.

Which (finally!) brings us to this week’s topic: Whether IT should organize according to software product management principles rather than software project principles (see, for example, this lucid explanation on martinfowler.com).

The answer? No, but IT shouldn’t continue to organize around software projects, either. As all enlightened members of the KJR community (if you’ll forgive the redundancy) know, there’s no such thing as an IT project. It’s always around business change or what’s the point?

Organizing IT around IT products is certainly better than organizing it around IT projects … product-mode thinking does expressly incorporate business improvement into its planning and governance practices, more easily incorporates agile thinking into the mix, and solves the problem of maintaining a stockpile of expertise instead of disbanding it once the initial implementation project has completed.

On the other hand, most agile variants keep teams in place until the backlog has shrunk beyond the point of diminishing returns, so this last “benefit” is something of a strawman.

Meanwhile, the product perspective brings with it a potentially crippling disadvantage – the inevitability of internal competition. Here’s how it works:

Imagine you’re the product manager for your company’s in-house-developed, highly optimized, strongly supported SCM (supply chain management) system. You and your team have deep expertise in its workings, which lets you respond quickly and efficiently when business needs change or new needs arise.

Meanwhile, as a result of several mergers and acquisitions, three other business units also have SCM systems and supporting teams, each with capabilities roughly comparable to what your team brings to the table.

And so, the CIO and IT planning committee decide it’s time to rationalize the applications portfolio, building out the architecture to a hub-and-spoke model anchored by Oracle ERP Cloud (this is, understand, just a ferinstance, not an endorsement).

Suddenly, your team’s expertise is irrelevant. And so, being savvy at the game of corporate politics, you invite the head of your business unit’s supply chain function to join you for conversational lubricants, in which conversation you explain the disadvantages of forcing his team to use a software vendor’s plain-vanilla SCM module instead of the finely-tuned application they’re used to.

Describing how this scenario plays out is left as an exercise for the reader. Suffice it to say, it wouldn’t be pretty.

Bob’s last word: More problematic than anything else, the product perspective leaves in place the definition of IT’s relationship with the rest of the business as a vendor dealing with internal customers. This is a bad idea, something I’ve been explaining since 1996.

IT should be organized to support business optimization, where each business function defines optimization according to what it will take for it to run differently and better, and the company defines IT’s relationship with the rest of the business as partner and collaborator in delivering profitable value to Real Paying Customers.

Bob’s sales pitch: Not the first time I’ve mentioned this, and it won’t be the last – you’ll find an in-depth explanation of how to make this work in There’s no such thing as an IT Project: A handbook for intentional business change. And it isn’t just in-depth coverage of content that matters. Dave and I took pains to make sure it’s readable, too.