In case you missed the news, Israeli scientists have taught a goldfish how to drive.

Well, not exactly. They placed it in a bowl rigged with sensors and actuators, and it learned which of its initially random movements moved it toward food.

The goldfish, that is, figured out how to drive the way DeepMind figured out how to win at Atari games.
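For the technically curious: the technique in both cases is reinforcement learning. Stripped of the deep neural networks DeepMind layered on top, the core loop looks something like this toy Python sketch. The states, actions, and reward scheme here are invented for illustration; this is neither the fish rig nor DeepMind's actual setup.

```python
import random

# Toy tabular Q-learning: learn which movements tend to lead toward food.
# All names and numbers are illustrative, not from the actual study.

ACTIONS = ["forward", "back", "left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

q_table = {}  # maps (state, action) -> estimated long-term value

def choose_action(state):
    """Mostly pick what has worked before; occasionally flail at random."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Nudge the value estimate toward the reward plus the best expected follow-up."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

Run that loop enough times, rewarding movements that end near food, and the table converges on a driving policy. No understanding required, for fish or machine.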

This is the technology – machine-learning AI – whose proponents advocate using it for business decision-making.

I say we should turn over business decision-making to goldfish, not machine-learning AIs. They cost less and ask for nothing except food flakes and an occasional aquarium cleaning. They’ll even reproduce, creating new business decision-makers far more cheaply than any manufactured neural network.

And with what we’re learning about epigenetic heritability, it’s even possible their offspring will be pre-trained when they hatch.

It’s just the future we’ve all dreamed of: If we have jobs at all we’ll find ourselves studying ichthyology to get better at “managing up.” Meanwhile, our various piscine overseers will vie for the best corner koi ponds.

Which brings us to a subject I can’t believe I haven’t written about before: the Human/Machine Relationship Index, or HMRI, which Scott Lee and I introduced in The Cognitive Enterprise (Meghan-Kiffer Press, 2015). It’s a metric useful for planning where and how to incorporate artificial intelligence technologies, including but not limited to machine learning, into the enterprise.

The HMRI ranges from +2 to -2. The more positive the number, the more humans remain in control.

And no, the fact that a programmer was involved somewhere back in the technology’s history doesn’t mean the HMRI = +2. The HMRI describes the technology in action, not in development. To give you a sense of how it works (with a code-style sketch following the list, for those who think that way):

+2: Humans are in charge. Examples: industrial robots, da Vinci surgical robots.

+1: Humans can choose to obey or ignore the technology. Examples: GPS navigation, cruise control.

0: Technology provides information and other capabilities to humans. Examples: traditional information systems, like ERP and CRM suites.

-1: Humans must obey. Machines tell humans what they must do. Examples: automated call distributors, business process automation.

-2: All humans within the AI’s domain must obey. Machines set their own agenda, decide what’s needed to achieve it, and, if humans are needed, tell them what to do and when to do it. Potential examples: AI-based medical diagnostics and prescribed therapies, AIs added to boards of directors, Skynet.
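Expressed as code, a minimal sketch looks like this. The level names are my own shorthand for this illustration, not part of the published metric; the metric itself is just the number.

```python
from enum import IntEnum

class HMRI(IntEnum):
    """Human/Machine Relationship Index: the more positive, the more humans remain in control.

    Level names are illustrative shorthand; the published metric is the integer.
    """
    HUMANS_IN_CHARGE = 2      # industrial robots, da Vinci surgical robots
    HUMANS_MAY_IGNORE = 1     # GPS navigation, cruise control
    INFORMS_HUMANS = 0        # traditional systems: ERP and CRM suites
    HUMANS_MUST_OBEY = -1     # automated call distributors, business process automation
    MACHINES_SET_AGENDA = -2  # AI board members, Skynet

# Rate the technology in action, not in development:
assert HMRI.HUMANS_MAY_IGNORE > 0  # positive: humans still in control
```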

A lot of what I’ve read over the years regarding AI’s potential in the enterprise talks about freeing up humans to “do what humans do best.”

The theory, if I might use the term “theory” in its “please believe this utterly preposterous propaganda” sense, is that humans are intrinsically better than machines with respect to some sorts of capabilities. Common examples are judgment, innovation, and the ability to deal with exceptions.

But judgment is exactly what machine learning’s proponents are working hard to get machines to do – to find patterns in masses of data that will help business leaders prevent the bad judgment of employees they don’t, if we’re being honest with each other, trust very much.

As for innovation, what fraction of the workforce is encouraged to innovate, is in a position to do so, and can make its innovations real? The answer is almost none, because even if an employee comes up with an innovative idea, there’s no budget to support it, no time in their schedule to work on it, and plenty of political infighting to navigate.

That leaves exceptions. But the most acceptable way of handling exceptions is to massage them into a form the established business processes … now executed by automation … can handle. Oh, well.

Bob’s last word: Back in the 20th century I contrasted mainframe and personal computing architectures. Mainframe architectures place technology at the core and human beings at the periphery, feeding and caring for the technology so it keeps on keeping on. Personal computing, in contrast, puts a human being at the center, with the technology serving as a gateway to a universe of resources.

Machine learning is a replay. We can either put machines at the heart of things, relegating to humans only what machines can’t master, or we can think in terms of computer-enhanced humanity – something we experience every day with GPS and Wikipedia.

Yes, computer-enhanced humanity is messier. But given a choice, I’d like our collective HMRI to be a positive number.

Bob’s sales pitch: CIO.com is running the most recent addition to my IT 101 series. It’s titled “The savvy CIO’s secret weapon: Your IT team.”

I suspect you’re no more in the mood this week to read about business strategy, IT strategy, the intersection of business and IT … my usual stuff … than I was to write about it. And I vowed not to talk about anyone named Trump, Pence, Biden, or Harris, on the grounds that the odds of my having anything new to say are vanishingly small.

So I took one from the vault this week, originally published three years ago. For one reason or another it seems fitting this week. Hope you enjoy it.

# # #

The problem with quadrant charts isn’t that they have two axes and four boxes. It’s the magic part — why their contents are what they are.

Well, okay, that’s one of the problems. Another is that once you (you being me, that is) get in the quadrant habit, new ones pop into your head all the time.

Like, for example, this little puppy that came to me while I was watching Kong: Skull Island as my Gogo inflight movie.

It’s a new, Gartnerized test of actorhood. Preposterousness is the vertical axis. Convincing portrayal of a character is the horizontal. In Kong, Samuel L. Jackson, Tom Hiddleston, and John C. Reilly made the upper right. I leave it to KJR’s readers to label the quadrants.

While this might not be the best example, quadrant charts can be useful for visualizing how a bunch of stuff compares. Take, for example, my new Opinionization Quadrant. It visualizes the different types of thinking you and I run across all the time … and, if we’re honest with each other, the ones we ourselves engage in as well.

It’s all about evidence and certainty. No matter the subject, more and better evidence is what defines expertise and should be the source of confident opinion.

Less and worse evidence should lead to skepticism, along with a desire to obtain more and better evidence unless apathy prevails.

When more and better evidence doesn’t overcome skepticism, that’s just as bad as prejudice and just as unfounded as belief. It’s where denial happens: someone unwilling, in the face of overwhelming evidence, to change their position on a subject.

Rationality happens when knowledge and certainty positively correlate. Except there’s so much known about so many subjects that, with the possible exception of Professor Irwin Corey (the world’s foremost authority), we should all be completely skeptical about just about everything.
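If it helps, here’s the quadrant expressed as logic rather than a chart; a minimal sketch in which the axis values and labels are just my shorthand for what’s described above:

```python
def opinionization_quadrant(evidence: str, certainty: str) -> str:
    """Map the two axes to the quadrant labels.

    evidence: "weak" or "strong" (less/worse vs. more/better evidence)
    certainty: "low" or "high"
    """
    quadrants = {
        ("strong", "high"): "rationality / confident opinion",
        ("weak", "low"): "skepticism (go get better evidence)",
        ("weak", "high"): "belief / prejudice",
        ("strong", "low"): "denial",
    }
    return quadrants[(evidence, certainty)]

print(opinionization_quadrant("strong", "low"))  # -> denial
```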

So we need to allow for once-removed evidence: reporting about those subjects we lack the time or, in some cases, the genius to become experts in ourselves.

No question, once-removed evidence — journalism, to give it a name — does have a few pitfalls.

The first happens when we … okay, I start my quest for an opinion in the Belief/Prejudice quadrant. My self-knowledge extends to knowing I’m too ignorant about the subject to have a strongly held opinion, but not to acknowledging to myself that my strongly held opinion might be wrong.

And so off I go, energetically Googling for ammunition rather than illumination. This being the age of the Internet and all, someone will have written exactly what I want to read, convincingly enough to stay within the boundaries set by my confirmation bias.

This isn’t, of course, actual journalism but it can look a lot like it to the unwary.

The second need for care is understanding the nature and limits of reportage.

Start here: Journalism is a profession. Journalists have to learn their trade. And like most professions it’s an affinity group. Members in good standing care about the respect and approval of other members in good standing.

So when it comes to reporting on, say, social or political matters, a professional reporter might have liberal or conservative inclinations, but is less likely to root their reporting in their political affinity than you or I would be.

Their affinity, when reporting, is to their profession, not to where they sit on the political spectrum. Given a choice between supporting politicians they agree with and publishing an exclusive story damaging to those same politicians, they’ll go with the scoop every time.

IT journalism isn’t all that different, except that instead of being accused of liberal or conservative bias, IT writers are accused of being Apple or Microsoft (or Oracle, or open source) fanboys.

Also: As with political writing, there’s a difference between professional reporters and opinionators. In both politics and tech, opinionators are much more likely to be aligned to one camp or another than reporters. Me too, although I try to keep a grip on it.

And in tech publishing the line separating reporting and opinion isn’t as bright and clear as with political reporting. It can’t be. With tech, true expertise often requires deep knowledge of a specific product line, so affinity bias is hard to avoid. Also, many of us who write in the tech field aren’t degreed journalists. We’re pretty good writers who know the territory, so our journalistic affinity is more limited.

There’s also tech pseudojournalism, where those who are reporting and opinionating (and, for that matter, quadrant-izing) work for firms that receive significant sums from those being reported on.

As Groucho said so long ago, “Love goes out the door when money comes innuendo.”