“The danger of the past was that men became slaves. The danger of the future is that men may become robots.” – Erich Fromm
Something fishy about artificial intelligence
In case you missed the news, Israeli scientists have taught a goldfish how to drive.
Well, not exactly. They placed it in a bowl rigged with various sensors and actuators, and the fish learned which of its initially random movements moved it toward food.
The goldfish, that is, figured out how to drive the way DeepMind figured out how to win at Atari games.
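For readers who like to see the mechanism, here’s a minimal sketch of that kind of trial-and-error learning – tabular Q-learning on a toy grid, written in Python. The grid, the food location, and every parameter are my own illustrative assumptions; the real experiment and DeepMind’s Atari agents are far more sophisticated.

```python
# Illustrative only: an agent starts out moving at random and gradually
# learns which movements carry it toward the "food" square.
import random

GRID = 5                                       # hypothetical 5x5 "tank"
FOOD = (4, 4)                                  # hypothetical food location
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # four directions of travel

q = {}                                         # (state, action) -> estimated value
alpha, gamma, epsilon = 0.5, 0.9, 0.2          # learning rate, discount, exploration

def step(state, action):
    """Apply a move, clamp to the grid, and return (next_state, reward)."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (x, y)
    return nxt, (1.0 if nxt == FOOD else 0.0)

for episode in range(500):
    state = (0, 0)
    while state != FOOD:
        # Mostly exploit what's been learned; sometimes explore at random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        nxt, reward = step(state, action)
        best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
        # Standard Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best follow-up move.
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = nxt

# After training, the greedy choice at the start cell points toward the food.
print(max(ACTIONS, key=lambda a: q.get(((0, 0), a), 0.0)))
```

Run it and the learned policy heads for the food square – which is roughly all “learning to drive” means in this context.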
This is the technology – machine-learning AI – whose proponents advocate using it for business decision-making.
I say we should turn over business decision-making to goldfish, not machine learning AIs. They cost less and ask for nothing except food flakes and an occasional aquarium cleaning. They’ll even reproduce, creating new business decision-makers far more cheaply than any manufactured neural network.
And with what we’re learning about epigenetic heritability, it’s even possible their offspring will be pre-trained when they hatch.
It’s just the future we’ve all dreamed of: If we have jobs at all we’ll find ourselves studying ichthyology to get better at “managing up.” Meanwhile, our various piscine overseers will vie for the best corner koi ponds.
Which brings us to a subject I can’t believe I haven’t written about before: the Human/Machine Relationship Index, or HMRI, which Scott Lee and I introduced in The Cognitive Enterprise (Meghan-Kiffer Press, 2015). It’s a metric useful for planning where and how to incorporate artificial intelligence technologies, including but not limited to machine learning, into the enterprise.
The HMRI ranges from +2 to -2. The more positive the number, the more humans remain in control.
And no, just because a programmer was involved somewhere back in the technology’s history doesn’t mean the HMRI = +2. The HMRI describes the technology in action, not in development. To give you a sense of how it works (a rough code sketch follows the list):
+2: Humans are in charge. Examples: industrial robots, da Vinci surgical robots.
+1: Humans can choose to obey or ignore the technology. Examples: GPS navigation, cruise control.
0: Technology provides information and other capabilities to humans. Examples: Traditional information systems, like ERP and CRM suites.
-1: Humans must obey. Machines tell humans what they must do. Examples: automated call distributors, business process automation.
-2: All humans within the AI’s domain must obey. Machines set their own agenda, decide what’s needed to achieve it, and, if humans are needed, tell them what to do and when to do it. Potential examples: AI-based medical diagnostics and prescribed therapies, AIs added to boards of directors, Skynet.
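If you want to put the scale to work when inventorying a systems portfolio, here’s a rough sketch of what that might look like in Python. The enum names, the sample portfolio, and the idea of averaging the scores are my own illustrative assumptions, not part of the book’s formal treatment.

```python
# Illustrative only: encoding the HMRI scale so a portfolio of systems can
# be catalogued and tallied. The scale values mirror the list above; the
# sample portfolio entries are hypothetical.
from enum import IntEnum

class HMRI(IntEnum):
    HUMANS_IN_CHARGE = 2       # e.g. industrial robots, surgical robots
    HUMANS_MAY_IGNORE = 1      # e.g. GPS navigation, cruise control
    TECH_INFORMS = 0           # e.g. ERP and CRM suites
    HUMANS_MUST_OBEY = -1      # e.g. automated call distributors
    MACHINES_SET_AGENDA = -2   # e.g. fully autonomous AI decision-makers

portfolio = {
    "warehouse robots": HMRI.HUMANS_IN_CHARGE,
    "sales CRM": HMRI.TECH_INFORMS,
    "call-center ACD": HMRI.HUMANS_MUST_OBEY,
}

# A crude portfolio-level indicator: the average HMRI across systems.
average_hmri = sum(int(v) for v in portfolio.values()) / len(portfolio)
print(f"Average HMRI: {average_hmri:+.1f}")  # positive = humans still in control
```

The point isn’t the arithmetic; it’s that the question “who’s in charge here?” can be asked system by system, and answered with a number.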
A lot of what I’ve read over the years regarding AI’s potential in the enterprise talks about freeing up humans to “do what humans do best.”
The theory, if I might use the term “theory” in its “please believe this utterly preposterous propaganda” sense, is that humans are intrinsically better than machines with respect to some sorts of capabilities. Common examples are judgment, innovation, and the ability to deal with exceptions.
But judgment is exactly what machine learning’s proponents are working hard to get machines to do – to find patterns in masses of data that will help business leaders prevent the bad judgment of employees they don’t, if we’re being honest with each other, trust very much.
As for innovation, what fraction of the workforce is encouraged to innovate, is in a position to do so, and can make its innovations real? The answer: almost none, because even if an employee comes up with an innovative idea, there’s no budget to support it, no time in their schedule to work on it, and plenty of political infighting to navigate.
That leaves exceptions. But the most acceptable way of handling exceptions is to massage them into a form the established business processes … now executed by automation … can handle. Oh, well.
Bob’s last word: Back in the 20th century I contrasted mainframe and personal computing systems architectures: Mainframe architectures place technology at the core and human beings at the periphery, feeding and caring for it so it keeps on keeping on. Personal computing, in contrast, puts a human being in the middle, with the computer serving as a gateway to a universe of resources.
Machine learning is a replay. We can either put machines at the heart of things, relegating to humans only what machines can’t master, or we can think in terms of computer-enhanced humanity – something we experience every day with GPS and Wikipedia.
Yes, computer-enhanced humanity is messier. But given a choice, I’d like our collective HMRI to be a positive number.
Bob’s sales pitch: CIO.com is running the most recent addition to my IT 101 series. It’s titled “The savvy CIO’s secret weapon: Your IT team.”