In case you missed the news, Israeli scientists have taught a goldfish how to drive.
Well, not exactly. They placed it in a bowl with various sensors and actuators, and it learned to correlate its initially random movements with the ones that moved it toward food.
The goldfish, that is, figured out how to drive the way DeepMind figured out how to win at Atari games.
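If it helps to picture what that sort of trial-and-error learning looks like under the hood, here’s a minimal sketch: tabular Q-learning on a toy one-dimensional “tank,” where initially random moves get reinforced whenever they bring the agent closer to the food. The environment, rewards, and hyperparameters here are invented for illustration; the actual experiment and DeepMind’s Atari work are, of course, far more elaborate.

```python
# A minimal, illustrative sketch (not the researchers' actual setup): tabular
# Q-learning on a toy 1-D "tank", where initially random moves get reinforced
# whenever they bring the agent closer to the food.
import random

positions = list(range(10))   # toy 1-D world; the food sits at position 9
actions = [-1, +1]            # move left or right
q = {(s, a): 0.0 for s in positions for a in actions}

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Move, clamp to the tank walls, and reward progress toward the food."""
    new_state = max(0, min(9, state + action))
    reward = 1.0 if new_state == 9 else -0.01  # flake at the far end
    return new_state, reward

for episode in range(200):
    state = 0
    while state != 9:
        # Mostly exploit what has been learned so far, sometimes explore at random
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        new_state, reward = step(state, action)
        # Standard Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best move from the new state
        best_next = max(q[(new_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = new_state

# After training, the greedy policy "drives" straight toward the food
print([max(actions, key=lambda a: q[(s, a)]) for s in positions])
```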
This is the technology – machine-learning AI – whose proponents advocate using it for business decision-making.
I say we should turn over business decision-making to goldfish, not machine learning AIs. They cost less and ask for nothing except food flakes and an occasional aquarium cleaning. They’ll even reproduce, creating new business decision-makers far more cheaply than any manufactured neural network.
And with what we’re learning about epigenetic heritability, it’s even possible their offspring will be pre-trained when they hatch.
It’s just the future we’ve all dreamed of: If we have jobs at all we’ll find ourselves studying ichthyology to get better at “managing up.” Meanwhile, our various piscine overseers will vie for the best corner koi ponds.
Which brings us to a subject I can’t believe I haven’t written about before: the Human/Machine Relationship Index, or HMRI, which Scott Lee and I introduced in The Cognitive Enterprise (Meghan-Kiffer Press, 2015). It’s a metric useful for planning where and how to incorporate artificial intelligence technologies, including but not limited to machine learning, into the enterprise.
The HMRI ranges from +2 to -2. The more positive the number, the more humans remain in control.
And no, just because a programmer was involved somewhere back in the technology’s history doesn’t mean the HMRI = +2. The HMRI describes the technology in action, not in development. To give you a sense of how it works (there’s a rough sketch in code after the list, too):
+2: Humans are in charge. Examples: industrial robots, da Vinci surgical robots.
+1: Humans can choose to obey or ignore the technology. Examples: GPS navigation, cruise control.
0: Technology provides information and other capabilities to humans. Examples: Traditional information systems, like ERP and CRM suites.
-1: Humans must obey. Machines tell humans what they must do. Examples: Automated Call Distributors, Business Process Automation.
-2: All humans within the AI’s domain must obey. Machines set their own agenda, decide what’s needed to achieve it, and, if humans are needed, tell them what to do and when to do it. Potential examples: AI-based medical diagnostics and prescribed therapies, AIs added to boards of directors, Skynet.
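And here’s that rough sketch, in case you like seeing the scale as something you could drop into a planning script. The enum names, the example portfolio, and the “collective HMRI” average are my own shorthand for this post, not something you’ll find in the book.

```python
# A rough sketch of the HMRI scale as a lookup for cataloging systems.
# The enum names and the example classifications are shorthand for this post,
# not taken from The Cognitive Enterprise.
from enum import IntEnum

class HMRI(IntEnum):
    HUMANS_IN_CHARGE = 2        # industrial robots, surgical robots
    HUMANS_MAY_IGNORE = 1       # GPS navigation, cruise control
    TECH_INFORMS = 0            # ERP, CRM
    HUMANS_MUST_OBEY = -1       # automated call distributors, BPA
    MACHINES_SET_AGENDA = -2    # board-seat AIs, Skynet

# Hypothetical inventory of systems and where each sits on the scale
portfolio = {
    "welding_robot": HMRI.HUMANS_IN_CHARGE,
    "crm_suite": HMRI.TECH_INFORMS,
    "call_center_acd": HMRI.HUMANS_MUST_OBEY,
}

# The more positive the average, the more your humans remain in control
collective_hmri = sum(portfolio.values()) / len(portfolio)
print(f"Collective HMRI: {collective_hmri:+.2f}")
```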
A lot of what I’ve read over the years regarding AI’s potential in the enterprise talks about freeing up humans to “do what humans do best.”
The theory, if I might use the term “theory” in its “please believe this utterly preposterous propaganda” sense, is that humans are intrinsically better than machines with respect to some sorts of capabilities. Common examples are judgment, innovation, and the ability to deal with exceptions.
But judgment is exactly what machine learning’s proponents are working hard to get machines to do – to find patterns in masses of data that will help business leaders prevent the bad judgment of employees they don’t, if we’re being honest with each other, trust very much.
As for innovation, what fraction of the workforce is encouraged to innovate, is in a position to do so, and can make their innovations real? The answer is almost none, because even if an employee comes up with an innovative idea, there’s no budget to support it, no time in their schedule to work on it, and plenty of political infighting to navigate.
That leaves exceptions. But the most acceptable way of handling exceptions is to massage them into a form the established business processes … now executed by automation … can handle. Oh, well.
Bob’s last word: Back in the 20th century I contrasted mainframe and personal computing architectures: mainframe architectures place technology at the core and human beings at the periphery, feeding and caring for it so it keeps on keeping on. Personal computing, in contrast, puts a human being at the center and serves as a gateway to a universe of resources.
Machine learning is a replay. We can either put machines at the heart of things, relegating to humans only what machines can’t master, or we can think in terms of computer-enhanced humanity – something we experience every day with GPS and Wikipedia.
Yes, computer-enhanced humanity is messier. But given a choice, I’d like our collective HMRI to be a positive number.
Bob’s sales pitch: CIO.com is running the most recent addition to my IT 101 series. It’s titled “The savvy CIO’s secret weapon: Your IT team.”
Bob,
I’m glad you found value in relating your graduate education in biology to current business and technology challenges. I’d be surprised if there were many people who could intelligently explain epigenetic heritability. I’m not sure many technologists can do any better explaining AI.
I agree with your contention that there isn’t a lot of value in teaching computers to innovate, since innovation doesn’t truly happen very often. I see the greatest value of machines as unburdening humans from their boring, difficult, and error-prone work. I think it’s a stretch to believe that machines are going to develop innovative solutions on their own.
Keep on challenging the groupthink of the business and technology pundits. Even if they don’t appreciate your criticism they may chuckle once in a while 😉
I think I recall a story from Pohl about a resurrected human kept going so he could deal with the machines. Some corporations might do better if an AI came along, but avarice seems to require a human. As AI/ML evolves we can hope for help in making some decisions, though I’m not sure where the training data would come from for many business decisions. A few wrong decisions can end the organization, and few remain to capture the what and why. Look at Boeing: a string of bad decisions built on some terrible trade-offs has nearly killed the company – marginal profit squeezes that turned out to be very expensive decisions.
Are we on the verge of living in the world described by Vonnegut in Player Piano?
It isn’t hard to imagine a path from here to there.
Humans are the only species that actively works towards making itself redundant.
I would love elements of a world like that envisioned by Frank Herbert in the great Dune series, one where humans did not rely on computers for everything and in so doing, found ways to enhance and evolve themselves mentally and physically.
A trivial example is to think of how many telephone numbers one could recall from memory *before* the advent of the cellphone. Nowadays, so much knowledge seems to be externalized into machines. That is not necessarily a bad thing, but if people are not actively exercising their grey matter through critical thinking, many will find themselves wanting to be told what to do, how to think, etc. Sigh…
Artificial Intelligence is genuine stupidity.
IEEE Spectrum had an article that is just the tip of the iceberg: 7 ways AI fails.
https://spectrum.ieee.org/ai-failures
Yes, you can do some useful things with AI, neural networks, and related approaches, but blindly applying them to everything guarantees bad side effects, like the black swans of Taleb’s book.
At some point the government won’t be able to print enough money to bail you out of your mess, even if you are too big to fail and too dumb not to fail anyway.
The problem is not with AI per se; it’s with the people hyping and using it who are too stupid to see the problems with using it the way they intend.
Hi Bob – I think that your scale (-2 to +2) seems lacking at the negative end.
-2: AI is in charge and humans must obey … but if a medical AI diagnostic says you are getting something unpleasant done, I think you could still decline (even if that’s medically unwise).
Skynet (to me) seems to be a -3: humans actively compelled to obey, where the consequences for resistance are dire (and possibly result in a movie franchise)
– e.g., if a Skynet medical diagnostic says you need something unpleasant done, I would expect that to happen.
Enjoy your articles greatly.
Bob, when I read the article about the goldfish driving, and watched him steering his little cube of water wherever he wanted to go, I thought of you and your graduate work with fish. It would be interesting to correlate the fish’s hunting behavior in the car against its hunting behavior in the aquarium.
More than anything, it reminded me of early autonomous robots that had photovoltaic cells on them and were programmed to look for light. If they found it they stopped and charged; if they didn’t, they stopped for a random time, restarted, and continued to look, and so on. If we could fit the fish’s car with that behavior, it could drive itself to a good feeding point for the fish (right next to the light).
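Something like this, roughly – the sensor and motor calls below are placeholders standing in for whatever hardware those early photovore robots actually carried, and the thresholds and timings are invented for illustration:

```python
# A rough sketch of the light-seeking control loop described above.
# The sensor and motor functions are placeholders, not real hardware calls.
import random
import time

LIGHT_THRESHOLD = 0.8          # "bright enough to stop and charge" (arbitrary)

def read_light_sensor():
    """Placeholder: return a light reading between 0 and 1."""
    return random.random()

def wander(seconds):
    """Placeholder: drive in a random direction for a while."""
    time.sleep(seconds)

def charge(seconds):
    """Placeholder: sit still and soak up photons."""
    time.sleep(seconds)

for _ in range(20):                           # bounded here so the sketch ends
    if read_light_sensor() > LIGHT_THRESHOLD:
        charge(0.2)                           # found light: stop and recharge
    else:
        time.sleep(random.uniform(0.05, 0.2))  # stop for a random time...
        wander(0.1)                            # ...then restart and keep looking
```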
Superb article, as usual. However, there’s very little to differentiate between your +2 and 0. In both cases, it’s a case of human-machine “symbiosis” – the combination results in capabilities that neither one alone could have achieved.
Thus, remotely controlled surgical or industrial robots (I might also add bomb-disposal robots) keep the human safe from physical harm, or bypass the limitations of fat fingers when operating on tiny, delicate structures embedded deep within the body. Similarly, ERP/CRM systems provide you with distilled information, but you are still in charge – you have to act on this information. (I might also add the use of web search engines.)
People were disposing of bombs or making decisions with very limited data in the past, but using electronic augmentation – tools that are an extension of your body or your mind – is simply more convenient, powerful, and safe.
Interestingly, the critical data necessary to make decisions can sometimes be quite tiny. In WW2, an OSS agent in Paris – who became an oil analyst after the war – determined the success of Allied bombing runs that targeted the French railroads over the months prior to D-day using a simple parameter – the prices of oranges in Paris. The trains transported oranges from the south of France to Paris – if the price suddenly went up, it meant that the supply had been disrupted temporarily (until repaired). If not, the previous day’s raid had been unsuccessful. The Nazis thought that there was an entire network of French resistance spies keeping watch on the railroads, and spent an enormous amount of effort trying to locate this non-existent network.
I guess I didn’t explain the differences between HMRI levels as well as I’d thought.
Level 0 technology provides information (say, inventory levels), and, I agree, recommendations (re-order quantities based on a formula) that supply chain analysts can make use of. At level +2 humans tell the technology what to do.
What HMRI doesn’t do is provide a metric for comparing the overall effectiveness of one HMRI situation to another, which is what I think you’re looking for. For example, a call center probably runs better at HMRI -1 than at other levels – the ACD tells each agent when to connect and which caller to connect to.
Compare that to the supply chain situation. Algorithmically arrived at recommended order quantities are useful, but supply chain analysts have to take additional, non-formulaic factors into account when making the final decision about how much of an item to order. I’d expect HMRI 0 to be optimal for this situation.
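To make the level-0 case concrete, here’s a rough sketch: the system computes a recommendation from textbook reorder-point and economic-order-quantity formulas, and the analyst makes the final call. The item numbers and function name are made up for illustration.

```python
# A sketch of HMRI level 0 in a supply-chain setting: the system computes a
# recommendation from textbook formulas; the human analyst decides.
# The item data below is invented for illustration.
import math

def recommended_order(daily_demand, lead_time_days, safety_stock,
                      on_hand, order_cost, unit_holding_cost):
    """Return (reorder_point, suggested_quantity) -- advice, not a command."""
    reorder_point = daily_demand * lead_time_days + safety_stock
    # Economic order quantity: sqrt(2 * annual demand * order cost / holding cost)
    eoq = math.sqrt(2 * daily_demand * 365 * order_cost / unit_holding_cost)
    suggested_qty = round(eoq) if on_hand <= reorder_point else 0
    return reorder_point, suggested_qty

rop, qty = recommended_order(daily_demand=40, lead_time_days=7,
                             safety_stock=100, on_hand=350,
                             order_cost=75, unit_holding_cost=2.5)
print(f"Reorder point: {rop} units; suggested order: {qty} units")
# At level 0 the analyst weighs the non-formulaic factors (supplier trouble,
# promotions, politics) and may override this number entirely.
```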
Does this help clarify?
I’ve been doing a lot of reading and thinking lately about autonomous lethal weapon systems, which aren’t science fiction but are already in use today. (Land mines are an early, simple, and tragically indiscriminate example.) I expect more in the way of international treaties regulating how they are used on the battlefield (for those who care to follow the rules, anyway). It’s interesting to think of your HMRI in this context. Similarly, Society of Automotive Engineers standard SAE J3016 defines levels of driving automation for vehicles; I wonder how it might apply to AI on the battlefield as well.
Thanks. I wonder how this will play out: If all combatants are robots I’d think autonomous lethal weapons systems would be more humane (although we’d need to redefine “lethal”). How we’d get from here to there would be interesting. Whether any nation would restrict its use of the technology to robot vs robot battles is questionable. Okay, no, it isn’t – no nation would accept a restriction like this.
Makes my head hurt.
I read a paper from the U.S. Army War College suggesting that, under the humanitarian rules of armed conflict (which exist and are enforced by treaties), it might be a war crime *not* to use autonomous lethal weapon systems if it can be shown that doing so poses less risk to civilian populations than the alternatives.