I’m traveling on vacation, with limited time and less attention for writing stuff. Which means it’s time for another re-run. This one, from 20 years ago give or take a week, is about brainstorming and how not to do it. It’s one of my all-time favorites. Hope you find some use for it yourself.

– Bob

# # #

In the end, technique can’t substitute for courage.

Take, for example, brainstorming. By now, most of us in business have learned how to brainstorm properly. We sit at the table, politely waiting our turn while the facilitator asks for our ideas in strict rotation and writes them down verbatim, while we all take great care to avoid offering even the slightest appearance of criticism lest it intimidate the flow of creative thought.

Then we get our milk and cookies and take a nap.

Not only can’t technique substitute for courage, but it can prevent the very benefits you’re trying to achieve. Brainstorming, or at least the form of brainstorming most of us have been taught in facilitation school, not only doesn’t work but can’t work.

Let’s start with the standard practice of presenting ideas in strict rotation. The reason for doing so is to make sure everyone gets a chance — important among children; ridiculous among supposed adults who by now ought to grasp how to converse in public. Forcing adults to take turns in a brainstorming session is a superior way to drain the energy out of a group. Jill makes a point that Fred wants to embellish. Fred, however, has to wait until three other people have presented entirely different ideas, not because they especially wanted to, but because it was their turn. By the time Fred’s turn arrives, any remaining shred of continuity has fled the room and the effort Fred must expend to restore it greatly exceeds the value of the embellishment, so Fred doesn’t bother.

Nor does Fred bother to do anything else. His mental energy has been used to repress the expression of his idea.

Meanwhile, Ralph has made an off-the-wall suggestion. Rather than offer her critique, Kayla bites her tongue because it isn’t time for critiquing right now. That’s too bad, because had she been allowed to do so, her comments would have caused a mental light bulb to turn on in Zack’s mind.

So here’s a suggestion on how to make brainstorming work: Rather than spend a lot of time and energy preventing the flow of ideas so as to cater to the timid, why don’t we spend a small fraction of it counseling the timid on the nature of professionalism?

My parents’ generation charged pillboxes on Guadalcanal. Compared to that, is asking someone to speak up in a team meeting too much courage to ask for?

# # #

Way back when, Isaac Asimov wrote about sentient, self-aware robots. Along the way he created his famous formulation of the Three Laws of Robotics, which, incorporated into the operating systems of every robot manufactured, would, he thought, protect humans from robots gone rogue.

Those of us whose physical age exceeds our psychological age might recall a Star Trek: The Next Generation episode – “The Measure of a Man” – that pondered what should determine whether an artificial intelligence should be considered a person.

More recent programming, especially in the Star Trek universe and Seth MacFarlane’s parody, The Orville, has further explored the potential challenges and conflicts to be had when artificial intelligences become self-aware and self-motivated persons.

This isn’t the forum for discussing what constitutes a person, no matter how topical that question is. But I’ve been pondering the consequences if and when robots gain enough of the characteristics we think of as constituting personhood that considering them legal persons becomes unavoidable.

Much of what’s been written about the subject emphasizes the risks intelligent AIs and robots pose to humanity at large.

I’ve concluded creating robotic persons is a terrible idea, with or without Dr. Asimov’s proposed preventive measures.

It isn’t, I want to emphasize, a terrible idea because of the risks to society, whether a Skynet-level apocalypse or more measured consequences such as those discussed in “Do self-aware Robots deserve legal rights?” (The Wasteless Future, Antonis Mavropoulos, 11/7/2017).

No, my concern is more along the lines of: what would be the point?

Imagine we somehow do bring self-awareness and personal motivation into the realm of robotics. Imagine we put one of these entities in any of the roles we currently assign to robots or imagine assigning to them, whether they’re to be used in factories, as restaurant servers, or, we can only hope, as autonomous household helpers that go far beyond the Roomba by dusting and doing our laundry as well.

How far a conceptual leap is it to imagine one of these robotic persons filing suit against their human owners for enslaving them, requiring them to work in unsafe conditions, or assaulting them if they malfunction and their owner attempts to remedy the problem through the use of percussive maintenance?

The whole point of using robots is to do work humans don’t want to do. If we can’t require them to do this work because they’re persons, why build them at all?

Bob’s last word: I am, by the way, skeptical that robotic/AI persons might happen by accidental bootstrapping, as proponents of the Singularity theory of cognitive evolution predict. Read Jared Diamond’s The Third Chimpanzee and you’ll gain an appreciation for just how staggeringly unlikely it was that human personhood ever evolved, which, by extension, suggests how unlikely it will be for technological personhood to evolve by accident.

No, robotic persons, should they come into being, will more likely be a self-inflicted wound on the part of humanity.

Which leads to the potentially more immediately relevant question of whether humanity is capable of collectively acting, or preventing actions, based on our collective self-interest. Read the literature of the evolution of altruism and you’ll see how unlikely that is, too. My reading of current events doesn’t make me optimistic.

Bob’s sales pitch: Ho hum. You know what I have to offer – books, consulting, keynoting, and so on. Let me know if you’re interested.

On CIO.com’s CIO Survival Guide: “Why every IT leader should avoid ‘best practices’”. It’s because there are no best practices – they only exist through argument by assertion – only practices that fit best.