Crowdsourced prediction is the Next Big Thing.
It’s a shoe-in, because it combines last year’s Next Big Thing — the Cloud — with ridiculing experts, a perennial crowd-pleaser.
Crowdsourced prediction is based on a simple premise — that crowds are wiser than experts. Take InTrade, which lets people bet on such matters as which Republican presidential candidate will become the nominee (yes, it’s really just an on-line bookie). Those who place their faith in markets insist that on-line betting on these outcomes delivers more accurate results than the experts do. This is plausible, because in many domains experts predict more poorly than random chance. For example:
2009 counted as a good year for actively managed mutual funds. According to “‘Active’ Did Better in ’09,” (Annelena Lobb, 1/6/2010, The Wall Street Journal), their performance improved markedly that year — almost half outperformed simple index funds.
Pure random chance would have resulted in exactly half underperforming — the experts do worse relying on their expertise than they’d do by relying on a Ouija Board.
These aren’t stupid people, and they do know their subject. How could they do such a bad job?
My best guess: Business success entails luck as well as skill. Because investment experts can’t predict luck, they ignore it, substituting patterns they perceive that aren’t actually there.
These substituted patterns are convenient narratives, not empirically tested theories, which means they’re more likely to be wrong than right.
The issue goes well beyond stock-pickers. Many other sorts of experts also rely on unsubstantiated narratives to support their predictions — among them political commentators and, here in the field of information technology, market analysts.
In my expert opinion, of course.
Which is why Crowdsourcing is the new savior of the predictions business. And yet, if the Crowd makes a prediction that’s awesomely accurate today, how can it change tomorrow? InTrade’s predictions, for example, seem to change on a daily basis.
Might the purported accuracy of crowdsourcing be nothing more than circular logic — accurate because we define “accurate” as “what the crowd is saying”?
The study I’ve never found, but which would answer this question quite well, is of horse racing.
If crowdsourcing works as advertised, were we to tabulate the results of all horse races, we’d find that exactly one-third of all horses that ran at 2:1 odds won. Otherwise, the so-called wisdom of crowds is just another in a long line of appealing narratives with nothing to support them beyond their natural appeal.
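The tabulation described here is mechanical enough to sketch. Assuming a list of race results recording each horse’s posted odds-against and whether it won (the data and function names below are hypothetical, for illustration only), a calibration check compares the win probability the odds imply against the win rate actually observed:

```python
from collections import defaultdict

def calibration_table(results):
    """Group race results by posted odds-against and compare the
    implied win probability with the observed win rate.

    `results` is a list of (odds_against, won) pairs, e.g. (2, True)
    for a horse that went off at 2:1 and won."""
    tallies = defaultdict(lambda: [0, 0])  # odds -> [wins, runs]
    for odds_against, won in results:
        if won:
            tallies[odds_against][0] += 1
        tallies[odds_against][1] += 1

    table = {}
    for odds_against, (wins, runs) in tallies.items():
        implied = 1 / (odds_against + 1)  # 2:1 against implies 1/3
        observed = wins / runs
        table[odds_against] = (implied, observed)
    return table

# Toy data: three 2:1 horses, one winner -- the one-third win rate
# the wisdom-of-crowds hypothesis predicts.
toy = [(2, True), (2, False), (2, False)]
print(calibration_table(toy))  # implied 1/3 vs. observed 1/3
```

If the crowd is as wise as advertised, implied and observed should agree at every odds level; any systematic gap is the crowd’s bias, laid out in a table.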
As crowdsourcing depends on Cloudsourcing, let’s move on, to a prediction: “Why aren’t we in the Cloud?” will supersede “Why aren’t our factories in China?” as the most-often asked rhetorical question in business.
It’s time, because those who have been involved in making Cloud-based computing work have started to figure out that its economics are just as situation-dependent as those of offshoring.
<Digression> Whether they offshored manufacturing or programming, business decision-makers focused on raw price more than the whole picture of total cost, plus risk, plus the increased complexity of managing operations halfway around the world, with all the attendant differences in language, culture, public policy, and simple clock time. It was all about cheap labor.
That’s the case even though, when it comes to manufacturing, it appears that direct labor contributes astonishingly little to the cost of manufactured items (in the case of automobiles, roughly 10%). As for software development, cost is rarely as important as such factors as reliable on-time delivery, code quality, and fit to function.
Which is why offshoring ended up disappointing its clients far more often than you likely read in the cheerleading articles that dominate the business press.</Digression>
Tally up the Cloud’s direct costs and the decision to go there is far from no-brainer territory, especially for companies big enough to need such niceties as identity management (Active Directory or an alternative) and server-managed print queues.
The Cloud shines brightest (there’s a visual!) when processing loads are unpredictable and highly variable. That’s when its ability to add and shed capacity more or less on demand is hugely advantageous.
For small and mid-sized companies, add the economies of scale that come from making technology management Someone Else’s Problem. For new, growth-oriented companies, also add cost avoidance, from not having to build a data center.
So here’s some advice you can use (finally!): Be prepared. When asked “Why aren’t we in the Cloud?” answer, “We are, wherever it makes financial and strategic sense.”
Or, if it doesn’t, answer, “We’re on the lookout for opportunities. So far, much to our surprise, we’d have to spend more to go there.”
I’m sure you wouldn’t confuse precision with accuracy. A thousand people guessing can be highly precise but completely inaccurate. That confusion is the basis for crowdsourcing.
Bob, a “niggle” and a question:
Niggle: it is shoo-in . . . as is “shoo, git out o’ here” isn’t it?
Question: What is your conclusion about the validity of the data in the book “Wisdom of Crowds?” As I recall it predates “crowdsourcing” by a few (five or six) years.
“Shoe-in” is described as a “common misspelling.” If it’s so common, that means a crowd spells it that way, which means it must be right. Right?
Okay, that’s a stretch.
To answer your question, I’ve only read summaries. My best guess is that, as is so often the case, crowdsourcing will prove useful for several classes of problem but will be (and is being) touted as a panacea for all problems.
The book lists four preconditions for crowdsourcing, one of which is independence (crowd members can’t be influenced by each other). InTrade violates this precondition, as does the stock market, both of which are frequently cited as crowdsourcing exemplars.
Also, the term is and will be misused in all sorts of ways (for example, as a cheap source of labor, like newspapers farming out Sarah Palin’s emails).
I’m sure the data in the book are accurate. I’m equally sure we’re at the beginning of a lot of learning that will have to happen before we figure out what it all means.
“Pure random chance would have resulted in exactly half underperforming” — not quite. Since actively managed funds pay management fees and transaction costs that index funds do not, pure random chance would have exactly half of the active funds underperforming by the amount of such fees. Therefore “almost half outperforming” is pretty much what you should expect any year. If 2009 was unusually good that way, then actively managed funds are a truly bad investment.
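The fee-drag arithmetic in this comment is easy to simulate. Assuming each fund’s gross return is the index return plus symmetric luck, minus an expense ratio (all the numbers below are illustrative assumptions, not real fund data), well under half of the funds end up beating the index:

```python
import random

random.seed(1)

INDEX_RETURN = 0.07   # illustrative market return
FEE = 0.01            # illustrative active-fund expense ratio
NOISE_STDEV = 0.05    # illustrative spread of fund luck
N_FUNDS = 100_000

# Each fund's gross return is the index plus symmetric luck;
# its net return subtracts the fee.
beat_index = 0
for _ in range(N_FUNDS):
    gross = INDEX_RETURN + random.gauss(0, NOISE_STDEV)
    net = gross - FEE
    if net > INDEX_RETURN:
        beat_index += 1

# Under pure luck, half beat the index BEFORE fees; after fees,
# only the funds whose luck exceeded the fee come out ahead.
print(beat_index / N_FUNDS)
```

With these numbers the winning fraction lands around 42%, which is why “almost half outperformed” is roughly what a coin flip minus fees would deliver in any year.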
Crowdsourcing? S. M. Stirling put it very succinctly: “I do not believe in the collective wisdom of individual ignorance.” I think he was referring to elections, but it applies universally.
Stock prediction accuracy: Stock performance is based at least partly on what amounts to the insanity of investors, the exact forms of which will continue to be unpredictable until Hari Seldon invents psychohistory. Don’t believe investors are insane? Then why, when a company’s stock dividend misses the stock-guessers’ predictions, do investors punish the company instead of introducing the “analysts” to the charming practice of defenestration?
“If crowdsourcing works as advertised, were we to tabulate the results of all horseraces we’d find that exactly one-third of all horses that ran with 2:1 odds won.”
One caution: Parimutuel betting being what it is, a 2:1 payout is not really a 2:1 payout, since some of the money went to the place and show bettors.
The people who are probably best at crowdsourcing are the ones who set the odds for sporting events, like the line on a football game. The house isn’t trying to predict the outcome; it’s trying to make sure the same amount of money is bet on each team, which guarantees the bet taker about 5% of the tote no matter who wins. That is a true study of crowd psychology.
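The balanced-book arithmetic in this comment can be made concrete. Assuming conventional -110 pricing (bettors risk $110 to win $100 — standard sports-book terms, used here illustratively), a book with equal money on both sides keeps a fixed cut regardless of the outcome:

```python
def bookmaker_profit(bet_each_side, price=110, payout=100):
    """Guaranteed profit for a book with equal action on both
    sides at -110 pricing.

    Whichever side wins, the book pays winners payout/price of
    their stake and keeps the losing side's stakes."""
    total_handle = 2 * bet_each_side
    winnings_owed = bet_each_side * (payout / price)
    profit = bet_each_side - winnings_owed  # losers' money minus payouts
    return profit, profit / total_handle

# $110,000 bet on each team: the book keeps $10,000 either way,
# about 4.5% of the $220,000 handle.
profit, margin = bookmaker_profit(110_000)
print(profit, margin)
```

At standard -110 terms the guaranteed take works out to roughly 4.5% of the handle, close to the 5% figure the comment cites — and it holds only if the line successfully splits the crowd’s money in half.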
Ah, but the topic is crowdsourcing as a way to get more accurate predictions. You are, of course, right about the payout when you bet on a winning horse. The concept still holds, though: if crowds are as wise about the future as they’re supposed to be, the betting odds should match the actual odds.
Just as it should rain sixty out of every hundred days a meteorologist says there’s a 60% chance of rain.
Comments are closed.