We need reliable bot detectors.
The irresistible subject is Facebook, about which you’ve probably read more than enough to make your eyes glaze. Nonetheless, bear with me, because, as pointed out in this space not long ago, the central problem isn’t data privacy or policy enforcement failures.
No, the central problem here is bots, not Facebook’s data-usage policies and violations thereof. And the reason bots are the central problem is that bots scale. Human beings don’t.
And just as Twitter’s failure to implement and deploy bot detectors directly led to zillions of bot-amplified tweet-storms during the 2016 election, so bots are the reason 50 million Facebook subscribers were caught up in the latest fiasco.
Bots and their detection and prevention are the big under-reported issue here, because until it’s addressed, even the most ingenious terms-of-use policies will have all the impact of an eye dropper in a forest fire.
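To make the point concrete, here is a minimal sketch of what even crude bot detection could look like. This is my own toy heuristic, not any platform's actual method, and every name and threshold in it is an assumption: humans browse at human speed, so flag any account whose page requests arrive faster than a person could plausibly click.

```python
# Toy bot-detection heuristic (illustrative only, not a real platform's method):
# flag accounts that request pages faster than a human plausibly could.

HUMAN_MIN_SECONDS_BETWEEN_PAGES = 1.0   # assumed lower bound for a person

def flag_bots(request_log: list[tuple[str, float]]) -> set[str]:
    """request_log: (account_id, timestamp_in_seconds) pairs, in time order."""
    last_seen: dict[str, float] = {}
    flagged: set[str] = set()
    for account, ts in request_log:
        if account in last_seen and ts - last_seen[account] < HUMAN_MIN_SECONDS_BETWEEN_PAGES:
            flagged.add(account)      # two requests within one second: not human
        last_seen[account] = ts
    return flagged

log = [("human1", 0.0), ("bot7", 0.0), ("bot7", 0.1), ("human1", 5.0), ("bot7", 0.2)]
print(flag_bots(log))   # → {'bot7'}
```

Real detectors are far more sophisticated, of course, but the point stands: the signal is there for anyone motivated to look for it.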
Even if you don’t use Facebook, you and the business you support might nonetheless be on the front lines of the war against the bot apocalypse.
Bots scale. Humans don’t. That’s at the core of Facebook’s data breach, because the initial breach wasn’t a breach at all. A researcher paid a group of Facebook users to take a personality test and to share personal information … a perfectly legal transaction.
Then came the bots, in the form of a crawler that, starting with this list of identified Facebook users, navigated their networks so as to harvest information from 50 million users who hadn’t given their permission.
This is the nature of social networks: They are networks, which means that from any node you can navigate to any other node.
If the aforementioned researcher were to personally try to harvest data from 50 million connected Facebook subscribers, my back-of-the-envelope calculations say Facebook would have ceased to exist centuries before he finished the job.
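“Centuries” isn’t an exaggeration. Here’s my own back-of-the-envelope arithmetic, with every number an assumption of mine rather than a figure from the record:

```python
# Illustrative back-of-envelope estimate (assumed numbers, not reported ones):
# how long would one human need to harvest 50 million profiles by hand?

profiles = 50_000_000          # affected Facebook users
minutes_per_profile = 5        # assumed time to open and copy one profile
hours_per_day = 8              # assumed working day
work_days_per_year = 250       # assumed working year

minutes_per_year = hours_per_day * 60 * work_days_per_year
years = profiles * minutes_per_profile / minutes_per_year
print(round(years))            # → 2083, i.e. on the order of two millennia
```

Shave the per-profile time down to a single minute and you’re still looking at roughly four centuries of full-time clicking.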
But add bots to the mix and you get a very different result. They can crawl the nodes of a network orders of magnitude more quickly than a human can. That’s how they’re able to harvest personal information from millions after only receiving permission from a relative handful. Facebook purportedly disabled the ability to harvest friend data from its API in 2015. All this means is that instead of invoking the API, bots have to screen-scrape instead, which in turn means the bot is impersonating a human being.
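The crawl itself is nothing exotic. A minimal sketch, using a hypothetical toy graph rather than Facebook’s real API, shows why a small consenting seed set is all a bot needs: breadth-first traversal reaches everyone connected to the seeds in a handful of hops.

```python
# Sketch of a friend-network crawl (toy data, not Facebook's actual API):
# breadth-first traversal from a few consenting seed users.
from collections import deque

def harvest(friends_of: dict[str, list[str]], seeds: list[str]) -> set[str]:
    """Return every user reachable from the consenting seed users."""
    reached = set(seeds)
    queue = deque(seeds)
    while queue:
        user = queue.popleft()
        for friend in friends_of.get(user, []):
            if friend not in reached:      # this friend never gave permission
                reached.add(friend)
                queue.append(friend)
    return reached

# Toy network: one consenting user fans out to five more.
graph = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": ["frank"],
}
print(sorted(harvest(graph, ["alice"])))
# → ['alice', 'bob', 'carol', 'dave', 'erin', 'frank']
```

One permission, six harvested profiles. Scale the graph up to Facebook’s size and the ratio the article describes, a “relative handful” of permissions yielding 50 million profiles, falls right out.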
Add this: Like it or not, we’re rapidly mastering the discipline predicted in Isaac Asimov’s Foundation Trilogy. He called it “psychohistory,” and its practitioners knew so much about human psychology that they could manipulate people to do just about anything. Asimov optimistically made psychohistorians a secret, benevolent group. Unsurprisingly, our actual psychohistorians are using their techniques to create robotic human impersonators that manipulate actual humans more for power and profit than the greater good.
Why would we expect anything else?
If you’re wearing your business/IT priority-setter hat right now, my best advice is, sadly enough, don’t unilaterally disarm. Your competitors are taking advantage of these techniques and technologies to sell more products and services, or soon will be. From this perspective you’re in an arms race. If you aren’t actively monitoring developments in these areas and working with the business strategy team to see how you can profit from them, it won’t be long before you’re replaced by someone who understands these matters.
But if you’re wearing your human-who-doesn’t-want-the-bot-apocalypse hat, you might wonder why Facebook, which is investing heavily in artificial intelligence research and development, doesn’t devote more of its R&D budget to bot detection … like, for example, any of it?
My guess: Facebook is investing heavily in human impersonation. It’s in the bot business … chatbot technology, for example … so why would it also develop bot detection technology?
Especially when its customers … businesses … see direct financial benefits from being able to deploy convincing chatbots and other human impersonations and no obvious profit from detecting such things.
Because make no mistake about it, you might be a Facebook user, but you aren’t a Facebook customer. Facebook follows the standard media business model. As pointed out in this space back in 2002 in the context of newspapers and television, when it comes to most media, you aren’t the customer. You’re the product. And in the world of business, it’s the customer who’s always right.
Products like us enjoy no such privileges.
Excellent, as usual!
This is why no serious person with any self-respect would have ANYTHING to do with ANY form of “social media”!
It feels like you’ve described “Second Foundation” meets “Aliens”. An original analysis both perceptive and quite troubling.
Yet, as an experienced programmer, I still believe computers are dumber than doorknobs. A program is just a tool written by humans for humans. As such, it can be used as a tool to manipulate the emotions, but with far greater leverage when amplified by a network of millions, rather than the dozens of a tribe 15,000 years ago.
15,000 years ago, the voice of the demagogue was a minority, balanced by the diversity and experience of the rest of the tribe and the family unit.
Suppose the accidental hypnotizing of an audience member by a hypnotist is an example of “inferred communication.” Then, to me, the question is: can the rest of us quickly, and at scale, find ways to respond to the inferred communication of bots that is, in reality, addressed to us, even as those bots are literally speaking to others, and that brings manipulative force to bear on emotions we feel and emotions we don’t?
This same theory applies to US style health insurance. In the relationship between ourselves, our doctors, and our insurance companies, the insurance company is the customer, not us.
That being said, I don’t see how further socializing health care is going to make it better. In that situation, the government is the customer. The only plausible solution is to go back to a “fee for service” situation, where people pay direct costs for routine services, and use insurance as a backstop. But, recent history demonstrates people don’t want this.
Coming full circle back to social media, would the public ever pay for private, curated, open/verifiable social media as an alternative to the “free as in beer” style we have now? Probably not.
I’ve made this point about healthcare numerous times, except that it’s the caregiver, not the insurance company, that’s the customer, because it’s the caregiver who makes the buying decision. Then again, insurance companies decide whether to cover the caregiver’s prescriptions, so they influence the buying decision too.
Not sure where you’re getting the socialized medicine bit, though. Right now we have private insurance that in some cases is partially subsidized by government, and private sector caregivers except for the VA. I don’t know whether socialized medicine would fix healthcare because there are so many different thoughts about what’s broken. I suspect socialized medical insurance and/or socialized medical care would fix one glaring problem: the lack of price transparency. Whether the cure would be worse than the disease is another question entirely.
And meantime, the transition from our current system to a socialized one would pretty much tank the entire U.S. economy unless it was managed very, very carefully.
To your last point: I doubt it. There are people willing to pay for entertainment, as HBO will attest, but most of us are used to the media model, where we get our content free in exchange for being sold as the entertainment company’s product.
And here’s another question: what’s an aspiring media publisher to do?
Last year, I left the company I worked for, in order to grow a media startup. We are growing, slowly, although the term “non-profit” still applies to our for-profit venture.
Here’s the thing: about half of our traffic comes from Facebook. If we were to close our Facebook page, we would (I think) stop growing, and pretty soon we’d disappear.
Do I want to leave Facebook? Absolutely … and the recent news, and articles like this one, make me want to leave even more.
But at this point, I don’t see any feasible way to do so.
Thoughts? Nothing profound. Enforce extra-secure password requirements. Make sure your internal systems are properly patched and updated. Conduct regular phishing simulations. In other words, make it hard for anyone to gain unauthorized access to your site.
Monitor site content to make sure nobody gets through your precautions and posts something untoward on your site. Google your site name from time to time to make sure nobody issues counterfeit posts.
And most of all, scrub your content so it’s unassailable.
That’s all I have. Maybe another commenter will have a thought or three?
A big takeaway I got from Bob’s article was the gathering of info to profile populations. Another was the placement of ads, fake news, and fake opinions to distort on-line reality. If your concern has to do with ethics, morality, and societal consequences, then I say wait 6 to 18 months to see what actions Facebook and governments take to address the situation – your information has already been taken.
But, as far as placing ads in Facebook goes, maybe it’s best to check with Facebook to see exactly how your information is being used now. Then, you can make a mindful decision.
As Golden State Warriors’ center Zaza Pachulia says, “Nothing easy.”