
Insecurity. It isn’t just for psychologists anymore


Imagine for a moment that a gang of bank robbers decided to target the big guys — Citi, JPMorgan Chase, Bank of America, Wells Fargo — you know, the ones where a billion dollars is petty cash.

The robberies always use the same basic techniques, and the amounts stolen are starting to add up.

Plus, it’s embarrassing. But so far nobody has managed to catch the culprits.

Do you think these companies would have the wherewithal to take care of the problem?

Listen to the apostles of capitalism and you might think so. And yet, in the contest between world corporatism and cybercriminals, the cybercriminals aren’t just winning. They’re winning with impunity, so much so that InfoWorld’s Roger Grimes — not the kind of person you’d call a hysteric — is using words like “crisis” and “catastrophe” to describe the situation.

Now I ain’t no expert. And as regular readers know, I try to avoid the grand American inverse correlation between knowledge and strength of opinion, so I’m not claiming to have the solution, or even a solution.

Just some notions. Like these two for all corporations:

  • Spend more. No, you can’t solve problems by throwing money at them. You also can’t solve them by refusing to spend money on them.

Target, for example, expects its data breach will cost it something like a billion dollars in direct costs, and that doesn’t include damage to its brand and lost customer loyalty. And Target’s cybersecurity wasn’t all that much worse than average.

Its cybersecurity budget? Do some Googling and back-of-the-envelope scratching (I couldn’t track down the number) and you’ll probably arrive at a number along the lines of $125 million. Do the math.
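Doing that math (a back-of-the-envelope sketch; both figures are the rough estimates above, not audited numbers):

```python
# Back-of-the-envelope math using the rough estimates above (not audited figures).
breach_cost = 1_000_000_000      # Target's estimated direct breach cost, ~$1 billion
security_budget = 125_000_000    # guesstimated annual cybersecurity spend, ~$125 million

print(f"One breach cost about {breach_cost / security_budget:.0f} years of security budget.")
# -> One breach cost about 8 years of security budget.
```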

  • Practice identity management 101: I don’t have a statistically valid sample, but I’m invited into enough companies to consider this conclusion reliable: way too many companies are way too sloppy about identity management.

We’re talking about the basics, not anything fancy. Lots of companies provision new employees by “making her like him” instead of by defining access rights and restrictions by role. Way too many add rights as employees take on new responsibilities without removing the ones they don’t need anymore.

This isn’t complicated. Just time-consuming. Also, silo-busting, because HR should be the hub, not IT. After all, every hire, transfer, promotion and termination flows through HR, and these are the exact events that should trigger changes in rights and restrictions.
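To make the basics concrete, here’s a minimal sketch of role-based provisioning driven by HR events. The role names and rights are invented for illustration, not drawn from any particular product; the point is the shape of the thing — HR events drive the calls, IT supplies the role-to-rights mapping.

```python
# Minimal sketch of role-based provisioning driven by HR events.
# Role names and rights are invented for illustration.

ROLE_RIGHTS = {
    "accounts_payable_clerk": {"erp.invoices.read", "erp.invoices.post"},
    "sales_rep":              {"crm.accounts.read", "crm.opportunities.write"},
    "hr_generalist":          {"hris.records.read", "hris.records.update"},
}

employee_rights = {}  # employee_id -> set of rights

def on_hire(employee_id, role):
    """New hires get exactly their role's rights -- not a copy of a coworker's."""
    employee_rights[employee_id] = set(ROLE_RIGHTS[role])

def on_transfer(employee_id, new_role):
    """Transfers replace the old role's rights instead of accumulating them."""
    employee_rights[employee_id] = set(ROLE_RIGHTS[new_role])

def on_termination(employee_id):
    """Terminations revoke everything, immediately."""
    employee_rights.pop(employee_id, None)

# A clerk who transfers to sales loses the accounts-payable rights automatically.
on_hire("e1001", "accounts_payable_clerk")
on_transfer("e1001", "sales_rep")
assert "erp.invoices.post" not in employee_rights["e1001"]
```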

Corporations can certainly do better when it comes to protecting their cyber assets. The cyberprotection industry worries me more. In the aggregate (truth in advertising: I’m a Dell employee, and elsewhere at Dell we have information security products and consultants, so in a sense “they” is “we”), the cyberprotection industry has more money to spend on defense than the bad guys have to spend on offense.

Yes, offense is easier. And yet, if everyone involved pooled their knowledge and resources …

Phishing attacks are the biggest source of security breaches. Couldn’t, for example, IBM put Watson on the hunt? It’s a classic big-data-analytics problem. Even without creating a public repository for everyone in the world to send phishing emails they receive, IBM employs enough people to get this started.

If Watson-style technology can spot credit card fraud, surely its analytics can spot phishing attacks as well.
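For what it’s worth, here is a toy sketch of phishing detection treated as a text-classification problem, using scikit-learn on a handful of made-up messages. A Watson-scale effort would obviously train on millions of real messages and far richer signals (headers, links, sender reputation); this only shows the shape of the approach.

```python
# Toy sketch of phishing detection as text classification (scikit-learn).
# A real system would use millions of labeled messages and richer features
# (headers, URLs, sender reputation), not just the body text shown here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password here immediately",
    "Wire transfer required today, click this link to confirm your credentials",
    "Attached is the agenda for Thursday's project status meeting",
    "Lunch order reminder: reply with your sandwich choice by noon",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspect = ["Urgent: confirm your password to avoid account suspension"]
print(model.predict(suspect))  # likely [1] on this toy data
```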

Here’s another: Stop with signatures already and deal with behavior. As in, the problem with computer viruses is that they make computers do things the computers’ owners don’t want them to do.

I know I’m going out on a limb here on the strongly-held-opinion-correlated-with-ignorance front. Still, bear with me.

What does malware do? It: wipes hard drives; sends out data without a triggering keystroke or mouse command; updates files and databases without a triggering keystroke or mouse command; sends out massive amounts of email without a triggering keystroke or mouse command …

How hard can it be to write features into the OS kernel that monitor for these sorts of malware tells? Pop a big message onto the screen warning users in plain English about what their computer has been instructed to do and ask if it’s something the user wants it to do.
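Here’s a toy sketch of that idea, assuming invented event names and a crude “was the user recently active?” test. Real enforcement would have to live in the OS kernel, as suggested above, and would be far subtler than this.

```python
# Toy sketch of behavior-based "malware tell" detection.
# Event names and the idle threshold are invented; real enforcement would
# live at the OS/kernel level and be far subtler.
import time

SUSPICIOUS = {
    "wipe_disk":        "erase an entire drive",
    "bulk_upload":      "send a large amount of data to an outside server",
    "mass_mail":        "send out a large batch of email",
    "bulk_file_update": "rewrite many files at once",
}

last_user_input = time.time() - 3600  # simulate an idle machine; real hooks would update this

def user_recently_active(window_seconds=5.0):
    return (time.time() - last_user_input) < window_seconds

def check_action(action):
    """Allow routine actions; ask the user, in plain English, about malware tells."""
    if action not in SUSPICIOUS or user_recently_active():
        return True
    answer = input(f"A program wants to {SUSPICIOUS[action]}. Allow it? [y/N] ")
    return answer.strip().lower() == "y"

if check_action("mass_mail"):
    print("proceeding")
else:
    print("blocked")
```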

These are probably naïve and simple-minded suggestions. I’m not, after all, an expert in the field and besides, I’m giving these ideas away for free.

Unlike yours truly, the cyberprotection industry has all the expertise it needs. It has, in the aggregate, big R&D budgets. How about coupling these resources with the same level of innovative thinking cybercriminals put into their attacks?

What’s clear: Our current strategy … identifying the next threat and responding to it … guarantees we’ll always be a step behind.

Comments (22)

  • Uh-oh. Again with an editing error?

    > and you’ll probably arrive ^^^ a number

    ‘at’ missing…

    • I’m guessing it’s browser-specific – don’t see the problem in either of mine. As you might suspect, I don’t have the wherewithal to test KJR every week in multiple environments. Anyway, sorry for the aggravation. When I receive this sort of thing it bugs me too.

  • The problem with Target was not their IT Security. It was their culture and their processes. FireEye and Symantec AV caught the malware, but the warnings were ignored, and the losses grew from software and labor costs to somewhere north of $400 million.

    Most of the problems in IT Security are with management. They do not see IT Security as insurance, but as a loss. There are other problems too: expensive software that is difficult to implement and deploy; bad standards like PCI DSS 2.0, which lets software developers publish and sell bad PCI software while putting the onus on the merchants; the weaponization of the Internet by nation states; corruption of security software by nation states; the mindset that security is not important; and the assumption that high school graduates are smart enough to defend against attackers with college or advanced degrees. There are likely other causes, but it chiefly comes down to mindset.

    The DOD is farthest along on IT Security. Their standards are rigorous, their compliance is high. They have a problem keeping competent people, but they know they will be attacked and they take IT Security seriously. They still have problems, but not problems like Target’s, because they can generally contain damage. Yeah, Snowden hurt the NSA, but he was an insider and the documents were only released to journalists, unlike Sony, whose executives are probably wondering why they did not do more to protect their network. I am sure some people have lost their jobs there now. Things may have to get worse before they get better.

  • You suggested 5 or 6 excellent ideas that are highly functional but have close to zero chance of implementation, because it would be nearly impossible to explain how to implement them in a way that a non-IT guy could imagine.

    For example, could you explain how to get the warm and fuzzy people of HR to implement the security aspects of systems administration you suggest? The idea is a very good one, but I don’t see how an HR manager who doesn’t have some talent for systems analysis could understand it.

    I’m not saying it’s impossible, but as an experienced consultant, how would you begin to explain it to HR and other top management?

    • How to get HR to take the lead in this? Form a cross-functional team to figure out identity management, inviting HR to be part of the team, along with InfoSec and representative business managers. These last are very important because (a) they’ll be the end-users of the new process; and (b) they’ll be the ones who can persuade HR to participate.

      As for HR, its role is to be the coordinating hub, not to take on the nitty gritty technical aspects of the work. IT/InfoSec still have to design the directory forests and all that; turn business-level role definitions into implementable role definitions and so on.

      But HR has to drive the process day-to-day. Shouldn’t be that hard to persuade them of that.

      • Thanks for sharing some of your thinking here.

        I would think it might help if the IT director was seen as a thoughtful person who has the ability to identify meta-issues that usually affect the organization, rather than a geek, hot for the latest computer technology. Nice article.

  • How about credit card companies coming up with a sane design where merchants don’t *ever* have to store credit card numbers? A limited-use, revocable token should be able to do the job, while providing virtually nothing to thieves. (As in, if the token was tied to the merchant, it would be useless to anyone else.)

    I know that’s a bit off-topic from advice to merchants, but it irks me that the payment card industry solves this by putting the burden on merchants to protect their inherently insecure system.
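    A minimal sketch of the idea, assuming a toy “card network” class that keeps the token-to-card mapping on its side so the merchant never holds card numbers. The names and structure are invented, and real tokenization schemes (EMV tokens, Apple Pay) work differently in the details.

    ```python
    # Toy sketch of merchant-bound payment tokens: the merchant stores only a
    # token, and the token is useless at any other merchant. Names and structure
    # are invented; real payment-network tokenization differs in the details.
    import secrets

    class CardNetwork:
        def __init__(self):
            self._vault = {}  # token -> (card_number, merchant_id), held by the network only

        def issue_token(self, card_number, merchant_id):
            token = secrets.token_urlsafe(16)
            self._vault[token] = (card_number, merchant_id)
            return token  # this is all the merchant ever stores

        def charge(self, token, merchant_id, amount_cents):
            entry = self._vault.get(token)
            if entry is None or entry[1] != merchant_id:
                return False  # unknown token, or token bound to a different merchant
            return True  # ...debit entry[0], the real card, here

    network = CardNetwork()
    token = network.issue_token("4111111111111111", "merchant-A")
    print(network.charge(token, "merchant-A", 2500))   # True
    print(network.charge(token, "merchant-B", 2500))   # False: stolen tokens don't travel
    ```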

    • Jonathan – good point and give Apple Pay a try (I hear Google Wallet is similar). I am using Apple Pay at merchants that will accept it. No credit card is passed. Of course this assumes that Apple doesn’t get hacked some day.

      • I think Apple was hacked – which is why we’ve been able to see so much more of Jennifer Lawrence (and others) than we used to.

  • I’m not that sure that large corporations have the basic blocking and tackling of security down. And I’m really sure smaller corporations have huge issues.

    Patching has become an art. Adobe keeps many people employed just by updating its pathetic Flash. Microsoft does a nice job of patching, but it’s a challenge to manage and keep up with. “Zero day” is way too common a term.

    What really baffles me is the large retailers who did not learn from Target. There should be a lounge full of CIOs looking for work. They obviously forgot to update their ERM and then didn’t send the troops after their POS systems.

    Krebs writes some great stuff in this space. Follow his blog, it’s a must read just like Bob’s.

    For CIOs out there. Do yourself a favor and try to phish your own company. You will be shocked. Then pick up the pieces and try to figure out how to solve your problem.

  • Security is a state of mind. That state of mind means looking and recognizing when things don’t seem right. Being vigilant requires more effort than blindly following a set of rules.

  • Use a “whitelist” app (example: Bit9) in conjunction with your normal security deterrents. Whitelist apps only allow registered, trusted apps/EXEs/DLLs to run. You can leave things “relaxed” so that folks can run other apps but can lock things down if you find some malware has made it into your world. Tough for malware to do much of anything if it can’t run its program (a rough sketch of the idea follows below).

    Yes: it is a pain if you lock everything down to just the accepted, tested, known good apps. But that will stop viruses, malware, whatever from ever being able to run.

    Just a thought…
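    For illustration, here’s a toy sketch of what an allowlisting check boils down to underneath: hash the binary and only let it run if the hash is on the approved list. Products like Bit9 do far more (certificates, policy modes, kernel enforcement); this isn’t their implementation, just the shape of the idea.

    ```python
    # Toy sketch of hash-based application allowlisting. Real products (Bit9,
    # AppLocker, etc.) do far more; this only shows the core check.
    import hashlib
    from pathlib import Path

    APPROVED_HASHES = set()  # SHA-256 digests produced by a trusted build/packaging process

    def sha256_of(path):
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def may_run(path, relaxed=False):
        """Locked down: unknown binaries are blocked. Relaxed: they run, but are logged."""
        known = sha256_of(path) in APPROVED_HASHES
        if not known and relaxed:
            print(f"warning: {path} is not on the allowlist")
            return True
        return known

    print(may_run(__file__, relaxed=True))   # unknown, so warned but allowed in relaxed mode
    print(may_run(__file__))                 # locked down: blocked (False)
    ```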

  • The Australian DOD came out with a great study a few years ago showing that patching (OS & apps), restricting admin rights, and whitelisting would have prevented 93% of the attacks they had experienced. We know what to focus on – we just need to do it.

  • Bob,

    Windows is constantly performing tasks without being prompted by a triggering keystroke or mouse command. These include both services (updating search indexes, monitoring and processing printer queues, updating the system date/time, and hundreds of other low-level tasks) and background programs (checking for new software versions, scanning for malware, checking for new e-mail, etc.). If you run Task Manager or, better yet, Process Explorer, you can get an overwhelming look at the complexity of activity that goes on behind the scenes.

    I can only imagine the chaos if the user was prompted to accept every single background task. And once we condition users to click “OK” to checking for new e-mail messages, scanning for malware, etc., how many of them will just click “OK” every time they’re prompted to do so? (I see this all the time with people clicking OK on error dialogs without reading them.) Even vigilant users could be deceived if the malware authors are able to trick the OS into mischaracterizing what it wants to do. If the OS asks whether you want to install Windows patches but is actually asking permission to install malware, asking permission will make things even worse.

    I’m no security expert, but I suspect the solutions will be much more complex than asking users permission for unprompted tasks.

    • I think you misunderstood. I didn’t say every newly initiated task. I said newly initiated tasks that look like the sorts of things malware does. Malware doesn’t start the indexer running, nor does it start downloading inbound emails. Again, I’m not an expert and I’m sure it’s more complicated than I’m making it out to be. I’m not sure enumerating the sorts of things malware does and detecting likely instances is so difficult as to be impractical, which was my suggestion.

  • > What does malware do? It: wipes hard drives; sends out data without a triggering keystroke or mouse command; updates files and databases without a triggering keystroke or mouse command; sends out massive amounts of email without a triggering keystroke or mouse command …
    >
    > How hard can it be to write features into the OS kernel that monitor for these sorts of malware tells? Pop a big message onto the screen warning users in plain English about what their computer has been instructed to do and ask if it’s something the user wants it to do.

    I’m also too far removed from the internal workings of computers to comment knowledgeably, but seeing as many organizations, my own included, rely on batch processing (including sending out emails and data) overnight and over weekends, requiring a user to click a button at the time of execution defeats the purpose. So as long as malware can insert itself as an already-authorized bit of batch processing, implementing this suggestion could be very difficult.

    The whole purpose of computers is to automate routine tasks, seems to me. So seems to me that malware writers will always be able to take advantage of that.

    So far security efforts have been directed at preventing unauthorized instructions from getting to the computer, or into its code. It’s nice out-of-the-box thinking, to find out if there’s a way to keep unauthorized code from ever running, but I expect it’s waaaaay tricky, once the code is in there, for the computer to determine whether it was ever properly authorized to begin with. Or whenever the computer programmers figure out how, the malware writers copy that method into their code.

    But then, like you, I’m too far removed from computer programming to comment knowledgeably. I am willing (and happy) to be proved incorrect!

    • Well, maybe. I say this because legitimate batch jobs are always either directly initiated by an operator or initiated by a scheduler. So even in these cases, confirming legitimacy shouldn’t be too onerous, I’d think.
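      A toy sketch of that kind of legitimacy check, assuming the third-party psutil package and an invented list of scheduler names: treat a bulk action as pre-authorized if its process was launched by a known scheduler, and fall back to prompting otherwise.

      ```python
      # Toy sketch: was this process started by a known scheduler? If so, treat its
      # batch work as pre-authorized. Scheduler names are examples; psutil is a
      # third-party package (pip install psutil).
      import os
      import psutil

      KNOWN_SCHEDULERS = {"cron", "crond", "systemd", "svchost.exe", "taskeng.exe"}

      def launched_by_scheduler(pid):
          parent = psutil.Process(pid).parent()
          return parent is not None and parent.name().lower() in KNOWN_SCHEDULERS

      print(launched_by_scheduler(os.getpid()))  # False for a shell-launched script
      ```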

  • I think you’ve nailed some of the problem. The biggest part of the problem is the company network itself. Personally, I don’t think companies need as much network as they’ve got.

    But the network guys won’t tell you that. And the network is where the CIO gets his money to do other stuff. And all of his advisers used to be in charge of the network. And thus the security stance is to protect the network rather than completely dismantle it and only network the things that must be constantly in communication with each other and leave the other things independent.

    Then monitor network traffic.

    The problem with that is you have to have people involved in designing the business connection instead of letting some evangelist from networking hook people up. And whoever heard of using HR processes to actually track employees’ work?

    • We’re destined to disagree about this, I think, as is usually the case when a root cause analysis ends up with “here’s who we can blame for this mess.”

      Among the problems with this analysis: The network isn’t where the CIO gets IT budget. Quite the opposite, the network is lumped into non-discretionary spending, which is the part that always gets squeezed.

      As for only connecting things that have to be connected … really, that’s all that’s ever connected. Thing is, with modern applications the boundaries aren’t neat and tidy enough that there’s much in the way of isolated processing.

  • The problem with computers today is that every software and hardware maker embeds itself into any PC that uses its products. They are deeply buried and they don’t want to bring attention to it. They monitor everything, view everything, record everything – “But no personal information” – hah! The government is doing the same thing. No one owns or controls their computer anymore. It is therefore impossible for any normal user or business user to know who is doing what inside their PC, and this is by design. Cybercriminals find it all too easy to do the same thing.

    • I would like to clarify – the reason PCs are not secure is because they are not secure by design. If a user was able to fully secure their PC from cybercriminals, they could also secure them against intrusion by the government, Dell, HP, Microsoft, Oracle, etc. These entities do not want this to happen.

  • I absolutely agree with the “make the new guy’s authorizations match an existing person’s” problem. It is due to laziness and the fact that too many folks (business users and even the security and application folks) don’t really know what auths are needed for a specific function. I have argued for years that we should be using “job title” authorization groups.

    My favorite story is from a business that used the “make him like her” approach to security. Unfortunately the “her” in this case had left some months earlier (when she left, her authorizations were removed) and there was no way to tell what her authorizations had been. So the answer was to just throw a whole bunch of auths at the new guy until he could do the job. Hell of a way to run a railroad – or any business.
