
Security success


The most inadvertently funny ad on television these days brags about a brilliant new feature of the latest iPhone. Apple has just invented cut and paste, hitherto available only in PC-DOS, OS/2, Windows, every version of the Mac OS, Linux and every other Unix flavor, every Palm PDA and smartphone since 1996, Android, and the original Apple Newton.

Impressive.

To be fair, which is nowhere near as much fun as indulging in ridicule, Apple had few choices. It needed to announce the feature. “Sorry we’re so late,” would have damaged its reputation for innovation. What’s left? Brag, and hope nobody notices the silliness.

Still, the latest iPhone addresses most of IT’s concerns. Exchange integration is, I’m told, superior to RIM’s, and the user interface is, of course, outstanding. The one big remaining hole is security. Friends who worry about such matters have told me Apple provides too little information for them to evaluate how well the iPhone is secured, and in the security business, ignorance is definitely not bliss.

IT’s job, though, is to secure what’s needed, not to need only what’s provably secure. Which gets to a subject I covered a few weeks ago in Advice Line.

The subject — how to assess information security’s performance — is important enough to cover here as well. Please forgive the repetition.

Assessing anyone’s performance means knowing what success looks like. Think that’s easy for InfoSec? No break-ins, no stolen data, no cyber-vandalism, and no successful malware attacks, perhaps?

If that were the answer, InfoSec would shut down your internal networks and Internet connections, disable all USB ports and CD-ROM and DVD writers, and declare victory.

Success is harder to define if you want to continue to conduct business, and more complicated than anyone will happily accept.

Here’s a sketch of the solution:

Step 1: Define IT’s goals for performance, stability, and ease-of-use. Ease-of-use is, by the way, an excellent example of the inverse relationship between the importance of a goal and the difficulty of measuring it objectively. Neglect to measure it, though, and you won’t get it, which is worse.

Step 2: Develop a threat inventory. This is a list of the types of attacks you know you need to plan for. It should take the form of a hierarchical list of threat categories (which is to say an outline), starting with something akin to the break-ins/data theft/cyber-vandalism/malware list, drilled down a few more levels.
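
To make the outline concrete, here’s a minimal sketch of a threat inventory as a nested structure in Python. Every category, sub-category, and leaf shown is an illustrative placeholder, not a recommendation; your own inventory will be shaped by your environment and drilled down further.

```python
# A minimal sketch of a threat inventory as a nested outline.
# All categories and sub-categories are illustrative placeholders.
threat_inventory = {
    "break-ins": {
        "network intrusion": ["unpatched services", "stolen credentials"],
        "physical intrusion": ["data-center access", "stolen laptops"],
    },
    "data theft": {
        "insider exfiltration": ["USB copy", "personal email"],
        "external exfiltration": ["compromised server", "intercepted traffic"],
    },
    "cyber-vandalism": {
        "defacement": ["public website", "intranet"],
        "destruction": ["deleted data", "corrupted backups"],
    },
    "malware": {
        "self-propagating": ["worms"],
        "user-executed": ["trojans", "malicious attachments"],
    },
}

def print_outline(node, depth=0):
    """Render the inventory as the hierarchical outline it's meant to be."""
    items = node.items() if isinstance(node, dict) else [(leaf, None) for leaf in node]
    for name, children in items:
        print("  " * depth + "- " + name)
        if children:
            print_outline(children, depth + 1)

print_outline(threat_inventory)
```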

Step 3: Establish the concept of acceptable countermeasures. An acceptable countermeasure helps achieve a security goal without degrading performance, stability, or ease-of-use beyond a level deemed tolerable … something you can’t define until you figure out how to measure performance, stability, and ease-of-use.

Very important: Some countermeasures are preventive, but not all. Sometimes, all preventive countermeasures will be unacceptable, in which case the countermeasures that are acceptable will involve detection and remediation.
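
Here’s one way to make “acceptable” operational: a small Python sketch that records each countermeasure’s estimated impact and tests it against tolerance thresholds. The thresholds, names, and numbers are all assumptions for illustration; in practice they come out of Step 1’s measurements, not thin air.

```python
from dataclasses import dataclass

# Illustrative tolerance thresholds, assumed for this sketch. In practice
# they come from Step 1's measurements of performance, stability, and
# ease of use.
MAX_PERFORMANCE_HIT = 0.05  # tolerate up to 5% performance degradation
MAX_STABILITY_HIT = 0.02
MAX_EASE_OF_USE_HIT = 0.10

@dataclass
class Countermeasure:
    name: str
    threat_category: str
    kind: str               # "preventive", "detective", or "remedial"
    performance_hit: float  # estimated fractional degradation, 0.0-1.0
    stability_hit: float
    ease_of_use_hit: float

    def is_acceptable(self) -> bool:
        """Acceptable = helps with a threat without degrading performance,
        stability, or ease of use beyond the tolerable level."""
        return (self.performance_hit <= MAX_PERFORMANCE_HIT
                and self.stability_hit <= MAX_STABILITY_HIT
                and self.ease_of_use_hit <= MAX_EASE_OF_USE_HIT)

# A preventive control that fails the ease-of-use test, which is what
# pushes you toward detection and remediation instead.
disable_usb = Countermeasure("disable all USB ports", "data theft",
                             "preventive", 0.0, 0.0, 0.40)
usb_alerts = Countermeasure("alert on bulk USB copies", "data theft",
                            "detective", 0.01, 0.0, 0.02)

print(disable_usb.is_acceptable())  # False
print(usb_alerts.is_acceptable())   # True
```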

Step 4: Define security goals: For each item in the threat inventory, your goal is to have implemented acceptable countermeasures that are at least as good as industry standard practice for that threat category.

Two points on this: (1) Industry standard practice is a moving target; it improves over time; (2) the reason you can’t always implement industry standard practice is that in some cases that would mean implementing an unacceptable countermeasure.

Step 5: Define what happens after a security incident. The answer: Analyze how it happened, and determine whether any acceptable countermeasure (not any countermeasure) would have prevented it. Based on the severity of the incident, you might decide to redefine what’s acceptable. Or you might not, figuring there are times when rapid detection and remediation is better.

It’s akin to accepting a certain level of criminal activity because some crime-prevention measures are unacceptably intrusive, cruel, or annoying, then insuring against loss and otherwise dealing with the crimes that happen as a result.

Step 6: Define success. Success is to comprehensively define all threat categories and implement acceptable countermeasures for each. This definition of success leads inevitably to these metrics (a rough sketch of computing them follows the list):

  • Planning failures:
    • Percent of actual attacks that were not listed in the inventory.
    • Percent of successful attacks that could not have been thwarted by an acceptable countermeasure and for which you had no planned response.
  • Implementation failures: Percent of actual attacks that (1) were successful; and (2) would have been thwarted by an acceptable countermeasure you didn’t properly implement.
  • Execution failures: Percent of actual attacks that were not detected and responded to as specified in the defined countermeasures.
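
And here’s that rough sketch: the three failure rates computed from a hypothetical incident log. The field names and sample records are invented for illustration; the point is that each metric falls out of a simple classification of every actual attack.

```python
# Each record classifies one actual attack. Field names are invented
# for illustration.
incidents = [
    {"in_inventory": True,  "succeeded": True,  "acceptable_cm_existed": True,
     "cm_implemented": False, "cm_executed": False},  # implementation failure
    {"in_inventory": False, "succeeded": True,  "acceptable_cm_existed": False,
     "cm_implemented": False, "cm_executed": False},  # planning failure
    {"in_inventory": True,  "succeeded": True,  "acceptable_cm_existed": True,
     "cm_implemented": True,  "cm_executed": False},  # execution failure
    {"in_inventory": True,  "succeeded": False, "acceptable_cm_existed": True,
     "cm_implemented": True,  "cm_executed": True},   # working as planned
]

total = len(incidents)

# Planning failures: attacks nobody listed, plus successful attacks no
# acceptable countermeasure could have thwarted and no response covered.
planning = sum(1 for i in incidents
               if not i["in_inventory"]
               or (i["succeeded"] and not i["acceptable_cm_existed"]))

# Implementation failures: successful attacks an acceptable countermeasure
# would have thwarted, had it been properly implemented.
implementation = sum(1 for i in incidents
                     if i["succeeded"] and i["acceptable_cm_existed"]
                     and not i["cm_implemented"])

# Execution failures: attacks where the defined countermeasure was in
# place but detection and response didn't happen as specified.
execution = sum(1 for i in incidents
                if i["cm_implemented"] and not i["cm_executed"])

for label, count in (("planning", planning),
                     ("implementation", implementation),
                     ("execution", execution)):
    print(f"{label} failure rate: {count / total:.0%}")
```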

Nothing about this analysis is simple, particularly determining which potential countermeasures are acceptable and which are not.

Sure it’s difficult. That’s the nature of most good solutions.

Comments (1)

  • Hi Bob,

    I’m not a security pro either–more like an “accidental security pro”, to slightly misquote Mark Minasi–but as a one-man IT department, it falls under my purview. My skill set naturally leans more heavily toward technology than business, hence my faithful perusal of your newsletters.

    Like you, I’m unsure what the security pros think about your outline, but I plan to use it. This is the best security planning framework that I’ve seen for communicating rather technical difficulties to C-level executives. I work in an industry where all of the execs have broken risk meters and “the-sky-is-falling” security spending isn’t likely to get approval. I see great potential in applying your logic to our environment.
