This time it’s Anthem.
The good news about Anthem’s loss of 80 million customer records is that no personal health information was stolen. Were I an Anthem customer, I’m quite sure I’d be thinking to myself, “All the thieves can do is steal my identity. Thank heavens they didn’t find out what I’m allergic to!”
Look forward to the usual rehashing of what Anthem should and shouldn’t have done. KJR won’t pile on, because there’s really no point.
Forget that. Of course KJR will pile on because really, how am I supposed to resist the temptation?
And so:
In Anthem’s official statement on the subject, its CEO, Joseph Swedish, described what happened as a “… highly sophisticated external data attack.” Some aspects of the attack probably were highly sophisticated. But it also turns out the stolen “personally identifiable information” (PII), including customers’ social security numbers, was not encrypted in Anthem’s databases.
While I’ve never claimed to be an authority in information security, I know enough to give this advice: Don’t make it easy.
This is why you lock your car. It won’t stymie a professional car thief. It will, however, keep your car out of the hands of amateur joy-riders while making someone else’s car the easier choice for the professionals, and, by the way, satisfy your insurance company.
Anthem’s defenders point out that, like many other companies, it has to connect its customers’ data to data in external sources, and especially data in government databases. The social security number is the only choice for JOINing data about individuals when you don’t control the data design.
To encrypt it or not to encrypt it, that is the question.
No, I won’t go all Hamlet on you. Here there’s only one answer: Encrypt.
At this stage of the game, there really is no excuse for any company to use social security numbers as the primary identification key for customer and employee records. Assigning a sequential or randomly generated identification number when first creating a customer master record is routine. It isn’t “best practice.” It’s the minimum standard of basic professionalism.
This way, JOINing records from internal databases doesn’t have to rely on social security numbers, so the inconvenience and difficulties associated with encrypting them, pointed out in a Wall Street Journal story on the subject, don’t apply to processing that involves no external data.
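By way of illustration, here’s a minimal sketch, in Python with made-up table contents, of what that looks like in practice: the customer master record gets a generated surrogate key at creation time, the SSN is kept only as an encrypted attribute (a placeholder function stands in here for real column-level encryption), and internal JOINs run on the surrogate key.

```python
import uuid

def encrypt(value):
    """Placeholder for a real column-level encryption call."""
    return "<encrypted:" + value[-4:] + ">"

def new_customer(name, ssn):
    return {
        "customer_id": str(uuid.uuid4()),   # surrogate primary key, assigned at creation
        "name": name,
        "ssn_encrypted": encrypt(ssn),      # SSN kept only in encrypted form
    }

customers = [new_customer("Pat Example", "123-45-6789")]

# An internal claims table references the surrogate key, not the SSN...
claims = [{"claim_id": 1, "customer_id": customers[0]["customer_id"], "amount": 250.00}]

# ...so internal JOINs never touch an unencrypted social security number.
by_id = {c["customer_id"]: c for c in customers}
joined = [(by_id[cl["customer_id"]]["name"], cl["amount"]) for cl in claims]
print(joined)   # [('Pat Example', 250.0)]
```

The same pattern holds whether the tables live in Python dictionaries or a relational database; the point is only that the internal key means nothing to an outsider.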
But medical information doesn’t stay within any single corporation. For both treatment and payment purposes, medical records, including insurance information, have to be shared externally, and right now the social security number is, in the United States, the single universal identification key.
Whether used by an insurance company to add external information to its internal records for analytics purposes, or used by healthcare providers to link medical records from multiple sources so as to improve treatment, at some point in the proceedings, unencrypted social security numbers have to make an appearance.
Security professionals, along with those of us who like to pretend to more sophistication in the field than we actually have, differentiate data in motion from data at rest. Anthem encrypted its data in motion but not its data at rest, on the theory that hardening its perimeter constituted sufficient protection of its information assets.
This gets it exactly backward. For several years now security professionals have understood that just about every major trend affecting systems and information access, in particular the rise of the cloud, the growth of off-premises computing, and attackers’ increasing reliance on phishing and Trojan horses, reduces the effectiveness of perimeter hardening and increases the importance of hardening the assets themselves.
When it comes to hardening information assets, encryption might not be the whole story but it certainly is the starting point. And JOINing two tables based on encrypted social security numbers isn’t all that hard. Properly authorized individuals can be given access to the decryption function, through which they can create temporary tables that have unencrypted social security numbers. They can use these tables for whatever analysis they like, destroying them as soon as they’re finished with them.
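To make that concrete, here’s a minimal sketch of the temporary-table approach, using Python and the Fernet cipher from the cryptography package as a stand-in for whatever column-level encryption a real shop would use; all table and field names are hypothetical. An authorized analyst decrypts the SSNs from two tables into temporary in-memory structures, JOINs on them, and destroys the decrypted copies when finished.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held by a key-management service
cipher = Fernet(key)

def enc(ssn: str) -> bytes:
    return cipher.encrypt(ssn.encode())

# Two tables that store only encrypted SSNs.
internal_members = [{"member_id": 7, "ssn_enc": enc("123-45-6789"), "plan": "PPO"}]
external_records = [{"ssn_enc": enc("123-45-6789"), "diagnosis_code": "J45"}]

def decrypt_to_temp(rows):
    """Authorized users only: build a temporary, decrypted view for JOINing."""
    return [dict(row, ssn=cipher.decrypt(row["ssn_enc"]).decode()) for row in rows]

temp_members = decrypt_to_temp(internal_members)
temp_external = decrypt_to_temp(external_records)

# JOIN on the decrypted SSN...
by_ssn = {r["ssn"]: r for r in temp_members}
joined = [
    {"member_id": by_ssn[r["ssn"]]["member_id"], "diagnosis_code": r["diagnosis_code"]}
    for r in temp_external
    if r["ssn"] in by_ssn
]
print(joined)   # [{'member_id': 7, 'diagnosis_code': 'J45'}]

# ...and destroy the temporary structures as soon as the analysis is done.
del temp_members, temp_external
```

Note that Fernet is deliberately non-deterministic, so the ciphertexts themselves can’t serve as join keys; that’s exactly why the decrypt-into-a-temporary-table step exists.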
But hard, I’ll bet, isn’t the issue. Here’s what is: converting decades of accumulated reports, queries, and other computing flotsam and jetsam that rely on access to unencrypted PII, along with organizational habits built up over the same span of decades.
Tracking them all down and replacing them with more-secure alternatives would cost a lot of time and money, with an unmeasurable payoff. It’s a risk management issue mentioned in this space from time to time: Successful prevention is indistinguishable from absence of risk.
Sadly for the Anthems of the world, failed prevention is not.
You are very correct about one thing. We (the industry) have secured the heck out of the data-transfer piece of the equation. But that’s not even really the part that gets attacked anymore. It’s inside the firewall, the VPN, and other business access points that attackers are getting to the data. Target’s transactions (as sent over the wire) were safe, likely very safe. The protection for accessing their internal network was not.
Free advice to the folks at Anthem. Google “surrogate keys” and you will learn what the rest of us did many years ago.
Once you have surrogate keys your join issues are solved and you can encrypt the SSN.
Part II: when you are as big as Anthem, you should be able to invest in intrusion detection systems to spot hacks. (Rumor has it that it took them months to figure it out.) For a nice consulting fee, I’ll give them some ideas. They know where to find me…
The risk management issue here is that at the corporate level the penalty for this type of data loss is minimal aside from the momentary bad publicity. The dollar cost to protect the data is significant and ongoing. Nobody who is being judged by his/her numbers will readily spend that money until the cost of the data loss becomes significantly greater than the ongoing cost of prevention. From a risk management perspective this is simply self-insuring against the loss.
I agree. Don’t beat up on the IT person; fine the heck out of the corporation. Put the money in a fund to defray losses, with the corp. liable for any shortfall and anything left over going to security education and research efforts. Pretty soon best practice would be minimum practice.
I agree with everything that has been said above. But I just don’t see anything close to required changes happening before years of financial and personal data carnage have occurred.
Twenty years ago I worked for a small home care agency when Medicare’s billing protocol was changed. On our “home grown” billing app, I made the required changes in 5 days. Later, when I was helping out one Eastern Blue Cross, it took them about 4 months to implement the changes, and most of the other Blue Crosses an additional 5 or 6 months.
My point is that these are huge organizations using “safe” software choices, so the security changes needed are likely to be hugely expensive, time-consuming, and often not well implemented. They don’t have programmers anywhere near the level of Google or Apple because they don’t normally need them. But they do now.
The fastest and cheapest solutions I see are:
1. Have a group like ANSI bring together the best minds in industry, academia, and government to propose effective security standards for all industries, especially healthcare, and make them mandatory. Jason’s comments above showed that real expertise in this area would be needed.
2. Dump the entire data operation into one of the modern healthcare packages, like Epic, that can handle both scale and the security issues, ASAP. No, I don’t work for Epic, but Blue Cross Anthem is one of my healthcare insurers.
As is usually the case, thanks for your article.
Another report indicated that the data was accessed via compromised account information in which case encryption or lack thereof didn’t make much difference.
That would depend on how the account was configured with respect to the steps required to enable decryption.
You make a good point – which is also a cautionary tale for those organizations that are sloppy about identity management. Restricting access to PII to only those employees who have a need for it would reduce the odds that a compromised account would be useful in this regard.
Besides identity management, maybe also access control and activity monitoring (the same account in use in two geographically different locations; moving/downloading millions of records…).
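For what it’s worth, rules like those don’t require anything exotic. Here’s a minimal sketch in Python, with made-up thresholds and log entries, of two of them: flagging a single account active in two far-apart locations within a short window, and flagging bulk record pulls.

```python
from datetime import datetime, timedelta

# Hypothetical access-log entries: (account, timestamp, location, records_touched)
events = [
    ("svc_report", datetime(2015, 2, 4, 9, 0),  "Indianapolis", 120),
    ("svc_report", datetime(2015, 2, 4, 9, 20), "Shanghai", 5_000_000),
]

BULK_THRESHOLD = 100_000             # made-up: flag any pull larger than this
TRAVEL_WINDOW = timedelta(hours=1)   # made-up: same account, different city, within an hour

alerts = []

# Rule 1: bulk downloads.
for account, ts, location, count in events:
    if count > BULK_THRESHOLD:
        alerts.append(f"{account}: bulk pull of {count:,} records from {location}")

# Rule 2: same account active in two different locations within the window.
for i, (acct_a, ts_a, loc_a, _) in enumerate(events):
    for acct_b, ts_b, loc_b, _ in events[i + 1:]:
        if acct_a == acct_b and loc_a != loc_b and abs(ts_a - ts_b) <= TRAVEL_WINDOW:
            alerts.append(f"{acct_a}: active in {loc_a} and {loc_b} within {TRAVEL_WINDOW}")

for alert in alerts:
    print(alert)
```

Real intrusion detection products do far more than this, of course; the point is only that even simple rules would have lit up on the kind of activity being described.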