Tuesday, January 27, 2009

Apathy at Monster Data Breach

Nothing new about data breaches, of course. A recent report from the Identity Theft Resource Center lists 656 incidents for 2008. But the large majority of incidents are at places you've probably never heard of (Spicy Pickle at Portage, anyone?). Monster, of course, is a different story. Most people have probably been on their site at some point or another looking for a job. Monster-sized data breaches (cringeworthy pun intended) usually only occur a few times a month.

Which makes it all the more surprising that this breach has received almost no media attention. Googling the terms Monster, data and breach brought up just a handful of relevant results.

This is surprising for two reasons - (1) Monster already suffered a massive and highly publicized data breach in the summer of 2007, when it waited five days before notifying users, and (2) Monster admits that user-selected passwords were compromised. The lack of media interest seems to come down to two things - Monster has chosen not to notify customers individually, and it has not released details of the breach.

There has been a lot of discussion about whether data breach notification laws work (see my earlier post on this topic). The Monster incident underscores an obvious fact - incidents that do not involve notification letters receive less media attention. This makes sense of course; when you send out a notification letter to 50,000 people, chances are one of them is a reporter.

Without knowing the details of the Monster breach, it is impossible to judge whether notification was required. Data breach notification laws (currently in place in 44 states) have differing requirements but generally require notification if (1) some PII (Personally Identifiable Information) was compromised and (2) there is some chance that this information will be misused. Since Monster has users in all 50 states, it is presumably subject to the strictest of all these state laws. It would seem impossible for Monster to rule out item (2), so presumably Monster has been advised that, under all of these state laws, user passwords alone are not considered PII.

It's very interesting that user passwords to Monster-type sites are not considered PII. For one thing, a Monster account itself can give access to PII. A resume usually has an address and phone number on it. Even if the compromised accounts were deactivated, a Monster username and password combination would in many cases also work on the corresponding Hotmail or Gmail account. This might no longer be Monster's direct problem, but you would think it would have made the breach bigger news in the media.

If you read the notification on Monster's website, the basic message is that stuff happens and that users should take preventative measures. I am not sure how this will resonate with users, but my guess is that Monster's users are not a particularly security-sensitive group when it comes to this site. After all, whether you are an employer or a job seeker, you are joining Monster so that people can reach you. The underlying expectation of privacy is low because the site is inherently outward-facing.

Two final thoughts on the notification that Monster posted on its website. Initially, there was no justification given on the website for not notifying customers. In the last few days that was updated to say that this was done so as not to give phishers a template for phishing emails. I am not a lawyer, so I don't know whether the law allows companies to make this kind of judgment call on whether to notify. But data breach notification laws differ significantly from state to state, and I find it hard to believe that this wiggle room exists in all 44 of them.

Another interesting point is the way that Monster defends its own data security practices. Monster claims to "devote significant resources" to security measures. This is a refreshing approach that I have advocated previously on this blog; your actual security commitment can and should be measured in dollars and cents. 

However, it is disappointing that Monster has chosen not to disclose any details about its security "to maintain the integrity of these security and monitoring systems". There is nothing confidential about a company's general security narrative, and one reason to maintain such a narrative is precisely for situations like these. The web notification is posted from Patrick Manzo, Monster's Chief Privacy Officer. I don't know whether Monster has a CISO, but this would be a good time to bring him or her out of the woodwork. Breaches can happen to any organization, even ones that have their house in order. The important thing is to have a solid story when something does happen.


Wednesday, January 21, 2009

Heartland Security Breach - Here We Go Again

If you needed to get bad news out, today was the day. With the world watching the inauguration of President-elect Barack Obama, Heartland Payment Systems announced a mammoth breach of its security. Heartland who, you ask? New York readers can breathe a sigh of relief - this is not Heartland Brewery, and the credit card you use to pay your tab is safe. But on second thought, that card number might not be so safe after all. Heartland Payment Systems is the sixth-largest credit card processor in the country and serves no fewer than 250,000 locations. There's a good chance your credit card number is floating somewhere in their system even if you have only ever used it to buy an occasional beer.

We don't know much about the breach right now; in fact, most people don't know much about Heartland (its Wikipedia entry is two sentences long as of today, but that's bound to change in the coming days).

We are going to be hearing a lot about this breach, and it will follow TJX, Hannaford, and other looseners of the security purse strings into the sales pitch of every information security consultant.

But for now we know very little about what actually happened. We can't conclude much from the very flimsy information we have, but here goes -

Obvious lesson #1: payment processors are a very attractive target for criminals. Their resources-to-sensitive data ratio is relatively low (in relation say to a bank), which makes them a softer target. They also process pure gold - attackers do not need to sift through mountains of other data and complex architectures they might encounter elsewhere.

Obvious lesson #2: the PCI system as it is currently implemented does not stop every attempt to steal credit card data. Like Hannaford before it, Heartland was PCI certified (by Trustwave, according to the Payment Systems Blog). 

The fact that PCI ≠ end-of-all-data-breaches-for-eternity has not stopped the renewed calls for PCI to be revamped, eliminated, or replaced. In an interview with Computerworld, Avivah Litan of Gartner says "More radical security moves need to be taken by payments industry as a whole ... Such incidents show that the security requirements of ...PCI DSS being pushed by the major card companies is clearly not enough."

Unless Gartner is privy to some non-public information about this case, that's quite the rush to judgment. The grandiosely named www.2008breach.com - Heartland's official site for breach information - has very scant information on the breach. So on what basis is Gartner saying that the security requirements of PCI DSS are not enough? While that may be the case, I would argue that there are at least a few other plausible scenarios:

1) The PCI DSS requirements are enough to prevent the vast majority of data breaches, and the payment card industry accepts that incidents like this will happen from time to time. I don't work for Visa or Mastercard or anyone else in the payment industry, so I have no way of knowing if this is true. But clearly the PCI standards are meant to strike a reasonable balance between security investment and risk reduction. This incident and others like it are not, in and of themselves, evidence that this balance hasn't been achieved.

2) The security requirements of PCI DSS are enough, but there was a failure of the enforcement mechanism (i.e., QSAs and ASVs and the like). Again, the only detail we really know is something about "malicious software". It may very well be the case that strict adherence to PCI would have prevented this malicious software from being installed or from being effective.

3) The security requirements are not the problem; the problem is the broad license to introduce and interpret "compensating controls". This has always been an Achilles' heel of PCI, since it introduces an almost entirely subjective element into the PCI process. There is very little accompanying PCI documentation to define the allowable and appropriate scope of compensating controls.

In the coming days and weeks we will get a better indication of what exactly happened. This breach may well reveal gaping holes in the security requirements in PCI as Gartner claims, but for now my money is on something less radical than that.

And one final thought on the entire PCI process. Because PCI is still in its early days, there has never (as far as I know) been a real legal test of the liability mechanisms behind PCI auditing. If Trustwave mistakenly certified Heartland as PCI compliant, does it bear some of the costs associated with this breach? If the answer to this question remains negative, I don't know how we will ever get effective and reliable PCI audits.




Sunday, January 18, 2009

Asking for Security

Security doesn't get built into software by accident. That's why I like the latest attempt to work security into contract language. This SANS effort by Jim Routh (CISO at DTCC) and Will Pelgrin (CSO for New York State) is a useful resource for companies looking to address security in the procurement process.

I'll dive into that document in a second (or a minute, depending on how fast you read). But first, to understand why security language is needed in RFPs, consider a few of the reasons why vendors sell insecure software products:

1) Nobody asked. The customer didn't ask for security, so the vendor didn't provide it.
2) No one can tell. The customer can't tell if the end product is secure, so the vendor knows they won't get called out as long as the external interface appears secure.
3) Don't know how. The vendor wants to provide secure software, but is undermined by developers or development teams that do not know how to code securely.
4) Impossible given the requirements. The developers know how to code securely but the nature of the product and functional requirements preclude the creation of a secure product.

There is no simple solution to items 3 and 4. But it shouldn't be a surprise that software is insecure when (1) no one requested security as a feature and (2) no one was legally required to provide it. All professionals suffer from an exaggerated view of their industry's importance, and security folks are no exception. Most companies buying software have a long list of requirements and security ain't at the top of the list. They want an application that-

a) works
b) doesn't crash
c) doesn't leave them eternally locked in
d) they can understand
e) doesn't cost gazillions of dollars to maintain

** fill in a bunch of items I overlooked **

f) is secure enough for their business environment

Notice that resilience to the most recent XSS vulnerability is not on this list - not even in item (f). Suppose a small software shop is contracted to build a web application for a medium-sized business. If the customer does not emphasize security in the RFP or contract, why would the vendor go the extra mile - and divert resources from critical functionality - to add unrequested security?

Now of course contracts do not equal security and just telling everyone to consider the CWE/SANS Top 25 is not going to make these vulnerabilities magically go away. So if you will pardon the math speak, security language in the contract is usually a necessary, but not sufficient, condition to get a secure product.

Let's get back to the SANS Application Security Procurement Language.

Besides the usual suspects like background checks, patching, and training, the text is focused on the CWE/SANS Top 25 list of software programming errors. At times it reads like a fleshed-out version of the OWASP Top 10. The list doesn't claim to be perfect or complete, but it brings up a number of items that most developers have never even considered.

One thing I was looking for and could not find is some sort of ranking. Exploiting "Use of a Broken or Risky Cryptographic Algorithm" or "Use of Insufficiently Random Values" requires some pretty hefty technical knowledge. On the other hand, exploiting "Client-Side Enforcement of Server-Side Security" often only requires you to know how to install Firebug and play around with JavaScript. Their relative importance should have been indicated.
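To make this asymmetry concrete, here is a minimal sketch of what server-side enforcement looks like. It's written in Python and assumes the Flask framework; the route, catalog, and field names are hypothetical and not taken from the SANS document. The point is simply that the server re-checks everything, because any client-side check can be bypassed with a tool like Firebug.

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Authoritative server-side price list - never trust a price sent by the client.
CATALOG = {"widget": 19.99, "gadget": 49.99}

@app.route("/order", methods=["POST"])
def order():
    # Re-validate the item server-side rather than trusting the client's form.
    item = request.form.get("item", "")
    if item not in CATALOG:
        abort(400, "unknown item")
    try:
        qty = int(request.form.get("qty", "1"))
    except ValueError:
        abort(400, "quantity must be an integer")
    # Enforce the same limits any client-side JavaScript would check.
    if not 1 <= qty <= 100:
        abort(400, "quantity out of range")
    # The price is looked up server-side; a client-supplied "price" field is ignored.
    total = CATALOG[item] * qty
    return jsonify(item=item, qty=qty, total=round(total, 2))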

But enough quibbling about what is otherwise a useful document. Most procurement language these days says something about security and best practices, and maybe a weird random reference to SSL and biometric readers on server room doors. The emergence of industry standards around security procurement language can help bring rational allocation of security resources within software development companies.

The article "New York drafts language demanding secure code" doesn't indicate whether there is a plan to require this language in future RFPs issued by New York State. My guess is that the text would have to be seriously modified in the real world on a per-project basis. I don't see how the full Application Security Procurement Language could make it into most RFPs, because it would scare away most bidders. But requiring vendors to commit to at least having looked at something like the CWE/SANS Top 25, and to having a basic security narrative within their organization, is long overdue. The move towards security language in RFPs can have an effect similar to PCI's - a contractually enforceable, imperfect standard that leads to real security changes in organizations.

And finally, a suggestion for an additional improvement - without the dedication of sufficient resources within the development process, many of the items listed will just be a meaningless checkmark. Industry benchmarks have already formed around security spending in IT departments, and recent efforts like the OWASP Security Spending Benchmarks Project are producing similar data for the development world. Procurement language should include a clause that vendors will "dedicate sufficient resources" to securing their products.

Sunday, January 11, 2009

Forrester security spending report

Data on security spending is hard to come by. That's why this recent Forrester report posted by Dark Reading is a nice treat. Unfortunately it seems like you have to cough up $995 for the full report, but the summary contains some interesting stats.

So, on to the data... IT security spending as a percentage of total IT spending is anywhere from 9.1% for small and medium-sized businesses to 11.7% for large enterprises. Slightly higher than I expected, and not in line with a Gartner report from earlier last year. That report listed 5-10% for small and medium-sized businesses, and (interestingly) a lower 3-6% for large enterprises. The difference in figures could be accounted for by different definitions, but that doesn't explain the reverse correlation between spending and size.

But the ballpark figures seem right - a CompTIA survey puts this figure at 12% in 2007. I read a figure of 10% for the US government a few months ago on GCN, but the link is no longer active.

I would have liked to know how exactly they qualify security spending; after all, the big trend amongst the Microsofts, Oracles, and Ciscos of the world has been to integrate security directly into their product offerings. If you buy a Cisco ASA and use it as both a firewall and a VPN, is that considered purely security spending? These critical definitions are probably in the full report.

It's interesting how much data there is on security spending as a part of overall IT spending, and how little data there is on security spending as a percentage of development costs. The OWASP Security Spending Benchmarks Project plans to fill that gap.

Tuesday, January 6, 2009

Hacking Obama

What were these guys thinking? Someone somewhere broke into Obama's Twitter account. And for good measure they also broke into Britney Spears' account (the last time I saw Obama and Spears in the same headline was in those bizarre political ads by the McCain campaign).

It takes some serious chutzpah to hack any account belonging to the future Commander in Chief. Why would someone pull off this kind of stunt? According to the Washington Post, Obama's compromised account was used to send some spam involving a survey. Other accounts were used to send out some pretty stupid messages about sex and drugs (which I will not reprint in an attempt to keep this post PG). It seems like Obama's account was spared that fate for some reason, but there are probably some folks in the Secret Service who will nonetheless not be amused by this entire incident.

The prankish nature of the attack makes me think that it did not require much sophistication. According to the official Twitter blog, these individuals "hacked into some of the tools our support team uses to help people do things like edit the email address associated with their Twitter account when they can't remember or get stuck." This is the soft underbelly of most web services - you can turn your production environment into a fortress with WAFs and IDSs and whatnot, but you are only as secure as your help desk.

Password resets are the Achilles' heel of today's authentication infrastructure. Banks have known this for years and have relatively strict password reset procedures (and in many countries locked Internet accounts can only be reset by walking into a branch). But banks are in a fairly unique position - they usually have a close relationship with the customer, know a lot about them, and perhaps most importantly operate in an industry where strict security is expected. Services like Twitter are meant to be fun and can't impose those kinds of requirements on their customer base. Heck, anyone can set up a Twitter account in someone else's name.
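For what it's worth, the stricter reset procedures are not technically complicated. Here is a minimal sketch in Python of a single-use, time-limited reset token; the function names and in-memory storage are purely illustrative, not anyone's actual implementation (a real system would persist the hashes in a database and deliver the token out-of-band).

import hashlib
import secrets
import time

RESET_TTL = 15 * 60   # token valid for 15 minutes
_pending = {}         # user -> (token_hash, expiry); illustrative in-memory store

def issue_reset_token(user: str) -> str:
    token = secrets.token_urlsafe(32)   # unguessable random token
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    _pending[user] = (token_hash, time.time() + RESET_TTL)
    return token   # delivered out-of-band, e.g. to a pre-registered email address

def redeem_reset_token(user: str, token: str) -> bool:
    record = _pending.pop(user, None)   # single use: the record is always consumed
    if record is None:
        return False
    token_hash, expiry = record
    if time.time() > expiry:
        return False
    candidate = hashlib.sha256(token.encode()).hexdigest()
    return secrets.compare_digest(candidate, token_hash)   # constant-time compare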

It's hard to tell in this case if a back-end help desk server or portal was hacked, or if the logic of the password reset process was exploited. The latter could be done with zero technical skill (a la Sarah "I-met-my-husband-at-Wasilla-High" Palin's Yahoo email hack). Breaking into a server would involve either a lot more technical skill or a poorly configured server. Twitter hasn't revealed much about the breach so it's hard to tell which one it is. They are also still reeling from an apparently unrelated phishing attack over the weekend.

President-elect Obama is the first President 2.0 (who could have even dreamt of something like Twitter when Clinton was in office and Al Gore was just beginning to invent the Internet?). It will be very interesting to see how seriously the authorities take this incident. A failure to track down the criminals would be pretty scary - if the next President's account isn't safe, whose is?

Monday, January 5, 2009

Phishing scam spreading on Twitter

I don't Tweet. Not sure about you, but the answer to "What Are You Doing?" is usually boring, private, or working. I will probably cave in once Twitter hits the tipping point, but I figure we are still a good year away from that.

But Twitter has apparently gotten big enough to attract the attention of phishers. Over the weekend it was hit by a phishing scam that redirected people to a certain twitter.access-logins.com page, which then tried to harvest their Twitter login credentials.

I couldn't find any information on how widespread this attack is (previous social networking attacks, like the Koobface virus that hit Facebook, have had only limited impact). My guess is that a lot of people have fallen for it - phishing is still fairly new on Twitter, and the URL looks plausible enough. Active Twitter users receive so many messages that they cannot possibly check whether each one is legit.

What is being done with the login credentials that have been harvested? I have absolutely no idea, but in the absence of hard facts let me venture a guess. Twitter itself could be used for spamming, click-through fraud, page-rank manipulation, and the like. This is annoying for the victim but not much more.

Although most people use the same password for just about everything, I don't think there is a practical way for the Twitter attackers to use these credentials on other sites. That would require a more sophisticated spear-phishing approach (a phishing attack that targets a particular person), which this does not appear to be. On the other hand, it would not be difficult to try all the harvested login credentials on, say, Citibank. But given the early detection of this phishing scam and the relative tech-savviness of Twitter users, the impact of any such attack would be limited.

There's not a lot that can be done about these kinds of attacks. Even careful users who are aware of phishing scams can easily fall victim.

A few quick lessons for security managers from this:
  • Emphasize the separation of work and personal email. This will help limit the damage if one of your employees' personal email accounts is compromised.
  • Enforce password complexity and expiry. This reduces the likelihood that employees can use the same password universally (a minimal example of such a check is sketched after this list).
  • Make sure that phishing is part of your information security training. Remind employees to be careful where they enter their credentials.
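By way of illustration, here is a minimal sketch in Python of the kind of complexity check mentioned above. The thresholds are illustrative, since real policies vary by organization.

import re

def meets_policy(password: str) -> bool:
    # Require a minimum length plus lowercase, uppercase, digit,
    # and symbol character classes.
    return (
        len(password) >= 10
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

assert meets_policy("Tr1cky-Pass!")    # passes all checks
assert not meets_policy("password")    # too short, too uniform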

Sunday, January 4, 2009

China arrests software counterfeiters

Ars Technica reported on Friday that China has jailed 11 ringleaders of what Microsoft calls "the world's largest software counterfeiting syndicate".

Well, if you have ever been to China you know that there is a lot of counterfeiting going on - fake purses, watches, shirts - you name it. You don't often hear the word counterfeit used for software (isn't it usually pirated software?). My guess is that the term was introduced by the software companies to make it sound more criminal. Piracy makes us think of a 15-year-old kid who just wants to listen to some free music without the Man getting in the way. Counterfeit makes us think of sketchy criminals who are probably trying to get that same 15-year-old addicted to heroin.

Whatever you want to call it, in most places it is illegal (despite the protests of the software-should-be-free crowd). In the past, enforcement was weak and piracy was tolerated (Bill Gates famously said in 1998 that Microsoft would tolerate Chinese piracy and start to collect once users were hooked). But the goalposts have shifted considerably since then.

I don't claim to be an expert on the Chinese judicial system, but the sentences of 1.5 to 6.5 years for $2 billion in fraud seem a bit light by Chinese standards (the IHT reported last week about a certain Cai Wenlong receiving a suspended death sentence for embezzlement). But the fact that China is starting to seriously prosecute software piracy is a significant change of attitude in that country. Increased prosecution will do much more than DRM can ever achieve to discourage piracy.

I wrote in my 2009 predictions that increasing enforcement of cybercrime laws will radically change the information security industry. The software industry has understood that sufficiently motivated people can find their way around DRM and has been focusing on enforcement for a long time (through organizations like the BSA). In confronting cybercrime in general, it is much more difficult to push for enforcement, since losses are spread so thinly amongst so many different entities. Nonetheless I expect we will see more consumer organizations taking this approach in the coming years.

Saturday, January 3, 2009

SSL vulnerabilities and user education

Every couple of weeks a new exploit or vulnerability manages to get the attention of the press. Perhaps it was the slow holiday news cycle that led to the multiple articles claiming "SSL Broken!"

Let me first confess that I haven't read through all the details, or even most of the details, of this research. Like almost everyone else, I have read summaries. We live in a reputation-based world, and the fact that Arjen Lenstra's name is on the paper leads me to believe that the technical details are sound. The researchers claim to have forged certificates based on MD5. Ummm, okay - but how important is this?
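As an aside, whether this research matters for a given site is checkable: the forgery technique only applies to certificates whose signatures are based on MD5. Here is a minimal sketch of how to inspect a live certificate in Python, assuming the pyca/cryptography package is installed; the hostname is just an example.

import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives import hashes

def cert_uses_md5(host: str, port: int = 443) -> bool:
    # Fetch the server's certificate and check its signature hash algorithm.
    # Note: a default (verifying) context may reject a bad chain before we
    # ever get to inspect it; this sketch assumes the handshake succeeds.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    return isinstance(cert.signature_hash_algorithm, hashes.MD5)

print(cert_uses_md5("www.example.com"))   # False for any modern CA-issued cert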

SSL certificates obviously serve some purpose - otherwise Verisign, Thawte, and the like would be out of business. But the importance of certificates lies less in their cryptographic strength and more in the process they establish for clients to connect to servers. Money can be forged, checks can be forged, and of course, most significantly, handwritten signatures can be forged. In 2009, for better or for worse, we still live in a society where most large-scale real estate transactions still require a notary's seal right out of the Middle Ages (a few weeks back a newspaper in New York managed to forge its way into buying the Empire State Building). Almost every day-to-day business transaction is vulnerable to fraud when confronted by a motivated criminal. So why the fuss about the SSL certificates?

There is a simple reason the SSL news made it from the crowded security conference circuit and appeared fleetingly in the mainstream media. Users have been conditioned to believe that https means secure and that everything is OK as long as they see that little yellow padlock. I recently saw a Verisign marketing video where they interviewed a bunch of people on the street and asked them what measures they take to protect themselves online. In this very unscientific survey, a large number of respondents answered that they look for the little yellow lock that indicates SSL encryption.

It's pretty amazing that the average person on the street knows (kind of) what SSL encryption is, and has somehow been conditioned to look for the little yellow lock. Now of course an https-enabled site is more secure than a plain http site. Someone could theoretically be sniffing traffic, get your credit card number, and use it for dark and nefarious purposes. But is this the main threat facing users? How much fraud has actually occurred as a result of someone entering data on an unencrypted site and a criminal sniffing that traffic?

Unfortunately there is very little data on this, and it is the kind of statistic that may be inherently immeasurable. But I am willing to venture a guess here - your online risk from sending your data unencrypted is dwarfed by your risk of generally sharing your data with a large number of entities on the Internet (and many others have commented on the fact that the real risk is not data in transit but data at rest). And your risk of giving your data to a site that you would have otherwise avoided because of a browser warning is even smaller. Do you know anybody who still pays attention to browser certificate warnings?

Expecting users to be able to make decisions about certificates is rooted in the absurd notion that the average user is capable of being their own sysadmin. It reminds me a bit of the whole discussion around identity theft. Instead of telling people that they should limit the number of entities they do business with (you do not need to have a credit card or rewards card from every company you have ever bought something from), we end up with convoluted advice about monitoring credit. Which leads to an entire industry of credit monitors gathering even more data...

But I digress. Let's get back to user education. User behavior is very tricky business, as any marketing professional will tell you. As security professionals we are always calling for technology-neutral laws. User education should, for the most part, be technology neutral as well. Forget the little yellow boxes and green browser bars and the like. The real message should be:

1) Use common sense.
2) Separate your online identity from your online fun. Try to use different computers/browsers/accounts for your business and purely personal browsing.
3) Don't run and install too much stuff on your computer.
4) Don't have too many things open on your computer at the same time (it is amazing how many web-based attacks can be prevented by closing all your browsers before you buy online).
5) Don't give away personal information on the web when you don't need to.
6) And again, use common sense.


Friday, January 2, 2009

Mass Law - Would You Have Been Ready?

Yesterday 201 CMR 17.00 was supposed to go into effect. 

Never heard of 201 CMR 17.00? Come on, don't you regularly monitor the Massachusetts Consumer Affairs and Business Regulations website for exciting legislative updates? 

Well, in case this one slipped past you, 201 CMR 17.00 is a new data protection law in Massachusetts. This regulation is one of the more far-reaching state laws. The law is short and written in plain English, so I recommend just reading the text of the law instead of the many summaries available online.

What does this law mean for your business? Quite a lot - unless you operate locally or are planning on building a separate database for your Massachusetts customers. But don't panic - at least not for the next four months. The initial go-live date of January 1st has been pushed back to at least May 1st (and CSO Magazine thinks it will be delayed even further).

You can thank {insert name of person you blame for US economic meltdown} for the extension. But if you have been procrastinating, you shouldn't. The regulation is mostly pretty basic Security Management 101-type stuff - have a security policy, apply access controls, use encryption, etc. If you are not doing most of this already, the government of Massachusetts may be the least of your concerns.

The extension will help companies get a little breathing room, but for companies that haven't even started work on their security programs it may be too little too late. I used to teach a course in information security at the University of Leiden in the Netherlands (a great university in a beautiful city, especially in the spring time). Each semester when I gave students an extension for their term papers it was the same story - the good students didn't need it and the bad students hadn't even started their homework. They were so far behind that they still weren't ready when the second deadline rolled around.

With compliance, it pays to be one of the good students. I believe that compliance will be the major driver of security spending in the coming years. You can talk to your executive board about security ROI and preventing future losses until you are blue in the face, but the law is the law. A major advantage of having a solid information security program in place is that it puts you in pole position when new regulations come around. If you are operating globally, you are subject to literally thousands of data laws and regulations. Having a solid and reasonable information security program is the only way to avoid a losing game of whack-a-mole with this onslaught of new regulations.

This brings me back to a theme I have explored in the past when discussing breach notification laws. Outside of the most regulated industries, security management is about having an overall security narrative. If you have a solid common-sense security narrative in place, achieving compliance will involve applying tweaks where needed and translating your practices into bureaucratic language when requested.
 
Most companies have underestimated the effect future data security regulations will have on their business. One of the main reasons for this in my opinion is the very low rate of enforcement of existing regulations. The FTC, for example, has brought very few actual proceedings against companies for information security breaches. The settlement against TJX received very wide coverage in infosec circles, but is in a sense the exception that proves the rule. PCI does not release figures on fines, but there is no indication that widespread enforcement has really taken hold.

Some final thoughts on the Mass law, leaving aside for now the grand philosophical debate on whether government regulation is good or bad (libertarians, please take a deep breath). Contrary to some of the consultant-fueled hype, the regulation itself should not have a major impact on businesses that already take information security reasonably seriously. The actual text of the regulation makes clear that no one is trying to shut your business down. The words "reasonable" and "reasonably" appear a full 16 times in the short text. It also states very clearly that the protection measures required are proportional to the size, scope, and type of business and the amount of data stored.

Companies that are out of compliance should focus first on their security policy. A recent Cisco survey showed that only 77% of companies had an information security policy at all. My guess is that a substantial portion of that 77% have a policy in name only - a dusty PDF sitting on some network share that no one has looked at in months. All the other aspects of the law - the access controls, encryption, and everything else - should be explicitly spelled out in the information security policy. And although external consultants can help polish a policy and define a roadmap for implementation, there is no alternative to organically integrating your security policy into your business. A good new year's resolution for all of us, and fodder for a future post...