Thursday, March 26, 2009

Buying compliance

I have been thinking a lot about the price of compliance lately. Almost every day I get unsolicited emails from vendors (frickin' provide-your-email-to-get-IT-content websites) pitching their products with SOX COMPLIANT and HIPAA COMPLIANT splashed across them in big letters (sorry for shouting, but those were direct quotes).

I still can't figure out how such misleading claims can be so prevalent. Take SOX compliance. SOX says precious little about information security, and only concerns itself with security insofar as it touches on financial controls. In other words - SOX requires you to have the security in place that prevents people from cooking your books. SOX says nothing about firewalls, and yet many firewalls are advertised as SOX compliant.

Now part of this FUD undoubtedly originates with auditors. From my experience, audit firms send their heavyweights for the financial part but tend to send some pretty inexperienced folks to do the IT audit. Some of these recent grads are political science majors who realized that all the good jobs were gone, did a quick CISA, and *poof* became auditors.

Here's a quick reality check for all of us in the information security industry - the IT audit is just a sideshow in the financial audit, and a pretty minor sideshow at that. And when it comes to being in IT compliance, the lion's share of that is your user access security. I'll say that again in case you missed it - when IT systems are responsible for the inaccuracy of a company's financial statements, it's because of programming errors, broken reporting processes, or someone having access to a system they weren't supposed to. It's almost never (at least not as far as I know) because someone managed to take advantage of the fact that Apache wasn't patched on the web server. When the auditor comes with a cookie-cutter checklist, you will usually be able to provide compensating controls for technical deficiencies, but it's much harder to fudge the integrity of your business processes.

The main area where you can fail a financial IT audit is in the area of user access to data and applications. If an organization has been sloppy about granting access to network shares or to critical systems, they are in material breach of some basic audit requirements. But there is no product in the world you can buy that will do this for you. Let me repeat that one more time because it contradicts at least 5 sales pitches you have heard in the last month - you cannot buy a product that will magically make sure everyone in your company only has access to the data they are supposed to have. This is because there is no product that can solve your office politics, and (despite the claims of some DLP vendors) there is no product that can really intelligently discern whether data is sensitive or not.
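
To make that concrete, here is a minimal sketch in Python of the mechanical half of an access review (the file names and columns are hypothetical placeholders for whatever your systems can export). Note how trivial the script is - producing the approved list is the office-politics half, and that is the part no product can sell you.

    # A minimal sketch of an access review diff, assuming entitlements
    # can be exported as (user, resource) pairs in CSV form.
    import csv

    def load_entitlements(path):
        """Return a set of (user, resource) pairs from a CSV export."""
        with open(path, newline="") as f:
            return {(row["user"], row["resource"]) for row in csv.DictReader(f)}

    actual = load_entitlements("actual_access.csv")      # what systems grant today
    approved = load_entitlements("approved_access.csv")  # what managers signed off on

    # Flagging the gap takes seconds; deciding what belongs in
    # approved_access.csv is the hard, human part.
    for user, resource in sorted(actual - approved):
        print(f"REVIEW: {user} has unapproved access to {resource}")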

While we're on this can't-buy-me-compliance riff, let's not forget the buttons for "HIPAA Report", "SOX Report" etc. that some security products come with these days. There's almost a cartoon-like image of a SOX auditor coming in and asking for a report and the compliance manager saying, well, I'm glad you asked, I have this nifty SOX report button I will press here.

The somewhat heretical truth is that compliance requirements are basically the same across regulations, industry contracts, and even jurisdictions - make sure only the people who need access have it, make sure you know what happened when, and make sure you have a properly managed environment. But almost all of this is about process, not technology - and where there is a technological need, in most cases there are pre-existing modules or plugins that provide this functionality (I say in most cases, because there are some systems - most notably administrative credential management systems - that truly fulfill the spirit of compliance requirements yet have not been built into existing user management products).
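
To illustrate how modest the technology side of "know what happened when" really is, here is a minimal sketch using nothing but Python's standard logging module (the function and resource names are hypothetical). In practice you would of course lean on the audit facilities already built into your OS or applications, which is exactly the point about pre-existing modules above.

    # A toy append-only audit trail - the kind of "what happened when"
    # record that most compliance regimes actually ask for.
    import logging

    audit = logging.getLogger("audit")
    handler = logging.FileHandler("audit.log")
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    audit.addHandler(handler)
    audit.setLevel(logging.INFO)

    def record_access(user, resource, action):
        """Write one audit record per access event."""
        audit.info(f"user={user} resource={resource} action={action}")

    record_access("alice", "gl_accounts", "read")  # hypothetical example event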

The trend toward built-in security and compliance has been underway for a long time, as the major vendors have integrated regulation-driven customer requests over the years. It will be hard for niche products to survive over time without being integrated into larger product suites, for the simple reason that most organizations have a very strong incentive to limit the number of vendor relationships they have. Every new vendor represents significant overhead in terms of contacts, interactions, contracts, etc., and of course significant added risk and complexity in the environment.

So to circle back to buying compliance - the real costs of security-related compliance are in the time and organizational capital required to bring about real business process change (the second half of that sentence sounds like buzzword drivel, but sometimes buzzwords exist for a reason). While there are certain niche products that can assist with compliance requirements in certain industries, for the most part organizations - especially midsize companies - can get to compliance by leveraging functionality in their existing infrastructure.

Friday, March 20, 2009

The Prioritized Approach to PCI

A week or two ago the folks at PCI published "The Prioritized Approach to PCI DSS Compliance".

This was a long time coming. The original PCI list received a lot of criticism for just being a long laundry list. I don't think there's a single company in the world that could honestly claim to be in total compliance. Companies - especially those new to PCI - don't know where to start with the hundreds of new security requirements that have just landed in their lap.

So it's certainly a good thing that the PCI Council folks were kind enough to prioritize the requirements into 6 categories. (Although in the legal fine print the PCI Council insists that the prioritization is for informational purposes only and that all requirements are still mandatory for compliance.)

Let me start out by saying that I basically like the concept behind PCI. Too many security regulations are just empty fluff. For all its imperfections, PCI puts its money where its mouth is - detailing specific technical measures companies need to take to achieve security. PCI doesn't always get it right (more on that later in this post) and there clearly needs to be some change in the surrounding audit mechanism. But I still think that the standard as a whole kind of works in improving the overall security of the companies subject to it. This is a classic case of security economics at work; the PCI stakeholders (the credit card companies) have a direct financial interest in preventing actual breaches that lead to loss.

The PCI Standard can get away with requirements no state or federal regulation ever could. Legislators try to avoid mentioning specific technologies in legislation. This is a good thing for limiting the influence of lobbyists and preserving technology neutrality (anyone who has had the misfortune to deal with electronic signature regulations knows how messy it can get when specific technologies find their way into legislation). But PCI is not a regulation and does not depend on a legislative process to be updated. It can specifically mention certain technological concepts and then update them over time. Although you sometimes mistakenly hear in the news about "PCI being adopted as law" in certain states, this is almost always a complete misrepresentation (as in the case of Minnesota, whose PCI-inspired law only enacted a few very generalized versions of some of the PCI requirements).

The Prioritized Approach gets some priorities right, and some of them can IMHO use improvement. The document says the prioritization is culled from actual breach data, giving the PCI Council insight most of us don't have into what is important and what isn't.

The Prioritized Approach hits the nail on the head with priority one - removing sensitive data and limiting data retention. This information security principle seems so obvious and yet I rarely see it stated in such a clear and succinct way. The lowest hanging fruit - the bananas practically falling off the branch - is the deletion of customer data you just don't need any more. It's also the easiest thing to get buy-in on from the rest of your organization - it requires few resources and has a very high risk-reduction-to-effort ratio. So absolutely no argument on this no-brainer for priority #1.
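
To give a feel for how approachable priority one is, here is a toy discovery scan in Python - a crude regex plus the standard Luhn checksum - that flags files that look like they contain card numbers. The path is a placeholder, and a real discovery effort (databases, backups, mail stores) is obviously more involved, but finding the data you should be deleting really does start this simply.

    # A toy cardholder-data discovery scan. Real tools are smarter,
    # but the core idea is just pattern matching plus a checksum.
    import os, re

    PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # 13-16 digits, optional separators

    def luhn_ok(digits):
        """Standard Luhn checksum used by payment card numbers."""
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def scan(root):
        """Report files containing digit strings that pass the Luhn check."""
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    text = open(path, errors="ignore").read()
                except OSError:
                    continue
                for match in PAN_RE.finditer(text):
                    if luhn_ok(re.sub(r"[ -]", "", match.group())):
                        print(f"possible PAN in {path}")
                        break

    scan("/data/shares")  # hypothetical path to your file shares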

The relationship between 2 and 3 is likely to generate the most controversy - it is here that the PCI Council prioritizes network security over application security. But in today's environment, is it better to have hardened applications sitting on open machines or the other way around? The actual threat from web application vulnerabilities is still not well understood, and there are a number of items on OWASP Top 10-type lists that do not pose a particularly high risk. But something like failure to validate input (priority 3) is in my opinion responsible for more attacks than some of the networking PCI requirements (e.g. installing a personal firewall on employee laptops, priority 3).
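
For anyone who wants the concrete version of the input validation point, here is the canonical example sketched in Python against a throwaway SQLite table (the table and data are hypothetical): the difference between splicing user input into a SQL string and binding it as a parameter.

    # The classic injection example: bound parameters treat input as
    # data, string formatting treats it as SQL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', '4111111111111111')")

    user_input = "alice' OR '1'='1"  # a typical injection attempt

    # Vulnerable: the attacker's quote breaks out of the string literal
    # and the OR clause returns every row.
    # conn.execute("SELECT card FROM users WHERE name = '%s'" % user_input)

    # Safe: the driver binds the whole value as data, never as SQL.
    rows = conn.execute("SELECT card FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] - no row matches the literal string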

Currently almost all items in PCI Requirement 6 (Develop and maintain secure systems and applications) are assigned priority 3, and almost all items in PCI Requirement 1 (Install and maintain a firewall configuration) are assigned priority 2. I predict this will evolve over time, since there are clearly some app security issues that are more important than network issues.

A couple of final thoughts - the Prioritized Approach puts an appropriately low priority on physical security. One of my recurring pet peeves when dealing with vendor security whitepapers is the overemphasis on physical security. Physical security is well understood, is implicit in doing business, and has much stiffer criminal penalties associated with it. When looking to implement an information security program, "restricting physical access to publicly accessible network jacks" (requirement 9.1.2) has been justifiably pushed down to priority 5.

All in all the Prioritized Approach is a nice addition to the PCI Standard. For all its troubles, PCI is the closest we have to an industry consensus on what is considered reasonable security.

Thursday, March 19, 2009

Security Spending Benchmarks Report

Today the OWASP Security Spending Benchmarks Project published its first quarterly report.

We've been working hard on gathering data over the last few months and it was well worth the effort. I had the privilege of leading this project, with great contributions from our 17 project partners and tremendous help from Jeremiah Grossman at WhiteHat.

We started out with a simple question - how much security spending is enough when developing web apps and software? We all know that there are a lot of web apps out there that are not sufficiently secure. Some of this bad security is a result of sheer ignorance. But most bad security results from lack of effort. Or, applying that universal law of business (TIME = MONEY), from a lack of resources.

In other words, most web apps are insecure because no one bothered spending the time or money to secure them. But since one could theoretically spend 90% of a project budget eliminating every esoteric web app vulnerability known to man, how much spending is enough? Executives want to know what the industry norm is, set aside that budget, and see the security issue disappear so the company can focus on its core business. Although this may not make some security purists happy, in the vast majority of companies security is a tax. And as with most taxes, when the rates are unclear and the tax code is too complicated, people start fudging and figure they'll take their chances.

The security tax is relatively well understood in web security's better funded cousin, network security. When it comes to basic network security, conventional wisdom puts the tax somewhere between roughly 5 and 10%. No such consensus exists for development. It is this vacuum that the OWASP Security Spending Benchmarks Project addresses.
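
To see why that vacuum matters, try the arithmetic yourself. The numbers below are purely illustrative (a made-up project budget, with the network security convention borrowed as a stand-in), not survey findings:

    # Illustrative only: what the 5-10% network security "tax"
    # convention would mean if applied to a development project.
    project_budget = 500_000  # hypothetical web app development budget, in dollars

    for rate in (0.05, 0.10):
        print(f"at {rate:.0%}: ${project_budget * rate:,.0f} for security")

    # Prints $25,000 and $50,000 - but no such agreed rate exists for
    # development, which is exactly the gap the project tries to fill.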

THE FACTS

From the get-go we have focused on making a different kind of survey - a collaborative data collection effort by the community without any added spin. We want the facts to speak for themselves, with the goal of better understanding security spending in development. That is why we have kept our own thoughts and analysis to blogs like this one and out of the actual report. And in what I believe is a first for a security spending survey, we have released the raw survey data, which can be found on Survey Monkey.

There's a detailed description on the project page of our methodology and respondents. A bit further down in this post you can read about some factors that may have affected the quality of our data. So if there are any statisticians in the crowd, please keep in mind that we realize our resulting data is far from perfect. I do however think that our open community approach has led us to results that are at least as good as the funded proprietary surveys that until now have been the only source of data on this topic.

So with that grain of salt sitting on the plate, let's take a deep dive into some of the key survey results...

Data breaches loosen the old security purse strings. The survey data validates the unsurprising fact that companies that have suffered security incidents are more likely to invest in security and have security executives on staff. Anyone who has spent any time negotiating security budgets knows that security incidents are unfortunately a powerful (and somewhat belated) kick in the behind to get security projects rolling. This also makes sense from an enterprise risk perspective - most customers and even regulators can forgive one data breach (after all, stuff happens, as they say). Two data breaches is of course a different ball of wax.

The recession/depression is not negatively affecting app security spending. At first it seems a bit surprising that web application security spending is expected to either stay flat or increase in nearly two thirds of companies. After all, isn't everyone but the government basically broke? But of course the reason we are broke is bad risk management. As I predicted back at New Year's, we got into this big pile of financial $%*#& due to a fundamental inability to assess risk, and there is going to be a ton of new risk-related regulation in the financial sector. And once legislators get the old regulating pen out, you can betcha that infosec regulation is on the way too. In fact, the bailout, ummm, I mean American Recovery and Reinvestment Act, contains a major strengthening of HIPAA data breach notification requirements.

Most companies have specific IT security budgets. You don't get very far without a budget, and 67% of companies surveyed have specific IT security budgets. For companies that had suffered incidents, this was almost 90%. The interesting thing about having a specific IT security budget is that it pits competing security interests against each other at budget time.

There's very little development headcount dedicated mainly to security. In a whopping 40% of companies, less than 2% of developer headcount is dedicated to security. This makes sense to me, because the job of developers is basically to build stuff without making really obvious security mistakes. As I have said before, for many organizations I don't see a real alternative to some kind of build-then-try-to-break approach to producing secure code.

Most companies do third party security reviews. At least 61% of respondents perform an independent third party security review before deploying a Web application, while 17% do not (the remainder do not know, or do so only when requested by a customer). I find this one of the most surprising statistics, considering the expense of third party reviews and the (probably false) assumption many companies might have that they can do this in house. This statistic seems to indicate that there is widespread acceptance of the breakers model of building secure code.

Security is important when hiring. For most companies it is completely infeasible to do a security review of every line of code a developer writes. That's why developer education and previous security experience are so important in producing secure code. That might explain why half of respondents consider security experience important when hiring developers, and a majority provide their developers with security training.

Competitive advantage is not an important factor in security spending decisions. Competitive advantage ranked last out of five factors that could influence security spending, while compliance ranked first. This reflects a reality that I see everywhere except for certain navel-gazing security conferences and highly sensitive industries - namely that ordinary folks do not make purchasing decisions on the basis of security. They expect basic security as part of a product or web application.

Web application security still accounts for a relatively small part of the security spending pie. There are a couple of reasons for this in my opinion. Web app security is still somewhat new, at least compared to the classical network security approaches. Many standards and RFPs still place a very heavy emphasis on network vs. application security. For example, the recently announced Prioritized Approach for PCI prioritizes virtually all network security requirements ahead of web application security requirements. Another reason that web application security receives relatively little budget, in my opinion, is that it does not come bundled together with other functionality. Many security products allow new and visible ways of managing and monitoring users. Most web application security spending, on the other hand, leads to something that is almost completely invisible, namely a locked-down application. Locked-down applications don't make for very good PowerPoint slides. "Total network awareness" products do.

I've got more to say on that but I feel we are digressing a bit. If you are still with me, I'd like to dedicate some ink/pixels to an honest critique of the data quality in the OWASP SSB survey.

LIES, DAMN LIES, AND STATISTICS

I always read survey results with a healthy dose of skepticism. Too often you read stories along the lines of "1 in 3 grandmothers reports losing social security check due to insecure mailbox", with a press release the next day announcing a new mailbox lock.

So having just reviewed our results, it's time to analyze the accompanying grain of salt. Openness and collaboration are a big part of the OWASP Security Spending Benchmarks Project. Although we made every effort to maximize the quality of the data and analysis, the data is not perfect. Surveys never are, but of course businesses are constantly forced to make critical decisions on the basis of imperfect data. Ultimately the goal of the project is to capture the best possible picture of the development spending landscape and - equally important - to stimulate a discussion that can lead to further consensus on this issue.

I may have missed something, but here in no particular order are the possible issues that affect the reliability of survey results:

(1) People not answering truthfully.
Well, not much you can do about this. We kept this to a minimum during the OWASP SSB survey by rejecting responses that took less than 2 minutes to fill out (see the sketch after this list) and by spreading the survey through a trusted list of contacts.

(2) Intentional skewing of results by subverting the survey process (e.g. responding multiple times).
Internet surveys that offer some degree of anonymity will always be vulnerable to this. Again, our survey methodology as described on the project page was designed to keep this phenomenon to a minimum.

(3) A non-representative respondent base
I think that this is the most significant potential weakness of our current survey. Although we have a number of non-security partners involved, the current list of partners consists primarily of security consultancies and other security-focused organizations. Although I don't think this significantly skewed the results, there is a possibility that the companies we reached out to through our partner network were somewhat more security aware than a randomly selected company. In the next quarter, we plan to expand our partner base to include a greater number of non-security related companies.

(4) Badly formulated or suggestive questions.
This is the "Do you (a) support candidate X or are you (b) a heartless fascist" problem. There is an entire industry built around the proper phrasing of survey data to get accurate results. Although none of the partners is a survey expert to the best of my knowledge, many of us had been involved in similar efforts in the past and we attempted to phrase the survey in a neutral way that would lead to the most accurate results.

(5) Drawing incorrect conclusions from the collected data.
Our project report attempted to avoid this by sticking to the facts and leaving the analysis to the blogs. Also, unlike every commercial survey I have read in the last few years, our project actually releases raw data for the community to evaluate. There are no "proprietary methods" or "confidential sources". In fact we welcome competing analyses and others using our data to further discussion around this topic.

(6) Opaque and non-verifiable process.
I think we steered safely clear of this pitfall as well. Our project plan is always available on the project homepage. Anyone who is willing to commit some time and energy to promoting the survey and providing strategic input is welcome to participate.
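
As promised above under issue (1), here is what the speed filter amounts to - a minimal sketch in Python, assuming the survey tool exports start and end timestamps per response (the file and column names are hypothetical):

    # Discard survey responses completed suspiciously fast.
    import csv
    from datetime import datetime

    MIN_SECONDS = 120  # responses faster than 2 minutes are rejected

    def parse(ts):
        return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

    with open("responses.csv", newline="") as f:
        kept = [
            row for row in csv.DictReader(f)
            if (parse(row["end"]) - parse(row["start"])).total_seconds() >= MIN_SECONDS
        ]

    print(f"{len(kept)} responses kept after the speed filter")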

THE NEXT STEPS IN THE OWASP SSB PROJECT

We are planning to build on the momentum of this first survey to get more companies involved and to take on new thematic priorities. The current status of the project will always be available at the project homepage.

UPDATED PRESS COVERAGE:

This project generated significant press coverage, which is great for our goal of establishing industry-wide benchmarks.

Click here for a video interview I gave Search Security on the project

Other coverage includes articles in SC Magazine, Dark Reading, Search Security, PC World, Information Week, and numerous other publications.

There has also been coverage in German and Spanish.

Saturday, March 14, 2009

California mulling new breach law

There's a new data breach law brewing in California.

Normally I don't pay too much attention to proposed or pending legislation, since so much of it is political grandstanding. I don't have the stats, but I imagine only a small portion of proposed laws ever make it very far in the legislative process, and even fewer actually make it into law.

But State Senator Joe Simitian is the same guy who authored the United States' first data breach notification law back in 2003. He's obviously a guy who has managed to get the ball rolling in the past. That law, which has since been copied in some form or another in 44 states, has in my opinion been the number one driver behind the growth of the security industry in the last few years (and when I say that I include heavy hitters like PCI and SOX). Many CISOs today have their position because the CEO saw a data breach piece on the news. And that data breach piece would never have been in the news if it wasn't for breach notification laws.

Mr. Simitian has introduced a new bill called SB 20 that makes a few changes to his original breach notification law. It would require breaches of more than 500 records to be reported to the Attorney General's office. And it would require breach notification letters to contain more detailed information about what exactly was breached and how it happened.

There are still no financial compensation requirements in the bill. I heard that this provision was considered but ultimately rejected out of fear that companies would not report breaches.

Which brings me to the one part of breach notification laws that still doesn't make sense to me - ironically, companies with a more developed information security program are more likely to report breaches. A key component of any information security program and policy is to report incidents when they occur. In a smaller company or a company without a security policy, a lost back-up tape or laptop can be swept under the rug by a company eager to avoid triggering a breach notification. In companies with a security policy and a security officer, incidents are escalated quickly (because of employee training). A lost laptop could then trigger a notification, depending on what data was on it, whether it was encrypted, etc.

It would be interesting to see whether anyone has researched this. At the Security Breach Notification Symposium (appropriately held in California, at the Berkeley Center for Law and Technology), Fred Cate estimated that only one in ten breaches is ever made public. My feeling is that public breaches are an even tinier percentage of overall actual breaches.

Here's my back-of-the-napkin calculation: A lost laptop with unencrypted PII triggers a breach under most states' data breach notification laws. It's notoriously difficult to find accurate statistics on laptop losses. But even by the most conservative estimates there are millions of laptops with unencrypted company PII in use in the United States, and tens of thousands are either lost or stolen each year (OK, I am making up these figures, but if you think about it they make sense). And yet the Identity Theft Resource Center counted only 656 publicized breaches for 2008 - and that counts all breaches, not just lost laptops.
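
For the spreadsheet-inclined, here is the same napkin math in code form. Remember that the laptop figure is my made-up conservative guess from the paragraph above; only the 656 is a published count:

    # Back-of-the-napkin reporting rate. The laptop number is an
    # assumption ("tens of thousands"), not a real statistic.
    lost_laptops_per_year = 50_000
    publicized_breaches_2008 = 656  # Identity Theft Resource Center, all breach types

    # Even if every publicized breach were a lost laptop (it isn't),
    # the implied reporting rate is around one percent.
    print(f"{publicized_breaches_2008 / lost_laptops_per_year:.1%}")  # -> 1.3%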

So to make a long story short, breach notification laws as they are currently written capture only a tiny fraction of the breaches they were meant to address. I don't have an easy answer to this, but the next generation of data breach laws should contain some language that strengthens compliance by all companies, not just those with security policies in place.

Monday, March 2, 2009

Expansions to HIPAA in Stimulus Bill

With a trillion-odd dollars in spending, it was easy to overlook some of the details of the US stimulus package. Surprisingly, a full 21 pages of the 407-page American Recovery and Reinvestment Act are devoted to a significant tightening of HIPAA's privacy and security rules.

If you like to read stimulus packages, check out the actual bill here (warning - 13MB pdf). The HIPAA security and privacy part of the bill can be found on pages 144-165. The fact that roughly 5% of the document is devoted to security and privacy of health information underscores just how important the digitization of health records is to the new administration (or just how vague the spending bill is, but let's try to be glass-half-full types). Obama has specifically mentioned the digitization of health care as a priority of his administration on numerous occasions. Tightened security and privacy helps lay the groundwork for this change.

HIPAA has forced major changes in who health care organizations can share information with, but has arguably not forced specific changes within organizations (as opposed to, say, PCI). Nonetheless, HIPAA is one of the most overhyped justifications for security spending. Almost every security vendor presentation I sit through (and there are a lot...) opens with something about the need to ensure PCI, HIPAA, and SOX compliance. But with few specific technical rules on what companies should or should not do with personal data, it is difficult to use HIPAA to justify any specific security expenditure.

This bill isn't going to change that. HIPAA is more about the defensibility of an overall security narrative. It focuses on having policies and procedures in place and on the legality of data exchange with external partners. Although the privacy aspect of HIPAA is relatively binary (PHI is either shared with unauthorized parties or it isn't), the security aspect is very open to discussion. This is in sharp contrast to, say, PCI, where the detailed requirements leave only limited room for interpretation.

There are nonetheless important expansions to HIPAA in the bill. The regulation now covers a broader range of entities and also contains very specific data breach notification requirements. My guess is that the new breach notification component will have the most direct effect on organizations. I have written of the diminished importance of breach notification laws, but health care providers are different. They operate in a political arena and often rely heavily on government contracts. Even if consumers don't really care about breaches, it will be hard for repeat offenders to successfully bid on government RFPs. Publicized breaches can have real costs for health care organizations.

HIPAA has also until now suffered from very weak enforcement. Two weeks ago CVS received what was only the second HIPAA fine in history. The fine was apparently for throwing receipts with patient data directly into the trash. Failing to shred that information is so obviously wrong that you hardly need a HIPAA-specific rule to tell you so. The first HIPAA fine was for a mere $100,000.

The expanded HIPAA rules in the stimulus bill significantly raise the potential penalties for violators. But until violators start getting fined in a consistent and meaningful manner, it is unlikely that HIPAA will lead to significant security-related spending.