Monday, November 9, 2009

Mass Security Regulation Gets Tech Priorities Wrong

The final version of a sweeping new data security regulation in Massachusetts was published last week. Some parts look pretty good. But some parts look like they are straight out of 1999.

Let's start with a bit of history, for the benefit of the 99.9999% of the population that does not spend its time following obscure state-level data security regulations. The Massachusetts regulation, known as 201 CMR 17.00, was introduced a couple of years ago to address a spate of breaches of personally identifiable information. The business community balked but the regulation survived. Since then it has undergone numerous revisions to address concerns that it imposes an undue burden on businesses.

The regulation has some fairly standard and common-sense requirements on the policy and procedural side. But it is on the technical level that the latest - and supposedly final - version of the regulation sounds woefully out of date. Reading through the text gives an awkward time-warp feeling, like a newly published technical manual talking about dial-up modems and floppy disks.

That 90s feeling starts with the title of the technical section - "Computer System Requirements". Hmmm... What about all the iPhones and netbooks and whatnot floating around the enterprise? And more critically, while securing computers is important, isn't securing servers more important? A more inclusive title like "IT Systems Requirements" would have made more sense.

So what are these "computer system" requirements anyhow? The only purely technical requirements in the regulation talk about anti-virus software, operating system security patches, firewalls, and encryption. If you are having bad flashbacks to the CISSP exam you took a decade ago, that's probably not a coincidence. Those are all important issues, but are they really crucial to most technical data breaches in 2009? What about secure configurations? What about securing web applications and secure code development?

So it seems that the security-apparatchik mentality of anti-virus programs, patches, firewalls, and encryption is unfortunately alive and well in the legislative branch. And of course those measures might be the best way to secure a home computer. But they simply do not reflect the reality that most enterprise data breaches that are not a result of stupidity occur as a result of insecure configurations and applications. Up-to-date virus definitions are usually neither here nor there.

Most large companies already know this. They have an internal risk function in place that prevents them from overspending on anachronistic security measures, except when required to do so by outdated regulations like 201 CMR 17.00. But for small and medium sized businesses – including small shops that manage millions of sensitive records – regulations like 201 CMR 17.00 will drive security spending priorities. These companies are inadvertently being misled into believing that securing their environment means buying an anti-virus program and setting up auto-update.

The truth is of course very different. Installing anti-virus software is easy, but actually locking down an environment is incredibly difficult for smaller companies. That is because it requires reconfiguring other applications that no one in the organization really understands. It requires fiddling around with Unix and database permissions and PHP users in systems that no one normally touches. At the end of the day, it is hard to secure systems you do not understand, and most smaller companies do not understand the systems they run internally.
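To make the point concrete, even the most basic slice of lockdown work - finding files that any local user can modify - takes deliberate effort. Here is a minimal sketch (the file names are illustrative assumptions, not from any particular system):

```python
import os
import stat
import tempfile

def world_writable(root):
    """Walk a directory tree and flag files that any local user can
    modify - one small slice of 'locking down' an environment."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_mode & stat.S_IWOTH:
                hits.append(path)
    return hits

# Demo in a throwaway directory (hypothetical file names)
with tempfile.TemporaryDirectory() as d:
    risky = os.path.join(d, "config.php")
    open(risky, "w").close()
    os.chmod(risky, 0o666)   # world-writable: should be flagged
    safe = os.path.join(d, "notes.txt")
    open(safe, "w").close()
    os.chmod(safe, 0o644)    # others read-only: should be ignored
    assert world_writable(d) == [risky]
```

The scanner is the easy part. Knowing which of the flagged files can safely be locked down without breaking the applications that depend on them is the part that requires understanding the systems - and that is exactly what most smaller companies lack.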

The legislation does not even begin to allude to this. From an actual risk perspective, you are better off with an out-of-date anti-virus program and a really locked down internal environment than the other way around. You are also (sometimes) better off running an unpatched operating system with few services running than a patched one with a gazillion plugins and other third-party components. Whoever wrote 201 CMR 17.00 can't be expected to know this, which is why, when the law gets technical, it just regurgitates some old security one-liners found in CISSP prep courses.

Interestingly, even the weak and outdated technical requirements have a get-out-of-complying free clause. The technical requirements are all prefaced with the bizarre exemption “to the extent technically feasible”. As we speak, they are building some sort of black hole I understand nothing about underneath the French-Swiss border. So of course turning on a firewall or encrypting some data is “technically feasible”. I am not a lawyer, but I cannot see how anyone could make an argument that any of the requirements listed in 201 CMR 17.00 are not technically feasible. There is a very ill-defined exemption at play here that will make it difficult for companies to understand what the regulation actually requires of them.

It is a shame that the poorly written technical portion of 201 CMR 17.00 detracts from what is otherwise a well written regulation. The sections on policy, training, and contractual language are important and will prompt some companies to get their data security house in order. It is only when the regulation tries to get even vaguely technical that it falters. I do not know whether "final version" really means "final version" this time around. But if there is room for one more revision before the March 1st compliance deadline, a few words on secure configurations and applications would go a long way to improving the regulation.

Saturday, October 31, 2009

YouSendIt Indictment is a Cloud Warning

IDG is reporting that the former CEO of YouSendIt, Khalid Shaikh, was indicted this week by a grand jury for launching DoS attacks against his former company.

Disgruntled ex-employees sabotaging their old company happens all the time. When that employee is a former CEO or CTO (and Shaikh was both) it makes you wonder what kind of data that person may have had access to. Especially when the company in question is one of the market leaders in so-called managed file transfer.

Managed file transfer companies help people get around limits on the size of email attachments. If you are sending a 2GB file, email is just not an option. Fedexing a DVD is a royal pain and makes you look about as tech-savvy as a government agency that still insists on receiving faxes. This has given rise to a large managed file transfer market, which includes vendors like Accellion, Axway, Globalscape, and many others. There are basically two types of file transfers - the one where your data stays on your servers, and the one where the vendor hosts the data. YouSendIt is in the latter category.

There is something very convenient about externally hosted managed file transfer - you don't have to configure and manage your own server, for starters. But you lose control of your data, and when your provider is breached your data might be exposed. This won't keep you up at night if the only files at risk are photos of your pet cat. But what about companies that use YouSendIt or other cloud services to transfer confidential files?

To be certain, there is absolutely no indication that Shaikh or anyone else at YouSendIt accessed any data improperly. The only charges relate to the DoS attacks. But the incident serves as a good reminder that when your data is in the cloud, you need to be sure that your cloud provider has the right measures in place to protect against external and internal attacks on their network.

There are not many enterprises that can withstand an attack from a technically sophisticated former insider who is willing to break the law. After all, this person knows:

1) the network and data architecture like the back of their hand
2) security vulnerabilities
3) passwords that haven't changed (and how many companies change all their passwords every time someone leaves?)

This is why the internal data handling policies of cloud providers are critical to the protection of their customer data. The more robust their data security architecture is, the less likely it is that an insider can compromise sensitive data.

So how secure is the data on YouSendIt's servers? YouSendIt has a detailed security policy that describes their overall security approach. Against an insider, the most important security measure is data encryption. But their policy implies that data is not actually encrypted on YouSendIt's servers:

All files stored on YouSendIt servers are encoded and stored using a scrambled name, which makes it impossible for a network intruder to identify the file by its original name or read the contents of the file. In order to access and download a file from YouSendIt’s servers, either the full download link or complete user credentials are required.

I don't really know what "encoded and stored using a scrambled name" means, but I can't imagine it means encryption. After all, if they were actually encrypting, wouldn't they just say that?

So let's assume that what is in place is obfuscation rather than actual encryption. This means that any employee of YouSendIt can access raw files if they gain access to the right server.
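To see why a scrambled name is not the same thing as encryption, here is a toy illustration in Python (my own sketch - I have no knowledge of how YouSendIt's storage actually works): the original filename is hidden behind a hash, but the contents remain readable to anyone with access to the store.

```python
import hashlib

def store_obfuscated(name, data, storage):
    """Store a file under a 'scrambled name' (a hash of the original
    filename) without touching its contents - obfuscation, not encryption."""
    scrambled = hashlib.sha256(name.encode()).hexdigest()
    storage[scrambled] = data   # contents stored as-is
    return scrambled

storage = {}
key = store_obfuscated("merger-plans.doc", b"top secret", storage)
# The original name is indeed hidden...
assert "merger-plans.doc" not in storage
# ...but anyone with server access reads the contents directly.
assert storage[key] == b"top secret"
```

An intruder can no longer pick out files by name, which is what the policy promises - but an insider browsing the store still sees every byte of every file.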

I have written before about how encryption in the internal environment is not always worth the price, particularly for database encryption which can be costly from both a complexity and licensing perspective. Encryption in that case does little to protect the organization against the bad apples in its midst, because those people likely have access to the raw unencrypted data in much easier to reach places. But in the cloud this whole calculus is reversed, which is why encryption of data should be a requirement for any cloud deployment.

I certainly do not mean to pick on YouSendIt. As far as cloud managed file transfer systems go, they at least have a detailed explanation of their security policies on their site. The fact that they are one of the larger companies in this space also provides some reassurance. But the indictment of their former CEO should act as a general wake-up call for anyone who is thinking of using cloud services for confidential information.

In the end, enterprises need to make their own risk assessment for using such cloud based services. For low to medium security files, using a cloud managed file transfer solution does not introduce significant new risk. However for highly sensitive files, incidents like the YouSendIt attack are further evidence that enterprises should either stick with internally hosted solutions, or should use the cloud with caution. Encrypting files prior to using the cloud is one measure that grants additional peace of mind at the cost of slight inconvenience.
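As a rough sketch of that last measure, the client can encrypt before upload so the provider only ever stores ciphertext. The cipher below is a deliberately toy construction built from SHA-256 just to keep the example self-contained - in practice you would use a vetted tool like AES or GPG, not this:

```python
import hashlib
import os

def keystream(key, nonce, n):
    """Toy keystream derived from SHA-256 (illustration only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, plaintext):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct   # nonce travels with the ciphertext

def decrypt(key, blob):
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)               # the key never leaves the client
blob = encrypt(key, b"quarterly numbers")
assert blob[16:] != b"quarterly numbers"   # the provider sees only ciphertext
assert decrypt(key, blob) == b"quarterly numbers"
```

The point is the architecture, not the cipher: because the key stays with the sender, a rogue insider at the provider who reaches the raw files gets nothing useful.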

Monday, October 26, 2009

SEC eyes Identity Theft

A few weeks ago, the SEC fined the Commonwealth Financial Network for its failure to mandate proper anti-virus software on its computers.

Here is the basic story - Commonwealth Financial has a decentralized advisor structure where independent contractors work out of about 1000 branch offices. These advisors access the Commonwealth online trading platform from their own computers. Commonwealth has a central IT office that supports these users.

Sound like a recipe for infected computers? Turns out it was. Using malware, an intruder managed to get the login credentials of some brokers. He (or she) then created a list of high value accounts and tried to execute some fraudulent transactions. At that point Commonwealth's clearing systems apparently picked up that something fishy was going on and shut down the illegal activity.

It would seem that Commonwealth's basic controls worked in this case - a criminal was unable to carry out fraud and potential victims were notified. But the data on the violated accounts was leaked (including information such as the net worth of individuals). And the SEC has a Safeguards Rule that requires broker dealers and Commission-registered investment advisors to "adopt written policies and procedures reasonably designed to protect customer information".

The SEC has not traditionally taken direct action on information security issues that are unrelated to the filings of publicly traded companies (by contrast other regulatory bodies like the FTC have been fining companies for bad information security practices for years). It is hard to say whether the Commonwealth fine indicates that this is about to change. The overall draft five year plan for the SEC released earlier this month contains a fleeting reference to identity theft on page 35 that may indicate a prioritization of this issue. A very detailed overview of the current issues being discussed can be found in the Federal Register.

Of course the Commonwealth fine is so low that it may actually have an adverse effect. It reinforces the calculus of risking low fines rather than changing business practices. The fines companies face for information security issues are dwarfed by the fraud-related fines that regulatory agencies in the United States issue. MoneyGram was fined $18 million the other day for turning a blind eye to fraudulent transactions on its network.

But the SEC action in the Commonwealth case does tell us something about how regulators look at information security. Two main issues were cited in the SEC action -

(1) the failure to actually require - rather than just recommend - anti-virus software, and

(2) the failure of the support center to properly follow up on a report that a computer was infected.

Recommendations and Requirements

The first item underscores what legal departments have known for years and what CISOs are just starting to learn - that the most important thing for an organization is a well-formulated and well-communicated security policy. This is actually more important than most technical controls in addressing the overall enterprise IT risk.

Commonwealth might have avoided a fine entirely if it had just switched around a few words in its security policy. To regulators, there is a big difference between requiring and recommending, even if you can't actually enforce your requirements.

To technically require anti-virus software is a pain. Network Access Control (NAC) systems have struggled to gain acceptance outside of highly controlled corporations or environments like universities, where infected users threaten the availability, and not just the security, of networks. The recent failure of the once-promising ConSentry Networks is a sign that NAC vendors had overestimated the appetite for pure-play NAC appliances.

But there is a world of difference between getting a complex NAC solution to make sure everyone on your network has anti-virus software, and just telling people they have to get anti-virus. The latter is free. And although cynics would say that it does little to influence actual user behavior, it does help create a culture of security within the organization. And, critically, these policy mandates create a framework for liability and accountability when something goes wrong.

What You Don't Know Sometimes Cannot Hurt You

Item (2) raises an uncomfortable truth that undercuts the selling strategy of many security vendors. Namely, organizations are sometimes better off not knowing about security vulnerabilities than knowing about them and doing nothing about them. In this specific case, knowledge of a vulnerability came from a human being noticing their computer was infected. But most vulnerabilities come to light by an automated system detecting them. In that case ignorance is sometimes bliss.

Many security vendors pitch their products with "You have no idea how much bad stuff is going down on your network! Buy our new ZXT3000 to discover and mitigate threats ABC". For some businesses, this is an appealing proposition because their data is so sensitive that it is being specifically targeted. But for the large majority of organizations, buying the ZXT3000 (and apologies if such a product actually exists) is just going to create more liability than they previously had; after all, they may have the budget to buy the device, but they don't have the manpower to monitor all the alerts it creates. This is why many organizations have turned off their complex IPS systems. They turned them on, got gazillions of alerts, and then intuitively realized that having all these high-severity alerts and not doing anything about them is worse than having no alerts at all.

Sunday, October 18, 2009

Visa Embraces End-to-End Encryption

It feels like it has been a slow last few months in the information security regulatory and compliance space. That is my excuse for why it has been quiet on this blog for a while (well, that and being very busy with other stuff).

PCI was back in the news last week with an announcement by Visa in support of end-to-end encryption. With all the hundreds of requirements that PCI imposes on merchants, it can come as a bit of a surprise that data is not in fact encrypted at all stages as it travels between the Point-of-Sale and the card brand. This latest announcement by Visa is a signal that the payment industry is finally looking to fix this.

Further evidence of this shift comes from an unexpected source. This week I had the chance to hear Heartland CEO Bob Carr talk at the SC World Congress in New York about the massive data breach his company experienced in January of this year. Now you might think that the Heartland CEO addressing a security conference would be as likely as Nancy Pelosi addressing the NRA. But ever since Heartland's data breach, Carr has been aggressively engaging the IT security community.

He has called for reform of the QSA system (more on that later) and Heartland is promoting a new end-to-end encryption standard being developed with Voltage called E3. The E3 system will ensure encryption of card data from the moment a card is swiped until it is transmitted to the card brands.

Heartland is not alone in proposing a more robust system for securing card data throughout the transaction lifecycle. First Data and RSA have a competing tokenization product that basically replaces sensitive card data with random numbers and offers both advantages and disadvantages when compared with the E3 end-to-end encryption approach.
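The tokenization idea is easy to sketch: sensitive card numbers are swapped for random tokens, and only a tightly guarded vault can map a token back to the real PAN. The class below is my own toy illustration of the principle, not the actual First Data/RSA design:

```python
import secrets

class TokenVault:
    """Toy token vault: swaps a card number (PAN) for a random token.
    Real tokenization products are far more involved (format-preserving
    tokens, HSM-backed vaults, etc.) - this just shows the principle."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan):
        token = secrets.token_hex(8)   # random, carries no card data
        self._vault[token] = pan
        return token

    def detokenize(self, token):
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token != "4111111111111111"              # downstream systems see only the token
assert vault.detokenize(token) == "4111111111111111"
```

The trade-off with end-to-end encryption falls out of the sketch: a breached downstream system yields only worthless tokens, but the vault itself becomes a single high-value target that must be protected accordingly.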

So will these new technologies reduce the number of credit card data breaches? That is hard to say, because we don't know enough about the cause of most of these breaches. But it seems like a safe bet that implementing these systems will at a minimum substantially reduce the risk of data compromise between the PoS and the acquirer.

But What About Internet Sales?

Card Not Present (CNP) transactions are a different ball game. After all, card data in a CNP transaction needs to travel a long road before it is safely in an end-to-end or tokenized environment. Reducing the number of nodes that store actual unencrypted data will not do anything to secure these initial stages of CNP transactions. But it will make it easier to identify where breaches occurred. And this, in turn, will help sort out the liability issues which are at the heart of the practical problems with PCI.

Untangling the Liability Mess

Let's take a closer look at the liability issue. One reason for the poor state of application security today is that organizations are often not held accountable for data breaches that do not involve cardholder data. With nearly ubiquitous data breach laws in effect, this is usually not the result of willfully concealing a breach but rather because companies don't know - and aren't motivated to uncover - whether they have been breached. In an ecosystem where many parties have handled the same data set, breaches cannot be definitively traced back to the offending party. This leaves little inherent incentive to invest in security technologies.

Take for example the fraudulent use of Social Security Numbers. If a criminal manages to take out fake credit in the name of a certain John Smith through use of his SSN, address, and birthdate, there is almost no way to realistically figure out where the leak came from. After all, John Smith has probably directly and indirectly provided this information to gazillions of service providers and others over time.

Cardholder data is a different ball of wax. The payment card industry is in a unique position to trace back its fraud and does this very successfully in the physical world. A waitress who runs cards through a skimmer in the back of a restaurant will generate a list of fraudulent charges that will in all likelihood be traced back to the merchant. So the restaurant is incentivized (I have never really been sure if that word actually exists) to prevent such fraud - whether by trying to hire trustworthy employees, keeping a closer eye on them, etc.

In the Card Not Present environment, the lack of end-to-end encryption makes pinpointing blame slightly more difficult since the identical data set may exist in a number of different systems belonging to different entities. Perhaps not more difficult in a breach on the scale of Heartland, but more difficult for the thousands of mini-breaches that occur all the time. By securing this travelling data, it becomes easier to actually locate where smaller scale breaches have happened.

The Failure of the QSA System

As the screws tighten on data-in-transit and it becomes easier to assign blame for misused cards, the issue of QSA liability becomes even more important.

The QSA system is badly broken, and Heartland is just another example of a certified entity that was breached shortly after certification. The lack of liability is the greatest failure of the PCI system. In a normal financial audit, part of the deal is that if the company totally opens its books and does nothing to willfully mislead its auditor, then the auditor takes a certain liability with regards to the statements it produces. With PCI, nothing of the sort exists. What is the PCI audit worth if no one is responsible for its conclusions?

(The oft-quoted notion that one is never truly PCI compliant because compliance is just a snapshot in time doesn't hold much water. After all, a financial audit is also just a snapshot in time, in the sense that if someone raided the corporate bank account a week after an audit then of course the audit results are no longer valid. Liability can exist even without continuous monitoring.)

Small Businesses

The liability issue is especially critical for small businesses. Although PCI has been around for a while, for a vast number of the roughly 6 million smaller merchants in the US, the only experience with PCI is a line item on their merchant fees (many acquiring banks actually itemize a "PCI fee" to merchants). The credit crisis has left smaller merchants in a precarious situation, with merchant accounts being shut down when accounts exceed normal charging activity. Add PCI fines to the mix and a small company could easily go out of business altogether if it falls afoul of its acquiring bank.

Which means that in the short term, increased awareness of PCI is driving an increased use of external payment gateway systems to offload card processing altogether. Most gateways are relatively inexpensive (a small monthly fee plus a small per-transaction fee). It seems unlikely that any small company can invest the necessary funds to really make their IT systems PCI compliant. Data security regulations like the Massachusetts law go to great lengths to emphasize that small businesses are only expected to spend relative to their size. PCI is less forgiving. Level IV merchants may be subject to fewer validation requirements than Level I merchants, but the actual requirements are the same. It is no wonder that merchants are exiting the online payment business completely.

Of course those small businesses still have to deal with the PCI requirements for their non-Internet processing, but increasingly these IT environments are separate from a company's web presence. Many companies are outsourcing this processing as well. Indeed, significant amounts of card data can pop up in unlikely places. A recent British survey revealed that 97% of call centers record sensitive cardholder data. Better to have those systems outside the gate as well.

The new move to end-to-end encryption is certainly good news for the payment card industry, but will also require businesses to invest in new equipment and generally reassess their card payment architecture. For many small web merchants it may well serve as a motivation to reduce their card gathering activities even further.

Tuesday, July 21, 2009

https Can Wait - SaaS Needs Better Authentication First

Twitter just got burned in the cloud. Some "hacker" managed to figure out a password to one of Twitter's Google Docs accounts. This guy went on to send a whole slew of confidential Twitter documents over to TechCrunch.

This kind of stuff happens all the time, but our collective Twitter obsession has catapulted this story to the top of the news. Twitter's role in the recent Iranian protests has given the fledgling service a new gravitas. An attack on Twitter, it would seem, is an attack on all of us. And to make things worse this was a direct attack on cloud services. This perfect storm even has the New York Times talking about cloud security.

First let's look at what actually happened. An administrative assistant at Twitter used the same password for her corporate Google Docs account as for a whole bunch of personal services. Enter some guy going by the name Hacker Kroll. He managed to reset her password by answering her "secret questions" and reviving a defunct Hotmail account the assistant had given for password resets. A bit of Googling and voila - all the company's goodies from secret business plans to personal emails are in the public domain.

Reading over the chain of events, it seems like this could happen to pretty much any company using SaaS (which according to various studies means most companies). And it raises an uncomfortable question - can Google Docs be trusted for anything truly sensitive given the flimsy password authentication it relies on? For the average user, Citibank password=Amazon password=Salesforce password=Twitter password=Hotmail password... you get the point.

The Inevitability of Password Recycling

So who is to blame for this gaping security vulnerability?

Let's start with who is not to blame. Users can't be blamed for doing what comes naturally. And in fact, sticking to a very small number of passwords makes sense from an availability perspective. The security risks arising from using the same passwords everywhere pale in comparison to the total catastrophe that ensues from actually getting locked out of accounts. The average user would rather risk a 0.01% chance of their online accounts being compromised than a 5% chance of being locked out of their accounts (OK, I'm making those numbers up but you get the point).

There is another reason not to blame users - they haven't been given any workable alternatives to password recycling. Users are justifiably nervous about browser-based password managers - they open up a Pandora's box of cross-site scripting and other vulnerabilities, no matter how complex your passwords are. And systems like KeePass that allow users to store their passwords in encrypted form may work well for a paranoid minority, but just don't meet the real-world needs of the average user.

Some companies try to force unique passwords through complexity requirements or password expiration policies. These settings aren't always available (Google Docs doesn't allow password expiry) but in Salesforce, for example, they can be set administratively. But this still doesn't solve the problem of password recycling. If a given user has hundreds of pictures of their golden retriever on Facebook and all of their passwords are goldenretriever1, goldenretriever2, etc., there's no configuration setting in the world that's going to pick up on this.
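The point is easy to demonstrate: a typical complexity policy happily accepts every one of those recycled passwords. A sketch (the specific policy rules are my own illustrative assumption):

```python
import re

def meets_policy(pw):
    """Typical complexity policy: at least 8 characters, with an
    uppercase letter, a lowercase letter, and a digit."""
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw)
            and re.search(r"[a-z]", pw)
            and re.search(r"[0-9]", pw))

# Passes the policy with flying colors, yet is trivially guessable
# from the user's Facebook page - and recycled everywhere.
assert meets_policy("Goldenretriever1")
assert meets_policy("Goldenretriever2")
# The policy only catches naive weakness, not guessability.
assert not meets_policy("short1A")
```

Complexity rules measure the shape of a password, not whether it is unique to the service or derivable from a pet's name - which is exactly the gap the Twitter attacker walked through.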

So the solution isn't going to come from user education or unenforceable corporate policies. SaaS providers need to offer more secure cloud authentication alternatives, even if this means charging a premium. SaaS vendors will of course only react to a market need. Unfortunately there has been very little pressure on vendors and the focus to date has been disproportionately on old fashioned network security issues. This has come at the expense of improving the very weak authentication structure in place in most SaaS offerings today.

https and Barking Up the Wrong Security Tree...

Take for example the recent letter to Google from a group of security industry thought leaders calling on the company to enforce https rather than http. While that is a worthy goal, it builds on the security industry's https fetish while ignoring the much more significant cloud authentication crisis.

Defaulting to https protects against packet sniffing; an important security objective, but one that is less critical in the cloud than on corporate networks. Compared to guessing passwords, running a packet sniffer requires a high level of technical expertise and a high level of direct network access. The rewards are also limited - sniffing a Google Doc that is being transmitted in plaintext gives access to that one document. Compromising a password yields the mother lode. That's why the majority of attacks we hear about involve guessing user credentials, not performing network monitoring (the TJMaxx case notwithstanding). Nine times out of ten when the media talks about an account being "hacked into", they are not talking about a compromised router or server. They are talking about plain old password guessing a la Twitter or the Sarah Palin Yahoo incident.

Security risks in SaaS differ sharply from the traditional firewalled corporate network. At the risk of vast generalizations, https is more important than robust authentication in a walled environment, but in the cloud that priority order is flipped. Password authentication is often sufficient protection for in house corporate resources because there is usually at least one more hurdle to climb to actually get at the data. That hurdle might be knowing how to get onto a company VPN or even just knowing the URLs of the company's web facing resources. These aren't state secrets, but probably enough to deter the casual hacker. Remember, the only technical skills involved in many headline-grabbing "hacking" incidents are a bit of Googling and combing Facebook for clues to password reset questions.

Poor password management is of course still a problem within corporate networks, especially for shared passwords. I recently discussed this issue in an interview that was published in Computerworld today. The lack of administrative password management is another example of skewed security resource allocation; organizations that spend enormous sums on firewalls, IDSs, and other network security devices and services often fail to properly secure system access accounts such as root passwords on Linux servers, administrative passwords in Windows, or sa passwords on databases. Indeed, the lack of proper management of administrative passwords was apparently yet another security issue at Twitter.

But the shift to cloud services like Google Docs gives potential hackers even lower-hanging fruit than guessing at default or poorly chosen administrative passwords. Cloud computing increasingly means that the only thing standing between a hacker and confidential data is a single password. After all, there's no point in trying to gain access to a core router with a potentially stupid password when you could just guess away at the cloud login and try your luck there. And as an added bonus to the password-guessing approach, the lucky guesser gets all the data served on a silver platter, all formatted and ready to go. No messy databases to sift through and no need to have any knowledge of SQL, IOS, or other unpleasant technicalities.

Adding Just a Bit of Security to the Cloud

Eliminating the all-you-need-to-do-is-guess-a-password vulnerability in cloud computing isn't rocket science. It is in fact much easier to address than the politically dicey issues involved with shared administrative passwords. And there is no reason SaaS providers can't charge for the service. SaaS providers such as Survey Monkey already offer https versions of their products at a cost. Incidents like the recent Twitter snafu will push mainstream SaaS providers to offer premium authentication services as well.

There are a couple of easy-to-implement solutions that would have prevented the Twitter hack and also the vast majority of other SaaS password-guessing attacks that have been going on lately. One method is to require an extra "corporate password" to get into an account, so that employees need to enter both an individual password and a second password maintained and periodically changed by the company. Not a perfect solution, but one that would deter the flood of amateur attacks that SaaS seems to attract.
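A corporate second-password scheme like the one described above is simple to sketch. The class below is an illustrative toy, not any vendor's actual mechanism; it uses only Python's standard library and stores salted PBKDF2 hashes rather than raw passwords:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    """Derive a key from a password with PBKDF2 (stdlib only)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class DualPasswordChecker:
    """Login requires both an individual password and a shared corporate
    password that the company rotates periodically."""

    def __init__(self, corporate_password: str):
        self.corp_salt = secrets.token_bytes(16)
        self.corp_hash = hash_password(corporate_password, self.corp_salt)
        self.users = {}  # username -> (salt, hash)

    def register(self, username: str, password: str):
        salt = secrets.token_bytes(16)
        self.users[username] = (salt, hash_password(password, salt))

    def rotate_corporate_password(self, new_password: str):
        self.corp_salt = secrets.token_bytes(16)
        self.corp_hash = hash_password(new_password, self.corp_salt)

    def authenticate(self, username, password, corporate_password) -> bool:
        if username not in self.users:
            return False
        salt, stored = self.users[username]
        # Constant-time comparison to avoid timing side channels
        user_ok = hmac.compare_digest(stored, hash_password(password, salt))
        corp_ok = hmac.compare_digest(
            self.corp_hash, hash_password(corporate_password, self.corp_salt))
        return user_ok and corp_ok
```

One nice property of the design: rotating the corporate password invalidates stale guesses across every account at once, without forcing each employee to change their individual password.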

There are other, more robust methods to beef up security. Users can be required, for example, to submit corporate email accounts as their backup accounts. Another option is to force users to dial into a corporate call center to reset their password. They can then be subjected to much more detailed questions to authenticate them.

Letting companies insert themselves into the authentication process will do a lot more than https to secure cloud services. There just aren't that many folks out there running Wireshark in hopes of stealing a spreadsheet off of Google Docs. As the recent Twitter breach indicates, there are many more people out there trying to guess your employees' maiden names and get to passwords that way. That's not to say that https isn't important. But it's much more important to beef up authentication first.

Tuesday, June 30, 2009

OWASP Security Spending Benchmarks Project Report for Q2 Published

Today the OWASP Security Spending Benchmarks Project Report for Q2 was published.

This project measures security spending in the development process. This quarter we focused on cloud computing. We were trying to measure how much use companies are making of cloud computing, how this affects spending, and how they are dealing with related legal and business issues.

We are lucky to have some great security folks volunteering their time on this OWASP project - Jeremiah Grossman, Rich Mogull, Dan Cornell, Bob West, and others have all provided valuable feedback and support. We were also very fortunate to have organizations like the Open Group and the Computer Security Institute (CSI) join our project over the last quarter. They join organizations such as eema, Teletrust and companies such as nCircle, Cenzic, Fortify and others that have been actively contributing to this effort. A full list of partners can be found on the project website.

Cloud computing gets some people's eyes rolling because it sounds like a marketing gimmick or meaningless term. But whatever you want to call it, infrastructure, platforms, and software are resources that are increasingly being outsourced or externally hosted. This has enormous security implications because it undermines the traditional notions of ownership and management that security has been based on in the past.

Here are the key findings in the OWASP Security Spending Benchmarks Q2 report:


1. Software-as-a-Service is in much greater use than Infrastructure-as-a-Service or Platform-as-a-Service. Over half of respondents make moderate or significant use of SaaS. Less than a quarter of all respondents make any use of either IaaS or PaaS.

2. Security spending does not change significantly as a result of cloud computing. Respondents did not report significant spending changes in the areas of network security, third party security reviews, security personnel, or identity management.

3. Organizations are not doing their homework when it comes to cloud security. When engaging a cloud partner, only half of organizations inquire about common security-related issues, and only a third require documentation of security measures in place.

4. The risk of an undetected data breach is the greatest concern with using cloud computing, closely followed by the risk of a public data breach.

5. Compliance and standards requirements related to cloud computing are not well understood. Respondents report having the greatest understanding of PCI requirements relating to cloud computing and the least understanding of HIPAA cloud requirements.


1) The fact that SaaS is reported as the most prevalent of all cloud models is not surprising at all. Leveraging Platform-as-a-Service requires a level of expertise and sophistication many companies still do not have. And Infrastructure-as-a-Service has been dogged by performance issues and has yet to really supply an appropriate ROI model.

2) It is more perplexing that organizations do not report significant spending changes as a result of cloud computing. On the face of it, one would expect that cloud computing would result in lower expenses in a number of security areas, particularly network security. The fact that this has yet to occur may mean that organizations have been slow to adapt security budgets as a result of their cloud activities. Over time, both budgets and the role of security management will be increasingly focused on managing and auditing cloud relationships. Which brings us to number 3...

3) It is also somewhat surprising that organizations are not doing their homework when it comes to cloud computing. The survey found that only a third of organizations ask for the security policies of cloud partners. With all the talk of cloud security dangers, you would expect heightened awareness and that companies would take the time to look into cloud partners' security narratives. That this has not been happening indicates that companies see cloud computing in the same vein as other outsourcing arrangements - the actual under-the-hood operations or security are not that important as long as the issues are contractually addressed. This approach may be more a result of necessity than choice: it is hard to see how a small company with significant operations in the cloud could make any meaningful assessment of its cloud partner's security posture.

4) Data breaches are and will always remain the main fear factor driving the security industry. While compliance has always been a bit fuzzy (especially when it comes to non-technical regulations, where there is a lot of wiggle room), the same cannot be said of a breach. You have either been breached or you haven't, which probably accounts for the greater concern survey respondents reported. It is interesting, however, that despite this very high level of concern with data breaches, organizations are still doing very little to vet cloud partners. Most organizations seem to have concluded that although there are many data security dangers related to cloud computing, there is not much they can do to mitigate the risk.

5) Compliance is the issue that is really raining on the entire cloud computing parade. While PCI has fairly detailed supporting documentation to guide companies, other standards and regulations are much vaguer, so it is easy to see why people are confused. Regulators are still struggling to understand Web 1.0, so I do not expect we will be seeing much concrete guidance in this area in the near future.


I gave a whole bunch of caveats the last time we published our survey results about why web surveys need to be taken with a healthy grain of salt. This still holds true for our cloud computing survey, and probably even more so because no one seems to agree on what cloud computing is. But even so, there are some important takeaways from the data we collected.

The most significant warning sign in the survey results in my opinion is that companies are moving to the cloud without really inquiring about the security policies and posture of their cloud partners. And when they do ask about these issues, they rarely ask for documentation. This does not bode well for the future security of cloud computing. Although smaller companies rarely have the resources to truly assess the security of their cloud partner, asking for written documentation of security policies at least forces the cloud partner to maintain a security narrative they share with customers. As more customers inquire about security, this security narrative takes on an increasingly strategic role for the cloud partner.

You can read the full report here.

Saturday, June 27, 2009

Nevada Mandates PCI Standard, Part II

Did Nevada really mandate the PCI Standard into law last week?

It sure seems like it when you read Senator Wiener's bill SB 227. I am not a lawyer, but the following sentence seems pretty clear: "If a data collector doing business in this State accepts a payment card in connection with a sale of goods or services, the data collector shall comply with the current version of the Payment Card Industry (PCI) Data Security Standard, as adopted by the PCI Security Standards Council".

For anyone involved in information security management or compliance, this is a really big deal. PCI has just been catapulted from a contractual obligation to a full legal requirement.

No one seems to have seen this one coming. In fact, I am not even sure that the Nevada legislature really saw this coming and they may not have realized the very far reaching implications of this legislation. But more on that in a minute.

Ira Victor, President of the Sierra Nevada chapter of Infragard and Director of Compliance at Data Clone Labs, was kind enough to reach out to me this week after I published my original post on this topic. Ira was intimately involved in the discussions around the new Nevada PCI law and testified before the Nevada Senate Committee on Judiciary in support of the law. Ira has some terrific insight into the history of this bill that can be heard in my interview today with him on this topic.

Here's a quick history of the bill as related to me by Ira. The current law came about to replace NRS 597.970, an earlier bill mandating encryption that apparently left open the door for criminal liability and did not define encryption. To remedy these issues, the new bill is much more specific about encryption requirements and somewhat randomly also requires PCI compliance. In exchange it provides a safe harbor for companies that are PCI compliant.

There is a thick irony here. Businesses that objected to the original bill on the grounds that it was too harsh now have a much, much stricter bill on their hands that actually mandates PCI. This is either a very bold and trailblazing move by Nevada, or a last-minute oversight because businesses didn't understand the implications. My money is on the latter for a couple of reasons:

1. There is no precedent of any other state legally mandating PCI. Some people think PCI is good and some think it is bad. But either way there is something plain weird about a law mandating a specific contractual agreement between merchants and card issuers.

2. There is no reference to PCI in any of the discussions or testimony before the Nevada Senate Committee on Judiciary. Wouldn't such a major shift in infosec policy at least be discussed by law makers and special interest groups ahead of a vote?

My guess is that the Nevada legislature meant to waive liability for PCI compliant companies, but not to actually mandate PCI. Recent discussions in Massachusetts objected to the mere mention of encryption in that state's security regulation. I can't possibly see how the business community in Nevada would have knowingly agreed to the whole PCI enchilada without putting up a fight. Being forced to do PCI makes mandated encryption look like a walk in the park.

So if this law doesn't make sense, is it going to stick? Ira knows a lot more about the legislative process in Nevada than I do and he insists that there is very little wiggle room to delay this law. But I just don't see the state of Nevada actually enforcing this. How many small businesses can really claim to be PCI compliant? Even the PCI Council itself tacitly acknowledges as much through the publication of their Prioritized Approach.

For more on this topic you can listen to my interview with Ira here.

Saturday, June 20, 2009

Nevada Mandates PCI Standard

Nevada has recently passed a law mandating PCI compliance for companies accepting payment cards that do business in the state. It is scheduled to go into effect on January 1st, 2010.

This makes Nevada the very first state to actually mandate PCI. The prize for toughest-state-data-security-law used to belong to Massachusetts. But Mass has recently been wavering and its technical requirements are almost non-existent compared to PCI.

The Nevada law is no reason to panic and doesn’t really change much for companies dealing with credit card data. Those companies already have a contractual obligation to adhere to PCI. The Nevada law ups the ante by making this an actual legal requirement, but the standard itself remains the same. And as far as actual enforcement goes, the Nevada law says nothing about penalties whereas PCI has the ability to fine non-compliant companies.

The bigger change is for companies that deal with non-credit card personal data. The Nevada law defines nonpublic personal information as a social security number, driver’s license number, or account number in combination with a password. It mandates the use of encryption for the transfer of such data outside of a company's control (this requirement existed in various forms in previous Nevada legislation as well).
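As a rough illustration (and emphatically not legal advice), the statutory definition can be expressed as a simple predicate; the field names here are my own invention:

```python
def is_nonpublic_personal_info(record: dict) -> bool:
    """Rough reading of the Nevada definition: an SSN or driver's
    license number qualifies on its own; an account number qualifies
    only in combination with a password."""
    if record.get("ssn") or record.get("drivers_license"):
        return True
    return bool(record.get("account_number") and record.get("password"))
```

A data inventory script built around a check like this is one cheap way to find where the to-be-regulated fields are actually flowing.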

One would hope that there aren’t too many companies out there sending account information together with passwords unencrypted. That leaves full Social Security Numbers and the much-less-frequently used driver’s license numbers. (Interestingly, the regulation doesn’t consider the last four digits of the SSN to be personal information. Which is kind of strange when you consider that the last four digits are the most random part of the number. Oh well).

I suspect there are many companies out there with Nevada customers who will have to play some catch-up when it comes to SSNs. Full SSNs are still frequently used as a primary identifier for many web services related to payroll and benefits as well as many services that have nothing to do with taxes.

Most of these services already encrypt data on the interface level – it is the exception rather than the rule today to see a plain old http login page that asks for your SSN. It’s much tougher to know what is going on behind the scenes. But does the Nevada law really require companies to change their back-end data processing?

Because the law only talks about the “secure system” and the area “beyond the logical or physical controls of the data collector”, it is doubtful that this regulation requires SSL encryption for any data that does not go out in cleartext over public networks. Data behind firewalls, or behind some form of password protection, would not appear to require encryption under this wording.

One positive potential outcome of the Nevada law is that it may encourage organizations to move away from using SSNs when they don’t have to (a trend that has already been underway for a while, particularly at universities). There is something particularly jarring about being asked to provide your SSN to get cable service. Strict new rules around handling SSNs may be the necessary kick in the pants for SSN-addicted companies to finally overhaul their authentication methods.

One final thought about the Nevada law itself. In what I believe is a first for state laws, it directly references FIPS, NIST, and other “established standards bodies” when discussing allowable encryption methods. Most data breach notification laws give an exemption for encrypted data without giving any meaningful definition of the term. This has allowed companies to avoid notifying of a data breach when the compromised data was somehow obfuscated. This law will make it harder to claim that some light obfuscation or encoding actually constitutes encryption.
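The encoding-versus-encryption distinction is easy to demonstrate. Base64, a perennial favorite of "the data was obfuscated" claims, reverses with no key at all (the SSN below is made up):

```python
import base64

# A made-up SSN "protected" the way some breached apps actually store data
stored = base64.b64encode(b"123-45-6789").decode()
print(stored)       # looks scrambled to the naked eye...

# ...but anyone can reverse it without knowing any secret, which is
# exactly why encoding is not encryption: no key is involved anywhere
recovered = base64.b64decode(stored).decode()
print(recovered)    # 123-45-6789
```

Under a FIPS/NIST-style definition, recovering the plaintext must require a secret key; here it requires nothing but a standard library call.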


Companies that sell encryption products have a field day with laws like this. But - like other data security regulation - you don’t need to buy anything to be in compliance with the Nevada data security law. You just need to make sure that you are not sending sensitive data in cleartext over public networks. This means a bit more messing around with certificates and configurations prior to releases but not much more. And of course you also need to make sure that anywhere you are storing this data at rest is considered part of your “secure system” or has some logical or physical controls in place.
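Verifying that an endpoint actually serves traffic over validated TLS is a small scripting exercise. A sketch using Python's standard library (the host name passed in is whatever service you want to check):

```python
import socket
import ssl

def check_tls(host: str, port: int = 443) -> str:
    """Connect to host:port, verify its certificate and hostname against
    the system trust store, and return the negotiated protocol version.
    Raises ssl.SSLError if the certificate does not check out."""
    ctx = ssl.create_default_context()  # enables cert + hostname checks
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# e.g. check_tls("example.com") returns the negotiated version string
```

This only confirms the front door; as noted above, it says nothing about whether data at rest behind that connection sits inside a "secure system."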


The actual text of Nevada Senate Bill 227 can be found here.

A good overview of the evolution of data security legislation by Andrew Baer can be found here.

UPDATE: My newest post on this topic can be found here. You can also listen to my interview with Ira Victor who testified before the Nevada Senate Committee on Judiciary in support of the bill.

Tuesday, June 16, 2009

Opera Invites You to Join the Cloud

Or at least that's what Opera is claiming with the rollout of its new Opera Unite service. It will allow users to serve up web pages from their own computers.

Why would you want your humble desktop to serve up web content? So far Opera doesn’t have much of an answer to that. The sample apps they offer – a “fridge” where friends can post notes, a way to share your music – don’t exactly scream revolution or Web 5.0 (as Opera likes to refer to the service).

You might also be wondering what’s so special about Opera Unite; after all there is nothing new about being able to run a web server from your computer. Opera itself has supported BitTorrent for a while. And anyone can stitch together a webserver with Firefox plugins or just enable one without going through a browser.

What Opera Unite does is present this functionality to users in one unified service. People who would never dream of firing up a web server on their own might be tempted to give Opera Unite a whirl. Opera seems to be betting that user-friendly client-side content hosting can buck the trend towards increasingly hosted apps.


Some of Opera’s early features have been adopted by all major browsers. There are accounts of everything from tabbed browsing to private browsing having originated with Opera. So although Opera has a very small user base outside of the mobile world, a successful Opera Unite service could be rapidly mimicked in other browsers. Will Opera Unite usher in a new era where every computer acts as its own server? Has the democratization of the cloud finally arrived?

I wouldn’t bet on this for a couple of reasons. In the enterprise, security concerns will prevent widespread adoption (more on security in a minute). And in the home, there are some basic performance issues that make me wonder if this will really fly.

Let’s start with the home. The most obvious problem is that for a server to work it needs to be powered on. Many people turn their computers off to save power and to be environmentally friendly. Strike one against Opera Unite.

Strike two is that many ISPs hit users hard on upload speeds. Anyone who has tried to use an online back-up service knows that there is a huge difference between uploading a gigabyte of data and downloading it. When I tried to check out Opera Unite’s demo page it was painfully slow. That might have to do with the sheer number of visitors the page is getting today, but it doesn’t bode well for future performance.


Which brings us to security. Opera Unite is by no means the first Web 2.0 service to expose the computer in ways our Web 1.0 ancestors would have found difficult to fathom. Browsers like the Mozilla-powered Flock and others already have their fingers deep into a user’s credentials.

Let’s dive into a few specifics of how Opera handles security. On the interface level, the awkwardly long URLs users need to type will certainly be an attractive target for spammers and phishers to exploit. Users have the option to password-protect pages, but since the password is stored in the URL this offers very little security on shared computers. The under-the-hood security isn’t any better: it doesn’t seem like any of the traffic between clients is encrypted, which is to be expected, since managing the certificates would be a mess.
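The password-in-the-URL problem is easy to see. The URL below is hypothetical, but anything carried in a query string ends up in browser history, proxy logs, and Referer headers:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical Opera Unite-style shared page, "protected" by a password
url = "http://work-pc.someuser.example.com/fridge/?password=secret123"

# No attack needed - the credential is sitting in plain sight in the URL
query = parse_qs(urlparse(url).query)
print(query["password"][0])  # secret123
```

Anyone with access to the machine's history, or to any log the URL passes through, gets the "secret" for free.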

For home users none of this is really a big deal; most users running a casual web server out of their living room are either not very security conscious or don’t have very sensitive data sitting on their machine. But it is another strike against the notion of using Opera Unite in the enterprise.

In the interest of fairness, there is one nice security feature in the current experimental build of Opera 10.0 – Opera Unite is disabled by default. This is a good way to protect users who have no desire to run a web server on their computer.


Companies often worry about the cloud because they feel they lose control of data and the surrounding security measures. It's tough to lose control, but in reality most cloud providers are much better at providing security than the average enterprise. So for a small to medium sized business, their data is probably safer hosted in the cloud than hosted on site.

This calculus is even more true of home users. Most home users are incapable of managing their desktops, let alone a server. Opera users – like Firefox users – are probably more tech-savvy as a group than IE users. But even so why go through all the trouble of configuring and securing your environment locally when you could just use a hosted service like MobileMe, Facebook, Flickr, or any of the hundreds of other services that exist in every flavor and price point? When I watch a video on YouTube, I am reasonably confident that it does not come with malware. When I watch a video that is being served up from my buddy’s desktop, that level of confidence drops pretty dramatically.

Of course the big disadvantage of hosted services is ownership and control (a good example being the recent failed attempt by Facebook to drastically change its Terms of Service). But my feeling is that, at the end of the day, most end users don’t really care that much about control. They would rather have the advantages that come with the cloud – including automatic backups – than worry about the intellectual property issues surrounding their vacation photos.

That being said, Opera Unite could become very successful for casual home use, particularly as a means of regulating P2P data exchange. And if the developer community steps up to the plate, there may be some very handy apps supporting this. The lack of security features is not an issue for the casual user and is not a significant factor affecting adoption.

But even if Opera Unite scores with home users it does not seem likely to be a candidate for serious enterprise applications. The enterprise client is getting thinner and thinner and I can’t really see Opera Unite stopping that train.

Thursday, June 11, 2009

China Votes For Endpoint Security

China is setting up a green dam to help prop up the great firewall.

The green what? The full name is even weirder - The Green Dam Youth Escort. It's a piece of filtering software that the Chinese government is requiring be shipped with all computers as part of its "anti-vulgarity" campaign.

China has been keeping a tight grip on the Internet for a long time. But the Green Dam project marks the country's first widespread attempt to control activity from the actual computer itself. No longer content to just monitor and block network traffic, China is making a surprising declaration of faith in the importance of endpoint security. Cloud advocates tell us that the endpoint matters less and less, but the world's most populous country seems to be moving in a different direction.

Before we draw too many conclusions, it's worth noting that so far the system is not working too well. The Green Dam suffers from the same problems inherent in any massive effort to control the endpoint (kind of a mega-FISMA with actual client-side software). For one, there are obvious security and performance concerns involved in such a mammoth roll-out. Global Voices has an interesting translation of Chinese-language posts about the problems ordinary Chinese are experiencing with the software.

So it's easy to dismiss the Green Dam Youth Escort as a futile project with a really dumb name. But people who mocked China's early efforts to control the Internet as doomed to failure have largely been proven wrong. Despite the apparent ease of circumventing the "Great Firewall", China has been largely successful at controlling and monitoring great portions of its Internet traffic.

The Chinese government has not released many details on the planned scope and implementation of the Green Dam system. There are certainly early indications that the planned Green Dam filtering extends beyond adult material to include political terms as well. It is also unclear whether there will one day be a NAC-type system in which only devices with an approved Green Dam agent are able to connect to the Internet. So far the only government requirement seems to be that the Green Dam software simply ship with the product. But it's hard to imagine that the government intends to rely on voluntary compliance after mandating the software distribution.

The Politburo Votes for Securing the Endpoint

China clearly sees value in controlling the endpoint. While other countries have relied strictly on network methods to control illegal content (such as Australia's recent flirtation with net filtering technology), the Green Dam project marks, to my knowledge, the first - and undoubtedly the largest - attempt to control content from the actual endpoint.

This is not a political blog so I don't want to get into the very dicey ethical question of whether hardware manufacturers should follow network and service providers in acquiescing to the Chinese government's demands (some justify their compliance by arguing that even a censored Internet ultimately promotes democracy). But aside from the significant human rights issues involved, it is interesting to consider the IT security lessons that China's move holds for enterprises trying to control their own content and traffic, albeit for much different reasons.

More than anything the Chinese move is yet another indication that perimeters and client machines still matter. China's previous model was more or less endpoint agnostic - the idea being that if you were surfing stuff the government didn't approve of, this would be detected or blocked at the ISP level. China now seems to be less confident in that approach. The powers-that-be seem to have decided that even with 30,000 censors the best place to nip things in the bud is right at the user's machine.

Why the Endpoint Still Matters

It's not only in China and the world of Green Dam Youth Escorts that the endpoint still matters. In many enterprises, there are vigorous debates on the role of personal devices and the degree of access they may be granted to corporate networks.

In a theoretical de-perimeterized world (as advocated by the Jericho Forum and others), the actual endpoint device is basically irrelevant. From a theoretical perspective this may make sense, but reality is much different. The endpoint still matters a lot, for a few major reasons:

1. The Law. What physical machine you work on is often the legal distinction between stuff you own vs. stuff you don't own.

2. Standards and contracts. For better or worse, PCI clearly places much more emphasis on network and client-side security than on other forms of security. A lot of RFP and contractual language has the same bias.

3. Forensics and Discovery. A company has a much easier time getting access to its own computers than to an employee's personal machine.

4. Management and Auditing. It is still way easier to manage a company device than do some clean-access check on a personal device. Especially for small and medium enterprises, NAC-type projects are often far too costly and complex.

Technical solutions will come up to address issues (3) and (4) above. But it is (1) and (2) - the legal and contractual world - that will keep the endpoint critical in the near term. While the future may be deperimeterized, getting there will require a sea change in the clearly network-centric language of most contracts, regulations, and standards today. It might happen fast when it does happen. But it hasn't happened quite yet.

In the meantime, the government of the world's largest IT user base is betting that the endpoint can deliver for its censorship and monitoring needs.

Saturday, June 6, 2009

The Security of Bing

This week Bing made its big debut. If you haven't yet heard of Bing, Microsoft has a 100 million dollar advertising campaign that should correct your ignorance pretty soon.

Bing hasn't yet hit prime time. When I Googled the word Bing (ahhh...the irony) the first page of results still contained links to a surf board shop, a columnist for Fortune, and various complaints from the latter about brand name infringement. That's going to change real fast (like by the time you are reading this), but it goes to show that Bing has a long way to go before it even begins to chip away at Google's dominance of search.

Bing has generally received good reviews - it seems to basically work (no repeat of the initial Vista disaster here) and is a refreshing alternative to Google. But surprisingly there hasn't been much talk about the security of Bing (except for a storm of criticism about the display of inappropriate material in search snippets).

Or maybe not so surprisingly. Most people think of cybercrime in terms of hackers and ID thieves. Search engines just search for stuff that's already out there. But the truth is that the vast majority of criminal and quasi-criminal activity on the web involves gaming the search engine system in one way or another. A lot of crime that just happens to use a computer gets incorrectly labelled cybercrime. But search engine crime (if I may coin a new phrase) is truly cybercrime. 

How does search engine crime work? Take the term "hotels New York". Every week thousands of people book millions of dollars of hotel rooms by Googling those three words. Since people only pay attention to the very first search engine results (and almost never click beyond the first page of search results), moving just one spot up the list of results can result in a huge increase in revenue. The difference between being number 8 and number 7 in the Google results for any hotel related word has a very real and measurable price attached to it.

Enter the cybercriminals. Botnets are harnessed to create spam links to sites to raise their profile. Browsers are hijacked to redirect unsuspecting users. Content is culled from one site to another to produce hits in unrelated searches. Spam links are created dynamically on the basis of user input. There are literally thousands of ways of gaming the search engine system. Some are outright criminal, while others are just very shysterish SEO (search engine optimization) techniques.

Of course not all search terms have equal cachet. Take the term CISO (Chief Information Security Officer). A reader who stumbled on my blog by googling the term "CISO" pointed out to me that my recent post on whether companies really need a CISO is now the 5th highest search result for the term on Google. Now while this blog has a healthy readership and has built up some decent Google juice over time, what this high ranking really shows is that there aren't all that many people talking about CISOs or competing for the term in search engines. Which kind of reinforces the point the post was trying to make in the first place...

But I digress. Let's get back to Bing security. My guess is that the majority of all malware, viruses, etc. is aimed at gaming Google search results. While most of these techniques also work to some extent against Yahoo and other search engines, the fraudsters are aiming to maximize Google rankings above all else.

How well has Google done in making its search engine fraud and spam proof? Google has done pretty well with controlling Gmail spam (despite recent glitches). But today Google search results are a mess - malware-related sites are frequently misidentified and many common search terms point to malware. Just a bit of Googling - I mean Binging around - makes it seem like Bing suffers a bit less from this problem. One reason is that Bing appears to return fewer search results than an equivalent Google search. The other, more important factor is that cybercriminals have not yet honed their skills at gaming Bing.

Early indications are that Bing uses much the same malware filtering as other search engines. The web analytics market is already scrambling to understand the implications of Bing. Features like the (annoyingly slow) rollover function undoubtedly have serious implications in terms of both security and user behaviour. How these will be exploited by cybercriminals will have a major influence on the face of cybercrime in the coming months.

Wednesday, June 3, 2009

The Encryption Myth

Rich Mogull had an interesting post yesterday about some trends he has been observing in enterprise security. Rich is a guy with his ear to the ground when it comes to what security processes and products companies are actually implementing. Reading his assessment on the state of encryption got me thinking about why everybody is talking so much about database encryption and why so few people are actually doing it.

There are three encryption trends in particular that caught my attention - 

1. Laptop encryption is being commonly deployed
2. File and folder encryption is not in wide use
3. Database encryption is hard and not in wide use

Let's start with (1). Laptop encryption is the most no-brainer security mechanism out there for any organization dealing with personally identifiable information (PII). 

Do the math. Your organization will lose laptops. Some will contain personally identifiable information. And these days, whether you like it or not, most thefts or losses will be high profile enough that your legal department catches wind of them. At that point, if the laptops are encrypted, your IT department fills out a new PO and everyone gets back to work. If they're not encrypted, all of a sudden you have a potential breach notification situation on your hands. Which isn't the catastrophe that security vendors make it out to be (and nothing like the $200-per-lost-record myth you hear bandied around) but unpleasant nonetheless. And certainly a situation that is worth shelling out a few bucks per laptop to prevent.

What about (2) and (3) then? Why is almost no one encrypting their file folders and databases server side?

The truth is that you just don't mitigate that much risk by encrypting files at rest in a reasonably secure environment. Of course if a random account or service is compromised on a server, having those database files encrypted would sure come in handy. But for your database or file folder encryption to actually save you from anything, some other control needs to fail.  

Contrast this with cleartext data traveling through a public network, where no further control needs to fail for your data to be compromised. Or in other words, depending on your definitions you could say that unencrypted Internet traffic is already compromised. This is why https is ubiquitous, while hardly anyone is using database encryption. 

This is not to say that database encryption or file folder encryption is useless - far from it. For the compliance and audit kudos alone there is much to be said for implementing these solutions. But if you only have one play to make, you are much much better off leveraging your organizational capital to actually make sure that your servers are locked down, your database permissions are not too liberal, and that your developers are coding securely. 

In good times it is easier to throw money (or a few extra DBAs) at a problem than to effect organizational change. There are organizations out there that have gone through painful database encryption implementations - with all the performance hit and complexity that comes with it - but have thousands of undermanaged accounts and weak configuration settings because the business owners of those processes are beyond the reach of the CISO. But during a recession, folks are only spending money when there is no alternative internal means of achieving the same goal.

There is also another simple reason for the failure of database encryption to hit prime time while laptop encryption is becoming more and more common. Database encryption - while kinda sorta required/recommended in PCI - doesn't naturally flow from any particular legal requirement. As Rich points out and as supported by the results of the OWASP Security Spending Benchmarks Report, compliance is the number one driver of security spending. Without a clearer regulatory paymaster, database encryption isn't heading anywhere in a hurry.

Monday, June 1, 2009

Locking Down the iPhone

The Center for Internet Security has just published a report on secure configurations for the iPhone.

iPhones are a thorn in the side of many security administrators. While locking down Blackberries and other devices seems natural, the whole point of the iPhone is to be fun and easy and free. The CIS report gives IT administrators 20-odd tips on how to lock down the devices.

(You need to provide your personal information to get the report here; just scroll down to the section on mobile devices. Downloading the report from the CIS was an interesting experience. It's not often you are required to agree with a philosophical statement to download a whitepaper: "By using the Products and/or the Recommendations, I and/or my organization agree and acknowledge that: No network, system, device, hardware, software or component can be made fully secure.")

Is the iPhone ready for prime time in the enterprise? Although some large enterprises are already using it, the vast majority of iPhones are still in personal - not enterprise - use. But there are signs that Apple is trying to piggyback on the success that Macs have had in making inroads in the enterprise beyond their traditional graphics and artistic constituency.

The CIS report is a very good resource - in conjunction with the Apple Enterprise Deployment Guide - for any enterprise thinking of managing or supporting the iPhone. For each security recommendation, it lists whether the setting can be remotely enforced on the device.

The report splits the various security recommendations into two levels - level I stuff that your users will actually let you do, and level II stuff that you will only be able to get away with in a security critical environment. 

Some of the level I stuff is slightly annoying to the user but otherwise pretty innocuous - require passcodes, enforce complexity, auto-timeouts, auto-updates and that kind of thing. But some of the level II stuff - disabling wifi, disabling JavaScript, and other draconian measures - probably isn't going to fly with the average user. There's a lot of basic stuff you can't do on an iPhone without wifi. Why get iPhones for the enterprise if you can't do any of the fun stuff?

Although all the recommendations are useful for both company and (paranoid) home use, there is one structural vulnerability in mobile platforms that they do not - and by construction cannot - address. Many (I am tempted to say most) web attacks on a client/browser somehow involve the abuse of a logged in session or the data that a logged in session left behind. This isn't just XSS (Cross Site Scripting) and XSRF (Cross Site Request Forgery) type stuff, but the abuse of any stored credentials or data on the device.
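The session-abuse point can be made concrete. Here is a minimal sketch - with a fake in-memory "browser" and made-up site names, no real requests - of why a live logged-in session is dangerous: the browser attaches stored cookies to any request aimed at that site, no matter which page triggered the request.

```python
# Illustrative sketch only: a toy model of how browsers attach cookies.
# Site names and the cookie value are made up for the example.

cookie_jar = {"bank.example": {"session": "abc123"}}  # a live login

def browser_request(target_site: str, triggered_by: str) -> dict:
    """Build a request the way a browser does: cookies are chosen by
    the *target* site, not by the page that triggered the request."""
    return {
        "site": target_site,
        "cookies": cookie_jar.get(target_site, {}),
        "referer": triggered_by,
    }

# A hidden request fired from an unrelated malicious page still rides
# on the bank.example session cookie -- the essence of CSRF.
forged = browser_request("bank.example", triggered_by="evil.example")
assert forged["cookies"] == {"session": "abc123"}
```

On a big screen a user has at least some chance of noticing which sessions are live; on a small mobile screen, none of this is visible at all.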

Now your run-of-the-mill brick PC or Mac is of course theoretically just as vulnerable to this. But users intuitively have a better idea of what sessions are active, what services they are logged into, and what their browser is up to when they are staring at a big screen. In the case of the iPhone, the user interface limits them even further from seeing what is really going on.

On mobile platforms, there is an even greater danger to all these live credentials and logged-in sessions than fancy XSS-type attacks. It is the gazillions of sites that actually have the gall, the chutzpah - the sheer audacity - to use your login credentials to other sites to populate your friend requests or what have you. When you are simultaneously logged into 8 social networks, your email, a bunch of bookmarking sites, and who-knows what else, there is a good chance that someone out there is trying to cull your contacts or something out of one of the other applications.

This practice isn't even considered shady any more and may even be permitted by the terms of use. I remember being asked by LinkedIn (which I consider one of the more privacy friendly social networking sites) if they could have my email password so they could invite all my contacts to link in with me. The pop-up box came up so naturally that many users probably just hand over their password by instinct.

Let's circle back to the iPhone. The iPhone is fundamentally, at this point, still an end user device. One of the main appeals of the iPhone is that it doesn't feel like work. If Apple were to start tinkering with its default settings and configurations to satisfy corporate IT security policies, this would negatively affect the consumer appeal of the product.

It's also unnecessary. 90% of enterprises require reasonable, but not draconian, security (OK, I made that number up, but there is no doubt that the large majority of businesses, even those dealing with sensitive data such as health care or financial data, do not require fortress-like data environments). So for these businesses - where data and networks need to be protected but where collaboration, creativity, and efficiency are just as important - there is no need to ban devices like the iPhone as long as sensitive data stays off the device. It is easier to be liberal with devices and strict with the data that gets on them than the other way around.

Since most people are not going to be editing spreadsheets on their iPhone, the main entry point for sensitive data is the corporate mail network. A good example of the threat mobile devices pose is the emailing of sensitive information, and in particular large attachments with personal data whose loss would trigger breach notifications. Many enterprises still do not have corporate policies prohibiting the emailing of such attachments.

Emailing sensitive personally identifiable information is a bad idea for many reasons. It's just too easy to get into a situation that results in a data breach, for example by someone emailing a file to the wrong address. But mobile devices (not just the iPhone) are yet another reason why this is a bad - really bad - idea. Many webmail programs like Gmail download emails onto end devices without being prompted, and often those devices are not encrypted at the disc level. So the unencrypted data is sitting on these devices. But of course these devices disappear, get lost, or get stolen all the time. Depending on the data, the circumstances, and the state, it is certainly possible that this would trigger a breach notification requirement.

For larger enterprises some DLP type solutions may help mitigate this risk, although these are very involved implementations. Although I don't often mention particular vendors on this blog, there is one managed file transfer solution I have been a repeat customer of that addresses this issue well. Accellion is largely marketed towards the large file transfer market, but it also provides a simple and auditable way to offload sensitive data transfer from email networks. There are also numerous other managed file transfer solutions that are active in this space. At the end of the day it's easier to manage file transfer than to manage employee mobile phones.

Friday, May 29, 2009

Maine Gives 7 Days for Breach Notification

Maine is tightening the screws on its data breach law. Breaches will need to be reported within 7 business days unless the authorities request otherwise. The bill, signed into law by the governor last week, goes into effect in 90 days.

Maine is pretty much going it alone by taking this step. The vast majority of the 44-odd states with data breach notification laws let companies decide what timing makes sense. Here's what most of them have to say -

The disclosure must be made without unreasonable delay, consistent with the legitimate needs of law enforcement… or consistent with any measures necessary to determine the scope of the breach and restore the reasonable integrity of the data system.

As far as I can tell, the only other state that defines a notification deadline is Florida (if anyone knows of other states, please let me know). It gives 45 days after the discovery of the incident and - unlike Maine - has stiff financial penalties for delayed notification.

The Maine and Florida laws might end up getting swallowed up if a pending federal data breach notification law is passed. It would pre-empt these state laws and give no deadline for notification. The proposed federal law largely mirrors the prevailing state law language of avoiding "unreasonable delay". As this legislation is still under consideration and likely to change, now is a good time for policy makers at both the state and federal level to ponder whether breach notification laws should give hard deadlines.

Data Breaches and Reasonability

So who gets to decide what is a reasonable delay when notifying?

Getting notified in time obviously matters to consumers. The impact of identity theft is limited if consumers get the heads-up in time to take out a security freeze on their credit reports. (Security freezes are available in most states and make access to credit reports much more difficult.)

Deadlines aren't the only place that data breach laws refer to reasonability. Many states only require notification if there is a “reasonable” likelihood of identity theft resulting from the breach. I have written before about the way this has the ironic effect of punishing honest businesses with strong IT management. In borderline breach cases they are much more likely to notify than to make a questionable determination that there is no "reasonable" risk of identity theft.

Companies that don't notify never really get called out on it. The large majority of states still do not have a requirement for breaches that do not trigger a notification to be reported to the Attorney General or another state entity. Which of course makes it much easier to sweep repeated data breaches under the proverbial rug.

A judge recently ruled in favor of Hannaford in the lawsuit that data breach victims had brought against the supermarket chain in Maine. The judge cited the lack of a strict notification deadline, which may have prompted legislators to act. However, the judge also cited the lack of a reasonable risk of identity theft in declining to award damages.

Identity theft is such a nebulous concept that it is very hard to measure when a reasonable risk exists or not. This is part of the reason that some state laws presume a reasonable risk to exist by virtue of the fact that certain personally identifiable information (PII) has been leaked. The one exemption that all states grant is for encrypted data, which has spawned an entire industry of full disc encryption products. But interestingly, the encryption the law talks about is very different than the encryption the vendors talk about.

Encryption and the Get-Out-Of-Notifying-Free Card

Security folks think of encryption in terms of DES, AES, RSA and other algorithms that use secret keys - symmetric or public/private - to encrypt data. Various algorithms have come in and out of fashion due to their vulnerability to mathematical attacks like differential cryptanalysis or real-world attacks like differential power analysis.

Now let’s gently exit the world of cryptographers and enter the legal world. Most state laws don't define encryption at all, but when they do it looks something like this:

"Encrypted" means transformation of data through the use of an algorithmic process into a form in which there is a low probability of assigning meaning without use of a confidential process or key or securing the information by another method that renders the data elements unreadable or unusable.

But where’s the key? The minimum 128 bits? The ban on single DES? It turns out that for legal purposes, encryption just requires some form of obfuscation. It doesn’t even need to involve a key. It doesn't even need to involve many CPU cycles. You just need to make sure that the way you obfuscate and then de-obfuscate is confidential.
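To see how low that bar is, here is a deliberately weak sketch that arguably satisfies the statutory wording above - an algorithmic process plus a "confidential process or key" that renders the data unreadable - yet that no cryptographer would call secure. (The key and record are made up for illustration.)

```python
# Repeating-key XOR: reversible with the same "confidential" key.
# This is obfuscation, not real cryptography -- trivially breakable.

def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    """Apply repeating-key XOR; applying it twice restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

record = b"SSN=123-45-6789"   # made-up personal data
key = b"secret"               # the "confidential process or key"

scrambled = xor_obfuscate(record, key)
assert scrambled != record                        # unreadable as-is
assert xor_obfuscate(scrambled, key) == record    # reversible with key
```

The statute is satisfied by the confidentiality of the process, not by any measure of cryptographic strength.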

So who's right? Should a lost USB stick with personal data encrypted by a simple, vulnerable encryption algorithm (say single DES) require notification? The purist/cryptographer answer would be yes. Does it require notification from a legal perspective? A lawyer would probably say no [although I am by no means a lawyer].

This time I think the lawyers are right. The risk of identity theft from personal information on lost media is already very small; after all, the person who finds a lost laptop, USB stick, or mobile phone is very unlikely to be interested in the data. Now suppose that data is encrypted in some light but ultimately breakable way. The likelihood of actual identity theft drops down to almost nil. What are the chances that the guy who found your iPhone on the subway is both interested in your data and capable of decrypting DES?
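Some back-of-the-envelope arithmetic supports the point. Single DES has a 56-bit effective key. Assuming (hypothetically) a dedicated rig that tests a billion keys per second - far beyond anything a casual finder would own - exhausting the keyspace still takes years:

```python
# Rough brute-force estimate for single DES. The keys-per-second
# figure is an assumption for illustration, not a benchmark.

DES_KEYSPACE = 2 ** 56            # 56-bit effective key
KEYS_PER_SECOND = 1_000_000_000   # assumed dedicated rig

worst_case_seconds = DES_KEYSPACE / KEYS_PER_SECOND
worst_case_years = worst_case_seconds / (3600 * 24 * 365)
print(f"~{worst_case_years:.1f} years worst case on one rig")
```

Specialized hardware has cracked DES much faster, which is why cryptographers reject it. But the subway finder of your iPhone has neither the hardware nor the motivation.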

That's not to say of course that there isn't data that merits industrial strength encryption, especially when placed on a portable device. But for the purposes of breach notification in the case of loss, sometimes we do really need to keep in mind what is reasonable.

Tuesday, May 26, 2009

Botnets and Security Hype

A couple of weeks ago a team at the University of California Santa Barbara managed to take over a botnet for ten days. Their fascinating and well-written analysis is well worth reading for an objective and first-hand look at how a botnet really operates.

So how did they do it? A botnet is just the overly sci-fi name for a bunch of computers that are controlled by a central command-and-control structure. The number one challenge for botnet operators is hiding their command-and-control servers to avoid being taken down (the chances of actually being arrested are pretty close to nil). The Torpig botnet uses an increasingly popular technique where client machines try dialling into a set of pre-determined domain names and accept the first server to respond as the botmaster.

This is where the UCSB researchers moved in - they took over the Torpig botnet by sneakily claiming the domain name that was next in line to be the command-and-control server. The botmasters behind Torpig had not claimed all the domain names that their victims were meant to dial into, either to save money or because they didn't see this coming. In any case, the UCSB team found itself in control of a botnet with hundreds of thousands of hosts.
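The domain-flux scheme described above can be sketched in a few lines. This is not Torpig's actual algorithm - just an illustrative, date-seeded generator showing why the trick works: every bot computes the same candidate list, so whoever registers an upcoming domain first (botmaster or researcher) gets to be the command-and-control server.

```python
# Toy domain generation algorithm (DGA). The hashing scheme and ".com"
# suffix are assumptions for illustration, not Torpig's real scheme.
import hashlib
from datetime import date

def candidate_domains(day: date, count: int = 3) -> list:
    """Deterministic, date-seeded domain list shared by every bot."""
    domains = []
    for i in range(count):
        seed = "{}-{}".format(day.isoformat(), i).encode()
        label = hashlib.sha256(seed).hexdigest()[:12]
        domains.append(label + ".com")
    return domains

# Every bot derives the same list for a given day, so a defender who
# precomputes and registers tomorrow's domains can hijack the botnet.
today = candidate_domains(date(2009, 5, 1))
assert today == candidate_domains(date(2009, 5, 1))  # deterministic
```

The flip side, of course, is that the real botmaster only needs to win the registration race once per cycle to regain control.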

Don't try this at home. The researchers cooperated with law enforcement and other entities to avoid legal problems. This appears to have helped them steer clear of the hot water the BBC found itself in a few weeks ago for actually purchasing a botnet from criminals.

Botnets and the Hype Cycle

You've probably heard botnets talked about on the evening news. Botnets are a particularly successfully marketed part of the FUD-cycle of the information security industry.

But how bad is the botnet problem in reality? Not as bad as previously thought, according to the UCSB team. Previous studies have counted IP addresses rather than actual hosts when estimating the size of a botnet. Getting from IP addresses to actual machines is tough - DHCP leads to overcounting, NAT to undercounting, and there are many other factors at play. In the botnet the UCSB team analyzed, they counted 182,900 hosts versus 1,247,642 IP addresses, and there is evidence that IP addresses generally overcount actual machines.
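A tiny example shows why IP counts inflate botnet size. With DHCP, a single infected host can show up under many addresses over a ten-day observation window; counting by a persistent per-bot identifier (which Torpig bots happened to send) gives a much smaller number. The bot IDs and addresses below are made up:

```python
# (bot_id, ip_address) sightings over an observation window.
# Hypothetical data: one host seen under three DHCP leases.
sightings = [
    ("bot-A", "10.0.0.5"),
    ("bot-A", "10.0.0.9"),    # same host, new DHCP lease
    ("bot-A", "10.0.0.12"),   # same host again
    ("bot-B", "10.0.0.7"),
]

unique_ips = {ip for _, ip in sightings}
unique_hosts = {bot for bot, _ in sightings}

print(len(unique_ips), "IP addresses")    # 4
print(len(unique_hosts), "actual hosts")  # 2
```

Scale that 2x inflation up (the UCSB data suggests closer to 7x for Torpig) and "million-node botnet" headlines start to look shaky.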

But in many security reports IP addresses and computers are treated synonymously - the latest McAfee report actually contains the sentence "In this quarter we detected nearly twelve million new IP addresses, computers under the control of spammers and others". Arrghhhh...

Coverage of the UCSB work in the mainstream media did not mention the overcounting. "Botnets smaller problem than originally thought" doesn't make much of a headline...

So I'm part of a botnet, so what?

Good question. Theoretically, a botmaster could read your email and abuse your other accounts to their heart's desire. In fact, the UCSB researchers performed a keyword analysis of their victims' emails (not sure how they got the legal clearance to do that...). But they are probably the only ones who bothered reading those emails. Botmasters want control of computers to make money and not to read about your date last Saturday. When someone breaks into your house they steal your valuables, not your diary.

Most online accounts and credit cards do not hold their users liable for fraudulent charges. In this way botnets operate a lot like insurance fraud or old-school credit card fraud. They are an annoyance that creates an indirect cost for everyone, but a cost that is sufficiently low that people are willing to bear it. We live in a society where people want to be able to use a 16 digit number they have given out hundreds of times to pay for stuff. If that means that everything costs 1% more to deal with fraud, so be it.

Brian Krebs (who should be on your reading list if he isn't already) posted a piece today about the dangers of allowing your PC to be compromised. Reading through his list of spam, click-through fraud, DoS attacks, and the like, I couldn't get past the feeling: dangerous for society - yes; dangerous for the user - not really. As for the more nefarious password-stealing stuff, there is little to no evidence so far that botmasters are using stolen credentials for anything beyond impersonal, automated fraud. This isn't great for society, but it isn't something the average user is going to care about.

Seems like just the kind of situation that calls for Uncle Sam (or Uncle Barroso)...

Laying Down the Law

The UCSB authors fault registrars for not being sufficiently responsive to requests for taking down botnets. While ISP responsibility for content and traffic is a tricky political issue, the content industry has been very successful in forcing ISP accountability for peer-to-peer traffic on their networks. Of course, the content industry has a bunch of well-paid folks in Washington, Brussels, and other corridors of power pushing its agenda. Botnets do not directly affect an entire industry's bottom line, so there is no lobbying effort to move responsibility from the client to the registrars and ISPs.

This could change significantly if the national security angle of botnets takes flight. The apparent role of botnets in Internet disruptions during the Russia-Georgia conflict last year, allegations of Chinese cyber-espionage, and frequent stories in the press about the vulnerability of critical infrastructure have attracted the attention of US policy makers. There are even signs that countries like China - long considered a safe haven for hackers - are taking regulatory steps to address botnets.

Regulatory measures will not completely address the botnet issue, but they would potentially change the risk/time-invested/reward ratio significantly. Botnets take a high degree of technical expertise to set up and yield only limited returns. A tighter regulatory regime could significantly reduce the incentive for botmasters.

User Education

You often hear about user education in botnet/information security stories, which all too often is vendor-ese for user indoctrination to buy security products. But the UCSB researchers - who have done a great piece of research and aren't selling anything - also focus on user education as a solution to the botnet issue. Their statement that the "malware problem is fundamentally a cultural problem" places the onus for preventing complex and sophisticated criminal activity on the people least capable of preventing it.

It would be nice if all users were capable of being system administrators. For enterprise users, it is fair to expect a minimal level of technical skill. But the truth is that the technical measures a home user needs to take to secure his or her computer are simply beyond the grasp of a significant portion of Internet users. The stuff you can educate home users about - choosing better passwords, not recycling passwords, etc. - is not going to make a real dent in the botnet problem.