Tuesday, June 30, 2009

OWASP Security Spending Benchmarks Project Report for Q2 Published

Today the OWASP Security Spending Benchmarks Project Report for Q2 was published.

This project measures security spending in the development process. This quarter we focused on cloud computing. We were trying to measure how much use companies are making of cloud computing, how this affects spending, and how they are dealing with related legal and business issues.

We are lucky to have some great security folks volunteering their time on this OWASP project - Jeremiah Grossman, Rich Mogull, Dan Cornell, Bob West, and others have all provided valuable feedback and support. We were also very fortunate to have organizations like the Open Group and the Computer Security Institute (CSI) join our project over the last quarter. They join organizations such as eema, Teletrust and companies such as nCircle, Cenzic, Fortify and others that have been actively contributing to this effort. A full list of partners can be found on the project website.

Cloud computing gets some people's eyes rolling because it sounds like a marketing gimmick or meaningless term. But whatever you want to call it, infrastructure, platforms, and software are resources that are increasingly being outsourced or externally hosted. This has enormous security implications because it undermines the traditional notions of ownership and management that security has been based on in the past.

Here are the key findings in the OWASP Security Spending Benchmarks Q2 report:


1. Software-as-a-Service is in much greater use than Infrastructure-as-a-Service or Platform-as-a-Service. Over half of respondents make moderate or significant use of SaaS. Less than a quarter of all respondents make any use of either IaaS or PaaS.

2. Security spending does not change significantly as a result of cloud computing. Respondents did not report significant spending changes in the areas of network security, third party security reviews, security personnel, or identity management.

3. Organizations are not doing their homework when it comes to cloud security. When engaging a cloud partner, only half of organizations inquire about common security-related issues, and only a third require documentation of security measures in place.

4. The risk of an undetected data breach is the greatest concern with using cloud computing, closely followed by the risk of a public data breach.

5. Compliance and standards requirements related to cloud computing are not well understood. Respondents report having the greatest understanding of PCI requirements relating to cloud computing and the least understanding of HIPAA cloud requirements.


1) The fact that SaaS is reported as the most prevalent of all cloud models is not surprising at all. Leveraging Platform-as-a-Service requires a level of expertise and sophistication many companies still do not have. And Infrastructure-as-a-Service has been dogged by performance issues and has yet to really supply an appropriate ROI model.

2) It is more perplexing that organizations do not report significant spending changes as a result of cloud computing. On the face of it, one would expect that cloud computing would result in lower expenses in a number of security areas, particularly network security. The fact that this has yet to occur may mean that organizations have been slow to adapt security budgets as a result of their cloud activities. Over time, both budgets and the role of security management will be increasingly focused on managing and auditing cloud relationships. Which brings us to number 3...

3) It is also somewhat surprising that organizations are not doing their homework when it comes to cloud computing. The survey found that only a third of organizations ask for the security policies of cloud partners. With all the talk of cloud security dangers, you would expect there to be heightened awareness and that companies would take the time to look into cloud partners' security narratives. That this has not been happening indicates that companies see cloud computing in the same vein as other outsourcing arrangements - the actual under-the-hood operations or security are not that important as long as the issues are contractually addressed. This approach may be more a result of necessity than choice, since for a small company with significant operations in the cloud it is hard to see how they could make any significant assessment of their cloud partner's security posture.

4) Data breaches are and will always remain the main fear factor driving the security industry. While compliance has always been a bit fuzzy (especially when it comes to non-technical regulations, where there is a lot of wiggle room), the same cannot be said of a breach. You have either been breached or you haven't, which probably accounts for the greater concern survey respondents reported. It is interesting however that despite this very high level of concern with data breaches, organizations are still doing very little to vet cloud partners. Most organizations seem to have come to the conclusion that although there are many data security dangers related to cloud computing, there is not much they can do to mitigate this risk.

5) Compliance is the issue that is really raining on the entire cloud computing parade. While PCI has fairly detailed supporting documentation to guide companies, other standards and regulations are much more vague so it is easy to see why people are confused. Regulators are still struggling to understand Web 1.0, so I do not expect we will be seeing much concrete guidance in this area in the near future.


I gave a whole bunch of caveats the last time we published our survey results about why web surveys need to be taken with a healthy grain of salt. This still holds true for our cloud computing survey, and probably even more so because no one seems to agree on what cloud computing is. But even so there are some important take-aways from the data we collected.

The most significant warning sign in the survey results in my opinion is that companies are moving to the cloud without really inquiring about the security policies and posture of their cloud partners. And when they do ask about these issues, they rarely ask for documentation. This does not bode well for the future security of cloud computing. Although smaller companies rarely have the resources to truly assess the security of their cloud partner, asking for written documentation of security policies at least forces the cloud partner to maintain a security narrative they share with customers. As more customers inquire about security, this security narrative takes on an increasingly strategic role for the cloud partner.

You can read the full report here.

Saturday, June 27, 2009

Nevada Mandates PCI Standard, Part II

Did Nevada really mandate the PCI Standard into law last week?

It sure seems like it when you read Senator Wiener's bill SB 227. I am not a lawyer, but the following sentence seems pretty clear: "If a data collector doing business in this State accepts a payment card in connection with a sale of goods or services, the data collector shall comply with the current version of the Payment Card Industry (PCI) Data Security Standard, as adopted by the PCI Security Standards Council".

For anyone involved in information security management or compliance, this is a really big deal. PCI has just been catapulted from a contractual obligation to a full legal requirement.

No one seems to have seen this one coming. In fact, I am not even sure that the Nevada legislature really saw this coming and they may not have realized the very far reaching implications of this legislation. But more on that in a minute.

Ira Victor, President of the Sierra Nevada chapter of Infragard and Director of Compliance at Data Clone Labs, was kind enough to reach out to me this week after I published my original post on this topic. Ira was intimately involved in the discussions around the new Nevada PCI law and testified before the Nevada Senate Committee on Judiciary in support of the law. Ira has some terrific insight into the history of this bill that can be heard in my interview today with him on this topic.

Here's a quick history of the bill as related to me by Ira. The current law came about to replace NRS 597.970, an earlier bill mandating encryption that apparently left open the door for criminal liability and did not define encryption. To remedy these issues, the new bill is much more specific about encryption requirements and somewhat randomly also requires PCI compliance. In exchange it provides a safe harbor for companies that are PCI compliant.

There is a thick irony here. Businesses that objected to the original bill on the grounds that it was too harsh now have a much much stricter bill on their hands that actually mandates PCI. This is either a very bold and trailblazing move by Nevada, or a last minute oversight because businesses didn't understand the implications. My money is on the latter for a couple of reasons:

1. There is no precedent of any other state legally mandating PCI. Some people think PCI is good and some think it is bad. But either way there is something plain weird about a law mandating a specific contractual agreement between merchants and card issuers.

2. There is no reference to PCI in any of the discussions or testimony before the Nevada Senate Committee on Judiciary. Wouldn't such a major shift in infosec policy at least be discussed by law makers and special interest groups ahead of a vote?

My guess is that the Nevada legislature meant to waive liability for PCI compliant companies, but not to actually mandate PCI. Recent discussions in Massachusetts objected to the mere mention of encryption in that state's security regulation. I can't possibly see how the business community in Nevada would have knowingly agreed to the whole PCI enchilada without putting up a fight. Being forced to do PCI makes mandated encryption look like a walk in the park.

So if this law doesn't make sense, is it going to stick? Ira knows a lot more about the legislative process in Nevada than I do and he insists that there is very little wiggle room to delay this law. But I just don't see the state of Nevada actually enforcing this. How many small businesses can really claim to be PCI compliant? Even the PCI Council itself tacitly acknowledges as much through the publication of their Prioritized Approach.

For more on this topic you can listen to my interview with Ira here.

Saturday, June 20, 2009

Nevada Mandates PCI Standard

Nevada has recently passed a law mandating PCI compliance for companies accepting payment cards that do business in the state. It is scheduled to go into effect on January 1st, 2010.

This makes Nevada the very first state to actually mandate PCI. The prize for toughest-state-data-security-law used to belong to Massachusetts. But Mass has recently been wavering and its technical requirements are almost non-existent compared to PCI.

The Nevada law is no reason to panic and doesn’t really change much for companies dealing with credit card data. Those companies already have a contractual obligation to adhere to PCI. The Nevada law ups the ante by making this an actual legal requirement, but the standard itself remains the same. And as far as actual enforcement goes, the Nevada law says nothing about penalties whereas PCI has the ability to fine non-compliant companies.

The bigger change is for companies that deal with non-credit card personal data. The Nevada law defines nonpublic personal information as a social security number, driver’s license number, or account number in combination with a password. It mandates the use of encryption for the transfer of such data outside of a company's control (this requirement existed in various forms in previous Nevada legislation as well).

One would hope that there aren’t too many companies out there sending account information together with passwords unencrypted. That leaves full Social Security Numbers and the much-less-frequently used driver’s license numbers. (Interestingly, the regulation doesn’t consider the last four digits of the SSN to be personal information. Which is kind of strange when you consider that the last four digits are the most random part of the number. Oh well).

I suspect there are many companies out there with Nevada customers who will have to play some catch-up when it comes to SSNs. Full SSNs are still frequently used as a primary identifier for many web services related to payroll and benefits as well as many services that have nothing to do with taxes.

Most of these services already encrypt data on the interface level – it is the exception rather than the rule today to see a plain old http login page that asks for your SSN. It’s much tougher to know what is going on behind the scenes. But does the Nevada law really require companies to change their back-end data processing?

Because the law only talks about the “secure system” and the area “beyond the logical or physical controls of the data collector”, it is doubtful that the regulation requires encryption of data that is not traveling in cleartext over public networks. Data behind firewalls or behind some form of password protection would not appear to require encryption based on this wording.

One positive potential outcome of the Nevada law is that it may encourage organizations to move away from using SSNs when they don’t have to (a trend that has already been underway for a while, particularly at universities). There is something particularly jarring about being asked to provide your SSN to get cable service. Strict new rules around handling SSNs may be the necessary kick in the pants for SSN-addicted companies to finally overhaul their authentication methods.

One final thought about the Nevada law itself. In what I believe is a first for state laws, it directly references FIPS, NIST, and other “established standards bodies” when discussing allowable encryption methods. Most data breach notification laws give an exemption for encrypted data without giving any meaningful definition of the term. This has allowed companies to avoid notifying of a data breach when the compromised data was somehow obfuscated. This law will make it harder to claim that some light obfuscation or encoding actually constitutes encryption.
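The distinction between encryption and mere encoding is easy to demonstrate. Base64, a common form of "light obfuscation", is reversible by anyone with one function call and no key. A minimal sketch in Python (the SSN value is made up):

```python
import base64

# A hypothetical piece of "personal information" (made-up value).
ssn = "123-45-6789"

# Base64 merely re-encodes the bytes; no secret is involved.
obfuscated = base64.b64encode(ssn.encode()).decode()
print(obfuscated)  # MTIzLTQ1LTY3ODk=

# Anyone holding the obfuscated data can reverse it trivially, which is
# why encoding should not earn the encryption exemption in a breach
# notification law.
recovered = base64.b64decode(obfuscated).decode()
assert recovered == ssn

# Real encryption (e.g., AES, as specified in FIPS 197) depends on a
# secret key: without the key, the ciphertext is not recoverable.
```

This is exactly the kind of "encryption" the Nevada law's reference to FIPS and NIST would seem to rule out.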


Companies that sell encryption products have a field day with laws like this. But - like other data security regulation - you don’t need to buy anything to be in compliance with the Nevada data security law. You just need to make sure that you are not sending sensitive data in cleartext over public networks. This means a bit more messing around with certificates and configurations prior to releases but not much more. And of course you also need to make sure that anywhere you are storing this data at rest is considered part of your “secure system” or has some logical or physical controls in place.


The actual text of Nevada Senate Bill 227 can be found here.

A good overview of the evolution of data security legislation by Andrew Baer can be found here.

UPDATE: My newest post on this topic can be found here. You can also listen to my interview with Ira Victor who testified before the Nevada Senate Committee on Judiciary in support of the bill.

Tuesday, June 16, 2009

Opera Invites You to Join the Cloud

Or at least that's what Opera is claiming with the rollout of its new Opera Unite service. It will allow users to serve up web pages from their own computers.

Why would you want your humble desktop to serve up web content? So far Opera doesn’t have much of an answer to that. The sample apps they offer – a “fridge” to post notes to your friends, a way to share music with your friends – don’t exactly scream revolution or Web 5.0 (as Opera likes to refer to the service).

You might also be wondering what’s so special about Opera Unite; after all there is nothing new about being able to run a web server from your computer. Opera itself has supported BitTorrent for a while. And anyone can stitch together a webserver with Firefox plugins or just enable one without going through a browser.

What Opera Unite does is present this functionality to users in one unified service. People who would never dream of firing up a web server on their own might be tempted to give Opera Unite a whirl. Opera seems to be betting that user-friendly client-side content hosting can buck the trend towards increasingly hosted apps.


Some of Opera’s early features have been adopted by all major browsers. There are accounts of everything from tabbed browsing to private browsing having originated with Opera. So although Opera has a very small user base outside of the mobile world, a successful Opera Unite service could be rapidly mimicked in other browsers. Will Opera Unite usher in a new era where every computer acts as its own server? Has the democratization of the cloud finally arrived?

I wouldn’t bet on this for a couple of reasons. In the enterprise, security concerns will prevent widespread adoption (more on security in a minute). And in the home, there are some basic performance issues that make me wonder if this will really fly.

Let’s start with the home. The most obvious problem is that for a server to work it needs to be powered on. Many people turn their computers off to save power and to be environmentally friendly. Strike one against Opera Unite.

Strike two is that many ISPs hit users hard on upload speeds. Anyone who has tried to use an online back-up service knows that there is a huge difference between uploading a gigabyte of data and downloading it. When I tried to check out Opera Unite’s demo page it was painfully slow. That might have to do with the sheer number of visitors the page is getting today, but it doesn’t bode well for future performance.


Which brings us to security. Opera Unite is by no means the first Web 2.0 service to expose the computer in ways our Web 1.0 ancestors would have found difficult to fathom. Browsers like the Mozilla-powered Flock and others already have their fingers deep into a user’s credentials.

Let’s dive into a few specifics of how Opera handles security. On the interface level, the awkwardly long URLs users need to type will certainly be an attractive target for spammers and phishers to exploit. Users have the option to password protect pages, but since the password is stored in the URL this offers very little security on shared computers. And the under-the-hood security isn't any better; it doesn’t seem like any of the traffic between clients is encrypted, which is to be expected because managing the certificates would be a mess.
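To see why a password embedded in a URL offers so little protection, remember that the full URL, credential included, lands in browser history, proxy logs, and web server logs. A hypothetical sketch (the link format is illustrative, not Opera's actual scheme):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical Unite-style share link with the password in the URL.
shared_link = "http://device.user.example.com/fridge/?password=s3cret"

# Anyone who can read the URL -- from the browser history on a shared
# machine, or from an intermediary's logs -- can read the credential too.
query = parse_qs(urlparse(shared_link).query)
print(query["password"][0])  # s3cret
```

A credential that travels and persists alongside the resource it protects is really just a speed bump.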

For home users none of this is really a big deal; most users running a casual web server out of their living room are either not very security conscious or don’t have highly sensitive data sitting on their machine. But it is another strike against the notion of using Opera Unite in the enterprise.

In the interest of fairness, there is one nice security feature in the current experimental build of Opera 10.0 – Opera Unite is disabled by default. This is a good way to protect users who have no desire to run a web server on their computer.


Companies often worry about the cloud because they feel they lose control of data and the surrounding security measures. It's tough to lose control, but in reality most cloud providers are much better at providing security than the average enterprise. So for a small to medium sized business, their data is probably safer hosted in the cloud than hosted on site.

This calculus is even more true of home users. Most home users are incapable of managing their desktops, let alone a server. Opera users – like Firefox users – are probably more tech-savvy as a group than IE users. But even so why go through all the trouble of configuring and securing your environment locally when you could just use a hosted service like MobileMe, Facebook, Flickr, or any of the hundreds of other services that exist in every flavor and price point? When I watch a video on YouTube, I am reasonably confident that it does not come with malware. When I watch a video that is being served up from my buddy’s desktop, that level of confidence drops pretty dramatically.

Of course the big disadvantage of hosted services is ownership and control (a good example being the recent failed attempt by Facebook to drastically change its Terms of Service). But my feeling is that, at the end of the day, most end users don’t really care that much about control. They would rather have the advantages that come with the cloud – including automatic backups – than worry about the intellectual property issues surrounding their vacation photos.

That being said, Opera Unite could become very successful for casual home use, particularly as a means of regulating P2P data exchange. And if the developer community steps up to the plate there may be some very handy apps supporting this. The lack of security features is not an issue for the casual user and is not a significant factor affecting adoption.

But even if Opera Unite scores with home users it does not seem likely to be a candidate for serious enterprise applications. The enterprise client is getting thinner and thinner and I can’t really see Opera Unite stopping that train.

Thursday, June 11, 2009

China Votes For Endpoint Security

China is setting up a green dam to help prop up the great firewall.

The green what? The full name is even weirder - The Green Dam Youth Escort. It's a piece of filtering software the Chinese government is requiring to be shipped with all computers as part of its "anti-vulgarity" campaign.

China has been keeping a tight grip on the Internet for a long time. But the Green Dam project marks the country's first widespread attempt to control activity from the actual computer itself. No longer content to just monitor and block network traffic, China's maneuver is a surprising declaration of faith in the importance of endpoint security. Cloud advocates tell us that the endpoint matters less and less, but the world's most populous country seems to be moving in a different direction.

Before we draw too many conclusions, it's worth noting that so far the system is not working too well. The Green Dam suffers from the same problems inherent in any massive effort to control the endpoint (kind of a mega FISMA with actual client side software). For one, there are obvious security and performance concerns involved in such a mammoth roll-out. Global Voices has an interesting translation of Chinese language posts about the problems ordinary Chinese are experiencing with the software.

So it's easy to dismiss the Green Dam Youth Escort as a futile project with a really dumb name. But people who mocked China's early efforts to control the Internet as doomed to failure have largely been proven wrong. Despite the apparent ease of circumventing the "Great Firewall", China has been largely successful at controlling and monitoring great portions of its Internet traffic.

The Chinese government has not released many details on the planned scope and implementation of the Green Dam system. There are certainly early indications that the planned Green Dam filtering extends beyond adult material to include political terms as well. It is also unclear whether there will one day be a NAC-type system in which only devices with an approved Green Dam agent are able to connect to the Internet. So far the only government requirement seems to be that the Green Dam software simply ship with the product. But it's hard to imagine that the government intends to rely on voluntary compliance after mandating the software distribution.

The Politburo Votes for Securing the Endpoint

China clearly sees value in controlling the endpoint. While other countries have relied strictly on network methods to control illegal content (such as Australia's recent flirtation with net filtering technology), the Green Dam project marks to my knowledge the first - and undoubtedly the largest - attempt to control content from the actual endpoint.

This is not a political blog so I don't want to get into the very dicey ethical question of whether hardware manufacturers should follow network and service providers in acquiescing to the Chinese government's demands (some justify their compliance by arguing that even a censored Internet ultimately promotes democracy). But aside from the significant human rights issues involved, it is interesting to consider the IT security lessons that China's move holds for enterprises trying to control their own content and traffic, albeit for much different reasons.

More than anything the Chinese move is yet another indication that perimeters and client machines still matter. China's previous model was more or less endpoint agnostic - the idea being that if you were surfing stuff the government didn't approve of, this would be detected or blocked at the ISP level. China now seems to be less confident in that approach. The powers-that-be seem to have decided that even with 30,000 censors the best place to nip things in the bud is right at the user's machine.

Why the Endpoint Still Matters

It's not only in China and the world of Green Dam Youth Escorts that the endpoint still matters. In many enterprises, there are vigorous debates on the role of personal devices and the degree of access they may be granted to corporate networks.

In a theoretical de-perimeterized world (as advocated by the Jericho Forum and others), the actual endpoint device is basically irrelevant. From a theoretical perspective this may make sense, but reality is much different. The endpoint still matters a lot for a few major reasons:

1. The Law. What physical machine you work on is often the legal distinction between stuff you own vs. stuff you don't own.

2. Standards and contracts. For better or worse, PCI clearly places much more emphasis on network and client side security versus other forms of security. A lot of RFP and contractual language has the same bias.

3. Forensics and Discovery. A company has a much easier time getting access to its own computers than access to an employee's personal machine.

4. Management and Auditing. It is still way easier to manage a company device than do some clean-access check on a personal device. Especially for small and medium enterprises, NAC-type projects are often far too costly and complex.

Technical solutions will come up to address issues (3) and (4) above. But it is (1) and (2) - the legal and contractual world - that will keep the endpoint critical in the near term. While the future may be deperimeterized, this will require a sea change in the clearly network-centric language of most contracts, regulations, and standards today. It might happen fast when it does happen. But it hasn't happened quite yet.

In the meantime, the government of the world's largest IT user base is betting that the endpoint can deliver for its censorship and monitoring needs.

Saturday, June 6, 2009

The Security of Bing

This week Bing made its big debut. If you haven't yet heard of Bing, Microsoft has a 100 million dollar advertising campaign that should correct your ignorance pretty soon.

Bing hasn't yet hit prime-time. When I Googled the word Bing (ahhh...the irony) the first page of results still contained links to a surf board shop, a columnist for Fortune, and various complaints from the latter about brand name infringement. That's going to change real fast (like by the time you are reading this), but it goes to show that Bing has a long way to go before it even begins to chip away at Google's dominance of search.

Bing has generally received good reviews - it seems to basically work (no repeat of the initial Vista disaster here) and is a refreshing alternative to Google. But surprisingly there hasn't been much talk about the security of Bing (except for a storm of criticism about the display of inappropriate material in search snippets).

Or maybe not so surprisingly. Most people think of cybercrime in terms of hackers and ID thieves. Search engines just search for stuff that's already out there. But the truth is that the vast majority of criminal and quasi-criminal activity on the web involves gaming the search engine system in one way or another. A lot of crime that just happens to use a computer gets incorrectly labelled cybercrime. But search engine crime (if I may coin a new phrase) is truly cybercrime. 

How does search engine crime work? Take the term "hotels New York". Every week thousands of people book millions of dollars of hotel rooms by Googling those three words. Since people only pay attention to the very first search engine results (and almost never click beyond the first page of search results), moving just one spot up the list of results can result in a huge increase in revenue. The difference between being number 8 and number 7 in the Google results for any hotel related word has a very real and measurable price attached to it.

Enter the cybercriminals. Botnets are harnessed to create spam links to sites to raise their profile. Browsers are hijacked to redirect unsuspecting users. Content is culled from one site to another to produce hits in unrelated searches. Spam links are created dynamically on the basis of user input. There are literally thousands of ways of gaming the search engine system. Some are outright criminal, while others are just very shysterish SEO (search engine optimization) techniques.

Of course not all search terms have equal cachet. Take the term CISO (Chief Information Security Officer). A reader who stumbled on my blog by googling the term "CISO" pointed out to me that my recent post on whether companies really need a CISO is now the 5th highest search result for the term "CISO" on Google. Now while this blog has a healthy readership and has built up some decent Google juice over time, what this high ranking really goes to show is that there aren't all too many people talking about CISOs or competing for this term in search engines. Which kind of reinforces the point the post was trying to make in the first place...

But I digress. Let's get back to Bing security. My guess is the majority of all malware, viruses, etc are aimed at gaming Google search results. Now while most of these techniques work to some extent at gaming Yahoo and other search engines, the fraudsters are aiming to maximize Google rankings above all else. 

How well has Google done in making its search engine fraud- and spam-proof? Google has done pretty well with controlling Gmail spam (despite recent glitches). But today Google search results are a mess - malware related sites are frequently misidentified and many common search terms point to malware. Just a bit of Googling - I mean binging around - makes it seem like Bing suffers a bit less from this problem. One reason is that Bing appears to return fewer search results than an equivalent Google search. And the other more important factor is that cybercriminals have not yet honed their skills on gaming Bing.

Early indications are that Bing uses much the same malware filtering as other search engines. The web analytics market is already scrambling to understand the implications of Bing. Features like the (annoyingly slow) rollover function undoubtedly have serious implications in terms of both security and user behaviour. How these will be exploited by cybercriminals will have a major influence on the face of cybercrime in the coming months.

Wednesday, June 3, 2009

The Encryption Myth

Rich Mogull had an interesting post yesterday about some trends he has been observing in enterprise security. Rich is a guy with his ear to the ground when it comes to what security processes and products companies are actually implementing. Reading his assessment on the state of encryption got me thinking about why everybody is talking so much about database encryption and why so few people are actually doing it.

There are three encryption trends in particular that caught my attention - 

1. Laptop encryption is being commonly deployed
2. File and folder encryption is not in wide use
3. Database encryption is hard and not in wide use

Let's start with (1). Laptop encryption is the most no-brainer security mechanism out there for any organization dealing with personally identifiable information (PII). 

Do the math. Your organization will lose laptops. Some will contain personally identifiable information. And these days, whether you like it or not, most thefts or losses will be high-profile enough that your legal department catches wind of them. At that point, if the laptops were encrypted, your IT department fills out a new PO and everyone gets back to work. If they weren't, all of a sudden you have a potential breach notification situation on your hands. Which isn't the catastrophe that security vendors make it out to be (and nothing like the $200-per-lost-record myth you hear bandied around), but unpleasant nonetheless. And certainly a situation worth shelling out a few bucks per laptop to prevent.
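The back-of-the-envelope version of that math might look like the sketch below. Every number in it is a made-up assumption for illustration - plug in your own fleet size, loss rate, and incident cost - but the shape of the comparison is the point:

```python
# Back-of-the-envelope laptop encryption math.
# Every number below is a hypothetical assumption, not a survey figure.
laptops          = 1000      # fleet size
loss_rate        = 0.05      # fraction lost or stolen per year
pii_fraction     = 0.30      # share of lost laptops holding PII
breach_cost      = 50_000    # cost of handling one notification incident ($)
encrypt_per_unit = 50        # full-disk encryption license/support per laptop ($)

# Expected annual cost of doing nothing vs. cost of encrypting everything.
expected_breach_cost = laptops * loss_rate * pii_fraction * breach_cost
encryption_cost      = laptops * encrypt_per_unit

print(expected_breach_cost)  # 750000.0
print(encryption_cost)       # 50000
```

Even with an incident cost far below the $200-per-record myth, encrypting the whole fleet comes out an order of magnitude cheaper than the expected cost of unencrypted losses.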

What about (2) and (3) then? Why is almost no one encrypting their file folders and databases server-side?

The truth is that you just don't mitigate that much risk by encrypting files at rest in a reasonably secure environment. Of course if a random account or service is compromised on a server, having those database files encrypted would sure come in handy. But for your database or file folder encryption to actually save you from anything, some other control needs to fail.  

Contrast this with cleartext data traveling through a public network, where no further control needs to fail for your data to be compromised. Or in other words, depending on your definitions you could say that unencrypted Internet traffic is already compromised. This is why https is ubiquitous, while hardly anyone is using database encryption. 
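To make the "some other control needs to fail" point concrete, here is a toy sketch. The XOR cipher is a deliberate stand-in for real encryption and the file layout is invented, but the structural problem is real: on a typical server the decryption key lives on the same box as the encrypted data, so a full compromise yields both.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "encryption" - a stand-in for real at-rest encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# At-rest encryption: in practice the key usually lives on the same
# server (a config file, or a key store the DB process reads at boot).
server_fs = {
    "etc/db_key": secrets.token_bytes(16),
}
record = b"alice,4111-1111-1111-1111"
server_fs["db/customers.dat"] = xor_cipher(record, server_fs["etc/db_key"])

# An attacker who fully compromises the server gets both files, so the
# encryption only helps if some *other* control kept the key out of
# reach (a stolen disk, a limited service account, an external HSM).
stolen_key = server_fs["etc/db_key"]
stolen_blob = server_fs["db/customers.dat"]
print(xor_cipher(stolen_blob, stolen_key))  # recovers the record
```

Contrast this with transport encryption, where the attacker on the wire never holds the key at all - which is exactly why the risk calculus differs.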

This is not to say that database encryption or file folder encryption is useless - far from it. For the compliance and audit kudos alone there is much to be said for implementing these solutions. But if you only have one play to make, you are much much better off leveraging your organizational capital to actually make sure that your servers are locked down, your database permissions are not too liberal, and that your developers are coding securely. 

In good times it is easier to throw money (or a few extra DBAs) at a problem than to effect organizational change. There are organizations out there that have gone through painful database encryption implementations - with all the performance hit and complexity that comes with it - but have thousands of undermanaged accounts and weak configuration settings because the business owners of those processes are beyond the reach of the CISO. But during a recession, folks are only spending money when there is no alternative internal means of achieving the same goal.

There is also another simple reason for the failure of database encryption to hit prime time while laptop encryption is becoming more and more common. Database encryption - while kinda sorta required/recommended in PCI - doesn't naturally flow from any particular legal requirement. As Rich points out and as supported by the results of the OWASP Security Spending Benchmarks Report, compliance is the number one driver of security spending. Without a clearer regulatory paymaster, database encryption isn't heading anywhere in a hurry.

Monday, June 1, 2009

Locking Down the iPhone

The Center for Internet Security (CIS) has just published a report on secure configurations for the iPhone.

iPhones are a thorn in the side of many security administrators. While locking down Blackberries and other devices seems natural, the whole point of the iPhone is to be fun and easy and free. The CIS report gives IT administrators 20-odd tips on how to lock down the devices.

(You need to provide your personal information to get the report here - just scroll down to the part on mobile devices. Downloading the report from the CIS was an interesting experience. It's not often you are required to agree with a philosophical statement to be allowed to download a whitepaper: "By using the Products and/or the Recommendations, I and/or my organization agree and acknowledge that: No network, system, device, hardware, software or component can be made fully secure.")

Is the iPhone ready for prime time in the enterprise? Although some large enterprises are already using it, the vast majority of iPhones are still in personal - not enterprise - use. But there are signs that Apple is trying to piggyback on the success Macs have had in making inroads into the enterprise beyond their traditional graphics and artistic constituency.

The CIS report is a very good resource - in conjunction with the Apple Enterprise Deployment Guide - for any enterprise thinking of managing or supporting the iPhone. For each security recommendation, it lists whether the setting can be remotely enforced on the device.

The report splits the various security recommendations into two levels - level I stuff that your users will actually let you do, and level II stuff that you will only be able to get away with in a security critical environment. 

Some of the level I stuff is slightly annoying to the user but otherwise pretty innocuous - require passcodes, enforce complexity, auto-timeouts, auto-updates, and that kind of thing. But some of the level II stuff - disabling wifi, disabling Javascript, and other draconian measures - probably isn't going to fly with the average user. There's a lot of basic stuff you can't do on an iPhone without wifi. Why get iPhones for the enterprise if you can't do any of the fun stuff?

Although all the recommendations are useful for both company and (paranoid) home use, there is one structural vulnerability in mobile platforms that they do not - and by construction cannot - address. Many (I am tempted to say most) web attacks on a client/browser somehow involve the abuse of a logged in session or the data that a logged in session left behind. This isn't just XSS (Cross Site Scripting) and XSRF (Cross Site Request Forgery) type stuff, but the abuse of any stored credentials or data on the device.

Now your run-of-the-mill desktop PC or Mac is of course theoretically just as vulnerable to this. But users intuitively have a better idea of what sessions are active, what services they are logged into, and what their browser is up to when they are staring at a big screen. iPhone users are limited even further by the device's interface from seeing what is really going on.

On mobile platforms, there is an even greater danger to all these live credentials and logged-in sessions than fancy XSS-type attacks. It is the gazillions of sites that actually have the gall, the chutzpah - the sheer audacity - to use your login credentials for other sites to populate your friend requests or what have you. When you are simultaneously logged into 8 social networks, your email, a bunch of bookmarking sites, and who knows what else, there is a good chance that someone out there is trying to cull your contacts or something out of one of the other applications.

This practice isn't even considered shady any more and may even be permitted by the terms of use. I remember being asked by LinkedIn (which I consider one of the more privacy friendly social networking sites) if they could have my email password so they could invite all my contacts to link in with me. The pop-up box came up so naturally that many users probably just hand over their password by instinct.

Let's circle back to the iPhone. At this point the iPhone is still fundamentally an end-user device. One of its main appeals is that it doesn't feel like work. If Apple were to start tinkering with its default settings and configurations to satisfy corporate IT security policies, it would hurt the product's consumer appeal.

It's also unnecessary. 90% of enterprises require reasonable, but not draconian, security. (OK, I made that number up, but there is no doubt that the large majority of businesses - even those dealing with sensitive data such as health care or financial records - do not require fortress-like data environments.) So for these businesses - where data and networks need to be protected but where collaboration, creativity, and efficiency are just as important - there is no need to ban devices like the iPhone as long as sensitive data stays off the device. It is easier to be liberal with devices and strict with the data that gets on them than the other way around.

Since most people are not going to be editing spreadsheets on their iPhone, the main entry point for sensitive data is the corporate mail network. A good example of the threat mobile devices pose is the emailing of sensitive information, and in particular large attachments with personal data whose loss would trigger breach notifications. Many enterprises still do not have corporate policies prohibiting the emailing of such attachments.

Emailing sensitive personally identifiable information is a bad idea for many reasons. It's just too easy to end up in a data breach situation - someone emailing a file to the wrong address, for example. But mobile devices (not just the iPhone) are yet another reason why this is a bad - really bad - idea. Many webmail programs like Gmail download emails onto end devices without being prompted, and those devices are often not encrypted at the disk level. So the unencrypted data sits on devices that disappear, get lost, or get stolen all the time. Depending on the data, the circumstances, and the state, it is certainly possible that this would trigger a breach notification requirement.
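A crude version of the policy-enforcement idea can be sketched in a few lines: scan outbound attachments for obvious PII patterns before they leave the mail network. The patterns below are illustrative only - real DLP products use far more robust detection (checksums, context, data dictionaries) - but they show the basic shape of the control:

```python
import re

# Rough PII patterns - illustrative assumptions, not production rules.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN format
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # card-number-like runs
}

def flag_attachment(text: str) -> list:
    """Return the PII categories detected in an outbound attachment."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(flag_attachment("name,ssn\nalice,123-45-6789"))   # ['ssn']
print(flag_attachment("card 4111-1111-1111-1111"))      # ['card']
print(flag_attachment("quarterly sales were up 12%"))   # []
```

Flagged messages could be bounced back to the sender with a pointer to the managed file transfer system - which is exactly the kind of workflow the vendors below productize.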

For larger enterprises some DLP type solutions may help mitigate this risk, although these are very involved implementations. Although I don't often mention particular vendors on this blog, there is one managed file transfer solution I have been a repeat customer of that addresses this issue well. Accellion is largely marketed towards the large file transfer market, but it also provides a simple and auditable way to offload sensitive data transfer from email networks. There are also numerous other managed file transfer solutions that are active in this space. At the end of the day it's easier to manage file transfer than to manage employee mobile phones.