Tuesday, March 29, 2011

Comodo, RSA, and Security Priorities

More details are coming in on the Comodo digital certificate hack by an Iranian hacker. The young man apparently exploited the use of plaintext usernames and passwords in a generally vulnerable certificate issuing system.

Coming on the heels of the recent RSA SecurID breach, it has been a bad couple of weeks for security vendors in general, and for both the SSL certificate and two-factor authentication hardware token businesses in particular. But RSA's pain is likely to be more acute. The SSL certificate business is unlikely to suffer in the long term. Websites need certificates, even if they don't guarantee security. Folks might move away from Comodo in the short term, but even that seems unlikely given the small-time nature of a certificate purchase. SSL certificates are mostly viewed as a commodity, and price is the main differentiator.

But the hardware token business could be in for some rough times ahead. SecurID is, at the end of the day, a discretionary purchase driven by a desire to have the gold standard of security. It is the security equivalent of a luxury good. But you wouldn't buy a BMW if you thought it was just as prone to accidents as the Toyota down the street. If RSA SecurID doesn't provide a concrete measure of added, or at least perceived, security, CIOs will be reluctant to pay the premium that hardware solutions naturally command.

RSA's vague pronouncements about "Advanced Persistent Threats" might have done more harm than good. There may be some mitigating law enforcement issues we don't know about that are preventing RSA from really coming clean. But APT is all too often used as a code word for stuff-we-can't-really-do-anything-about. Which is fair enough, of course; RSA can genuinely make the case that they've sold security products for a long time, but everything is breakable and stuff happens.

The security of RSA's SecurID system was always a combination of the strength of its underlying algorithms and the strength of its operations and environment. Ditto for the Comodo situation - the security of SSL certificates depends on many factors, and the difficulty of factoring the product of two large primes is way down the list. Comodo is a business, and in businesses significant numbers of people need access to significant amounts of sensitive data. Invariably there will be screw-ups in how those people handle those responsibilities. This time it seems an Italian reseller was partially to blame.

But this raises the larger question of whether particularly sophisticated and expensive security products are justified when most organizations face threats that are far more basic. In other words, the recent Comodo and RSA hacks ironically underscore the point that SecurID tokens are something of a Maginot Line for many organizations where other, much more immediate threats are present.

The way Comodo was hacked is particularly illustrative of this phenomenon. The reseller credentials were apparently sitting around in plaintext (or at least that's what the Iranian hacker taking responsibility for the attack claims). Most businesses, and especially businesses that live primarily in the cloud, have web front-ends to critical data that do not involve two-factor authentication. This might be a Salesforce account, Google Docs, an administrative console to a CMS like Drupal, or whatever. And many web 2.0 businesses live in shared hosting or VPS environments where the root credentials to their accounts actually live in plaintext on the host's servers, often visible to anyone in support. Using two-factor authentication in this kind of environment strictly to increase the general level of security rarely makes economic sense.
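
The plaintext storage part, at least, has a well-understood fix. Here is a minimal sketch - my own illustration, not anyone's actual code - of storing credentials as salted, deliberately slow hashes using only the Python standard library:

    import hashlib, hmac, os

    ITERATIONS = 100000  # deliberately slow, to frustrate brute-forcing

    def store(password):
        # Persist the salt and digest instead of the password itself.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison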

It's hard to say if RSA or Comodo will suffer any lasting damage from these attacks. For the vast majority of businesses, the ease of implementation and integration of a two-factor authentication solution trumps abstract concerns about the system's hackability. And SecurID's large library of clients and authentication agents is in itself a security feature; a competing product with a smaller number of clients introduces new threats, since you have to either cobble together your own code (almost always a bad idea) or you end up with some of your systems not covered.

The Rise and Fall of Hardware Tokens?

One primary beneficiary of SecurID's troubles could be competing vendors who offer two-factor solutions that do not rely on actual hardware tokens. CA has quickly gotten on the bandwagon and is offering to switch out SecurID tokens with its own ArcotID system. On the one hand it's easy to see how an actual physical hardware token is "more secure" than a software token installed on a mobile phone; a software token could theoretically be subject to all kinds of OS-related attacks and other vulnerabilities in both issuance and ongoing maintenance. On the other hand, the actual overall environment in which hardware tokens live - the issuing, the recalling, and indeed the APT in a vendor environment itself - paints a much murkier picture.
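
For what it's worth, the mechanics of a software token are not mysterious. SecurID's algorithm is proprietary, but the open time-based one-time-password scheme of RFC 6238 is broadly comparable; here is a minimal sketch in Python (the base32 secret is a made-up example):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        # RFC 6238: HMAC the current 30-second time step, then truncate.
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # same secret + same clock = same code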

SSL and two-factor are part of the longstanding conventional wisdom of the security industry. They have made their way into the requirement documents of countless RFPs and contracts. In fact, the use of SSL is often one of the only security requirements in specs for outsourced web applications. But the shortcomings of this checklist approach to security are clear when Comodo's certificates can be brought down by a sloppy reseller or RSA's own SecurID can be subverted.

Sunday, January 23, 2011

Security Scoreboard - Join the Conversation

This week Security Scoreboard made an exciting announcement - the company received angel funding and Dominique Levin has joined as full-time CEO.

Now that we have an expanded team and some cash (both good things), we would like to share some of our plans with the community. And more importantly, we would like to invite the community to join in and help shape the future of Security Scoreboard.

A bit of background...

Security Scoreboard's mission is to provide unbiased end user experiences with security solutions in order to help security professionals find the right vendor for their organization's challenges. Almost exactly one year ago, I launched Security Scoreboard out of a need that I felt as a security practitioner: I was not happy with the information available about end user experiences with security solutions. If you tried researching a security solution, you found plenty of product information from vendors themselves. If you were lucky you might have found some analyst and third party or trade publication reviews. All potentially relevant - but what about actual end-user experiences? There was a lack of information from users who had actually bought, implemented, and used different security solutions.

Security Scoreboard was built to answer this need.

The response within the community was extremely positive and underscored the urgent need for a credible platform for unfiltered end-user voices. It also became clear over time that the Security Scoreboard movement had grown beyond the capability of one person to build and operate in their spare time. I am very excited that Dominique Levin - an industry veteran well known to many of you from her time heading up LogLogic - shares the original vision and has joined Security Scoreboard as its full-time CEO.

Challenges to Building a New Ecosystem

Security Scoreboard seeks to fundamentally change the way CISOs, CIOs and other "security consumers" evaluate vendors. There are four key ingredients to achieve this:

1. TRUST - Users need a way to determine the credibility of reviews
2. PRIVACY - Users need to be able to leave reviews with a reasonable degree of privacy
3. ACTIONABLE INFORMATION - Users need a way to get the information that matters to them quickly and efficiently
4. TRANSPARENCY - Users need to know how the site funds itself and the formula behind any pay-for-play.

Consumer review sites like TripAdvisor and Yelp face similar challenges in the consumer space. And while security professionals might be a slightly more skeptical bunch than your average person, the basic challenges Security Scoreboard faces are the same ones other community-driven review sites face. These challenges are make-or-break for Security Scoreboard, so we want to share our thoughts on each one with the community -


1. TRUST - How do you know whether a review is legitimate?

Screening reviews for obvious plugs or badmouthing is a critical challenge. Users need to know how legit each review is. As Hoff, Lenny Zeltser, and others have pointed out, developing a reputation system allowing users to evaluate Security Scoreboard reviews is critical to our success.

We envision Security Scoreboard having tiered reviews – those written by loosely authenticated reviewers should be taken with a grain of salt, while those written by reviewers who have been vouched for by reputable entities should carry more weight. The nature of this reputation system needs to be rooted in the existing security community. We are exploring a number of tools to factor into this reputation system – from transitive tokens (more on this below) to leveraging existing security organizations and communities. At the same time we are studying what has worked and what doesn't work in other online communities facing the same challenge.

2. PRIVACY

Many security managers do not feel comfortable posting comments about vendors in public forums. Some might even regard their use of a particular solution as confidential information. On the other hand, as discussed above, Security Scoreboard needs to verify that reviews have been posted by legitimate users.

Currently we have an informal and not completely scalable approach to vetting reviews while not publishing reviewers' identifying information. As we grow, we are building a more formal structure around reviewer identification. We are also looking into some fancier token-based systems, so that a current trusted user of the site can distribute tokens to trusted colleagues without the site being aware of their identity. This can spill over into the privacy-overkill zone, so we intend to restrict ourselves to those reasonable privacy measures that would make typical users comfortable leaving reviews on the site. This is tightly bound to the credibility question, and it is an area where we intend to continuously involve the community.
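
To make the token idea concrete, here is one possible shape such a scheme could take - purely an illustrative sketch, not a description of what we will actually build. The site mints random, HMAC-signed tokens tied to no identity, so it learns nothing about who handed a token to whom:

    import hashlib, hmac, os

    SERVER_KEY = os.urandom(32)  # hypothetical server-side signing secret
    spent = set()                # single-use bookkeeping

    def issue_token():
        # Random nonce plus signature; no identity is embedded anywhere.
        nonce = os.urandom(16).hex()
        sig = hmac.new(SERVER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
        return nonce + "." + sig

    def redeem_token(token):
        # Verify the signature and burn the token on first use.
        nonce, _, sig = token.partition(".")
        expected = hmac.new(SERVER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(sig, expected) and token not in spent:
            spent.add(token)
            return True
        return False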

3. ACTIONABLE INFORMATION

Credible reviews are only valuable if they lead to easily accessible and actionable data. Security Scoreboard strongly believes in openness of data and metrics (check out the analytics data for product categories or register to see the popular keywords associated with each individual vendor). As we gather more reviews and evolve the authentication schemes described above, we plan to build more sophisticated accompanying metrics to slice and dice data according to parameters that are important to end-users. Reviewer credibility will become an important factor in these algorithms.

There are some other obvious improvements on our short-term product roadmap. Some of you have noticed that Security Scoreboard currently does not let you rate a vendor's individual products. For small vendors with one main product, rating the product and rating the company is pretty much the same thing. But for large companies like Symantec, McAfee, Microsoft, etc. there is an obvious need to rate individual products rather than the vendor as a whole. We're on it, and will shortly be introducing changes to allow for ratings of specific products as well as direct product comparisons.

4. TRANSPARENCY

Nothing kills credibility faster than backdoor pay-for-play. This lack of transparency affects a large portion of the third party information available today for IT systems in general.

Right now we are focused on building the community at Security Scoreboard and have not yet decided on a final revenue model. Vendors will play a role in this model, but we intend to be completely open about how the bills are being paid. Sponsored content and objective results are not mutually exclusive; for example, the existence of Google AdWords has not eroded confidence in the organic results produced by the PageRank algorithm. At Security Scoreboard we intend to have a similarly transparent and open revenue model from day one.

Help us build the future of Security Scoreboard

We are looking for community insight and input on all four of these challenges, and especially in building our reputation and privacy systems. The Security Scoreboard cause will stand or fall with the authenticity and credibility of product reviews and ratings.

This is a movement for and by end users, so if you have some time to chime in, we would love to hear from you.

Joining the Discussion

If you want to join the discussion, please just send an email to voice at securityscoreboard dot com with your name and affiliation. Don't worry about spam - we'll be happy to take you off the list whenever you want.

This mailing list is open to anyone in the security community and beyond who is interested in contributing to our discussion - end-users, vendors, academics, and the like. If you think that Security Scoreboard is a useful tool and you are interested in influencing our future direction, please be sure to sign up and join the discussion!

Thursday, June 10, 2010

iPad and the Illusion of Privacy

It's been a bad week for Apple. First the wifi choked at Steve Jobs's iPhone 4 demo at WWDC. And now Gawker has reported that AT&T inadvertently leaked the email addresses of 114,000 iPad purchasers.

It should come as no surprise that the culprit here is a web application vulnerability. According to a story on Slashdot, a web service that was supposed to provide an AJAX-type response within AT&T's web apps was left exposed externally. Oops.
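
To illustrate the vulnerability class - with hypothetical names and fake data, not AT&T's actual code - the pattern is essentially an internal lookup service that was never meant to face the open Internet:

    # A sketch of the anti-pattern: an unauthenticated lookup keyed on a
    # predictable identifier. Anyone who can iterate plausible ICC-IDs can
    # harvest the matching email addresses with a trivial script.
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    SUBSCRIBERS = {"8901410321111111111": "someone@example.com"}  # fake data

    @app.route("/shared/acctmgmt")  # hypothetical path
    def lookup_email():
        icc_id = request.args.get("ICCID", "")
        # No session, no authentication, no rate limiting.
        return jsonify(email=SUBSCRIBERS.get(icc_id, ""))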

A lot of big name people are going to be pissed off. The celebrities on the list are not going to be happy about having their email address in the papers. And public officials who expensed the iPad will (or at least should) have some serious explaining to do as to why the taxpayer needs to subsidize their new toy.

Email addresses can be changed. But the leak also exposed something called the ICC-ID, a number that uniquely identifies a device's SIM card.

At the time of writing (night time EST 6/9) there is still no official announcement on what is going to happen with the leaked identifiers. My guess is that they can't be reliably changed without a manual recall. This raises privacy concerns for the affected users, since ICC-IDs are relatively liberally shared during the course of network communications.

But in the end it doesn't really matter. Using an iPad or an iPhone already binds your personal information to your web traffic in a much deeper way than your old-fashioned Mac or PC. After all, most iPhones are full of apps that tie your actual personal data - your name, credit card, address, etc. - to your device. iTunes works the same way. And unlike a full-blown computer, iPhones and iPads afford very little GUI control over what is happening in the background. You could of course gain control through jailbreaking. But that's not the MO of 99% of users, and it violates the terms of service to boot.

I don't mean to justify the leak - users have a reasonable expectation that their personal information is not totally exposed for the world to see. But when you use an iPhone or iPad, you need to realize that your personal data is lurking in thinly veiled form in countless transaction and traffic logs. Although the 114,000 folks whose ICC-IDs are now public domain are slightly more at risk than the rest of us, it is not as though everyone else was operating in anonymity. The AT&T incident demonstrates that on rigid mobile platforms everyone's traffic is just one badly configured web service away from exposure.

It is amazing how quickly mobile communications has gone from the most secure to the least anonymous form of communication. Mobile security has a special place in my heart since the days I served as one of the dozen-odd members of the ETSI Secure Algorithm Group of Experts that standardized the GSM and UMTS encryption algorithms in the first half of the last decade. Back then it was easy - security derived from cryptography, and mobile Internet usage was barely getting off the ground. Today the underlying strength of the mobile cryptographic algorithms is almost irrelevant to most practical attacks. And anonymity is essentially impossible to achieve on devices locked down by both the manufacturer and the operator.

With a brick and mortar PC that connects to networks the old fashioned way, there is a certain default anonymity that even non-technical users can achieve. On locked down mobile devices - where Steve Jobs decides what applications can run and how - a user's identity is at best protected by a myriad of minimalistic authentication mechanisms. For most users, it is worth trading robust privacy in the interest of a rich user experience. That's why millions of users (myself included) own iPhones. But it also means that when the inevitable data breaches occur, there is a lot more information potentially at risk.

Tuesday, June 8, 2010

Napera selling security at the Google Apps Marketplace

Napera Networks announced yesterday the availability of what appears to be the first systems management application in the Google Apps Marketplace.

Google Apps Marketplace was launched in March of this year and is exactly what the name implies - a place to buy and install apps that integrate directly with Google Apps. Most of the 45 offerings currently listed in the Security and Compliance category are related to email security. This makes sense since email is the most popular Google Apps product.

Napera's PC Security Informer is trailblazing as the first security management offering in the Google Apps Marketplace (there are of course plenty of competing cloud security management offerings, such as Shavlik PatchCloud).

Does buying security management from the Google Apps Marketplace make sense?

Luckily for Napera, the usual cloud security FUD will not hold much water with its potential Google Apps Marketplace customers. The small and medium sized businesses that are the target market for Napera's PC Security Informer have already moved big chunks of their infrastructure to the cloud. Since the data is already in the cloud, there is no reason that the security to protect that data shouldn't be in the cloud as well.

The bigger issue for most businesses will be business control, customization, privacy policies, and SLAs. Moving apps to a platform-as-a-service infrastructure is scary from a can-I-get-someone-on-the-phone-when-the-^%&$^-hits-the-fan perspective. And for applications deployed within Google Apps, there are multiple vendors to deal with. When you use a cloud-built-on-a-cloud service like Napera PC Security Informer, you are dependent on both Napera and Google for everything to run smoothly.

The litmus test for the success of any app in the Google Apps Marketplace is whether the integration advantages outweigh the lock-in. The biggest competition for Google Apps Marketplace security products will come from competing hosted solutions. With third party hosted solutions, what you lose in Google Apps integration might be gained back in control and peace of mind.

A challenge for Napera in driving adoption of the PC Security Informer is the nature of the Google Apps customer base. With 25 million users spread over 2 million businesses, the typical Google Apps customer is a ten-guys-working-virtually type of company. Those organizations are not in the market for systems management. Many of the larger companies using Google Apps are still dipping their toes in the water. Those companies are unlikely to realize much advantage from the tight integration with the rest of their Google Apps domain that is the main value-add of the app approach.

I haven't used the product, but it would certainly be interesting to hear from someone who has. Which brings me to a plug for Security Scoreboard, the vendor review site for the security community. If you are a current customer of Napera Networks, please share your experiences on Security Scoreboard and help the rest of the community evaluate this vendor.

Monday, June 7, 2010

Flash Security Under the Microscope

On the heels of Apple's very public tussle with Adobe over Flash support on the iPad, Adobe announced a "critical vulnerability" in Flash on Friday.

Vulnerability announcements happen all the time. For better or worse, the nature of today's software industry is to build first and repair later. But it's been months since Flash experienced a security issue of this scope. And the timing is not good for Adobe, as Steve Jobs specifically mentioned Flash security issues in his "Thoughts on Flash" manifesto in April. With the major media players deciding what graphics and animation standards to support, Flash is under the microscope.

I don't think security usually determines winners and losers in the mass market/desktop environment. But there are rare occasions when the cumulative perception of security vulnerabilities, coupled with lingering privacy issues, can form a tipping point in the fortunes of a technical standard or company. Many companies are immune to this phenomenon due to a lack of alternatives (for all the user outrage, Facebook is not about to be upended by fledgling alternatives like Diaspora anytime soon). But with HTML5 and other open, web-based standards offering competing functionality, a series of badly handled security vulnerabilities would not augur well for the future of Flash.

Incomprehensible warnings

Miscommunicated vulnerabilities like the one announced on Friday can fall into this straw-that-broke-the-camel's-back category. At the time of writing (Saturday night, June 5th) the Adobe announcement does not make clear that all users running current versions are vulnerable and that there is no available fix. Instead, Adobe published that anyone running Flash version 10.0.45.2 or earlier is at risk. Since there is no version 10.0.45.3, that basically means that by default everyone is potentially vulnerable. Since most non-Rain Man users do not have the version numbers of their installed programs memorized, this should have been more explicitly spelled out in plain English. A more technical explanation could have been included to parse out exactly which installations are at risk.

But even more troubling for the average user is the lack of a viable fix. Adobe has announced that this vulnerability is being exploited in the wild (again an incomprehensible term for most users…). There appears to be no available patch for the Flash vulnerability. And for the accompanying Reader and Acrobat vulnerabilities the solution is to remove the authplay.dll component that ships with the product. How many users know what a dll is?

Of course Adobe isn't alone in producing vulnerability announcements that are not actionable for most users. And this announcement is far from the worst. My unscientific thumb-in-the-wind estimate would give this a B or B- for clarity on a weighted curve with other major software vendors. But even more problematic, and potentially more damaging to Adobe's long-term perch within everyone's browser, is the lack of user control over Flash privacy settings.

Flash's privacy exception

Flash has long existed in its own little fiefdom on the desktop, immune to many of the privacy controls applied to browser plugins. But that situation could be ripe for change. When even Facebook's CEO - with the closest thing the planet has to a universal social network - is literally sweating up a storm over users' privacy concerns, more easily replaced plugins like Flash cannot continue indefinitely to fly under the radar.

Until now Flash has somehow gotten a free pass when it comes to user privacy. In response to user demand, all the major browsers include a private browsing mode that does not record cookies and generally does not leave digital fingerprints on the user's computer (earning it its more colloquial name, porn mode). But Flash doesn't play by these rules. Most users are very surprised to learn that Flash cookies persist on their machines long after the user has diligently cleared caches, cookies, and even reset their browsers. The only clue for the average user is the seemingly mysterious way that programs like Pandora still remember them long after they thought they had scrubbed their browser clean. Flash may have a webpage where users can theoretically manage their cookies, but I would guess that only a minuscule portion of users are even aware of its existence.
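
For readers who want to see this for themselves, Flash cookies are ordinary .sol files sitting on disk. A quick sketch that walks the commonly documented storage locations (exact paths may vary by OS and Flash version):

    import os

    # Commonly documented Local Shared Object locations.
    CANDIDATES = [
        os.path.expanduser("~/.macromedia/Flash_Player/#SharedObjects"),  # Linux
        os.path.expanduser("~/Library/Preferences/Macromedia/Flash Player/#SharedObjects"),  # Mac
        os.path.expandvars(r"%APPDATA%\Macromedia\Flash Player\#SharedObjects"),  # Windows
    ]

    for root in CANDIDATES:
        if os.path.isdir(root):
            for dirpath, _, files in os.walk(root):
                for name in files:
                    if name.endswith(".sol"):  # a Flash cookie
                        print(os.path.join(dirpath, name))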

Regardless of whether you think Flash on a website is cool or just annoying, it's hard to get the full web experience without Flash, and that's why it's installed on 99% of browsers. If Flash were to lose its ubiquity, the transition to competing standards could snowball. With so little information available about the latest vulnerability, it is difficult to know whether it is the result of overzealous feature integration at the price of security. But as the ubiquitous incumbent in the web multimedia war of 2010, Flash will be held - fairly or unfairly - to a higher standard than some of its emerging competitors.

Tuesday, May 25, 2010

Google Secure Search and Security Overkill

Google announced on Friday the availability of a beta version of its secure search.

Secure search? Well, kind of. Google, of course, still retains all your search data. But users will now have the option of searching over an SSL connection. Just type https instead of http in the Google URL and your searches are safe from prying eyes, Google and your desktop notwithstanding.

The rushed timing of the latest announcement is no coincidence. Google has been in some serious hot water over the last few weeks for gathering data from insecure wifi connections using StreetView. Unlike previous Google privacy $@#%-ups, StreetView wifi-gate has users, and especially governments, genuinely annoyed. A big American company driving by in a van and kinda sorta intercepting wifi traffic understandably rubs a lot of folks the wrong way, especially in Europe.

There is a certain delicious irony here. Google gets busted for spying on unencrypted wifi connections, and responds by offering encrypted search. Google is basically saying that you had better think about encrypting your search results, because there are a lot of crazy folks out there who might be trying to listen in. Heck, even we might be accidentally listening in!

Ironic or not, at least this makes some sense. Last week I wrote about Facebook trying to address a growing privacy uproar by offering a totally unrelated security option. While offering a rare mea culpa for the wifi snooping, with secure search Google is also not-so-subtly castigating users for their use of insecure connections. A bit like Toyota reminding users about the importance of seat belts...

[Since we're on the wifi topic, one quick digression - I don't believe for a minute that Google has any interest in spying on user wifi data. With so much data on so many users, the last thing the company needs is to physically go to users to get their information. I take at face value the claim that a "programming error" was responsible for the extraneous data collection. Unlike Facebook, Google has a deeper well of general public sympathy to draw on and my theory is that the public, if not necessarily the legal, aspects of this incident will quickly blow over.]

But back to secure search, which has been in the works for a while and is much more than a response to the wifi incident. Cynics say that secure search is just a ploy by Google to keep precious search data from the ISPs. To me that doesn't hold much water. Secure search will never comprise more than a tiny slice of overall Google searches unless it is the default. And I don't see that happening any time soon.

At the risk of overestimating the importance of the security profession, I would argue that one of the main motivations behind the new service is Google's interest in placating the security purists within organizations considering its enterprise services. As the biggest cloud service provider in the world, Google's entire corporate future is tied into trust and security in web applications. If Google wants to convince users to ditch desktop applications and behind-the-firewall servers in favor of its web-centered universe, it needs to convince enterprises that it takes security uber-seriously. By being the first major player to offer to secure pre-authentication search data, Google casts itself as a cutting edge provider of secure cloud services.

But does secure search fulfill an actual security function that justifies its cost? Or is it a case of security overkill meets security theatre?

Searching away from prying eyes

Let's start with what Google searches over https achieve. Encrypting the connection keeps search traffic safe from network sniffers.

Here's my best stab at a list of people/entities you might not want seeing your Google searches:

1. Your husband/wife/roommate/parents
2. Your employer
3. The random sys admin at your Internet cafe, university, etc.
4. Your government or ISP

Secure search does nothing for (1). In case (2), your employer is probably not terminating SSL connections, but they may very well be, so you can't really count on your searches being secure. With (4), your government or ISP has enough information on you (like your entire traffic history) that your search history becomes largely irrelevant.

That leaves (3). The only real advantage of secure search is protecting you on random networks you might be connecting to. This seems like a pretty limited use case. And anyone sufficiently security-conscious is already tunneling their traffic on public networks, rendering the privacy advantage of secure search moot.

How did you get here?

There is one big privacy advantage to secure search - referrer headers are no longer passed, so the web page you land on no longer knows how you got there. The vast majority of Internet users do not realize a simple fact that is obvious to most of the security-minded readers of this blog. When you search "furry animals" on Google and land on www.myfurryanimals.com, the webmaster of the latter sees that you landed there by searching "furry animals". The entire web analytics industry is built around this fact.
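
To see the mechanism at work, here is a toy landing-page server that just prints what the browser volunteers. The Referer header carries the full search URL, query string and all:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class LandingPage(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g. "http://www.google.com/search?q=furry+animals"
            print("visitor arrived from:", self.headers.get("Referer", "(none)"))
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"welcome")

    HTTPServer(("", 8000), LandingPage).serve_forever()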

But you don't need SSL to disable referrer headers. And besides, if someone is so concerned about privacy, they might be better off getting an anonymous proxy. Proxied traffic usually uses encrypted protocols, so at that point using secure search becomes superfluous.

I have no idea what the performance or functionality hit on secure search will be, but for now I don't see the numbers adding up in favor of the service. Searching on the beta Google secure search site seems slightly slower than regular http Google (although admittedly from the confines of a crowded New York cafe on a Sunday evening). So here's my back-of-the-napkin calculation: I probably do 100 Google searches a day. If each of those is 1 second slower (and I'm just making up the numbers here), is it really worth an extra 1 minute and 40 seconds of my time every day to hide my Google search history from a hypothetical person who might want to look at it?

The large majority of users have so many toolbars, widgets, and logged-in applications running at the same time that the entire concept of SSL-encrypting their search traffic is ridiculous. But even for privacy-conscious users this seems like one proverbial bridge too far. Secure search might give users a false sense of privacy with little tangible benefit.

An interesting project by the Electronic Frontier Foundation called Panopticlick shows that your browser basically provides servers enough information to be uniquely identifiable. With all the fonts, plugins, and other settings your browser has, even with an anonymous proxy a website can identify you as a return visitor. For the average user, online privacy is a bit of a heads-they-win, tails-you-lose proposition. No matter what measures they are taking, it turns out that they are still traceable online.
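
The idea behind Panopticlick is simple enough to caricature in a few lines: hash together a handful of individually innocuous browser attributes and you get an identifier that survives cookie clearing (the attribute values here are invented):

    import hashlib

    attributes = {
        "user_agent": "Mozilla/5.0 (Windows; U; ...)",
        "screen": "1440x900x24",
        "timezone_offset": "-300",
        "plugins": "Flash 10.0;QuickTime;Java",
        "fonts": "Arial,Helvetica,Verdana,...",
    }
    blob = "|".join(k + "=" + v for k, v in sorted(attributes.items()))
    print(hashlib.sha1(blob.encode()).hexdigest())  # stable across visits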

All this certainly shouldn't be construed as meaning that secure search is useless. Whatever one thinks of Google's privacy policies, they do offer a rich set of user security configuration options. With options like tying password reset to a mobile number, showing the last IP addresses that logged into a Google Account, and default SSL for many enterprise applications, Google offers a significant arsenal of security options that is more robust than many of its competitors'. I'm just not sure secure search adds much to this portfolio.

Friday, May 21, 2010

Facebook and Security Minimalism

Facebook can't seem to catch a break. Just this Wednesday an XSRF bug was announced that gave access to birthdates users had designated as private.

Not that Facebook users care. I would bet dollars to proverbial donuts that no more than 0.01% of Facebook users have ever heard of XSRF. And more importantly, I would bet that almost none of them have really suffered from these vulnerabilities. Bad security in social networks is a non-story. Lax privacy policies, on the other hand, are much more in your face. No user is going to notice an insecure version of Python running on your webserver. But share their data with unauthorized contacts and the same user might go berserk.

User apathy notwithstanding, Facebook is making some half-hearted attempts to calm the masses. The company announced last Thursday the ability to limit the devices from which an account can be accessed. But this attempt to soothe the raging Facebook villagers with sharpened pitchforks is misdirected. Users are concerned about who Facebook is sharing their information with, not how. Device authentication - a poor man's security control at the best of times - is an unnecessary inconvenience to the vast majority of users and an insufficient safety control for the truly paranoid.

Facebook is not run by fools, of course. You don't build a business that engages every tenth adult on the planet without honing a pretty good sense for which way the wind is blowing. The company realizes that it is under no obligation to provide any real security controls to its users. Providing window-dressing security such as device authentication is a good way to appear conscientious to a public that tends to conflate security with privacy. And in any case, the risk that device authentication addresses - preventing User A from logging in as User B - is the one area where Facebook and its users have a common interest.

So Facebook's mission is not entirely at odds with security. Facebook has an interest in providing application security insofar as it does not impede its vision of becoming the web's authoritative social platform (more on that in a bit). But beyond that, why would Facebook provide security that involves substantial resources or limits its collaborative abilities? Facebook may throw its users a security bone when it comes on the cheap, but the company is under no obligation to provide anything beefier.

Really? Here is a simple fact most security folks won't like - unless you are in a regulated industry you are under almost no specific obligation to offer secure web applications. Unlike privacy regulations, this statement is true across all major jurisdictions. Laws will limit who you can share data with, and in some cases like children whether information can be collected to begin with. But they impose virtually no requirements on small businesses on how or even whether they need to secure their data.

This means that anybody can fire up a web application and start collecting, storing, and processing data that may or may not be sensitive to its owner. And they can do this while being under almost no legal or business requirement to provide adequate security.

With hosting costs approaching zero and development frameworks hiding the uglier layers of the stack, this means that any old schmo can be in business in no time. Just like you can blog on Blogger without touching any code, you can now build some pretty impressive quasi-professional web apps without touching any real code. Millions of people have done exactly that.

But what about big apps? Surely the ubiquitous brand name web applications are subject to some sort of control that two guys in their garage are not? Well, not really. The fact is that many web 2.0 apps that are in common enterprise use are probably not more than 5 or 10 guys. They may look like big businesses, but the beauty of the Internet is the ability for small organizations to amplify their presence and take on the trappings of the big boys.

Today there are gazillions of sites out there that will do anything from storing files to reformatting reports while operating with almost zero intentional application security. And it's not just small or medium sized businesses. Even the big players are, at the end of the day, only subject to restrictions on who they can share data with. Having mostly evolved from small start-up operations, they take an understandably minimalist approach to information security. Application security - in stark contrast to privacy - is basically a good faith effort.

The kerfuffle around Google's recent StreetView wifi snafu is a good example of the priority of privacy over security. Apparently Google's StreetView had collected information from open wifi networks. Google has attracted a great deal of negative press and is facing numerous investigations in Germany and elsewhere for this. At the same time, the numerous security vulnerabilities that are frequently exposed at most large companies hardly register on the legal radar screen, and are certainly not something that will get investigated. You don't trigger an EU investigation by having too many bugs to patch in a given release or by using a vulnerable version of PHP.

The More Social, The Less Secure

That's not to say that security and privacy are totally unrelated in social networks. If Facebook wants to build a platform that others can plug into, it necessarily opens the application to vulnerabilities. A very good example of this is the recent hiccup with Yelp. Facebook's Instant Personalization exposed users to a cross-site-scripting vulnerability on Yelp that could harvest user data. Without the Yelp integration, this vulnerability would never have happened.
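
The Yelp bug itself was not published in detail, but the vulnerability class is easy to sketch (hypothetical code, not Yelp's): a page that echoes user input unescaped lets injected script run with the page's access to visitor data:

    from flask import Flask, request
    from markupsafe import escape

    app = Flask(__name__)

    @app.route("/search")
    def search():
        q = request.args.get("q", "")
        # Vulnerable: ?q=<script>...</script> executes in the visitor's browser.
        return "<h1>Results for " + q + "</h1>"
        # The one-line fix: return "<h1>Results for " + str(escape(q)) + "</h1>"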

When it comes to social networks, there is no free security lunch. Collaborative services are by definition less secure. Facebook is meant to be collaborative and thus can never offer the same level of security as a more gated service. This is the same reason that Times Square cannot be secured in the same way an airport can. One is meant to be open and one is meant to be closed, or at least controlled. Although hundreds of security vendors may try to secure web 2.0 applications, robust security and social collaboration are ultimately opposing aims.

Of course there are numerous technological standards being built precisely to secure the web 2.0 world. The move from Basic authentication to OAuth (a transition that Twitter will be enforcing next month) is a good example. But from a business perspective, the raison d'être of most social applications is the collaboration, not the security. When web services communicate, there is generally only one level of authentication required to reach the crown jewels. Unlike traditional applications, most web services require only one screw-up or misconfiguration to expose your data.
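
The contrast is easy to see in code. With Basic authentication the raw password rides along on every request; with OAuth 1.0a the request carries a revocable token plus a per-request HMAC signature, so the password itself never crosses the wire. A simplified sketch (real OAuth percent-encoding has more edge cases):

    import base64, hashlib, hmac, urllib.parse

    def basic_auth_header(user, password):
        # The secret itself is sent, merely base64-encoded, every time.
        return "Basic " + base64.b64encode((user + ":" + password).encode()).decode()

    def oauth1_signature(method, url, params, consumer_secret, token_secret):
        # Sign a canonical string; only the signature travels, not the secrets.
        encoded = urllib.parse.urlencode(sorted(params.items()))
        base = "&".join(urllib.parse.quote(p, safe="") for p in (method, url, encoded))
        key = (consumer_secret + "&" + token_secret).encode()
        return base64.b64encode(hmac.new(key, base.encode(), hashlib.sha1).digest()).decode()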

Social Media vs the Enterprise

This single-line-of-defense approach that most web 2.0 applications take is sufficient for most individuals' personal data. Your Facebook settings may keep photos of you drinking at a keg party away from your work colleagues, but their accidental exposure is not such a big deal and is a risk most users are willing to assume. In our personal lives, there is no breach notification, limited personal financial liability, and a much lower expectation of due care. Even the recent mess-up exposing your friends' private chats on Facebook is probably met with a stuff-happens shrug by everyone not directly affected.

But enterprises should - in theory - be more cautious. Breaches can carry real costs, financial liability is more substantial, and there are often contractual obligations to provide security. Indeed while there might be no affirmative legal obligation for application providers to provide security, customers may very well be prevented from using their services if there is insufficient security. After all, most enterprises that deal in personal data are under some form of contractual obligation to reasonably protect that data.

Will this raise the bar for small providers of cloud services? Unlikely. As I have frequently ranted about in the past, in non-regulated industries these obligations usually refer to some dinosaur-like provisions about SSL and biometric readers at server room entrances. Unless a company is truly conscientious about applying the meaning of due and reasonable care, there are practically no legal or contractual security requirements to which web applications are subject. (This is of course not true of heavily regulated industries like finance, but most corporations are operating in much more loosely regulated spaces).

Many enterprises are driving full steam ahead with the integration of minimally secured third party web applications into the enterprise. The transition from walled-off silo to full member of the application ecosystem is well underway. We may not be in a Jericho-Forum world of perimeterless utopia, but we are in the awkward teenage state of integrating our IT environment with hundreds of smaller companies - through APIs, SDKs, and the like.

Much like real life, digital hookups put enterprises at risk - you no longer have to worry just about who you are with, but everyone they have been with and will be with in the future. And just like real life, the process of evaluating the safety of a potential application hookup is largely heuristic. With an increasingly promiscuous digital environment, we lose the ability to do a full battery of tests on each potential partner (and belated apologies for the lame analogy).

For individuals, the risks of collaborative web services are far outweighed by the benefits. That's the reason that Facebook has 400 million users and why thousands of popular applications with only the thinnest veneer of authentication thrive. This risk calculus will also hold for many enterprises, both large and small. For more security heavy environments, however, fundamental changes will be needed. For some environments it will take a lot more than device authentication to make today's handy web applications ready for enterprise prime time.

Tuesday, April 27, 2010

Application Security Underfunded

Imperva and WhiteHat just came out with a report on security spending and resource allocation (registration required). This report is a must-read for anyone who is in charge of security budgets.

The basic gist of the report is that application security is not getting its rightful share of the security spending pie. This is perhaps an unsurprising conclusion for a study sponsored by web application security vendors, but the real mystery is why the wider security industry is not talking more about this undeniable and perplexing spending imbalance. Simply put, most threats are web based, but most security budgets are not. Why?

Here are a few reasons I can think of for the spending imbalance:
  1. Decision makers are unaware of the relative risks.
  2. Inertia.
  3. Legal and regulatory requirements overlook web app security.
  4. The perception that web application security cannot be solved by throwing money or resources at it.

All these factors feed into one another, but there is one other factor at play that is internal to the security industry itself. By and large, the same security standards have traditionally been applied to an incredibly broad swath of companies. Rather than raising the standard for everyone, this approach has had the de facto effect of exempting certain companies from what they perceive to be irrelevant requirements. This in turn drags the entire market down to the lowest common denominator. By using the same hammer to hit all nails, the security industry has inadvertently generated a "security race to the bottom".

One Size Fits None


Some companies operate in highly regulated and highly sensitive environments where security is not up for debate. Let's call this the Fort Knox zone. In the Fort Knox zone, web application security is governed by detailed SLAs on remediating vulnerabilities and applying secure development processes. In this zone, the security of the web application is considered an inherent part of the finished product or service. Everybody thinks and breathes security. These are the big banks and the three letter agencies amongst others.

Then there's the Pragmatic zone, where security matters, but where business decisions are constantly being made to balance security against price, convenience, and functionality. Most businesses fit in the Pragmatic zone even though they might deal with sensitive data. Online health records is one example. For most people, the risk that a random hacker might find out their medical allergies pales in comparison to the risk that in an emergency a doctor might be unaware of those allergies. In the Pragmatic zone, security takes a back seat to functionality, but basic security remains highly desirable.

Finally there is the Whatever zone - a place where basically everything you use is at your own risk. This is the guy who runs a cool web service from his parents' basement that allows you to see when you and your buddies are both within stumbling distance of the same pub. In the Whatever zone, there is no guarantee - and often no mention - of security. It's not that security is trumped by other considerations. It simply was never really a consideration to begin with. And if you don't like it, don't use it.

The Failed Quest for the Esperanto of Security

Today's security industry speaks largely in the language of the Fort Knox zone. "Critical" and "severe" vulnerabilities are presented as something that must be fixed in as short a time as possible. But most businesses are actually graduates of the Whatever zone that today sit in the Pragmatic zone. The shrill tone of vulnerability disclosures, coupled with their frequently monolithic approach, produces tone-deaf customers and businesses.

In other words, the real problem is not that there are so many insecure apps out there, but rather that as an industry we set a bar that is both unattainable and inappropriate for many applications. Consider the very recently published OWASP Top 10 web application security risks. Many companies and many security folks view this list as an all-or-nothing proposition (although OWASP makes clear that it isn't). There is no inherent reason that all web applications need to be immune to all these threats. It just takes too much effort with far too little return. And this isn't even counting the opportunity cost of fixing security vulnerabilities.

The specific metrics of the WhiteHat-Imperva report underscore why the absolute approach does not work. Take for example the 38% of respondents who believe that 20 hours of developer time are needed to fix a vulnerability. Regardless of whether this figure is perception or reality, there is no way that a small operation is going to budget 20 hours to fix a seemingly obscure vulnerability when that time could be used to fix a visible bug or build a new feature. The return for spending lots of extra money to truly lock down most apps is just not there - not in the customer recognition, not in improved regulatory compliance, and often not even in a reduction in damaging security incidents (or at least not in a way that is readily measurable for organizations with limited resources).

So it shouldn't really surprise us that vulnerabilities aren't getting fixed. In most companies if the website doesn’t actually work there is hell to pay. But if there is an unfixed vulnerability almost no one knows or cares.

The User Has Spoken (while logging in over http)

This user indifference runs deep. I never cease to be amazed by the number of early and even mid-stage start-ups that don’t have login over https. From a security, or even a marketing, perspective secure login seems like a no-brainer – certificates are relatively easy to install and it is one of the few – perhaps the only – security mechanisms that almost every single end user is on the lookout for. So it is very telling that many start-ups do not consider it worth even the slight pain-in-the-ass that using certificates introduces.
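
And the mechanics really are modest. A minimal sketch of serving pages over TLS with the Python standard library, where server.crt and server.key stand in for whatever certificate you have installed:

    import http.server, ssl

    httpd = http.server.HTTPServer(("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # your installed cert
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()  # logins now travel encrypted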

As security professionals this may seem jarring, but these start-ups know their business better than anyone else. They have figured out a well known secret of today's Internet -

Much of the Internet is pretty much useless if you follow security rules.

OK, that’s a bit harsh. But conventional security wisdom does not jibe with having fun or even getting things done online. There are just too many things you miss out on online if you actually abide by all the security rules that the purists preach. (Here's a simple list for starters - storing passwords on your iPhone, storing passwords in your browser, giving up your passwords for application integration, simultaneously logging on to numerous applications, and the list goes on).

So as an end user, you basically have a choice – seriously handicap your use of the Internet, or take your chances with a half-hearted that's-what-everyone-does attempt to minimize risk (aka anti-virus). The vast majority of end-users have opted for the latter. Or put differently, end users are fundamentally happy with the Whatever zone of security.

The low bar from home to enterprise

Most companies start operations in much the same way - in the Whatever zone of security. They need to push something out fast and get to market with the bare minimum of features. And the barely-working mentality applies to security just as much as anything else.

It's here that the seeds of the specific spending disparity described in the Imperva-WhiteHat study are first sown. Application security comes with real project-risk costs. This is in stark contrast to network security – you can secure your network layer fairly easily without risking screwing up your app. Compare the pain of setting up a WAF with the relative ease of setting up a firewall. When a small company needs to choose how to answer the security checkbox that most customers will never look beyond, the choice is clear. And so the imbalance is born - Network Security 1, Application Security 0.
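
The asymmetry can be caricatured in a few lines: a firewall decision is a cheap tuple match that knows nothing about your application, while a WAF must parse and judge application traffic, and every rule it applies risks breaking legitimate requests (a toy sketch, not any product's actual logic):

    import re

    def firewall_allows(port):
        # Network layer: a simple match, with no knowledge of the app.
        return port in (80, 443)

    SQLI = re.compile(r"('|--|\bunion\b|\bselect\b)", re.IGNORECASE)

    def waf_allows(query_string):
        # Application layer: every rule is a judgment call. Block too eagerly
        # and a legitimate search for "union contracts" starts failing.
        return not SQLI.search(query_string)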

Of course start-ups start using the services of other start-ups, and before you know it you have a growing company within a relatively large enterprise ecosystem where everyone is using consumer-grade security without real threat analysis. It is this transition to the enterprise level where, in theory, the threat analysis should mature and security measures should be fundamentally reassessed. By that point though, the ship has gotten big and bulky and reversing course is no longer easy.

And so, as companies go from the Whatever zone to the Pragmatic zone, they often sweep app sec issues under the rug and hope (often correctly) that no one is going to notice or care. Today, too many enterprises are treating web application vulnerabilities as if they were still in the Whatever zone - and then if someone asks about security, they can proudly answer glad-you-asked, look-at-our-firewall. The details of the Imperva-WhiteHat report (and if you have made it this far in the post, you should really read the full report) reflect this - most security professionals report an internal culture that is either cavalier or helpless about web application vulnerabilities.

How is this going to change? If recent history is any guide, regulations and contracts will either break or reinforce the current security spending imbalance. The current trend is towards the latter. At the risk of sounding like a broken record, I’ll mention again that even relatively recent pieces of legislation and standards (PCI, the Massachusetts data security regulation, and for that matter most RFPs) completely gloss over application security. For reasons that I don't fully understand, the PCI Prioritized Approach puts most network security issues ahead of application security issues. And now Washington State has adopted a PCI-based law. This certainly doesn't bode well for correcting security spending imbalances any time soon.

Sunday, January 31, 2010

Security Scoreboard is Live!

I am very excited to announce the launch this week of Security Scoreboard - an online resource for researching and reviewing information security vendors. Security Scoreboard features over 600 vendors and aims to become a valuable resource for CISOs, CIOs, system administrators, and anyone who is in the market for information security products and services.

Why Security Scoreboard? As an information security executive at a mid-size company in New York City, I constantly face the challenge of trying to quickly identify and assess the security vendors who offer solutions in a given space. While there is a ton of available vendor content - webinars, press releases, whitepapers, etc - I have always found one vital resource to be missing in the purchase process. Until now there hasn't been one convenient and objective place to see side-by-side profiles of all the vendors addressing a specific security challenge. Security Scoreboard fills this gap by providing a starting point to get oriented about all available solutions before doing a deep dive on those vendors that seem the most promising.

CENTRALIZING RELEVANT INFORMATION

Let's take the example of a security manager looking for an enterprise privileged access management solution. Searching online will lead him or her to some of the larger players. But finding a comprehensive online list of all the players in this field is surprisingly hard, and involves plowing through an overwhelming amount of irrelevant information.

And once the main players are identified, finding basic objective information on each vendor can be tough. The average vendor publishes numerous press releases that then get picked up and replicated on multiple other sites. Independent opinions and information about the vendor get buried at the bottom of online search results.

That's where Security Scoreboard fits in the picture. Security Scoreboard is meant as a time saver - a quick way for security consumers to orient themselves and separate the wheat from the informational chaff.

This is especially useful for buyers in the SMB market. One of the things that strikes me every time I go to conferences like RSA is the sheer number of security vendors with unique and innovative approaches to various security challenges. At the same time, there has been a distinct lack of freely available resources for researching these vendors. Potential buyers lack an easy way to put a vendor in context, research its competitors, and objectively assess whether a vendor's solution will work for them.

The information imbalance is less of a problem for security pros in larger companies that have access to analyst and other services to research the competing claims of a large number of vendors. But security managers and CISOs at smaller companies usually do not have extensive access to such services. For them, even identifying who the main players are in a space can be a time consuming process. Security Scoreboard is of particular use for these often-overlooked consumers of IT security.

CUSTOMER REVIEWS

Security Scoreboard provides more than just a comprehensive directory of security vendors. Users can also leave reviews describing their good or bad experiences with security vendors they have worked with. Of course, like any public review site, there is no reliable way to verify that online reviews correspond to actual customers. But security pros by nature are sophisticated enough to process information accordingly and to spot obvious attempts to game the system.

RESOURCES FOR VENDORS

Most information security vendors will find their company listed with its own company page on Security Scoreboard. Vendors who are not yet listed can submit a request and will be added if they meet the basic requirements (a focus on information security and an active market presence). There is also a form vendors can fill out to get free monthly Google Analytics reports showing the search terms that are leading users to their company page.

Security Scoreboard will also be offering vendors the ability to convert their page to a "premium page" and expand on the very short summary currently in place for each company. This will give vendors who want to spruce up their company page an opportunity to get their message across next to the user reviews and other links related to their company. In keeping with the transparency that is at the heart of Security Scoreboard, this structure will be clearly described on the website.

To celebrate our launch, Security Scoreboard will be sponsoring some great prizes at the Security Podcasters and Bloggers Meetup at ShmooCon this coming weekend in DC. Come by and say hello if you are going to be at the event.

Monday, November 9, 2009

Mass Security Regulation Gets Tech Priorities Wrong

The final version of a sweeping new data security regulation in Massachusetts was published last week. Some parts look pretty good. But some parts look like they are straight out of 1999.

Let's start with a bit of history, for the benefit of the 99.9999% of the population that does not spend its time following obscure state-level data security regulations. The Massachusetts regulation, known as 201 CMR 17.00, was introduced a couple of years ago to address a spate of breaches of personally identifiable information. The business community balked but the regulation survived. Since then it has undergone numerous revisions to address concerns that it imposes an undue burden on businesses.

The regulation has some fairly standard and common-sense requirements on the policy and procedural side. But it is on the technical level that the latest - and supposedly final - version of the regulation sounds woefully out of date. Reading through the text gives an awkward time-warp feeling, like a newly published technical manual talking about dial-up modems and floppy disks.

That 90s feeling starts with the title of the technical section - "Computer System Requirements". Hmmm... what about all the iPhones and netbooks and whatnot floating around the enterprise? And more critically, while securing computers is important, isn't securing servers more important? A more inclusive title like "IT Systems Requirements" would definitely have made more sense.

So what are these "computer system" requirements anyhow? The only purely technical requirements in the regulation talk about anti-virus software, operating system security patches, firewalls, and encryption. If you are having bad flashbacks to the CISSP you took a decade ago, that's probably not a coincidence. Those are all important issues, but are they really crucial to most technical data breaches in 2009? What about secure configurations? What about securing web applications and secure code development?

So it seems that the security-apparatchik mentality of anti-virus programs, patches, firewalls, and encryption is unfortunately alive and well in the legislative branch. And of course those measures might be the best way to secure a home computer. But they simply do not reflect the reality that most enterprise data breaches that are not a result of stupidity occur as a result of insecure configurations and applications. Up-to-date virus definitions are usually neither here nor there.

Most large companies already know this. They have an internal risk function in place that prevents them from overspending on anachronistic security measures, except when required to do so by outdated regulations like 201 CMR 17.00. But for small and medium sized businesses – including small shops that manage millions of sensitive records – regulations like 201 CMR 17.00 will drive security spending priorities. These companies are inadvertently being misled into believing that securing their environment means buying an anti-virus program and setting up auto-update.

The truth is of course very different. Installing anti-virus software is easy, but actually locking down an environment is incredibly difficult for smaller companies. That is because it requires reconfiguring other applications that no one in the organization really understands. It requires fiddling around with Unix and database permissions and PHP users in systems that no one normally touches. At the end of the day, it is hard to secure systems you do not understand, and most smaller companies do not understand the systems they run internally.
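
To make "locking down" concrete: even the simplest audit - say, hunting for world-writable files under a web root on a Unix host - is exactly the kind of grunt work the regulation never mentions (a small sketch; /var/www is an assumed path):

    import os, stat

    def world_writable(root="/var/www"):
        # Walk the tree and flag anything any local user could modify.
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path).st_mode
                except OSError:
                    continue
                if mode & stat.S_IWOTH:
                    yield path

    for path in world_writable():
        print("world-writable:", path)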

The legislation does not even begin to allude to this. From an actual risk perspective, you are better off with an out-of-date anti-virus program and a really locked down internal environment than the other way around. You are also (sometimes) better off running an unpatched operating system with few services running than a patched one with a gazillion plugins and other third party components. Whoever wrote 201 CMR 17.00 can't be expected to know this, which is why when the law gets technical it just regurgitates some old security one liners that are found in CISSP prep courses.

Interestingly, even the weak and outdated technical requirements come with a get-out-of-compliance-free clause. The technical requirements are all prefaced with the bizarre exemption “to the extent technically feasible”. As we speak, they are building some sort of black hole I understand nothing about underneath the French-Swiss border. So of course turning on a firewall or encrypting some data is “technically feasible”. I am not a lawyer, but I cannot see how anyone could argue that any of the requirements listed in 201 CMR 17.00 are not technically feasible. There is a very ill-defined exemption at play here that will make it difficult for companies to understand what the regulation actually requires of them.

It is a shame that the poorly written technical portion of 201 CMR 17.00 detracts from what is otherwise a well written regulation. The sections on policy, training, and contractual language are important and will prompt some companies to get their data security house in order. It is only when the regulation tries to get even vaguely technical that it falters. I do not know whether "final version" really means "final version" this time around. But if there is room for one more revision before the March 1st compliance deadline, a few words on secure configurations and applications would go a long way to improving the regulation.