Friday, May 29, 2009

Maine Gives 7 Days for Breach Notification

Maine is tightening the screws on its data breach law. Breaches will need to be reported within 7 business days unless the authorities request otherwise. The bill, signed into law by the governor last week, goes into effect in 90 days.

Maine is pretty much going it alone in taking this step. The vast majority of the 44-odd states with data breach notification laws let companies decide what timing makes sense. Here's what most of them have to say:

The disclosure must be made without unreasonable delay, consistent with the legitimate needs of law enforcement… or consistent with any measures necessary to determine the scope of the breach and restore the reasonable integrity of the data system.

As far as I can tell, the only other state that defines a notification deadline is Florida (if anyone knows of other states, please let me know). It gives 45 days after the discovery of the incident and - unlike Maine - has stiff financial penalties for delayed notification.

The Maine and Florida laws might end up getting swallowed up if a pending federal data breach notification law is passed. It would pre-empt these state laws and would give no deadline for notification; the proposed federal law largely mirrors the prevailing state-law language of avoiding "unreasonable delay". As this legislation is still under consideration and likely to change, now is a good time for policy makers at both the state and federal level to ponder whether breach notification laws should set hard deadlines.

Data Breaches and Reasonability

So who gets to decide what counts as a reasonable delay in notifying?
Getting notified in time obviously matters to consumers. The impact of identity theft is limited if consumers get the heads-up in time to place a security freeze on their credit reports. (Security freezes are available in most states and make access to credit reports much more difficult.)

Deadlines aren't the only place that data breach laws refer to reasonability. Many states only require notification if there is a “reasonable” likelihood of identity theft resulting from the breach. I have written before about the way this has the ironic effect of punishing honest businesses with strong IT management. In borderline breach cases they are much more likely to notify than to make a questionable determination that there is no "reasonable" risk of identity theft.

Companies that don't notify never really get called out on it. Most states still do not require that breaches which fall short of triggering a notification be reported to the Attorney General or another state entity. Which of course makes it much easier to sweep repeated data breaches under the proverbial rug.

A judge recently ruled in favor of Hannaford in the lawsuit that data breach victims had brought against the supermarket chain in Maine. The judge cited the lack of a strict notification deadline, which may have prompted legislators to act. However, the judge also cited the lack of a reasonable risk of identity theft in declining to award damages.

Identity theft is such a nebulous concept that it is very hard to measure whether a reasonable risk exists or not. This is part of the reason that some state laws presume a reasonable risk to exist by virtue of the fact that certain personally identifiable information (PII) has been leaked. The one exemption that all states grant is for encrypted data, which has spawned an entire industry of full disk encryption products. But interestingly, the encryption the law talks about is very different from the encryption the vendors talk about.

Encryption and the Get-Out-Of-Notifying-Free Card

Security folks think of encryption in terms of DES, AES, RSA, and other algorithms that use symmetric or public/private keys to encrypt data. Various algorithms have come in and out of fashion due to their vulnerability to mathematical attacks like differential cryptanalysis or real-world attacks like differential power analysis.

Now let’s gently exit the world of cryptographers and enter the legal world. Most state laws don't define encryption at all, but when they do it looks something like this:

"Encrypted" means transformation of data through the use of an algorithmic process into a form in which there is a low probability of assigning meaning without use of a confidential process or key or securing the information by another method that renders the data elements unreadable or unusable.

But where's the key? The minimum 128 bits? The ban on single DES? It turns out that for legal purposes, encryption requires only some form of obfuscation. It doesn't even need to involve a key, or much computing power. You just need to make sure that the way you obfuscate and then de-obfuscate is confidential.
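
To see just how low the legal bar sits, here is a minimal sketch in Python (my own hypothetical example, not anything drawn from the statutes): a keyless ROT13-plus-Base64 transform that arguably fits the quoted definition - an "algorithmic process" yielding a "low probability of assigning meaning" for a casual finder - while offering no real cryptographic protection at all.

```python
import codecs
from base64 import b64encode, b64decode

# A deliberately weak, keyless "confidential process": ROT13 followed
# by Base64. It transforms data via an algorithmic process into a form
# that is unreadable at a glance -- yet any security practitioner
# would reverse it in seconds. (Illustrative only, not legal advice.)
def obfuscate(text: str) -> str:
    return b64encode(codecs.encode(text, "rot13").encode()).decode()

def deobfuscate(blob: str) -> str:
    return codecs.decode(b64decode(blob).decode(), "rot13")

secret = "SSN: 123-45-6789"
blob = obfuscate(secret)
print(blob)                         # gibberish to whoever finds the USB stick
assert deobfuscate(blob) == secret  # trivially reversible if you know the process
```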

So who's right? Should a lost USB stick with personal data encrypted by a simple, vulnerable encryption algorithm (say, single DES) require notification? The purist/cryptographer answer would be yes. Does it require notification from a legal perspective? A lawyer would probably say no [although I am by no means a lawyer].

This time I think the lawyers are right. The risk of identity theft from personal information on lost media is already very small; after all, the person who finds a lost laptop, USB stick, or mobile phone is very unlikely to be interested in the data. Now suppose that data is encrypted in some light but ultimately breakable way. The likelihood of actual identity theft drops down to almost nil. What are the chances that the guy who found your iPhone on the subway is both interested in your data and capable of decrypting DES?

That's not to say of course that there isn't data that merits industrial strength encryption, especially when placed on a portable device. But for the purposes of breach notification in the case of loss, sometimes we do really need to keep in mind what is reasonable.



Tuesday, May 26, 2009

Botnets and Security Hype

A couple of weeks ago a team at the University of California Santa Barbara managed to take over a botnet for ten days. Their fascinating and well-written analysis is well worth reading for an objective and first-hand look at how a botnet really operates.

So how did they do it? A botnet is just the overly sci-fi name for a bunch of computers that are controlled by a central command-and-control structure. The number one challenge for botnet operators is hiding their command-and-control servers to avoid being taken down (the chances of actually being arrested are pretty close to nil). The Torpig botnet uses an increasingly popular technique in which client machines try dialing into a set of pre-determined domain names and accept the first server to respond as the botmaster.
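
To make the rendezvous scheme concrete, here is a simplified sketch in Python (my own illustration; the real Torpig algorithm differs in its details): bots and botmaster independently derive the same ordered list of candidate domains from the current date, so whoever controls the first responsive domain on the list speaks with the botmaster's voice.

```python
import hashlib
from datetime import date

def candidate_domains(day: date, count: int = 5) -> list[str]:
    # Deterministically derive the day's ordered rendezvous domains by
    # hashing the date plus an index. Every bot computes the same list.
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{day.isoformat()}#{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

# Each bot tries these domains in order and accepts the first server
# that answers as its command-and-control host. If the botmaster hasn't
# registered a domain near the top of the list, whoever does -- in this
# case the researchers -- inherits the botnet.
print(candidate_domains(date(2009, 5, 26)))
```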

This is where the UCSB researchers moved in - they took over the Torpig botnet by sneakily claiming the domain name that was next in line to be the command-and-control server. The botmasters behind Torpig had not claimed all the domain names that their victims were meant to dial into, either to save money or because they didn't see this coming. In any case, the UCSB team found itself in control of a botnet with hundreds of thousands of hosts.

Don't try this at home. The researchers cooperated with law enforcement and other entities to avoid legal problems. This appears to have helped them steer clear of the hot water the BBC found itself in a few weeks ago for actually purchasing a botnet from criminals.


Botnets and the Hype Cycle


You've probably heard botnets talked about on the evening news. Botnets are a particularly successfully marketed part of the information security industry's FUD cycle.

But how bad is the botnet problem in reality? Not as bad as previously thought, according to the UCSB team. Previous studies have counted IP addresses rather than actual hosts when estimating the size of a botnet. Getting from IP addresses to actual machines is tough - DHCP leads to overcounting, NAT to undercounting, and there are many other factors at play. In the botnet the UCSB team analyzed, they counted 182,900 hosts versus 1,247,642 IP addresses, and there is evidence that IP address counts generally overstate the number of actual machines.
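
A toy example of why the two counts diverge (with my own made-up numbers, not the paper's): if each bot reports a persistent unique identifier, as Torpig bots did, you can deduplicate by machine rather than by address.

```python
# Hypothetical (bot_id, source_ip) sightings logged at the
# command-and-control server over a few days.
observations = [
    ("bot-a", "10.0.0.1"),
    ("bot-a", "10.0.0.7"),   # DHCP lease change: one host, a second IP
    ("bot-a", "10.0.0.9"),   # ...and a third IP the next day
    ("bot-b", "192.0.2.5"),
    ("bot-c", "192.0.2.5"),  # NAT: two hosts sharing one public IP
]

unique_ips = {ip for _, ip in observations}
unique_hosts = {bot_id for bot_id, _ in observations}

print(len(unique_ips), "IP addresses")    # 4 -- what naive studies count
print(len(unique_hosts), "actual hosts")  # 3 -- the real population
```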

But in many security reports IP addresses and computers are treated synonymously - the latest McAfee report actually contains the sentence "In this quarter we detected nearly twelve million new IP addresses, computers under the control of spammers and others". Arrghhhh...

Coverage of the UCSB work in the mainstream media did not mention the overcounting. "Botnets smaller problem than originally thought" doesn't make much of a headline...

So I'm part of a botnet, so what?

Good question. Theoretically, a botmaster could read your email and abuse your other accounts to their heart's content. In fact, the UCSB researchers performed a keyword analysis of their victims' emails (not sure how they got the legal clearance to do that...). But they are probably the only ones who bothered reading those emails. Botmasters want control of computers to make money, not to read about your date last Saturday. When someone breaks into your house they steal your valuables, not your diary.

Most online accounts and credit cards do not hold their users liable for fraudulent charges. In this way botnets operate a lot like insurance fraud or old-school credit card fraud. They are an annoyance that creates an indirect cost for everyone, but a cost that is sufficiently low that people are willing to bear it. We live in a society where people want to be able to use a 16-digit number they have given out hundreds of times to pay for stuff. If that means that everything costs 1% more to deal with fraud, so be it.

Brian Krebs (who should be on your reading list if he isn't already) posted a piece today about the dangers of allowing your PC to be compromised. Reading through his list of spam, click-through fraud, DoS attacks, and the like, I couldn't shake the feeling that botnets are dangerous for society - yes; dangerous for the individual user - not really. As for the more nefarious password-stealing stuff, there is little to no evidence so far that botmasters use stolen credentials for anything more personal than bulk, automated fraud. This isn't great for society, but it isn't something the average user is going to care about.

Seems like just the kind of situation that calls for Uncle Sam (or Uncle Barroso)...

Laying Down the Law

The UCSB authors fault registrars for not being sufficiently responsive to requests for taking down botnets. While ISP responsibility for content and traffic is a tricky political issue, the content industry has been very successful in forcing ISP accountability for peer-to-peer traffic on their networks. Of course, the content industry has a bunch of well-paid folks in Washington, Brussels, and other corridors of power pushing its agenda. Botnets do not directly affect an entire industry's bottom line, and so there is no lobbying effort to move responsibility from the client to the registrars and ISPs.

This could change significantly if the national security angle of botnets takes flight. The apparent role of botnets in Internet disruptions during the Russia-Georgia conflict last year, allegations of Chinese cyber-espionage, and frequent stories in the press about the vulnerability of critical infrastructure have attracted the attention of US policy makers. There are even signs that countries like China - long considered a safe haven for hackers - are taking regulatory steps to address botnets.

Regulatory measures will not completely address the botnet issue, but they could significantly change the risk/time-invested/reward ratio. Botnets take a high degree of technical expertise to set up and are of only limited value. A tighter regulatory regime could significantly reduce the incentive for botmasters.

User Education

You often hear about user education in botnet/information security stories, where all too often it is vendor-ese for indoctrinating users to buy security products. But the UCSB researchers - who have done a great piece of research and aren't selling anything - also focus on user education as a solution to the botnet issue. Their statement that the "malware problem is fundamentally a cultural problem" places the onus for preventing complex and sophisticated criminal activity on the people least capable of preventing it.

It would be nice if all users were capable of being system administrators. For enterprise users, it is fair to expect a minimal level of technical skill. But the truth is that the technical measures a home user needs to take to secure his or her computer are simply beyond the grasp of a significant portion of Internet users. The stuff you can educate home users about - choosing better passwords, not recycling passwords, and so on - is not going to make a real dent in the botnet problem.

Thursday, May 21, 2009

Massachusetts Backtracking on Data Security Legislation

If you haven't heard of the Massachusetts data security law, you probably don't deal with too many security vendors. My inbox is cluttered with invitations to vendor-sponsored webinars warning of the dire consequences of this law. This "game changing" regulation supposedly requires companies to "fundamentally re-assess how you secure your assets".

Of course this isn't true. There was no need to panic before. And now there's really no need to panic, because the Massachusetts legislature may be watering the law down much further. A proposed state senate bill, SB 173, takes almost all the oomph out of the original legislation:

- it removes the encryption requirement in favor of technological neutrality

- it defers to (much weaker) federal law when relevant

- it basically gives a free pass to smaller companies

I don't know what the status of this bill is, although it seems like there is a general consensus that the original law will be watered down one way or the other.

So if you just went out and bought a bunch of fancy encryption gear or log readers or other stuff, you might want to check the return policy. Those might be great things to have, but they are probably totally irrelevant to being in compliance with state and federal laws. There is this bizarre consensus that spending money is more important than re-engineering processes in securing data, when in fact the exact opposite is true.

In case you missed this, let me say it one more time - there is absolutely no need to buy anything as a result of the Massachusetts legislation. Not for big companies. Definitely not for small or medium-sized companies. In fact, for companies with limited staff, buying stuff will probably do more harm than good. You would be much better off locking down the configurations and enabling the security features of your existing big-vendor stuff (your AD, Exchange Server, Oracle, and the like) than starting to learn how to use new toys.

This isn't supposed to be a rant against security vendors. But there has been a great deal of misinformation (to put it diplomatically) surrounding these regulations. The you-should-buy-product-X-because-of-the-new-Massachusetts-data-law argument was absurd to begin with and is even more absurd now that the legislation is on life support.

The problem in the security space is that there is no real counter-voice to the fallacy that you can or should buy compliance. The vendors have an obvious interest in hyping the laws. The analysts stoke the fire. Technical security types can't be bothered to read through a bunch of regulations, so they reluctantly drink the vendor Kool-Aid. And everybody else doesn't care, because information security legislation - with all due respect to our industry - is among the least important issues being discussed in the United States right now.

Security and the Small Business

There's one part of data security legislation that I find a bit perplexing - the small business exemption. This basically says that whatever security measures you need to take are relative to the size and complexity of your business. It is a central part of the Massachusetts legislation as well as most other similar regs I have seen.

Now, I get that small businesses require protection from overbearing regulations and legislation. But you can't run a nuclear power plant with a team of 10 people (well, at least I hope there's no one out there running a nuclear power plant with just 10 people). Is there a minimum number of people you need to provide adequate data security?

The answer is probably not, as long as you have outsourced both your operations and their security. Really huge, famous companies can be shockingly small. In one of the articles about the Craigslist/South Carolina AG feud going on these days, one detail really jumped out at me: Craigslist has only 30 employees. To me it's mind-blowing that a company with one of the most popular websites in the world and one of the world's leading brands is run by fewer people than were in my subway car this morning. Yet I haven't heard anyone argue that Craigslist is unable to provide sufficient security, or that they should be given a break - or need to be given a break - on their data security.

But not every company is Craigslist. Securely operating a vast, complex database with a lot of personal information requires either a lot of money or a minimum number of people. Most small businesses don't have the money and will always be under enormous operational pressure to pull staff away from security.

Friday, May 15, 2009

Do Companies Need a CISO?

Is there a future for the security executive? It's hard for me to be completely objective about this because I have some skin in the game. No one wants to wake up one day and realize that their job is going the way of the elevator operator.

Chief Information Security Officers (or whatever title they give the executive-level security person) started to really take off in the early 2000s. Usually working with a small staff and a small budget, the CISO is meant to drive all information security functions within a company and effect change across the enterprise. Especially after 9/11, the creation of a CISO office became de rigueur for companies eager to demonstrate their commitment to security and public safety.

But lately I get the feeling that the role of CISO is in decline. This may seem heretical coming from a security executive, but I believe that the information security risks enterprises face have been exaggerated and misunderstood. The security industry is itself in large part to blame. The industry overhyped threats and demanded too much time and money to mitigate risk. Companies went along by buying expensive security equipment and hiring lots of security staff. But now some companies are starting to wonder -

Do We Really Need A CISO?

A company usually hires a CISO when they believe that two conditions are met - (1) security is a uniquely pressing and urgent need within the organization, and (2) a dedicated office and executive is the best way to adequately address the security issue. 

But does every company actually need a CISO? Are both (1) and (2) true of every company? The sneaky "uniquely" in (1) is a hint of my personal take on this - no for (1), and well... no for (2). A minimum level of security is of course always necessary, as are functioning toilets, basic physical security, workplace diversity, and a hundred other issues that do not have their own dedicated teams. But a unique need more important than anything else? Perhaps for a bank or a hospital, but not for a widget maker.

But wait! What about "the competitive advantage of security" and "the ROI of security"? At the risk of bursting some bubbles, security does not necessarily offer either competitive advantage or ROI to many businesses, even big businesses. And even when it does, a CISO is often unnecessary in an enterprise with low security requirements. Security responsibility can be assigned as just another task to a CIO or other executive.

Which brings me to a phrase I coined a while back (or will at least take the credit for coining) - the security narrative. A company's security narrative is the overall story of how it handles security - basically the kind of information you would give the CEO of a potential customer if they asked what your company does for security. A CISO's job is to own the overall security narrative in an organization.

Whether your company needs a CISO is essentially a question of whether your company needs a full-time executive to own and manage its security narrative. Not every company has a Chief Privacy Officer, a Chief Continuity Officer, or a Chief Blogging Officer (yes, that one exists). But if privacy, continuity, or blogging is critical to your company, you will have that CPO, CCO, or CBO. It works the same with security. So how many companies actually do need a CISO?

500 CISOs at the Fortune 500? Je pense que non...

One of the speakers at RSA last month claimed that all Fortune 500 companies now have a CISO. This seems highly unlikely. But even if it's true, this is probably more a reflection of title inflation than anything else. If we define a CISO as someone who is responsible for managing security but who is not operationally involved, I suspect there are a substantial number of Fortune 500 companies without a CISO. An employee who spends a substantial part of their time configuring firewalls or managing the people who configure them is by definition not a CISO.

Let's take a break from the doom and gloom. Despite everything you've read so far, there are still a large number of companies that need a security narrative and need a CISO to own it. For these companies, the CISO function will become even more prominent in coming years. And these CISOs are as hard as ever to find...

It Ain't Easy Finding a Good CISO...

What makes a good CISO? In descending order of importance - 

  1. The ability to effect change.

  2. An understanding of how business processes and information interact.

  3. An understanding of the technologies used in your organization.

  4. An understanding of legal and compliance issues.

These skill sets are not in and of themselves so unique - any executive in a technology-driven company needs a bit of each one. The tough part is finding someone who has all four skills and is actually interested in information security.

Oh wait - we're not done whittling down the list of potential candidates. Security to most organizations is, and always will be, a tax. Being the custodian of this tax function will never be as sexy as selling or building or whatever it is the company does. Many a qualified potential CISO ends up getting enticed into more glamorous (and profitable) sides of the business.

Some Other Recent Thoughts On This...

Some people talk about the Chief Risk Officer as the next generation of the CISO function. I don't buy this. Everything a company does involves risk, and there's only one person who is ever going to be really responsible for managing all enterprise risk. That's the CEO.

The Verizon Business Security Blog has an interesting piece about how cloud computing is going to reduce the CISO to a custodian of vendor relationships (or "gracefully lose control"). 




Thursday, May 14, 2009

Data Accountability and Trust Act

There's a new bill brewing in the US Congress that could have a major effect on the information security industry. The Data Accountability and Trust Act would pre-empt the patchwork of state data breach notification laws with one federal law. It would also require companies to have a basic security program in place.

I have no idea what the chances are of this bill being passed into law. Following legislation is tough because there are dozens of proposed bills that never make it anywhere near being enacted. I guess that's why lobbying is a full-time job.

My somewhat uninformed opinion (from a few hundred miles north of DC) is that this particular piece of legislation probably isn't headed anywhere in a hurry. For one thing, it's been sitting around for a while; apparently the current legislation was already introduced in the previous Congress. It's also being considered alongside an even less likely candidate for passage - "The Informed P2P User Act" - which, regardless of its merits or lack thereof, seems unlikely to pass without some modifications.

So the exact details of the bizarrely named "Data Accountability and Trust Act" are not that important, because they will probably change. But this bill is one example of a general trend in regulation that will have profound consequences for the security industry. Let me start with my conclusion - I believe that enterprises are currently spending too much on security products and too little on process. And I believe that the evolving regulatory regime will shine a spotlight on this disparity. Or to put it simply - new regulation will decrease hard dollars spent on security and increase the soft cost in FTEs and organizational capital.

Let's start with the why of security spending. Forget the FUD about hackers and criminals and Russian Business Networks. Compliance is the real driver behind security spending (an assertion recently backed up by the OWASP Security Spending Benchmarks Project). A close second is the desire to actually secure the enterprise; that is, to avoid security breaches and incidents. But this too is really motivated by data breach notification laws at the state level. So directly or indirectly, compliance requirements are the driving force behind security spending.

Both these spending pillars would be undermined by the Data Accountability and Trust Act - the law favors process/policy over technology, and weakens breach notification requirements by preempting stronger state laws.

The Rise and Potential Fall of Breach Notification


A weakened and preemptive federal data breach notification law would be a real game changer for the security industry. There are already federally mandated breach requirements related to HIPAA in the stimulus package, but the effect of a generic breach law would go much farther. By clipping the wings of the much more stringent state laws, it would greatly reduce the "keep us out of the newspaper" motivation for security spending.

The main difference between the proposed weakened federal law and many of the state laws is subtle but critical. As the CDT (Center for Democracy and Technology) pointed out in its testimony to Congress last week, the proposed law leaves it up to an organization to make the determination that there is a low risk of identity theft after a breach. There is obviously a very strong incentive for organizations to come to the conclusion that the risk is indeed low. Because there is little precedent (data breach laws have only been around for a few years) and measuring this kind of risk is inherently subjective, there is a significant risk that real incidents will be swept under the rug. A number of state laws reduce this risk by requiring that all breaches be reported to the Attorney General's office. But a federal law could preempt this requirement.

Security Narrative vs. Security Product

Why do companies spend so much money and so little time on security? One big reason is a mistaken interpretation of PCI. Some of the PCI requirements clearly require you to buy something, or at a minimum enable certain features within deployed systems. Requirement 1 refers to firewalls as a given, and it's hard to maintain your anti-virus software under requirement 5 without, well, buying anti-virus software. Other requirements, such as logging, can sometimes be met with in-house products and sometimes require something to be purchased. But - contrary to popular belief - the vast majority of the hundreds of PCI requirements are actually not about technology.

What about state regulations? The press often erroneously refers to "PCI laws" when talking about recent data security regulations in states like Minnesota and Nevada. The truth is that the only faint similarity between PCI and these laws is a requirement to encrypt data. Other than encryption, I do not know of any state regulation in the United States that mandates a specific information security technology in any meaningful way (and this is a good time to say that I am by no means a lawyer).

The new regulations pending in the United States (I haven't sufficiently analyzed this issue internationally) are much more focused on processes and policy than on technology. They force organizations to have an overall security narrative that defines how they reasonably restrict access to sensitive data to authorized parties. The security narrative is the critical requirement. Security products are one piece of this narrative, but by no means the most important.

What does this mean for vendors? Walking the expo hall at RSA in San Francisco a few weeks back got me thinking that a large number of security vendors are selling products that do not help a business build its security narrative. Most companies today are spending too much money and too little time and organizational capital in addressing security issues. The current and next generation of regulations are clearly focused on process, not on technologies. The vendors that will thrive in the future are the ones that support organizational, not technical, security processes.

And one final negative trend for security budgets is the erosion of the concept that it is an organization, and not law enforcement, that is responsible for preventing cybercrime. Increased prosecution and criminalization of cybercrime will shift the expectations as to what reasonable measures organizations need to take to secure their data. After all, no one expects an armed guard at the entry to every office building.

The European Angle

What about international data security legislation? Although I have heard anecdotally that there are certain technology-specific regs in small international jurisdictions, the vast majority of data security regs center on the question of whom you can share data with, not how. And even those few technology-specific regs make sufficient allowance for compensating controls that you are never really forced into buying or deploying a specific technology.

It's worth also debunking another common myth in the security industry: that European directives mandate any specific technologies. In Europe, the issue of security regulation is at the center of a very sensitive political debate about ceding sovereignty over security issues to Brussels. While member states of the European Union have ceded very significant economic sovereignty to Brussels, there hasn't been any significant movement to give up control over security issues. In the debate between so-called Euroskeptics and EU federalists, cybersecurity has (somewhat oddly) been labelled a security issue, not an economic one. This has stymied European attempts to produce any meaningful cybersecurity legislation at the European level.

When I had the privilege of being one of the founders of the European Network and Information Security Agency (ENISA) in Brussels back in '05, the Agency's remit was very clearly in the area of general cooperation and information sharing. Although the status of the Agency has been slightly expanded over the years, any actual security-related regulations will come from the European Commission, not ENISA. There is currently a lot of back and forth about the future role of a European Telecoms Agency that would cover info sec, but these conversations are still in their early stages and there is no way the EU will produce substantial enforceable regulations in this area any time soon. Other EU regulations, like the Data Privacy Directive, are focused on the who, not the how, of sharing sensitive data. So to make a long story short, European regulations also require more people-spend and less technology-spend on information security.

By the way, if you have some serious time to kill, the full testimony before the Congressional Committee on Energy and Commerce can be found here.


Update: The New York Times published an editorial on May 25th that is generally supportive of the Data Accountability and Trust Act but that is critical of the pre-emption of stronger state data protection laws.

Wednesday, May 6, 2009

Red Flags Rule Delayed Again

The FTC has delayed its "Red Flags Rule" yet again. The Red Flags Rule basically requires companies to keep their eyes open for identity theft. It was supposed to go into effect on May 1st but has now been bumped to August 1st, 2009.

These regulations have caused a stir amongst businesses because they apply to almost any entity that grants credit. For a small business, maintaining an identity theft program could prove to be yet another expensive regulatory requirement. But the Red Flags Rule also emphasizes that a program needs to be "appropriate for your company...size and potential risks of identity theft" (the size exemption is also one of the major stipulations in the similarly delayed Massachusetts data security law). Which is a bit of a strange formulation - why do small businesses get a pass on security? After all, shouldn't a business be required to have the necessary staff on board to operate securely?

But small or not, in its current formulation the Red Flags Rule affects millions of businesses - basically any company that in some way or another extends credit to consumers. Even with the considerable outreach the FTC has done on this issue, I can't imagine that this rule is on the radar of even a fraction of all these businesses. But those businesses seem to have a while until they really need to pay attention - a panel I attended at the recent RSA conference had a few folks from the FTC who were basically saying that actual enforcement is still a ways off. And undoubtedly it is the largest companies that will be looked at first.

Identity theft (a term often misused as a euphemism for companies granting credit too easily) is a much more prevalent problem in the US than in most of continental Europe. In many European countries, there is no way to get any meaningful credit without physically presenting documents like a passport or national identity card. And while those can be forged as well, this significantly raises the criminality bar and the associated penalties. So identity theft is essentially a trade-off: either credit is easily obtainable with a high rate of identity theft, or credit is a hassle to obtain with a low rate of identity theft.

The US has had very easy-to-obtain credit in recent years, and the ubiquity of e-commerce has only exacerbated this problem. But the pendulum is starting to swing in favor of tighter regulation of credit following last year's financial meltdown. The Red Flags Rule may ultimately prove less effective at reducing identity theft than other regulations that have been implemented to protect consumers. Most notably, forty-seven states now have security freeze laws. These laws basically allow consumers to set up a password so that any access to their credit report requires them to first "unlock" the report with that password.

Because these laws require people to proactively go out and place a freeze, there has not been widespread adoption (I can't find a reference right now, but I remember reading a while back that there were only several tens of thousands of credit freezes in all of New York State as of a year ago). Some people have been scared off by stories of delays in lifting freezes and having mortgage applications denied as a result. This inconvenience factor figured very prominently in the business opposition to the original freeze laws - without the ability to quickly approve car financing, a sale might fall through.

The argument against credit freezes reminds me of the Simpsons episode where an excited Homer walks into a gun store to buy a rifle. When he discovers there is a 5-day waiting period, he exclaims, "But I'm mad now!" Slowing down access to credit is probably the only effective means to actually reduce identity theft, but it carries other economic costs.