Here is the basic story - Commonwealth Financial has a decentralized advisor structure where independent contractors work out of about 1000 branch offices. These advisors access the Commonwealth online trading platform from their own computers. Commonwealth has a central IT office that supports these users.
Sound like a recipe for infected computers? Turns out it was. Using malware, an intruder managed to get the login credentials of some brokers. He (or she) then created a list of high value accounts and tried to execute some fraudulent transactions. At that point Commonwealth's clearing systems apparently picked up that something fishy was going on and shut down the illegal activity.
It would seem that Commonwealth's basic controls worked in this case - a criminal was unable to carry out fraud and potential victims were notified. But the data on the violated accounts was leaked (including information such as the net worth of individuals). And the SEC has a Safeguards Rule that requires broker dealers and Commission-registered investment advisors to "adopt written policies and procedures reasonably designed to protect customer information".
The SEC has not traditionally taken direct action on information security issues unrelated to the filings of publicly traded companies (by contrast, other regulatory bodies like the FTC have been fining companies for bad information security practices for years). It is hard to say whether the Commonwealth fine indicates that this is about to change. The SEC's draft five-year plan released earlier this month contains a fleeting reference to identity theft on page 35 that may indicate a prioritization of this issue. A very detailed overview of the current issues being discussed can be found in the Federal Register.
Of course, the Commonwealth fine is so low that it may actually have an adverse effect: it reinforces the calculus of risking low fines rather than changing business practices. The fines companies face for information security failures are dwarfed by the fraud-related fines that regulatory agencies in the United States issue. MoneyGram was fined $18 million the other day for turning a blind eye to fraudulent transactions on its network.
But the SEC action in the Commonwealth case does tell us something about how regulators look at information security. Two main issues were cited in the SEC action -
(1) the failure to actually require - rather than just recommend - anti-virus software, and
(2) the failure of the support center to properly follow up on a report that a computer was infected.
Recommendations and Requirements
The first item underscores what legal departments have known for years and what CISOs are just starting to learn - that the most important thing for an organization is a well-formulated and well-communicated security policy. This is actually more important than most technical controls in addressing overall enterprise IT risk.
Commonwealth might have avoided a fine entirely if it had just switched around a few words in its security policy. To regulators, there is a big difference between requiring and recommending, even if you can't actually enforce your requirements.
To technically require anti-virus software is a pain. Network Access Control (NAC) systems have struggled to gain acceptance outside of highly controlled corporations or environments like universities, where infected users threaten the availability, and not just the security, of networks. The recent failure of the once-promising Consentry Networks is a sign that NAC vendors had overestimated the appetite for pure-play NAC appliances.
But there is a world of difference between getting a complex NAC solution to make sure everyone on your network has anti-virus software, and just telling people they have to get anti-virus. The latter is free. And although cynics would say that it does little to influence actual user behavior, it does help create a culture of security within the organization. And, critically, these policy mandates create a framework for liability and accountability when something goes wrong.
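The gap between the two approaches can be sketched in a few lines. Everything below is hypothetical - the process names and policy logic are illustrative, not taken from any real NAC product or from Commonwealth's actual policy:

```python
# Hypothetical sketch of a NAC-style posture check. The point: a
# "require" policy needs an enforcement point like this on every
# network access path, while a "recommend" policy is just words on
# paper - which is exactly why stating the requirement is free.

REQUIRED_AV_PROCESSES = {"avscanner", "rtprotect"}  # made-up AV process names


def posture_check(running_processes, policy="require"):
    """Return (allowed, message) for a host, given its running processes.

    policy="require"   -> block hosts where no anti-virus is detected
    policy="recommend" -> always allow, merely noting non-compliance
    """
    has_av = bool(REQUIRED_AV_PROCESSES & set(running_processes))
    if has_av:
        return True, "compliant"
    if policy == "require":
        return False, "blocked: anti-virus not detected"
    return True, "allowed: anti-virus recommended but not detected"
```

Under a "recommend" policy the function never blocks anyone - but, as the SEC action shows, the choice of that one word still matters to regulators.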
What You Don't Know Sometimes Cannot Hurt You
Item (2) raises an uncomfortable truth that undercuts the selling strategy of many security vendors. Namely, organizations are sometimes better off not knowing about security vulnerabilities than knowing about them and doing nothing about them. In this specific case, knowledge of a vulnerability came from a human being noticing their computer was infected. But most vulnerabilities come to light by an automated system detecting them. In that case ignorance is sometimes bliss.
Many security vendors pitch their products with "You have no idea how much bad stuff is going down on your network! Buy our new ZXT3000 to discover and mitigate threats ABC". For some businesses this is an appealing proposition, because their data is so sensitive that it is being specifically targeted. But for the large majority of organizations, buying the ZXT3000 (and apologies if such a product actually exists) is just going to create more liability than they previously had; after all, they may have the budget to buy the device, but they don't have the manpower to monitor all the alerts it creates. This is why many organizations have turned off their complex IPS systems: they turned them on, got gazillions of alerts, and then intuitively realized that having all these high-severity alerts and doing nothing about them is worse than having no alerts at all.