It is a commonly held principle in many areas of business that if you can’t measure something “quantitatively”, it will be difficult to raise the quality objectively. The applicability of this statement to the world of IT security is clear. Without having some form of metrics in place, it is tough, if not impossible, to judge …
...it is possible to measure 'IT Security'.
You can measure compliance with standards. You can measure 'effectiveness' at year-end retrospectively against some measure like total cost of incidents/total security spend. You can do a lot of other things. I have spent a career in HMG and Industry doing just that, including lecturing on the subject.
A better question, however, is whether 'measuring' some aspect of 'security' is a fundamentally good idea. Security is not easily defined (unless you do it glibly), but it is obviously a process, not a state, and as much of an art as a science.
Art may certainly be 'appreciated' and 'criticised'. But would 'measuring art' help a lot?
I suspect what you are thinking of doing is 'selling' security to business. This certainly needs to be done properly - scare tactics do it very badly. But if you are thinking of doing this primarily through Benthamite measurement I suggest that you will run into difficulties - just as if you had tried to sell art, honour or beauty to customers in a purely utilitarian manner....
icon - security geek has left the building...
We can't even define it, let alone measure it
Ask 10 people what the word "security" means and you'll get 10 different answers. Ask them again the next day and you'll get 10 more.
To some, the word has come to mean "safety", to others it means being protected against crime. Other people will tell you it's to do with keeping viruses out of their computers and yet more will say it means stopping unauthorised data being leaked.
While it's intuitively obvious that you can't manage what you can't measure, the first step is coming to a collective agreement about what a certain word means. This is the foundation of science: a common nomenclature. At present all we have is a Humpty Dumpty approach to marketing security, which exploits and maintains a total anarchy of ambiguous definitions, in order to push products which are neither suitable for purpose (if you can work out what that is supposed to be), nor comparable with any others, nor even proven not to be utterly useless.
 From "Through the Looking Glass". When I use a word, it means just what I choose it to mean – neither more nor less. HD.
do you measure your fire insurance?
In comparing companies, Cost against Features, having first specified my mandatories.
And if you meant - "how do you decide whether to have fire insurance or not?", actuaries have that calculation down to a fine art - ask them for details....
OK, fire insurance is not a perfect analogy. Unless you're outsourcing the whole of your security, you can't easily do a simple cost/benefit comparison in the way you're suggesting. If you're doing it yourself (which is what I'd usually recommend), there's an almost infinite range of controls with their associated costs (tangible and intangible) that you may be prepared to bear in order to improve security. A risk analysis process will assist you in reaching what is intended to be a rational choice.
Now you want to measure whether your chosen controls are effective. Just because we didn't have a fire last year doesn't mean that our fire insurance premium was wasted. As you say actuaries (I are one :0) can give you a very good idea of the likelihood of a fire occurring. But if I implemented a new firewall and we didn't get hacked, who can tell me the probability that we would have been hacked had I just left the old firewall in place?
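The cost/benefit trade-off the risk analysis above aims at is usually sketched as an annualised loss expectancy (ALE) comparison: expected annual loss with and without the control, versus the control's cost. A minimal illustration (every figure below is invented, and in practice the incident probabilities are exactly the numbers nobody can give you):

```python
# A minimal sketch of the annualised-loss-expectancy (ALE) style of
# risk analysis. All figures are invented for illustration only.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Expected annual loss = cost of one incident x incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical scenario: a breach costs ~200,000; the old firewall lets
# one through every 4 years on average, the new one every 10 years.
ale_old = ale(200_000, 1 / 4)   # expected loss per year, old firewall
ale_new = ale(200_000, 1 / 10)  # expected loss per year, new firewall

new_firewall_annual_cost = 12_000
net_benefit = (ale_old - ale_new) - new_firewall_annual_cost
print(f"Expected net benefit of upgrading: {net_benefit:,.0f} per year")
```

The arithmetic is trivial; the hard part, as the comment says, is that nobody can tell you the occurrence rates with any confidence.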
The answer (in many cases) is "no"
ISO 27001 (which is how I mostly earn my crust, these days) requires you to measure the effectiveness of your selected security controls. All very well and good ("if you can't measure it, you can't manage it" and so forth), but I've yet to identify a useful way of achieving said measurement.
If you're a government or a megacorp, you can do as Dodgy Geezer suggests - you'll probably experience enough security incidents that recording them and measuring their impact will provide a useful measure of the cost-effectiveness of your overall security. But for most organisations that are not of this size, security is much more like Cloggie's fire insurance. If you only have a security incident every few years (or even less frequently), you can't use it as a sensible statistical measure. You (hopefully) know what your security costs, but have no idea what it's saving (if anything). You could always turn everything off for a month or two in order to establish a baseline, but it may well cost you your job.
So we fall back on measuring those things that are easy to measure (i.e. already done for you) - how many viruses were blocked by the AV system, how many emails were blocked as spam, how many attacks were blocked by the IPS, etc. Most systems will report these kinds of stats, so the cost of keeping track of them is minimal. If your only requirement is to keep the boss happy with some pretty coloured bar charts and pass your 6-monthly 27001 assessment, this may well meet your needs.
But do these numbers actually mean anything? If the number of detected viruses has gone up, is it because you've improved your AV system or are you coming under increased attack?
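One partial answer to that ambiguity is to report rates rather than raw counts: blocked items as a fraction of everything scanned at least separates "more attacks" from "better detection". A sketch with invented numbers:

```python
# Sketch: raw "viruses blocked" counts conflate detection rate with
# attack volume. Tracking blocked items as a fraction of everything
# scanned separates the two trends. All numbers below are invented.

months = ["Jan", "Feb", "Mar"]
blocked = [120, 180, 240]                  # detections reported by the AV gateway
total_scanned = [40_000, 60_000, 80_000]   # all inbound items scanned

for month, b, t in zip(months, blocked, total_scanned):
    print(f"{month}: {b} blocked, {b / t:.2%} of scanned traffic")

# Here the raw count doubles while the rate stays flat at 0.30%:
# you are simply being attacked more, not detecting better.
```

It still says nothing about what got through undetected, of course, but it stops a rising bar chart being mistaken for an improving AV system.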
If anyone knows of a good solution to the problem of measuring security effectiveness, I'd be most interested to hear of it.
...in the disguise of "MBA people" like all sorts of "metrics", because that is all that beancounting is about.
Can we imagine an Apple or BMW product invented based on "metrics" ? I can't.
The same is with security - it will be good if skilled professionals do have a say and it will be horrible if the pointy-haired have the last word.
I know of a major financial institution which does not bother to keep Java, Firefox and Flash properly updated on its circa 3,000 PCs. They are working under the belief that all critical applications run inside a Citrix environment, which is a bit better secured. The only flaw in this thinking is that the Citrix client runs on the very PCs that have the insecure Java, Flash and FF installed.
There exist good strategies to defend a network with high confidence and most of them boil down to locking down a system to the privileges it really needs. For example, why do office workers need a web browser on a PC processing sensitive information ? Why can't it run inside a VM or even better on a "social PC" ? PCs are cheap, security breaches not.
Why do sensitive PCs need a USB Port and a DVD drive ?
Because PHBs are in charge, that is the reason.
You and I agree that the statement
"in many areas of business that if you can’t measure something “quantitatively”, it will be difficult to raise the quality objectively"
is, not to be too coy about it, sheer unadulterated bullshit.
This kind of statement is emitted by people who simply don't know the business they are in. Not knowing it, they can't tell if they are doing well or badly. Some of these people are accountants, others are lawyers (maybe), and yet others are managers who haven't a clue what their company really does or makes.
It's just like New Labour: substitute box-ticking by idiots for intelligence and then be deluded into thinking you understand the situation.
I'd be very curious to know what the internal culture of BP is, given their well's oily diarrhea all over the Gulf of Mexico. Betcha dollars to donuts that the managerial ranks are filled with beancounters whose only quantitative measure of anything is "how much does it cost?" and whose cri de coeur is "can't we do it cheaper?"
Flame icon because this kind of managerial idiocy is a real hot button with me.
That problem has already been solved.
There are over 100 banks in a trial scheme. No idea when they will mention this, it's a closed trial.
Definition of security
The observation that a request for a definition of security is met with as many answers as there are respondents is reasonably right. The most common definitions are variations on 'security comprises confidentiality, integrity and availability' and merely replace one word needing a definition with three. Which is not much of an improvement. Until we can define security, there's simply no reason to try to measure it.
Actually, I suspect security *is* a measure: one good definition I have seen is 'security is a measure of how well a system withstands the effects of unwanted events'.
In this context 'system' refers to the 'general systems theory' type of system.
It's not yet a practical definition, as there are no methods for measuring. But at least it points the right way.
Totally agree with @Chris Miller
I am in much the same position, and I have spent years providing 'security metrics' along similar lines. Almost universally, however, the customers (when prompted for an opinion) say that they rarely need the stats, except to prove that tools are worth the investment, or because they were defined in a contract somewhere by people in no way familiar with security (ditto the networks, servers, applications etc. that get defined with metrics). In fact, I have come across it before that, contractually, there was an expectation to *not* detect 99.5% of viruses!
However, as Chris rightly pointed out, the stats of virus detection, spam detection etc. not only fail to represent the real effectiveness of these tools, but also bear little relation to the true security stance across the organisation.
The real problem is that you are trying to index a scenario where you start not knowing whether you know every threat, and end knowing that you still don't know every threat; the index can therefore only ever represent the known knowns, never the unknown unknowns. That is the nature of the job - risk mitigation - and I too have little idea how to build a true index of success. At the end of the day, all you can do is address likely impact and probability, probably by the very means you use currently.
For this you would need to know the ratio of detected vs. undetected incidents - and if they aren't detected, how do you know you missed them? You assume that users report every case: does that really work?
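For what it's worth, there is a standard statistical trick (borrowed from ecology, not from any security standard, and offered here purely as an illustration) for estimating an undetected fraction when you have two roughly independent detection channels: the Lincoln-Petersen capture-recapture estimate. Using invented figures for, say, AV alerts and user reports:

```python
# Illustrative only: a Lincoln-Petersen (capture-recapture) estimate of
# the total number of incidents, using two independent detection
# channels. All figures are invented.

av_detected = 50     # incidents flagged by the AV system
user_reported = 30   # incidents reported by users
found_by_both = 10   # incidents appearing in both sets

# Under the (strong) assumption that the channels detect independently,
# the estimated total is n1 * n2 / overlap.
estimated_total = av_detected * user_reported / found_by_both
detected_somewhere = av_detected + user_reported - found_by_both
estimated_missed = estimated_total - detected_somewhere

print(f"Estimated total incidents: {estimated_total:.0f}")
print(f"Estimated missed by both channels: {estimated_missed:.0f}")
```

The independence assumption is doing a lot of work there (users tend to report the same things the AV catches), so treat it as a rough lower bound on your blind spot, not a measurement.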