Nothing gets my goat as badly as the “risk-based security” talk that has been suffocating discussions lately. It is so pervasive and so obnoxiously delivered that you end up wondering whether the people using the term realise how poorly they understand information security, risk management, and the organisation they support. The Dunning-Kruger effect in action if there ever was one.

### Decisions, decisions

To explain what I mean, let’s look at it from the top down:

1. Everything starts with objectives: the things you want to achieve, the stuff you want to do.
“… and of course we have set up transitive trust between the different sources of identity,” he said with glee. That’s when I knew for certain I was not speaking with someone who has had a long and successful track record of setting up identity management in an enterprise environment. Information security practitioners often roll out the old and tired “transitive trust is bad” adage. Unfortunately, the line is parroted too often without understanding why transitive trust is bad.

### Trust and related terms
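To see why the adage exists, here is a minimal sketch (my own illustration; the directory and IdP names are invented) of how transitive trust silently widens who you trust:

```python
# Illustrative sketch (not from the original post): why transitive trust
# widens the attack surface. If directory A trusts B and B trusts C,
# transitivity means A implicitly trusts C -- whether A intended to or not.

def transitive_closure(trust):
    """Expand direct trust edges into the full set of implied trusts."""
    closure = {a: set(bs) for a, bs in trust.items()}
    changed = True
    while changed:
        changed = False
        for a in closure:
            for b in list(closure[a]):
                for c in closure.get(b, ()):
                    if c != a and c not in closure[a]:
                        closure[a].add(c)
                        changed = True
    return closure

# Corporate AD trusts a subsidiary; the subsidiary trusts a partner IdP.
direct = {
    "corporate-AD": {"subsidiary-AD"},
    "subsidiary-AD": {"partner-IdP"},
}

implied = transitive_closure(direct)
# corporate-AD now implicitly trusts partner-IdP, a party it never vetted.
print(implied["corporate-AD"])
```

The point of the sketch: the corporate directory ends up trusting a party whose identity-proofing and security practices it never assessed, which is the actual reason the adage survives.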
The Sony Pictures information security team, small as it is, is in the crosshairs of all and sundry after the recent breach of significant proportions. As is typical for information security, once a victim is found, the ritual and merciless victim-bashing can begin. What most of these pieces forget is that the issues highlighted at Sony Pictures are present, if not prevalent, in the majority of large organisations.

### Kick them when they’re down

This scenario plays out time and again: a large organisation is in the news for industry-average information security practices.
“Companies have to get security right every time – an attacker only has to get it right once.” This is probably one of the biggest lies that information security tells on a frequent basis: partially to get more money for ineffective security technologies, and partially to maintain the illusion that perfect, long-term security is possible.

#### Multiple levels of defence have to fail

In truth, companies have to get it wrong at the prevention, detection, and response levels a number of times before a breach does any considerable damage.
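A back-of-the-envelope sketch of that claim (my own illustration; the failure probabilities are invented, and the layers are assumed independent for simplicity):

```python
# Illustrative sketch with invented numbers: a breach does serious damage
# only if every defensive layer fails. Even mediocre layers multiply into
# a small end-to-end probability of catastrophic failure.

layers = {
    "prevention": 0.40,  # chance this layer fails to stop the attacker
    "detection":  0.30,  # chance the intrusion goes unnoticed
    "response":   0.20,  # chance response fails to limit the damage
}

p_serious_damage = 1.0
for name, p_fail in layers.items():
    p_serious_damage *= p_fail

print(f"P(all layers fail) = {p_serious_damage:.3f}")  # 0.40 * 0.30 * 0.20 = 0.024
```

The independence assumption is generous to the defender, but the direction of the argument holds: the attacker has to get it right at every layer, not just once.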
In Part 1 we looked at the deterrence quality of security controls. It’s one of the three attributes of security controls that are often ignored; sometimes consciously, but more often out of ignorance. Now we will look at another attribute that is too often neglected: awareness. Typically, when discussing security awareness, the immediate mental image is of mandatory courses, presentations, and drab, unimaginative posters around the workplace. What this post talks about is information security situational awareness: what is happening, where, why, and who is involved.
Fear of reprisal is one of the most potent stimulants for action. It is also one that information security generally ignores. Instead, “improving security by buying more technology” remains the prevalent course of action for most IT shops in organisations large and small. That this just perpetuates a losing race is not a message most IT security staff are willing to concede. There is a better way to improve the information security posture of large and small organisations, and it starts by mimicking physical security, where psychology has long played a significant role.
The initial empirical study of the observer effect (the Hawthorne effect), which found that people change their behaviour for the better when observed, has seen equal measures of criticism and support over the years. Whilst a lot of the critiques were typically academic (i.e. no dispute about the end effect, just argument over which factor influences it), a number of empirical studies also failed to replicate the original study’s results. Two academic papers that I have used in the past on the effect of being watched (the quantum-physics observer effect applied to real-world psychology, if you will) hold a lot of lessons for information security designers and architects, if only they will stop rolling out new boxes and start thinking about what it is they really need to do.
If Apple had followed the ‘wisdom of the crowds’ in 2006–2007, they’d never have made the iPhone. If smart CISOs paid too much attention to the Information Risk Leadership Council’s latest article, they’d be in as much trouble as they purportedly are right now. There is a lot wrong with CISOs who put all their hope and budget in prevention, but the word itself is definitely not the problem. Nor is the solution that CEB IRLC (the Corporate Executive Board’s Information Risk Leadership Council) advocated – although they just followed NIST’s lead.
This is too good not to share. (By Dave Blazek, CC BY-ND 3.0 US.) Hat-tip to Gabe Basset for a great find.
Recently I had an interesting conversation around incident response (IR) and preparedness for incidents. For some reason the conversation centred on attack trees and how they can be used to improve the information security posture of a company. My take is that attack trees are great for the security mindset that puts prevention first, second, and third, with detection and response an equally distant fourth. They’re not so good in a world where non-agent failure happens and your response to it defines how much damage the organisation wears in the end.
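To make the prevention bias concrete, here is a minimal sketch of an attack tree as a data structure (my own illustration; the node names and attacker costs are invented):

```python
# Minimal attack-tree sketch (names and costs invented for illustration).
# OR nodes succeed via the cheapest child; AND nodes require every child.
# The tree answers "what is the attacker's cheapest path in?" -- a
# prevention-centric question that says nothing about detection or response.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "leaf"          # "leaf", "or", or "and"
    cost: float = 0.0           # attacker cost for leaf steps
    children: list = field(default_factory=list)

def cheapest_attack(node: Node) -> float:
    """Return the minimum attacker cost to achieve this node's goal."""
    if node.kind == "leaf":
        return node.cost
    child_costs = [cheapest_attack(c) for c in node.children]
    return min(child_costs) if node.kind == "or" else sum(child_costs)

tree = Node("steal customer data", "or", children=[
    Node("phish an admin", "and", children=[
        Node("craft lure email", cost=10),
        Node("bypass MFA", cost=500),
    ]),
    Node("exploit unpatched VPN", cost=200),
])

print(cheapest_attack(tree))  # min(10 + 500, 200) = 200
```

Note what the model can and cannot express: it ranks preventive hardening (raise the attacker’s cheapest cost) but has no notion of time-to-detect or response effectiveness, which is exactly the limitation this post points at.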