Human error is regularly blamed for cyber breaches, and numerous studies highlight people as the weakest link in the security chain. It's for exactly this reason that phishing emails and social engineering attacks are so common – cyber criminals know that they work. The result is that people – otherwise known as 'wetware' – are frequently presented as the problem, due to their failure to act in a logical and rational way to protect their systems.
But a new school of thought is changing this perception, due to the acceptance that, however much security technology you throw at them, people are always going to take risks, gamble and make mistakes – in other words, they are never going to act like pre-programmed machines. And while AI and robots are taking on an ever-growing role in our lives and work, the human aspect is always going to be there – leading many in the industry to look for a better way.
People aren't the problem
In July this year, Facebook's head of security, Alex Stamos, told the 2017 Black Hat Conference: "The security industry needs to worry less about technology and more about people." His point was that technology can only provide so much protection if it fails to take into account the imperfections that make us human. Stamos argued that instead of just focusing on making the technology better, the industry needs to work with these human imperfections, providing technology and tools that are easier for people to use.
This sentiment has been echoed by many others this year, including Emma W, the People Centred Security Team Lead at the National Cyber Security Centre (NCSC). She told InfoSec 2017: "If security doesn't work for people, it doesn't work," while informing delegates that it is now official government policy that you can't describe people as the 'weakest link' in cyber-security.
A human-centred approach
Thought leaders like these are driving a shift in mindset across the whole cyber security industry, from one that was very much focused on the technical aspects of keeping hackers at bay, to a more holistic and practical view of the best way to stop them. Doing this involves taking a design-centric view of the process, looking at the entire ecosystem with people as part of that, rather than implementing overly strict, technical and impractical rules and policies.
One good example is password guidelines, which, in some cases, are counter-productive. Remembering fifty different passwords, each containing a prescribed mix of letters, numbers and symbols, is near-impossible for most people – particularly when you're then forced to change them every few weeks or months. This has led the National Institute of Standards and Technology (NIST) in the US to drop these composition and expiry rules in favour of long, easily remembered passwords that don't need to be changed – concluding that this will actually be more secure.
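The entropy arithmetic behind NIST's reasoning can be sketched quickly. The snippet below is an illustration of the general principle rather than part of NIST's guidance: a long, memorable all-lowercase passphrase can carry more guessing entropy than a short password drawn from the full symbol set.

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy in bits of a password of `length` characters,
    each chosen uniformly from a pool of `pool_size` symbols."""
    return length * math.log2(pool_size)

# An 8-character password using all 94 printable ASCII symbols:
complex_short = entropy_bits(94, 8)    # roughly 52 bits

# A 20-character passphrase using only lowercase letters:
simple_long = entropy_bits(26, 20)     # roughly 94 bits

print(f"8-char complex:  {complex_short:.1f} bits")
print(f"20-char simple:  {simple_long:.1f} bits")
```

Length dominates: each extra lowercase character adds about 4.7 bits, so the longer phrase wins despite its much smaller symbol pool – and it is far easier for a person to remember.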
Meanwhile, Jessica Barker, who runs cyber.uk, has argued that how organisations communicate about the risks is key, as a lot of people simply don't understand the terminology. She stresses that messages need to be communicated clearly and concisely, explaining the risks and consequences of certain behaviours. Barker also makes the point that the cyber security industry needs to spend more time understanding why people do what they do, as a way to devise solutions, rather than immediately labelling them 'stupid'.
Intelligent security tools
Of course, cyber security technology is always going to be important, but here too, experts are starting to look at more human-centred tools to keep hackers out. One such provider is Forcepoint, which is using innovations in machine learning and analytics to track users' online behaviour to see what normal looks like – so as to flag up when something unusual is happening. It's a departure from the firewall-centric approach, which aims to keep out anything bad, to instead focusing on the most important data on a system and monitoring for any strange activity related to that data. It's the kind of more flexible and pragmatic approach recently suggested by Gartner, as part of its CARTA methodology (Continuous Adaptive Risk and Trust Assessment).
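The core idea – learn a baseline of normal behaviour, then flag deviations – can be sketched in a few lines. Commercial products like Forcepoint's use far more sophisticated, proprietary models; this is a toy illustration of the principle using a simple z-score test, with hypothetical example data.

```python
import statistics

def is_anomalous(baseline: list[float], value: float,
                 threshold: float = 3.0) -> bool:
    """Flag `value` as anomalous if it lies more than `threshold`
    standard deviations from the mean of the observed baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical baseline: files a user downloads per day over a week
downloads = [2, 3, 2, 4, 3, 2, 3]

print(is_anomalous(downloads, 3))    # typical day -> False
print(is_anomalous(downloads, 250))  # bulk exfiltration? -> True
```

The point is that the system never needs a signature for the attack itself: a compromised account reveals itself simply by behaving unlike its owner.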
Up until now, cyber criminals have always been one step ahead, leaving the security industry constantly on the back foot. But it's becoming clear that the old way of doing things isn't enough on its own, and a new approach is badly needed. It's time to stop blaming the wetware – and work together instead!