This ensures that employees not only understand the nuances of the threat landscape but also possess the capability to respond effectively during a genuine cyber attack.
Applying behaviour change theory is key to reshaping cyber security practices.
It's about transforming instinctual reactions into more thoughtful responses, shifting from rapid, automatic System 1 thinking to more considered System 2 actions.
While security awareness training is useful, it's often treated as a compliance tick-box.
A proactive approach emphasising regular behavioural training over purely theoretical knowledge can lead to more secure practices, reducing the risks associated with human error.
In the realm of cyber security, 'human error' is a term often used, but it's vital to emphasise that the focus should not be on blaming individuals. It's less about personal fault and more about the resources and training that an organisation provides its staff.
While technology advances, human error remains a significant vulnerability in cyber security, accounting for an estimated 95% of cyber incidents.
The challenge lies in balancing accessibility and security, as users often opt for the path of least resistance. Encouraging change in human behaviour, rather than simply reinforcing security measures, is paramount to reducing such errors.
Susceptibility to social engineering and phishing scams is a prime example of human error in cyber security. However, individual shortcomings are not the sole culprits. Often, organisational factors, such as insufficient training, work-related stress, and inconsistent security awareness campaigns, contribute to this vulnerability. As workloads peak, security can take a backseat, making employees an easy target for cyber fraud.
Phishing scams, with their deceitfully ingenious designs, exploit the users' knowledge gaps about cyber threats. These scams are quite insidious, able to ensnare even the most vigilant individuals. However, continuous and smartly devised simulated phishing programmes can significantly lower the risk of falling for these ploys.
Insights from laboratory-based phishing experiments illustrate the magnitude of this problem. Over 30% of government employees and about 60% of university students were found to click on suspicious links in phishing emails, revealing significant gaps in cybersecurity preparedness.
Intriguingly, some personality traits, such as narcissism and neuroticism, have been found to be linked to a higher susceptibility to phishing attacks. This underscores the need for a comprehensive understanding of human factors and behavioural studies in cybersecurity. Surprisingly, even warnings have proven ineffective, further highlighting the need for practical, behaviour-based approaches to combat phishing. Users often underestimate the probability of security breaches, creating a critical gap that cyber criminals exploit. To address this, we must move beyond raising awareness to fostering a culture of robust cybersecurity habits.
Password sharing, another significant human error in cyber security, can't solely be attributed to individual recklessness. Instead, it's often rooted in the broader organisational context and current work norms.
In high-speed, teamwork-based environments, password sharing may appear as an attractive shortcut towards achieving tasks, regardless of the substantial security risks involved. Many factors, including heavy workloads, time pressure, or the absence of clear guidelines, can tempt employees into sharing passwords. When robust security protocols are perceived as cumbersome, individuals are nudged further towards such risky shortcuts.
To mitigate this, organisations can devise a comprehensive and user-friendly password management policy that underscores the importance of secure practices and discourages sharing passwords.
In reality, password sharing is a common occurrence across demographics. According to Whitty et al. (2015), older adults exhibiting high perseverance and self-monitoring tendencies are more likely to share passwords. This practice can expose them to financial exploitation, one of the most prevalent forms of elder abuse (Bailey et al., 2015), as they often exhibit trust towards others, including strangers online. In contrast, younger adults, often those well-acquainted with digital technology, also share passwords, viewing security as a hindrance they need to circumvent (Smith, 2003).
This behaviour can have severe ramifications as many people reuse the same password across various platforms, hence by sharing one, they potentially give access to all their secured information. Cybercriminals exploiting this can infiltrate multiple platforms once they discover a single password, highlighting the gravity of password sharing in compromising cyber security.
The prevalent cyber security behaviour of neglecting software updates is another issue that needs attention. As in the other examples, it's vital to understand that this behaviour is not indicative of employee indifference but more a reflection of wider organisational practices.
Employees might not understand the importance of these updates, viewing them as inconveniences that disrupt productivity. However, this misconception can have serious security implications. Software updates serve more than just feature enhancements; they're integral to maintaining the security integrity of systems. These updates often come with patches for known vulnerabilities, and if left unpatched, they can provide cyber attackers an effortless gateway into an organisation's network.
Digging into the background, one common error in cybersecurity behaviours is the delay in, or outright avoidance of, installing software updates (Rajivan et al., 2020). Using experimental behavioural decision-making studies, Rajivan and colleagues found that risk-taking tendencies partially explain this reluctance: more risk-tolerant individuals tend to postpone the installation of these updates.
Unlike password sharing and phishing, the subject of installing software updates hasn't received much scholarly attention. Consequently, there is a pressing need for more research and awareness campaigns focusing on this aspect of cybersecurity behaviour.
It's estimated that a significant percentage of cyber incidents are due to behaviours that could be modified, underscoring the link between our actions and the safety of digital systems.
However, it's crucial to remember that this isn't about pointing fingers at employees. Instead, it's about the larger context: the resources, training, and environments that organisations provide. Enhancing cyber security isn't just about ramping up technical defences; it's also about effectively managing and understanding the behaviours that influence security.
Several habitual behaviours can unintentionally impact cyber security, ranging from impulsivity and procrastination to risk-taking. These behaviours, while seemingly mundane, can lead to significant security lapses.
In the upcoming sections, we'll delve into how certain behavioural patterns, specifically procrastination, impulsivity, and risk-taking, can influence cyber security. Through this exploration, we'll illustrate how a behaviour-focused, data-driven approach like CultureAI's can transform an organisation's cyber security posture and lead to a more secure digital environment.
Procrastination, an all too human trait, is one of the behavioural characteristics that can significantly impact cyber security behaviours. It is the tendency to delay or postpone tasks that require immediate attention, and in the context of cyber security, this can lead to major security vulnerabilities.
For instance, an employee might delay installing crucial software updates or neglect to change their passwords regularly due to a 'do it later' mentality. Similarly, important security measures like data backup or encrypting sensitive data may also be deferred, leaving an open window for potential cyber threats.
However, it's essential to note that procrastination, while seemingly an individual's issue, is often influenced by the organisational environment. For instance, when the importance of immediate action for security tasks is not adequately communicated or when the processes are time-consuming or complex, procrastination is more likely to occur.
Research by Dawson and Thomson (2018) underscores the relationship between cognitive abilities, personality traits, and compliance with cyber security policies. The "need for cognition" scale, which measures a person's propensity to engage in activities requiring cognitive effort, has been found to influence the intent to comply with security protocols.
Impulsivity refers to a tendency to act quickly and without much thought, often leading to unplanned and potentially risky actions. In the context of cyber security, impulsivity can manifest itself in numerous ways, often resulting in hasty decisions that might compromise an organisation's data security.
An employee might impulsively click on an intriguing email link or download an interesting attachment without first verifying its source or considering the potential security implications. Similarly, impulsive decisions to share sensitive information over insecure networks or platforms can also present significant security risks.
Egelman and Peer's (2015) study discovered a correlation between impulsivity, as measured by the Barratt Impulsiveness Scale, and performance in the Security Behaviour Intentions Scale. This suggests that individuals exhibiting higher impulsivity may be more likely to engage in risky cyber behaviours.
Further, research by Hu et al. (2015) established a link between cognitive control, a crucial aspect of impulsive behaviours, and violations of information security policies. The pursuit of immediate gratification, often without considering potential future consequences, may contribute to impulsive behaviours that compromise cyber security.
Risk-taking behaviour is an inherent trait that varies significantly among individuals. Some people are naturally inclined to take risks, while others lean towards safer alternatives. In the context of cyber security, risk-taking can lead to behaviours that potentially endanger an organisation's cyber infrastructure.
People with high risk-taking tendencies might not follow cyber security protocols because they consider their chances of falling victim to cybercrime minimal. The underlying thought process could be the immediate reward of skipping an onerous task, such as a software update or changing compromised passwords, without considering the potential long-term ramifications.
Egelman and Peer's (2015) research noted a connection between risk-taking behaviour and the Security Behaviour Intentions Scale. This link implies that those who engage in more general risk-taking behaviours in their daily lives could be more likely to exhibit risky cyber security practices.
One factor that contributes to risk-taking in cyber security is optimism bias. Individuals may underestimate the likelihood of experiencing a cyber-attack, mirroring the broader human tendency to discount the probability of negative events. This false sense of security can lead to complacency in adopting secure behaviours.
Understanding employee behaviours around cyber security is crucial for organisations seeking to fortify their defences against cyber threats.
Notably, the context in which these behaviours occur can significantly impact the nature and magnitude of potential security risks.
Thus, it is helpful to classify cyber security behaviours based on the environments where they predominantly happen: in the workplace and at home.
In an organisational context, employee cyber security behaviours can significantly impact an enterprise's overall digital safety. These behaviours, which are influenced by both internal motivations and external stimuli, are often dictated by company policies and regulations. To ensure policy adherence, organisations frequently provide assistance such as reminders about software updates, new threat alerts, and best practices for information security.
To understand these behaviours more holistically, researchers have categorised security behaviours into distinct types based on the level of expertise required to execute the behaviour and the intentions behind it. Stanton et al. developed a six-element taxonomy that includes: intentional destruction, dangerous tinkering, aware assurance, detrimental misuse, naive mistakes, and basic hygiene.
Of these, intentional destruction, detrimental misuse, dangerous tinkering, and naive mistakes reflect poor security behaviours, whereas aware assurance and basic hygiene depict positive behaviours. These categories serve as a broad framework that can encapsulate various organisational security behaviours, ranging from security assurance and compliant behaviours to risk-taking and damaging behaviours.
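The grouping above can be sketched as a simple lookup structure. This is purely illustrative: the six category names come from Stanton et al.'s taxonomy, but the dictionary representation and `classify` helper are our own convenience, not part of the original work.

```python
# Illustrative mapping of Stanton et al.'s six-element taxonomy of
# end-user security behaviours to whether each reflects poor or
# positive security practice. The structure is for illustration only.
STANTON_TAXONOMY = {
    "intentional destruction": "poor",
    "detrimental misuse": "poor",
    "dangerous tinkering": "poor",
    "naive mistakes": "poor",
    "aware assurance": "positive",
    "basic hygiene": "positive",
}

def classify(behaviour: str) -> str:
    """Return 'poor' or 'positive' for a named taxonomy element."""
    return STANTON_TAXONOMY[behaviour]
```

A structure like this makes it straightforward, for example, to tally observed incidents by category when reviewing an organisation's behavioural data.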
It's crucial to remember that such classifications provide valuable insights to organisations for understanding and improving their employees' behaviours. By recognising the variations in security behaviours, organisations can devise targeted, effective interventions that enhance the cyber security culture within the workplace.
Within the home environment, individuals are typically the sole managers of their cyber security behaviours. Unlike a work setting, where protocols and regulations shape and guide behaviour, home settings rely more on personal knowledge, awareness, and skills.
Although it was initially assumed that such abilities would be lower in home environments due to the lack of formal training programs, research has shown this is not necessarily the case. Individuals often bring security knowledge gained from other environments, such as their workplaces, into their homes. However, the behaviours exhibited at home can differ significantly due to a variety of factors including lack of enforcement, personal comfort, and perceived threat level.
A term coined to describe home users who proactively engage in cybersecurity practices is "cybercitizens". These individuals consciously apply their cybersecurity knowledge and skills within their home environments, demonstrating behaviours such as installing and updating antivirus software, exercising caution with emails and attachments, and choosing strong yet memorable passwords.
The emergence of the "cybercitizen" illustrates that not all home users are passive or indifferent to their cyber security responsibilities. Rather, with the right knowledge and motivation, they can play a vital role in maintaining a secure digital environment in their homes. Promoting this cybercitizen behaviour is a key aspect of creating a broader culture of cyber security.
In the ongoing effort to bolster cyber security, understanding and harnessing psychological principles can be invaluable.
As social engineering and cognitive hacking continue to exploit human vulnerabilities, equipping individuals with the psychological tools to counteract these threats becomes increasingly critical.
This section will delve into a range of psychological strategies that can aid in enhancing compliance with security policies, ultimately fostering safer online environments.
In psychology, habituation means that users become less responsive to stimuli they encounter repeatedly. Sadly, most cybersecurity warnings fall prey to this habituation effect, making them less effective over time (Anderson et al., 2015).
A solution to this cyber fatigue lies in the use of what are termed polymorphic warnings. By continually shifting their design, these warnings keep users on their toes, effectively breaking the habituation cycle (Wogalter, 2006). This technique was tested and validated by Anderson et al. (2015), who discovered that computer users continued to engage with and respond to these ever-changing warnings.
The science behind this strategy is simple: our minds are more responsive to new and unusual things. Therefore, by creating a dynamic, attention-grabbing alert system, developers can maintain users' interest and ensure potential threats aren't overlooked (Moustafa et al., 2009, 2010; Kar et al., 2010).
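The polymorphic-warning idea can be sketched in a few lines of code. The variant attributes below (colour, icon, layout) are hypothetical examples, not a real browser or operating system API; the point is simply that no two consecutive warnings share the same design, so each one reads as novel.

```python
import random

# Minimal sketch of a polymorphic warning: vary the presentation on
# each display so users do not habituate to one fixed design.
VARIANTS = [
    {"colour": "red", "icon": "shield", "layout": "banner"},
    {"colour": "amber", "icon": "exclamation", "layout": "modal"},
    {"colour": "purple", "icon": "lock", "layout": "sidebar"},
    {"colour": "blue", "icon": "eye", "layout": "toast"},
]

class PolymorphicWarning:
    def __init__(self, variants):
        self.variants = list(variants)
        self.last = None

    def next_variant(self):
        # Never show the same design twice in a row, so each warning
        # looks novel relative to the previous one.
        choices = [v for v in self.variants if v is not self.last]
        self.last = random.choice(choices)
        return self.last
```

A real implementation would vary far more than three attributes, but even this small rotation captures the principle Anderson et al. tested.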
This is not to say that novelty is the only factor that can enhance the effectiveness of security warnings. Research comparing Firefox, Google, and Internet Explorer found that multiple factors contribute to the success of these warnings. Considerations such as the type and appearance of the warning, the number of clicks needed to dismiss it, and even the amount of time spent on warnings can all influence user responses (Akhawe and Felt, 2013). In essence, for better cybersecurity behaviours, a comprehensive, multi-layered approach should be adopted.
Improving cybersecurity behaviours often draws upon understanding human motivations. Traditionally, we are incentivised by positive outcomes (rewards) and deterred by negative ones (penalties). Yet, in the realm of cybersecurity, the reward for adhering to safe practices is abstract—it's the prevention of a potentially devastating event such as a cyberattack.
Given this unique context, devising a more explicit reward and penalty system might encourage users to abide by security guidelines more diligently. Instead of relying solely on the intangible benefit of "avoiding harm," incorporating concrete incentives and deterrents, such as monetary rewards or penalties, could bolster the enforcement of secure behaviours.
Research has shown the effectiveness of such approaches. Experiencing a simulated cyber threat, such as a phishing attack, and the subsequent consequences can instil a strong sense of vigilance among users and motivate them to comply with safety protocols (Baillon et al., 2019).
Moreover, when considering rewards, humans generally favour guaranteed, albeit smaller, rewards over the prospect of a larger but uncertain gain. This psychological trait could be used to design reward systems that further encourage adherence to cybersecurity rules.
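A toy calculation makes the certainty effect concrete. In the hypothetical figures below, the gamble has the higher expected value, yet many people still prefer the guaranteed reward; the amounts are invented for illustration.

```python
# Toy illustration of the certainty effect: people often prefer a
# guaranteed smaller reward even when an uncertain alternative has a
# higher expected value. Amounts and probabilities are invented.
def expected_value(amount: float, probability: float) -> float:
    return amount * probability

certain = expected_value(50, 1.0)    # guaranteed 50
gamble = expected_value(120, 0.5)    # 50% chance of 120
```

Here the gamble's expected value (60) exceeds the certain option (50), yet the certain option typically wins in practice, which is why small but guaranteed incentives for secure behaviour can punch above their weight.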
Intriguingly, our response to rewards and penalties is also shaped by our individual personality traits. Therefore, understanding these traits and tailoring the reward-penalty system accordingly could potentially enhance the effectiveness of promoting secure behaviours.
Encouraging a mindset focused on the potential future consequences of one's actions can play a critical role in enhancing cybersecurity behaviours. Often, individuals tend to disregard safety protocols due to an inability to grasp the possible negative impacts that could manifest down the line.
Motivating users to consider the future ramifications of their present actions can not only enhance thoughtful decision-making but also curb impulsive behaviours that often lead to online security risks. The process of contemplating future consequences is intrinsically tied to reflective decision-making (Eskritt et al., 2014). Additionally, it can contribute to the reduction of reckless online behaviours (Bromberg et al., 2015, 2017).
Therefore, incorporating psychological methods that promote a more future-oriented perspective can be instrumental in bolstering secure behaviours in cyberspace (Altintas et al., 2020). This approach could entail the development and dissemination of information campaigns that highlight the potential long-term impacts of neglecting cyber hygiene.
Drawing upon the insights from psychology and user behaviours, CultureAI seeks to revolutionise cyber security by transforming employees from potential security vulnerabilities into vigilant guardians of the network. Our platform integrates seamlessly with existing business tools, allowing for the continuous monitoring of over 40 types of security behaviours, including susceptibility to social engineering and phishing scams, password sharing habits, software update practices, and workstation security.
Rather than relying on employees' self-reported behaviours, CultureAI observes real-world security actions as they unfold, enabling a data-driven approach to human risk management. This behavioural data then informs the creation of employee risk scores, providing a dynamic tool for driving, personalising, and automating your broader cybersecurity awareness and human risk management programmes.
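To make the idea of a behaviour-derived risk score tangible, here is a hypothetical sketch. The event names, weights, and clamping are entirely invented for illustration and do not reflect CultureAI's actual scoring model; they simply show how observed behaviours, both risky and protective, could roll up into a single bounded number.

```python
# Hypothetical behaviour-to-risk-score aggregation. Event names and
# weights are invented for illustration; protective behaviours carry
# negative weights so they reduce the overall score.
EVENT_WEIGHTS = {
    "clicked_phish_link": 30,
    "shared_password": 25,
    "deferred_software_update": 10,
    "reported_phish": -15,  # positive behaviour lowers risk
}

def risk_score(events, floor=0, ceiling=100):
    """Sum weighted behaviour events and clamp to a 0-100 scale."""
    raw = sum(EVENT_WEIGHTS.get(e, 0) for e in events)
    return max(floor, min(ceiling, raw))
```

Because the score is recomputed from observed events rather than self-reports, it stays current as behaviour changes, which is what makes it usable for targeting and automating interventions.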
CultureAI's user-friendly dashboards make it easy to track behaviour changes over time, while its gamified approach, complete with leaderboards and rewards, makes the process engaging for employees. The key lies not in shifting user attitudes, but in providing consistent access to tools that promote behaviour change. With CultureAI, you get a comprehensive, easy-to-implement solution that turns the often daunting task of cybersecurity into an engaging, manageable process.
Further reading on this topic: Cyber Risk Management Platform