Rule breakers can be divided into two categories: those with the best of intentions and those without.
Let’s look at the first group. Unlike mistake makers, these rule breakers are aware that what they’re doing goes against security policy, but they choose to take the risk anyway. At the same time, they have the best of intentions. They’re not seeking to harm the organization or its clients; they’re usually just trying to get their jobs done.
There are several factors behind this. The first is “decision latitude”: the level of freedom employees have to make decisions and act independently.
The more freedom an employee has, the less control the organization has. With greater freedom comes increased risk that an employee will make a choice that puts data at risk. That’s because other factors can come into play. The Dunning-Kruger effect is a psychological phenomenon in which someone overestimates their expertise or knowledge. Employees may have received some cybersecurity training and believe they know when it’s “ok” to take a risk, even if it breaks the rules.
But there’s a balance to strike with decision latitude: reduce freedom too much, and employees will find workarounds to any policy that makes their lives too difficult. That’s because a cost-benefit analysis occurs. The employee weighs the effort or time needed to follow the policy (the cost) against the need to protect data (the benefit). Too often, employees see the cost as falling entirely on them and the benefit as belonging entirely to the organization, and choose to tip the scales in their favor.
And regardless of where you fall on the decision latitude scale, social proof will also factor in. If all or most colleagues typically behave in a certain way, it becomes more difficult for individuals to go against the flow. This justification can also tap into a person’s cognitive dissonance: the way they reconcile breaking the rules with their own moral code. For example, “it’s ok for me to do this because my boss does it too”, or “cutting this security corner makes me more efficient, which is best for the organization”.
Finally, sometimes people break the rules for personal gain. This can cover a broad spectrum of incidents. A familiar one is employees taking data, such as customer lists or project files, to get ahead in their new job. Many of the same psychological factors are at play here as well, and employees will usually look to justify their actions by claiming ownership over data, for example: “I built the customer list, so it’s ok if I take it.”
Occasionally, people are approached by competitors or nation states who offer them money for sensitive data, or they steal data and offer it to cybercriminals in exchange for financial reward. In these incidents too, there is typically a secondary motivator that helps justify the actions: poor performance reviews, feelings of being underappreciated, or a pressing financial concern that can be alleviated with the payment on offer.
What impacts do rule breakers have?
Our research shows IT leaders are most scared of malicious data breaches, with almost half (43%) saying a single malicious incident would have greater negative consequences for their organization than one caused by phishing or human error.
That’s because malicious attacks are typically planned and executed to do the most damage. The employee understands how to navigate the systems and controls in place to obtain the data they want, and they have very clear intentions for it.
When it comes to impacts for the individuals, those who break the rules non-maliciously are most likely to be formally disciplined, while those who leak data for personal gain are most likely to get fired or leave voluntarily.
What can be done to stop rule breakers?
When people break the rules, they don’t intend to get caught. So whether they’re well-meaning or maliciously motivated, these incidents can be difficult to detect.
Human layer security uses intelligent technologies to deeply understand an individual user’s behavior in the context of organizational and industry security standards. It can then detect when an employee is behaving abnormally – such as sending an email to a webmail address – and block the content from being sent, while alerting administrators to the incident.
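To make the idea concrete, here is a deliberately simplified sketch of the kind of check such a tool might apply. This is an illustrative toy rule, not Egress’s actual implementation: the domain list, function name, and contact-history set are all assumptions. Real human layer security builds a statistical model of each user’s behavior rather than a static list.

```python
# Toy illustration: flag an outbound email as abnormal if the recipient
# is on a consumer webmail domain AND the sender has no history of
# emailing that address. All names and the domain list are hypothetical.

WEBMAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

def is_abnormal(sender_history: set, recipient: str) -> bool:
    """Return True if the recipient looks like personal webmail and
    falls outside the sender's established set of contacts."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    return domain in WEBMAIL_DOMAINS and recipient not in sender_history

# Example: an employee who normally emails colleagues suddenly sends
# a file to a personal Gmail address.
history = {"alice@acme-example.com", "bob@acme-example.com"}
print(is_abnormal(history, "me.personal@gmail.com"))  # True  -> block and alert
print(is_abnormal(history, "bob@acme-example.com"))   # False -> allow
```

In practice the decision would draw on many more signals (time of day, attachment contents, sending patterns across the organization), but the shape is the same: compare the current action against the user’s normal behavior and intervene on outliers.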
When data loss is designed to evade your perimeter, protection must be delivered at a personal level to block incidents.
Find out how Egress Prevent stops intentional data loss by email.