How flood attacks build fake trust and poison AI models

by Marcus White
Published on 10th Nov 2022

AI is a valuable cybersecurity tool for discovering new, unknown threats that evade detection by other technologies. It can speed up and improve the accuracy of incident investigation, and reduce user friction. In a survey of 800 IT leaders carried out for our recent ‘How to separate cybersecurity hype from reality’ report, 77% told us they’re already using a cybersecurity product with AI.

The potential of AI to improve cybersecurity is well established, but what about the security of AI itself?

Like any technology, AI also presents an attack surface. It’s important to understand the risks associated with AI-based solutions, as well as the benefits vendors are claiming. There are technical challenges when it comes to securing AI: adversarial manipulation of a system, or poisoning of the data used to train its algorithms, can lead to incorrect, unpredictable, or biased outcomes. Flood attacks are a popular method attackers use to poison AI models.

What’s a flood attack?

There are so many benefits to using AI correctly that a ‘competitor response’ from attackers was inevitable. Many of the toolkits sold on the crime-as-a-service marketplace have tools specifically built in to attack AI models. When we talk about attackers trying to ‘poison’ AI models, we essentially mean they’re trying to train them in a negative way.

Flood attacks are a relatively straightforward way to negatively train a model. Attackers will send non-malicious communications in an attempt to build up trust before sending a dangerous email. They’ll either start benign conversations with many people in an organization or send hundreds of communications to just one individual.

This ‘flood’ of non-malicious emails is designed to manipulate AI models into thinking certain patterns of communication are normal, or certain relationships are stronger and more established than they really are. It’s an attempt to soften the defenses of an AI-based solution and increase the chances of the malicious email getting through when it’s eventually sent.  
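
To make that dynamic concrete, here’s a deliberately simplified sketch in Python. The scoring function, threshold, and numbers are all invented for illustration; they’re not how any real product works, but they show how a naive sender-trust score can be inflated by a flood of benign emails until a risky message slips through.

```python
# Toy illustration: how a flood of benign emails can inflate a naive
# sender-trust score before a single malicious message is sent.
# All names and thresholds here are hypothetical, not a real product's logic.

def sender_trust(benign_history_count: int) -> float:
    """Naive trust score that grows with the number of prior benign emails."""
    return min(1.0, benign_history_count / 50)

def naive_verdict(content_risk: float, trust: float, threshold: float = 0.5) -> str:
    """Flag an email only if its content risk outweighs accumulated trust."""
    return "block" if content_risk - trust > threshold else "allow"

# Day 1: unknown sender, clearly risky content -> blocked.
print(naive_verdict(content_risk=0.9, trust=sender_trust(0)))    # block

# After a 'flood' of 100 harmless emails, the same risky content slips through.
print(naive_verdict(content_risk=0.9, trust=sender_trust(100)))  # allow
```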

How flood attacks poison AI models

There are certain risks and weaknesses that need to be considered in terms of data input/output, data bias, attacker evolutions, and the robustness of models. The following components of AI products are all highly useful in cybersecurity and will be quoted as positives in marketing materials. However, it’s important to understand their weaknesses and how attackers try to exploit them.

Social graphing

In a social graph, sets of nodes represent how individuals and endpoints are interconnected: for example, how often people communicate and via which devices. This offers a useful way of establishing relationships and can give an indication of trust. Flood attacks manipulate social graphs by sending many benign emails in an attempt to fabricate what looks like an established ‘relationship’ with the target.
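
As a rough illustration of the idea (not any vendor’s actual implementation), the sketch below builds a tiny social graph from email counts and shows how a flood of benign messages fabricates an apparently strong relationship. The addresses and weighting scheme are made up for the example.

```python
from collections import defaultdict

# Minimal social graph: edge weight = number of emails sent between two parties.
# The 'relationship strength' heuristic is a made-up illustration, not real scoring.
graph = defaultdict(int)

def record_email(sender: str, recipient: str) -> None:
    graph[(sender, recipient)] += 1

def relationship_strength(a: str, b: str) -> int:
    # Treat communication in either direction as evidence of a relationship.
    return graph[(a, b)] + graph[(b, a)]

# An attacker floods one target with benign messages...
for _ in range(200):
    record_email("attacker@external.example", "cfo@victim.example")

# ...and the graph now suggests a well-established relationship.
print(relationship_strength("attacker@external.example", "cfo@victim.example"))  # 200
```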

Natural language processing (NLP) techniques

NLP is all about understanding the ways people communicate and building models from them, which is very valuable for email security. It evaluates both the content and context of an email, providing a broader range of information to distinguish between malicious and benign content. In flood attacks, cybercriminals will pad out their messages with ‘good’ content in an attempt to appear safe ahead of the social engineering or malicious content coming later.
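
The dilution effect is easy to see with a deliberately simplistic ratio-style scorer. Real email-security NLP is far richer than a keyword count, but padding a message with benign text drags down any score based on the proportion of suspicious content.

```python
# Simplistic illustration of content dilution. Real email-security NLP uses far
# richer features, but the padding effect on a ratio-style score is similar.
SUSPICIOUS_TERMS = {"urgent", "wire", "transfer", "password", "invoice"}

def keyword_risk(text: str) -> float:
    """Fraction of words that look suspicious."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in SUSPICIOUS_TERMS)
    return hits / len(words)

short_lure = "Urgent wire transfer needed, send password now"
padded_lure = short_lure + " " + " ".join(["thanks for the great meeting last week"] * 20)

print(round(keyword_risk(short_lure), 2))   # ~0.57: mostly suspicious content
print(round(keyword_risk(padded_lure), 2))  # ~0.03: same payload, heavily diluted
```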

Anomaly detection

Anomaly detection works by defining what ‘normal’ looks like and then detecting deviations from it. Attackers want to poison the AI model by changing the definition of normal. Flood attacks can be used to build up a fake baseline of trust and ‘normal’ contact between attacker and target before a malicious communication is eventually sent.
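
A toy example makes the baseline shift visible. The detector below is a simple z-score check on hypothetical daily message counts from one sender; it’s nothing like a production anomaly model, but it shows how injected data points can redefine ‘normal’.

```python
import statistics

# Toy anomaly detector: flag a day's message volume if it sits more than
# three standard deviations from the historical mean. Purely illustrative.
def is_anomalous(history: list[float], value: float, z_limit: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid dividing by zero
    return abs(value - mean) / stdev > z_limit

normal_days = [2, 1, 3, 2, 2, 1, 3, 2]         # typical contact from a sender
print(is_anomalous(normal_days, 40))            # True: 40 messages is clearly odd

# After a flood attack injects weeks of inflated 'benign' activity...
poisoned_days = normal_days + [35, 40, 38, 42, 39, 41] * 5
print(is_anomalous(poisoned_days, 40))          # False: 40 now looks 'normal'
```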

Defending against flood attacks

The above tactics show why it’s important for organizations to know how much drift there is in the models their AI product uses. If an attacker messages someone every day, would that normalize the contact? If they injected hundreds of data points, could that skew the model and change what an anomaly looks like?

AI products need to recognize the techniques attackers use in flood attacks and feed them back into the model; they need to find a way to benefit from attackers trying to be clever. By detecting these attempts and including them in the model, you can prevent the model from being poisoned in the future and enhance security.

Flood attacks can be combated by not letting senders gain trust in the model too quickly. Verifying the content itself can also confirm that conversations are genuinely meaningful. Zero trust models will evaluate the content of an email with the same scrutiny, regardless of who it’s from. You need a product that will catch the signs of social engineering in a one-off email, no matter how many benign emails have come from that account in the past.
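
One way to express that zero trust principle is sketched below. The function names, cue list, and thresholds are illustrative assumptions; the point is that sender history is only ever allowed to make the verdict stricter, never more lenient.

```python
# Sketch of the zero trust idea described above: content is scored on its own
# merits, and a sender's history never lowers the scrutiny it receives.
def content_risk(email_text: str) -> float:
    """Stand-in for a real content model; returns a rough score in [0, 1]."""
    cues = ("urgent", "gift card", "wire transfer", "verify your password")
    return min(1.0, sum(cue in email_text.lower() for cue in cues) / 2)

def zero_trust_verdict(email_text: str, sender_is_new: bool) -> str:
    risk = content_risk(email_text)
    if sender_is_new:
        risk = min(1.0, risk + 0.1)   # unfamiliar senders get extra scrutiny
    return "quarantine" if risk >= 0.5 else "deliver"

# The same social-engineering lure is caught whether or not the sender has
# spent weeks building a benign history first.
lure = "Urgent: wire transfer needed today, please verify your password"
print(zero_trust_verdict(lure, sender_is_new=True))   # quarantine
print(zero_trust_verdict(lure, sender_is_new=False))  # quarantine
```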

AI can be valuable – but it’s important to know how to separate hype from real-world cybersecurity value. Our ‘AI: Cybersecurity hero or unnecessary risk?’ report covers the points raised in this blog in more detail, along with a guide to the questions you should be asking cybersecurity vendors about their AI products. Download the full report here.