The AI arms race between attackers and defenders

by Egress
Published on 22nd Aug 2022

Artificial intelligence (AI) is widely used in cybersecurity to identify threats. Sophisticated algorithms and predictive intelligence can detect malware, recognize attack patterns, and stop attacks before they cause damage. 
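
To illustrate the defensive side, here's a minimal sketch of anomaly-based detection using scikit-learn's IsolationForest. The file features (byte entropy, size, import count) and their values are hypothetical stand-ins for what a real pipeline would extract.

```python
# Minimal sketch: anomaly-based malware detection with scikit-learn.
# The feature vectors are hypothetical stand-ins for properties
# extracted from files (e.g., byte entropy, size in KB, import count).
import numpy as np
from sklearn.ensemble import IsolationForest

# Train only on features from known-benign files.
benign = np.array([
    [5.1, 120, 14],
    [4.8,  95, 10],
    [5.3, 140, 12],
    [4.9, 110, 11],
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(benign)

# Score new files: 1 means "looks benign", -1 means "anomalous".
new_files = np.array([
    [5.0, 115, 13],  # similar to the benign baseline
    [7.9, 900, 45],  # packed/encrypted-looking outlier
])
print(detector.predict(new_files))  # likely [ 1 -1 ]
```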

However, the widespread use of AI is a double-edged sword. The rise in remote working over the past couple of years has prompted attackers to find new ways of leveraging AI to target organizations. 

The rise of artificial intelligence as-a-service (AIaaS)

Cybercriminals are leveraging AI and machine learning technologies to improve their automation and targeting capabilities. Once they have perfected these tools, they package them into “as-a-service” business models and sell them to other criminal groups. 

This means that automated large-scale, targeted attacks are becoming increasingly common – and defenders are struggling to keep up. 

AIaaS allows cybercriminals to launch malware attacks, engage in cyber extortion, launch distributed denial of service (DDoS) attacks, install keyloggers, steal money from digital wallets and banks, and send mass phishing emails. 

The price of these AIaaS solutions varies widely. For instance, research by Dark Reading has shown that, at the lowest end, basic phishing-as-a-service solutions can be accessed for $0.15-$15 per month. 

On the other end of the spectrum, you’ll find groups like The Shadow Brokers (TSB) – the hacker group linked to the 2017 leak of hacking tools belonging to the US National Security Agency. TSB has charged customers up to $23,000 per month for a data dump service that gives subscribers access to tools stolen from the US government. 

How cybercriminals are leveraging AI to make attacks more effective

There are many ways cybercriminals weaponize AI to improve the effectiveness of their attacks. Below are some of the most prevalent:

Mimicking secure, trusted systems using malware 

AI has allowed cybercriminals to develop sophisticated malware that infects and encrypts an organization’s files, which they then hold hostage until the organization pays a ransom. Over the past few years, ransomware families including Locky, WannaCry, NotPetya, and Cerber have wreaked havoc on organizations and consumers. 

However, paying the ransom doesn’t guarantee that the information will be recovered. In fact, 92% of organizations that pay the ransom do not receive all of their data back. 
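
While the article doesn’t describe defensive internals, one common heuristic for spotting encryption-based attacks is worth sketching: well-encrypted data looks statistically random, so a file’s Shannon entropy jumping toward the maximum of 8 bits per byte after modification is a red flag. The threshold below is a hypothetical tuning parameter.

```python
# Toy entropy heuristic sometimes used to spot ransomware activity:
# encrypted files have near-random byte distributions, so their
# Shannon entropy approaches 8 bits per byte.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0..8)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # Hypothetical threshold; real products combine entropy with
    # many other signals to limit false positives.
    return shannon_entropy(data) > threshold

print(looks_encrypted(b"hello world " * 100))  # False: repetitive text
print(looks_encrypted(os.urandom(4096)))       # True: random bytes
```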

Data poisoning

Cybercriminals can corrupt the datasets used to train AI models, manipulating and altering the training data so that the model learns to produce incorrect results. This reduces the accuracy of an organization’s system, which can damage its reputation. 

These attacks are typically hard to trace, and deep AI expertise is required to identify potential issues with the training data. Identifying them is especially difficult if an unsupervised learning model has been used. 
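
To make the risk concrete, the toy sketch below flips a fraction of the labels in a synthetic training set (“label flipping”, one simple form of poisoning) and shows how a model trained on the corrupted data loses accuracy on clean test data. The dataset and model are illustrative, not drawn from any real incident.

```python
# Toy data-poisoning sketch: flipping a fraction of training labels
# ("label flipping") degrades a model trained on the poisoned set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), int(flip_fraction * len(y_poisoned)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the chosen binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)     # accuracy on clean test data

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} of labels flipped -> "
          f"accuracy {accuracy_with_poisoning(frac):.2f}")
```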

Input attacks

Input attacks alter the input fed into a system to cause it to malfunction. Criminals can design these attacks to trigger at a specific time – even several months after the malicious change has been planted. 
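
A well-documented example of an input attack is the adversarial example, in which a small, carefully crafted perturbation makes a model misclassify an input. The sketch below applies the fast gradient sign method (FGSM) to a toy logistic model; the weights and input values are invented for illustration.

```python
# Minimal FGSM-style input attack against a toy logistic model:
# perturb the input in the direction that increases the model's loss.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a binary classifier.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.2, -0.4, 0.9])  # a legitimate input, true label y = 1
y = 1.0

p = sigmoid(w @ x + b)
# Gradient of binary cross-entropy loss w.r.t. the input x is (p - y) * w.
grad_x = (p - y) * w

eps = 0.5                          # perturbation budget
x_adv = x + eps * np.sign(grad_x)  # FGSM step: nudge input to raise loss

print("original score:", sigmoid(w @ x + b))         # ~0.84, class 1
print("adversarial score:", sigmoid(w @ x_adv + b))  # ~0.41, pushed to 0
```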

How AI is leading to an increase in phishing attacks 

Phishing is one of the most pervasive types of cybercrime, becoming more prevalent and sophisticated every month. It’s largely a numbers game, which also makes it one of the most affordable AIaaS options. The rise of AI makes it easier for cybercriminals to reach more victims. 

Natural language processing (NLP) is a branch of AI designed to help computers understand how we write and speak. It’s typically used to improve the effectiveness of automated customer service applications, and it can also be used to detect phishing emails.
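
As a rough illustration of this defensive use, the toy sketch below trains a simple text classifier on a handful of invented emails using TF-IDF features and Naive Bayes. Real detectors train on large labeled corpora and combine many more signals.

```python
# Toy NLP phishing detector: TF-IDF features + Naive Bayes.
# The tiny training set is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password now",
    "Urgent: confirm your bank details to avoid closure",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Please verify your password immediately"]))  # likely [1]
```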

However, cybercriminals are also leveraging NLP to create better phishing emails. While mass phishing emails are quick and easy to compose, targeted spearphishing emails are significantly more labor-intensive. 

Last year, researchers discovered that cybercriminals are leveraging the deep learning language model GPT-3, combined with other AIaaS platforms, to create large-scale spearphishing campaigns quickly. Because these campaigns are so sophisticated, they are difficult for defenders to combat. 

Defending your organization against AI attacks

Researchers are continuing to study AIaaS tools to find ways to detect and mitigate attacks. However, the AIaaS economy is moving quickly, and cybercrime is becoming increasingly consumerized.

As a result, it’s left up to individual organizations to assess their own security defenses and strengthen their infrastructure. 

Learn how we can secure the number one risk vector – email. Request your personalized Intelligent Email Security demo.