A deepfake is a piece of video or audio content that has been manipulated with AI. Deepfakes have only started to gain mainstream interest in the past five years or so, and were originally used as vehicles for various types of internet mischief.
Of course, it didn’t take long for people to realise they can also be used for more sinister purposes. This example shows a highly convincing deepfake of Barack Obama – and you don’t have to stretch your imagination too far to see why deepfakes have some people concerned.
In an age of rapid digital sharing, they could in theory be used to shift stock prices, influence voters or provoke tension between different groups. They could also offer public figures plausible deniability for dubious behaviour, by claiming real videos or recordings as deepfakes.
But what about at the individual level, in social engineering attacks? The bad news is that it’s already happening… and deepfake technology is likely to form a major part of a cybercriminal’s arsenal in the future. The good news is we can also use technology to detect them, if we act fast enough.
Why use deepfakes for phishing?
The goal of all forms of phishing is to trick people into parting with confidential data. Email has always been the go-to channel for phishing, simply because it’s the most widely used and easiest to trick people with. With other communication channels, you can see someone’s face or hear their voice to check if they’re genuine – but deepfakes could change that.
Deepfakes can be used as a powerful enhancement for business email compromise (BEC). The goal of BEC is to bypass traditional security measures by gaining access to an email account within a target organisation – even better if it’s the account of a senior executive. Cybercriminals can trawl social media to make these emails highly believable, picking up on sign-offs, signatures, lines of command, communication style, and even quirks of phrase.
Adding a video or voice deepfake to the mix can make these BEC attacks far more convincing. A cybercriminal might start by gaining access to an email account, then use a WhatsApp voice message, a voicemail, or a quick video call over Teams to follow up. An employee might have question marks over the initial email, but the follow-up would leave them with little doubt the request is genuine (assuming the deepfake is convincing enough).
Cybercriminals follow the numbers – and the more we use a technology, the more they’ll look to exploit it. As everyone knows, remote working has pushed the rise of digital communication into fast forward over the past year. Egress research shows that the use of all digital comms channels has risen since the pandemic started:
- 64% are sending more emails
- 58% are using videoconferencing more
- 58% are using messaging apps such as Teams more
- 35% are using WhatsApp/SMS more
Could a deepfake be used to target my organisation?
The word deepfake comes from a mix of “deep learning” and “fake”. The deep learning part refers to the neural networks behind the technology, which need to be trained on a large dataset of video and audio samples to reach the required level of accuracy. These samples are combined and superimposed onto existing source media – the software can then synthesise a fabricated voice or face that looks seriously convincing.
We saw an earlier example of Barack Obama. Of course, there are hours and hours of video footage and audio samples of Obama online. It might be tempting to think the same thing couldn’t happen to the CEO of your business. However, consider the number of photos and videos that exist of many of us on social media (both professional and personal). Then factor in team photos, webinar recordings, corporate videos, and press interviews that are all freely available on company websites.
On the face of it, you’d assume it would take a lot of technical video and sound editing skills to make a deepfake – but AI tools make it surprisingly easy for non-experts to create them. The more source footage available, the more convincing the final output will be. However, some research estimates that only four seconds of source audio is needed to create a deepfake.
How concerned should IT leaders be about deepfakes?
As it stands, you’re far less likely to be targeted by deepfake phishing than traditional email phishing. But that doesn’t mean there’s no risk. While it’s not something that will be top of IT leaders’ list of concerns right now, it should still be considered. After all, there was a time when spear phishing and whaling weren’t considered huge issues – now they’re endemic threats to data security.
A high-profile case from 2019 offers a prime example. AI was used to mimic the voice of a German conglomerate’s CEO and trick an employee at another business into transferring funds to a fraudulent bank account. Cybercriminals managed to steal almost $250,000 from a U.K.-based energy company with the scam. The victim said it sounded just like the CEO, even down to his slight accent.
Deepfake technology is already pretty good, but it will get better. One proposed method of catching deepfakes was based on the fact that people in the videos didn’t seem to blink properly – but that weakness has already been ironed out with more advanced software. Cybercriminals will be sure to capitalise on the upcoming period of uncertainty where many businesses are caught unprepared.
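That blink-detection approach is worth a closer look, because it shows how deepfake detection heuristics work in practice. One well-known technique from academic research (not any specific vendor’s tool) measures how open an eye is using the eye aspect ratio (EAR), computed from six facial landmarks around the eye; genuine footage shows the EAR dipping regularly as the subject blinks, whereas early deepfakes didn’t. A minimal sketch, assuming the landmarks have already been extracted by an upstream face-landmark model:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye outline, p1..p6,
    with p1/p4 as the horizontal corners (the classic EAR formulation)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Ratio of the two vertical openings to the horizontal width;
    # the value drops towards 0 when the eye closes.
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count blink events: runs of at least min_frames consecutive
    frames where the EAR falls below the threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

Under this heuristic, several minutes of footage with zero blink events would be suspicious. But as noted above, modern deepfake software now synthesises plausible blinking, so a signal like this can no longer be relied on in isolation.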
Can we stop deepfake phishing?
Cybercriminals are relentless when it comes to finding the latest opportunity to exploit – so businesses need to be relentless with updating their defences. First, securing email is vital, as it’s still far and away the most common entry point into a business for cybercriminals. And even when deepfakes are used, they’ll likely be used in tandem with business email compromise.
The best way to detect deepfakes in the future will be to fight fire with fire, using machine learning techniques, which are already the most effective method for stopping business email compromise. Intelligent solutions such as Egress Defend do just that, using machine learning and natural language processing capabilities to catch the subtle signs of social engineering that can worm their way through traditional defences.
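Egress Defend’s internals aren’t public, but the general idea of using language analysis to surface social-engineering signals can be illustrated with a deliberately simple sketch. The cue phrases and weights below are invented for illustration only; a production system would use trained machine learning models rather than a hand-written keyword list:

```python
# Illustrative cue lexicon -- invented phrases and weights,
# not any vendor's real model.
URGENCY_CUES = {
    "urgent": 2, "immediately": 2, "asap": 2, "right away": 2,
    "wire transfer": 3, "bank details": 3, "gift cards": 3,
    "confidential": 1, "don't tell": 3, "before end of day": 2,
}

def social_engineering_score(text):
    """Sum the weights of cue phrases found in the message (case-insensitive)."""
    lowered = text.lower()
    return sum(w for cue, w in URGENCY_CUES.items() if cue in lowered)

def flag_message(text, threshold=4):
    """Flag a message as risky, returning (risky, matched_cues) so an
    employee can be shown *why* it was flagged -- mirroring the
    'explain the risk' approach described above."""
    lowered = text.lower()
    matched = [cue for cue in URGENCY_CUES if cue in lowered]
    return social_engineering_score(text) >= threshold, matched
```

Returning the matched cues alongside the verdict is the key design point here: explaining the flag, rather than silently blocking, is what turns detection into employee education.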
Defend balances security and training, educating employees about why certain emails have been flagged or blocked as risky. It gives employees the nudge they need to make smart security decisions, using technology to turn an organisation’s people into a powerful line of defence – and that’s exactly the approach we’ll need to take to beat deepfakes too.