What can cybersecurity companies do to combat deepfake technology?

Published on
12/12/2019 01:43 PM

There is something very unsettling about deepfake technology. It's not just the discomfort of seeing Steve Buscemi's face swapped onto Jennifer Lawrence's body; the technology's capabilities are something we should all be cautious of.

Deepfakes are like movie matting and face-swapping on steroids. Making an actor appear in two places at once on screen harms nobody; unfortunately, fraudsters can use deepfakes to create doctored videos or audio recordings for malicious ends. Only recently, the Wall Street Journal reported that a CEO's voice was 'deepfaked' in a phone call demanding that €220,000 be transferred to the bank account of a Hungarian supplier. The fraudsters used artificial intelligence (AI) to mimic the CEO's voice to a believable standard, lending the fraudulent request an air of authenticity.

How does it work?

The term 'deepfake' is a blend of deep learning and fake. Deepfake AI uses a generative adversarial network (GAN), in which two algorithms battle it out against each other: a 'generator' and a 'discriminator'. The generator creates a 'fake', while the discriminator tries to detect it. Over time, the two keep going in a self-reinforcing loop, learning from each round and gradually getting better at faking and at identifying. Eventually, the generator produces a fake that the discriminator can't detect, and there you have your deepfake.
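To make that loop concrete, here is a minimal sketch of adversarial training in PyTorch on toy one-dimensional data. The network sizes, noise dimension, and 'real' data distribution are illustrative assumptions, not any actual deepfake system; real deepfakes use far larger image and audio models built on the same principle.

```python
# A minimal generator-vs-discriminator training loop (GAN) on toy 1-D data.
# Everything here is an illustrative assumption, not a real deepfake pipeline.
import torch
import torch.nn as nn

NOISE_DIM = 16  # size of the random input the generator starts from

# Generator: turns random noise into a fake "sample" (here, a single number).
generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: N(3, 0.5)
    fake = generator(torch.randn(64, NOISE_DIM))   # generator's current fakes

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Each side's improvement is the other side's training signal, which is why the fakes keep getting harder to spot.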

What do we do?

Sadly, the swindled CEO is probably the first of many, and businesses will remain likely targets for fraudsters, so cybersecurity solutions have become more urgent than ever. One approach is to fight fire with fire: some cybersecurity companies, such as ZeroFOX, are using AI to detect deepfakes. ZeroFOX applies AI-enabled computer vision and video analysis to speed up the otherwise time-intensive process of investigating millions of pieces of potentially doctored media. Furthermore, its new open source toolkit, Deepstar, is now available to help the community build and test detection techniques. Many other cybersecurity companies are still in the very early stages of working out how best to combat such a new problem.
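As a rough sketch of what frame-level video analysis looks like in practice: sample frames from a clip, score each with a binary real/fake classifier, and average the scores. This is a generic outline under stated assumptions, not ZeroFOX's or Deepstar's actual pipeline, and `load_detector` below is a hypothetical placeholder for a properly trained model.

```python
# Hypothetical frame-sampling deepfake scorer. `load_detector` is a
# stand-in; a real system would load a CNN trained on labelled real
# and deepfaked frames.
import cv2
import torch

def load_detector():
    # Placeholder model: flattens a 224x224 RGB frame and outputs a
    # probability. Purely illustrative; untrained as written.
    return torch.nn.Sequential(
        torch.nn.Flatten(),
        torch.nn.Linear(224 * 224 * 3, 1),
        torch.nn.Sigmoid(),
    )

def fake_probability(video_path: str, every_n: int = 30) -> float:
    detector = load_detector().eval()
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:                      # sample every Nth frame
            frame = cv2.resize(frame, (224, 224))
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = torch.from_numpy(rgb).float().div(255).unsqueeze(0)
            with torch.no_grad():
                scores.append(detector(tensor).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```

Sampling frames rather than scoring every one is what makes triaging millions of videos tractable.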

Elsewhere, preventative measures are being explored, such as watermark software to indicate authenticity and anti-virus-style scanning software. However, detection remains the priority, as, realistically, anyone can make deepfakes from the comfort of their own home, and there is little to deter them from doing so. At the time of writing, most jurisdictions have no laws making deepfakes illegal. However, earlier this year the state of Virginia became the first to criminalise them, passing an amendment aimed specifically at revenge porn that expands the offence to include "falsely created" material.

Both cybersecurity companies and legislators are under immense pressure to protect businesses and individuals alike from deepfakes. Until they catch up, businesses will have to sit tight and hope that they aren't next.
