
When Seeing (And Hearing) Isn’t Believing: Deep Fakes & Digital Deception – Part I

Following high-profile controversies around events such as the 2016 US election, the storming of the Capitol and the Brexit campaign, the terms ‘misinformation’ (false or misleading information spread without deliberate intent to deceive) and ‘disinformation’ (falsehoods designed to deceive) have entered public discourse in a way never seen before. Within these conversations, the term ‘deep fake’ is often used to describe online content, but its meaning is sometimes misunderstood.


Deep fakes lie at the extreme end of the digital deception spectrum: they are digitally altered media in which a person in an existing image or video is replaced with someone else’s likeness. The aim may be to graft a celebrity’s face onto the body of a pornographic actor, or to superimpose a third party’s lip movements onto a video of someone’s face, making it appear that they are saying something entirely different. Early attempts looked amateurish and were easily detected, but in recent years the technology has evolved rapidly and the best modern deep fakes are all but impossible to detect with the naked eye.


But how did deep fakes come about and how are they produced? What are the potential issues with them? And how, or with what tools, can analysts detect them?

Synthetic audio-visual media can be generated using a variety of deep learning techniques, the most prominent being Generative Adversarial Networks (GANs). A GAN pits two artificial neural networks against each other: a “generator” and a “discriminator”. The generator produces sample outputs that mimic an underlying dataset and tries to pass them off as genuine; the discriminator, which has been trained on that same dataset, evaluates each output and feeds back how convincingly it replicates the real data. Because the two networks are adversaries, each round of this contest forces the generator to improve, iteratively refining the fake. The end result is a highly realistic video or audio clip. The process depends on an underlying dataset existing in the first place, and the bigger it is, the better. This is why deep fakes of public figures, for whom vast amounts of footage are available, are far more convincing than those of people with only limited footage online, although as the technology matures GANs can increasingly produce convincing results from much smaller datasets.
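
To make the adversarial loop concrete, the sketch below (a minimal illustration, assuming PyTorch is available) trains a toy generator and discriminator on a stand-in one-dimensional dataset. The network sizes, learning rates and data are placeholders rather than anything used in real deep fake tools, but the generator-versus-discriminator structure is the same.

```python
# Minimal GAN training loop (illustrative only): the generator learns to
# mimic a simple 1-D Gaussian "dataset" while the discriminator learns to
# tell real samples from generated ones. Real deep fake pipelines use far
# larger convolutional networks and image/video data, but the adversarial
# structure is the same.
import torch
import torch.nn as nn

LATENT_DIM = 16   # size of the random noise the generator starts from
DATA_DIM = 1      # toy "real data": samples from a 1-D Gaussian

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),   # outputs probability "this is real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: push real samples towards 1, fakes towards 0.
    real = torch.randn(64, DATA_DIM) * 0.5 + 3.0          # stand-in dataset
    fake = generator(torch.randn(64, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator output 1 for fakes.
    fake = generator(torch.randn(64, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In real systems the same loop runs over large convolutional networks and hours of video footage, which is why the amount of available source material matters so much to the quality of the final fake.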


This technology is not inherently malicious. For example, it allowed food delivery company ‘Just Eat’ to adapt its TV advert featuring Snoop Dogg for the Australian market (where the company trades as ‘Menulog’) without re-shooting the whole advert, saving both time and money. Yet it is the nefarious intent behind most deep fakes, concealing the liar’s creative role in the deception, that poses the biggest threat, and that threat is particularly acute when political figures are the subject. For example, earlier this year Facebook had to remove a deep fake of President Zelensky calling on Ukrainian soldiers to surrender to Russia at a critical point in the war.[1] Even though it was quickly detected, a well-timed deep fake can inflict its damage before the deception is exposed.

Paradoxically, building public resilience against deep fakes by raising awareness of their existence also undermines the credibility of authentic media. This has two main consequences. First, it creates a “liar’s dividend”: in a world where everything can be faked, people can falsely dismiss authentic media as fake, exploiting the known existence of deep fakes to evade accountability. Second, any discrepancy in legitimate video footage can be explained away by characterising the whole video as a deep fake. In one extreme case, this contributed to an attempted military coup in the West African country of Gabon. The President, Ali Bongo, had been absent for months after suffering a severe stroke, but the government told neither the public nor, it appears, members of the military. To quell growing speculation that Bongo had died, the President was rolled out to address the nation; the stroke, however, had altered his appearance. Aware that deep fakes existed and believing the video to be one of them, part of an attempt to cover up Bongo’s death, a group of Gabonese military officers launched a coup attempt.[2]


Raising public awareness is therefore a double-edged sword: knowledge of how deep fakes are created can be exploited by hostile actors to deceive their adversaries, yet that same knowledge helps investigators identify and combat digital manipulation. How analysts confront this problem will be explored in Part II…

[1] https://news.sky.com/story/ukraine-war-deepfake-video-of-zelenskyy-telling-ukrainians-to-lay-down-arms-debunked-12567789
[2] https://www.washingtonpost.com/politics/2020/02/13/how-sick-president-suspect-video-helped-sparked-an-attempted-coup-gabon/
