Kiris Group Intelligence Security

When Seeing (And Hearing) Isn’t Believing: Deep Fakes & Digital Deception – Part II

There have already been instances of malicious actors creating deep fakes to deceive their adversaries. It is important to note, however, that knowledge of how deep fakes are made can help investigators and intelligence analysts distinguish a genuine piece of media from one that has been manipulated, or created from scratch.

There may be tell-tale signs in a file's metadata (the small tags of data that describe other data). These can reveal digital fingerprints such as the time of day an image or video was taken, the device used, and whether the file was subsequently opened in an editor. This is not foolproof, though, since metadata itself can also be manipulated. Further, the EXIF data attached to images is increasingly stripped from open sources – including as a matter of course when files are uploaded to social media or messaging apps, precisely where deep fakes are likely to be found. This presents further challenges for the investigator.
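As an illustration of the kind of check involved, the presence (or suspicious absence) of an EXIF segment in a JPEG can be established by walking the file's marker structure. The sketch below is a minimal, standard-library-only example; the function name and the choice to report only the segment's offset are our own illustrative decisions, not a reference to any particular forensic tool:

```python
def find_exif_offset(data: bytes):
    """Return the byte offset of a JPEG's APP1/EXIF segment, or None.

    A missing EXIF block is not proof of manipulation (uploads routinely
    strip it), but its absence, or oddities within it, are worth noting.
    """
    if data[:2] != b"\xff\xd8":          # SOI marker: not a JPEG at all
        return None
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:              # lost sync with the marker stream
            return None
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):       # EOI / start of scan: no EXIF found
            return None
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return i                     # APP1 segment carrying EXIF
        i += 2 + length                  # skip marker (2 bytes) + segment body
    return None
```

A real investigation would go on to parse the EXIF tags themselves (timestamps, device model, editing software), but even this presence check helps triage files whose metadata has been scrubbed.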

Alongside the most advanced deep fakes – those combining synthetic video and audio – the two component parts weaponised on their own can still have devastating consequences. We previously wrote about an FSB assassin who fell victim to social engineering and revealed details of his unsuccessful attempt to poison Alexei Navalny on a phone call – with Navalny himself on the other end of the line, posing as a senior FSB official. The weaponisation of synthetic voice cloning has further expanded the scope of such social engineering. In several cases, fraudsters have spoofed voices to gain access to corporate systems and steal money. In one instance, a bank manager believed he was speaking to the director of a company (with whom he had spoken before) to which the bank was preparing to lend money for a potential acquisition, and began transferring funds when told the acquisition was going ahead. In fact, he had been duped by criminals using an audio deep fake, and the transfers landed in those criminals' accounts.

It has sometimes been possible for analysts to verify the authenticity (or otherwise) of a suspected deep fake by measuring shadows and tracking the position of the sun to assess whether they tally with the time of day the footage was allegedly captured. Reverse image searching also remains a useful tool for analysts seeking to identify deep fakes. It may lead back to the original source material from which a fake was generated, although, because the images incorporated into the deep fake have themselves been manipulated, that source may not be easy to identify. Still, as image search algorithms grow more sophisticated, individual components of an image may be matched, providing useful leads for an analyst.
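Image search engines of this kind typically rely on perceptual hashes, which change only slightly when a picture is resized, recompressed, or partially manipulated – which is why a doctored image can still match its source. A toy version of one common scheme (an "average hash" over a small grayscale grid) gives the flavour; the grid size and the use of Hamming distance here are standard for this family of hashes, but everything else is our own simplified sketch:

```python
def average_hash(pixels):
    """Average hash of a small grayscale grid (list of rows, values 0-255):
    each bit records whether a pixel is brighter than the grid's mean.
    Small edits move few bits, so near-duplicates stay close."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests a shared source image."""
    return sum(a != b for a, b in zip(h1, h2))
```

Production systems use larger grids (typically 8x8 or more) and frequency-domain variants, but the principle is the same: a manipulated region changes some bits of the hash, while the untouched remainder still pulls the fake towards its original.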

In some cases, a deep fake simply looks distorted and can be identified as computer-generated through careful observation with the naked eye. The emotion conveyed by the face may be inconsistent with the audio track, and breathing rates can deviate from human norms. Beyond this, a growing arsenal of algorithmic detection tools is being developed and continually improved to detect deep fakes automatically. Some, for example, compare the forensic traces of two parts of an image or video and produce a measure of how similar or dissimilar they are. However, the effectiveness of these tools varies considerably, and many still cannot achieve a detection rate high enough to keep pace with the proliferation of deep fakes.
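The "compare forensic traces of two regions" idea can be caricatured in a few lines: estimate the noise residual in each region and compare their variances, since spliced-in or synthesised content often carries different noise characteristics from the surrounding frame. This is a deliberately simplified stand-in for real forensic detectors, and every name and measure in it is our own illustrative assumption:

```python
import statistics

def noise_residual(row):
    """Crude high-pass filter: each pixel value minus its left neighbour,
    leaving mostly sensor noise and fine texture."""
    return [b - a for a, b in zip(row, row[1:])]

def trace_similarity(region_a, region_b):
    """Ratio of residual variances, in [0, 1]; low values hint that the two
    regions came through different capture pipelines, e.g. a splice."""
    va = statistics.pvariance(noise_residual(region_a))
    vb = statistics.pvariance(noise_residual(region_b))
    hi = max(va, vb)
    return min(va, vb) / hi if hi else 1.0
```

Real detectors work on two-dimensional patches with learned noise models rather than a single ratio, which is part of why their accuracy varies so much in practice.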

There is undeniably an ongoing arms race between the producers of deep fakes and investigators. Early in the history of deep fakes, for instance, researchers noticed that the subjects did not blink, and developed a machine-learning algorithm to track eye movement and flag fake content. Only a week after they published their findings, however, deep fake creators had "fixed" this problem and included blinking in their new fake videos. This arms race will almost certainly continue for as long as incentives remain – financial, political, or simply personal amusement – for those who wish to create and disseminate deep fakes.
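To make the blink-detection idea concrete, one widely used building block is the eye aspect ratio (EAR), computed from six landmarks around each eye: it sits around 0.2-0.4 while the eye is open and drops towards zero during a blink, so a subject whose EAR never dips over many frames is suspect. The sketch below assumes the landmarks have already been produced by a face-tracking model (which we do not show), and is an illustration of the general approach rather than the specific algorithm the researchers published:

```python
import math

def eye_aspect_ratio(pts):
    """EAR from six (x, y) eye landmarks ordered around the eye:
    (vertical gap 1 + vertical gap 2) / (2 * horizontal width).
    A value that never drops low across a long clip suggests no blinking."""
    v1 = math.dist(pts[1], pts[5])   # upper-to-lower lid, inner pair
    v2 = math.dist(pts[2], pts[4])   # upper-to-lower lid, outer pair
    h = math.dist(pts[0], pts[3])    # eye corner to eye corner
    return (v1 + v2) / (2 * h)
```

The "fix" the fakers applied simply ensured their training and generation pipelines produced frames with periodically low EAR, which is why detection methods built on any single cue tend to have short lifespans.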
