
When Seeing (And Hearing) Isn’t Believing: Deep Fakes & Digital Deception – Part II

  • isobella52
  • Dec 19, 2022
  • 3 min read

Malicious actors have already used deep fakes to deceive their adversaries, but knowledge of how these fakes are made can help investigators and intelligence analysts distinguish a genuine piece of media from one that has been manipulated, or created from scratch.


There may be tell-tale signs in a file's metadata (the small tags of data that describe other data). These can yield digital fingerprints such as the time of day an image or video was taken, the device that captured it, and whether it was subsequently opened in an editor. This approach is not foolproof, though, since metadata can itself be manipulated. Moreover, EXIF data attached to images is increasingly stripped from open sources – including as a matter of course when files are uploaded to social media or messaging apps, which is precisely where deep fakes are likely to surface. This presents further challenges for the investigator.
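By way of illustration, a minimal Python sketch of this kind of metadata triage might look like the following, using the Pillow library to parse the EXIF block (the filename "suspect.jpg" is a placeholder for the file under review):

```python
# A minimal sketch of EXIF triage using the Pillow library.
# "suspect.jpg" is a placeholder for the file under review.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("suspect.jpg")
exif = img.getexif()  # returns an empty mapping if the EXIF block was stripped

if not exif:
    print("No EXIF data: stripped on upload, or the file never had any.")
else:
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)
        print(f"{tag}: {value}")
    # A populated Software tag (e.g. an editor's name) is one hint
    # that the file was re-saved after capture.
    if exif.get(0x0131):  # 0x0131 = Software
        print("File passed through software:", exif.get(0x0131))
```

An empty result is itself a data point: files that have transited social media will usually come back blank, which is why metadata analysis works best on files obtained as close to the original source as possible.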


Alongside the most advanced deep fakes – those combining video and audio – the two component parts weaponised on their own can still have devastating consequences. We previously wrote about an FSB assassin falling victim to social engineering and revealing details of their unsuccessful attempt to poison Alexei Navalny on a phone call – with Navalny himself on the other end of the line, posing as a senior FSB official. The weaponisation of synthetic voice cloning has further widened the scope for social engineering. In several fraud cases, criminals have spoofed voices to gain access to corporate systems and steal money. In one instance, a bank manager believed he was speaking to the director of a company with whom he had spoken before, and whose firm the bank was preparing to finance for a potential acquisition. When told the acquisition was going ahead, he began transferring funds. In fact he had been duped by criminals using an audio deep fake, and the transfers went straight into their accounts.


Analysts have sometimes been able to verify the authenticity (or otherwise) of a suspected deep fake by measuring shadows and tracking the position of the sun to check whether they tally with the time of day the footage was allegedly taken. Reverse image searching also remains a useful tool for identifying deep fakes. It may lead to the original source from which a fake was generated but, because the images that make up the deep fake have themselves been manipulated, that source may not be easy to find. Still, as image search algorithms grow more sophisticated, individual components of the image may be identified, providing useful leads for an analyst.
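The core idea behind such matching can be sketched locally with perceptual hashing, which scores how visually similar two images are even after cropping or recompression. The rough example below uses the third-party "imagehash" library (pip install imagehash pillow); the file paths and the distance threshold are placeholders, and a real reverse image search service operates at vastly larger scale:

```python
# A rough sketch of near-duplicate matching with perceptual hashes.
# File paths are placeholders; real reverse image search engines apply
# the same idea across billions of indexed images.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspected_fake.jpg"))

candidates = ["source_a.jpg", "source_b.jpg", "source_c.jpg"]
for path in candidates:
    distance = suspect - imagehash.phash(Image.open(path))  # Hamming distance
    # Small distances suggest the images share visual structure even if
    # one has been cropped, recompressed, or partially manipulated.
    if distance <= 10:
        print(f"{path}: distance {distance} -- possible source material")
```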


In some cases the deep fake simply looks distorted, and careful observation with the naked eye is enough to identify it as computer generated. Emotion may be inconsistent between the face and the voice track of a video, and breathing rates can depart from human norms. Beyond this, a growing arsenal of algorithmic detection tools is being developed and continually improved to detect deep fakes automatically. Some, for example, compare the forensic traces of two parts of an image or video to produce a measure of how similar or dissimilar they are. However, the effectiveness of these tools varies considerably, and many still cannot achieve a detection rate high enough to keep pace with the proliferation of deep fakes.
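A toy version of that trace-comparison idea is shown below: estimate the high-frequency noise residual of two regions of a frame and compare their statistics, since a spliced or synthesised face often carries noise that does not match the rest of the image. The patch coordinates and filename are arbitrary placeholders, and production detectors (PRNU- or neural-network-based) are far more sophisticated:

```python
# A toy illustration of noise-residual comparison between two regions.
# Real forensic detectors are far more sophisticated; coordinates and
# the filename "frame.png" are placeholders.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

img = np.asarray(Image.open("frame.png").convert("L"), dtype=np.float64)

def noise_residual(patch):
    # High-frequency residual: the patch minus its smoothed version.
    return patch - gaussian_filter(patch, sigma=2)

face = noise_residual(img[100:228, 100:228])        # region containing the face
background = noise_residual(img[300:428, 300:428])  # reference region

# Spliced or synthesised regions often carry noise statistics that
# differ from the rest of the frame.
print("face residual std:      ", face.std())
print("background residual std:", background.std())
```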


There is undeniably an ongoing arms race between the producers of deep fakes and investigators. Early in the history of deep fakes, for instance, researchers noticed that the subjects did not blink, and developed a machine-learning algorithm to track eye movement and flag fake content. Yet only a week after they published their findings, deep fake creators had "fixed" this problem and included blinking in their new fake videos. This arms race will almost certainly continue for as long as there remain incentives, be they financial, political or simply personal amusement, for those who wish to create and disseminate deep fakes.
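The blink heuristic those early detectors relied on reduces to a simple eye-aspect-ratio (EAR) measurement over facial landmarks. In the sketch below the six eye landmarks are placeholder coordinates; in practice they would come from a detector such as dlib or MediaPipe, and EAR would be tracked frame by frame:

```python
# A sketch of the eye-aspect-ratio (EAR) heuristic behind early
# blink-based detectors. The six (x, y) landmarks are placeholders;
# a real pipeline obtains them from dlib or MediaPipe per frame.
import numpy as np

def eye_aspect_ratio(eye):
    # eye: six landmarks ordered around the eye contour.
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it collapses toward
    # zero when the eyelid closes.
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

# A subject whose EAR never dips over a long clip never blinks --
# suspicious in early deep fakes, but trivially faked today.
eye = np.array([[0, 2], [2, 3], [4, 3], [6, 2], [4, 1], [2, 1]], float)
print(f"EAR: {eye_aspect_ratio(eye):.2f}")
```

As the blog notes, this particular tell was patched within a week of publication, which is exactly why no single heuristic can be relied on for long.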

