SEPTEMBER 18, 2019

From Fake News to Deepfake: The Solution Lies in Deep Learning

If the public outcry against Fake News wasn't enough to destabilize and imbue deep mistrust of the media, business, and our democratic systems, now we have Deepfake. In the past couple of weeks, we have seen high-profile examples of deepfakes, where deep learning, an advanced subset of AI, is used to manipulate videos that still look authentic. Most notable was the video of Facebook CEO Mark Zuckerberg declaring "whoever controls the data, controls the truth".

This video was produced in response to an earlier deepfake video of Nancy Pelosi, in which the House Speaker was made to appear drunk by slurring her speech. Facebook refused to remove the video. To test Facebook's resolve not to remove posts despite the misinformation, the Zuckerberg video was created, in which he appears to be discussing Facebook's plans for world domination.

This episode triggered widespread discomfort. The following week, the House Intelligence Committee held a hearing on deepfakes. Undoubtedly, members of Congress are concerned, with the 2020 elections looming.

In its most sophisticated form, deepfake refers to the AI rendering of fake videos that cannot be detected by the naked eye, because the videos look entirely authentic. This is a real source of concern: if you can't distinguish between real videos and AI-generated ones, how do you know that anything you watch is real? As the technology improves, the risk grows, because differences from the original footage that are still discernible today will become indistinguishable.

The methods for creating deepfake videos vary considerably. Some are trivial: in the Nancy Pelosi video, the slurred-speech effect was achieved simply by slowing the video to 70% of its original speed. The most sophisticated deepfakes, however, use deep learning. For instance, one method uses an autoencoder architecture that "compresses" an image down to its basic features (the latent vector) and then "decompresses" it back using the basic features of the face you want to paste onto the original image.
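
To make this concrete, here is a minimal sketch of that approach in PyTorch: one shared encoder and one decoder per identity, where the swap happens by decoding face A's latent vector with face B's decoder. All layer sizes, names, and training details below are illustrative assumptions, not a reproduction of any specific deepfake tool.

# Minimal sketch of the shared-encoder deepfake autoencoder described above.
import torch
import torch.nn as nn

LATENT_DIM = 512  # size of the "basic features" (latent vector); an assumption

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face image down to a latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image from the latent vector."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

# One shared encoder, two decoders: train (encoder, decoder_a) to reconstruct
# face A and (encoder, decoder_b) to reconstruct face B.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()

def swap_face(image_of_a):
    """Encode face A, then decode with B's decoder to 'paste' B onto the frame."""
    with torch.no_grad():
        return decoder_b(encoder(image_of_a))

Because the encoder is shared, it learns identity-agnostic features such as pose and expression, while each decoder learns to render one specific face, which is what makes the swap possible.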

The Threat Potential

In and of itself, deepfake technology is not a threat. Like many things in life, such as TNT, it is essentially neutral, with the potential to be used for both good and bad. But people's increasing dependence on social and digital media as their only source of information about the "outside world" can amplify the political effect of such videos exponentially. If you can generate a video of Barack Obama cursing Donald Trump, what stops you from generating a video of a politician cursing minority groups before an election? Fake news remains effective even after it is proven false, so the public stain might never be cleansed completely. And politicians aren't the only ones who should be afraid: what happens if your ex-boyfriend or ex-girlfriend decides to put your face in a (revenge) porn scene, as was done to Gal Gadot?

The main issue is that while explosives such as TNT can be regulated, deep learning is used in a rapidly growing range of day-to-day tasks and would be much harder to regulate.

Paving the Way Forward

Distinguishing real from fake footage requires analyzing the output footage for inconsistencies in its low-level features. Deep learning classifiers fit this task, due to their ability to inspect the raw features of an image and detect "tell-tale" signs of fake images or videos.

For instance, in the autoencoder method mentioned above, the decompressed image is of poorer quality (e.g., some pixels lose their original color), and these differences create inconsistencies that, while invisible to the naked eye, can be detected by a deep learning classifier. A research group from Adobe and UC Berkeley trained a convolutional neural network classifier to distinguish between real and fake images with 99% accuracy, compared with only 53% for humans.
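
Here is a minimal sketch of such a detector: a small convolutional network trained on labeled real (0) and fake (1) images. The architecture, names, and hyperparameters are assumptions for illustration, not the researchers' actual model.

# Minimal sketch of a binary real-vs-fake CNN classifier.
import torch
import torch.nn as nn

class FakeDetector(nn.Module):
    """Inspects low-level pixel statistics and outputs P(image is fake)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average over spatial dims
        )
        self.classifier = nn.Linear(128, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))

model = FakeDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCELoss()

def train_step(images, labels):
    """One supervised step over a batch of images with 0/1 real-or-fake labels."""
    optimizer.zero_grad()
    preds = model(images).squeeze(1)
    loss = loss_fn(preds, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

Because the convolutional filters operate directly on raw pixels, the network can pick up on the subtle color and texture artifacts left behind by the generation process, which is precisely where humans fall short.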

However, as in the cybersecurity domain, the first step toward a solution is understanding the problem and its ability to affect us. Once we know about the risk of deepfakes, the same way we know about the risk of malware, we can look to deep learning experts to help implement solutions that outperform human capability and solve this challenge.