There are times when it is okay to fake something. That’s why we keep hearing people say, “fake it till you make it”. But this is not how it works when it comes to deepfakes. Deepfakes are AI-manipulated fake videos, audio clips, or other digital representations. With this technology, people can easily create content that puts words in any politician’s mouth. Celebrities might find their faces in pornographic videos or images. With fake news and misinformation already bombarding us every day, deepfakes make things worse. Seeing is no longer believing.
How do deepfakes work?
The basis of deepfakes is machine learning, in particular deep learning (a subset of machine learning). The generative adversarial network (GAN) is one of the most exciting recent deep learning methods. The concept behind a GAN is simple. It consists of two neural networks, a generator and a discriminator, which work simultaneously without human supervision. The generator receives training data and produces material, while the discriminator tries to detect the forgeries. This is a self-optimising process: the generator learns and improves through feedback from the discriminator. The process continues until the discriminator is convinced that the deepfakes are genuine.
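The generator-versus-discriminator feedback loop can be shown in miniature. The sketch below is a toy one-dimensional GAN, not a real deepfake model: the "data" is just numbers drawn from a normal distribution around 4, and both networks are shrunk to a single linear layer so the gradients can be written by hand.

```python
import numpy as np

# Toy 1-D GAN: the generator learns to mimic samples from N(4, 1).
# Generator G(z) = a*z + c, discriminator D(x) = sigmoid(w*x + b);
# both are deliberately tiny so the training loop stays readable.
rng = np.random.default_rng(42)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, c = 1.0, 0.0        # generator parameters
w, b = 0.0, 0.0        # discriminator parameters
lr, batch = 0.01, 64

for step in range(5000):
    real = rng.normal(4.0, 1.0, batch)   # genuine training data
    z = rng.normal(0.0, 1.0, batch)      # generator input noise
    fake = a * z + c                     # the forgeries

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    p_real = sigmoid(w * real + b)
    p_fake = sigmoid(w * fake + b)
    w -= lr * (np.mean((p_real - 1) * real) + np.mean(p_fake * fake))
    b -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: use the discriminator's feedback to make the
    # fakes look more real (non-saturating loss, maximise log D(G(z))).
    p_fake = sigmoid(w * fake + b)
    a -= lr * np.mean((p_fake - 1) * w * z)
    c -= lr * np.mean((p_fake - 1) * w)

samples = a * rng.normal(0.0, 1.0, 10000) + c
print(round(float(samples.mean()), 2))  # close to the real mean of 4
```

After training, the generator's samples cluster around the real data's mean even though it never saw the data directly; only the discriminator's feedback steered it there. Real deepfake generators follow the same loop with deep convolutional networks in place of these two linear functions.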
Cloning Billie Eilish’s face and voice using deepfakes
The concept of a GAN is simple, but the data training process is not. There is an interesting video that shows an easier way to create deepfakes using an AI toolbox project on Github. The creator built a text-to-speech pipeline that sounds like Billie Eilish via a real-time voice cloning project on Github. With this pre-trained model, all you have to do is feed in a few seconds of reference audio for the voice that you plan to clone. The creator then filmed himself lip-syncing to the synthesized voice. Next, he transferred his facial expressions to Billie Eilish’s face using the First Order Motion Model for Image Animation, another open-source project. This simplifies the entire facial expression synthesis, as you don’t have to start from scratch.
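The workflow described above can be sketched on the command line. I am assuming the two projects meant are the widely used Real-Time-Voice-Cloning and first-order-model repositories on Github; the script names and flags below come from those projects’ documentation at the time of writing and may have changed, so treat this as an outline rather than exact instructions.

```shell
# Step 1: voice cloning. Launch the project's toolbox and feed it a few
# seconds of reference audio of the target voice, then type the text
# you want synthesised.
git clone https://github.com/CorentinJ/Real-Time-Voice-Cloning.git
cd Real-Time-Voice-Cloning
pip install -r requirements.txt
python demo_toolbox.py

# Step 2: facial reenactment. Film yourself lip-syncing to the
# synthesised audio, then drive a photo of the target's face with
# that recording (file names here are placeholders).
git clone https://github.com/AliaksandrSiarohin/first-order-model.git
cd first-order-model
python demo.py --config config/vox-256.yaml \
               --source_image target_face.png \
               --driving_video lip_sync_recording.mp4 \
               --checkpoint vox-cpk.pth.tar \
               --relative --adapt_scale
```

Both projects ship pre-trained checkpoints, which is the whole point: the heavy GAN training has already been done, and the user only supplies a reference clip and a driving video.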
Deepfakes detection – AI vs AI
Deepfake detection using AI is an endless race. That’s why researchers, government agencies, and social media platforms keep working on it. Microsoft has recently unveiled a deepfake detector tool called Video Authenticator. This software works on a similar principle to photographic forensic techniques. With real-time video analysis, it breaks the video down frame by frame. It detects subtle colour fading or artefacts that are barely visible to the human eye. The result is a percentage, or confidence score, indicating how likely it is that the content has been artificially manipulated. The detector was trained on the FaceForensics++ dataset and tested on Facebook AI’s Deepfake Detection Challenge dataset. Video Authenticator is currently only available through the AI Foundation as part of its Reality Defender 2020 program.
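To make the frame-by-frame idea concrete, here is a heavily simplified sketch of forensic-style colour analysis. It is not Microsoft’s algorithm; real detectors use trained deep networks, whereas this toy flags a frame whose average colour drifts away from the rest of the clip. The frames here are synthetic arrays standing in for decoded video.

```python
import numpy as np

def frame_color_scores(frames):
    """Score each frame by how far its per-channel mean colour deviates
    from the clip-wide median -- a crude stand-in for the subtle
    colour-fading cues a forensic detector looks for."""
    means = np.array([f.reshape(-1, 3).mean(axis=0) for f in frames])
    baseline = np.median(means, axis=0)
    return np.abs(means - baseline).max(axis=1)

def flag_frames(frames, threshold=0.05):
    """Return the indices of frames whose colour score is anomalous."""
    scores = frame_color_scores(frames)
    return [i for i, s in enumerate(scores) if s > threshold]

# Synthetic demo: ten noise "frames"; frame 5 gets a faint red cast,
# imitating the kind of blending artefact a face swap can leave behind.
rng = np.random.default_rng(0)
frames = [rng.random((64, 64, 3)) for _ in range(10)]
frames[5] = np.clip(frames[5] + np.array([0.2, 0.0, 0.0]), 0.0, 1.0)
print(flag_frames(frames))  # -> [5]
```

A real detector replaces the hand-written colour statistic with features learned from datasets like FaceForensics++, but the output has the same shape: a per-frame score that rolls up into an overall confidence value.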
Bans and regulations
Many social media giants are taking deepfakes seriously ahead of the US presidential election. Facebook banned deepfakes in January 2020, followed by YouTube, which reiterated its ban on deepfakes in February 2020. TikTok is the latest platform to introduce similar policies. Meanwhile, regulators are also concerned about the impact of deepfakes in the wrong hands. On 20 October 2020, the European Parliament approved three proposals to underpin future AI regulation in the European Union (EU). The proposals state the need to regulate “high-risk” AI, such as systems with self-learning capabilities. The proposed regulations cover ethics, liability, and intellectual property.
Don’t blame the technology
Amid growing concerns about the misuse of this mind-blowing technology, deepfakes have yet to be widely used for malicious purposes. But to be honest, the technology itself shouldn’t take the blame; the blame should go to those who misuse it. We want deepfakes for fun, not fraud, and certainly not fear.