Deepfakes use a form of artificial intelligence called deep learning to create images of fake events. Think of it as Photoshop, but instead of a person manipulating the images, an AI does the work. And that is just the beginning: the AI can be trained on footage of a person until it learns their mannerisms and how they talk, then generate fake videos of them saying things they have never actually said.
As you may imagine, this can cause problems, particularly when deepfakes depict powerful people saying things they would never say. A video may show a politician making a scandalous statement they never actually made, yet many viewers will believe it anyway. Deepfakes are particularly damaging around national elections, where a single convincing fake video has the power to change the result. Fraudsters can also create fake videos of influential figures giving advice that makes viewers more susceptible to online scams. It is therefore important to ask whether the advice you are being given actually makes sense, and whether you can verify it against other sources.
How Are Deepfake Face Swap Videos Made?
Deepfake face swap videos are made by feeding thousands of photos of two people through an AI program called an encoder. The encoder finds and learns the similarities between the two faces and reduces them to their shared common features, compressing the images in the process. A second AI algorithm, called a decoder, is then taught to recover the faces from the compressed images.
Because the faces are different, you train one decoder to recover the first person's face and a separate decoder to recover the second person's. To perform the face swap, you feed encoded images of one person's face into the wrong decoder: essentially, you take the features of one person's face and digitally stick them onto another person's body.
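The shared-encoder/two-decoder wiring described above can be sketched in a few lines. This is a minimal illustration only: the weights here are untrained random matrices and the dimensions are made up, whereas real deepfake tools train these networks on thousands of photos until the reconstructions look photorealistic.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64      # flattened 64x64 grayscale face (illustrative size)
LATENT_DIM = 128        # compressed "shared common features" representation

# One shared encoder compresses either person's face into the latent space.
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01

# Two separate decoders: one reconstructs person A's face, the other person B's.
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

def encode(face):
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    return W_dec @ latent

face_a = rng.standard_normal(FACE_DIM)   # stand-in for a photo of person A

# Normal reconstruction: person A's face through person A's decoder.
recon_a = decode(encode(face_a), W_dec_a)

# The face swap: A's encoded face fed into the *wrong* decoder, so the
# output is rendered with person B's facial features.
swapped = decode(encode(face_a), W_dec_b)
```

The key design point is that the encoder is shared, so both faces are mapped into the same latent space; only the decoders are person-specific, which is what makes the swap possible.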
What is Required to Make a Deepfake?
To increase your chances of creating a convincing deepfake, you need a high-end computer with a powerful graphics card, or alternatively the computing power of the cloud. Powerful hardware drastically reduces processing time, in some cases from days or weeks to mere hours. Making a deepfake look truly convincing also requires a great deal of expertise.
How to Spot Deepfakes
Deepfakes are becoming increasingly sophisticated over time, making them more difficult to spot. Each time a weakness is found, deepfake creators usually adapt so it no longer appears. For example, in 2018 researchers realised that deepfake videos showed people who didn't blink convincingly: the AI encoder had mostly been fed photos of people with their eyes open, since photos of people with their eyes shut are comparatively rare, so the AI struggled to reproduce blinking. Soon after the researchers published this finding, however, deepfakes began blinking more convincingly. There are other ways you can spot deepfakes, including:
- Poor Lip Syncing: Do the words you are hearing match the lip movements of the person speaking? If not, try refreshing the page first, as it could simply be a problem with your internet connection; otherwise, you may be watching a deepfake. Always be sceptical when the lip-syncing is poor.
- Patchy Skin: Does the person's skin look real, or are there patches that don't match their natural skin tone? Patchy skin can be a sign that the video you are watching is a deepfake.
- Flickering Around the Edges of Faces: Poorly rendered deep fakes may flicker around the edges of people’s faces.
- Strange Lighting Effects: Deepfake videos can have lighting effects that don't look natural, such as inconsistent illumination, odd reflections in the eyes, or a strange-looking glint on eyeglasses.
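The blinking research mentioned above relies on measuring how open an eye is frame by frame. One heuristic used in blink-detection work is the eye aspect ratio (EAR), computed from six landmark points around the eye; the landmark coordinates below are made-up examples, and in practice they would come from a facial-landmark detector.

```python
import math

def eye_aspect_ratio(landmarks):
    """Eye aspect ratio from six (x, y) eye landmarks ordered p1..p6
    around the eye. The ratio drops sharply when the eye closes,
    so a video with no dips in EAR may contain no real blinks."""
    p1, p2, p3, p4, p5, p6 = landmarks
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Average vertical eye opening over the horizontal eye width.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Made-up landmark positions for an open eye and a nearly closed eye.
open_eye   = [(0, 0), (3, 3), (7, 3), (10, 0), (7, -3), (3, -3)]
closed_eye = [(0, 0), (3, 0.4), (7, 0.4), (10, 0), (7, -0.4), (3, -0.4)]

print(eye_aspect_ratio(open_eye))    # 0.6, well above a typical ~0.2 blink threshold
print(eye_aspect_ratio(closed_eye))  # 0.08, well below it
```

Scanning EAR across every frame of a clip and counting how often it dips below a threshold gives a crude blink rate, which early detectors compared against the natural human rate of roughly 15 to 20 blinks per minute.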
Will Deepfake Videos Undermine Trust?
Deepfakes have the potential to disrupt public trust in the institutions that run society. A dystopian view suggests they could create a zero-trust society, in which people find it almost impossible to separate truth from misinformation. If that happens, misinformation will be believed far more readily, further eroding public trust in institutions, and it can even lead to real-world violence as people are manipulated into hostile actions against innocent parties.
What Are Shallowfakes?
Shallowfakes are less sophisticated than deepfake videos, but they can still achieve similar results. They often involve simply speeding a video up or slowing it down, which can be used to make politicians look incompetent, costing them credibility with supporters and potentially changing election results.
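The simplicity of shallowfakes is the point: no AI is needed, only basic retiming of the footage. As a sketch of the idea, slowing a clip amounts to rescaling its frame timestamps (a real editor would also resample the audio); the speed values here are illustrative.

```python
def retime_frames(timestamps, speed):
    """Rescale frame presentation times in seconds.
    speed < 1 slows the clip (e.g. to make speech sound slurred),
    speed > 1 speeds it up (e.g. to make movements look erratic)."""
    return [t / speed for t in timestamps]

# A 1-second, 5-frame clip slowed to 75% speed now runs for ~1.33 seconds,
# so the same words are spoken noticeably more slowly.
original = [0.0, 0.25, 0.5, 0.75, 1.0]
slowed = retime_frames(original, 0.75)
```

Because the frames themselves are untouched, none of the visual artefacts listed earlier (patchy skin, edge flicker, strange lighting) will appear, which is why shallowfakes are often checked against the original footage rather than inspected frame by frame.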
What Is the Solution?
Major technology firms such as Google and Facebook have run competitions with significant prizes for methods of detecting deepfakes, from finding exploits in the videos themselves to building software that can help detect them. AI itself is another tool, and it already helps to spot fake videos.
However, this method is not without flaws: the detection process requires many hours of footage of the person for the AI to determine whether a video of them is a deepfake. If there is too little video content for the algorithm to scan, there is a greater chance the deepfake will go undetected.
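This footage problem can be illustrated with a toy aggregation rule: a detector that scores each frame can only reach a confident verdict once it has seen enough frames. Everything below is an assumption for illustration, including the function name, the minimum-frame count, and the threshold; real detection systems are far more sophisticated.

```python
def verdict(frame_scores, min_frames=30, threshold=0.5):
    """Aggregate hypothetical per-frame 'fake' probabilities from a
    detector model. With too few frames there is not enough evidence,
    so the clip is marked inconclusive rather than cleared."""
    if len(frame_scores) < min_frames:
        return "inconclusive"
    mean = sum(frame_scores) / len(frame_scores)
    return "likely deepfake" if mean >= threshold else "likely authentic"

print(verdict([0.9] * 10))    # short clip: inconclusive, however suspicious
print(verdict([0.8] * 120))   # longer clip with high scores: likely deepfake
print(verdict([0.1] * 120))   # longer clip with low scores: likely authentic
```

Refusing to clear short clips, rather than guessing, mirrors the point above: scarcity of footage favours the forger, not the detector.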
Another strategy for increasing the credibility of videos is to add a cryptographic blockchain watermark, so that any changes to the original video can be checked and tracked on the blockchain. This gives the victim of a deepfake evidence: the blockchain record shows that the video has been altered.
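The core of such a scheme is a cryptographic fingerprint of the original file. The sketch below uses a SHA-256 hash to stand in for the on-chain record; how the digest is actually anchored to a blockchain, and the placeholder video bytes, are assumptions for illustration.

```python
import hashlib

def fingerprint(video_bytes):
    """SHA-256 digest of the video file. In a blockchain-watermarking
    scheme, this digest would be recorded immutably at publication
    time, so anyone can later check a copy against it."""
    return hashlib.sha256(video_bytes).hexdigest()

original = b"...raw bytes of the original video..."   # placeholder content
recorded = fingerprint(original)                      # stored on the chain

# Any edit, however small, changes the digest, so a doctored copy no
# longer matches the published record.
tampered = original + b"extra frames"
print(fingerprint(tampered) == recorded)   # False: alteration is detectable
```

Note the limitation: a matching hash proves a copy is unmodified, but it cannot prove a video that was *never* registered is fake, so the scheme mainly protects proactive publishers.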
Deepfakes continue to evolve and become more convincing. Time will tell whether they reach the point of being almost indistinguishable from real videos, and whether detection methods can keep up.