You may have seen them. You may have heard of them. They’re deepfakes and they’re more common than you might have imagined.
For the most part, deepfakes are videos that take the image and likeness of one person and map it onto another. Think of it as mixing CGI with deep-learning AI and you have the recipe for a deepfake. If you thought only big movie studios were creating these types of videos, think again: everyone from university researchers to governments uses this technology for a variety of reasons. And with new software, programs, and apps, it’s easier than ever for an amateur or enthusiast to create them.

Deepfakes were first spotted on Reddit in 2017, when a user posted doctored pornographic videos featuring the faces of celebrities such as Taylor Swift and Scarlett Johansson. Many deepfakes are still easy to spot thanks to obvious lighting differences, blending issues or abnormalities along the edges of a face, and badly rendered details such as hair, teeth, or jewelry, but the videos keep getting better as new technologies and algorithms are developed. Early on, one of the easiest ways to identify a deepfake was the lack of blinking, since most photos used for training show people with their eyes open; once this was publicized as a way to distinguish a deepfake, however, creators started adding blinks to make their fakes harder to detect.
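The blinking cue can be made concrete. One widely used heuristic in blink detection is the eye aspect ratio (EAR), computed from six landmark points around each eye: it stays roughly constant while the eye is open and drops sharply when it closes, so a long clip whose EAR never dips is suspicious. A minimal sketch, assuming landmark coordinates have already been extracted by some face-landmark detector (the detector itself, the 0.2 threshold, and the toy coordinates below are illustrative assumptions, not from the article):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks p1..p6 around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def never_blinks(ear_per_frame, threshold=0.2):
    """Flag a clip whose EAR never drops below the blink threshold."""
    return all(ear > threshold for ear in ear_per_frame)

# Toy landmark sets: a wide-open eye vs. a nearly closed one.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], float)
print(eye_aspect_ratio(open_eye))    # ≈ 0.67, clearly open
print(eye_aspect_ratio(closed_eye))  # ≈ 0.07, clearly closed
```

A real detector would smooth EAR over time and require the dip to last a few frames, but the idea is the same: no dips, no blinks.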
So how is a deepfake created?
To create a really good deepfake, you will need a sophisticated computer with robust graphics cards capable of handling advanced software and loads of media.
According to Ian Sample of the Guardian, “First, you run thousands of face shots of the two people through an AI algorithm called an encoder. The encoder finds and learns similarities between the two faces, and reduces them to their shared common features, compressing the images in the process. A second AI algorithm called a decoder is then taught to recover the faces from the compressed images. Because the faces are different, you train one decoder to recover the first person’s face, and another decoder to recover the second person’s face. To perform the face swap, you simply feed encoded images into the ‘wrong’ decoder.”
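The shared-encoder, two-decoder arrangement Sample describes can be sketched in miniature. Below, random vectors stand in for aligned face crops, a fixed projection stands in for the learned encoder, and a least-squares fit stands in for gradient-descent training of each decoder; all names and dimensions are illustrative assumptions, not the pipeline any real tool uses:

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces, face_dim, code_dim = 200, 64, 8

# Stand-ins for face images of two people (one row per image).
faces_a = rng.normal(size=(n_faces, face_dim))
faces_b = rng.normal(size=(n_faces, face_dim))

# Shared encoder: a single compression map used for both people,
# so both faces are reduced to the same kind of compact code.
encoder = rng.normal(size=(face_dim, code_dim)) / np.sqrt(face_dim)

def encode(faces):
    return faces @ encoder

def fit_decoder(faces):
    # One decoder per person, fit by least squares to recover that
    # person's faces from their codes (a stand-in for training).
    codes = encode(faces)
    decoder, *_ = np.linalg.lstsq(codes, faces, rcond=None)
    return decoder

decoder_a = fit_decoder(faces_a)
decoder_b = fit_decoder(faces_b)

# The face swap: encode person A's face, then decode it with the
# "wrong" decoder, yielding person B's appearance from A's code.
swapped = encode(faces_a[:1]) @ decoder_b
print(swapped.shape)  # (1, 64) — one face-sized output vector
```

Real deepfake software replaces every linear map here with a deep network and trains on thousands of frames, but the swap itself is exactly this: shared codes, mismatched decoder.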

Deepfakes are not just limited to video. They now include audio deepfakes, which use “voice skins” or “voice clones” built from recordings such as interviews, movies, or even voicemails. Additionally, deepfakes can embody entirely fictional characters created with a generative adversarial network (GAN).
Sample writes, “A Gan pits two artificial intelligence algorithms against each other. The first algorithm, known as the generator, is fed random noise and turns it into an image. This synthetic image is then added to a stream of real images – of celebrities, say – that are fed into the second algorithm, known as the discriminator. At first, the synthetic images will look nothing like faces. But repeat the process countless times, with feedback on performance, and the discriminator and generator both improve. Given enough cycles and feedback, the generator will start producing utterly realistic faces of completely nonexistent celebrities.”
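The generator-versus-discriminator loop Sample describes can be shown at toy scale. Here the “images” are single numbers, the generator and discriminator are the smallest possible models (a linear map and a logistic classifier), and the learning rate, step count, and target distribution are all made-up illustrative values; real GANs do the same dance with deep networks over pixels:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data the generator must learn to imitate: samples near 3.0.
def real_batch(n):
    return rng.normal(loc=3.0, scale=1.0, size=n)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, starts producing samples near 0
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c), starts clueless
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.normal(size=batch)
    fake = a * z + b
    real = real_batch(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean((1 - d_real) * real) + np.mean(-d_fake * fake)
    gc = np.mean(1 - d_real) + np.mean(-d_fake)
    w, c = w + lr * gw, c + lr * gc

    # Generator step: adjust a, b so the fakes fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    gx = (1 - d_fake) * w   # gradient of log D(fake) w.r.t. each fake
    a, b = a + lr * np.mean(gx * z), b + lr * np.mean(gx)

# After many rounds of this feedback, fakes drift toward the real data.
print(b)  # mean of the generated samples, driven from 0 toward 3
```

Each pass is one round of the “repeat the process countless times, with feedback” that Sample mentions: the discriminator sharpens its judgment, and the generator exploits that judgment to improve.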
If you don’t have the know-how, hardware, or time to create your own deepfakes, there are now companies that will make them for you. Or with Zao, an app for your smartphone, you can put your face onto an actor and star in your favorite movie or television show.
Why do people make deepfakes?
Like the original deepfakes from Reddit, the most common reason is pornography. In September 2019, Deeptrace, a cybersecurity company based in Amsterdam that monitors and detects synthetic media online, reported almost 15,000 deepfake videos in that month alone. Approximately 96% of the deepfakes were pornographic, with 99% of the mapped faces being female celebrities. In this report, Deeptrace also concluded that “deepfake pornography is a phenomenon that exclusively targets and harms women. In contrast, the non-pornographic deepfake videos we analyzed on YouTube contained a majority of male subjects.” And with easier-to-use technologies being introduced every day, the jump from celebrity porn to revenge porn is a very real and unfortunate risk.

The sinister side of deepfakes also includes scams, such as using a voice clone to trick a victim into transferring money for a loved one in need, or simply to humiliate a target or harm their reputation. Another issue is the erosion of public trust that deepfakes can cause. People caught in embarrassing situations on video can now call the video’s authenticity into question. If consumers constantly have to worry about synthetic media, how can they know that something they’re watching is real, or put their trust in it?
The situation is further aggravated by cutbacks that have shrunk the number of journalists and reporters who would once have been available in droves to seek out these fakes, investigate them, and uncover the truth, or even the people behind them. Luckily, universities, governments, cybersecurity firms, and tech companies have been funding research to patrol the internet, discover synthetic media, and warn society about harmful deepfakes. Tech giants like Amazon, Facebook, and Microsoft have also funded global competitions such as the Deepfake Detection Challenge to help uncover and deter harmful deepfake creators. Facebook has also taken steps to ban deepfakes that can easily dupe users.
Outside of porn and malevolent intent, there is a lighter side to deepfakes, which are also used for entertainment or education. The popular YouTube channel Ctrl Shift Face creates spoof reels, including one from The Shining in which Jim Carrey is inserted into Jack Nicholson’s role; and Jordan Peele became former President Obama in a PSA warning the public about how prevalent and convincing deepfakes have become. The technology has been used to improve voice dubbing in movies, and even to bring actors back from the dead. One clearly positive use of deepfakes is voice clones that restore the voices of people who have lost theirs to accident or disease.
With more people jumping on the synthetic media bandwagon, advancements in AI, and growing access to programs and devices capable of creating deepfakes, the technology is here to stay. As media consumers we must remain aware that anyone, anywhere can create and share content, and that their reasons for doing so may not always be sound. With deepfakes floating around the internet, the old saying still applies in the Digital Age: “If something seems too good to be true, it probably is.”
The following sources were cited in this post:
- Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019, September). The State of Deepfakes: Landscape, Threats, and Impact. In Regmedia. Retrieved from https://regmedia.co.uk/2019/10/08/deepfake_report.pdf.
- Bloomberg Quicktake. (2018, September 27). It’s Getting Harder to Spot a Deep Fake Video. In YouTube. Retrieved from https://www.youtube.com/watch?v=gLoI9hAX9dw.
- Marcin, T. (2020, August 2). 13 of our favorite deepfakes that’ll seriously mess with your brain. In Mashable. Retrieved from https://mashable.com/article/best-deepfake-videos/.
- Sample, I. (2020, January 13). What are deepfakes – and how can you spot them? In The Guardian. Retrieved from https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.
