The Impact of Deepfakes

Tech Update November

I think it is time to talk about deepfakes. What is a deepfake, how does it work, and what are its implications?

What is a deepfake?

Using a form of artificial intelligence called deep learning to generate text, audio, photos and/or videos of a fake event is called a deepfake. The term dates back to 2017, when a Reddit user by the name of "deepfakes" shared manipulated videos that he and others in the r/deepfakes community had created.


The term has since expanded and its use widened. Some recent examples: Barack Obama calling Trump a "dipshit", Mark Zuckerberg bragging about having total control over billions of people's data and, a bit closer to home, our prime minister Mark Rutte talking about the climate crisis.


The technology to create these deepfakes has become more and more accessible.

Let’s take a look.

Tool and Tech Stack

Deepfakes capture common characteristics from a collection of existing images and apply those characteristics (shapes, styles) to other images. The technology used to accomplish this is called a Generative Adversarial Network (GAN). GANs are a relatively new neural network architecture, first published in 2014 by researchers from the University of Montreal. They are capable of creating images and videos that look completely real to the naked eye, which is why they are used to create deepfakes.

A GAN actually consists of two different networks: a generator and a discriminator. The generator is responsible for generating images that resemble the training data, while the discriminator predicts whether the newly generated images are real or fake. As a result, the two networks train each other and become better over time.

The basic GAN architecture works as follows. The network starts with an image of random noise, which is fed through the generator and transformed into an image that resembles something. Then our fake image is fed into the discriminator, where it is compared to the training data. Finally, the discriminator predicts whether the image is real or fake. But what now? Depending on the output of the discriminator, the generator is adjusted to generate its images differently. This loop continues until the discriminator can no longer detect that the images are fake.

How would one build a GAN? The most widely used tool for neural networks is a Python library called Keras, which uses TensorFlow as its backend to do the computations. Python is known for its short, readable syntax and its ability to produce working applications with a small amount of code. Let's look at how to create a GAN using Keras.

First, we start off by defining our generator:
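
A minimal sketch of such a generator in Keras could look as follows. The noise dimension and hidden-layer sizes here are illustrative assumptions; only the 784-neuron output layer is fixed by the 28×28 image size:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_generator(noise_dim=100):
    """Map a random noise vector to a 784-pixel (28x28) fake image."""
    return keras.Sequential([
        keras.Input(shape=(noise_dim,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(512, activation="relu"),
        # one output neuron per pixel; tanh keeps pixel values in [-1, 1]
        layers.Dense(784, activation="tanh"),
    ])
```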

The generator is just a normal densely connected neural network. However, you can see that our output layer has 784 neurons. This is because each neuron represents one pixel of our fake output image, which will be 28×28. Next, let's define the discriminator.
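
A matching discriminator sketch, under the same assumption about hidden-layer sizes:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_discriminator():
    """Classify a 784-pixel image as real (1) or fake (0)."""
    return keras.Sequential([
        keras.Input(shape=(784,)),  # same size as the generator's output
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu"),
        # a single sigmoid neuron: the probability that the image is real
        layers.Dense(1, activation="sigmoid"),
    ])
```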

Our discriminator is also just a densely connected network. You may notice that this time our input layer has 784 neurons. Looks familiar? This is because the discriminator takes the output of the generator as its input. However, the discriminator ends with a single neuron, which outputs a value between 0 and 1: the probability that the image is real rather than fake. Now that we have defined our two networks, we can put them together.
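
One sketch of the classic Keras recipe for this (the optimizer and loss choices are assumptions): compile the discriminator on its own first, then freeze it inside the combined model so that training the GAN only updates the generator.

```python
from tensorflow import keras

def build_gan(generator, discriminator):
    """Chain generator -> discriminator into one combined model."""
    # the discriminator trains on its own, with its own optimizer
    discriminator.compile(optimizer="adam", loss="binary_crossentropy")
    # ...but is frozen inside the combined model, so that training the
    # GAN only adjusts the generator's weights
    discriminator.trainable = False
    gan = keras.Sequential([generator, discriminator])
    gan.compile(optimizer="adam", loss="binary_crossentropy")
    return gan
```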

Here we basically just connect the two networks and compile them into one combined model. With this we can finally write our training loop.
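
The training loop could be sketched like this. The epoch count, batch size, and 1/0 labels follow the description below; `train_gan` takes the models and data as arguments so it works with any set of 784-pixel images:

```python
import numpy as np

def train_gan(generator, discriminator, gan, x_train,
              epochs=100, batch_size=128, noise_dim=100):
    """One sketch of the alternating GAN training loop."""
    for epoch in range(epochs):
        # 1. random noise in, fake images out
        noise = np.random.normal(size=(batch_size, noise_dim)).astype("float32")
        fake_images = generator.predict(noise, verbose=0)

        # 2. train the discriminator: real images labelled 1, fakes labelled 0
        idx = np.random.randint(0, len(x_train), batch_size)
        discriminator.trainable = True
        discriminator.train_on_batch(x_train[idx], np.ones((batch_size, 1)))
        discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))

        # 3. train the generator through the frozen discriminator: it
        #    improves when its fakes get labelled 1 (real)
        discriminator.trainable = False
        gan.train_on_batch(noise, np.ones((batch_size, 1)))
```

With MNIST, for example, you would load `keras.datasets.mnist.load_data()`, flatten each image to 784 values, and rescale the pixels to [-1, 1] to match the generator's tanh output.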

We start off by loading our training data and creating our generator and discriminator, before combining them into our GAN. Then we have our for loop, which defines what happens during a single epoch. First, random noise is generated and fed into the generator, which outputs fake images. These fake images are fed into the discriminator, together with real images. The images are labelled with 1s (real) or 0s (fake); this is how the networks know whether they are doing a good job. If not, they adjust to generate or classify images differently, which over time leads to improvements.


That’s it!


As you can see, by using Keras we were able to build a fully functional GAN in a short time. We can feed our network any kind of images and it will learn to generate fake ones.


The ease of generating these fake images has implications. Social media is already used to spread false information in plain text; the effectiveness of that tactic multiplies when it is supported by deepfake audio and video. Seeing a video of someone stating something is no longer proof that this person actually said it. We have to be even more critical toward the content we consume. What is the source of this video? Could it be a deepfake?

It also means deepfakes can be used as a denial tactic. For instance, a public figure who gets caught on video doing something "wrong" can now claim the video is a deepfake. Without tools to analyze such a video, it is hard to counter that claim.

Another disturbing development is that deepfakes are being used for blackmail. Depending on the context, it can be very damaging to a person if deepfake content is published that shows them saying or doing something harmful.

Solutions & Conclusions

Overcoming these dangers will be a constant cat-and-mouse game: research will make deepfakes better, and research will make deepfake detection better. Platforms can incorporate these detection methods to shield their users from deepfakes.

One of these methods is to use Benford's law to identify whether or not an image has been generated by a GAN. Benford's law, or the first-digit law, observes that in many real-life data sets the leading digit is far more likely to be small: the probability that the leading digit is d equals log10(1 + 1/d), so a 1 appears first about 30% of the time while a 9 appears first less than 5% of the time.
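
The distribution is easy to compute yourself; a quick sketch:

```python
import math

# P(leading digit = d) = log10(1 + 1/d), for d = 1..9
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# print a tiny text chart of the distribution
for d, p in benford.items():
    print(f"{d}: {p:6.1%} {'#' * round(p * 100)}")
# the probabilities fall from about 30.1% for a leading 1
# down to about 4.6% for a leading 9
```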

This counterintuitive observation holds for data sets like the diameters of all the volcanoes in the world, the populations of cities, companies' stock market values, and many, many more. The law is applied in many different ways, such as:

  • Flagging fabricated tax returns
  • Detecting Ponzi schemes by looking at their reported results
  • Analyzing election data
  • Etc.

Studies show that GAN-generated images often fail to respect Benford's law and can thus be discriminated from natural pictures. We do have to take into account that the opposite is also true: Benford's law can be used inside GANs to create even better deepfakes.
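
To make the idea concrete, here is a hypothetical sketch of such a check, not the exact method from the cited research (which works on compressed-domain coefficients rather than raw values): compare the leading-digit frequencies of a set of values against Benford's distribution and use the distance as a "naturalness" score.

```python
import math

import numpy as np

def benford_divergence(values):
    """Chi-square-style distance between the leading-digit distribution
    of `values` and Benford's law; higher means less Benford-like."""
    values = np.abs(np.asarray(values, dtype=float))
    values = values[values >= 1]  # a value needs a leading digit
    digits = np.array([int(str(int(v))[0]) for v in values])
    observed = np.array([(digits == d).mean() for d in range(1, 10)])
    expected = np.array([math.log10(1 + 1 / d) for d in range(1, 10)])
    return float(np.sum((observed - expected) ** 2 / expected))
```

A detector would threshold this score: natural data sits close to zero, while data drawn from a distribution that ignores Benford's law scores noticeably higher.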

Another method is to use the same technology that generates deepfakes to detect them. A research paper from 2019 compares different neural networks trained to detect deepfakes; one of them reached an accuracy of 96.36%, which is pretty impressive. While this method is feasible at the moment, GANs will continue to improve, and it is likely that many detection methods will stop working in the future.


In conclusion, counter-technology might help, but the most important takeaway is: be aware of the source. Try to make sure there are multiple sources corroborating a narrative, and don't take anything you read or see at face value.
