Science | February 22, 2018

Face-swap on steroids: How ‘deepfake’ videos are messing with reality

Deepfake software has been used to create pornographic videos using the faces of celebrities like Emma Watson, Natalie Portman and Gal Gadot. But in the age of ‘fake news’ and ‘alternative facts’, the deepfake problem could get a lot worse, explains computer graphics professor Neil Dodgson.

Over the past few weeks, a large number of ‘deepfake’ videos have been released online: pornographic videos where the face of the original actress has been replaced with someone else’s. Our ability to degrade and humiliate other human beings has gone up another notch.

The chilling thing is that no special expertise is required. The deepfake software is remarkably easy to use: all you need is a couple of hundred photos of the person’s face and a video you want to put them in, and the software does the rest. The most popular deepfakes use the faces of celebrities like Emma Watson and Natalie Portman, but it isn’t hard to see the technique being used on people other than celebrities. Pick anyone you fancy or have a grudge against and download their selfies to your computer. Think who could be in the firing line: politicians, bosses, teachers, sports coaches, schoolmates. The face of you, your partner or your child could be inserted into any video, doing whatever the actors are persuaded to do.

Natalie Portman (left) and a deepfake rendering produced using hundreds of images of Portman’s face (right)

Where did this come from?

This is different technology to that used in the movie industry. Movies have shown us talking apes, blue-skinned humanoids, hobbits and orcs, created with expensive CGI or hours in make-up. In the last decade, we’ve developed techniques to reproduce human beings sufficiently well that the starring actor can be digital. This has led to much soul-searching in the movie industry, where ownership of image is so important that Lucasfilm has had to say the late Carrie Fisher will not be digitally recreated for Star Wars episode nine. We’ve always known, however, that creating digital humans this way requires the sort of human expertise and computer time that only the big movie studios can afford.

The software behind deepfakes was developed along a different line. It does away with the need for human expertise. It uses images rather than 3D models, and it builds on a recently developed artificial intelligence technique called deep learning: a neural network is trained on photos of the two faces until it learns to map one onto the other. The method doesn’t produce seamlessly perfect results yet, but it’s only a matter of time.
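For the technically curious, here is a rough sketch of how that deep-learning trick works. The widely reported approach trains an ‘autoencoder’ neural network: a single shared encoder learns a compact description of a face, a separate decoder is trained to reconstruct each person, and the swap then amounts to encoding one person’s face and decoding it with the other person’s decoder. The Python sketch below (using the PyTorch library; the network sizes, names and training loop are illustrative assumptions, not the actual deepfake code) shows the shape of the idea:

```python
# A minimal sketch of the shared-encoder, two-decoder autoencoder idea.
# Swapping = encode a frame of person A, decode with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 face image to a shared 256-number 'face code'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Maps a face code back to a 64x64 image of one specific person."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # trained only on photos of person A
decoder_b = Decoder()  # trained only on photos of person B
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def train_step(faces_a, faces_b):
    """One step: each decoder learns to reconstruct its own person."""
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(frame_a):
    """The swap: encode A's face, decode it as if it were B's."""
    with torch.no_grad():
        return decoder_b(encoder(frame_a))

# Smoke test with random 64x64 'faces' standing in for real photo sets.
fake_a = torch.rand(4, 3, 64, 64)
fake_b = torch.rand(4, 3, 64, 64)
print("loss:", train_step(fake_a, fake_b))
print("swapped shape:", swap_a_to_b(fake_a).shape)
```

A real system wraps face detection, alignment and blending around this core, and trains far larger networks for days on those couple of hundred photos, which is why the finished videos can look as convincing as they do.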

Where is this headed?

Pornography has driven the headlines, but there are other uses. You could make a video where Jacinda Ardern or her new opposition counterpart appears to say whatever you like. Did you get caught on a surveillance camera stealing at work? Claim it’s a deepfake created by a jealous colleague. Yes, there’s a possibility for good here: we can digitally resurrect long-dead actors, or you could make a home movie where you star as the good guy alongside Bruce Willis. But the potential for harm is so much bigger. No video can be trusted to show reality.

We’ve known for years that what we see online or on TV is an approximation of reality. TV uses sound bites to back up a story; it has to. You don’t get to lead a political party by being a weak politician, but how easy is it to pick and choose clips that portray a strong politician as a weak, vacillating one? The 2016 US presidential election demonstrated just how far you can skew things, with the rise of “fake news” and “alternative facts”. All this, though, still depended on showing real evidence: photographs and video clips, carefully chosen to tell the story. This new software pushes things further, to a place where you cannot trust any evidence; to a place where anyone can produce plausible fake footage.

What can we do?

Our digital world changes so quickly that we always feel we’re trying to catch up. The software to produce deepfakes was developed only last year, but it’s already widely used. How can New Zealand respond? We can legislate and we can educate. We can ensure a person’s image belongs to them and that there’s legal recourse against those who use that image inappropriately. But that doesn’t stop the psychological trauma caused by a video of you doing something you never did. We need to educate people, especially young people, about what is and isn’t appropriate, and we also need to provide people with the mental tools to handle this sort of abuse – to handle a world in which you can trust absolutely nothing you see online.

Neil Dodgson is Professor of Computer Graphics at Victoria University of Wellington and chair of Victoria’s Spearheading Digital Futures multidisciplinary team addressing the opportunities and challenges of the digital age.


The Spinoff’s science content is made possible thanks to the support of The MacDiarmid Institute for Advanced Materials and Nanotechnology, a national institute devoted to scientific research.
