Browsing through YouTube the other day, I stumbled upon a web series called Sassy Justice by the creators of South Park. The series uses deepfake technology to create a world with fictionalized versions of celebrities and politicians. While I was thoroughly entertained, I was also creeped out by how convincing some of these fake AI-generated videos were. You can watch the video I am talking about here.
While the more blatant satire was easy to laugh off, it was the fake news report delivered by a young Julie Andrews that made me stop and think: should we be worried about how sophisticated deepfake-creation software is getting?
And scarily enough, this is not the only video out there that looks convincingly real. Some deepfake videos that have gone viral on social media over the past few years show notable figures saying absurd things, like the one where Barack Obama calls Donald Trump a “complete dipshit”, or the one where Mark Zuckerberg brags about having “total control of billions of people’s stolen data”. So let’s take a deeper look at deepfakes.
What is a deepfake?
A deepfake is a kind of photoshopped video in which a form of artificial intelligence called deep learning is used to fabricate videos and images of events that never happened. Many easily accessible web- and app-based tools will make deepfake videos for free. More sophisticated software comes at a higher price, but the results are so good that it can often be difficult to distinguish the real image from the fake one.
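At the heart of most face-swap deepfakes is an autoencoder trick: a single shared encoder learns a person-independent representation of a face, and a separate decoder per identity reconstructs the face of that person. Swapping means encoding person A and decoding with person B's decoder. Here is a minimal, untrained sketch of that data flow; the dimensions and random weights are purely illustrative, not a working model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 grayscale "face" and a small latent code.
FACE_DIM, LATENT_DIM = 64, 16

# One encoder shared by both identities, plus one decoder per identity.
# In a real system these weights would be learned from many photos.
W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM)) * 0.1
W_dec_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1
W_dec_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1

def encode(face):
    # Shared encoder: compresses any face to a person-independent code.
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    # Identity-specific decoder: renders the code as one person's face.
    return W_dec @ latent

face_a = rng.normal(size=FACE_DIM)
# The swap: encode person A's expression, decode with person B's decoder,
# producing "A's expression rendered as B's face".
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)
```

The key design point is the shared encoder: because both identities pass through the same bottleneck, the latent code captures pose and expression rather than identity, which is what makes the swap possible.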
What are deepfake videos used for?
It comes as no surprise that most of the deepfakes found on the internet are pornographic. Over 95% of deepfake videos online are porn, and 99% of those map female celebrity faces onto the bodies of porn stars. As deepfake technology becomes more accessible, almost anyone could make a deepfake from just a few photos of somebody else. Beyond porn, many deepfakes are used for spoofs, satire and other funny videos. Unfortunately, the same technology has also been used to create revenge porn.
Does deepfake refer only to videos?
Unfortunately, it is even easier to use deepfake tech to create completely fabricated still photographs. Audio can also be deepfaked: it is possible to create “voice skins” or “voice clones” that mask your real voice and make it sound like another person or public figure. All you need is a sufficient number of recordings of that person speaking. A well-done deepfake voice is difficult to pick out, and scams have been reported in which voices cloned from recorded WhatsApp voice messages were used to fool others.
Who can make deepfakes?
Anyone with access to deepfake technology can make their own. Academic or industrial researchers, software developers, amateur video enthusiasts, visual effects studios, porn producers, and even you and I can make deepfakes. It has long been thought that governments have been developing similar technology to manage, discredit, and disrupt extremist or terrorist activities and groups.
However, it is not easy to make a convincing deepfake using a standard computer. Good deepfakes require high-end computers with powerful graphics cards, as well as technical expertise to touch up completed videos to make them perfect. But there are many companies and apps available that can help people to make deepfakes.
How can a deepfake be spotted?
Governments, tech companies and universities are all funding research into the creation and detection of deepfakes. Low-quality deepfakes are easier to spot: poor lip syncing, patchy skin at the seams, flickering, and badly rendered fine details such as hair give them away. But spotting a deepfake becomes harder as the technology improves. For instance, in 2018 a group of researchers noticed that deepfake faces don’t blink the way a normal person does; because most available photographs show people with their eyes open, the algorithms never learned to blink. Soon after the research was published, however, deepfake tech was updated to include blinking.
How can deepfakes cause problems?
As deepfakes become more common, it is safe to expect more harassment, mockery, intimidation, and mischief to occur as a result. However, the threat to international political stability is also a concern. While governments have their own security systems in place, deepfakes of world leaders could potentially wreak havoc. A few years ago, Donald Trump flew back home from a NATO meeting earlier than scheduled because of seemingly genuine video footage that showed other world leaders making fun of him. Believable deepfakes can also be potentially used to create bad publicity and influence stock prices, manipulate voters and even provoke religious tension.
The more serious implication of deepfakes is that they undermine trust. Deepfakes, synthetic media and fake news collectively contribute to a zero-trust society, in which no one can distinguish fact from fiction. And once the premise that nothing can be trusted takes hold, it can be used to cast doubt on real events. Shown enough fake videos, people become harder to convince of what actually happened, and it becomes easier for miscreants to plausibly deny the truth.
As the technology to create deepfake videos becomes readily available to the end user, it raises a point of concern for the justice system where faked video could be submitted as evidence. Deepfakes could stir up all sorts of trouble in the court, where real videos could be dismissed as fakes, or deepfakes accepted as the truth. Deepfakes can also cause potential security risks by tricking biometric systems that rely on facial or vocal recognition. The potential for scams is limitless.
What can be done about this?
Tech firms are working on detection systems that use a second AI to spot fake videos. Some detection systems already exist, but most share a serious limitation: they work best on celebrities, for whom hours of footage is freely available for the AI to learn their faces. Work is under way on state-of-the-art detection systems that can recognize and flag fakes as soon as they surface. Other firms are pursuing a different strategy: a blockchain-based online ledger that records the origin of a video, so that any manipulation or tampering can be checked against the system.
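The ledger idea boils down to chained hashes: each provenance entry records a fingerprint of the video and the hash of the previous entry, so any later edit to the footage no longer matches the recorded fingerprint. A toy sketch of that mechanism (the entry fields and chain format here are made up to illustrate the concept, not any real provenance standard):

```python
import hashlib
import json

def record_block(ledger, video_bytes, note):
    """Append a provenance entry whose hash chains to the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "prev_hash": prev_hash,
        "video_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "note": note,
    }
    # Hash the entry itself so later entries can chain to it.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

def verify(ledger, video_bytes):
    """A video checks out only if every chain link is intact and its
    fingerprint matches the latest recorded entry."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return ledger[-1]["video_sha256"] == hashlib.sha256(video_bytes).hexdigest()

ledger = []
original = b"raw camera footage"
record_block(ledger, original, "captured on device")

print(verify(ledger, original))             # True: footage untouched
print(verify(ledger, b"tampered footage"))  # False: fingerprint mismatch
```

The design choice that matters is that each entry embeds the previous entry's hash: rewriting history would require recomputing every subsequent hash, which is what a distributed ledger makes impractical.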
Moving forward, we need to be more vigilant about what we trust from the internet, as deepfake Barack Obama warns us in this video.
It’s a time when we need to rely on trusted news sources. How we move forward in this age of information is going to make all the difference.
This content is sponsored by Muhammad Tayyab.