Please note: this post is published under the "Opinion" category and reflects the personal views of the author. If you disagree or have an opinion you'd like to offer, feel free to discuss it in the comments!
The progress in technologies such as artificial intelligence, cloud computing, and IoT is bringing the world closer and helping mankind solve some of its biggest challenges.
Whether it is eliminating poverty or curing previously incurable diseases, it is tech that is lighting the way. That said, rapidly advancing technology, and AI in particular, is creating some challenges of its own.
Of course, science fiction would have us believe that AI is a threat to humanity — that it can take over the world and unleash terror upon mankind.
However, it isn't the prospect of killer robots that should worry us, but something else that is more imminent. And, dare I say, equally sinister.
Some Background
Back in 1997, a landmark project called the Video Rewrite program was published. It used existing footage of a person to create a completely new video in which they appeared to mouth words from a different audio track. The project used machine learning techniques to make connections between the shape of a person's face and the sounds they were producing.
Two decades later, a similar program called ‘Synthesizing Obama’ depicted the former US president saying things he never actually said.
These projects were mostly for academic research purposes. However, they would prove to be the precursors to what we now know as deepfakes.
The Rise of Deepfakes
It was the latter half of 2017 when a Reddit user named deepfakes began posting edited videos. These videos were made using deep learning (hence the name) and superimposed the faces of celebrities onto the bodies of pornographic actresses. Some of the less harmful ones had actor Nicolas Cage's face swapped into various movie scenes.
Here is an example of a deepfake with famous actor Keanu Reeves:
The user published the machine learning code used to create these videos, and soon an entire community called r/deepfakes emerged on Reddit. Things went from bad to worse when FakeApp was launched in January 2018. Now, anyone could superimpose people’s faces on the bodies of others.
Fast forward to today. The scourge of deepfakes has spread throughout the online sphere. Anyone can make fake videos of celebrities, politicians, and even their neighbors.
FakeApp uses AI to generate accurate facial reconstructions and apply them to a video. For this, one doesn't even need videos of the potential victim; decent-quality images pulled from a person's social media timeline will do.
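Under the hood, tools in this family are generally understood to rely on an autoencoder with a shared encoder and one decoder per identity. FakeApp has not published its internals, so the sketch below is a hedged illustration of that general technique, not its actual code; the layer sizes and 64x64 crop size are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the shared-encoder / per-identity-decoder idea behind early
# face-swap tools: one encoder learns identity-agnostic face features, and
# each person gets their own decoder. Swapping person A's face onto person B
# amounts to decoding A's encoding with B's decoder.
# Layer sizes assume 64x64 RGB face crops and are purely illustrative.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))


encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (omitted) reconstructs each person's own faces through their own
# decoder; at inference, the swap is simply decoder_b(encoder(face_of_a)).
```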
With the prospect of fake videos of yourself spreading across the web, even threats like domain theft and man-in-the-middle attacks seem less menacing in comparison.
Businesses are Feeling the Heat
While the prospect of AI spreading fake news is scary enough, deepfakes can completely devastate businesses. Companies can lose millions when criminals impersonate CEOs and put out fake videos of them.
With these videos becoming more realistic by the day, a convincing imitation can send a company's stock plummeting. For instance, if a fake video of Jeff Bezos announcing defects in Amazon's products went viral, the company would see its stock price crumble.
Even small businesses are concerned about this emerging threat. Julia Markle oversees digital content at ClothingRIC, a startup that disseminates coupons for clothing and lifestyle items. She believes her organization is even more exposed to forged videos: "Unlike Microsoft and Apple, we don't have an army of lawyers or the experts in case we're ever faced with a deepfake situation."
Large or small, however, the risk of manipulated videos damaging a business exists across the corporate world.
Tech Giants Prepare for a Battle
Realizing the urgency of the matter, companies are scrambling for defenses and trying to develop detection tools.
1. Facebook
Facebook is releasing a dataset of videos and faces as part of its Deepfake Detection Challenge (DFDC). The purpose of this effort, according to Facebook's Chief Technology Officer Mike Schroepfer, is "to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer."
Amazon, Microsoft, and a variety of academic and research institutions are involved in this campaign.
2. Google
Meanwhile, Google is making a similar contribution to the fight against deepfakes. The search giant has released 3,000 deepfake videos of its own in an effort to assist researchers working on detection tools.
Working with professional actors, the company filmed a number of scenes and used publicly available deepfake creation methods to build the database. Researchers can use this dataset to train their detection tools, making them more effective and accurate.
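As a rough illustration of what "training a detection tool" on such a dataset can look like, here is a minimal, hedged sketch in PyTorch: a binary real-vs-fake classifier fine-tuned on face frames. The directory layout (frames/real, frames/fake) and the hyperparameters are assumptions for illustration, not part of Google's release.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: face frames extracted from the videos into
# frames/real/... and frames/fake/...; ImageFolder turns folder names into labels.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("frames", transform=tfm)
loader = DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace its head with a
# single real-vs-fake logit.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):                     # a few epochs, purely illustrative
    for images, labels in loader:
        optimizer.zero_grad()
        logits = model(images).squeeze(1)  # shape: (batch,)
        loss = criterion(logits, labels.float())
        loss.backward()
        optimizer.step()
```

Real detection systems go well beyond this, but the basic workflow of feeding labeled real and fake footage to a classifier is what datasets like Google's are meant to enable.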
3. Twitter
Twitter is another social platform designing a policy to tackle the rise of deepfakes. Calling them "synthetic and manipulated media", Twitter put out a series of tweets last month seeking user feedback on the matter.
It has defined face-swapping AI videos as “media that’s been significantly altered or created in a way that changes the original meaning/purpose, or makes it seem like certain events took place that didn’t actually happen.”
Previously, the website banned fake pornographic videos, but it has yet to establish a broad policy to regulate manipulated videos on a larger scale.
Spotting Deepfakes for Everyday Netizens
Until a reliable mechanism for detecting deepfakes is in place, viewers can look for a few telltale flaws to determine whether they are being duped.
Here are some of the signs that the video you are watching has been modified (a minimal automated check for one of them is sketched after the list).
• Skin tone:
Normally, the skin tone of the face in an edited video differs from the rest of the body. Moreover, it looks unnaturally smooth.
• Unusual blinking:
So far, the algorithms aren't advanced enough to create videos in which the person blinks normally.
• Slow speech:
People being impersonated in these videos talk slowly, and the audio does not match their actual voice.
• Face borders:
In most manipulated videos, the face borders are blurry and subtly blend into the background.
• Bizarre look:
All in all, deepfakes have a strange look to them that is generally observable to the naked eye.
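For the blinking cue in particular, researchers have counted blinks using the eye aspect ratio (EAR) computed from facial landmarks. Below is a minimal, hedged sketch using dlib and OpenCV; the landmark model path, the thresholds, and the rough 15-20 blinks-per-minute baseline are illustrative assumptions, not a production detector.

```python
import cv2
import dlib
from scipy.spatial import distance as dist

# Blink-rate check via the eye aspect ratio (EAR). The 68-point landmark model
# (shape_predictor_68_face_landmarks.dat) must be downloaded separately.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

LEFT_EYE = list(range(42, 48))   # landmark indices for the left eye
RIGHT_EYE = list(range(36, 42))  # landmark indices for the right eye

def eye_aspect_ratio(p):
    # EAR = (two vertical eye distances) / (2 * horizontal distance);
    # it drops sharply while the eye is closed.
    return (dist.euclidean(p[1], p[5]) + dist.euclidean(p[2], p[4])) / (2.0 * dist.euclidean(p[0], p[3]))

def blinks_per_minute(video_path, ear_threshold=0.21, min_closed_frames=2):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, closed, frames = 0, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray, 0)
        if not faces:
            continue
        shape = predictor(gray, faces[0])
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
               eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
        if ear < ear_threshold:
            closed += 1
        else:
            if closed >= min_closed_frames:
                blinks += 1          # eye reopened after being closed long enough
            closed = 0
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People typically blink on the order of 15-20 times per minute; a rate far
# below that is one weak signal, not proof, that footage may be synthetic.
```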
Fortunately, the technology hasn’t reached a point where fake videos are flawless. But this will gradually change in the coming years.
The Long Road Ahead
While it is reassuring that corporations like Google and Facebook are investing in deepfake detection tools, the technology they are up against is developing fast. As of now, the tech community is playing catch-up while the threat of more sophisticated deepfakes is knocking at the door. One can only wonder what sort of chaos will ensue as the line between fact and fiction gets blurred even further.