Facebook wants to combat deepfakes by making its own



With algorithmically generated fake videos, known as deepfakes, on the rise, Facebook is teaming up with Microsoft and seven academic institutions in the US for a Deepfake Detection Challenge.

The contest — meant to develop technology for detecting deepfakes and prevent people from falling prey to misinformation — is expected to run from late 2019 until spring 2020.

But training an algorithm to single out doctored videos isn’t an easy task, as it requires massive datasets of deepfakes.

That's why the social media giant says it will use paid, consenting actors to create a library of deepfake videos, which will be used to train and improve tools to combat the threat of such videos plaguing its platforms.

“The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer,” Facebook’s chief technology officer Mike Schroepfer said.

Although not all deepfakes are malicious, they are troubling for a reason.

It's fake news taken to a whole new level of persuasion. It's one thing to read a fabricated story about a non-existent event, but it's another to watch real people, say politicians, doing and saying fictional things — leaving you questioning the legitimacy of everything you see online.

The technology to manipulate images and videos is progressing at an unprecedented pace, outsmarting current capabilities to tell apart the real from the fake.

What's more, the explosion of AI and machine learning has made deepfakes cheaper and easier to produce, to the point where anyone can create their own fake videos. At the same time, they are getting harder to detect.

Last week, a Chinese app called Zao, which allows users to convincingly morph their faces onto movie stars, shot to the top spot in the entertainment section of the App Store, though it quickly drew backlash over its privacy policy.

AI Foundation, an organization that aims to advance the responsible use of AI, launched a tool called Reality Defender last year that combines human moderation and machine learning to spot hyper-realistic fake videos.

But given the lack of a robust solution to the problem, the challenge is undoubtedly a promising step in the right direction.
