The Deepfakes War Begins


FindLaw columnist Eric Sinrod writes regularly in this section on legal developments surrounding technology and the internet.

Relatively unheard of not long ago, deepfakes have become a growing problem with serious consequences. The war on deepfakes has begun, and only time will tell whether the problem can be eradicated, at least in large part. This could be a long and hard fight.

What Is a Deepfake?

A deepfake is a synthetic image or video of a person, created using artificial intelligence. So far, deepfakes have been used to create fake celebrity pornography and revenge porn videos, to spread malicious hoaxes, and to disseminate false news. And the stakes are growing very high for elections, where deepfakes can put words into the mouths of politicians who never actually spoke them.

According to the MIT Technology Review, generative algorithms have recently become so adept at synthesizing media that what they create is becoming essentially indistinguishable from reality. For this reason, and with the 2020 presidential election approaching, experts are working hard and fast to develop ways of detecting deepfakes.

Tech Giants Concerned

What, specifically, can be done? As one example, Google has become a soldier in the war on deepfakes by releasing a large database of known deepfakes. The database includes 3,000 AI-generated videos created with publicly available algorithms, and it can be used to accelerate efforts to develop deepfake detection tools.

Facebook has also announced that it will release a similar database by the end of 2019.

The overall idea behind the efforts of Google and Facebook is to create a large body of examples that can assist in training and testing automated detection tools.
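To make that idea concrete, here is a minimal, purely illustrative sketch (not any tool Google or Facebook has actually described) of how a labeled corpus of real and AI-generated examples could be used to train and test an automated detector. The features below are random placeholders standing in for the visual statistics a real detector would compute from the released videos.

```python
# Illustrative sketch only: training and testing a simple deepfake
# detector on a labeled corpus of real vs. AI-generated examples.
# The "features" are random placeholders, not real video statistics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical feature vectors for 3,000 labeled examples:
# half genuine footage, half synthetic (deepfake) footage.
real_features = rng.normal(loc=0.0, scale=1.0, size=(1500, 32))
fake_features = rng.normal(loc=0.3, scale=1.0, size=(1500, 32))

X = np.vstack([real_features, fake_features])
y = np.array([0] * 1500 + [1] * 1500)  # 0 = real, 1 = deepfake

# Hold out part of the corpus so the detector can be tested on
# examples it never saw during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

detector = LogisticRegression(max_iter=1000)
detector.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, detector.predict(X_test)))
```

The train/test split mirrors the role the article describes for these databases: a shared body of labeled examples used both to train detection tools and to measure how well they generalize.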

All good, right? Not necessarily. Some experts fear that once a solid deepfake detection method is developed, the creators of deepfakes will simply update their algorithms to evade it. Accordingly, some experts argue that detection methods must be developed that can identify even seemingly perfect synthetic images. That is no easy task.

No End in Sight

The deepfakes war is only just beginning. As long as there are incentives to create deepfakes, whether economic, political or personal, they will be a problem that needs to be addressed. And the problem should be addressed not only through technological solutions, but also socially, politically and as a matter of law.

As with everything technological, fasten your seatbelts! This war has started and will not end soon.

Eric Sinrod (@EricSinrod on Twitter) is a partner in the San Francisco office of Duane Morris LLP, where he focuses on litigation matters of various types, including information technology and intellectual property disputes. You can read his professional biography here. To receive a weekly email link to Mr. Sinrod’s columns, please email him at ejsinrod@duanemorris.com with Subscribe in the Subject line. This column is prepared and published for informational purposes only and should not be construed as legal advice. The views expressed in this column are those of the author and do not necessarily reflect the views of the author’s law firm or its individual partners.
