Facebook has taken a major stand to protect the truth. In an understated blog post titled “Enforcing Against Manipulated Media,” the company has staked out a technological position pitting it against a brigade of sophisticated, savvy, and ever more clever creators of fake news who seek to influence the minds of the very people in your home or workplace. Facebook will now remove images and videos that have been purposely manipulated to deceive. Such videos, called “deep fakes,” have become remarkably authentic-looking, sometimes nearly impossible for a human to distinguish from the real thing.
If you haven’t heard the term “deep fake,” you need to go here and get educated, or here and get frightened, or here and get amused. The tl;dr version: a “deep fake” is a manipulated video that presents something that never happened as if it did. This technology has now trickled down from top-flight movie studios to people sitting in their mom’s basement.
If “deep fake”-making were confined merely to balding, incel, Twitter trolls sitting in their parents’ basements, this would not be a real problem; it would be more of an unpleasant inconvenience, like traveling by train in parts of Europe where bathing is less of a priority. But like any technology that can be turned to evil, the ability to manipulate video is being used by nefarious players. To see a sample of the war (literally, war) that companies like Facebook, Twitter, and Google face daily, watch these excellent videos (here, here, here, here, and here) by former government rocket engineer Destin Sandlin, aka SmarterEveryDay.
A few fine, but important, points here. Facebook is not sitting with a censor’s pen, deciding what is fit to be seen. It is only looking to remove videos and images that are purposely posted to deceive. From the post:
Going forward, we will remove misleading manipulated media if it meets the following criteria:
It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
Another critical point:
This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.
This means that posts from the Babylon Bee are safe. Videos that summarize a speech or other news item are also safe.
But there are thousands of fake accounts producing hundreds of videos a day with subtle differences, cross-posting comments that look real and, in many cases, are real, because they are posted by a cadre of people who are paid to post them, often using stolen account credentials.
These hundreds of videos are produced using advanced AI, with subtle changes to backgrounds and down-to-the-pixel manipulation, designed to fool automated detection tools into thinking each copy is unique. To the human eye, they look authentic, and they all convey a similar message, either political or commercial in nature. The combination of videos, cross-posted comments, and views can sometimes fool even the most experienced social media user into clicking. Clicks generate views, which generate ill-gotten revenue for the poster and a greater chance of a video going viral.
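To see why pixel-level tweaks matter, consider how a simple automated duplicate detector might work. The sketch below is purely illustrative and assumes nothing about Facebook's actual systems: it uses a classic "average hash" fingerprint on a downscaled grayscale frame, one of the simplest near-duplicate detection techniques. Two copies of a frame that differ only by a tiny brightness shift produce nearly identical hashes, which is exactly the kind of fingerprinting that adversarial per-pixel manipulation is designed to break. Real platforms use far more robust perceptual hashes and learned models.

```python
def average_hash(pixels):
    """Toy average hash: one bit per pixel of a small grayscale thumbnail.

    `pixels` is a flat list of grayscale values (0-255), assumed to be
    a downscaled video frame (e.g. 8x8 = 64 values -> a 64-bit hash).
    Each bit records whether that pixel is above the frame's mean.
    """
    avg = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value >= avg else 0)
    return bits


def hamming_distance(hash_a, hash_b):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return bin(hash_a ^ hash_b).count("1")


# A frame and a subtly altered copy hash almost identically, so a naive
# detector flags them as duplicates...
frame = [10 * (i % 16) for i in range(64)]          # synthetic 8x8 frame
tweaked = [min(255, p + 2) for p in frame]          # tiny brightness shift
print(hamming_distance(average_hash(frame), average_hash(tweaked)))  # -> 0
```

An adversary who nudges pixels specifically around each region's mean can flip many hash bits while leaving the image visually unchanged, which is why defeating this class of tooling is an arms race rather than a solved problem.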
A single viral video by a foreign government seeking to influence people in the U.S. can establish a conspiracy theory, or in some cases make it into the legitimate press. These governments can afford to dedicate hundreds or thousands of computers to the task, posting 24 hours a day. It’s war.
Thankfully, Facebook has the support of some really smart people in top universities and other tech companies to take on the challenge:
We are also engaged in the identification of manipulated content, of which deepfakes are the most challenging to detect. That’s why last September we launched the Deep Fake Detection Challenge, which has spurred people from all over the world to produce more research and open source tools to detect deepfakes. This project, supported by $10 million in grants, includes a cross-sector coalition of organizations including the Partnership on AI, Cornell Tech, the University of California Berkeley, MIT, WITNESS, Microsoft, the BBC and AWS, among several others in civil society and the technology, media and academic communities.
This is one war the good guys must win. We cannot allow AI and “deep fake” technology to outrun our ability to detect and remove it. Regardless of your view of Facebook as a company, or of your political views, you should applaud Mark Zuckerberg’s commitment to being one of the good guys in this war.