Ahead of the 2020 election, and responding to criticism stemming not only from the 2016 election but from everyday social media use as well, Facebook has taken its first steps toward combating manipulated media, commonly called “deepfakes.”
Deepfakes, ranked among the top ethical concerns of 2020 by the University of Notre Dame, use artificial neural networks to replace a person in an existing image or video with someone else's likeness, making it appear that the individual said or did something they did not.
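The term covers a family of techniques, but the design most often described for early face-swap tools is an autoencoder with a single shared encoder and one decoder per identity: train both reconstruction paths, then decode person A's expression and pose with person B's decoder. The PyTorch sketch below is a toy illustration of that idea; the layer sizes, class name, and 64x64 input are all invented for clarity, and no production system is implied.

```python
import torch
import torch.nn as nn

class ToyFaceSwapAutoencoder(nn.Module):
    """Toy shared-encoder / per-identity-decoder design often described
    for early deepfake tools. All sizes here are illustrative."""
    def __init__(self, latent_dim=256):
        super().__init__()
        # One encoder learns a face representation shared by both identities.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        # One decoder per identity; swapping decoders at inference time
        # renders person B's likeness with person A's expression and pose.
        self.decoder_a = self._make_decoder(latent_dim)
        self.decoder_b = self._make_decoder(latent_dim)

    @staticmethod
    def _make_decoder(latent_dim):
        return nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x, identity="a"):
        z = self.encoder(x)
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

model = ToyFaceSwapAutoencoder()
frame_of_a = torch.rand(1, 3, 64, 64)       # a face crop of person A
swapped = model(frame_of_a, identity="b")   # decoded as person B
print(swapped.shape)                        # torch.Size([1, 3, 64, 64])
```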
In a blog post, Monika Bickert, Facebook's Vice President of Global Policy Management, said collaboration is key to addressing deepfakes and other types of manipulated media. After discussions with 50 global experts from technical, policy, media, legal, civic and academic backgrounds, Facebook has established two criteria for removing misleading manipulated media.
Media will be removed if:
- it has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say, and
- it is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
If a video does not meet the standards for removal but is still deemed false or partly false by independent third-party fact-checkers, people who see it, try to share it, or have already shared it will see warnings alerting them that it’s false.
Bickert says this approach is critical to Facebook’s strategy, and something the company heard repeatedly in its conversations with experts.
“If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context,” Bickert wrote.
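Read together, the rules amount to a short decision procedure: remove when both stipulations hold, otherwise attach a warning label when independent fact-checkers rate the video false or partly false. The sketch below encodes that reading; every name in it is hypothetical and does not correspond to any real Facebook API.

```python
from typing import Optional

def moderate_video(misleadingly_edited: bool,
                   ai_synthesized: bool,
                   fact_check_rating: Optional[str]) -> str:
    """Illustrative encoding of the policy described above;
    flags and return values are invented for this example."""
    # Both stipulations together trigger removal.
    if misleadingly_edited and ai_synthesized:
        return "remove"
    # Short of removal, a fact-checker rating yields a warning label
    # for people who see, share, or have already shared the video.
    if fact_check_rating in ("false", "partly false"):
        return "show_warning_label"
    return "no_action"

print(moderate_video(True, True, None))             # remove
print(moderate_video(False, True, "partly false"))  # show_warning_label
```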
Facebook, in collaboration with Amazon, Microsoft and the nonprofit Partnership on AI, launched the Deepfake Detection Challenge last month. The competition, which ends March 31, 2020, invites people to build new technologies that can help detect deepfakes, with a $1 million prize pool. To seed the challenge, Facebook produced its own deepfake videos featuring consenting actors, providing a dataset for entrants now and for researchers going forward.
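Detection contests of this kind are usually scored with binary log loss over each entrant's predicted probability that a clip is fake; assuming the challenge follows that convention, the metric looks like this (the actual scoring rules are set by the organizers, not this snippet):

```python
import numpy as np

def log_loss(y_true, p_fake, eps=1e-15):
    """Binary cross-entropy, the conventional score for detection
    challenges (an assumption here; see the official rules).
    y_true: 1 if the video is a deepfake, else 0.
    p_fake: predicted probability that the video is fake."""
    p = np.clip(np.asarray(p_fake, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Confident wrong answers cost far more than honest uncertainty.
print(log_loss([1, 0, 1], [0.9, 0.2, 0.6]))  # ~0.28
```

The clipping matters in practice: a prediction of exactly 0 or 1 that turns out wrong would otherwise produce an infinite loss.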
Facebook is not the only entity taking on deepfakes. In fact, DARPA was the first. The agency’s Media Forensics (MediFor) program launched in 2016, a full year before the first deepfake videos surfaced on the social media platform Reddit.
Most recently, DARPA followed MediFor with Semantic Forensics (SemaFor), a program that seeks to develop technologies making the automated detection, attribution, and characterization of falsified media assets a reality. The goal of SemaFor is to develop a suite of semantic analysis algorithms that dramatically increase the burden on the creators of falsified media, making it exceedingly difficult for them to create compelling manipulated content that goes undetected.
“There is a difference between manipulations that alter media for entertainment or artistic purposes and those that alter media to generate a negative real-world impact. The algorithms developed on the SemaFor program will help analysts automatically identify and understand media that was falsified for malicious purposes,” said Matt Turek, program manager in DARPA’s Information Innovation Office.
SemaFor will also develop technologies that enable human analysts to review and prioritize manipulated media assets more efficiently, including methods that integrate the quantitative assessments produced by the detection, attribution, and characterization algorithms to automatically prioritize media for review and response.
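DARPA has not published SemaFor’s integration methods at this level of detail, but one plausible shape for such a triage step is a weighted fusion of the three assessments feeding a priority queue; the weights, names, and scores below are invented for illustration.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    neg_priority: float             # negated so the min-heap pops highest first
    asset_id: str = field(compare=False)

def triage_score(detection: float, attribution: float,
                 characterization: float,
                 weights=(0.5, 0.2, 0.3)) -> float:
    """Hypothetical fusion of detection/attribution/characterization
    assessments into one review priority; the weighting is an assumption."""
    w_d, w_a, w_c = weights
    return w_d * detection + w_a * attribution + w_c * characterization

queue = []
assessments = {"clip-01": (0.92, 0.40, 0.80),   # likely falsified
               "clip-02": (0.30, 0.10, 0.20)}   # probably benign
for asset_id, scores in assessments.items():
    heapq.heappush(queue, ReviewItem(-triage_score(*scores), asset_id))

while queue:                        # analysts see the riskiest media first
    item = heapq.heappop(queue)
    print(item.asset_id, round(-item.neg_priority, 2))
```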
Photo: Deepfake example. Credit: DARPA