Days after Facebook Inc. announced an update to its policy against "deepfakes" and "manipulated media," lawmakers pressed a company executive on its approach to combating misinformation.
Deepfakes — a term that combines "deep learning," a complex form of machine learning, with the word "fake" — are video or audio recordings that use artificial intelligence and algorithms to superimpose the actions and speech of one person onto another.
The company's new policy, announced Jan. 6, is to remove misleading manipulated media that has been edited or synthesized in ways the average person cannot detect and that is the product of artificial intelligence or machine learning manipulating content to appear authentic. The policy makes exceptions, however, for content that is parody or satire.
At a Jan. 8 congressional hearing, lawmakers from both parties questioned the company. Rep. Jan Schakowsky, D-Ill., who chairs the House Energy and Commerce Committee's subcommittee on consumer protection issues, asked Monika Bickert, vice president of global policy management at Facebook, whether a manipulated viral video that appeared to show Speaker of the House Nancy Pelosi, D-Calif., slurring her words would be taken down under the new policy.
Bickert said the new policy specifically would not have applied to that video, but noted that the video would still be subject to Facebook's other policies on misinformation.
Facebook said in 2019 that it labeled the video as false and reduced its distribution.
Schakowsky took issue with the fact that the deepfakes policy covers only videos in which the words a person is saying are manipulated, not those in which the images are altered.
"I don't understand why Facebook should treat fake audio different from fake images," she said. "Both can be highly misleading and result in significant harm to individuals and undermine democratic institutions."
Bickert said that while Schakowsky's characterization of how the new policy would apply is correct, the company takes a broader approach to misinformation that would label and obscure the image as false information and direct people to information from fact-checkers.
On the Republican side, Rep. Larry Bucshon of Indiana asked Bickert for more specifics on how Facebook determines if a video misleads a viewer. Bickert responded by pointing to the artificial intelligence and machine learning elements of the company's newly announced policy.
Bucshon also expressed concern about what rights the company affords to users whose content is flagged as misleading, asking Bickert whether someone in that position has any options to contest the characterization. Bickert said such a person can dispute the characterization directly with the fact-checker or notify the fact-checker that they have amended their content.
Still, the Republican congressman felt uneasy about Facebook's efforts to label the accuracy of content altogether.
"I want to stress that I am concerned over the efforts to make tech companies adjudicators of 'truth,'" he said. "In a country founded on free speech, we should not be allowing private corporations ... or for that matter the government, to determine what qualifies as the 'truth,' potentially censoring a voice because that voice disagrees with mainstream opinion."