This article is part of a two-part series on deepfakes. The other installment, covering legitimate business uses of the technology, can be found here.
Manipulated media enhanced by artificial intelligence and machine learning is becoming more common, and lawmakers and tech giants have begun to explore remedies. But some industry experts say big tech companies should be doing more to combat abuse of the technology.
Videos made with the emerging technology, known as "deepfakes," have gone viral in recent months on platforms like Facebook Inc. and its Instagram service, stoking public awareness of, and debate about, the integrity of the content found on social websites.
A recent survey from Pew Research Center, a nonpartisan "fact tank," found that 77% of 6,127 U.S. adult respondents support restrictions on publishing and accessing altered videos and images. The survey also found that 61% of U.S. adults do not think the public should have the burden of recognizing altered videos and images.
"Now that this [deepfake technology] has gotten into the hands of basically the novice, one person can do as much damage as an entire graphic arts [team] could five years ago," said David Doermann, director of the University of Buffalo's Artificial Intelligence Institute in New York.
Big tech's response
Siwei Lyu, director of the Computer Vision and Machine Learning Lab at the University at Albany, State University of New York, said tools used to create deepfakes are becoming as accessible as popular photo-editing software, such as Adobe Systems Inc.'s Photoshop, which is readily available to consumers.
"You can have all these knobs and you can turn up and down these knobs to actually make several changes to the faces and expressions," said Lyu, who leads a research group that works on detecting and combating deepfakes.
Lyu said the first thing big social companies should do is implement detection algorithms to "stop those fake videos at the doorstep before they can actually be propagated on the internet."
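The gatekeeping Lyu describes can be sketched as a simple screening step in an upload pipeline: score each incoming video with a detector and hold suspect uploads before they are published. This is a minimal illustration, not any platform's actual system; the threshold, function names and detector interface are all assumptions.

```python
# Hypothetical sketch of upload-time deepfake screening: a detector
# scores the video, and anything over a threshold is held for review
# "at the doorstep" rather than published. The threshold value and
# the detector interface are illustrative assumptions.

DETECTION_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this


def screen_upload(video_frames, detector):
    """Gate an upload before publication.

    `detector` stands in for a trained deepfake classifier that maps
    a sequence of frames to a probability that the video is synthetic.
    """
    score = detector(video_frames)
    if score >= DETECTION_THRESHOLD:
        return "held_for_review"
    return "published"


# Example with stand-in detectors (real detectors are neural networks
# trained on labeled real and synthetic video):
print(screen_upload([], lambda frames: 0.95))  # held_for_review
print(screen_upload([], lambda frames: 0.05))  # published
```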
When asked about the work Facebook is doing to combat deepfakes, a company spokesperson confirmed it has engineering teams working on designing systems to identify manipulated media. The spokesperson also said that part of Facebook's work involves "getting outside feedback from academics, experts and policymakers."
In May, Facebook announced it would partner with the University of Maryland, Cornell University and the University of California, Berkeley to research new techniques to detect manipulated media across images, video and audio. The company also rolled out fact checking for photos and videos in 2018. This process uses a machine learning model to identify potentially false content and sends it to fact checkers for review.
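The flag-then-review process described above can be sketched as a triage step: a model assigns a "potentially false" score to each item, and items over a threshold are queued for human fact checkers while the rest pass through. This is a hedged illustration only; the field names, threshold and scoring function are assumptions, not Facebook's actual implementation.

```python
# Minimal sketch of an ML-assisted fact-checking pipeline: a classifier
# scores posts, and high-scoring ones are routed to human reviewers.
# All names and the threshold are illustrative assumptions.

REVIEW_THRESHOLD = 0.5  # assumed; real thresholds are tuned empirically


def model_score(post):
    """Stand-in for a trained misinformation classifier.

    A production model would compute this from the post's media and
    metadata; here we just read a precomputed score for illustration.
    """
    return post.get("score", 0.0)


def triage(posts):
    """Split posts into a fact-checker review queue and a pass-through list."""
    review_queue, passed = [], []
    for post in posts:
        if model_score(post) >= REVIEW_THRESHOLD:
            review_queue.append(post)
        else:
            passed.append(post)
    return review_queue, passed


posts = [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.1}]
queue, passed = triage(posts)
print([p["id"] for p in queue])   # [1]
print([p["id"] for p in passed])  # [2]
```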
A fake video featuring Barack Obama shows elements of facial mapping used in deepfake technology.
Alphabet Inc.'s Google LLC and Twitter Inc. did not respond to requests for comment about what they are doing to combat deepfakes, though Google has previously said it is using deep-learning models to create collections of synthetic speech to help advance fake audio detection. Lyu's research also was, in part, funded through a Google Faculty Research Award.
Beyond detecting media as it is uploaded, Lyu said companies should dedicate resources to weed out existing deepfakes on their platforms.
"They have the resources to do that," he said, noting that YouTube LLC can quickly filter videos involving violent language or hate crimes and can do "this kind of detection and filtering" also.
Doermann, meanwhile, would like to see large platforms be more transparent about the work they are doing to combat manipulated media. According to Doermann, who previously carried out research as a program manager on computer vision, human language technologies and voice analytics at a U.S. Department of Defense agency, big tech companies need to make their efforts to detect and mitigate deepfakes public and to "educate their people and work together."
From an investor perspective, deepfakes themselves do not pose a unique risk to big tech companies, but they do represent one issue in a series of ongoing risks, such as privacy concerns, and whether the platforms incite violence, said Jonas Kron, senior vice president and director of shareholder advocacy at Trillium Asset Management, in an interview.
"A lot of our thinking about these issues and what we're asking of the companies and what we're looking for ... lands more in the governance realm, in terms of — is the company holding itself to certain human rights obligations in a meaningful way," Kron said.
Kron also said a substantial legal risk could emerge if Congress reforms Section 230 of the Communications Decency Act, which shields internet platforms from civil liability and state criminal prosecution for content created and posted by users. Members of Congress have floated the idea amid conversations about how to regulate big tech.
"230 is about shielding it from slander and libel laws, that creates a shield against defamation," Kron said. "That is what a deepfake essentially is. So if that 230 protection goes away, then, sure, the sky is the limit in terms of legal liability."
Additionally, Kron noted that deepfakes could pose a risk to future product development at major online platforms.
"Future business may be curtailed or challenged because of the lack of trust that comes from these controversies," he said of ongoing scrutiny of big tech. "For so many companies ... it's really all about the next big thing, and if the next big thing withers on the vine because of the lack of trust, that's a problem."
The solution to the litany of risks confronting big tech, Kron said, could be to adopt governance reforms, rather than address each issue individually.
"The risks are popping up all over the place, which is why I think governance reforms are worth considering, because they have the potential to provide systems to address a multitude of problems, rather than creating a solution to each problem," he said.
As for what role Congress will play, Rep. Adam Schiff, D-Calif., chairman of the committee that hosted a hearing on deepfakes last month, wrote a July 15 letter to the CEOs of Facebook, Google and Twitter seeking details on their deepfake policies and on whether they are researching techniques to automatically detect deepfakes.
A month earlier, Schiff also introduced a bill that would direct the director of national intelligence to hold a competition, with awards of up to $5 million, to stimulate "research, development or commercialization of technologies to automatically detect machine-manipulated media." The bill has passed the U.S. House but has not yet been taken up by the U.S. Senate.
Companion bipartisan bills recently introduced in both the House and Senate would require the secretary of the U.S. Department of Homeland Security to publish an annual report on the use of deepfake technology. The report would include a description of technological countermeasures that could be used to address "concerns with digital content forgery technology" and an assessment of how "nongovernmental entities" use deepfakes. Neither chamber has taken up its bill yet.
Advancing more comprehensive legislation does not appear to be a high priority at the moment. In fact, at the June hearing, Schiff himself acknowledged that legislation at this stage may be premature.
"We need to soberly understand the implications of deepfakes, the underlying AI technologies and the internet platforms that give them reach, before we consider appropriate steps to mitigate the potential harms," he said.