10 Oct, 2024
➤ Voters face increased disinformation risks from AI-generated deepfakes ahead of the US elections.
➤ Deepfake campaigns can exploit audio to impersonate political figures, celebrities or officials.
With the US presidential election and several key congressional races in battleground states less than a month away, voters are more exposed than ever to election-related disinformation and deepfakes generated by artificial intelligence.
Pindrop Security Inc. CEO Vijay Balasubramaniyan and his team have been deploying deepfake detection technology to help journalists and voters be better informed. The technology is designed to help prevent fraud and cyberthreats that can come from the misuse or impersonation of others' voices in political advertising or other disinformation campaigns.
S&P Global Market Intelligence: Your career as a software engineer has included stints at
[Image: Vijay Balasubramaniyan, CEO of Pindrop Security. Source: Pindrop.]
Consumer sentiment around privacy seems to have changed dramatically in recent years, as more users are aware of their online footprint and becoming more savvy about how to protect it. While the government can issue guidelines or regulations, does the innovation in the infosec space suggest that the industry also has a critical role to play in making security measures easier to understand and adopt?
Consumers have become increasingly aware of how their data can be used and exposed online, and there is continued frustration with the lack of federal regulation providing a uniform standard to protect business and consumer privacy. Our Deepfake and Voice Clone Consumer Sentiment Report, published last October, revealed that 60% of respondents were highly concerned about deepfakes and voice clones. In addition, 40% reported feeling optimistic that banks, insurance and healthcare providers have taken steps to combat deepfake fraud, but they have lower confidence in the news and social media sectors.
We believe the private sector has a critical role to play in driving forward security measures while regulation is being worked out. Pindrop is committed to changing this, and it's the reason why we have been advocating for Congress and other crucial members of our political system to make changes to how we deal with emerging deepfake threats — especially as they relate to the upcoming election.
Pindrop's goal is to create security and detection tools that are not just robust but also easy to adopt. We've developed advanced technologies that allow organizations to verify identities over voice channels in seconds. By doing this, we can ensure that privacy and security are both effective and efficient, which is essential for a safer digital landscape.
What sort of guidance can Pindrop provide to local or federal election administration officials to aid them in detecting election-related disinformation and deepfakes?
As deepfake technology becomes increasingly advanced, the threat to democratic processes grows with the ability to fabricate highly convincing audio and video content. This not only spreads disinformation but also creates misleading campaign messages, posing a significant risk to voter trust.
Many deepfake campaigns exploit audio to impersonate political figures, celebrities or officials. Our technology analyzes voice characteristics, such as acoustic features and behavioral patterns, to determine whether content is authentic or artificially generated. By working with authorities, we can detect manipulated audio before it circulates widely.
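Pindrop's actual models are proprietary and far richer than anything shown here, but as a rough illustration of what "acoustic features" means in practice, the sketch below computes two classic signal statistics (short-term energy and zero-crossing rate) that detectors of this kind typically consume alongside many spectral and prosodic features. The function name and the choice of features are illustrative assumptions, not Pindrop's API.

```python
def acoustic_features(samples):
    """Extract two simple acoustic features from a mono PCM clip.

    samples: a list of float PCM samples in [-1.0, 1.0].
    Returns short-term energy (mean squared amplitude) and the
    zero-crossing rate (fraction of adjacent sample pairs that
    change sign). Real detectors use far richer feature sets;
    these two merely illustrate the kind of low-level signal
    statistics a voice-authenticity classifier consumes.
    """
    n = len(samples)
    energy = sum(s * s for s in samples) / n
    zero_crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return {
        "energy": energy,
        "zero_crossing_rate": zero_crossings / (n - 1),
    }
```

A downstream classifier would then compare such feature vectors against profiles of genuine and synthetic speech.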
We strongly recommend that government bodies take a proactive stance, rather than a reactive one. This involves conducting regular audits of communication channels, training staff to recognize deepfakes and disinformation, and collaborating with private sector experts to create robust detection systems. A major priority should be public education on recognizing deepfakes, coupled with transparent verification processes, which will enhance voter confidence in the integrity of the information they receive.
Pindrop's detection of the New Hampshire robocall deepfake, which illegally spoofed President Joe Biden, recently led to a $6 million fine for a consultant who violated Federal Communications Commission regulations, as well as an indictment on felony charges of voter suppression and the misdemeanor impersonation of a candidate. Have there been other recent success stories of your technology being leveraged to prevent fraud or reduce the risk of other cyber incidents that you would like to highlight?
Pindrop has been a leader in voice authentication, including tracking the sources of high-profile deepfakes like those involving President Biden and Vice President Kamala Harris. For background on the incidents: the Kamala Harris deepfake was shared by Elon Musk on X on July 26 and quickly garnered national media attention. As of July 31, 2024, Musk's post was still live, with over 133 million views, 245,000 reposts and 936,000 likes. Another parody video of Harris was posted to X by @MrReaganUSA on July 31 and quickly made headlines.
Our research team and Pulse detection technology determined that the Harris audio was a partial deepfake: AI-generated speech intended to replicate Harris's vocal likeness, spliced with audio clips from the vice president's previous remarks. Pulse is a tool designed for continuous assessment, producing a segment-by-segment breakdown and analyzing for synthetic audio every 4 seconds. This is especially useful for identifying AI manipulation in specific parts of an audio file, helping to spot partial deepfakes.
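Pulse's internals are not public, but the windowed, segment-by-segment approach described above can be sketched roughly as follows. The function names, the pluggable per-segment scorer, and the 0.5 decision threshold are all assumptions for illustration; the only detail taken from the interview is the fixed 4-second analysis window.

```python
def segment_scores(samples, sample_rate, score_fn, window_s=4.0):
    """Split audio into fixed windows and score each for synthetic speech.

    samples: list of float PCM samples; score_fn: any callable mapping a
    window of samples to a score in [0, 1] (higher = more likely
    AI-generated). Returns a list of (start_seconds, score) pairs,
    one per window, mirroring a segment-by-segment breakdown.
    """
    window = int(window_s * sample_rate)
    results = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        results.append((start / sample_rate, score_fn(chunk)))
    return results


def classify(scores, threshold=0.5):
    """Label a clip from its per-segment scores.

    All segments flagged -> 'deepfake'; none -> 'authentic';
    a mix -> 'partial deepfake', the case where only some
    segments contain AI-generated speech.
    """
    flags = [score >= threshold for _, score in scores]
    if all(flags):
        return "deepfake"
    if not any(flags):
        return "authentic"
    return "partial deepfake"
```

With a real model supplying `score_fn`, a clip that splices synthetic speech into genuine remarks would show high scores only in the affected windows, which is exactly what makes partial deepfakes detectable.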
Our expertise can be pivotal for election officials at all levels to detect and counteract election-related disinformation, particularly deepfake audio used to impersonate political figures or spread misleading information.
You and your team have been invited to participate in initiatives like the
These conversations are extremely useful in gauging how cyberthreats are being handled and addressed at the government level, and they also help us better understand and serve our customers. Pindrop has had very productive discussions with Congress, as well as with the Federal Trade Commission, the Federal Communications Commission, the National Institute of Standards and Technology, and the White House. AI defense and regulation will continue to evolve as the technology does, and we are honored and excited to be a part of this journey with elected officials to find viable solutions for both enterprises and consumers.