As tech leaders tout artificial intelligence's benefits to society, researchers urge the industry to do more to counter fears about superintelligent machines or autonomous robots acting against human interests.
Speakers at CogX, an annual artificial intelligence conference in London, largely dismissed the idea of killer robots as a Hollywood fantasy and a fear-mongering distraction from the much-needed development of AI in crucial areas such as energy, healthcare and transportation.
Stephen Cave, executive director at the Leverhulme Centre for the Future of Intelligence, said dystopian fears around AI bear little resemblance to the actual technology. In his view, public discourse on AI has too often turned extreme.
"These fears seem to have their own momentum, completely independent of real-world algorithms. And on the one hand, we have to acknowledge these powerful, emotional responses that we have to AI," Cave told the crowd.
"At the same time … we should beware that [they] don't lead us to dismiss technologies that might, in fact, be enormously beneficial," he added.
Much of the concern stems from fears that AI could eventually outpace human intelligence.
From world-renowned physicist Stephen Hawking to Tesla Inc. and SpaceX CEO Elon Musk, a range of influential individuals have spoken out against artificial intelligence and made frightening predictions about its potential to destroy humanity.
Musk, an early investor in AI firm DeepMind, which was later acquired by Alphabet Inc., recently warned that artificial intelligence would create an "immortal dictator from which we can never escape."
But AI development has very little to do with "creating a human 2.0," Arohi Jain Rajvanshi, head of strategy for the AI Initiative at The Future Society, a nonprofit think tank, said during a separate panel.
"It's easier to imagine an extreme, rather than something that's more balanced," she noted, adding, "Of course, dystopian narratives embody some of our very real fears such as loss of human agency [and] identity control, which must be paid attention to."
At the same time, she warned against amplifying narratives around AI's potential to outperform humans over its positive contributions to society.
While conversations around AI have often focused on data privacy and even the prospect of a supercharged breed of autonomous military weapons, much of the apprehension has centered on mass job losses.
A PricewaterhouseCoopers report published in 2017 found that up to 30% of jobs in the U.K. were at high risk of full automation by the early 2030s. Approximately 38% of U.S. jobs, 35% in Germany and 21% in Japan may face a similar fate.
In response to widespread fears about the dangers of AI, Claire Craig, director of science policy at the Royal Society, an independent academic society, said the tech industry needs to work toward a more balanced public dialogue in order to avoid a situation where doomsday scenarios shape future policy and regulation.
As for overpowering robots and massive job losses, these concerns are "fine as an entry point" to the overall discussion on AI, but Craig added that they must not become the basis of the debate.
"The systemic changes [brought about by AI] will be more significant for health, life and wealth. The danger lies in not talking about the things that matter most," she told delegates.
Additional CogX 2018 coverage:
AI provides additional horsepower to existing cybersecurity arsenal