Proposals to regulate online services provided by companies such as Facebook Inc., Alphabet Inc. and Twitter Inc. have been criticized as a "short-sighted" move in the fight against extremism.
A day after the third major terrorist attack in the U.K. in three months, British Prime Minister Theresa May laid out plans to introduce legislation denying terrorists and extremist sympathizers the digital tools used to communicate and plan attacks.
"We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet, and the big companies that provide internet-based services, provide," she said.
However, not everyone is convinced that enforcing additional restrictions is the right approach.
"It is disappointing that in the aftermath of this attack, the government's response appears to focus on the regulation of the internet and encryption," the digital rights organization Open Rights Group said June 4.
It argued that May's proposals would only push terrorists into "even darker corners of the web, where they will be even harder to observe."
John Horgan, a psychologist and terrorism expert at Georgia State University, agreed the trade-off may not be worth it.
"It is naive to think that increased control of the internet will somehow prevent attacks. People express radical views via social media every minute of every day," he said.
Warning against a knee-jerk reaction to terror attacks, Horgan maintained that policing internet behavior alone would have graver consequences down the line.
May's calls for further intervention and regulation of the internet come after the Investigatory Powers Act, part of a larger strategy to combat terror, came into force in January.
Hailed as one of the biggest reforms in Britain's surveillance tactics, the law requires service providers to store details of people's internet activity for 12 months and make it accessible to a number of public authorities. The legislation — dubbed "the Snooper's Charter" — further gives security services and police authorization to hack into computers and phones, as well as to collect communications data in bulk.
These policies may deny extremists an online platform, but they still do not address the underlying, offline factors, including the fundamental lack of trust between communities and security services in the U.K., according to David Otto, counterterrorism expert at U.K.-based TGS Intelligence and a senior adviser for Global Risk International.
"What Theresa May needs to understand is ... this is not just a short-sighted approach. It is also extremely counterproductive," he added.
Observers also voiced concerns over further regulation and the widespread expectation that a Conservative government would seek to authorize backdoor access to end-to-end encrypted communications.
Human rights group Liberty stressed that government censorship of the internet and ending encryption would have done little to stop last weekend's attack.
"We have wide-ranging, robust laws designed to combat terrorism, giving the police and intelligence agencies unprecedented powers," Martha Spurrier, director of Liberty, said in an interview.
"Restricting our freedoms further would hand terrorists a victory, granting them the power to change the very foundations our country is built on without even ending their murderous acts," she added.
Meanwhile, technology firms such as Facebook, Alphabet and Twitter countered allegations of providing a breeding ground for extremist content, with Twitter claiming to have suspended just under 400,000 accounts related to terrorism in the second half of 2016.
Facebook director of policy Simon Milner said his company aims to be a "hostile environment for terrorists," while Google said it employs thousands of people and invests hundreds of millions of pounds to fight abuse on its platforms, including YouTube.
Pressure has been growing across Europe for internet companies to do more about extremist content on their platforms, including proposals in Germany to fine companies as much as €50 million for failing to promptly remove offensive content.
As of September 2016, Facebook removed 46% of content flagged by users, compared with 10% of content on YouTube and 1% of content on Twitter.
Even then, when it comes to curbing terrorist propaganda, social media platforms are simply "outgunned and outmatched," said Amarnath Amarasingam, a senior research fellow at the Institute for Strategic Dialogue, a London-based counterextremism think-tank.
To think radicalization happens exclusively online or on social media is "simply false," Amarasingam said. It is a multifaceted process, he continued, adding that "we had terrorism before social media."
As such, Amarasingam believes governments need to rethink "jumping ahead" and putting "restrictive" regulations into place.
Nonetheless, with the proliferation of extremist content available on the internet, many maintain the free exchange of extremist information cannot continue.
In this regard, the way forward may not be "regressive" policymaking but, ultimately, a collaborative discussion involving policymakers, legal experts, intelligence agencies, police, technology companies and affected communities, according to Mubaraz Ahmed, an analyst at the Center on Religion and Geopolitics, which is part of the Tony Blair Institute.
He said: "The rights afforded in this country of freedom of expression and the right to privacy are part of the fabric of our liberal democracy.
"But it is these very same rights that are being exploited to devastating effect by these terrorists ... therefore action must be taken."