New suit reignites debate on ISIS' use of social media

A new lawsuit alleges that Twitter Inc., Google Inc. and Facebook Inc. are allowing the Islamic State to keep violent content only a click away on their popular platforms, and are earning a profit in the process.

Filed Dec. 19 in federal court in the Eastern District of Michigan on behalf of relatives of three victims of the Pulse nightclub shooting in Orlando, Fla., the suit alleges that Omar Mateen, who killed 49 people and wounded 53 others at the nightclub in June, was "directly influenced" by postings by the Islamic State on Twitter, Facebook and YouTube, which is owned by Alphabet Inc.

"Defendants profit from ISIS by placing ads on ISIS' postings. For at least one of the Defendants, Google, revenue earned from advertising is shared with ISIS," the suit said. "Defendants incorporate ISIS' postings to create unique content by combining the ISIS postings with advertisements selected by Defendants based upon ISIS' postings and the viewer looking at the postings and the advertisements."

"The goal is to modify the companies' behavior, it's to get these companies to accept responsibility," Keith Altman, a lawyer at 1-800 Law Firm who is representing the families, said in an interview.

But the companies have frequently argued that they should not be held liable due to a provision in the federal Communications Decency Act known as Section 230, which states that content providers are not liable for content produced by their users or other third parties.

Yet Altman said the suit disputes the companies' "get-out-of-jail-free card." By displaying targeted ads, which vary depending on what a user has viewed, along with the posts, the sites are "not simply passing along content created by third parties," the suit alleges.

Courts have overwhelmingly agreed that Section 230, originally written in 1996, does apply to social media sites, Emma Llanso, director of the Free Expression Project at the Washington-based Center for Democracy and Technology, said in an interview.

"Their protection from liability doesn't depend on them doing a particular kind of monitoring or a particular kind of filtering, so the allegations in this complaint that there's different things they could do aren't really germane," she said.

She is also skeptical about whether Twitter or Facebook could really be considered the "author" of an ad simply because it appears around a post linked to ISIS.

Altman, who is also representing the family of Nohemi Gonzalez, a California college student killed in the terror attacks in Paris in November 2015, in a similar suit, said he hopes the suit will spur social media sites to do more to stop the spread of ISIS-linked accounts.

"I'm not really taking [Section] 230 on, but our position is just that 230 doesn't apply here," he said. "The good thing about this lawsuit is that nothing will stay like it is, they're gonna have to make changes and if these lawsuits were the vehicle to bring these concerns to the forefront, that would be great, regardless of what happens. That's the families' main concern in these cases."

Social media sites have already stepped up their efforts to police terrorist propaganda in certain ways.

Researchers at George Washington University's Program on Extremism found that repeated suspensions of Twitter accounts belonging to English-language ISIS supporters between June and October 2015, for example, had a "devastating" effect on their number of followers.

Further, Microsoft Corp., Facebook, Twitter and Google recently announced a partnership to create a database featuring digital fingerprints tied to violent and extremist content that would allow one company to track and remove the content while sharing it with the others.

"You can understand why they've created this kind of platform because they've been pursued left and right by people saying they're not doing enough," Llanso said. "But it also does create a centralized point for content control and censorship that could very easily be abused if governments start pressuring them about putting certain kinds of material in the database, or if they simply start making mistakes and erroneously include information in this database."