Los Angeles Mayor Eric Garcetti met with members of The Islamic Center of Southern California to show solidarity after the mosque shootings in New Zealand. Harrison Hill, USA TODAY
Tough questions are being asked about the role of social media in the wake of the horrific shooting that took the lives of at least 49 people at two New Zealand mosques. Sadly, tough questions with no easy answers.
The 28-year-old alleged white supremacist gunman not only livestreamed the rampage via helmet-cam on Facebook, but footage of the massacre continued to circulate for hours after the shooting, despite frantic efforts by Facebook, YouTube, Twitter and Reddit to take it down as quickly as possible. Each company issued the requisite statement condemning the terror, and each has a code of conduct that content like this plainly violates.
New Zealand mosque shootings: How U.S. racism might be fueling hate around the world
Ahead of the attack, the shooter posted a since-removed, hateful 74-page manifesto on Twitter.
And during the killing, he apparently referenced divisive YouTube star PewDiePie, who, for the record, subsequently tweeted, “I feel absolutely sickened having my name uttered by this person.”
“The attack on New Zealand Muslims today is a shocking and disgraceful act of terror,” said David Ibsen, executive director of the non-profit, non-partisan Counter Extremism Project (CEP) global policy organization. “Once again, it has been committed by an extremist aided, abetted and coaxed into action by content on social media. This poses once more the question of online radicalization.”
Mia Garlick of Facebook New Zealand issued a statement Friday indicating that, “since the attack happened, teams from across Facebook have been working around the clock to respond to reports and block content, proactively identify content which violates our standards and to support first responders and law enforcement. We are adding each video we find to an internal database which enables us to detect and automatically remove copies of the videos when uploaded again. We urge people to report all instances to us so our systems can block the video from being shared again.”
In its own statement, YouTube said that “shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violative content on YouTube. As with any major tragedy, we will work cooperatively with the authorities.”
Twitter echoed similar sentiments: “Twitter has rigorous processes and a dedicated team in place for managing exigent and emergency situations such as this. We also cooperate with law enforcement to facilitate their investigations as required.”
Of course, not all social media companies are created equal.
One of the difficulties in tackling such issues, says UCLA assistant professor Sarah T. Roberts, is that it is “somewhat about apples and oranges when we talk about mainstream commercial platforms in the same breath as some of the more esoteric, disturbing corners of the internet, both of which are implicated in this case. This person had a presence across a number of different kinds of sites. The approaches and the orientation to dealing with hate speech, incitement to violence, terroristic materials, differs in…