
How the New Zealand mosque shooting was designed to go viral

The terrorist attack was inextricably tied to Facebook, Twitter and YouTube.

Richard Nieva, former senior reporter
Richard Nieva was a senior reporter for CNET News, focusing on Google and Yahoo. He previously worked for PandoDaily and Fortune Magazine, and his writing has appeared in The New York Times, on CNNMoney.com and on CJR.org.
Mourners gathered at Al-Madinah School in Auckland, New Zealand, to remember the victims of the Christchurch mosque attack. (Getty Images)

The tragic shooting spree in New Zealand has once again exposed social media's difficulty in policing itself.

A shooter in Christchurch killed 50 people at two mosques last Friday in a livestreamed attack designed to exploit how we share on the internet. In the hours after the attack, Facebook, Google and Twitter were overwhelmed trying to stop the spread of the footage online. New copies of the video, which are still fairly easy to find on the web, went up as quickly as the social media platforms pulled them down.

The spread of the mosque attack video brings new scrutiny to big tech's inability to take down the graphic content that flows through its products. Leaders from around the world are calling on the companies to do better.

The new criticism comes as lawmakers and the public begin reconsidering the scale and influence of Silicon Valley companies. For the past two years, large tech companies have been called to task for the unintended consequences of their platforms, which range from the rise of misinformation to the prevalence of data misuse.

How did this video get out on the web?

Last week a gunman opened fire at one mosque and then another in Christchurch, New Zealand, while worshippers were gathered for Friday prayers. The gunman wore a camera on his military-style helmet, streaming video of the massacre over Facebook Live as he stalked his victims.

The massacre, which New Zealand Prime Minister Jacinda Ardern called a "terrorist attack," claimed 50 lives. An Australian suspect, Brenton Harrison Tarrant, emailed the prime minister a manifesto outlining his beliefs just minutes before the rampage began. He has been charged with murder.

Days before the shooting, the suspect also posted photos of what appear to be the weapons used in the attack, along with links to his manifesto, to a now-suspended Twitter account.

What makes this different from other mass shootings?

Sadly, videos of mass shootings have become commonplace and are regularly captured on smartphones. The Christchurch killings, however, may mark the first attack born of the internet era. As YouTube Chief Product Officer Neal Mohan told The Washington Post, "This was a tragedy that was almost designed for the purpose of going viral."

The gunman actively sought an audience. He promoted the link to his livestream, as well as a 74-page manifesto, on his Facebook account before the shooting started. He also posted the stream link and manifesto to 8chan, a fringe message board, to spread his message more widely.

The manifesto itself is a work of internet culture, referencing hot topics and employing the layers of irony common to the web. It mentions Fortnite and Spyro the Dragon, both popular video games. At one point in the video, the gunman says offhandedly, "Remember, lads, subscribe to PewDiePie." That's a reference to the popular YouTuber, a controversial video game commentator. The phrase has been widely used in the YouTuber's race with T-Series, an Indian music video channel, to become the platform's most-subscribed account. In some circles, it's used as an ironic greeting.

(In response, the popular online personality, whose real name is Felix Kjellberg, tweeted that he was "sickened" by the shooting.)

The repeated use of internet tropes ensured the shooter's actions would find their way to a wider audience. Search for PewDiePie and stories mentioning the shooting are bound to come up.

The footage also looked like it could have come from Call of Duty, Battlefield or any other realistic war simulation game.

How did Facebook, YouTube and Reddit respond?

If the attacks were meant to go viral, the gunman accomplished that goal, creating a game of whack-a-mole for Facebook, YouTube, Twitter and Reddit.

Facebook said fewer than 200 viewers saw the stream as the attack was happening, and the gruesome footage was viewed about 4,000 times in total before the social network took it down and closed the alleged shooter's account. Facebook said it received its first user report about the video 12 minutes after the live broadcast ended, roughly half an hour after the attack started.

That was enough time for some users to download and re-upload the video. And they did. Footage of the shooting popped up on Twitter, Facebook and YouTube, pushing the companies' content moderation systems into overdrive. But it was too late. The video spread far and wide across the internet.

In the first 24 hours after the attack, Facebook said it removed 1.5 million copies of the video. Of those clips, 1.2 million were blocked at the point of upload, the social network said. The company also took down edited versions of the footage that didn't contain graphic content, "out of respect for the people affected by this tragedy and the concerns of local authorities."

Facebook said its artificial intelligence tools didn't automatically catch the video because the system didn't have enough training data to recognize that specific type of imagery. Guy Rosen, a vice president at Facebook, said such footage is hard to detect because "these events are thankfully rare."

"AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove," Rosen said in a blog post. "But it's not perfect."

YouTube also scrambled to stop the video from spreading. Mohan, the Google-owned company's product chief, said he assembled a war room of employees to work through the night to take down "tens of thousands" of videos. The company also encouraged users to flag any videos they saw.

YouTube's systems, which normally rely on a combination of human moderation and AI tools, also "automatically rejected" footage of the violence, a spokesperson said. It also temporarily suspended the ability to sort or filter searches by upload date. 

Reddit also banned groups, including the r/watchpeopledie subreddit, after users shared a link to the shooter's live video.

Still, the video made it out into the wilds of the web, including as torrent files, which require no individual site to host the footage.

How did law enforcement react to the video?

Authorities in New Zealand immediately asked people not to post footage of the shooting.

"Police are aware there is extremely distressing footage relating to the incident in Christchurch circulating online," New Zealand police said in a statement on Twitter. "We would strongly urge that the link not be shared. We are working to have any footage removed."

What did politicians say?

In the wake of the shooting, politicians around the world blasted tech companies for failing to control their platforms.

"The rapid and wide-scale dissemination of this hateful content -- livestreamed on Facebook, uploaded on YouTube and amplified on Reddit -- shows how easily the largest platforms can still be misused," Sen. Mark Warner, a Democrat from Virginia and a vocal critic of big tech, said last week.

Australian Prime Minister Scott Morrison and the country's opposition leader, Bill Shorten, accused Facebook of "going missing" when it comes to fighting hate speech and of playing an "unrestricted role" in the spread of extremism.

Tom Watson, the deputy leader of the UK's Labour Party, also called out the tech companies.

"The failure to deal with this swiftly and decisively represents an utter abdication of responsibility by social media companies," Watson said. "This has happened too many times."

On Tuesday, the House Homeland Security Committee said in a tweet that it had sent a letter to the CEOs of Facebook, YouTube, Twitter and Microsoft. The letter asked them to prioritize removing "violent terrorist content," including content from the "far-right" and "domestic terrorists."

Originally published March 19, 6:22 p.m. PT.
Update, March 21: Adds information on why Facebook's artificial intelligence tools didn't catch the video.