
Facebook says coronavirus made it tougher to moderate content

The social network had fewer people to review content about suicide, self-injury and sexual exploitation.

Queenie Wong, Former Senior Writer

The social network started relying more on technology than on human reviewers.

Image by Pixabay; illustration by CNET

Facebook said Tuesday that the coronavirus affected how many people could review posts on the social network for violations of rules against content promoting suicide or self-injury. The COVID-19 pandemic also limited how many workers could monitor Facebook-owned Instagram for child nudity and sexual exploitation.

From April to June, Facebook said in a blog post, it took action on fewer pieces of that type of offensive content because it sent its content reviewers home. Users also couldn't always appeal a content moderation decision. 

Facebook relies on a mix of human reviewers and technology to flag offensive content. But some content, including posts related to suicide and sexual exploitation, is trickier to moderate, so Facebook relies more on people for those decisions. The company has faced criticism and a lawsuit from content moderators who alleged they suffered symptoms of post-traumatic stress disorder after repeatedly reviewing violent images.

Guy Rosen, who oversees Facebook's work on safety and integrity, said during a press call that content about suicide and child nudity can't be reviewed at home because it's visually graphic. That makes it especially challenging for content reviewers working from home, where family members may be nearby.

"We want to ensure it's reviewed in a more controlled environment, and that's why we started bringing a small number of reviewers where it's safe back into the office," he said.

Facebook is also using artificial intelligence to rank how harmful content might be and flag which posts people need to review first. The company has been prioritizing the review of live videos, but if a user implied in a regular post that they were going to commit suicide, that would also be ranked very high, Rosen said. 
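
To illustrate the kind of triage Rosen described, here is a minimal, hypothetical sketch in Python of ranking flagged posts by an estimated harm score so the most urgent ones surface first for human review. The categories, weights and function names are illustrative assumptions, not Facebook's actual system.

# A minimal, hypothetical sketch of review prioritization.
# Categories, weights and scores below are illustrative assumptions only.
import heapq

# Higher weight means the post should be reviewed sooner.
CATEGORY_WEIGHTS = {
    "suicide_or_self_injury": 1.0,
    "live_video": 0.9,
    "hate_speech": 0.6,
    "spam": 0.1,
}

def harm_score(category: str, model_confidence: float) -> float:
    """Combine a predicted category with the model's confidence into one score."""
    return CATEGORY_WEIGHTS.get(category, 0.3) * model_confidence

def build_review_queue(flagged_posts):
    """Return post IDs ordered so the most potentially harmful come first."""
    queue = []
    for post_id, category, confidence in flagged_posts:
        # heapq is a min-heap, so negate the score to pop the highest first.
        heapq.heappush(queue, (-harm_score(category, confidence), post_id))
    return [heapq.heappop(queue)[1] for _ in range(len(queue))]

# Example: a post implying suicide outranks likely spam and hate speech.
print(build_review_queue([
    ("post_a", "spam", 0.95),
    ("post_b", "suicide_or_self_injury", 0.7),
    ("post_c", "hate_speech", 0.8),
]))

In this sketch, a post implying suicide is pushed ahead of likely spam even though the model is less confident about it, mirroring the idea that severity, not just detection confidence, drives review order.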

Facebook said it was unable to determine how prevalent violent and graphic content, and adult nudity and sexual activity, were on its platforms in the second quarter because of the impact of the coronavirus. Facebook routinely publishes a quarterly report on how it enforces its rules.

Facebook has also been under fire for allegedly not doing enough to combat hate speech, an issue that prompted an ad boycott in July. On Monday, NBC News reported that an internal investigation found that there were thousands of groups and pages on Facebook that supported a conspiracy theory called QAnon, which alleges there's a "deep state" plot against President Donald Trump and his supporters.

Monika Bickert, who oversees Facebook's content policy, said Facebook has removed QAnon groups and pages for using fake accounts or for content that violates the social network's rules.

"We'll keep looking, you know, at other ways for making sure that we are addressing that content appropriately," Bickert said.

Facebook said that in the second quarter, it took action on 22.5 million pieces of content for violating its rules against hate speech, up from 9.6 million pieces in the first quarter. Facebook attributed the jump to the use of automated technology, which helped the company proactively detect hate speech. The proactive detection rate for hate speech on Facebook increased from 89% to 95% from the first quarter to the second, the company said.

The proactive detection rate for hate speech on Instagram rose from 45% to 84% during that same period, Facebook said. Instagram took action against 808,900 pieces of content for violating its hate speech rules in the first quarter, and that number jumped to 3.3 million in the second quarter.

Facebook also took action in the second quarter on 8.7 million pieces of content for violating its rules against promoting terrorism, up from 6.3 million in the first quarter. 

The company said independent auditors will review the metrics Facebook uses to enforce community standards, and it hopes the audit will be conducted in 2021.

If you're struggling with negative thoughts, self harm or suicidal feelings, here are 13 suicide and crisis intervention hotlines you can use to get help.

You can also call these numbers:

US: The National Suicide Prevention Lifeline can be reached at 1-800-273-8255. 
UK: The Samaritans can be reached at 116 123. 
AU: Lifeline can be reached at 13 11 14.