
Facebook’s new Rosetta AI system helps detect hate speech

It should come in handy as the social network faces increased scrutiny over content moderation.

Richard Nieva Former senior reporter
Richard Nieva was a senior reporter for CNET News, focusing on Google and Yahoo. He previously worked for PandoDaily and Fortune Magazine, and his writing has appeared in The New York Times, on CNNMoney.com and on CJR.org.

Facebook CEO Mark Zuckerberg says artificial intelligence will play a big part in ridding the social network of hate speech.

James Martin/CNET

Facebook says it has a new weapon in fighting hate speech.

The social network on Tuesday announced a new artificial intelligence system, codenamed "Rosetta," that helps its computers read and understand the billions of images and videos posted every day. With the new system, Facebook could more easily detect which pieces of content violate its hate speech rules.

Computers normally use a method called optical character recognition, or OCR, to read the text in pictures or videos. But at Facebook's scale -- 2.2 billion people use the social network each month -- standard OCR has its shortcomings, so Facebook said it built a system that works at a much bigger scale.
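
For readers curious what that kind of text extraction looks like in practice, here's a minimal sketch using the open-source Tesseract OCR engine via the pytesseract Python library. This is not Rosetta -- Facebook's system pairs its own recognition models with language understanding at far larger scale -- and the keyword list and helper names below are hypothetical, for illustration only.

```python
# A minimal OCR sketch using the open-source Tesseract engine via pytesseract.
# Not Facebook's Rosetta system; the watched-terms list and flag_for_review
# helper are hypothetical illustrations, not a real moderation policy.
import pytesseract
from PIL import Image

FLAGGED_TERMS = {"example-slur", "example-threat"}  # placeholder terms

def extract_text(image_path: str) -> str:
    """Run OCR on a single image and return the recognized text."""
    return pytesseract.image_to_string(Image.open(image_path))

def flag_for_review(image_path: str) -> bool:
    """Flag an image if its extracted text contains any watched term."""
    text = extract_text(image_path).lower()
    return any(term in text for term in FLAGGED_TERMS)

if __name__ == "__main__":
    print(flag_for_review("meme.jpg"))
```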

The system, which is deployed on both Facebook and Instagram, can also be used to improve photo search and to surface content in the news feed. Rosetta works by extracting text in different languages from more than a billion images and video frames in real time.


That's sure to come in handy as Facebook deals with scrutiny over content on its social network. The company has been accused of helping spur violence in Myanmar, Sri Lanka and India. Last month, it said it was taking action to stop the "spread of hate" in Myanmar fueled by disinformation posted on the platform. The social network said it removed 18 accounts and 52 pages associated with the Myanmar military amid ongoing ethnic violence against Rohingya Muslims.

In July, Facebook said it would begin removing disinformation intended to spark or exacerbate violence, including both written posts and manipulated images. Previously, Facebook banned only content that directly called for violence; the new policy also covers fake news with the potential to provoke physical harm.

Last week, Facebook COO Sheryl Sandberg, along with Twitter CEO Jack Dorsey, faced a grilling from Congress over content moderation policies and security practices to keep users safe.

Facebook CEO Mark Zuckerberg has often said the company is looking to artificial intelligence to try to clean up Facebook, proactively detecting objectionable content instead of waiting for people to flag it. But while the company is developing the technology, he said the social network is hiring 20,000 human moderators to police the platform for harmful material. 

Separately, Facebook on Tuesday said it was adding 24 new languages to its set of automatic translation tools, bringing the total to more than 125 languages, including Hausa, Urdu and Nepali.
