These watchdogs track secret online censorship across the globe

They measure what's being blocked or removed, and why.

Shortly after leaving Ethiopia's Bole Addis Ababa International Airport in a ride-hail car earlier this year, Moses Karanja faced an awkward situation: He couldn't pay his driver. While he was riding into town, the state-controlled telecom shuttered internet access, rendering the app useless. Neither Karanja nor the driver knew how much his trip should cost.

Karanja, a University of Toronto Ph.D. student, fished out some cash and came to an agreement with the driver. But the outage, which followed a series of assassinations in the country in June, prompted Karanja to look at how extensive and long-lasting the shutdown was. He suspected some services, like WhatsApp, remained down even after other parts of the web came back up several days after the killings.

This story is part of [REDACTED], CNET's look at censorship around the world. 

Karanja was right. Working with a project called the Open Observatory of Network Interference, which crowdsources internet connectivity data from around the world, he found that Facebook, Facebook Messenger and the web version of WhatsApp were blocked after the initial outage, making it difficult for many Ethiopians to communicate. The services were inaccessible in Ethiopia as recently as August.

OONI's data provides a record of internet accessibility in places around the world where authorities are unlikely to acknowledge they've blocked access, says Karanja, whose studies focus on the intersection of politics and the internet. "You are sure to have a clear snapshot of the internet at a specific point in time in a specific place," he said.

OONI is one of a handful of efforts to measure global online censorship, which isn't always as blatant as the shutdown Karanja witnessed in Ethiopia. Sometimes a government targets select websites, or requires disabling of videos or filtering of images from news feeds. It all adds up to censorship. OONI and similar projects document those attempts to control what citizens can say or see. 

Concerns about censorship are a global phenomenon, even in liberal democracies. India, the world's largest democracy, recently shut down the internet in Kashmir as the Hindu nationalist party that leads the country sought to impose more control over the Muslim-majority region.

Subtler forms of censorship, such as social media companies removing content or limiting its reach, raise the hackles of a diverse group of people, including YouTube performers, human rights activists and even President Donald Trump, who's among the conservatives who say the policies social media companies use to fight fake news unfairly affect right-wing media.

A series of assassinations of government officials in Ethiopia back in June led to a days-long internet blackout. 

Michael Tewelde/Getty Images

Researchers at OONI use a collection of network signals submitted by volunteers that mean little individually but can point to interference when combined. The signs can seem like random quirks of the internet: 404 error messages and odd pop-up windows. OONI's researchers, however, use their data to uncover the techniques behind censorship. This lets them map what's been made invisible.

Arturo Filasto, an OONI founder, says censorship means the content you can see online varies depending on where you are in the world. "There are many parallel internets," he says.

The challenge, particularly in authoritarian countries, is to measure and track what's being blocked or removed, and why.

Logging the patterns

With its open-source OONI Probe software, the OONI project covers more than 200 countries, including Egypt, Venezuela and Ukraine. Volunteers install the OONI Probe app on their phones, tablets and Mac or Linux computers (a beta version is currently available for all computers). The app periodically pings a preset list of websites and records what gets sent back in response, revealing which websites are blocked, throttled or redirected.
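
OONI Probe's real tests are more involved than a simple fetch, but the core idea can be sketched in a few lines: request each site on a preset list, note timeouts, connection failures and unexpected redirects, and log the results for later analysis. The snippet below is only an illustration of that idea, not OONI's code; it assumes Python's third-party `requests` library, and the URL list is a hypothetical example.

```python
# Minimal sketch of the probing idea described above -- NOT OONI's actual
# implementation. Assumes the third-party `requests` library is installed;
# the test URLs are hypothetical examples.
import json
import time
from urllib.parse import urlparse

import requests

TEST_URLS = [
    "https://www.facebook.com",
    "https://www.whatsapp.com",
    "https://example.org",
]

def probe(url, timeout=10):
    """Fetch a URL and record signals that could indicate interference."""
    record = {"url": url, "time": time.time()}
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
        record["status"] = resp.status_code
        record["final_host"] = urlparse(resp.url).netloc
        # A redirect that lands on a different host can be a sign of a block page.
        record["redirected_offsite"] = urlparse(url).netloc != record["final_host"]
    except requests.exceptions.Timeout:
        record["error"] = "timeout"  # possible throttling or blocking
    except requests.exceptions.ConnectionError as exc:
        record["error"] = f"connection failed: {exc}"  # possible blocking
    return record

if __name__ == "__main__":
    results = [probe(u) for u in TEST_URLS]
    print(json.dumps(results, indent=2))
```

A single measurement like this means little on its own; it's the aggregation of many such records from many volunteers, compared over time and across networks, that points to a pattern of interference.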

The data comes in handy when internet users start noticing weird patterns. In 2016, OONI researchers used data from volunteers to investigate reports of ongoing media censorship in Egypt. They found that users trying to reach websites run by NGOs, news organizations and even porn sites were often redirected to pop-up windows instead. Some of those pop-ups showed ads; others hijacked a device's processing power to mine cryptocurrency.

It was still happening in 2018, when attempts to reach websites, including those of the Palestinian Prisoner Society and the UN Human Rights Council, resulted in redirection.

Testing the filters

Online censorship isn't limited to blocked websites. Social media sites also filter content from news feeds and chats. In China, social media companies are liable to the government for the content that appears on their platforms and have signed a pledge to monitor their services for politically objectionable content, according to Human Rights Watch, an NGO. This leads to a system that strictly limits discussion of political topics.

Companies filter from users' chats and news feeds any images that could violate the government's standards. The standards aren't always transparent to users, and they change over time. Weibo, China's equivalent to Twitter, has twice tried to purge LGBTQ content from its platform, and twice backed down after unexpected community outrage. Some content might be filtered in the lead-up to major events and then allowed later.

Researchers at the Citizen Lab, a project of the Munk School of Global Affairs and Public Policy at the University of Toronto, wanted to learn how the filtering process works on WeChat, a Chinese messaging and social media app with more than 1 billion users. So they used WeChat accounts registered to Canadian phone numbers and sent messages to contacts with accounts registered to Chinese phone numbers. The contacts reported what they could and couldn't see on their end.

Images of Winnie the Pooh were purged from Chinese social media sites after Chinese leader Xi Jinping was likened to the cartoon bear. 

From left: Disney, Xinhua News Agency

The researchers found details of how WeChat automates image filtering, and saw that the company was updating its processes in response to current events. The filtering wasn't limited to the infamous "Tank Man" photos from the 1989 pro-democracy demonstrations at Tiananmen Square. It included photos of current news events, such as the arrest of Huawei CFO Meng Wanzhou, the US-China trade war and the 2018 US midterm elections.

That's in line with well-known purges, like the order to expunge Winnie the Pooh imagery after netizens compared the cartoon bear to Chinese leader Xi Jinping.

China's state capitalism model allows it to tune information in this way. Jeff Knockel, a postdoctoral fellow who led the Citizen Lab research, said China can require the social media companies within its own borders to filter images. Other countries would have to block the entire internet or specific websites to stop users from seeing certain content. 

"It allows the Chinese government to exert a finer level of control on these platforms," he said.

Tracking the takedowns

Image filtering happens in the US and other democracies too. Faced with criticisms over the spread of hate speech and violent content, Facebook, YouTube and Twitter are developing AI algorithms and hiring content moderators to cull what's shown on their platforms. But therein lies an unexpected dilemma. It's not always easy to tell whether a video containing violence should be banned for promoting terrorism or preserved as evidence of human rights violations. Advocacy groups have stepped in to bring attention to the problem and preserve information.

Witness, a human rights organization, trains global human rights activists to watch for takedowns of their videos. The disappearance of these activists' videos can remove the only evidence of incidents of police brutality, crackdowns on protesters and military strikes against civilians.

Projects such as the Syrian Archive track those takedowns in monthly reports. Started by Hadi al Khatib and Jeff Deutch in Berlin, the archive serves primarily as a central organization to store and vet videos. The team downloads videos of violence in the Syrian war posted to YouTube, which are sometimes later removed by the social media site's AI. The Syrian Archive then authenticates the videos and makes them available to human rights organizations.
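
The article doesn't detail the archive's tooling, but the preservation step it describes can be illustrated with a short sketch: fingerprint each saved video with a cryptographic hash and store it alongside a provenance record, so the copy can be verified later even if the original post disappears. This is not the Syrian Archive's actual pipeline; the paths and metadata fields below are hypothetical.

```python
# Rough illustration of an archiving step: hash a downloaded video and write
# a provenance record next to it. NOT the Syrian Archive's actual tooling;
# the file paths and metadata fields are hypothetical examples.
import hashlib
import json
import pathlib
import shutil
from datetime import datetime, timezone

def archive_video(video_path, source_url, archive_dir="archive"):
    """Copy a video into the archive under its SHA-256 hash, with a metadata record."""
    video = pathlib.Path(video_path)
    digest = hashlib.sha256(video.read_bytes()).hexdigest()

    out_dir = pathlib.Path(archive_dir)
    out_dir.mkdir(parents=True, exist_ok=True)

    # Storing the file under its hash makes duplicates easy to spot and lets
    # anyone re-hash the copy later to confirm it hasn't been altered.
    archived_copy = out_dir / f"{digest}{video.suffix}"
    shutil.copy2(video, archived_copy)

    record = {
        "sha256": digest,
        "original_name": video.name,
        "source_url": source_url,  # where the clip was originally posted
        "archived_at": datetime.now(timezone.utc).isoformat(),
    }
    (out_dir / f"{digest}.json").write_text(json.dumps(record, indent=2))
    return record

if __name__ == "__main__":
    # Hypothetical example: a clip already downloaded to the working directory.
    print(archive_video("clip.mp4", "https://example.org/original-post"))
```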

Videos of terrorist or wartime violence are often taken off social media platforms, but they can serve as vital documentation of human rights violations. Pictured is the aftermath of a car bombing in Syria. 

Picture Alliance

In 2017, the Syrian Archive found that YouTube took down about 180 channels containing hundreds of thousands of videos from Syria around the time the video service implemented new policies about removing violence and terrorist propaganda. One clip, for example, showed footage of destruction at four Syrian field hospitals as reporters described the attacks that littered the facilities with rubble. Deutch said his team helped prompt YouTube to restore most of the videos, but others were lost from the platform. 

There's value in keeping the videos accessible on social media platforms in addition to the Syrian Archive, Deutch said. Videos on YouTube or Twitter have more reach to make international groups aware of atrocities, and the UN Security Council cited video evidence from YouTube in a report about chemical weapons in Syria.

"The platforms themselves became these accidental archives," Deutch said.

Measuring reality

After the internet went down in Addis Ababa, Karanja, the Ph.D. student, immediately made plans to leave the country, as the outage made it impossible for him to sync up with his co-workers in other countries. So he flew to neighboring Kenya and worked from there. Still, the outage continued to affect him.

Karanja tried to call his Ethiopian contacts from Kenya using WhatsApp, but the service was unreliable. So he had to fall back on conventional cell service, which cost 100 times more than calling over WhatsApp, he said.

The hassle and expense bothered Karanja. But he figured he was lucky. The internet is crucial to daily life and business around the world, and many people in Africa's second most populous country couldn't use the apps they'd come to depend on.

"This is my story: monetary loss and inconvenience," Karanja said. "There are others who endured more."