
Algorithmic Censorship and How to Combat It on Social Media

On May 5, Indigenous activists raising awareness of Missing and Murdered Indigenous Women and Girls (MMIWG) found that their posts were being taken down on Instagram. This wasn’t an isolated occurrence. Palestinian solidarity movements protesting the forced evictions taking place in the Sheikh Jarrah neighbourhood in Jerusalem, Palestine, were facing the same ‘technical’ issue at the same time.

"The stories started disappearing, and archived ones went suddenly blank. Everything I shared about other topics remained the same. The explicit deletion of Palestinian content can’t be anything but institutionalized racism and censorship." - @itslunaz

In response to the growing outrage, Instagram tweeted that this was “a widespread global technical issue not related to any particular topic,” followed by an apology explaining that the platform “experienced a technical bug, which impacted millions of people’s stories, highlights and archives around the world.”

Creators, however, pointed out that the takedowns were selective: not all of their story posts were affected.

This is not the first time social media platforms have come under fire for erroneously censoring grassroots activists and racial minorities.

A large number of Black Lives Matter (BLM) activists were similarly angered when Facebook flagged their accounts while accounts spreading racism and hate speech against Black people remained untouched.

This raises the question: are these really technical glitches, or are social media platforms enforcing discriminatory and biased policies?

Whenever an activist’s post is wrongly removed, a few scenarios are possible.

  1. Platforms can deliberately take down activists’ posts and accounts, usually at the request of, or in coordination with, governments or other stakeholders.

  2. In some countries and occupied lands, such as Kashmir, Crimea, Western Sahara and Palestine, platforms censor activists and journalists, allegedly to “maintain their market access or to protect themselves from legal liabilities.”

  3. Platforms also remove posts flagged through user-reporting mechanisms as unlawful or in breach of the community standards developed by the platform. Content moderators first review each reported post to determine whether there was a violation, and in the case of serious or repeat infringements the user may be suspended or permanently banned (this pipeline is sketched below).

Given the sheer volume of reports received daily, there are not enough moderators to review each report adequately, let alone to weigh the linguistic subtleties and the context in which graphic or triggering content and imagery is shared for public awareness. So when user reporting is driven by partisanship and ideology, advocacy groups find their content suppressed and silenced.
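To make the report-review-strike flow in item 3 concrete, here is a minimal sketch of that pipeline. The strike threshold, account name and verdicts are hypothetical, chosen purely for illustration, and do not reflect any platform's actual rules:

```python
# Minimal sketch of the report -> review -> strike pipeline described
# above. The threshold, account name and verdicts are hypothetical.
from dataclasses import dataclass

STRIKES_BEFORE_SUSPENSION = 3  # hypothetical threshold


@dataclass
class Account:
    handle: str
    strikes: int = 0
    suspended: bool = False


def handle_report(account: Account, violation_found: bool) -> None:
    """Apply the outcome of a moderator's review of one reported post."""
    if not violation_found:
        return  # report dismissed: the post stays up, no penalty
    account.strikes += 1  # post removed and a strike recorded
    if account.strikes >= STRIKES_BEFORE_SUSPENSION:
        account.suspended = True  # repeat infringement -> suspension


user = Account("@example_activist")
for verdict in (True, True, True):  # three upheld reports in a row
    handle_report(user, verdict)
print(user)  # strikes=3, suspended=True
```

The weak link is `violation_found`: when reports arrive faster than reviewers can assess context, that judgment becomes rushed or automated, which is where coordinated, partisan reporting does its damage.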

This is where artificial intelligence (AI) comes into the picture, helping to identify and remove prohibited content. By utilizing natural language processing, AI programs can flag racist, violent and fraudulent content faster than humans can. Throughout the COVID-19 global pandemic, social media companies have relied on AI to stand in for the thousands of human moderators who were sent home. Users now have to contend with algorithms deciding what can and cannot be posted online, and often see their content misinterpreted as abusive when it is not.
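As a rough illustration of how such a text classifier works, here is a toy sketch using TF-IDF features and logistic regression. The training examples and the post being scored are invented, and this is not any platform's actual moderation model:

```python
# Toy sketch of an NLP moderation classifier: TF-IDF features plus
# logistic regression. Training data and the scored post are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = violates policy, 0 = allowed.
train_texts = [
    "we will attack them tonight",       # violent threat
    "send money to claim your prize",    # fraud
    "join us at the protest downtown",   # activism, allowed
    "sharing resources for legal aid",   # activism, allowed
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Score an unseen post; anything above a fixed threshold gets flagged.
post = "documenting the attack on our neighbourhood"
score = model.predict_proba([post])[0, 1]
print(f"violation score {score:.2f} for: {post!r}")
# The word "attack" raises the score even though this post is reporting
# violence, not threatening it: surface-level features lose the context
# a human moderator would catch.
```

A model like this sees word statistics, not intent, which is exactly how posts documenting violence get mistaken for posts inciting it.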

There’s a false belief that AI is less prone to bias and scales better, when in reality these systems are more prone to error and can impose bias on a colossal and systemic scale.

In 2019, researchers discovered that AI developed to identify hate speech was, instead, more likely to amplify racial bias.

“In one study, researchers found that tweets written in African American English commonly spoken by Black Americans are up to twice more likely to be flagged as offensive compared to others. Using a dataset of 155,800 tweets, another study found a similar widespread racial bias against Black speeches.” - TheConversation.com
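The disparity these studies report can be illustrated with a small audit sketch that compares false-positive flag rates between African American English (AAE) and Standard American English (SAE) posts. The data below is hypothetical, sized only to mirror the roughly two-to-one gap described above:

```python
# Sketch of a bias audit: compare how often benign posts are wrongly
# flagged across dialect groups. Groups, labels and counts are
# hypothetical, chosen to mirror the roughly 2x gap reported above.
from collections import defaultdict

# (dialect group, was_flagged) pairs for posts known to be benign.
audit = [
    ("AAE", True), ("AAE", True), ("AAE", False), ("AAE", False),
    ("SAE", True), ("SAE", False), ("SAE", False), ("SAE", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in audit:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

rates = {group: f / t for group, (f, t) in counts.items()}
for group, rate in rates.items():
    print(f"{group}: false-positive flag rate {rate:.0%}")

# A ratio above 1.0 means the classifier penalizes one dialect more.
print(f"AAE/SAE flag-rate ratio: {rates['AAE'] / rates['SAE']:.1f}x")
```

Audits of this shape are how researchers surface bias that is invisible in a classifier's overall accuracy: the model can look fine on average while systematically over-flagging one group.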

In 2020, Facebook deleted over 30 accounts of Syrian journalists and activists on terrorism grounds, when in reality they were campaigning against the very violence that stems from terrorism. MMIWG, BLM, Palestinian solidarity groups and the Syrian journalists have experienced firsthand the dynamic of “algorithms of oppression,” in which older oppressive social relations and new modes of discrimination are re-installed.

The reality on the ground is that algorithms are here to stay, so a strong commitment must be made to recognizing and rooting out algorithmic biases. This also means that including more people from diverse backgrounds in the process is paramount to mitigating bias. In the meantime, it is important to keep holding platforms accountable for providing as much transparency and public oversight as possible.

Below are some methods currently used by grassroots movements to combat algorithmic censorship:

  1. Increase your engagement with content and profiles by actively searching for activists' accounts and watching stories directly from their profiles rather than from your feed.

  2. Disengage from irrelevant content and spend more time on solidarity and resource posts. Engage with the stories you view (use reactions), and rewatch them if possible.

  3. Like, comment, save, reply, react, share and tag accounts when reposting their resources (even if it's a screenshot).

  4. If you notice your stories are being taken down or you're being blocked from using certain functions after sharing important content, report it to Instagram using the Help button in your app settings.

  5. Your outrage is a valuable currency. Spread the word, tag Instagram, and let them know the suppression of speech, especially when advocating for the causes you care about, is not acceptable.
