Three tech design principles to help curb digital repression

By Evîn Cheikosman, Project Coordinator, Data Policy; Emily Ratté, Project Coordinator; and Marcus Burke, Research Analyst, Media, Entertainment and Sport at the World Economic Forum.

Most of us do not have full visibility into the potential injustices, conflicts and oppression or violence that occur in any given country. We depend on citizen activists and journalists to be our eyes and ears.

Over this past year, however, we have seen an unparalleled level of censorship of individuals, indigenous communities and vulnerable populations on social media and other platforms. Activists and journalists on these platforms are increasingly suppressed and silenced to the extent of near invisibility. At the same time, perpetrators of injustices roam unpunished, rewarded with clicks and likes. Compounding this is the reality that digital communication platforms run on algorithms programmed to maximize engagement, which tend to promote attention-grabbing inflammatory content. Platforms also utilize artificial intelligence to remove anything that violates the terms and conditions of community guidelines (such as hate speech) and ban inappropriate accounts.

However, these processes can disproportionately impact marginalized communities, which find their accounts and content – especially political speech, conflict documentation and dissent – unfairly targeted for removal. Further, since the start of the pandemic, we have seen a proliferation of social bots liking, sharing and commenting on posts, which can generate online hate as well as amplify existing hate speech and disinformation by facilitating their spread and emboldening individuals with extremist viewpoints.

To understand the scale of this problem, we need to know how effective platforms are at removing harmful content, as well as how often they remove content assumed to be harmful that in actuality is not. Facebook and Instagram provide some information on removed content they later restored, but we still have very little visibility into what percentage of content was removed by mistake. Furthermore, as of now, there exists no shared language on what constitutes “terrorism” or “organized hate”, the categories automated flagging systems use to validate removal of content.
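One way to make that missing metric concrete is to treat later restorations as a lower bound on mistaken removals. The short Python sketch below is purely illustrative: the function name and all figures in it are placeholders of our own, not data from any platform report.

```python
# Minimal sketch of an error-rate metric for content moderation, assuming the
# only observable signal of a mistaken removal is a later restoration.
# All figures are placeholders, not platform data.

def mistaken_removal_rate(total_removed: int, restored: int) -> float:
    """Lower-bound estimate of the removal error rate: restored / removed.

    This undercounts true errors, because a wrongly removed post is only
    counted if the platform later restores it (for example after an appeal).
    """
    if total_removed == 0:
        return 0.0
    return restored / total_removed


# Hypothetical example: 25.2 million removals, 100,000 of them later restored.
print(f"{mistaken_removal_rate(25_200_000, 100_000):.2%}")  # -> 0.40%
```

Publishing even this crude ratio, broken down by category such as “terrorism” or “hate speech”, would let outside observers see how often automated flagging gets it wrong.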

According to Google’s January 2021-June 2021 transparency report, 9,569,641 videos were removed for violating YouTube’s Community Guidelines. Of videos flagged, 95% were removed through automated flagging, of which 27.8% were removed before gaining any views. Facebook’s January 2021-March 2021 Community Standards Enforcement Report shows Facebook removed 9 million pieces of content actioned under its Dangerous Organizations: Terrorism and Organized Hate policy and 25.2 million pieces of content deemed hate speech. Instagram removed 429,000 pieces of terrorist content and 6.3 million pieces of hate speech content.
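As a rough back-of-the-envelope reading of the YouTube figures above (assuming, for illustration, that the percentages apply to the 9,569,641 removed videos; the report’s own breakdown may be defined slightly differently), the arithmetic works out as follows:

```python
# Back-of-the-envelope arithmetic on the quoted YouTube figures, assuming the
# percentages apply to the 9,569,641 removed videos.
total_removed = 9_569_641
auto_flagged = round(total_removed * 0.95)       # ~95% removed via automated flagging
no_view_removals = round(auto_flagged * 0.278)   # ~27.8% of those removed before any views

print(f"Removed via automated flagging: ~{auto_flagged:,}")        # ~9,091,159
print(f"Removed before gaining any views: ~{no_view_removals:,}")  # ~2,527,342
```

In other words, roughly nine million of the removals were driven by automated systems, and about two and a half million videos were taken down before any human ever saw them.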

The dilemma in these numbers is that they also include content shared by activists and journalists, removed on the false pretence of terrorism or hate speech.

We have seen how these issues can cause tangible physical, emotional and political harm to individuals, such as in the case of anti-Muslim hate speech and disinformation on Facebook and WhatsApp, which contributed to genocide in Myanmar. Ambiguity in platforms’ community guidelines and the lack of shared definitions pave the way for bias in automated flagging systems, digital repression by government authorities and censorship of activists instead of the true perpetrators.

Marginalized communities: Targets of content removal

Indisputably, digital communication platforms amplify social justice causes, particularly racial injustice. Among top hashtags used on Instagram related to racial injustice, #blacklivesmatter and #endpolicebrutality have helped to educate and create calls to action against state-sanctioned violence and anti-Black racism. While social media giants have expressed solidarity with anti-racist movements, their algorithms have a track record of disproportionately removing content raising awareness of these issues. Whether it’s artificial intelligence incorrectly flagging content or moderators’ inability to manage the sheer volume of inflammatory language, banning such content results in silencing historically marginalized voices.

Three ethical technology design principles

These three technology design principles can be implemented to reduce online harms and the physical harms that flow from them; stop the digital repression of activists and journalists; and hold certain actors and digital communications platforms accountable for censorship of documented injustices and human rights abuses.

Principle 1: Service provider responsibility

The burden of safety should never fall solely upon the end user. Service providers can take preventative steps to ensure that their services are less likely to facilitate or encourage illegal and inappropriate behaviours.

Principle 2: User empowerment and autonomy

The dignity of users is of central importance, and their best interests should be a primary consideration. Features, functionality and an inclusive design approach should secure user empowerment and autonomy as part of the in-service experience.

Principle 3: Transparency and accountability

Transparency and accountability are hallmarks of a robust approach to safety. They not only provide assurances that services are operating according to their published safety objectives, but also assist in educating and empowering users about steps they can take to address safety concerns.

These principles are based on the Safety by Design (SbD) principles from Australia’s eSafety Commissioner (eSafety), which were developed through extensive consultation with over 60 key stakeholder groups. They recognize the importance of proactively considering user safety during the development process, rather than retrofitting safety considerations after users have experienced online harm.

While digital injustices unfortunately exist in every society, organizations with a vested interest in developing, deploying and using ethical technology have a responsibility to make it difficult for people to perpetuate technology-enabled harms and abuses.

It is not enough to simply remove harmful content. Success rates for content removal mean very little if activists are completely censored online and marginalized offline, and if victims or survivors still have to deal with the social, reputational and psychological trauma caused by the digital abuser. We need transparency into the rate of error for mistaken content removals. We need the digital repression of vulnerable populations, activists and journalists to stop. And we need the perpetrators of digital injustices held to account.

*This article has been adapted from the World Economic Forum.

