Arabic-speaking women face growing online harms as platform moderation gaps persist

A new analysis argues that weak Arabic-language moderation systems on major digital platforms are leaving women across the SWANA region exposed to harassment, blackmail, and AI-generated abuse.

Women across the Arab world are facing online harassment and abuse that platforms often fail to detect or remove, according to an analysis published by Mahmoud Elmasry.

The article argues that the problem is not a lack of platform policies. Companies such as Meta Platforms, TikTok, YouTube, and X already prohibit non-consensual intimate imagery, coordinated harassment, and manipulated content. The issue, the article says, is that enforcement of those policies does not function equally across languages and regions.

The analysis points to several recent cases. During Iraq’s 2025 election campaign, a manipulated video falsely depicting politician Aliya Nasif circulated widely online after AI face-substitution tools were used to fabricate explicit content. Despite public denials and fact-checking efforts, the material remained accessible across platforms.

The article also revisits the case of Lebanese journalist Ghada Oueiss, whose private images were leaked and altered during an online harassment campaign linked to her reporting on the murder of Jamal Khashoggi. Research later found that abusive content continued circulating long after reports were submitted to platforms.

In Yemen, women targeted through blackmail campaigns reportedly struggled to access even basic reporting mechanisms on Facebook. According to the article, local civil society organisations ultimately intervened to secure content removal after platform systems proved inaccessible or ineffective.

A central issue identified in the analysis is language infrastructure. Arabic moderation systems are often trained primarily on Modern Standard Arabic, while most everyday online communication takes place in regional dialects. Researchers cited in the article argue that this mismatch contributes both to failures to detect harmful content and to the wrongful removal of legitimate speech.

The article also references previous reporting on internal moderation failures. Leaked documents from Meta reportedly showed high error rates in some Arabic-language moderation systems, while more recent studies found that harmful content targeting Arabic-speaking women continued to evade automated detection.

The analysis argues that these problems are linked to broader investment priorities. Moderation resources are concentrated in markets where regulatory scrutiny and advertiser pressure are highest, leaving Arabic-speaking users with weaker protections and slower responses.

Researchers and advocacy groups cited in the article call for greater investment in Arabic natural language processing, moderation teams familiar with regional context, and reporting systems that are accessible to non-English-speaking users.
