
When AI Becomes a Weapon: Technology-Facilitated Gender-Based Violence in Africa

This case study is one of the first featured in our African AI & Equality Toolbox, a collaboration between the AI & Equality initiative and the African Centre for Technology Studies, building upon Women at the Table’s AI & Equality initiative methodology. The case study is based on research by Code for Africa, Women at the Table’s partner in the <A+> Alliance for Inclusive Algorithms.


In January 2025, Ethiopian Mayor Adanech Abiebie woke to find deepfake videos spreading across social media that showed her in fabricated intimate situations with political leaders. Within hours, these AI-generated lies had reached over 562,000 viewers, with 90% believing the false narrative that her political success stemmed from sexual relationships rather than competence.

Meanwhile, in Cameroon, President Paul Biya’s daughter Brenda faced a coordinated harassment campaign after publicly disclosing her sexual orientation. Ninety-two Facebook posts built from identical templates and designed to mock her identity and appearance reached 8.9 million people. These cases represent a sophisticated, continent-wide crisis of Technology-Facilitated Gender-Based Violence (tfGBV), documented by Code for Africa across eleven African countries, in which attackers weaponize AI systems and exploit algorithmic amplification to silence women and LGBTQ+ individuals.


When Algorithms Amplify Hatred

The most chilling aspect isn’t just the technology used to create these attacks—it’s how platforms’ own AI systems become unwitting accomplices. Research reveals that AI recommendation systems consistently amplify tfGBV content because emotional provocation generates high engagement that algorithms interpret as user satisfaction.

In Mayor Abiebie’s case, TikTok’s algorithm treated emotional responses to the deepfake video as signals to promote the content further. Users commenting with laughing emojis and sharing the fabricated material sent positive signals to the recommendation system. The AI interpreted coordinated harassment as user interest, promoting content to broader audiences.

This creates a vicious feedback loop where harmful content becomes self-amplifying. Analysis shows controversial content targeting women and LGBTQ+ individuals achieves 15-20% higher engagement rates than baseline content, leading to exponential reach multiplication without financial investment.
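To see why even a modest engagement edge compounds, consider a toy model of an engagement-optimizing recommender. The figures and the proportional-boost rule below are illustrative assumptions, not measured platform behaviour; only the 15-20% engagement gap comes from the research above.

```python
# Toy model: a recommender that grants extra distribution in proportion to engagement.
# All parameters are illustrative assumptions, not platform internals.

BASELINE_RATE = 0.05       # assumed share of viewers who react, comment, or share
HARASSMENT_RATE = 0.06     # ~20% higher, matching the engagement gap cited above
BOOST_PER_ENGAGEMENT = 25  # assumed additional impressions earned per engagement

def reach_after(rounds: int, engagement_rate: float, seed_views: int = 1_000) -> int:
    """Compound reach when each round's engagement buys the next round's distribution."""
    views = seed_views
    for _ in range(rounds):
        views += views * engagement_rate * BOOST_PER_ENGAGEMENT
    return round(views)

for rounds in (1, 3, 5):
    print(rounds, reach_after(rounds, BASELINE_RATE), reach_after(rounds, HARASSMENT_RATE))
```

Under these assumptions the harassing post reaches roughly 70% more people after five recommendation rounds than baseline content, without any paid promotion: that small per-round advantage is what the "exponential reach multiplication" above refers to.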


Industrial-Scale Digital Violence
Today’s tfGBV attacks represent sophisticated operations exploiting specific AI vulnerabilities:
  • Template-Based Coordination: In Brenda Biya’s case, 34 Facebook posts used identical copy-paste techniques, collectively achieving 8.05 million views while evading platform detection—despite being textbook coordinated inauthentic behavior.
  • Cultural Precision: Attackers leverage existing social biases, transforming cutting-edge technology into weapons for ancient prejudices. The deepfakes targeting Mayor Abiebie tapped into deeply rooted assumptions about women in leadership.
  • Evasion Technologies: Perpetrators systematically circumvent content moderation through “spamouflage” techniques (replacing letters with symbols), exploitation of cultural gaps in AI training data, and strategic timing for maximum algorithmic visibility (a minimal normalization sketch follows this list).
  • Cross-Platform Warfare: Campaigns span multiple platforms with sophisticated coordination. Between September 2024 and March 2025, approximately 50 TikTok videos continued Brenda Biya’s harassment across different social media trends.
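
The “spamouflage” substitutions above defeat naive keyword matching precisely because many filters skip a normalization step. The sketch below shows that missing step in minimal form; the substitution map and blocklist are illustrative assumptions, and the two slurs are simply those named in the research.

```python
import unicodedata

# Illustrative symbol/leetspeak substitutions used to dodge keyword filters;
# a real system would need a far broader, community-maintained map.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                               "5": "s", "@": "a", "$": "s", "!": "i"})

def normalize(text: str) -> str:
    """Fold accents, undo common symbol substitutions, and lowercase before matching."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    return text.lower().translate(SUBSTITUTIONS)

# Hypothetical blocklist; "woubi" and "lélé" are the slurs documented in the research.
BLOCKLIST = {"woubi", "lele"}

def flagged_terms(text: str) -> set[str]:
    """Return blocklisted terms present once spamouflage is stripped away."""
    return BLOCKLIST & set(normalize(text).split())

print(flagged_terms("c'est un w0ub1"))  # {'woubi'}; raw string matching would miss this
```

Even this trivial normalization catches the character substitutions such campaigns rely on; the harder gap is the cultural one, which no character map can close without locally sourced training data and blocklists.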
 


The Failures of Content Moderation
Current systems fail on multiple fronts:

  • Cultural Blindness: Content moderation trained on Western datasets misses African cultural contexts. Local slurs like “woubi” and “lélé”—French terms targeting LGBTQ+ individuals—consistently pass through automated systems.
  • Speed Problem: Code for Africa found 80% of tfGBV engagement occurs within 48 hours, creating narrow intervention windows. Current moderation systems respond too slowly, making them reactive rather than protective.
  • Coordination Detection Gaps: Platforms fail to identify obvious coordination patterns, as demonstrated by 34 identical posts evading detection despite clear synchronized timing and template sharing.


A Human Rights Crisis by Design
The violations documented in tfGBV campaigns begin with fundamental AI design decisions. Current platforms optimize for engagement metrics rather than community safety, creating structural vulnerabilities that make attacks profitable for platforms and effective for perpetrators. Development teams lack representation from affected communities and human rights specialists. Success metrics focus on user growth and engagement without systematic evaluation of impacts on vulnerable communities. Testing protocols fail to include harassment scenarios or community-defined harm assessment.


Rebuilding AI for Human Dignity
The research provides a roadmap for technical interventions:
  • Engagement Quality Assessment: Platforms must distinguish between positive engagement (learning, community building) and negative engagement (harassment, coordinated attacks) rather than optimizing purely for interaction quantity.
  • Real-Time Coordination Detection: Network analysis capabilities that identify template sharing, synchronized timing, and cross-platform coordination before viral amplification occurs (a minimal pairwise-detection sketch follows this list).
  • Cultural Context Integration: Systematic inclusion of African languages, cultural references, and local knowledge through participatory dataset development and community expert integration.
  • Community Governance: Moving beyond tokenistic consultation to include affected communities in technical architecture decisions, policy development, and evaluation criteria with transparent algorithmic decision-making.
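
As one concrete reading of “real-time coordination detection”, the sketch below flags pairs of posts whose text is near-identical and whose timestamps fall inside a tight window, the two signals (template sharing and synchronized timing) described above. The similarity threshold, time window, and data shapes are illustrative assumptions, not a production design.

```python
from datetime import datetime, timedelta
from difflib import SequenceMatcher
from itertools import combinations

# Illustrative thresholds, not values tuned against the documented campaigns.
SIMILARITY_THRESHOLD = 0.9        # near-identical text suggests a shared template
TIME_WINDOW = timedelta(hours=2)  # posts this close together suggest synchronization

def is_coordinated(post_a: tuple[str, datetime], post_b: tuple[str, datetime]) -> bool:
    """Flag a pair of (text, timestamp) posts sharing a template inside a tight window."""
    (text_a, time_a), (text_b, time_b) = post_a, post_b
    similar = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio() >= SIMILARITY_THRESHOLD
    synchronized = abs(time_a - time_b) <= TIME_WINDOW
    return similar and synchronized

def coordinated_pairs(posts: list[tuple[str, datetime]]) -> list[tuple[int, int]]:
    """Return index pairs of posts that look like parts of one coordinated campaign."""
    return [(i, j) for (i, a), (j, b) in combinations(enumerate(posts), 2)
            if is_coordinated(a, b)]

posts = [
    ("Template attack text about the target", datetime(2025, 1, 10, 9, 0)),
    ("Template attack text about the target!", datetime(2025, 1, 10, 9, 40)),
    ("Unrelated post about a football match", datetime(2025, 1, 10, 9, 15)),
]
print(coordinated_pairs(posts))  # [(0, 1)]
```

Pairwise comparison scales poorly, so in practice a step like this would run after hashing or embedding-based clustering, but it illustrates how 34 templated posts published within one window form an easily detectable pattern.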
 
 
The Path Forward

The transnational nature of tfGBV requires coordinated responses: legal frameworks addressing AI-enabled coordination, platform accountability standards for harassment prevention, and international cooperation mechanisms for cross-border campaigns.

The women political leaders, LGBTQ+ individuals, and marginalized communities targeted by these campaigns aren’t asking for special protection—they’re demanding equal access to digital spaces free from systematic harassment. Their experiences transform individual trauma into collective knowledge that can reshape how AI systems relate to human dignity.

The technical interventions required to address tfGBV—coordination detection, cultural context recognition, community participation mechanisms—would benefit all platform users, not just those targeted by harassment campaigns. Building AI systems that protect the most vulnerable creates more robust, democratic, and sustainable digital environments for everyone.

The technology exists to build better systems. The legal frameworks can be developed. The community knowledge is available. What remains is the political will to prioritize human dignity over engagement maximization and community safety over viral growth.

The choice facing AI development is stark: continue building systems that systematically amplify digital violence, or fundamentally restructure technical architectures to embed human rights principles throughout the development lifecycle. The women whose experiences are documented here are pioneers of a more democratic approach to AI development—their suffering demands nothing less than fundamental transformation.