f&lt;a+i&gt;r Webinars

Global Technology-Facilitated Gender-Based Violence: Feminist AI across the globe

With Antonia Eser-Ruperti (UNESCO); Stephanie Mikkelson (UNFPA); Shohini Banerjee (Point of View); Padmini Ray Murray (Design Beku); Raya Sharbain (The Tor Project); Paloma Lara Castro (Derechos Digitales)

In the last installment of the series for 2023, our focus is on Global Technology-Facilitated Gender-Based Violence: Feminist AI across the globe, bringing together research and experts from the f&lt;a+i&gt;r network hubs in LAC, MENA, and SEA. Violence does not stay only online. Feminism teaches us that our bodies are not disposable. According to research from Plan International, almost 58% of women, girls, and LGBTQ people experience violence and harassment on social media, including journalists, activists, and human rights defenders. This webinar takes an intersectional approach and outlines the current landscape of TFGBV, with a critical analysis of cyber surveillance, including facial recognition in Palestine and Gaza, as well as in other corners of the world.

Drawing on evidence and research, the speakers open up possibilities for:
1) Rethinking the governance of digital platforms, and ways to ensure inclusive implementation of AI within support networks for survivors of TFGBV;
2) Addressing concerns around the UN Cybercrime Treaty and its lack of a gender perspective;
3) Recognizing that digital safety for women, girls, and LGBTQ communities demands robust systems of accountability;
4) Ensuring that automated AI interventions do not override the human touchpoint with responders when providing support to survivors.

With Stephanie Santos (Chulalongkorn University, Thailand); Wassim Maktabi (The Policy Initiative, Lebanon); Saiph Savage (Northeastern University, US & Universidad Nacional Autónoma de México)

A dynamic conversation with researchers on the outcomes of their research into gig work and labor economies. What conditions do gig workers, especially women, face as they attempt to generate income in different parts of the world? What conditions of exploitation underpin the nature of work for vulnerable laborers? Overlooking the risks of AI-facilitated solutions across sectors will continue to increase violence and discrimination against Black and brown people as well as women, and ultimately gig workers. Tune in for more insights on labor protection solutions and possible legislative initiatives.

With Jamila Venturini (Derechos Digitales)

At a recent special event, Jamila Venturini, the Executive Director at Derechos Digitales, delivered a riveting presentation on their trailblazing AI project. This initiative is not just any AI endeavor; it’s a feminist crusade to reshape the AI landscape with a Latin American soul. By weaving the rich tapestry of Latin American perspectives into the global conversation on AI ethics and human rights, Jamila and her team are challenging the status quo. They argue that laws aren’t enough to create change; we need human rights to be at the heart of AI development. The project’s ambitions are vast, yet precise: to create AI that doesn’t just replicate existing biases but actively combats them. It’s a call to arms for Latin American developers and data scientists who are not just coding but coding with a cause, striving for gender equality and rewriting the narrative of conventional AI development.


Key lessons from this endeavor underscore the power of diversity in teams, the magic that happens when communities truly collaborate, and the importance of designing systems that protect our autonomy and privacy. They champion open-source technology and believe in sharing knowledge to uplift everyone. But this isn’t just about technology. It’s about governance, redistributing power, and uplifting those who’ve been sidelined. The Latin American feminist movement isn’t just watching from the sidelines; they’re in the trenches, influencing the future of AI and tech infrastructure, despite the hurdles of unstable conditions and the relentless pressure of the market. Latin America is at a crossroads, facing a stark choice: rely on external tech giants or forge a new path where they control their digital destiny. This project doesn’t just aim to stir conversation; it seeks to inspire investment and government backing to support these crucial efforts at all levels of policy-making. It’s a visionary project that’s not just about AI but about shaping the very future of technology in a way that’s equitable, ethical, and empowering.

With Jamila Venturini (Derechos Digitales) and Paola Ricaurte (Tecnológico de Monterrey)

In the second part of this series, Jamila Venturini and Paola Ricaurte discuss the need to confront discrimination and structural issues related to AI and technology. They highlight the importance of documenting and supporting initiatives that may not conform to mainstream approaches. The goal is to bridge the gap in knowledge and protect people from abuses while fostering alternative forms of technology development. Jamila gives an example of how policymakers’ decisions can affect data protection and startups, and argues for mechanisms that encourage localized, community-connected technology development while respecting human rights. They advocate for a feminist AI future that supports initiatives aligned with the region’s values and interests, as opposed to conventional practices that extract knowledge and wealth from the region, and they credit initiatives like FAIR with promoting these principles and connecting like-minded groups.

with Hazel Biana (SafeHer), Ivana Feldfeber (AymurAI), Patricia Peña Miranda and Daniela Paz Moyano Davila (SOF+IA)

This illuminating discussion tackled technology-facilitated gender-based violence from a threefold point of view. First, the AymurAI project implements an open-source AI tool in criminal courts to collect and analyze anonymized court rulings. The long-term plan includes training court officials, expanding to different courts, and eventually conducting data analysis and visualization to gain insights into gender-based violence. The team showcases the tool’s features, including structured data collection and AI-based anonymization, which facilitate the publication of redacted court rulings (a minimal sketch of this kind of redaction step follows below). Second, the SafeHer tool is presented, which incorporates AI-powered SOS alerts, nearby women commuter functions, and reporting mechanisms to ensure women’s safety in transit.

The prototype underwent testing, demonstrating its ability to detect distress signals and share live locations. Potential risks, such as fraudulent submissions, were considered and addressed through encryption. Community stakeholders, including law enforcement and transit authorities, were involved in the alpha launch to gather feedback and establish collaborations. The ultimate goal is to empower Filipino women in transit through a collaborative and technologically advanced solution. Lastly, SOF+IA seeks to empower women users to navigate the internet without limiting their expression. The chatbot guides users on reporting incidents on social media platforms, focusing on Facebook, Instagram, and Twitter. Legal counseling is offered despite the absence of specific laws on digital gender violence in Chile. The prototype also aims to capture basic data to build a database of cases, given the lack of institutional protocols for handling such incidents in Latin America.
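For readers curious what an AI-based anonymization step like AymurAI’s might look like in practice, here is a minimal sketch of NER-based redaction using spaCy’s small pretrained Spanish pipeline. The model name, entity labels, and the redact_ruling helper are illustrative assumptions for this example, not AymurAI’s actual implementation.

```python
# A minimal sketch of NER-based redaction in the spirit of AymurAI's
# anonymization step. This is NOT the project's actual code: the model,
# entity labels, and helper below are assumptions for illustration.
import spacy

# Small pretrained Spanish pipeline (install with:
#   python -m spacy download es_core_news_sm)
nlp = spacy.load("es_core_news_sm")

def redact_ruling(text: str) -> str:
    """Replace person and location entities with placeholder tags."""
    doc = nlp(text)
    redacted = text
    # Walk entities right to left so character offsets stay valid
    # as we splice in the placeholders.
    for ent in reversed(doc.ents):
        if ent.label_ in {"PER", "LOC"}:
            redacted = redacted[:ent.start_char] + f"[{ent.label_}]" + redacted[ent.end_char:]
    return redacted

# Expected output, model permitting:
# "La denunciante [PER] declaró en [LOC]."
print(redact_ruling("La denunciante María Gómez declaró en Buenos Aires."))
```

A real redaction pipeline would of course still require human review before rulings are published, consistent with the project’s emphasis on training court officials.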

with Laura Alemany and Luciana Benotti (EDIA), Tatiana Telles and Luz Elena Gonzalez (La Independiente)

In this installment of the FAIR global webinars, the discussion revolves around two fascinating projects that aim to demonstrate new technological models leading to more gender-equal outcomes. The first project, conducted in collaboration with the Vía Libre Foundation, focuses on addressing biases in language technologies, particularly large language models like GPT. The team developed a tool called EDIA to lower technical barriers so that experts in discrimination can characterize biases without requiring programming or machine learning expertise. EDIA provides visual representations of word associations in the language model, allowing users to inspect and analyze biases, and offers information on word frequency and sources, empowering users to make informed assessments. The presentation emphasizes the importance of involving experts in discrimination to properly characterize biases and highlights the challenges of inspecting and addressing biases in language technologies.
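To make the idea of inspecting word associations concrete, here is a minimal sketch of the kind of probe a tool like EDIA surfaces through its visual interface: comparing how close a target word sits to two contrasting word lists in an embedding space. The pretrained model name and word lists are assumptions chosen for illustration, not EDIA’s actual configuration.

```python
# A minimal sketch of a word-association probe, in the spirit of what
# EDIA exposes visually. The embedding model and word lists below are
# illustrative assumptions, not EDIA's actual setup.
import numpy as np
import gensim.downloader as api

# Small pretrained English embeddings (downloads on first use).
vectors = api.load("glove-wiki-gigaword-50")

def association(target: str, set_a: list[str], set_b: list[str]) -> float:
    """Positive score: target leans toward set_a; negative: toward set_b."""
    def mean_sim(words: list[str]) -> float:
        return float(np.mean([vectors.similarity(target, w) for w in words]))
    return mean_sim(set_a) - mean_sim(set_b)

feminine = ["she", "woman", "her"]
masculine = ["he", "man", "his"]
for word in ["nurse", "engineer"]:
    print(word, round(association(word, feminine, masculine), 3))
```

EDIA’s contribution is precisely that domain experts can run this kind of comparison through a visual interface, without writing code like the above.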

with Laura Alemany and Luciana Benotti (EDIA), Tatiana Telles and Luz Elena Gonzalez (La Independiente)

The La Independiente project aims to support women workers in Latin America through the use of AI. The project has two main AI components: algorithms that connect women with similar career goals and experiences for mentoring, and generative AI that provides guidance on finding new opportunities. The project includes web plugins to assist workers with tasks such as enhancing self-presentation, negotiating, and setting reminders with employers. The team also developed an independent platform where women workers can connect, share experiences, and interact with a conversational agent that combines local worker knowledge with the capabilities of large language models to provide recommendations and guidance. Overall, the project aims to empower women workers by leveraging AI technologies to address their specific needs and challenges in the workplace.
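As a rough illustration of the mentoring-match idea, the sketch below pairs each worker with the profile most similar to hers. TF-IDF over short profile texts is a stand-in chosen for this example; the talk does not detail the project’s actual matching algorithm, and the profiles here are invented.

```python
# A rough sketch of similarity-based mentor matching, as one way the
# La Independiente matching idea could work. TF-IDF and the invented
# profiles are illustrative stand-ins, not the project's actual method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

profiles = {
    "worker_a": "freelance graphic designer seeking long-term clients",
    "worker_b": "illustrator and designer building a client portfolio",
    "worker_c": "domestic worker negotiating fair hourly rates",
}

names = list(profiles)
matrix = TfidfVectorizer().fit_transform(profiles.values())

# Pairwise cosine similarity between all profiles.
sims = cosine_similarity(matrix)
np.fill_diagonal(sims, -1.0)  # exclude self-matches

for i, name in enumerate(names):
    print(name, "->", names[int(np.argmax(sims[i]))])
    # e.g. worker_a -> worker_b (the two design profiles pair up)
```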

with Haemiwan Fathony and Muhammad Ryandaru Danisworo (Child marriage in Indonesia), Padmini Ray Murray and Shohini Banerjee (tfGBV), Marwa Soudi (AI Tutoring), Mayeli Sanchez Martinez (Community-based natural resources), Susana Cadena (Public Procurement) and Jocelyn Dunstan Escudero (NLP Work standards)

The conversation, featuring six different prototypes, focuses on applying feminist approaches to artificial intelligence to promote equitable outcomes, inclusivity, and innovative solutions for addressing social problems and historical inequities. The project is transitioning from research papers to prototypes and pilot programs, emphasizing proactive problem-solving and exploring how new AI technologies can concretely benefit society.

The applied research papers delve into the practical aspects of AI and Automated Decision-Making (ADM): data, algorithms, models, networks, policies, and systems. These efforts aim to positively impact various social issues, enhance quality of life, and rectify historical exclusions. The project is structured around three regional hubs and a global network, fostering engagement among researchers, academics, and practitioners. It seeks perspectives from both the Global South and the Global North to refine the definitions and directions for feminist AI. The collaborative effort involves exploring current and emerging questions related to AI through a feminist lens and developing a research agenda for the ongoing Feminist AI Research Network.

with Attapol Thamrongrattanarit-Rutherford (Chulalongkorn University)

Attapol Thamrongrattanarit-Rutherford is an Assistant Professor in the Department of Linguistics at the Faculty of Arts, Chulalongkorn University, in Bangkok, Thailand. He presented his work on Gender Bias in Natural Language Processing at the 2nd Southeast Asia regional meeting of the Feminist AI Research Network.

with Emily Denton (Google)

At the Global Launch of the Feminist AI Research Network on January 26, 2022, Emily Denton, Research Scientist at Google, answered the question: What do benchmark datasets mean for Feminist AI, and where do we go from here in our collective work? To answer this question, they presented their NeurIPS 2021 paper co-authored with Bernard Koch, Alex Hanna, and Jacob G. Foster, entitled Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research. They conversed with Raesetje Sefala of DAIR (whose talk on Constructing a Visual Dataset to Study the Effects of Spatial Apartheid in South Africa is posted below).

with Raesetje Sefala (Distributed AI Research Institute)

In this illuminating discussion at the Global Launch of the Feminist AI Research Network on January 26, 2022, Raesetje Sefala, Research Fellow at the Distributed AI Research Institute (DAIR), answered the question: What do benchmark datasets mean for Feminist AI, and where do we go from here in our collective work? To answer this question, she discussed her NeurIPS 2021 paper co-authored with Timnit Gebru, Luzango Mfupe, and Nyalleng Moorosi, entitled Constructing a Visual Dataset to Study the Effects of Spatial Apartheid in South Africa. The discussion outlines the challenges of creating datasets and labels for neighborhoods and buildings, emphasizing the need for accurate data. The project’s goal is to track the growth and changes in neighborhoods over time, enabling better resource allocation and urban planning.

Discover and subscribe to our YouTube channel to keep up with our webinars.