News

Kicking off Cohort 4!

We are so excited to be moving to Cohort 4 of the Feminist AI Framework projects, which include papers, prototypes and pilots emanating from applied research funded by the IDRC. Since the call for participation in 2022, F<A+i>r has welcomed feminist technologists to innovate on positive systemic change through AI-for-good tools and community-responsive technologies. At the A+ Alliance, we remain committed to working with multidisciplinary teams across the majority world to set the tone on emerging and future technologies through feminist and responsible ethics. We cannot afford to live in a present and future driven by bias. We must put an end to all barriers to women’s rights. We want representation and equality.

Cohort 4 is made up of 4 successful papers that will be moving to the prototype stage, 4 prototypes transitioning to the pilot phase, and 2 new paper/research projects.

We shall continue the journey we began together! 

FROM PAPER TO PROTOTYPE >>
Explainable AI-Based Tutoring: Enhanced teacher-student interaction facilitated by AI to change outcomes through resource allocation to underprivileged schools
Team: Ideas Gym, Egypt

A human-centered approach to intelligent tutoring systems, focusing on the needs + preferences of students + teachers in Upper Egypt’s one-room community schools, populated predominantly by girls. Key considerations include aligning the system with the localised context, utilising GenAI for content creation in different languages, and emphasising the importance of human oversight. The proposed AI-based tutoring system is positioned as an assistant, enhancing teachers’ capabilities rather than replacing teacher-student interaction.

Indigenous Natural Resources Governance: Indigenous rights, water conservation & AI Exploring active participation of the Yaqui community in water resource management
Team: Técnicas Rudas, Mexico & Diversa Studio, Ecuador

The Water Governance project, conducted in collaboration with the Yaqui community of Vícam, unveiled the challenges inherent in water management in Mexico. The methodology, along with the amassed image, audio and satellite data and accompanying analyses, can now serve as a foundational resource for devising strategies and policies that advocate for sustainable water management while upholding the traditions and values of indigenous communities.

PILOT ++ 
AymurAI | Measuring Gender-Based Violence in Latin America To understand patterns that lead to feminicide
Team: Data Género / Argentina

AymurAI aims to understand GBV from a judicial perspective, as well as foster a more transparent, innovative and accountable judiciary. Once gathered, this data could be used to identify the patterns of violence that might ultimately lead to feminicide – and then to policy and potential remedies to hinder violence and the violent deaths of women and LGBTQ+ people. Currently, we have a working prototype that achieves high accuracy on criminal court rulings in Argentina. Every step of the development is open source, except for the training data; instead, we provide a synthetic data set that can be used to train a new model without exposing people’s personal information.

We have three main goals for 2024:

  1. Convince more Criminal Courts in Argentina to join our project.
  2. Conduct research through observation and interviews on how the tool is changing the behavior and relationship of the Criminal Court N°10 and N°14 with open data.
  3. Partner with other Spanish-speaking countries that could implement our AI tool.

PROTOTYPES NOT TO MISS! <<
La Independiente | Gender & AI Crowd work To help LAC gig workers connect, improve & organize online
Team: Pit Policy Lab, Mexico

To facilitate communication between Latin American and Caribbean crowd workers and help them identify relevant conversations, an intelligent conversational agent was developed to emulate the personality of Latin American heroines and assist users in searching for specific advice, articulating their interests, and navigating the platform. The conversational agent also recommends other crowd working women who might be valuable connections based on shared interests, expertise, and experiences.

FROM PROTOTYPE TO PILOT  >>
SafeHER | AI (Em)powered Mobility of Women To reclaim women’s mobility in urban transit | Team: De La Salle University, The Philippines

Focused on transport systems in Metro Manila, SafeHER aims to empower women, challenge societal norms, raise awareness, and potentially influence policy changes concerning women’s safety and community action. If AI interventions are to work, they must focus on empowering women to gain control over their safety. To ensure the safety app delivers what Filipino women need, the prototype features AI-powered functionalities, including SOS alerts, nearby female commuter identification, live location sharing, Medical ID, user verification and invitation, and scream detection.

SOF+IA | Technology facilitated Gender-Based Violence A report-a-bot | Team: Fundación Datos Protegidos & ODEGI- Observatorio de Estadísticas de Género e Interseccionalidades, Chile

SOF+IA is based on a conversational agent (‘chatbot’) built on feminist principles that consider ethical issues. It puts at the center of the prototype the needs and context of women exercising their rights to freedom of expression and opinion on social networking platforms – particularly women with a public voice, such as activists, academics, women involved in politics, and others who live through these situations daily. SOF+IA, named following a public consultation on social media, stands for “Sistema de Oída Feminista” in Spanish, which could be translated as ‘Feminist Hearing System’.

E.D.I.A. |  Democratising Technical Barriers for Bias Assessment in NLP To address discrimination in LLMs and Word embeddings | Team: Via Libre, Argentina

E.D.I.A. (Stereotypes and Discrimination in Artificial Intelligence) takes the tool built during the prototype stage and provides a methodology for social scientists and domain experts in Latin America to explore biases and discriminatory stereotypes inherent in word embeddings (WE) and language models (LM). The team is building an ecosystem to gather community-built datasets that represent stereotypes in Argentina. Such datasets are the keystone for auditing language technologies, and for detecting and characterising discriminatory behaviours and hate speech. The tool allows users to define the type of bias they wish to explore. Additionally, E.D.I.A. supports intersectional analysis by considering two binary dimensions, such as female-male intersected with fat-skinny.

< NEW RESEARCH  >

  • Thai NLP:  “Gender Inequalities in AI Chatbots: The gendered language set and dissolution of gender bias in language use in Thai’s service sectors” | Chulalongkorn University
  • Arabic NLP:  ‘Feminist Data and MENA Languages: Towards Building Feminist AI Tools’