Cohort Three of the Incubating Feminist AI project!

We are thrilled with our third Cohort, which brings our first Pilot (developed from a paper and subsequent prototype), 5 prototypes being developed from papers, and an additional 7 papers (and counting) in the research and development pipeline.


PILOT: “AymurAI”

> Data Género, Argentina

AymurAI is our first pilot project, developed by Data Género in Argentina. It began as a paper, “Feminisms in Artificial Intelligence: Automation Tools Towards A Feminist Judiciary Reform in Argentina and Mexico”, and grew into a successful “Prototype for an open and gender-sensitive justice in Latin America – AymurAI”, created in collaboration with Buenos Aires Criminal Court 10. The pilot will expand to other Buenos Aires Criminal Courts, with a vision to open the data from legal rulings as a step towards feminist judiciary reform throughout Latin America and beyond.

PROTOTYPE: “Towards a Feminist Framework for AI Development: From Principles to Practice”

> Derechos Digitales, Chile

This is a practical approach, with a feminist perspective located in Latin America, focused on the development of Artificial Intelligence (AI). It asks whether it is possible to develop AI that does not reproduce logics of oppression. To answer this question, the team focuses on the power relations embedded in the field of AI and makes an interpretative analysis of the day-to-day experiences of seven women working in AI or data science in the region, in dialogue with various statements of feminist principles and guidelines for the development and deployment of digital technologies. The prototype turns the paper into a living document and a new methodology.

PROTOTYPE: “Tool to Overcome Technical Barriers for Bias Assessment in Human Language Technologies”

> Via Libre Argentina

This project deals with the automatic processing of language, which has become pervasive in our lives. Word embeddings are a key component of modern natural language processing systems, providing a representation of words. Word embeddings seem to capture a semblance of the meaning of words from raw text, but, at the same time, they also distill stereotypes and societal biases that are subsequently relayed to the final applications. Such biases can be discriminatory.

There are currently many tools and techniques to detect and mitigate biases in word embeddings, but they present many barriers to engagement for people without technical skills. Most experts in bias, whether social scientists or people with deep knowledge of the contexts where bias is harmful, do not have such technical skills and cannot engage in bias detection because of these barriers.

Via Libre developed a tool specifically aimed at lowering these technical barriers, giving bias experts, scientists, and people in general the exploratory power they need to audit these technologies.


PROTOTYPE: “SAFEHER”

> DeLaSalle University, Manila

This project was developed from a paper, “AI (em)powered Mobility”, focused on transport systems in Metro Manila, which have been decried as dangerous and unsafe for women. To address this, machine learning applications powered by Artificial Intelligence (AI) have been created to ensure women’s safety. These safety apps, however, do not tackle the underlying issue of perpetrators’ violence against women. Rather than empowering women to take full control of their mobility, these apps normalize violence and reinforce victim-blaming mentalities. To empower Filipino women, and to make sure these apps are what women need and want, frameworks for future models of AI-driven safety apps should be rethought. SAFEHER, in its prototype phase, is doing just that: it is being built and will be deployed as an MVP during the summer of 2023.

PROTOTYPE: “La Independiente: Gender Perspectives in AI Crowd Work”

> PIT Policy Lab, Mexico

Developed from their paper “Mainstreaming gender perspective in AI Crowd work in the Global South”. During the prototype phase, PIT will develop and deploy an MVP platform for women AI crowd workers based in LAC, providing community, access to certifications, and opportunities to organize. In addition, during the prototype phase they are holding a Crowdwork Platform Forum: Policy Perspectives from a Feminist Design, for discussions with crowd workers, platforms, and experts on what needs to change and what can be done to improve the quality of experience for AI crowd workers in LAC and throughout the globe.

PROTOTYPE: “Digital Gendered Violence in Chile”

> University of Chile, Chile

“Development of a system for reporting, monitoring and response-orientation based on a feminist chatbot prototype” used a feminist methodology of inquiry and community-based co-creation to identify and create a system for reporting Digital Gendered (or Technology-Facilitated Gender-Based) Violence from a variety of entry points, with different types of reporting handled through a conversational agent. Now in its prototype phase, and in collaboration with CENIA, Chile’s Artificial Intelligence Center, a robust application will be built that will facilitate reporting and empowerment, and gather data to inform policy and hasten change.

Cohort 3 papers

And our amazing papers commissioned for Cohort 3, forthcoming from:

PAPER: University of Chile / Chile
> Feminist NLP: An Annotated Corpus to Evaluate Sex Differences in Work Related Diseases in Chile

PAPER: Tecnicas Rudas / Mexico
> Community Perspectives of AI in Natural Resource Governance  

PAPER: DataLat / Ecuador
> Analyzing Public Procurement Anomalies in Ecuador with a Gender Perspective  

PAPER: Coding Rights / Brazil
> Compost engineers and their slow knowledge 

PAPER: IdeasGym / Egypt
> Explainable AI-Based Tutoring System for Upper Egypt Community Schools 

PAPER: Point of View & Digital Futures Lab / India
> Reimagining Automated Violence Interventions through Participatory Technology Design Feminist AI 

PAPER: Expectation State / Jordan
> Exploring the potential for responsible Automated Decision Making to improve accuracy and correct for bias in financial credit scoring by Microfinance Institutions in Jordan.