We are so incredibly proud of our <A+> Alliance’s f<a+i>r Feminist AI Research Network’s second cohort of projects, which now includes papers and a series of prototypes emerging from applied research funded by the IDRC.
No one has mapped how AI and Automated/Algorithmic Decision-Making (ADM) systems function in developing countries, or how AI and ADMs can be harnessed to deliver equality outcomes and new opportunities rather than inhibiting rights and amplifying unequal power relations.
Join us in our search for new, just, effective, and intersectional feminist models that use this urgent moment of movement to the digital realm to re-conceive and transform social services and policy so that they are inclusive and fit for 21st-century governance and an improved quality of life.
We aim to springboard from well-established descriptions of ‘what’ harms algorithms and machine learning can cause to an urgent focus on ‘how’ to course-correct future harms: new data, algorithms, models, policies and systems can be researched and piloted for gender-transformative change.
f<a+i>r’s aim is to support the skill and imagination of Global South/Majority World feminists in producing effective, innovative, interdisciplinary models that harness emerging technologies to correct for real-life bias and the barriers to women’s rights, representation and equality.
Feminisms in Artificial Intelligence: Automation Tools Towards A Feminist Judiciary Reform in Argentina and Mexico
The lack of transparency in the judicial treatment of gender-based violence (GBV) against women and LGBTIQ+ people in Latin America results in low reporting levels, mistrust of the justice system, and thus reduced access to justice. To address this pressing issue before GBV cases become feminicides, AymurAI, now in prototype with Buenos Aires Criminal Court 10, will open the data from legal rulings as a step towards feminist judiciary reform. The project identifies the potential of artificial intelligence (AI) models to generate and maintain anonymised datasets for understanding GBV, supporting policy making, and further fuelling feminist collectives’ campaigns.
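AymurAI’s actual pipeline is not described in this announcement; the sketch below only illustrates the general idea of named-entity-based anonymisation of ruling text before it enters an open dataset. The spaCy Spanish model, the entity labels, and the example sentence are assumptions for illustration, not the project’s implementation.

```python
# Minimal sketch (not AymurAI's actual pipeline): anonymise personal
# references in a court ruling before publishing it in an open dataset.
# Assumes spaCy's small Spanish model is installed:
#   pip install spacy && python -m spacy download es_core_news_sm
import spacy

nlp = spacy.load("es_core_news_sm")

# Entity labels treated as identifying; a real system would be far more
# careful (addresses, case numbers, dates of birth, etc.).
SENSITIVE_LABELS = {"PER", "LOC", "ORG"}

def anonymise_ruling(text: str) -> str:
    """Replace identifying entities with neutral placeholders."""
    doc = nlp(text)
    redacted = text
    # Replace from the end of the text so character offsets stay valid.
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        if ent.label_ in SENSITIVE_LABELS:
            placeholder = f"[{ent.label_}]"
            redacted = redacted[:ent.start_char] + placeholder + redacted[ent.end_char:]
    return redacted

if __name__ == "__main__":
    ejemplo = "María Pérez denunció los hechos ante el Juzgado Penal 10 de Buenos Aires."
    print(anonymise_ruling(ejemplo))
```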
Stage: Prototyping now in Buenos Aires
A Tool to Overcome Technical Barriers for Bias Assessment in Human Language Technologies
There are currently many tools and techniques to detect and mitigate bias in word embeddings, but they present significant barriers to people without technical skills. Most experts in bias, whether social scientists or people with deep knowledge of the contexts where bias is harmful, do not have those skills and so cannot engage in bias detection because of the technical barriers.
This tool is specifically aimed at lowering those technical barriers and providing the exploratory power needed by experts, scientists and anyone else who wants to audit these technologies.
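The paper’s own methods are not reproduced here; as a hedged illustration of the kind of measurement such a tool can hide behind a friendlier interface, one common technique scores words against a “gender direction” in the embedding space. The tiny hand-made vectors below stand in for real pretrained embeddings, which an actual audit would load instead.

```python
# Illustrative sketch only: scoring words against a "gender direction"
# in an embedding space. The toy 3-dimensional vectors below stand in
# for real pretrained embeddings (e.g. fastText vectors).
import numpy as np

vectors = {
    "el":        np.array([ 1.0, 0.1, 0.0]),
    "ella":      np.array([-1.0, 0.1, 0.0]),
    "ingeniera": np.array([-0.2, 0.9, 0.1]),
    "ingeniero": np.array([ 0.4, 0.9, 0.1]),
    "enfermera": np.array([-0.7, 0.2, 0.6]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A simple gender direction: the difference between two gendered pronouns.
gender_direction = vectors["el"] - vectors["ella"]

for word in ["ingeniera", "ingeniero", "enfermera"]:
    # Positive scores lean toward "el", negative toward "ella".
    score = cosine(vectors[word], gender_direction)
    print(f"{word}: {score:+.2f}")
```

A tool like the one described would wrap this kind of arithmetic in visual, interactive exploration so that domain experts can probe word lists without writing code.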
The paper has been selected (and awarded a scholarship) for presentation in December 2022 in Abu Dhabi at WiNLP (Widening Natural Language Processing), which promotes the ideas and voices of underrepresented groups in Natural Language Processing (NLP). The project will also be published in the prestigious Journal of ACL (Association for Computational Linguistics).
Stage: Paper to prototype
Towards a Feminist Framework for AI Development: From Principles to Practice
A practical approach, with a feminist perspective and located in Latin America, to the development of Artificial Intelligence (AI), asking whether it is possible to develop AI that does not reproduce logics of oppression.
Workshops will deepen the basic guide of questions from the initial paper and explore development practices with projects that are actively in development.
Stage: Paper to Workshopping phase
AI-(Em)powered Mobility of Women – Socio-cultural, Psychological, Personal, and Spatial Factors to Urban Transit Safety: Informing AI-Driven Filipino Women Safety Apps
Transport systems in Southeast Asian cities, particularly Metro Manila, have been repeatedly flagged as particularly dangerous and unsafe for women and girls. To address this issue, several AI-powered machine learning applications have been created and developed. These safety apps do not, however, tackle the underlying issue of perpetrators’ violence against women. Rather than empowering women to take full control of their mobility, they normalize violence and reinforce victim-blaming mentalities. There is thus a need for revised frameworks of thinking for future AI models, and for AI-driven safety apps based not on normalizing violence but on empowering Filipino women.
In Mexico, a legal framework guarantees indigenous language speakers access to interpretation in legal processes; however, the conditions for the full exercise of this right do not exist. Interpreters, who are key actors, face adverse conditions such as racism, gender discrimination and violence, as well as a lack of timely payment for their services from official bodies. These interpretation services are fundamental to achieving any justice in the courts for indigenous-language-speaking women.
Workshops with interpreters will be held, and a subsequent prototype outline of a systems workflow to support indigenous interpreters in the Mexican judicial system will be produced.
AI for Digital Gendered Violence: Development of a feminist chatbot and alert, monitoring and response system on gender-based digital violence in Chile
This project aims to develop two solutions based on AI applied to monitoring, response and systematization in situations of Digital Gendered Violence:
1. A feminist chatbot to receive complaints of digital gender violence, especially digital harassment on social networking platforms, and to deliver information and guidance on filing complaints and obtaining legal and psychological support from Chilean civil society organizations.
2. An automated algorithm that identifies hate attacks against women on the Twitter platform and raises alerts in order to surface the biases that exist on the platform. Once an alert is generated, an automated suggestion to access the feminist chatbot is issued (a minimal sketch of this flow follows the list).
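The project’s actual models and infrastructure are not specified in this description; the sketch below only illustrates the alert-then-suggest flow, with a keyword rule standing in for a trained hate-speech classifier and a placeholder chatbot address.

```python
# Rough sketch only: the keyword rule below is a stand-in for a trained
# hate-speech classifier, and the chatbot URL is a placeholder, not the
# project's real endpoint.
from dataclasses import dataclass
from typing import Optional

CHATBOT_URL = "https://example.org/chatbot-feminista"  # placeholder

# Stand-in word list; a real system would use a classifier trained on
# annotated Spanish-language tweets, not keyword matching.
FLAGGED_TERMS = {"cállate", "feminazi"}

@dataclass
class Alert:
    tweet_id: str
    text: str
    suggestion: str

def check_tweet(tweet_id: str, text: str) -> Optional[Alert]:
    """Return an Alert if the tweet looks like a hate attack."""
    lowered = text.lower()
    if any(term in lowered for term in FLAGGED_TERMS):
        return Alert(
            tweet_id=tweet_id,
            text=text,
            suggestion=f"Si sufriste violencia digital, podés pedir ayuda en {CHATBOT_URL}",
        )
    return None

if __name__ == "__main__":
    alert = check_tweet("123", "Cállate, feminazi")
    if alert:
        print(alert.suggestion)
```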
Mainstreaming Gender Perspective in AI Crowd Work in the Global South: Diagnostic, policy recommendations and smart tools for women’s empowerment
Concerns remain about the quality of work opportunities in data annotation and labeling. Such tasks are often performed by online crowd workers, and the broader impacts of such employment include lower wages, depersonalized work, and asymmetric power relations. Despite being promoted as an opportunity to create income and employment in regions where local economies are stagnant, there are not enough initiatives that address the impact of such work in the Global South through a gender lens, even though 1 in every 5 crowd workers in the region is a woman.
PIT Policy Lab will collaborate with the UNAM Civic Innovation Lab and Puentech Lab’s gender experts to design a smart tool that learns over time which microtasks and professional certificate recommendations are best for a specific female contributor, in order to increase her sense of self-efficacy, professional development, and well-being.
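The announcement does not specify how the tool will learn; as one hedged sketch of “learning over time which recommendations work best”, an epsilon-greedy bandit over task categories could use a contributor’s self-reported satisfaction as feedback. The categories and scores below are illustrative assumptions, not the project’s design.

```python
# Illustrative sketch only: an epsilon-greedy bandit that learns which
# microtask categories a contributor reports as most satisfying.
import random

CATEGORIES = ["image labeling", "audio transcription", "text moderation"]

class TaskRecommender:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = {c: 0 for c in CATEGORIES}
        self.mean_feedback = {c: 0.0 for c in CATEGORIES}

    def recommend(self) -> str:
        # Mostly exploit the best-known category, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(CATEGORIES)
        return max(CATEGORIES, key=lambda c: self.mean_feedback[c])

    def record_feedback(self, category: str, score: float) -> None:
        # score: the contributor's self-reported satisfaction, e.g. 0..5.
        self.counts[category] += 1
        n = self.counts[category]
        # Incremental update of the running mean feedback for the category.
        self.mean_feedback[category] += (score - self.mean_feedback[category]) / n

recommender = TaskRecommender()
task = recommender.recommend()
recommender.record_feedback(task, score=4.0)
```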
VR as learning assistant system for impaired women
The project discusses concepts of impairment and disability and creates an outline of a Virtual Reality simulation that can serve as a learning assistance system for impaired persons, implemented as a therapeutic tool to enhance learning ability and understanding of the world for living in society.
Design of Data Science Projects for Inclusive Data Policies