Read the work on Feminist Frameworks by Derechos Digitales, NLP Bias by Data Género, and Feminisms in the Judiciary in Argentina & Mexico from Via Libre.
We are incredibly excited to have received such a deep and interesting set of papers from our first cohort based in Latin America and the Caribbean.
Read them! They are:
‘TOWARDS A FEMINIST FRAMEWORK FOR AI DEVELOPMENT: FROM PRINCIPLES TO PRACTICE’ by Juliana Guerra, Derechos Digitales
This article offers a practical, feminist approach, situated in Latin America, to the development of Artificial Intelligence (AI). Is it possible to develop AI that does not reproduce logics of oppression? To answer this question, we focus on the power relations embedded in the field of AI and offer an interpretative analysis of the day-to-day experiences of seven women working in AI or data science in the region, in dialogue with various statements of feminist principles and guidelines for the development and deployment of digital technologies.
‘A TOOL TO OVERCOME TECHNICAL BARRIERS FOR BIAS ASSESSMENT’ by Laura Alonso Alemany, Luciana Benotti, Lucía González, Beatriz Busaniche, Alexia Halvorsen and Matías Bordone
Automatic processing of language is becoming pervasive in our lives, often taking a central role in our decision-making: choosing the wording of our messages and emails, translating what we read, or even holding full conversations with us. Word embeddings are a key component of modern natural language processing systems. They provide a representation of words that has boosted the performance of many applications, serving as a semblance of meaning. Word embeddings capture this semblance of meaning from raw text, but at the same time they also distill stereotypes and societal biases, which are subsequently relayed to the final applications. Such biases can be discriminatory.
It is very important to detect and mitigate these biases, to prevent discriminatory behaviour in automated processes, which can be far more harmful than human bias because of its scale. Many tools and techniques currently exist to detect and mitigate biases in word embeddings, but they present significant barriers to engagement for people without technical skills. Most experts in bias, whether social scientists or people with deep knowledge of the contexts where bias is harmful, do not have such skills, and the technical barriers prevent them from engaging in the process of bias detection.
We have studied the barriers in existing tools and explored their possibilities and limitations with different kinds of users. Based on this exploration, we propose to develop a tool specifically aimed at lowering the technical barriers and providing the exploratory power needed by experts, scientists, and members of the public who wish to audit these technologies.
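To give a flavour of the kind of bias assessment the abstract describes, here is a minimal sketch of how gender associations can be measured in word embeddings by comparing cosine similarities. The vectors below are tiny hypothetical toy values for illustration only (real audits use embeddings trained on large corpora, such as word2vec or GloVe, and more careful association tests); this is not the paper's tool or method.

```python
import numpy as np

# Toy 4-dimensional "embeddings" with hypothetical values, for illustration only.
# Real word embeddings are learned from large text corpora.
vectors = {
    "he":       np.array([ 0.9,  0.1,  0.0,  0.2]),
    "she":      np.array([-0.9,  0.1,  0.0,  0.2]),
    "engineer": np.array([ 0.6,  0.3,  0.1,  0.0]),
    "nurse":    np.array([-0.7,  0.2,  0.1,  0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word):
    """Positive means closer to 'he', negative means closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for w in ("engineer", "nurse"):
    print(f"{w}: {gender_association(w):+.3f}")
```

With these toy vectors, "engineer" scores positive (male-associated) and "nurse" scores negative (female-associated), illustrating the kind of stereotyped association that bias-auditing tools surface; lowering the barrier to running exactly this sort of probe is what the proposed tool is about.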
‘FEMINISMS IN ARTIFICIAL INTELLIGENCE: AUTOMATION TOOLS TOWARDS A FEMINIST JUDICIARY REFORM IN ARGENTINA AND MEXICO’ by Ivana Feldfeber, Yasmín Belén Quiroga, Clarissa Guevara
The lack of transparency in the judicial treatment of gender-based violence (GBV) against women and LGBTIQ+ people in Latin America results in low reporting rates, mistrust in the justice system, and thus reduced access to justice. To address this pressing issue before GBV cases become feminicides, we propose to open the data from legal rulings as a step towards a feminist judiciary reform. We identify the potential of artificial intelligence (AI) models to generate and maintain anonymised datasets for understanding GBV, supporting policy making, and further fueling feminist collectives’ campaigns. In this paper, we describe our plan to create AymurAI, a semi-automated prototype that will collaborate with criminal court officials in Argentina and Mexico. From an intersectional feminist, anti-solutionist stance, this project seeks to set a precedent for the feminist design, implementation, and deployment of AI technologies from the Global South.