06/29/2022

Our second cohort of Feminist AI papers have been selected!

Teams come from the Philippines, Thailand, Mexico, Chile, and Argentina.

The reflection and work on Feminist AI continues to grow. We are over the moon to have our Scientific Advisory Committee select the following teams and papers to be written over the next 6 months.

They are:

‘AI-(EM)POWERED MOBILITY OF WOMEN’ by Hazel Biana

Transport systems in Southeast Asian cities, particularly Metro Manila, have been widely described as dangerous and unsafe for women and girls. To address this issue, a number of machine learning applications powered by Artificial Intelligence (AI) have been developed.

These safety apps do not, however, tackle the underlying issue of perpetrators’ violence against women. Rather than empowering women to take full control of their mobility, these apps normalize violence and reinforce victim-blaming mentalities. Thus, there is a need for revised frameworks of thinking for future AI models, and for AI-driven safety apps that are based not on normalizing violence but on empowering Filipino women. These frameworks, however, should be examined empirically at individual and institutional levels, so that they may serve as accurate bases for AI development. In this study, Filipino women are consulted on urban transport planning through participatory methods. The unique demographics, circumstances, and locations of Filipino women in Southeast Asia, their corresponding transit systems and local communities, and their socio-cultural, personal, practical, and spatial factors are examined through surveys, focus group discussions, and ethnographies of urban public transportation. The data gathered will be used to inform an empowering AI-driven safety app concept for Filipino women for future development, which may eventually be extended to the larger Southeast Asian region.

‘VR AS LEARNING ASSISTANT SYSTEM FOR IMPAIRED WOMEN’ by Natika Krongyoot

In this article, we propose an outline of a Virtual Reality (VR) simulation that can serve as a learning assistance system for impaired persons. A VR game can be implemented as a therapeutic tool that helps impaired persons develop their occupational routines. From the perspective of both education and ethics in technology, we want to design a VR system that assists impaired persons’ learning behavior by creating games whose content and stories suit impaired persons’ experience, promoting the related capabilities of learning and of understanding the world in order to live in society. First, we will discuss the concepts of impairment and disability to frame our approach to understanding impaired and disabled persons. Second, we will discuss the VR systems currently used as tools for impaired persons, to identify their limitations when games are selected for impaired persons, especially impaired women. Third, we will discuss the possibility of VR as a tool for learning and understanding the world. VR is often seen as an unreal environment compared to physical reality; we want to argue that, from both ethical and epistemological perspectives, VR can be used to enhance the learning ability of impaired persons. In the last section, we will propose a possible model of a VR system suited to serve as a learning assistant, one that includes ethical design and promotes the learning capacity of impaired persons.

‘GENDER PERSPECTIVE IN AI CROWD WORK IN THE GLOBAL SOUTH’ by Luz Elena Gonzàlez

Concerns remain about the quality of work opportunities in data annotation and labeling. Such tasks are often performed by online crowd workers; among the broader impacts of such employment are lower wages, depersonalized work, and asymmetric power relations. Despite being promoted as an opportunity to create income and employment in regions where local economies are stagnant (Nickerson, 2014), there are not enough initiatives that address the impact of such work in the Global South through the lens of a gender perspective, considering that 1 in every 5 crowd workers in the region is a woman (Berg & Ram, 2021; Varanasi et al., 2022).

PIT Policy Lab will collaborate with the UNAM Civic Innovation Lab and gender experts from Puentech Lab to design a smart tool for crowd work platforms that will learn over time which microtasks and professional-certificate recommendations best suit a specific female contributor, increasing her sense of self-efficacy, professional development, and well-being.

‘CONVERSATIONAL AGENT TO SUPPORT WORTHY EXERCISE OF INTERPRETATION IN INDIGENOUS LANGUAGES IN THE LEGAL FIELD’ by Sofia Trejo

This project seeks to dignify the work of interpreters of indigenous languages in Mexico through a conversational agent. Currently, interpreters face deficiencies in the judicial system that hinder their professional work. The conversational agent seeks to support them on three levels: at the personal level, work experiences will be collected through interaction with the interpreters; at the community level, the agent will make the knowledge embedded in their work available; and at the global level, evidence will be collected on the problems faced by interpreters and by the legal interpretation of indigenous languages. This will pave the way for policies that improve their professional conditions.

‘AI FOR DIGITAL GENDER VIOLENCE: DEVELOPMENT OF A FEMINIST CHATBOT AND AN ALERT, MONITORING AND RESPONSE SYSTEM FOR GENDER-BASED DIGITAL VIOLENCE IN CHILE’ by Patricia Pena Miranda

This research proposal aims to identify good practices and experiences that exist internationally in order to develop two artificial-intelligence-based solutions for an online system for monitoring, alerting, and responding to situations of digital gendered violence. The first is a prototype of a feminist chatbot that receives complaints of digital gender violence, especially digital harassment on social networking platforms, and delivers information and guidance for filing complaints and obtaining legal and psychological support from Chilean civil society organizations that address these situations.

The second is an automated algorithm that identifies hate attacks against women on the Twitter platform and raises alerts in order to identify biases that exist on the platform. Once an alert is generated, an automated suggestion to access the feminist chatbot is issued. The combination of both solutions ultimately seeks to develop a prototype platform that provides a comprehensive response, systematizes cases of hate attacks and digital harassment on the social network, and contributes to public institutions and to women’s organizations and groups working on the prevention of, and alerts about, violence against women.

‘DESIGN OF DATA SCIENCE PROJECTS FOR INCLUSIVE PUBLIC POLICIES’ by Virginia Brussa

Our proposal aims to reformulate, based on feminist criteria, a model for formulating data science projects for public officials. The process will include two validation workshops (plus a workshop at Futuros Abrelatam) with officials, researchers, and activists from the region. The final publication will document the action-research itinerary carried out to reformulate the model with a feminist perspective. The analysis will include dimensions from data justice, intersectionality, design, and environmental justice, in order to promote critical discussion and to deepen question formulation, team building, and attention to the hybrid nature of the data. The latter is based on recommendations derived centrally from the Bern Data Compact (2021) within the framework of the 2030 Agenda and efforts to close existing data gaps.

The article will stand out for making visible primary inequities in the design of public policy projects. This reformulation will open the opportunity, in a second phase, to devise a protocol for the evaluation and civic monitoring of automated public policies. Experimentation will consist of two activities: a) a strategy to enhance the model with plain-language practices (drawn from public innovation policies) and visual thinking, in order to translate the complex process for other stakeholders and for people affected by automation; as background for this strategy we have the project on algorithms and justice (ILDA, 2020), and this would be an intermediate step on the way to the prototype itself; and b) prototyping of the project formulation and evaluation methodology, applied to real public-sector problems in alliance with public organizations and activists in the region. This will make visible another layer of inequities in the framework of public algorithms and validate, once again, the changes to the first-phase methodology.