Juliana works toward building tools and an ethical AI framework in Latin America. The general objective of this project is to develop, adapt, and make available tools for detecting, preventing, and mitigating biases in applications based on Natural Language Processing (NLP), tailored to specific Latin American needs. The tools developed within this project will allow developers and users to evaluate and detect biases with harmful social impacts in models and in data, contributing to the implementation of a Latin American ethics of AI.
Laura is a member of the Bias Diagnosis and Mitigation from Latin America team at Via Libre, Paper Cohort 2021.
We place special emphasis on mitigating biases related to gender, migrant populations, aporophobia, ableism, and the marginalization of historically disadvantaged communities in the Latin American context, which differs significantly from the settings of similar studies conducted in countries of the Global North.