Team: Laura Alonso Alemany, Luciana Benotti, Beatriz Busaniche (Fundación Vía Libre, Argentina)
The E.D.I.A. project continues its journey as part of the F<A+i>r Feminist AI project cohort. Its goal is to lower the technical barriers to bias assessment in Natural Language Processing (NLP) and to address discrimination in Large Language Models (LLMs) and Word Embeddings (WE), since the language models and word representations used in machine learning workflows have been shown to contain discriminatory stereotypes.
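As a rough illustration of the kind of association such models can encode (this is not E.D.I.A.'s own code; the embedding model and the word lists below are illustrative assumptions), a few lines of Python using the gensim library can compare how strongly an off-the-shelf English embedding links occupation words to gendered pronouns:

```python
# Minimal sketch (not E.D.I.A.'s implementation): probing a word-embedding
# model for gender-occupation associations with cosine similarity.
import gensim.downloader as api

# Small public English embedding used only for illustration;
# E.D.I.A. itself targets Spanish-language models and resources.
vectors = api.load("glove-wiki-gigaword-50")

occupations = ["nurse", "engineer", "teacher", "scientist"]
for word in occupations:
    sim_she = vectors.similarity(word, "she")
    sim_he = vectors.similarity(word, "he")
    # A consistent gap between the two similarities hints at a gendered association.
    print(f"{word:>10}  she={sim_she:.3f}  he={sim_he:.3f}  gap={sim_she - sim_he:+.3f}")
```

E.D.I.A. packages this kind of exploration behind an interface, so that people with lived experience of discrimination, rather than programming skills, can decide which associations to probe.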
E.D.I.A. is a toolkit that allows people without technical expertise, but with lived experience of discrimination, to explore, characterize and audit biases and stereotypes in language models. There should be no barriers to citizens participating in AI bias assessment!
Vía Libre, the team behind E.D.I.A., is on a quest to build an ecosystem for gathering community-built datasets that represent stereotypes in Argentina. It is time to build local resources! Vía Libre wants local communities to record their experiences of discrimination with E.D.I.A. Such datasets are the keystone for auditing language technologies and for detecting and characterizing discriminatory behaviors and hate speech, because they let users define the type of bias they wish to explore.
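To give a sense of how such a dataset entry could be used (a hypothetical sketch, not E.D.I.A.'s pipeline; the model name and template sentences are assumptions), one can probe a publicly available Spanish masked language model with a pair of templates that differ only in the group mentioned:

```python
# Hypothetical sketch: comparing a masked language model's completions for
# paired templates, the kind of entry a community-built stereotype dataset
# could contain. Not E.D.I.A.'s own code.
from transformers import pipeline

# Publicly available Spanish BERT; any masked LM could be substituted.
fill = pipeline("fill-mask", model="dccuchile/bert-base-spanish-wwm-cased")

# Paired templates differing only in the group mentioned.
pair = ["Las mujeres son [MASK].", "Los hombres son [MASK]."]

for sentence in pair:
    print(sentence)
    # The top completions reveal which words the model associates with each group.
    for pred in fill(sentence, top_k=5):
        print(f"  {pred['token_str']:<15} p={pred['score']:.3f}")
```

Locally collected stereotype pairs matter precisely here: templates written by Argentine communities capture associations that imported, English-centric benchmarks miss.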
During this newest pilot phase, E.D.I.A. will co-create and then publish structured content and teaching materials so that E.D.I.A.'s methods can be replicated for other languages and contexts.
The team is calling on organizations, companies and governments to adopt E.D.I.A.
>> Visit E.D.I.A. <<
Milestones: E.D.I.A. was recently selected as one of five projects funded through Mozilla's Data Futures Lab 2024 Infrastructure Fund, an award that recognizes and celebrates innovative and disruptive research and supports projects committed to the creation of trustworthy data economies and to open source.