LLMs are known to express harmful biases in their predictions. These biases stem largely from the training datasets, which are too large and expensive to check thoroughly. In this open studio, Francesca Lucchini will explore how we can turn this bias to our advantage, using LLMs to examine massive datasets and discover starting points for a data audit.
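The announcement only gestures at the technique, but a minimal sketch gives a flavor of what such a probe could look like. The example below (our own illustration, not the speaker's method) assumes the Hugging Face `transformers` fill-mask pipeline with `bert-base-uncased`; the template sentence and occupation list are hypothetical. It scores how strongly the model associates occupation words with gendered pronouns, and the most skewed words become candidate starting points for auditing the documents that mention them.

```python
# A minimal sketch (not the speaker's method): probe a masked language model
# for gendered associations with occupation words, then treat the most skewed
# words as starting points for a closer audit of the dataset.
# Assumes the `transformers` library; model, template, and word list are
# illustrative choices.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

occupations = ["nurse", "engineer", "teacher", "ceo", "cleaner"]

def gender_skew(occupation: str) -> float:
    """Return P(he) - P(she) for a simple template sentence."""
    scores = fill_mask(
        f"[MASK] works as a {occupation}.",
        targets=["he", "she"],  # restrict predictions to the two pronouns
    )
    by_token = {r["token_str"]: r["score"] for r in scores}
    return by_token.get("he", 0.0) - by_token.get("she", 0.0)

# Rank occupations by the magnitude of the model's lean in either direction;
# documents mentioning the top-ranked terms are candidates for manual review.
ranked = sorted(occupations, key=lambda o: abs(gender_skew(o)), reverse=True)
for occ in ranked:
    print(f"{occ:10s} skew = {gender_skew(occ):+.3f}")
```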
About the speaker: Francesca Lucchini Wortzman is a Tech Lead at CENIA, the National Artificial Intelligence Research Center. She holds a computer science degree and a master's degree from the Pontificia Universidad Católica de Chile and specializes in machine learning applications for urban analysis and computer vision. Francesca is passionate about gender equality and applied ethics in AI.
Join the Open Studio through the AI Equality Community on Circle!