Susan Leavy

Researching methods to mitigate bias in AI by uncovering bias in the training data used by machine learning algorithms.

About the work

This research on justice in artificial intelligence addresses discrimination in AI, combining insights from feminist theories to uncover the ways in which discriminatory attitudes and social injustices captured in the training data of machine learning algorithms can lead to bias in AI systems. The work focuses on language-based training datasets, developing a scalable framework to systematically evaluate large datasets, with the goal of preventing AI systems from learning bias and perpetuating it in society.
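
The page does not describe how the framework is implemented. Purely as an illustration of what a simple lexicon-based audit of a language dataset can look like, the minimal Python sketch below counts gendered terms in a small corpus; the term lists, example corpus, and proportion metric are assumptions chosen for the example, not details of the research described above.

```python
# Illustrative sketch only: a minimal lexicon-based check of gender
# representation in a text corpus. Term lists, corpus, and metric are
# hypothetical examples, not the framework described on this page.
import re
from collections import Counter

FEMALE_TERMS = {"she", "her", "hers", "woman", "women", "mrs", "ms"}
MALE_TERMS = {"he", "him", "his", "man", "men", "mr"}


def gender_term_counts(documents):
    """Count occurrences of gendered terms across a list of documents."""
    counts = Counter()
    for doc in documents:
        for token in re.findall(r"[a-z]+", doc.lower()):
            if token in FEMALE_TERMS:
                counts["female"] += 1
            elif token in MALE_TERMS:
                counts["male"] += 1
    return counts


if __name__ == "__main__":
    corpus = [
        "He was praised for his ambition.",
        "She was praised for her kindness.",
        "The men discussed business while the women served tea.",
    ]
    counts = gender_term_counts(corpus)
    total = sum(counts.values()) or 1  # avoid division by zero
    for label, n in counts.items():
        print(f"{label}: {n} ({n / total:.0%} of gendered terms)")
```

A real audit at the scale this research targets would go well beyond raw term counts, for example examining which attributes and actions co-occur with gendered terms across millions of documents, but the sketch conveys the basic idea of measuring how social groups are represented in training text.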

Impact

Bias in AI has been identified both as a fundamental threat to human rights and as a risk to trust in AI technology. This research aims to develop a framework that combines theories of social justice and feminist theories with techniques from artificial intelligence, enabling large-scale analysis of the datasets used by machine learning systems in order to prevent bias in AI.