Creating practical ways to incorporate a human rights-based approach in designing algorithms
Working with network members Professor Daniel Gatica-Perez, Caitlin Kraft-Buchman, and a programme officer from the Office of the High Commissioner for Human Rights, we are developing <AI & Equality>: A Human Rights Toolbox. This one-stop anti-bias and anti-discrimination website will house a stand-alone interactive learning tool, coding notebooks, and community engagement forums where university students and faculty can actively engage with data and human rights concepts, understand how the two are linked, and build algorithms that better reflect human rights values. My work also includes the project "Wikigender: a Machine Learning model to detect bias in Wikipedia's biographies", an exploration of gender linguistic bias as it appears in the overview sections of [Wikipedia biographies](https://wiki-gender.github.io/).
Computer science education currently focuses on the science alone, not on the broader implications coding has for society. Our ultimate objective is to help a generation of university students understand the scientist's unique potential for social impact in the real world, bridging science and human rights policy to foster systemic resilience and more equal, just, and robust democracies. The <AI & Equality> project will allow numerous universities to adopt multidisciplinary collaborations between faculties, and to consider formally incorporating a blended technical and human rights-based approach into the curricula of machine learning and data analysis courses. Computer and data science students will learn about human rights and how they apply directly to their work, gaining a deeper understanding of the link between their code and the real world. This will eventually lead to fairer and more transparent algorithms, which in turn will improve equality outcomes and the stability of democratic societies.