Caitlin Kraft-Buchman and Paola Ricaurte of the <A+> Alliance for Inclusive Algorithms discussed "AI and Human Rights Challenges" for the Open for Good Webinar Series
The human rights-related challenges of AI
Recognizing the need for and importance of domain-specific data repositories, the Open for Good Alliance has started a webinar series covering the domains of language technology and earth observation, as well as highlighting the work of the Angolan Open Data Portal project.
To round off the series, the Open for Good Alliance is organizing a two-part webinar focused on the human rights-related challenges of developing and deploying AI-based models and applications. The human rights-based approach complements the webinar series well: once data is found, determining whether that data is appropriate to use and how to achieve responsible AI are crucial steps for every AI practitioner, researcher and stakeholder.
Paola Ricaurte Quijano is LAC Regional Hub leader, an Associate Professor in the Department of Media and Digital Culture at Tecnológico de Monterrey, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. She was an Edmundo O'Gorman Fellow at the Institute of Latin American Studies at Columbia University (2018). She is co-founder (with Nick Couldry and Ulises Mejías) of Tierra Común, a network to promote reflection on data colonialism from the Global South. She is a member of the <A+> Alliance for Inclusive Algorithms and of the Feminist Artificial Intelligence Research Network, and was a member of the collective Enjambre Digital for the defense of digital rights in Mexico. Her work focuses on the critical study of digital technologies from a decolonial and feminist perspective.
Caitlin Kraft-Buchman is CEO/Founder of Women at the Table, a growing global CSO based in Geneva, Switzerland, focused on catalyzing new norms now. She is a co-founder and leader of the <A+> Alliance for Inclusive Algorithms, forging 21st-century systems change for a more inclusive, just and transformed world. The Alliance is a global coalition of technologists, activists and academics working for machine learning that does not embed an already biased system into our future: building inclusion into code, with race and gender equality at the core, so that no one is left behind.
Dr Dafna Feinholz-Klip is UNESCO’s Chief of Bioethics and Ethics of Science and Technology (Division of Youth, Ethics and Sports at the Social and Human Sciences Sector). A psychologist and bioethicist by training, she previously worked as a researcher in charge of a reproductive epidemiology department and was a member of the Mexican Research Council. She served as Director of the Women and Health Program in Mexico, Academic Coordinator of the Mexican National Commission of the Human Genome, and Executive Director of the Mexican National Commission of Bioethics, until she joined UNESCO in 2009. From 2000 to 2006, she was the Founder and Chair of the Latin American Forum for Ethics Committees for Health Research (FLACEIS), an organisation supported by the WHO. In her work for UNESCO, she helps set up and support national ethics committees and ethics committee training around the world.
Welcome by Open for Good Alliance - FAIR Forward
Introduction to the human-rights based approach to AI - Caitlin Kraft-Buchman
Exploring the issue of gender gaps in AI training datasets as a way to think critically about definitions of fairness - Paola Ricaurte Quijano
Outlook on global initiatives to tackle bias in AI - Dafna Feinholz-Klip
Interactive Q&A with participants
Have you encountered bias in the underlying datasets and applications/models in your work? Y/N
In your own experience, what type of AI biases have you confronted?
When it comes to the development and use of AI in your own organisation, do you think the human rights aspect, namely data gaps and potential bias in historical datasets, is recognized as an important issue and properly addressed?
Building on this, the second session will cater to the needs of data science practitioners by focusing on:
How a human rights-based approach can fit with AI and its creation
How we can achieve fairer machine learning models, blending a basic human rights workshop with tools in a Jupyter notebook to see directly and immediately how human rights principles can be applied and reasoned about analytically in code.
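The workshop notebook itself is not included here, but the kind of fairness check it describes can be sketched in a few lines of plain Python. The example below (all function names, predictions and group labels are hypothetical, not taken from the webinar materials) computes a demographic parity gap: the difference in favourable-outcome rates between two demographic groups, one common way to make a fairness definition concrete in code.

```python
# A minimal, illustrative fairness check: demographic parity difference.
# All data and names here are hypothetical examples, not webinar material.

def selection_rate(predictions, group_mask):
    """Fraction of positive (favourable) predictions within one group."""
    group = [p for p, g in zip(predictions, group_mask) if g]
    return sum(group) / len(group)

def demographic_parity_difference(predictions, group_mask):
    """Gap in favourable-outcome rates between the two groups.

    0.0 means both groups receive favourable predictions at the same
    rate; larger values indicate the model favours one group.
    """
    rate_a = selection_rate(predictions, group_mask)
    rate_b = selection_rate(predictions, [not g for g in group_mask])
    return abs(rate_a - rate_b)

# Hypothetical binary predictions (1 = favourable outcome) and a flag
# marking which of two groups each person belongs to.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
is_group_a = [True, True, True, True, False, False, False, False]

print(demographic_parity_difference(preds, is_group_a))  # prints 0.5
```

Here group A receives favourable outcomes 75% of the time versus 25% for group B, so the gap is 0.5; a practitioner would then ask whether that disparity reflects a data gap or historical bias of the kind discussed in the session.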