What are the most current technical and design approaches to unbiasing algorithms? What methodologies and environments are needed to mitigate bias? What would algorithms or Automated Decision-Making systems that actively correct for bias (rather than only mitigate bias) look like?
As part of the Alliance initiative, we held an Algorithms 101 webinar on Affirmative Action Algorithms on 3 November, dealing with how fairness can be integrated into the development of software systems and processes that use Machine Learning, and where and how bias can creep in. How do we design algorithms to correct for bias, and what might they look like? Finally, how do we make this a reality?
Nyalleng Moroosi is a Software Engineer and Machine Learning Researcher with the Google AI team, where she works on topics related to inclusion and fairness in machine learning. Before joining Google, Nyalleng was a senior Data Science researcher at South Africa’s national science lab, the Council for Scientific and Industrial Research (CSIR). In her capacity at CSIR, she worked on projects ranging from rhino-poaching prevention to sentiment analysis of political parties on social media. She is an active member of the Black in Artificial Intelligence group and a founding member of the Deep Learning Indaba.
Elisa Celis is an assistant professor in Statistics & Data Science at Yale University and a co-founder of the Society and Computation Initiative at Yale. Her research focuses on problems that arise in the context of the Internet and its societal and economic implications, which she approaches using both experimental and theoretical techniques. Her work spans multiple areas, including social computing and crowdsourcing, data and network science, and mechanism and algorithm design, with an emphasis on fairness and diversity in artificial intelligence and machine learning. Celis holds a Ph.D. in Computer Science and Engineering and an M.Sc. in Mathematics, both from the University of Washington. She is a leader in the LatinX in AI and Women in Data Science communities, has received a JP Morgan Chase faculty award, and was selected as one of the "100 Brilliant Women in AI Ethics".
Paola is a self-taught systems programmer who, since 1998, has worked and played with all things “open” in governments, NGOs, and the private sector. She is currently the Head of Data Science and Engineering for the National Council for Science and Technology in the Government of Mexico. In October 2019 she was named by the BBC one of the 100 most influential women in the world. In November 2018, she was named an MIT Innovator Under 35 LATAM and Visionary of the Year 2018 for her work at the intersection of Data Science and Justice. She was also a 2016-2017 fellow and 2017-2018 affiliate at the Berkman Klein Center for Internet and Society at Harvard University, and a 2015 Ford/Mozilla Open Web Fellow working with the ACLU of Massachusetts. Her passion for open government and data, civic tech, and civil rights has fostered a curiosity to explore how and where technology, openness, and code can strengthen human rights.