News

In October, November & December, Women At The Table & <A+> Alliance will facilitate a series of webinars on important questions in AI:

  • Seventy-five-minute deep dives on the sidelines of multilateral fora, each dealing with a critical topic: data, governance, social protection, procurement, and norms
  • Conversations with 6 regional partners on what is happening in AI in Asia, Mexico, Central America & the Caribbean, Andean Region & Southern Cone, Brazil, Anglophone Africa, Francophone Africa
  • Culminating on 10 December, Human Rights Day.

We will investigate:

  • The explosion of new technologies and the data revolution, and what happens next
  • How a human rights framework could and should be used to ensure inclusion and gender equality in the algorithms we create
  • Why and where algorithms can be gender and race biased
  • Practical tools and solutions that can be used to mitigate and proactively correct for real-life gender, race, and class biases in AI
  • What a feminist vision for an inclusive digital future looks like

20 October. 16:00-17:15 CET / 10:00-11:15 EST

FEMINIST DATA

on the sidelines of the UN Data Forum

Where does traditional data collection go wrong? From a feminist point of view, what would inclusive data collection look like before, during, and after data is gathered? What methodology could create this? How can governments (and other actors) ensure and facilitate it?

with special guests:

  • Nuria Oliver, Commissioner for the President of the Valencian Region in Spain on AI and Data Science against COVID-19, Co-Founder and Vice-President of ELLIS, the European Laboratory for Learning and Intelligent Systems
  • Lauren Klein, co-author, Data Feminism
  • Emily Courey-Pryor, Executive Director, Data2X
  • Katie Clancy, Networked Economies, International Development Research Centre, Canada

3 November. 16:00-17:15 CET / 10:00-11:15 EST

AFFIRMATIVE ACTION ALGORITHMS

on the sidelines of the Internet Governance Forum

What are the most current technical and design approaches to unbiasing algorithms? What methodologies and environments are needed to mitigate bias? What would algorithms or Automated Decision-Making systems that actively correct for bias (rather than only mitigate bias) look like?

with special guests:

  • Nyalleng Moorosi, Google AI
  • Paola Villarreal, National Council of Science & Technology (CONACYT), Mexico
  • Elisa Celis, Yale University

17 November. 16:00-17:15 CET / 10:00-11:15 EST

AI, SOCIAL PROTECTION & PROCUREMENT

on the sidelines of the Business & Human Rights Forum

How can private and public sector AI be leveraged to ensure AI and ADM systems are unbiased? How can procurement be used to incentivise the inclusion and influence of feminists in the conception, budgeting, design, deployment, and monitoring of AI & ADM systems?

with special guests:

  • Carina Lopes, Head of Digital Future Society Think Tank
  • Helani Galpaya, CEO of LirneAsia

1 December. 16:00-17:15 CET / 10:00-11:15 EST

ALGO 101: THE FRONTLINE

Impacts of algorithms & automated decision-making in every corner of the globe

How is bias mitigated or amplified by the social protection or development aid assumptions underlying AI and Automated Decision-Making systems? What would an Automated Decision-Making social protection system look like if designed with a feminist perspective?

with special guests:

  • Jensine Larsen, World Pulse

and others to be announced.