Webinars

f<a+i>r: Feminist AI Research Network

With Antonia Eser-Ruperti (UNESCO); Stephanie Mikkelson (UNFPA); Shohini Banerjee (Point of View); Padmini Ray Murray (Design Beku); Raya Sharbain (The Tor Project); Paloma Lara Castro (Derechos Digitales)
In the last installment of the series for 2023, our focus is Global Technology-Facilitated Gender-Based Violence: Feminist AI across the Globe, bringing together researchers and experts from across the f<a+i>r network hubs in LAC, MENA, and SEA. Violence doesn’t stay only online. Feminism teaches us that our bodies are not disposable. According to research from Plan International, almost 58% of women, girls, and LGBTQ people, including journalists, activists, and human rights defenders, experience violence and harassment on social media. This webinar takes an intersectional approach and outlines the current landscape of #tfGBV, with a critical analysis of cyber surveillance, from facial recognition in Palestine and Gaza to other corners of the world. Through evidence-based facts and research, the speakers open up possibilities for:

1. Rethinking the governance of digital platforms, and ways to ensure inclusive implementation of AI within support networks for survivors of TFGBV;
2. Addressing concerns around the UN Cybercrime Treaty and its lack of a gender perspective;
3. Building the robust systems of accountability that digital safety for women, girls, and LGBTQ communities demands;
4. Ensuring that automated AI interventions do not override the human touchpoint with responders when providing support to survivors.

Watch the video here

The Future of Gig Work

With Stephanie Santos (Chulalongkorn University, Thailand); Wassim Maktabi (The Policy Initiative, Lebanon); Saiph Savage (Northeastern University, US & Universidad Nacional Autónoma de México)
A dynamic conversation with researchers on the outcomes of their research into gig work and labor economies. What are the conditions of gig workers, especially women, as they attempt to generate income in different parts of the world? Which conditions of exploitation underpin the nature of work for vulnerable laborers? Overlooking the risks of AI-facilitated solutions across sectors will continue to increase violence and discrimination against Black and brown bodies, women, and, ultimately, gig workers. Tune in for more insights on labor protection solutions and possible legislative initiatives.

Watch the video here

Feminist AI Frameworks: Towards a Feminist framework for developing AI from principles to practice (Part I)

with Jamila Venturini (Derechos Digitales)
At a recent special event, Jamila Venturini, Executive Director of Derechos Digitales, delivered a riveting presentation on their trailblazing AI project. This initiative is not just any AI endeavor; it’s a feminist crusade to reshape the AI landscape with a Latin American soul. By weaving the rich tapestry of Latin American perspectives into the global conversation on AI ethics and human rights, Jamila and her team are challenging the status quo. They argue that laws aren’t enough to create change; we need human rights to be at the heart of AI development. The project’s ambitions are vast, yet precise: to create AI that doesn’t just replicate existing biases but actively combats them. It’s a call to arms for Latin American developers and data scientists who are not just coding but coding with a cause, striving for gender equality and rewriting the narrative of conventional AI development.

Key lessons from this endeavor underscore the power of diversity in teams, the magic that happens when communities truly collaborate, and the importance of designing systems that protect our autonomy and privacy. They champion open-source technology and believe in sharing knowledge to uplift everyone. But this isn’t just about technology. It’s about governance, redistributing power, and uplifting those who’ve been sidelined. The Latin American feminist movement isn’t just watching from the sidelines; they’re in the trenches, influencing the future of AI and tech infrastructure, despite the hurdles of unstable conditions and the relentless pressure of the market. Latin America is at a crossroads, facing a stark choice: rely on external tech giants or forge a new path where they control their digital destiny. This project doesn’t just aim to stir conversation; it seeks to inspire investment and government backing to support these crucial efforts at all levels of policy-making. It’s a visionary project that’s not just about AI but about shaping the very future of technology in a way that’s equitable, ethical, and empowering.

Watch the video here 
with Jamila Venturini (Derechos Digitales) and Paola Ricaurte (Tecnológico de Monterrey)
In the second part of this series, Jamila Venturini and Paola Ricaurte discuss the need to address discrimination and structural issues related to AI and technology. They highlight the need to document and support initiatives that may not conform to mainstream approaches. The goal is to bridge the gap in knowledge and protect people from abuses while fostering alternative forms of technology development. Jamila provides an example of how policymakers' decisions can impact data protection and startups, and argues for mechanisms that encourage localized, community-connected technology development while respecting human rights. They advocate for a feminist AI future that supports initiatives aligned with the region's values and interests, as opposed to conventional practices that extract knowledge and wealth from the region, and they appreciate initiatives like FAIR for promoting these principles and connecting like-minded groups.
Watch the video here 
with Hazel Biana (SafeHer), Ivana Feldfeber (AymurAI), Patricia Peña Miranda and Daniela Paz Moyano Davila (SOF+IA)
This illuminating discussion tackled technology-facilitated gender-based violence from a threefold point of view. In the first place, the AymurAI project involves implementing an open-source AI tool in criminal courts to collect and analyze anonymized court rulings. The long-term plan includes training court officials, expanding to different courts, and eventually conducting data analysis and visualization to gain insights into gender-based violence. They showcase their tool’s features, including structured data collection and AI-based anonymization, to facilitate the publication of redacted court rulings (an illustrative sketch of this kind of anonymization follows below). Secondly, the SafeHer tool is presented, which incorporates AI-powered SOS alerts, nearby-women-commuter functions, and reporting mechanisms to ensure women’s safety in transit.

The prototype underwent testing, demonstrating its ability to detect distress signals and share live locations. Potential risks, such as fraudulent submissions, were considered and addressed through encryption. Community stakeholders, including law enforcement and transit authorities, were involved in the alpha launch to gather feedback and establish collaborations. The ultimate goal is to empower Filipino women in transit through a collaborative and technologically advanced solution. Lastly, SOF+IA seeks to empower women users to navigate the internet without limiting their expression. The chatbot guides users on reporting incidents on social media platforms, focusing on Facebook, Instagram, and Twitter. Legal counseling is offered despite the absence of specific laws on digital gender violence in Chile. The prototype also aims to capture basic data to create a database of cases due to the lack of institutional protocols for handling such incidents in Latin America.
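As a rough illustration of the anonymization step mentioned above, here is a minimal sketch of named-entity-based redaction. It is not AymurAI's actual pipeline: it assumes spaCy with the Spanish model es_core_news_sm is installed, and the sample sentence and placeholder labels are invented for the example.

```python
# Minimal sketch of NER-based anonymization, in the spirit of the AI-based
# redaction described above. NOT AymurAI's pipeline; assumes spaCy and the
# Spanish model "es_core_news_sm" are installed. The sample text is invented.
import spacy

nlp = spacy.load("es_core_news_sm")

def anonymize(text: str) -> str:
    """Replace person and location entities with placeholder labels."""
    doc = nlp(text)
    redacted = text
    # Replace from the end of the string so earlier character offsets stay valid.
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        if ent.label_ in {"PER", "LOC"}:
            redacted = redacted[:ent.start_char] + f"<{ent.label_}>" + redacted[ent.end_char:]
    return redacted

print(anonymize("La denunciante María Gómez declaró en Buenos Aires."))
# Expected output along the lines of: "La denunciante <PER> declaró en <LOC>."
```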

Watch the video here 
with Laura Alemany and Luciana Benotti (EDIA), Tatiana Telles and Luz Elena Gonzalez (La Independiente)
In this installment of the FAIR global webinars, the discussion revolves around two fascinating projects that aim to demonstrate new technological models that can lead to more gender-equal outcomes. The first project, conducted in collaboration with the Vía Libre Foundation, focuses on addressing biases in language technologies, particularly large language models like GPT. The team developed a tool called EDIA to lower technical barriers and allow experts in discrimination to characterize biases without requiring programming or machine learning expertise. EDIA provides visual representations of word associations in the language model, allowing users to inspect and analyze biases, and also offers information on word frequency and sources, empowering users to make informed assessments. The presentation emphasizes the importance of involving experts in discrimination to properly characterize biases and highlights the challenges of inspecting and addressing biases in language technologies.
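To make the idea of inspecting word associations concrete, here is a minimal sketch under stated assumptions. It is not the EDIA tool: the tiny hand-made vectors stand in for real word embeddings, the word lists are invented, and the score simply measures whether a target word sits closer to one set of gendered attribute words than to the other.

```python
import numpy as np

# Toy illustration, not EDIA: hand-made 3-d vectors standing in for real
# word embeddings, plus a simple association score between a target word
# and two sets of attribute words (here, gendered pronouns).
embeddings = {
    "nurse":    np.array([0.9, 0.1, 0.2]),
    "engineer": np.array([0.1, 0.9, 0.3]),
    "she":      np.array([0.8, 0.2, 0.1]),
    "he":       np.array([0.2, 0.8, 0.2]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(target, set_a, set_b):
    """Mean similarity to set_a minus mean similarity to set_b.
    Positive values mean the target word leans toward set_a."""
    a = np.mean([cosine(embeddings[target], embeddings[w]) for w in set_a])
    b = np.mean([cosine(embeddings[target], embeddings[w]) for w in set_b])
    return a - b

for word in ("nurse", "engineer"):
    print(word, round(association(word, ["she"], ["he"]), 3))
```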

Watch the video here.
with Laura Alemany and Luciana Benotti (EDIA), Tatiana Telles and Luz Elena Gonzalez (La Independiente)
The La Independiente project aims to support women workers in Latin America through the use of AI. Two main aspects of the project involve AI algorithms focused on connecting women with similar career goals and experiences for mentoring, as well as using generative AI to provide guidance on finding new opportunities. The project includes the creation of web plugins to assist workers in areas such as enhancing self-presentation, negotiating, and setting reminders with employers. The team also developed an independent platform where women workers can connect, share experiences, and interact with a conversational agent that combines local worker knowledge with the capabilities of large language models to provide recommendations and guidance. Overall, the project aims to empower women workers by leveraging AI technologies to address their specific needs and challenges in the workplace.
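As an illustrative sketch of the mentor-matching idea described above (not La Independiente's actual algorithm; the profile texts are invented and scikit-learn is assumed to be installed), potential mentors could be ranked by the similarity of self-described goals and experience:

```python
# Illustrative sketch only, not La Independiente's implementation: rank
# potential mentors by text similarity of self-described goals and experience.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

mentee = "freelance graphic designer seeking clients abroad and fair rates"
mentors = [
    "illustrator with ten years of international freelance clients",
    "data entry worker focused on local gig platforms",
    "designer who negotiated rates with overseas agencies",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([mentee] + mentors)

# Similarity of the mentee profile to each mentor profile, highest first.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for mentor, score in sorted(zip(mentors, scores), key=lambda pair: -pair[1]):
    print(round(float(score), 2), mentor)
```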

Watch the video here 
with Haemiwan Fathony and Muhammad Ryandaru Danisworo (Child marriage in Indonesia), Padmini Ray Murray and Shohini Banerjee (tfGBV), Marwa Soudi (AI Tutoring), Mayeli Sanchez Martinez (Community-based natural resource), Susana Cadena (Public Procurement) and Jocelyn Dunstan Escudero (NLP Work standards)
The conversation, featuring six different prototypes, focuses on applying feminist approaches to artificial intelligence to promote equality outcomes, inclusivity, and innovative solutions for addressing social problems and historical inequities. The project transitions from research papers to prototypes and pilot programs, emphasizing proactive problem-solving and exploring how new AI technologies can concretely benefit society.

The applied research papers delve into the practical aspects of AI and Automated Decision-Making (ADM): data, algorithms, models, networks, policies, and systems. These efforts aim to positively impact various social issues, enhance quality of life, and rectify historical exclusions. The project is structured around three regional hubs and a global network, fostering engagement among researchers, academics, and practitioners. It seeks perspectives from both the Global South and the Global North to refine the definitions and directions for feminist AI. The collaborative effort involves exploring current and emerging questions related to AI through a feminist lens and developing a research agenda for the ongoing Feminist AI Research Network.

Watch the video here
with Attapol Thamrongrattanarit-Rutherford (Chulalongkorn University)
Attapol Thamrongrattanarit-Rutherford is an Assistant Professor in the Department of Linguistics, Faculty of Arts, Chulalongkorn University in Bangkok, Thailand. He presented his work on Gender Bias in Natural Language Processing at the 2nd Southeast Asia regional meeting of the Feminist AI Research Network.

Watch the video here
with Emily Denton (Google)
At the Global Launch of the Feminist AI Research Network on January 26, 2022, Emily Denton, Research Scientist at Google, answered the question: What do benchmark datasets mean for Feminist AI, and where do we go from here in our collective work? To answer this question, they presented their NeurIPS 2021 paper co-authored with Bernard Koch, Alex Hanna, and Jacob G. Foster, entitled Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research. They conversed with Raesetje Sefala of DAIR (whose talk on Constructing a Visual Dataset to Study the Effects of Spatial Apartheid in South Africa is posted below).

Watch the video here
with Raesetje Sefala (Distributed AI Research Institute)
In this illuminating discussion at the Global Launch of the Feminist AI Research Network on January 26, 2022, Raesetje Sefala, Research Fellow at the Distributed AI Research Institute (DAIR), answered the question: What do benchmark datasets mean for Feminist AI, and where do we go from here in our collective work? To answer this question, she discussed her NeurIPS 2021 paper co-authored with Timnit Gebru, Luzango Mfupe, and Nyalleng Moorosi, entitled Constructing a Visual Dataset to Study the Effects of Spatial Apartheid in South Africa. The discussion outlines the challenges of creating datasets and labels for neighborhoods and buildings, emphasizing the need for accurate data. The project's goal is to track the growth and changes in neighborhoods over time, enabling better resource allocation and urban planning.

Watch the video here

Artificial Intelligence 4 Development (AI4D)
Knowledge Synthesis

With Gloriana Monko (Dodoma Lab, University of Dodoma); Winston Ojenge (African Center for Technology Studies | ACTS, Kenya); Eunice Akyereko Adjei (Responsible Artificial Intelligence Lab); Daisy Salifu (International Centre of Insect Physiology and Ecology | AI4AFS hub); Adekemi Omotubora (EDU AI); Wambui Gachiengo (Villgro Africa) | Moderator: Khanysa Mabyeka
Watch the webinar 
Putting the needs of the community first when designing AI tools is transformative. This panel brings together the outcomes of six interventions from the AI4D Gender and Inclusive AI Research and Innovation Challenge, which work at the nexus of gender and AI in education, intimate partner violence, agriculture, and healthcare on the African continent. In each of the projects, valuable methodological approaches have been deployed, such as context-based and project-based study, co-creation, and user-led inclusive design. The speakers explain ways to carry out feminist and gender-responsive interventions, at every step of the development process, to reach inclusive outcomes.
With Adedeji Adeniran (Gender Responsive AI Network & Centre for the Study of the Economies of Africa); Alice Amegah (World Bank Fellow); Gloriana Monko (Dodoma Lab, University of Dodoma) | Moderator: Mitchell Ondilli (W@TT Core Team)
Watch the webinar
This conversation, stirred and driven by three experts in education and economics, puts together a roadmap for inclusive interventions and advocacy plans that entail holistic learning environments, role models, and an emphasis on communities' critical role in creating broader participation opportunities. Most importantly, they highlight the importance of shifting the question of participation around to:
How can STEM be reimagined to fit the ambitions of girls to access new labor markets?

The opportunities for STEM in Africa are enormous. With a large and growing demographic, stronger efforts must be put in place to brush away the stereotype that young girls and women on the African continent are not interested in STEM or are not participating. When they are not participating, the trajectory is not linear: their exclusion is rooted in larger socio-economic contexts and social imaginaries.
With Marie-Katherine Waller (Gender At Work); Daisy Salifu (AI for AFS); Kemi Omotubora (EDUAI); Caesar Wisdom Favor (HASH Lab) 
How can we learn through feminist practice? Peer-to-peer learning is a method that challenges hierarchical modes of acquiring, producing, and disseminating knowledge. Often, gender training alone is not enough to witness transformational change. This conversation demonstrates the ways in which gender training, intersectionality, and inclusion need to be undertaken through methodologies that merge experience and theory. Four experiences and points of view on working through peer learning to develop effective and gender-responsive AI tools on the African continent.

Watch the video here 
With Caroline Mbaya (African Center for Technology Studies | ACTS, Kenya); Winston Ojenge (African Center for Technology Studies | ACTS, Kenya)
How can AI help women and men in the field of agriculture? Caroline Mbaya and Winston Ojenge from the African Center for Technology Studies (ACTS) in Kenya highlight the critical importance of rethinking the adequacy of technological tools developed for the farming industry and its diverse communities of users. For instance, linguistic diversity must be taken into consideration to avoid social exclusion or loss of access to essential information. Evidence-based research on the challenges that hinder the full participation of women and youth must be taken into account when developing AI solutions. This conversation raises an important question: how well do AI developers and entrepreneurs understand the challenges women and youth face around unpaid labor and care work? Caroline and Winston argue that there must be no double standards when developing farming solutions or when gathering the necessary data, since remote or second-hand data collection is often not fully reliable.

Watch the video here 
With Nneka Mobisson (mDoc, Nigeria); Rose Nakasi (AI and Data Science Lab, Makerere University); Sylvia Nabukenya (PhD Fellow in Bioethics, Makerere University); Wisdom Caesar Favor (Infectious Diseases Institute, Uganda)
The opportunity for digital transformation in healthcare through Artificial Intelligence on the African continent is enormous. However, there are several variables to be taken into consideration: How can one ensure that data sets are representative of the diversity of people, health conditions, and contexts? What are the risks and mitigation strategies? Who is responsible? And what forms of responsibility do individuals take upon themselves? This conversation brings together African public health researchers from various domains in Uganda and Nigeria, who map out the ways in which AI can respond to critical and urgent challenges such as the shortage of healthcare workers and dilapidated infrastructure.

Watch the video here
With Divine Fuh (University of Cape Town)
In his dynamic discourse, Dr. Fuh calls for a renaissance in AI development, one that embraces feminist and decolonial thinking, aligning cutting-edge technology with the rich, diverse tapestries of African wisdom and perspectives. He talks about "epistemic disobedience," urging us to consciously uncouple from the colonial legacies that have long skewed our understanding of knowledge itself.

Dr. Fuh's discussion is a vibrant tapestry of ideas, weaving together the threads of human dignity, the challenge to systemic inequality, and the quest for a world where discrimination has no place. It's a narrative that underscores the evolving journey of decolonial thought, a process that's not static but dynamic, inviting us to question the established norms and embrace a plurality of voices and perspectives. This is no mere academic debate; it's a clarion call for action. It's about reimagining the very essence of AI in an African context, breaking away from the vestiges of the past to forge a path toward a future where technology is not just innovative but also inclusive and just. It's about sustained efforts, a relentless drive to dismantle the biases that lurk within algorithms, and a commitment to a world where AI champions fairness and equity. Dr. Fuh's vision is clear: a future where AI serves all of humanity, unshackled from the chains of its colonial and patriarchal legacies.

Watch the video here
with Ernest Mwebaze (AI4D Gender Responsive AI Network) and Nyalleng Moorosi (The Distributed AI Research Institute)
In this scintillating discussion, Ernest Mwebaze and Nyalleng Moorosi outline an optimistic future for Africa's progress in addressing AI bias, data governance, and related challenges. Ernest emphasises the need for internal, community-based, community-led initiatives that respond to externalised pressures coming from large companies, AI institutions, universities, researchers, and industrial labs outside Africa.

There is a recognition that the allure of solving trendy problems brought by external influences can divert focus. The speaker expresses a perspective that values the ongoing internal examination of problems, even if they may seem relatively small in scale. The fear is that neglecting internal issues might lead to another situation where knowledge has to be imported, akin to a new educational revolution. The discussion highlights the gap in data protection laws across African countries and the need for policies with enforcement mechanisms. The tension between the global knowledge landscape and the need for local governance is explored, with a call for homegrown solutions.

Watch the video here
With Nyalleng Moorosi (DAIR), Kathleen Siminyu (Mozilla) and Nombuyiselo Zondi (University of Pretoria)
In this illuminating discussion, moderated by Nyalleng Moorosi in conversation with Kathleen Siminyu and Nombuyiselo ‘Mbuyi’ Zondi, the panelists bring into focus the unseen biases that occur when representation is not adequately considered. They expound on the need for an intersectional lens and deconstruct the notion that technologists alone hold the knowledge to develop models. Through an understanding of the diversity of languages and dialects, they propose interdisciplinary collaboration to develop the ecosystem underlying different algorithms.

Watch the video here

Discover and subscribe to our YouTube channel to keep up with our seminars