
f<a+i>r Southeast Asia Hub

Introducing the Southeast Asian Hub

What is feminist AI? Why do we need it? How can a feminist approach bring about a fundamental change in socio-economic systems and human rights? How can we catalyze a global movement from the Global South that offers alternatives to achieve equity, social and environmental justice? And why is feminist AI also needed in many parts of the world, including Southeast Asia?
Feminist AI is AI that is inclusive and geared toward contributing to a more equal society. The term ‘feminist’ here does not merely denote the fight for gender equality, though that naturally figures prominently in our work; it also refers to the attempt to harness the potential of AI to reduce other forms of inequality, such as that between dominant and traditionally excluded groups in society. The notion of Feminist AI thus also encompasses the inequalities embedded in cultural diversity. The inequalities that we believe AI can help address cannot be considered apart from the systemic inequalities and disparities rooted in culture, and from the fact that each region of the world has its own distinctive traditions and values.
Systemic gender, racial, social, and linguistic biases, along with other intersecting forms of discrimination, are at the core of current artificial intelligence processes that emerge in the Global North and are then replicated in the Global South. It is urgent to combat and correct these prejudices and discriminations through analyses and proposals grounded in a feminist, decolonial, and situated perspective. It is also urgent to offer alternative visions that respond to the problems we face as a region in the Global South. We seek to broaden the understanding of what a feminist AI implies in all its processes and how this framework can positively transform the logics of algorithmic decision-making systems. We want innovative approaches to emerge that promote technological development in our region through critical reflection, methodological innovation, and experimentation.
To this end, we are promoting the Southeast Asia (SEA) Hub of the Feminist Network for Research in the social aspects of Artificial Intelligence, to contribute to the development of innovation and critical action-research capacities. The members of the SEA Hub will be women and men from different sectors, including academia, activism, and development organizations, who are contributing to the discussion at the regional level. The SEA Hub will meet regularly and will generate products and resources resulting from its research and dialogue.
Furthermore, the SEA Hub will support the launch of annual calls for projects that explore how to incubate feminist artificial intelligence. During the period from 2022 to 2024, annual calls will be made for articles, prototypes, and pilots that allow for theoretical innovation, as well as for building databases, models, systems, standards, or artificial intelligence devices under feminist values, in order to meet the needs of specific communities in the Southeast Asia region and to advance the agenda of equity, inclusion, and social and environmental justice.

Members of the working team

SORAJ HONGLADAROM

Soraj Hongladarom is Professor of Philosophy and Director of the Center for Science, Technology, and Society at Chulalongkorn University. His areas of research include applied ethics, philosophy of technology, and non-Western perspectives on the ethics of science and technology.

Supavadee Aramvith

Supavadee Aramvith is Associate Professor of Electrical Engineering and Head of the Multimedia Data Analytics and Processing Research Unit at Chulalongkorn University. Her areas of research include video signal processing, AI-based video analytics, and multimedia communication technology. She is very active internationally, holding leadership positions in international networks such as the JICA Project for AUN/SEED-Net and in professional organizations such as IEEE, IEICE, APSIPA, and ITU.

SIRAPRAPA CHAVANAYARN

Siraprapa Chavanayarn is an Associate Professor of Philosophy and a Member of the Center for Science, Technology, and Society at Chulalongkorn University. Her areas of research include epistemology, especially social epistemology, and virtue epistemology.

New Publication

The Hub has published a collection of papers that discuss the notion of feminist AI broadly construed. The volume contains six chapters in English and two in Thai. It is planned that the Thai papers will be translated and republished in English soon. Click here for the volume.
 

The 1st Network Meeting – SE Asia Hub
Incubating Feminist AI Project

Speakers: Proadpran Punyabukkana and Naruemon Prananwanich

October 11, 2021, 3.30 – 5.00pm, Thailand Time

The first network meeting of the Southeast Asian Hub of the Incubating Feminist AI Project was held on Monday, October 11, 2021, from 3.30 pm to 5 pm on Zoom. The main speakers were Proadpran Punyabukkana and Naruemon Prananwanich. Proadpran is Associate Professor of Computer Engineering at the Faculty of Engineering, Chulalongkorn University, and Naruemon is Assistant Professor of Computer Science at the Department of Mathematics and Computer Science, Faculty of Science, also at Chulalongkorn. The purpose of the meeting was to introduce the Feminist AI Project to the public in Southeast Asia and to discuss general issues concerning gender biases and other forms of gender inequality in AI. Around 15 people attended. Proadpran talked about her work, which included assistive technology for the elderly and other technologies designed to help people with disabilities, and about the number of women working in technical fields in the Global South. Naruemon, a former student of Proadpran’s, talked about the need for computer scientists to learn more about their social environment and about the need for AI to be free from the biases that come from the data fed into its algorithms. The meeting ended with an announcement of the call for papers for the Feminist AI Project and questions from the audience.

Key Issues discussed

  • Algorithmic bias
  • Women in STEM in the Global South
  • Need for computer scientists to learn more about their social environment and ethics

Key recommendations for action

  • More research needed on biases, especially gender bias in AI
  • Clear policy needed on the number of women working in STEM fields in the Global South
  • Education in the technical fields needs to include social awareness and ethics

The 2nd meeting: “Gender Bias in Natural Language Processing”

Speaker: Attapol Rutherford-Thamrongrattanarit

January 31, 2022

The second network meeting of the Southeast Asia Hub of the Incubating Feminist AI Project was held on January 31, 2022, from 4 to 5.30 pm Thailand time, also via Zoom. A larger audience attended than at the first meeting in October: more than 50 people registered for the event and around 35 actually attended. The meeting was led by Associate Professor Attapol Rutherford from the Department of Linguistics, Faculty of Arts, Chulalongkorn University. Attapol exemplifies a new generation of scholars who cross disciplinary boundaries: educated as a computer scientist, with a Ph.D. in computer science from Brandeis University, he now works as a linguist at the Faculty of Arts, a traditional bastion of humanistic studies in Thailand. The topic of his talk was “Gender Bias in AI.” He presented a very clear account of current research on gender bias in AI, giving examples from a wide variety of languages, such as Hungarian, English, Chinese, and Thai. The key idea of his talk was that in analyzing natural languages, AI algorithms, working on data obtained from real-life usage, tend to mirror the biases already present in the data themselves. His talk was very useful for those who would like to begin research in the field, and he pointed out work that needs to be done to combat the gender problem in AI. He suggested ways to ‘de-bias’ AI through a variety of means; basically, this involves constant input and monitoring of how AI does its job.

Key Issues discussed

  • Gender bias in AI and how to combat it
  • Biases that have been found in several languages
  • Literature review of previous works in the area

 

Key recommendations for action 
  • AI can, and should, be made more gender friendly through both technical and social means

The 3rd Network Meeting – SE Asia Hub
Incubating Feminist AI Project

May 2, 2022, 3.00 – 4.30 pm, Thailand Time


The 4th Network Meeting – SE Asia Hub

June 6, 2022, 3.30 – 5.00pm, Thailand Time

Background on the event

Our fourth Network Meeting took place on June 6, 2022, after a rescheduling. The talk was led by Jun-E Tan from Malaysia. Jun-E is a scholar and policy researcher who has worked on AI governance, especially in Southeast Asia, which was the topic of her talk at this Network Meeting. Dr. Tan opened by talking about what AI is and the security risks that the technology creates. She divided the risks into four categories: digital/physical, political, economic, and social. Examples of the first category include the potential for AI to cause physical harm or be used in attacks. Political risks include disinformation and surveillance; economic risks include the widening gap between the rich and the poor; and social risks include threats to privacy and human rights. These were only some of the risks that Dr. Tan mentioned. She then talked about how these risks could be mitigated through a system of governance, including rapid responses by governments and the adaptation of international norms, such as the GDPR, with some degree of localization. She also presented some of the challenges Southeast Asian governments face, such as the fact that the region's governments do not have a strong voice in the international arena, as well as existing and ingrained challenges such as a lack of technical expertise, authoritarian regimes, and weak institutional frameworks. After her talk there was a lively discussion among the audience, including on how governance could promote the use of AI in a way that creates a more gender-equal society.

Key Issues discussed

  • Physical, political, economic, and social risks of AI
  • Challenges facing Southeast Asian governments

Key recommendations for action

  • Anchor AI governance in its societal contexts
  • Build constitutionality around AI and data governance
  • Enable whole-of-society participation in AI governance


The 5th Network Meeting – SE Asia Hub
Incubating Feminist AI Project
August 22, 2022, 8 – 9.30pm, Thailand Time

Background on the event 

In this fifth session of the Network Meetings, the last one of the first year of the Project, Eleonore Fournier-Tombs and Matthew Dailey talked about “Gender-Sensitive AI Policy in Southeast Asia.” Fournier-Tombs is a global affairs researcher specializing in technology, gender, and international organizations. Matthew Dailey is a professor of computer science at the Asian Institute of Technology. Fournier-Tombs started by pointing out various risks that AI poses for women, such as loan apps lending more money to men than to women, job applications from women being downgraded by AI, and so on. She also talked about stereotyping through the use of language, as well as some of the socio-economic impacts this stereotyping and discrimination has caused. She then discussed the project that she and Matthew Dailey were undertaking, which examined the AI situation in four Southeast Asian countries: Malaysia, the Philippines, Thailand, and Indonesia. They found that all four countries had their own AI roadmap policies, but only Thailand had a fully functioning official AI ethics policy guideline. Toward the end of her talk she discussed how the Universal Declaration of Human Rights has been translated into working documents on AI policy, especially in the region. After Fournier-Tombs’ talk, Matthew Dailey discussed the projects he was working on with Fournier-Tombs and with his students. The latter applied AI technology to various uses in Thailand, such as facial recognition and regulating the unruly Thai urban traffic. There was a lively discussion among the audience at the end.

Key Issues discussed 
  • AI policies in Southeast Asian countries
  • How women are impacted by AI and what instruments exist to mitigate the impact
  • How four countries in Southeast Asia responded to the AI challenge

Key recommendations for action 

  • More study of how global mechanisms such as the Universal Declaration of Human Rights become operative in the field of AI policy

  • Research and development on gender-sensitive AI


Capacity Building Workshop

On May 20, 2022, the Southeast Asian Hub of the “Incubating Feminist AI Project” launched its first capacity building workshop, entitled “Feminist AI and AI Ethics,” at the Royal River Hotel in Bangkok. The workshop was part of a series of activities organized by the f<A+i>r network, a group of scholars and activists who have joined together to think about how AI can contribute to a more equal and inclusive society. The Project is supported by a grant from the International Development Research Centre (IDRC), Canada.

The event was attended by around twenty participants from various disciplines and backgrounds. The aim of the workshop was to equip participants with the basic vocabulary and conceptual tools for thinking about the roles that AI could play in engendering a more inclusive society.

The workshop was opened by Suradech Chotiudomphant, Dean of the Faculty of Arts, Chulalongkorn University. Dr. Jittat Fakcharoenphol and Dr. Supavadee Aramvith were also present at the Workshop: Jittat was the lead discussant and took a key role in the group discussion, and Supavadee is a member of the Southeast Asia Hub of the Project. Dr. Soraj Hongladarom, Director of the Center for Science, Technology, and Society, then presented a talk on “Why Do We Need to Talk about Feminist Issues in AI?” After a brief definition and history of AI, Soraj discussed the reasons why we need to consider feminist issues in AI, as well as other issues concerning social equality. Basically, gender equality is essential for the economic development of a nation: a nation where women and men are given the same opportunities and equal rights is more likely to create prosperity that benefits everyone, especially when compared with a society that does not give women equal rights and opportunities. There is also a moral reason: denying women their rights is wrong because inequality itself is morally wrong. He then discussed the various ways in which AI has actually been used, intentionally or not, in ways that violate women’s rights. For example, AI has been used to calculate the likelihood of repaying loans; if the dataset leads the algorithm to perceive women as less likely to repay, then the algorithm is biased against women, something that needs to be corrected. Toward the end, Soraj mentioned that the Incubating Feminist AI project was launching a call for expressions of interest, to which everyone was invited to submit. Details of the call can be found here.

Afterwards, the workshop proper began, with a lead talk and discussion by Dr. Jittat Fakcharoenphol from the Department of Computer Engineering, Faculty of Engineering, Kasetsart University. Jittat introduced the basic concepts of machine learning, the core of today’s AI, and then presented the group with three cases to discuss, all concerned with feminist issues in various applications of AI: in medicine, in facial recognition, and in loan and hiring algorithms. The participants divided themselves into three groups, each choosing a topic, and engaged in very active discussion. After about an hour, each group presented to the others what they had discussed and what their recommendations were. The participants showed a strong interest in the topics, and everyone was convinced that AI needs to become more socially aware and that more work needs to be done to see in detail what exactly socially aware AI will look like.

At the end of the meeting, Dr. Supavadee gave her reflections on the Workshop and a closing speech. The workshop was, to our knowledge, the first to engage with feminist topics in AI in the region. It is a credit to the IDRC and the Incubating Feminist AI project that a seed has been planted in Thailand and in Southeast Asia: an awareness that we must consider how AI can contribute to a more equal and more inclusive society, and how the traditionally unequal status of women, especially in this part of the world, can be redressed through this technology.

Network meetings

  • 1st meeting (11 October 2021) (Speakers: Proadpran Punyabukkana and Naruemon Prananwanich)
  • 2nd meeting (31 January 2022) (Speaker: Attapol Rutherford-Thamrongrattanarit)
  • 3rd meeting (2 May 2022) (Speakers: Hazel T. Biana and Rosallia Domingo)
  • 4th meeting (6 June 2022) (Speaker: Jun-E Tan)
  • 5th meeting (22 August 2022) (Speakers: Eleonore Fournier-Tombs and Matthew Dailey)

 

Capacity Building Workshop

  • Practical workshop on the basics of feminist AI – held on May 20, 2022, at the Royal River Hotel, Bangkok

 

Collection of ‘Think Pieces’ or ‘Essays’ on Feminist AI

  • 8 essays planned – publication scheduled for June 2022