f<a+i>r Southeast Asia Hub
Soraj Hongladarom is Professor of Philosophy and Director of the Center for Science, Technology, and Society at Chulalongkorn University. His areas of research include applied ethics, philosophy of technology, and non-Western perspectives on the ethics of science and technology.
Supavadee Aramvith is Associate Professor of Electrical Engineering and Head of the Multimedia Data Analytics and Processing Research Unit, Chulalongkorn University. Her areas of research include video signal processing, AI-based video analytics, and multimedia communication technology. She is very active in the international arena, holding leadership positions in international networks such as the JICA Project for AUN/SEED-Net and in professional organizations such as IEEE, IEICE, APSIPA, and ITU.
Siraprapa Chavanayarn is an Associate Professor of Philosophy and a Member of the Center for Science, Technology, and Society at Chulalongkorn University. Her areas of research include epistemology, especially social epistemology, and virtue epistemology.
Speakers: Proadpran Punyabukkana and Naruemon Pratanwanich
October 11, 2021, 3.30 – 5.00pm, Thailand Time
The first network meeting of the Southeast Asia Hub of the Incubating Feminist AI Project was held on Monday, October 11, 2021, from 3.30 pm to 5.00 pm on Zoom. The main speakers were Proadpran Punyabukkana and Naruemon Pratanwanich. Proadpran is Associate Professor of Computer Engineering at the Faculty of Engineering, Chulalongkorn University, and Naruemon is Assistant Professor of Computer Science at the Department of Mathematics and Computer Science, Faculty of Science, also at Chulalongkorn. The purpose of the meeting was to introduce the Feminist AI Project to the public in Southeast Asia and to discuss general issues concerning gender biases and other forms of gender inequality in AI. Around 15 people attended. Proadpran and Naruemon, the latter a former student of Proadpran’s, talked about their work, which included assistive technology for the elderly and other technologies designed to help people with disabilities. Proadpran also talked about the number of women working in technical fields in the Global South. For her part, Naruemon talked about the need for computer scientists to learn more about their social environment and about the need for AI to be free from biases, which come from the data that are fed into the algorithms. The talk ended with an announcement of the call for papers for the Feminist AI Project and questions from the audience.
Key Issues discussed
Key recommendations for action
January 31, 2022, 4.00 – 5.30pm, Thailand Time
Background on the event
The second network meeting of the Southeast Asia Hub of the Incubating Feminist AI Project was held on January 31, 2022, from 4.00 to 5.30 pm Thailand time, also via Zoom. A larger audience attended than at the first meeting in October: more than 50 people registered for the event and around 35 actually attended. The meeting was led by Associate Professor Attapol Rutherford from the Department of Linguistics, Faculty of Arts, Chulalongkorn University. Attapol is an example of the new generation of scholars who cut right across disciplinary boundaries. He was educated as a computer scientist, having graduated with a Ph.D. in computer science from Brandeis University, but he now works as a linguist at the Faculty of Arts, Chulalongkorn University, a traditional bastion of humanistic studies in Thailand. The topic of his talk was “Gender Bias in AI.” He presented a very clear account of current research on gender bias in AI, giving examples from a wide variety of languages, such as Hungarian, English, Chinese, and Thai. The key idea of his talk was that, in analyzing natural languages, AI algorithms, working on data obtained from real-life usage, tend to mirror the biases that are already present in the data themselves. His talk was very useful for those who would like to start research in the field, and he pointed out work that needs to be done in order to combat the gender problem in AI. He suggested ways to ‘de-bias’ AI through a variety of means, which basically involve constant input and monitoring of how AI does its job.
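Attapol’s talk did not come with code, but the mechanism he described, namely language models absorbing the gender associations already present in their training text, can be illustrated with a minimal, self-contained sketch. The four-dimensional vectors and word list below are invented purely for illustration (they come from no real model), and the subtraction step is only a toy version of the well-known ‘hard de-biasing’ idea of removing a gender direction from words that should be gender-neutral.

```python
import numpy as np

# Toy word "embeddings" invented for illustration only; a real study
# would load vectors trained on a large corpus of actual usage.
emb = {
    "he":       np.array([ 0.9, 0.1, 0.3, 0.2]),
    "she":      np.array([-0.9, 0.1, 0.3, 0.2]),
    "nurse":    np.array([-0.6, 0.5, 0.1, 0.4]),
    "engineer": np.array([ 0.7, 0.4, 0.2, 0.3]),
    "teacher":  np.array([-0.2, 0.6, 0.3, 0.1]),
}

def unit(v):
    return v / np.linalg.norm(v)

# A crude gender direction: the difference between "he" and "she".
gender_dir = unit(emb["he"] - emb["she"])

def gender_lean(word):
    # Projection onto the gender direction: > 0 leans "he", < 0 leans "she".
    return float(np.dot(unit(emb[word]), gender_dir))

occupations = ("nurse", "engineer", "teacher")
for w in occupations:
    print(f"{w:9s} lean before de-biasing: {gender_lean(w):+.2f}")

# Toy 'hard de-biasing': remove the gender component from occupation
# words, which a fair model should treat as gender-neutral.
for w in occupations:
    emb[w] = emb[w] - np.dot(emb[w], gender_dir) * gender_dir

for w in occupations:
    print(f"{w:9s} lean after de-biasing:  {gender_lean(w):+.2f}")
```

As the talk stressed, such one-off corrections only treat the symptom: the biased associations come from the usage data themselves, which is why constant input and monitoring were recommended rather than a single fix.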
Key Issues discussed
Key recommendations for action
● AI can, and should, be made more gender-friendly through technical means such as de-biasing and ongoing monitoring
June 6, 2022, 3.30 – 5.00pm, Thailand Time
Background on the event
Our fourth Network Meeting took place on June 6, 2022, after a rescheduling. The talk was led by Jun-E Tan from Malaysia. Jun-E is a scholar and policy researcher who has been involved in the issue of AI governance, especially in Southeast Asia, which was the topic of her talk at this Network Meeting. Dr. Tan opened by talking about what AI is and the security risks created by the technology. She divided the risks into four categories, namely digital/physical, political, economic, and social. An example of the first category is the potential for AI to cause physical harm or to be used in attacks. Political risks include disinformation and surveillance; economic risks include the widening gap between the rich and the poor; and social risks include threats to privacy and human rights. These were only some of the risks that Dr. Tan mentioned. She then talked about how these risks could be mitigated through a system of governance, which included rapid responses by governments and the adaptation of international norms such as the GDPR with some degree of localization. She also presented some of the challenges that Southeast Asian governments need to face, such as the fact that Southeast Asian governments and the region in general do not have a strong voice in the international arena. There are also existing and ingrained challenges such as a lack of technical expertise, authoritarian regimes, and weak institutional frameworks. After her talk there was a lively discussion among the audience, which included how a system of governance could promote the use of AI in such a way as to create a more gender-equal society.
Key Issues discussed
● Physical, political, economic, and social risks of AI
● Challenges facing Southeast Asian governments
Key recommendations for action
● Anchor AI governance in its societal contexts
● Build constitutionality around AI and data governance
● Enable whole-of-society participation in AI governance
The 5th Network Meeting
Background on the event
In this fifth session of the Southeast Asia Network Meeting, which was the last one for the first year of the Project, Eleonore Fournier-Tombs and Matthew Dailey talked about “Gender-Sensitive AI Policy in Southeast Asia.” Fournier-Tombs is a global affairs researcher specializing in technology, gender, and international organizations. Matthew Dailey is a professor of computer science at the Asian Institute of Technology. Fournier-Tombs started by pointing out various risks that AI poses for women, such as loan apps giving out more money to men than to women, job applications from women being downgraded by AI, and so on. She also talked about stereotyping through the use of language, as well as some of the socio-economic impacts this stereotyping and discrimination has caused. She then talked about the project that she and Matthew Dailey were undertaking, in which they looked at the AI situation in four countries in Southeast Asia, namely Malaysia, the Philippines, Thailand, and Indonesia. They found that all four countries had their own AI roadmap policies, but only Thailand had a fully functioning official AI ethics policy guideline. Toward the end of her talk she discussed how an instrument like the Universal Declaration of Human Rights is translated into working documents on AI policy, especially in the region. After Fournier-Tombs’ talk, Matthew Dailey followed with a discussion of the project he was working on with Fournier-Tombs and of the projects he was working on with his students. The latter applied AI technology to various uses in Thailand, such as facial recognition and the regulation of unruly Thai urban traffic. There was a lively discussion among the audience at the end.
Key Issues discussed
● AI policies in Southeast Asian countries
● How women are impacted by AI and what instruments are there to mitigate the impact
● How four countries in Southeast Asia responded to the AI challenge
Key recommendations for action
● More study of how global mechanisms such as the Universal Declaration of Human Rights become operative in the field of AI policy
● Research and development on gender-sensitive AI
Capacity Building Workshop: Feminist AI and AI Ethics
On May 20, 2022, the Southeast Asia Hub of the “Incubating Feminist AI Project” launched its first capacity building workshop, entitled “Feminist AI and AI Ethics,” at the Royal River Hotel in Bangkok. The workshop was part of the series of activities organized by the f<A+i>r network, a group of scholars and activists who have joined together to think about how AI could contribute to a more equal and inclusive society. The Project is supported by a grant from the International Development Research Centre (IDRC), Canada.
The event was attended by around twenty participants from various disciplines and backgrounds. The aim of the workshop was to equip participants with the basic vocabulary and conceptual tools for thinking about the roles that AI could play in engendering a more inclusive society.
The workshop was opened by Suradech Chotiudomphant, Dean of the Faculty of Arts, Chulalongkorn University. Dr. Jittat Fakcharoenphol and Dr. Supavadee Aramvith were also present at the Workshop. Jittat was the lead discussant and took a key role in the group discussion, and Supavadee is a member of the Southeast Asia Hub of the Project. Dr. Soraj Hongladarom, Director of the Center for Science, Technology, and Society, then presented a talk on “Why Do We Need to Talk about Feminist Issues in AI?” After presenting a brief definition and history of AI, Soraj talked about the reasons why we need to consider feminist issues in AI, as well as other issues concerning social equality. Basically, the reason is that gender equality is essential for the economic development of a nation. A nation where women and men are given the same opportunities and equal rights is more likely to create prosperity that benefits everyone, especially when compared with a society that does not give women equal rights and opportunities. Furthermore, there is also a moral reason: denying women their rights is wrong because inequality itself is morally wrong. He then talked about the various ways in which AI has actually been used, whether intentionally or not, in ways that violate the rights of women. For example, AI has been used to calculate the likelihood of repaying loans. If the dataset is such that women are perceived by the algorithm as less likely to repay, then there is a bias in the algorithm against women, something that needs to be corrected. Toward the end, Soraj mentioned that the Incubating Feminist AI project was currently launching a call for expressions of interest, to which everyone was invited to submit. Details of the call can be found here.
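Soraj’s loan example can be made concrete with a short sketch. Everything below is synthetic and invented for illustration: the ‘historical’ approvals are deliberately skewed against women, and the point is simply that a model trained on such a history reproduces the skew even though nothing in the code sets out to discriminate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: gender (1 = woman) and income in thousands.
gender = rng.integers(0, 2, size=n)
income = rng.normal(50.0, 15.0, size=n)

# Invented 'historical' approvals, deliberately skewed: women with the
# same income were approved less often than men in this fake history.
approved = ((income > 45.0) & (rng.random(n) > 0.10 + 0.25 * gender)).astype(int)

# A model trained on that history learns and repeats the skew.
X = np.column_stack([gender, income])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

# Demographic-parity check: predicted approval rates per group.
for label, mask in (("men", gender == 0), ("women", gender == 1)):
    print(f"predicted approval rate, {label}: {pred[mask].mean():.2f}")
```

A real audit would use held-out data and more than one fairness metric, but even this toy check makes the disparity Soraj described visible as two different approval rates.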
Afterwards, the actual workshop began, with a lead talk and discussion by Dr. Jittat Fakcharoenphol from the Department of Computer Engineering, Faculty of Engineering, Kasetsart University. Jittat talked about the basic concepts of machine learning, the core of today’s AI, and then presented the group with three cases to discuss, all concerned with feminist issues in various applications of AI: in medicine, in facial recognition, and in loan and hiring algorithms. The participants divided themselves into three groups; each chose a topic and discussed it very actively. After about an hour of group discussion, each group presented to the others what they had discussed and what their recommendations were. The participants showed a strong interest in the topics, and everyone was convinced that AI needs to become more socially aware and that more work needs to be done to see in detail what exactly socially aware AI will look like.
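No code was written at the workshop itself, but the kind of question raised in the facial recognition and hiring cases can be phrased as a simple per-group audit. The group labels, ground truth, and predictions below are invented for illustration; in practice they would come from a real system evaluated on a representative test set.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Invented evaluation data: demographic group (0 or 1), true labels, and
# predictions from a hypothetical classifier that errs more on group 1.
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)
flip = rng.random(n) < np.where(group == 1, 0.20, 0.05)
y_pred = np.where(flip, 1 - y_true, y_true)

# Per-group error rates: the gap between groups is what an audit looks for.
for g in (0, 1):
    mask = group == g
    err = (y_pred[mask] != y_true[mask]).mean()
    print(f"group {g}: error rate {err:.2f} over {mask.sum()} samples")
```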
At the end of the meeting, Dr. Supavadee offered her reflections on the Workshop and gave a closing speech. The workshop was, to my knowledge, the first in Thailand to engage with feminist topics in AI, and it is a credit to the IDRC and the Incubating Feminist AI project that a seed has been planted in Thailand and in Southeast Asia: an awareness that we must consider how AI can contribute to a more equal and more inclusive society, and how the traditionally unequal status of women, especially in this part of the world, could be redressed through this technology.
Network meetings
Capacity Building Workshop
Collection of ‘Think Pieces’ or ‘Essays’ on Feminist AI