Should you trust what AI says?

Our Alliance Advisory Board member, Yale Professor Elisa Celis, was drawn to create AI technology to better the world, only to discover that it has a big problem: AI that is designed to serve all of us, in fact, excludes many of us.

In this recent TEDxProvidence talk, Professor Celis highlights that algorithmic bias exists throughout AI, and that the problem arises at every step of the way: from the moment we collect data, to the way we design algorithms, to how we analyse and use the data. Each step involves human decisions and human motivations, yet we rarely stop to ask who is making these decisions, who benefits from them, and who is being excluded.

The good news? We can fix this now. The Alliance, led by Advisory Board members Professor Celis and Professor Nisheeth Vishnoi, has developed a number of resources and algorithms that individuals and organisations can use to support gender equality and non-discrimination in automated decision-making (ADM) and to correct for bias in machine learning systems. These resources are available for ANYONE to use right now.
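To make the idea of "correcting for bias" concrete, here is a minimal sketch of one widely used technique: reweighing training examples so that a protected attribute is statistically independent of the outcome label before a model is trained. This is a generic illustration with hypothetical toy data, not the Alliance's actual algorithms or code.

```python
# A minimal sketch of "reweighing": give each training example a weight
# so that, under the weights, group membership and the positive label
# become statistically independent. Illustration only; the variable
# names and toy data below are hypothetical.
from collections import Counter

def reweigh(groups, labels):
    """Return one weight per example: expected frequency under
    independence divided by the observed joint frequency."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    weights = []
    for g, y in zip(groups, labels):
        expected = (group_counts[g] / n) * (label_counts[y] / n)
        observed = joint_counts[(g, y)] / n
        weights.append(expected / observed)
    return weights

# Toy data: group "a" receives the positive label less often than "b".
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 0, 0, 1, 1, 0]
weights = reweigh(groups, labels)
# Under these weights, both groups have the same weighted positive rate,
# so a model trained with them no longer inherits that historical skew.
```

The underrepresented (group, label) pairs get weights above 1 and the overrepresented pairs weights below 1, which is one simple way historical bias can be "baked out" of training data rather than baked in.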

We can either seize the moment to correct bias in the digital realm, as we tackle bias in the analogue world, or condemn ourselves to new ADM with old stereotypical associations of gender, race and class baked in and impossible to 'bake out'.

We must correct the real-life bias and barriers that prevent women from achieving full participation and rights in the present, and in the future we invent. For this reason, we urge you to join the global Alliance.

*Elisa Celis is an Assistant Professor in the Department of Statistics and Data Science at Yale University. To see her full TEDx talk, click here.