Algorithmic equity actions to correct for the real-life biases and barriers that prevent women and girls from achieving full participation and equal enjoyment of rights.
Public institutions to pilot and lead: Deploy affirmative action for algorithms when public institutions pilot ADM. Base pilots on longstanding and new social science research, allocating social incentives, subsidies, or scholarships where women have traditionally been left behind by prior systems. This is a positive agenda to advance the values of equality we have long embraced, and to correct for the visibility, quality, and influence of women in proportion to the population.
Public and private sector uptake of Algorithmic Impact Assessments (AIA): A self-assessment framework designed to respect the public's right to know about the AI systems that affect their lives, grounded in principles of accountability and fairness.
Rigorous testing across the lifecycle of AI systems: Testing should account for the origins and use of training data, test data, models, application programming interfaces (APIs), and other components over a product's life cycle. Testing should cover pre-release trials, independent auditing, certification, and ongoing monitoring for bias and other harms. ADM should improve the quality of, not control, the human experience.
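As one illustration of the kind of check such lifecycle testing could include, the sketch below compares an ADM system's favourable-outcome rates across gender groups and flags a large gap for human review. The data, group labels, and the 0.8 threshold (the common "four-fifths" rule of thumb) are assumptions for the example, not a prescribed standard.

```python
# Illustrative sketch only: a simple pre-release bias check comparing an
# ADM system's positive-outcome rates across gender groups.
# All data below, and the 0.8 threshold, are hypothetical assumptions.

def selection_rates(decisions, groups):
    """Return the favourable-decision rate for each group."""
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: 1 = favourable decision, 0 = unfavourable.
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups    = ["W", "M", "W", "M", "W", "M", "M", "W", "W", "M"]

ratio = disparate_impact_ratio(decisions, groups)
if ratio < 0.8:  # four-fifths rule of thumb: flag for independent review
    print(f"Potential adverse impact: ratio = {ratio:.2f}")
```

A check like this is only one monitoring signal among many; independent auditing and ongoing review would decide what any flagged gap actually means in context.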
Strong legal frameworks to promote accountability: Expand the powers of sector-specific agencies, or create new terms of reference, to oversee, audit, and monitor ADM systems, establishing regulatory oversight and legal liability for both the private and public sectors.
Gender-responsive procurement guidelines: Organizations and all levels of government should develop ADM gender-equality procurement guidelines with hard targets, and outline the roles and responsibilities of the organizations required to apply them.
Improve datasets – Open gender-disaggregated data, data collection, and inclusive, quality datasets: Actively produce open gender-disaggregated datasets; this enables a better understanding of the sources of bias in AI and ultimately improves the performance of machine learning systems. Invest in controls to oversee data collection processes and human-in-the-loop verification, so that data is not collected at the expense of women and other traditionally excluded groups. Engage in more inclusive data collection processes that focus not only on the quantity but on the quality of datasets.
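A minimal sketch of why disaggregation matters: an aggregate evaluation score can hide a large performance gap between groups, while the same test data, disaggregated by gender, exposes it. The labels, predictions, and group tags below are hypothetical.

```python
# Illustrative sketch only: aggregate vs. gender-disaggregated evaluation.
# All labels, predictions, and group tags are hypothetical assumptions.

def accuracy(labels, preds):
    """Fraction of predictions that match the labels."""
    return sum(l == p for l, p in zip(labels, preds)) / len(labels)

def disaggregated_accuracy(labels, preds, groups):
    """Accuracy computed separately for each group in the dataset."""
    out = {}
    for group in set(groups):
        idx = [i for i, g in enumerate(groups) if g == group]
        out[group] = accuracy([labels[i] for i in idx],
                              [preds[i] for i in idx])
    return out

labels = [1, 0, 1, 1, 0, 1, 0, 0]
preds  = [1, 0, 1, 0, 1, 0, 1, 0]
groups = ["M", "M", "M", "M", "W", "W", "W", "W"]

print("aggregate:", accuracy(labels, preds))           # 0.5 overall
print("by group:", disaggregated_accuracy(labels, preds, groups))
# The aggregate 0.5 hides a 0.75 (M) vs. 0.25 (W) gap.
```

In this toy example the system looks mediocre but uniform in aggregate, while disaggregation reveals it performs three times better for one group than the other; without gender-disaggregated data, that source of bias stays invisible.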