Equality by Design – A model for managing the discriminatory risks of AI

Posted in: AI | February 15, 2024

Guest blog by Jim Fitzgerald, Director, Equal Rights Trust

Throughout the last year, it has seemed that barely a week has passed without a major development in the field of artificial intelligence. From the launch of ChatGPT to the establishment of the UN’s High-Level Advisory Body on Artificial Intelligence, the news has been filled with stories of the transformative potential of these technologies, alongside warnings about adverse impacts, particularly in the area of human rights.

The use of artificial intelligence and other algorithmic systems is increasing rapidly. These technologies are being used to automate tasks and reduce costs in areas ranging from recruitment to law enforcement; to deliver new goods, products and experiences; and to transform the ways in which existing services, such as education and healthcare, are provided.

The optimistic once suggested that these systems – by using data and automating decisions – could create processes which are more efficient and objective, free from human bias. Yet there is growing recognition that – far from eliminating discrimination – algorithmic systems frequently produce discriminatory impacts. Discriminatory by Default?, a report published by the Equal Rights Trust, presents fifteen case studies from across the globe which illustrate some of the many ways in which the use of algorithmic systems can cause discrimination.

Discriminatory by default

One example cited in the study is the “Optum” system – an algorithmic system used by healthcare providers in the USA to identify and refer patients in need of “high risk” care. Academics found that Optum was significantly less likely to refer Black patients than white patients with the same healthcare issues because it used “health costs as a proxy for health needs”, and Black patients historically paid less for care due to existing structural inequalities. In another case, from the Republic of Korea, the Lee Luda AI chatbot “learnt” to make homophobic, racist and ableist statements, claiming to hate lesbians, Black people and persons with disabilities as a result of its interactions with users. These case studies, and others from the Netherlands, Jordan, Paraguay and elsewhere, demonstrate that the use of algorithmic systems can and does give rise to discrimination on a wide range of grounds.
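
To make the Optum mechanism concrete, here is a minimal, hypothetical sketch. The patients, need scores, cost figures and threshold are all invented for illustration – this is not the actual Optum model. The point is only that a cost-based proxy can deny referral to a patient with identical need:

```python
# Hypothetical illustration (invented data and threshold, not the actual
# Optum system): when historical spending is used as a proxy for health
# need, patients whose group has spent less on care for structural
# reasons are referred less often at equal levels of need.

def refer_for_high_risk_care(predicted_annual_cost, threshold=5000):
    """Refer a patient to the high-risk care programme if their
    predicted annual spending exceeds the threshold."""
    return predicted_annual_cost >= threshold

# Two patients with identical health needs, but different predicted costs
# because one has historically had less access to (and spent less on) care.
patients = [
    {"id": "A", "need_score": 8, "predicted_annual_cost": 6200},
    {"id": "B", "need_score": 8, "predicted_annual_cost": 3900},
]

for patient in patients:
    referred = refer_for_high_risk_care(patient["predicted_annual_cost"])
    print(patient["id"], "referred" if referred else "not referred")
# Output: A is referred; B, with the same need score, is not.
```

Auditing the system against a direct measure of need, rather than the cost proxy, is what exposes the disparity the proxy conceals.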

Indeed, as the title of the report indicates, because of the way in which these systems are developed, designed, trained, deployed and used, they are frequently discriminatory by default. Reliance on data which reflects existing bias; system design which reflects stereotypes; failures to accommodate difference; and a range of other issues mean that these systems frequently reinforce existing patterns of discrimination and replicate bias. In light of this – and in view of the broad range of discriminatory impacts and the challenges in foreseeing them – states and businesses need to adopt a pre-emptive and precautionary approach if they are to ensure that these systems do not cause discrimination.

We call this approach equality by design. Equality by design is a proactive and participatory approach to identifying, assessing and addressing the discriminatory impacts – and any potential positive equality impacts – of algorithmic systems. It is based on integrating informed consultation and meaningful engagement with those exposed to discrimination at all stages of a system’s lifecycle. This approach enables potential discriminatory impacts – such as those which arose in the Optum and Lee Luda cases – to be identified by those whose experience enables them to foresee how such impacts will occur, and to be addressed in advance of implementation.
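
To illustrate what one such pre-deployment check might look like, here is a deliberately simplified sketch. The records, group labels and the 0.8 ratio (borrowed from the “four-fifths rule” used in US employment practice) are assumptions for demonstration; the Principles do not prescribe any single statistical test, and such checks complement – rather than replace – consultation with affected groups:

```python
# A minimal sketch of one pre-deployment "equality by design" check:
# compare outcome rates across groups and flag large disparities for
# human review. Data and the 0.8 ratio are illustrative assumptions.

from collections import defaultdict

def outcome_rates(records):
    """Share of positive outcomes per group, from (group, outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        positives[group] += int(positive)
    return {group: positives[group] / totals[group] for group in totals}

# Simulated decisions from a system under test: (group, was_referred).
records = [
    ("group_1", True), ("group_1", True), ("group_1", False),
    ("group_2", True), ("group_2", False), ("group_2", False),
]

rates = outcome_rates(records)
benchmark = max(rates.values())
for group, rate in rates.items():
    if rate / benchmark < 0.8:  # flag large disparities for review
        print(f"Potential disparate impact: {group} rate {rate:.2f} "
              f"vs benchmark {benchmark:.2f}")
```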

Alongside the Discriminatory by Default report, the Equal Rights Trust launched a new set of legal standards – the Principles on Equality by Design in Algorithmic Systems. Endorsed by a group of leading international equality organisations representing women, ethnic and religious minorities, LGBTIQ+ persons and other groups, the Principles outline why and how states and businesses must adopt an equality by design approach if they are to meet their legal obligations of non-discrimination.

Having launched these Principles, we’re working with others in civil society to engage national, regional and intergovernmental institutions on the integration of equality by design into new regulatory frameworks. At the same time, however, we know that many in the business community – particularly those who procure and use these technologies – want clear guidance right now on how to meet their legal obligations and operate in line with their values of equality and inclusion.

A commitment to combating discrimination

To meet this demand, we’re working with a small group of multinational businesses to develop user-friendly guidelines on equality by design for those who commission and use algorithmic systems. This group includes businesses from a range of sectors that share a commitment to combating discrimination and promoting equality. It is led by Mary Kay, the global cosmetics company.

Over the last two years, Mary Kay has supported our research, analysis and consultation on algorithmic discrimination, and enabled us to develop the Principles on Equality by Design. Now, it is leading this collaborative effort to develop practical guidance for business on how to adopt the equality by design approach. In doing so, it is taking the next step in a journey which started with strong internal equality and non-discrimination policies, moved to collaboration with and support for us and other actors in this field, and is now focused on incubating and piloting best practice. In this journey, Mary Kay is helping to model how businesses can play an active role in developing improved approaches to eliminating discrimination and advancing equality.

Together, the Equal Rights Trust and Mary Kay are seeking other businesses to join our new initiative. Contact algorithmicdiscrimination@equalrightstrust.org to find out more.  
