By Ivana Bartoletti, co-founder of AUDRi, and Women Leading in AI
There is not a day that goes by without someone theorizing about the end of the world and the existential calamity that will be wrought by artificial intelligence. AI will take over and even extinguish the whole of humanity, warn the thought leaders and governments of the world, and we must act now to save ourselves. This is certainly the rhetoric being shared ahead of the UK’s AI Summit happening this week, and, to a lesser extent, it seems to have prompted the Executive Order signed this week by US president Joe Biden.
This doom and gloom is unhelpful and misguided. And, cynically, it might even be a way of distracting from the hard work of getting around the table and negotiating the rules of the game.
And it obscures the fact that AI does not exist in isolation: its development and adoption are no cause for the panicked creation of new and untested solutions. AI operates within a world where, albeit imperfectly, we have already learned how to set rules around exploitation, human rights, and human dignity.
Using what we know to move faster than AI
Amid all this gloom and catastrophizing, it is reassuring that the United Nations Secretary General has presented his Policy Brief for a global framework for digital governance called the Global Digital Compact. The compact outlines “shared principles for an open, free and secure digital future for all.” And, importantly, it references aspects and nuances of the digital ecosystem that might be lost or overlooked in the rush to solve the potential “dangers” posed by AI.
There are some really interesting elements in this briefing: the understanding, for example, that women have been at the sharp end of the unequal distribution of digital dividends and have been more subject to online abuse – and are also much more likely to be discriminated against by algorithms used to hire and fire, make predictions, or allocate resources, because of (among other things) the historical data that these algorithms are trained on.
The UN, and its proposals, matter because the organization brings together nations from all over the world to discuss common problems and find solutions that benefit all of humanity. One main challenge around global AI governance relates to epistemic injustice: while bringing more people around the table is great, we must identify how vital voices from the Global South can have equal footing and enrich a debate that otherwise remains Western-centric.
And the global nature of the quest to use AI to improve the world should not be underestimated. Mahamudu Bawumia, Vice-President of Ghana, argues that Africa has a workforce ready to take on the tech revolution and drive progress and economic transformation across the continent – but that resources are needed so that AI can improve the livelihood and wellbeing of many.
But for AI to be for good, rules have to be set. The UN’s Global Digital Compact can be the space for this – but good intentions won’t suffice. Whatever is agreed at the UN needs to have a degree of enforceability, transparency requirements, and the ability to benchmark countries against a set of rules. The UN agreement also needs to inform national laws and policies, and have the ability to be a persuasive authority for courts when making decisions at the country level.
A crucial head start
Of course, we cannot sit and wait for the Digital Compact to be ready. With this in mind, it is heartening that interim measures are being considered. For example, the European Union and the US have decided to lay out a code of conduct for responsible AI, in the hope that other democracies around the world will follow suit while specific new legislation is being prepared.
We must also recognise existing laws. In the US, FTC commissioner Alvaro Bedoya is right when he says that machine learning is not an excuse to break the law. The French privacy watchdog, for example, has just issued guidance around AI and reaffirmed the compatibility of AI with the General Data Protection Regulation. It is fair to say that Data Protection Authorities from all over the world have played a fundamental role in upholding people’s rights against opaque systems that make decisions over their futures and lock citizens and workers out of services and opportunities.
Of course, there are things that we will not be able to regulate: for example, over-reliance on tech (the so-called automation bias) and the lack of understanding of how decisions made by machines differ from those made by humans (machines categorize and work on correlations, knowing nothing about what causes what). For this, it is imperative that the media as well as our institutions raise their game and present AI for what it is: a great opportunity to value and augment our humanity and our quest for progress, equality and hope.
AI is not neutral: instead, it is an inextricable bundle of power dynamics between nations, data, large tech companies, and people. This is a crucial time to define our relationship with technology – and a time when politics and international institutions must demonstrate that they can innovate faster than the incredible speed of AI itself. We must lean into what we already know to give us a crucial head start.