OPEN LETTER TO OPENAI’S CEO SAM ALTMAN: INVITATION TO TALK TO HUMAN RIGHTS EXPERTS ABOUT CHATGPT

Posted in: Technology March 24, 2023
Following the recent launch of OpenAI’s ChatGPT and the organization’s own view that “this technology comes with real dangers as it reshapes society”, AUDRi has published an open letter to OpenAI’s CEO Sam Altman, inviting him to talk in greater detail about his call for regulators and society to be involved in creating solutions which mitigate potential negative consequences, and how these solutions are aligned with AUDRi’s 9 Digital Principles.

Dear Mr Altman,

Invitation to talk to human rights experts about ChatGPT

Our global campaign, the Alliance for Universal Digital Rights (AUDRi), was founded in 2022 by two international gender and equality organizations, Equality Now and Women Leading in AI.

We are dedicated to making digital technology and AI work for everyone, everywhere. We work with partners at the highest levels of government and business, and we have just returned from advancing this vision at the United Nations Commission on the Status of Women, where we discussed this vital challenge with leaders from all over the world.

Governments, companies, and the UN are working constructively with us. We hope that you will join them.

We congratulate you and your team on the latest release of OpenAI’s ChatGPT. We are excited by the new capabilities that have been added, and we are encouraged by the positive steps OpenAI has taken to prevent the conversational tool from encouraging hate speech, racism, and misogyny.

We applaud your honesty and realism when you warn that “this technology comes with real dangers as it reshapes society.” You note that we have to be careful because, for example, “these models could be used for large-scale disinformation.” 

Your Chief Scientist Ilya Sutskever adds that “at some point it will be quite easy, if one wanted, to cause a great deal of harm with these models.” 

It is right and proper that these concerns are shared and that action is taken collectively to mitigate the risks of such powerful technologies.

So we are throwing our weight behind your call for regulators and society to be involved to mitigate potential negative consequences and to adopt the solutions you and others have proposed, such as:

• the implementation of proper governance;
• transparency (including of the data being used to train the model);
• frameworks for testing;
• clear rules around the use of these tools.

These solutions are aligned with AUDRi’s 9 Digital Principles, which we developed to inform global efforts towards a digital future in which everyone can enjoy equal rights to safety, freedom, and dignity.

But we ask you to go further. Your system card risk assessment makes it clear that some risks of ChatGPT are currently unmitigated, and that this is dangerous. We know that there is more work to be done, and we want to help.

With all this in mind, we invite you to work with us to achieve what appear to be shared goals. As the UN develops its Global Digital Compact, a set of shared principles for an open, free, and secure digital future for all, world-leading products like yours must be designed for the common good.

We ask you to meet with us at the earliest opportunity so that together, we can start wrapping the appropriate governance around ChatGPT. To be clear, this is about restricting the potential harms, not restricting technology.

AUDRi wants you on board to deliver a new global governance framework. Will you join us?

By the way, we’ll be making this letter public tomorrow.

We look forward to working with you.

The Alliance for Universal Digital Rights.
