Recently, AUDRi wrote an open letter to Sam Altman, CEO of OpenAI, the creators of the very popular and much-talked-about ChatGPT.
OpenAI produced a report explaining how it had mitigated some of the risks associated with the latest version of ChatGPT, such as preventing it from answering questions that solicit harmful content (where to buy an illegal firearm, for example) or from generating racist jokes.
But the report also outlined risks that OpenAI has so far been unable to mitigate, such as:
- Its potential to create large-scale disinformation
- How it reproduces real-world biases and generates discriminatory content
- How it could be used to write more convincing ‘phishing’ emails
- How it could make it easier for ill-intentioned users to access information about materials used to make weapons and to identify potentially vulnerable target locations.
OpenAI’s Chief Scientist admitted that “at some point it will be quite easy, if one wanted, to create a great deal of harm with these models.”
Sam Altman has called for proper governance around large language models like ChatGPT, a call that mirrors AUDRi’s key ask for the development of digital technology.
We asked for a meeting with OpenAI ‘so that together we can start wrapping the appropriate governance around ChatGPT.’ We’ll let you know when we hear back from them.