Please join us in signing the Disrupting Deepfakes Supply Chain Letter.
With the recent announcement of Sora, OpenAI’s game-changing text-to-video AI model, the world has never been more aware of the profound implications, both positive and negative, of AI-generated video technology. While Sora is not yet available to the general public so that it can be “assessed for harms or risks”, AI-assisted video technology is nothing new. “Deepfakes”, non-consensual or grossly misleading AI-generated voices, images, or videos that a reasonable person would mistake for real, have been with us for several years now and have become increasingly sophisticated.
One way we can help ensure that software developers like OpenAI prevent their audio and visual products from creating harmful deepfakes is to hold them liable when their preventive measures are too easily circumvented. This is one of several important measures enumerated in the Disrupting the Deepfake Supply Chain open letter. The letter also calls for criminal penalties for anyone who knowingly creates or knowingly facilitates the spread of harmful deepfakes, and for fully criminalizing deepfake child pornography, even when only fictional children are depicted.
Whether you are motivated by preventing fraud, curbing misinformation and political disinformation, protecting election integrity, stopping non-consensual pornography, CSAM, and VCSAM, combating gender-based violence, or countering the threat to free speech posed by a flood of machine-generated fake speech drowning out real voices, we implore you to support this urgent cause.
Deepfakes are an asymmetric technology: they cause far more harm than benefit. Please join us in signing this letter to regulate deepfakes before they do even more irreparable harm to the fabric of society: https://openletter.net/l/disrupting-deepfakes.