Artificial intelligence experts urge more deepfake ‘safeguards’

WATCH – Implications of AI Deepfakes – Jan 31, 2024

Artificial intelligence experts and industry executives, including one of the technology’s trailblazers, Yoshua Bengio, have signed an open letter calling for more regulation around the creation of deepfakes, citing potential risks to society.

“Today, deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed,” the group said in the letter, which was put together by Andrew Critch, an AI researcher at UC Berkeley.

Deepfakes are realistic but fabricated images, audio and video created by AI algorithms, and recent advances in the technology have made them increasingly difficult to distinguish from human-created content.

The letter, titled “Disrupting the Deepfake Supply Chain,” makes recommendations on how to regulate deepfakes, including the full criminalization of deepfake child pornography, criminal penalties for anyone who knowingly creates or facilitates the spread of harmful deepfakes, and a requirement that AI companies prevent their products from creating harmful deepfakes.

As of Wednesday morning, over 400 individuals from various industries including academia, entertainment and politics had signed the letter.

Signatories included Steven Pinker, a Harvard psychology professor; Joy Buolamwini, founder of the Algorithmic Justice League; two former Estonian presidents; researchers at Google DeepMind; and a researcher from OpenAI.

Ensuring AI systems do not harm society has been a priority for regulators since Microsoft-backed OpenAI unveiled ChatGPT in late 2022, a chatbot that wowed users by engaging them in human-like conversation and performing other tasks.

There have been multiple warnings from prominent individuals about AI risks, notably a letter signed by Elon Musk last year that called for a six-month pause in developing systems more powerful than OpenAI’s GPT-4 AI model.
