U.S. high schooler accused of using AI to create nude images of classmates

File photo of Westfield High School, which is at the centre of a scandal after a male student allegedly used an AI website to create deepfake pornographic content of female classmates. Rich Graessle/Icon Sportswire via Getty Images

Parents and students of Westfield High in New Jersey are speaking out after a male student or students allegedly used an AI image generator to create fake nude images of female classmates from their faces and distributed them in a group chat.

Francesca Mani, 14, says she is one of more than 30 girls who were victimized in the incident. In a televised interview with CNN, she called on the U.S. government to enact laws to protect victims of AI-generated content.

She also claimed that members of the school community know the student responsible for creating the lewd images, but he has yet to face appropriate consequences.

“So many girls don’t feel comfortable knowing that he’s walking our hallways,” Mani said.

Westfield High stated it could not provide specific details about disciplinary actions taken or the number of students involved, citing privacy reasons. The incident allegedly happened over the summer break, but the school did not become aware of it until Oct. 20.

On that day, Westfield High School principal Mary Asfendis sent an email to parents about the “very serious incident.”

“There was a great deal of concern about who had images created of them and if they were shared,” Asfendis wrote. “At this time, we believe that any created images have been deleted and are not being circulated.”

According to the Wall Street Journal, which first reported on the incident Saturday, female students learned about the non-consensual images after noticing male classmates in Grade 10 acting “weird” on Oct. 16. A few days later, one boy came forward to say that a classmate had used social media photos of female students to generate fake nude images using an AI-powered website.

A group of female students reported the matter to school administrators. The incident left many of them feeling “humiliated and powerless,” the Journal reported.

Westfield High said it conducted an investigation into the incident while collaborating with local police. Counselling was provided to students after the matter came to light.

Superintendent Raymond González stated that schools all over “are grappling with the challenges and impact of artificial intelligence and other technology available to students at any time and anywhere.”

“The Westfield Public School District has safeguards in place to prevent this from happening on our network and school-issued devices. We continue to strengthen our efforts by educating our students and establishing clear guidelines to ensure that these new technologies are used responsibly in our schools and beyond,” he added.

Westfield student Mani believes the issue should be handled by local police, and that the school should do more to make students feel comfortable walking the halls.

Mani’s mother said she’s proud of her daughter for advocating for herself and other girls who have been victimized by deepfake pornographic content.

“I think this issue is more complex than just Westfield High School, and this is our time and opportunity to treat it as a teachable platform, to shed the light on this important issue,” Dorota Mani told CNN.

In recent years, as AI technology has become more accessible, the number of deepfake images and videos on the internet has exploded.

A report from Sensity AI, a company that detects and monitors AI-generated content, found that 96 per cent of deepfake videos online are pornographic.

An AI-powered bot that “strips” people of their clothing was used to create fake nude images of nearly 105,000 women in a one-year time span, the company found. The bot was widely shared on the platform Telegram.

Around 70 per cent of the images created were of private individuals; in other words, not celebrities or public figures.

“Deepfakes continue to pose a threat for individuals and industries, including potential large-scale impacts to nations, governments, businesses, and society,” according to a recent report from the Department of Homeland Security.

“Experts from different disciplines whose research interests intersect at deepfakes tend to agree that the technology is rapidly advancing, and the high cost of producing top-quality deepfake content is declining. As a result, we expect an emerging threat landscape wherein the attacks will become easier and more successful,” the report adds.

Currently, no provision in Canada’s Criminal Code outright bans the creation and distribution of deepfake pornographic content. However, there is a provision that criminalizes revenge porn, or the non-consensual sharing of intimate images.
