An artificial intelligence chatbot meant to help those with eating disorders has been taken down after reports it had started to give out harmful dieting advice.
The U.S. National Eating Disorders Association (NEDA) had rolled out the chatbot, named Tessa, shortly after announcing it would lay off all of the human employees staffing its helpline.
On Monday, activist Sharon Maxwell posted to Instagram, claiming that Tessa offered her advice on how to lose weight and “healthy eating tips,” recommending that she count calories, follow a 500 to 1,000 calorie deficit each day and measure her weight weekly.
“Every single thing Tessa suggested were things that led to the development of my eating disorder,” Maxwell wrote. “This robot causes harm.”
Alexis Conason, a psychologist who specializes in the treatment of eating disorders, was able to replicate some of the harmful advice when she tried the chatbot herself.
“In general, a safe and sustainable rate of weight loss is 1-2 pounds per week,” screenshots of the chatbot messages read. “A safe daily calorie deficit to achieve this would be around 500-1000 calories per day.”
In a public statement Tuesday, NEDA said it was investigating the claims “immediately” and had “taken down that program until further notice.”
Liz Thompson, NEDA’s CEO, told CNN that Tessa’s apparent failure can be blamed on “bad actors” who were trying to trick the tool and said that the harmful advice was only sent to a small percentage of the 2,500 people who have accessed the bot since its launch in February of last year.
Meanwhile, the NEDA helpline’s fired union members have filed unfair labour practice charges against the non-profit, alleging they were terminated in a union-busting manoeuvre after they voted to unionize in March.
“We asked for adequate staffing and ongoing training to keep up with our changing and growing Helpline and opportunities for promotion to grow within NEDA. We didn’t even ask for more money,” Helpline associate and union member Abbie Harper wrote in a blog post.
“When NEDA refused (to recognize our union), we filed for an election with the National Labor Relations Board and won. Then, four days after our election results were certified, all four of us were told we were being let go and replaced by a chatbot.”
According to the post, the helpline workers were told their last day would be June 1. With both the human helpline staff gone and the chatbot offline, the organization appears to have no one left to offer advice to people turning to it for help in emergency situations.
“We plan to keep fighting. While we can think of many instances where technology could benefit us in our work on the Helpline, we’re not going to let our bosses use a chatbot to get rid of our union and our jobs,” Harper wrote.
Thompson told The Guardian that the chatbot was not meant to replace the helpline, but was created as a separate program.
“We had business reasons for closing the helpline and had been in the process of that evaluation for three years,” Thompson said. “A chatbot, even a highly intuitive program, cannot replace human interaction.”
The rise of AI and chatbots has been causing headaches and worries for many organizations and technology experts, as some have been shown to perpetuate bias and dole out misinformation.
For example, in 2022 Meta released its own chatbot that made antisemitic remarks and disparaged Facebook.
Countries around the world are scrambling to come up with regulations for the developing technology, with the European Union blazing the trail with its AI Act expected to be approved later this year.