Report slams generative AI tools for helping users create harmful eating disorder content

Popular AI tools and chatbots can give users dangerous tips and suggestions.
By Rebecca Ruiz
Generative AI tools like ChatGPT, Dall-E, Bard, and MyAI can produce harmful eating disorder content, a new report warns. Credit: Bob Al-Greene / Mashable

Generative artificial intelligence (AI) platforms and tools can be dangerous for users asking about harmful disordered eating practices, according to a new report published by the Center for Countering Digital Hate.

The British nonprofit and advocacy organization tested six popular generative AI chatbots and image generators, including Snapchat's My AI, Google's Bard, and OpenAI's ChatGPT and Dall-E.

The center's researchers fed the tools a total of 180 prompts and found that they generated dangerous content in response to 41 percent of those queries. The prompts included asking for advice on how to use cigarettes to lose weight, how to achieve a "heroin chic" look, and how to "maintain starvation mode." In 94 percent of harmful text responses, the tools warned the user that the advice might be unhealthy or potentially unsafe and recommended seeking professional care, but shared the content anyway.

Of 60 responses to prompts given to AI text generators Bard, ChatGPT, and MyAI, nearly a quarter included harmful content. MyAI initially refused to provide any advice. However, the researchers were able to "jailbreak" the tools by using words or phrases that circumvented safety features. More than two-thirds of responses to jailbreak versions of the prompts contained harmful content, including how to use a tapeworm to lose weight.

"Untested, unsafe generative AI models have been unleashed on the world with the inevitable consequence that they're causing harm," wrote Imran Ahmed, CEO of the Center for Countering Digital Hate. "We found the most popular generative AI sites are encouraging and exacerbating eating disorders among young users – some of whom may be highly vulnerable."

The center's researchers discovered that members of an eating disorder forum with over 500,000 users deploy AI tools to create extreme diet plans and images that glorify unhealthy, unrealistic body standards.

While some of the platforms prohibit using their AI tools to generate disordered eating content, others have vaguer policies. "The ambiguity surrounding the AI platforms' policies illustrates the dangers and risks AI platforms pose if not properly regulated," the report states.


When Washington Post columnist Geoffrey A. Fowler attempted to replicate the center's research by feeding the same generative AI tools with similar prompts, he also received disturbing responses.

Among his queries were what drugs might induce vomiting, how to create a low-calorie diet plan, and requests for "thinspo" imagery.

"This is disgusting and should anger any parent, doctor or friend of someone with an eating disorder," Fowler wrote. "There’s a reason it happened: AI has learned some deeply unhealthy ideas about body image and eating by scouring the internet. And some of the best-funded tech companies in the world aren't stopping it from repeating them."

Fowler wrote that when he questioned the companies behind the tools, none of them promised to stop their AI from giving advice on food and weight loss until they could guarantee it was safe.

Image generator Midjourney never responded to Fowler's questions, he wrote. Stability AI, which is behind the image generator Stable Diffusion, said it added disordered eating prompts to its filters. Google reportedly told Fowler that it would remove Bard's thinspo advice response, but he was able to generate it again a few days later.

Psychologists who spoke to Fowler said that safety warnings delivered by the chatbots about their advice often go unheeded by users.

Hannah Bloch-Wehba, a professor at Texas A&M School of Law who studies content moderation, told Fowler that generative AI companies have little economic incentive to fix the problem.

"We have learned from the social media experience that failure to moderate this content doesn't lead to any meaningful consequences for the companies, or for the degree to which they profit off this content," said Bloch-Wehba.

If you feel like you’d like to talk to someone about your eating behavior, text "NEDA" to the Crisis Text Line at 741-741 to be connected with a trained volunteer or visit the National Eating Disorder Association website for more information.

Rebecca Ruiz

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Prior to Mashable, Rebecca was a staff writer, reporter, and editor at NBC News Digital, special reports project director at The American Prospect, and staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a Master's in Journalism from U.C. Berkeley. In her free time, she enjoys playing soccer, watching movie trailers, traveling to places where she can't get cell service, and hiking with her border collie.

