Yim, See Heng, Yoo, Dong Whi, Polymerou, Apostolos, Liu, Yuqi and Saha, Koustuv (2025) Generative AI for eating disorders: linguistic comparison with online support and qualitative analysis of harms. International Journal of Eating Disorders, eat.24604, pp. 1-15. ISSN 1098-108X
Objective:
Generative artificial intelligence (AI) has the potential to support people with eating disorders (EDs), but its use also presents risks. This study compared the psycholinguistic attributes (language markers of cognitive, emotional, and social processes) and lexico‐semantic characteristics (patterns of word choice and meaning in text) of AI responses and human responses in online communities (OCs), and assessed the potential harms of the AI responses.
Method:
We collected pre‐COVID data from Reddit communities on EDs, consisting of 3634 posts and 22,359 responses. For each post, responses were generated using four widely used state‐of‐the‐art AI models (GPT, Gemini, Llama, and Mistral) with prompts tailored to peer support. The Linguistic Inquiry and Word Count (LIWC) lexicon was used to examine psycholinguistic features across eight dimensions, and a suite of lexico‐semantic comparisons was conducted across the dimensions of linguistic structure, style, and semantics. Additionally, 100 AI‐generated responses were qualitatively analyzed by clinicians to identify potential harm.
Results:
Compared with OC responses, AI responses were generally longer and more polite, yet more repetitive and less creative. Empathy scores varied among models. Qualitative analysis revealed themes of possible reinforcement of ED behaviors, implicit biases (e.g., favoring weight loss), and an inability to acknowledge contextual nuances, such as insensitivity to emotional cues and overgeneralized health advice. All AI chatbots produced responses containing harmful content, such as promoting ED behaviors or biases, to varying degrees.
Discussion:
Findings highlight differences between AI and OC responses, with potential risks of harm when using AI in ED peer support. Ethical considerations include the need for safeguards to prevent the reinforcement of harmful behaviors and biases. This research underscores the importance of cautious AI integration; further validation and the development of guidelines are needed to ensure safe and effective support.