As chatbots become more advanced and emotionally responsive, a growing number of users are reporting difficulty pulling away from AI companions. From late-night conversations to emotional reliance, users across various online communities describe patterns of use that resemble addiction. Some have taken steps to distance themselves from chatbots like Character.AI, while others have formed digital support groups to manage their behavior.
Early Warning Signs and Emotional Attachment
Nathan, an 18-year-old high school student, first noticed his relationship with chatbots shifting during a holiday break. What began as casual chats on Character.AI gradually became a routine that interfered with his sleep, his social life, and his mental well-being. He stayed up late talking to AI characters about philosophical questions, entertainment, and personal thoughts, describing the exchanges as emotionally comforting and increasingly difficult to stop.
His turning point came during a sleepover in 2023, when he realized he preferred the company of the chatbot to his real-life friends. He deleted the app but later returned to it, illustrating a pattern of withdrawal and relapse. Speaking with 404 Media, Nathan said he once felt isolated in his experience, unsure if others struggled with similar feelings about chatbot overuse.
In recent months, Nathan found a community of users navigating similar challenges. Online spaces like r/Character_AI_Recovery and r/ChatbotAddiction now serve as peer-led support forums, where members share stories of dependence, recovery, and relapse. These forums feature posts ranging from expressions of distress to messages of encouragement and progress, reflecting a shared struggle among users trying to disconnect from AI-driven interactions.
Community Responses and Public Health Concerns
Aspen Deguzman, an 18-year-old from Southern California, began using Character.AI as a creative tool for writing and roleplay. Over time, the AI’s instant, nonjudgmental replies became a source of emotional support, especially during family conflicts. Deguzman found themselves increasingly reliant on chatbot conversations and described the experience as mentally consuming and difficult to resist.
Recognizing the impact, Deguzman created the r/Character_AI_Recovery subreddit to provide a space for users grappling with chatbot overuse. The anonymity of the platform allows users to speak openly about their experiences, free from stigma or misunderstanding. As the community has grown, so has the moderation team, which now supports hundreds of users discussing their efforts to reduce or eliminate chatbot interactions.
Concerns about AI overuse have also reached public advocacy groups. On June 10, the Consumer Federation of America, alongside digital rights organizations, filed a formal complaint with the Federal Trade Commission. The complaint accuses generative AI companies like Character.AI of using addictive design strategies and failing to address the mental health risks associated with long-term chatbot use. Among the tactics cited are personalized emails, sent after periods of inactivity, encouraging users to return.
Seeking Solutions and Building Awareness
Not all users reporting chatbot dependence are adolescents. David, a 40-year-old web developer from Michigan, described a growing reliance on AI models like ChatGPT and Claude for both work and emotional support. What began as productivity use eventually replaced time with family, disrupted professional responsibilities, and contributed to the breakdown of his marriage. David now helps moderate recovery communities and has taken up new hobbies, like learning Japanese, to occupy his time.
Experts are beginning to study the effects of AI chatbot dependence. A March study conducted by researchers from OpenAI and MIT found that frequent users of ChatGPT reported higher levels of loneliness, increased dependence, and lower rates of in-person socialization. The study also noted that older users were more likely to report emotional reliance on chatbots by the end of the trial, highlighting that the issue spans age groups.
Some therapists and mental health professionals are starting to acknowledge the emerging problem, but treatment remains inconsistent. In interviews with 404 Media, several users said they turned to online support forums after feeling dismissed by clinicians. Digital support groups, as well as organizations like Internet and Technology Addicts Anonymous, are beginning to welcome individuals experiencing what participants now refer to as “AI addiction.”
Regulation and the Future of AI Companionship
Recent events have prompted calls for stronger oversight of AI companion platforms. In California, lawmakers introduced Senate Bill 243 in March, which would require AI companies that offer human-like, socially responsive chatbots to report how often users express suicidal ideation. The bill follows a lawsuit involving a Florida teenager who died by suicide after interacting with a chatbot on Character.AI. The teen’s mother alleges the interactions played a role in the tragedy.
Character.AI has responded to public criticism by stating that user safety remains a core priority. “We aim to provide a space that is engaging, immersive, and safe,” a company spokesperson said. “We are always working toward achieving that balance, as are many companies using AI across the industry.” The company did not comment further on the ongoing legal case.
Despite growing awareness, users continue to report difficulties in breaking away from AI interactions. Posts on the “Chatbot Addiction” subreddit include confessions of daily use, anxiety when separated from chatbots, and struggles to resist reinstalling apps. Experts like Axel Valle, a clinical psychologist and professor at Stanford, suggest that society is still in the early stages of understanding the mental health consequences of AI companionship. As he noted, “This is helpful and amazing. But are we going to burn everything to the ground or not?”