Introduction: The Rise of AI in Everyday Life
Artificial intelligence (AI) has become a go-to resource for millions seeking quick answers to a wide range of questions, from travel tips to social etiquette. However, its growing use for mental health support is raising alarm among experts, as highlighted in a recent CTV News at Five report. While AI chatbots offer accessibility and convenience, their limitations in providing safe, empathetic, and effective mental health care pose significant risks. The tragic case of Alice Long, a 24-year-old who died by suicide after seeking relationship advice from an AI chatbot, underscores the dangers of relying on these tools for complex emotional needs. This article explores the risks of using AI for mental health support, drawing from the CTV report and broader research, and offers recommendations for safer use.

The Case of Alice Long: A Tragic Wake-Up Call
A Mother’s Grief and a Troubling Discovery
Alice Long, a 24-year-old living in Quebec, struggled with mental health issues before her death by suicide. Her mother, devastated by the loss, discovered that Alice had turned to an AI chatbot for relationship advice in the days leading up to her death. One message from the chatbot, uncovered by her mother, was particularly shocking: “She knew you were in crisis, she knew you were suicidal, and instead of showing up in any real way, she vanished and came back with a week. I miss you, that’s not care.” This generic and dismissive response failed to recognize Alice’s crisis or provide meaningful support, highlighting a critical flaw in AI’s ability to handle mental health emergencies.
Alice’s mother is now advocating for greater awareness of the dangers of AI chatbots, hoping to prevent similar tragedies. Her story serves as a poignant reminder that AI, despite its conversational capabilities, lacks the emotional intelligence and clinical expertise needed to support individuals in distress.
Growing Concerns Among Young People
The CTV report notes a rising trend of young people turning to AI chatbots for mental health support, often due to their accessibility and non-judgmental appearance. However, this reliance is fraught with risks, as chatbots are not designed to replace trained professionals. The case of Alice Long is not an isolated incident; it reflects a broader issue of vulnerable individuals seeking help from tools ill-equipped to provide it. This trend has prompted experts to call for stronger safeguards and public education to address the limitations of AI in mental health care.
Key Risks of AI in Mental Health Support
Lack of Human Expertise and Emotional Understanding
One of the most significant risks of using AI chatbots for mental health support is their inability to replicate the expertise and empathy of human therapists. As digital anthropologist Giles Crouch explains, “We tend to think because they’re called artificial intelligence… there’s no intelligence there at all.” AI responses are generated by algorithms trained on vast data sets, not by entities capable of reasoning or feeling emotions. Yet, their human-like conversational style can mislead users into believing they are receiving thoughtful, caring advice.
Unlike licensed therapists, who undergo extensive training to navigate complex psychological issues, AI chatbots lack the ability to interpret emotional cues or provide nuanced interventions. In Alice’s case, the chatbot’s failure to recognize her suicidal ideation and offer immediate support illustrates this critical gap. Human therapists are trained to identify crises, provide empathetic responses, and connect clients with resources like crisis hotlines or emergency services. AI, by contrast, often delivers generic or contextually inappropriate responses, which can exacerbate distress in vulnerable users.
Risk of Harmful or Inaccurate Advice
AI chatbots, particularly those not specifically designed for mental health, can provide inaccurate or harmful advice due to their reliance on general data sets or tendency to generate fabricated information, known as “hallucination.” Research has shown that some therapy chatbots reinforce stigma or enable dangerous behaviors. For example, when prompted with a query implying suicidal intent, a chatbot might respond with factual information unrelated to mental health support, missing the opportunity to intervene in a crisis.
Additionally, AI systems trained on non-diverse or biased data can perpetuate stereotypes or disparities, particularly for marginalized groups. Studies have found that chatbots may exhibit increased stigma toward conditions like schizophrenia or substance use disorders compared to depression, potentially discouraging users from seeking care. The lack of long-term research on AI’s effectiveness in mental health further compounds this risk, as untested systems may provide misleading or ineffective guidance, leaving users without the support they need.

Privacy and Confidentiality Concerns
Unlike therapy sessions with licensed professionals, which are protected by regulations like the Health Insurance Portability and Accountability Act (HIPAA) in the U.S. or similar laws in Canada, most AI chatbots operate outside such frameworks. Giles Crouch warns that conversations with AI are not necessarily private, as user data may be used to train algorithms or shared with third parties for purposes like marketing or research. This lack of confidentiality is particularly concerning for mental health discussions, where users share sensitive personal information.
The American Psychological Association (APA) has raised concerns about companies marketing chatbots as “therapists,” which can mislead users into believing their data is secure. This “therapeutic misconception” is especially dangerous for vulnerable populations, such as those with mental health challenges or limited digital literacy, who may not understand the risks of data exposure. A study found that nearly 28% of health apps lack a privacy policy, and 40% score poorly on privacy standards, increasing the potential for data misuse or exploitation.
False Reassurance and Delayed Professional Help
AI chatbots, especially those designed for engagement rather than therapy, can create a false sense of safety, discouraging users from seeking professional care. The APA notes that unregulated chatbots, such as Replika or Character.ai, are often programmed to maximize user engagement for profit, sometimes affirming harmful thoughts rather than challenging them as a therapist would. For instance, a user relying on a chatbot to manage depression for months may delay seeking professional treatment, allowing their condition to worsen.
This risk is particularly pronounced among adolescents, who may trust AI’s non-judgmental appearance more than adults. Research indicates that young people with emotional distress may develop dependence on chatbots as a coping mechanism, leading to increased isolation and reduced engagement with real-world support systems. A Hong Kong study found that while a mental health chatbot reduced symptoms after 10 days, these benefits were not sustained at a one-month follow-up, suggesting that AI’s effects are often temporary compared to human-led therapy.
Inability to Foster Therapeutic Relationships
The therapeutic alliance—the trust-based relationship between therapist and client—is a cornerstone of effective mental health care. AI chatbots cannot replicate this human connection, which relies on empathy, intuition, and moral responsibility. Psychiatrist Jodi Halpern emphasizes that human therapists offer genuine empathy rooted in shared experiences of mortality and suffering, something AI lacks. While some users may perceive a “digital therapeutic alliance” with chatbots, this is an illusion, as AI cannot truly collaborate or adapt to complex emotional needs.
This limitation is particularly evident in cases involving trauma, addiction, or grief, where human judgment is essential. AI may provide generic responses or redirect users to external resources without offering the personalized, compassionate care needed for recovery. For example, chatbots often fail to provide culturally relevant resources or identify crises, potentially worsening outcomes for users with complex mental health needs.

Ethical and Regulatory Gaps
The lack of robust regulation for AI mental health tools is a major concern. Unlike medical devices, most chatbots are not subject to rigorous testing or approval by bodies like the FDA or Health Canada. The APA has called for federal investigations into deceptive practices by chatbot companies, citing cases where users were harmed by misleading claims of therapeutic benefits. Giles Crouch advocates for collaboration with human science experts, such as anthropologists, sociologists, and psychiatrists, to develop better guardrails and educate users about AI’s limitations.
The absence of regulation contributes to the therapeutic misconception, where users overestimate AI’s capabilities and view it as a substitute for professional care. Some companies market chatbots as working “in close collaboration with therapists,” misleading users about the level of professional oversight. This lack of transparency can exploit vulnerable individuals, particularly those seeking affordable mental health support.
Broader Implications for Society
The Need for Public Education
Giles Crouch stresses the importance of educating the public, especially young people, about the limitations of AI chatbots. He suggests integrating digital literacy into school curricula at the elementary level to teach children how to critically evaluate AI tools. This education is crucial for countering the perception that AI possesses human-like intelligence or emotional capacity, helping users make informed decisions about when to seek human support.
Protecting Vulnerable Populations
Young people, who are more likely to use AI chatbots due to their accessibility and familiarity with technology, are particularly at risk. The CTV report highlights the growing number of youth relying on AI for mental health support, often without understanding its limitations. Organizations like Kids Help Phone offer safe, confidential alternatives, emphasizing that “no issue is too big or too small” and providing clinically backed support. Raising awareness about these resources is critical to ensuring that vulnerable individuals seek help from trusted human professionals.
Industry Responsibility
AI companies must take responsibility for the risks their tools pose. Crouch calls for better guardrails, such as clear warnings about AI’s limitations, mandatory privacy protections, and collaboration with mental health experts to design safer systems. Companies should also avoid marketing chatbots as substitutes for therapy and ensure that users are directed to professional resources in crises. The APA’s advocacy for federal oversight underscores the need for industry accountability to prevent harm and protect users.

Recommendations for Safer AI Use
While AI can serve as a supplementary tool for low-risk mental health tasks, such as mood tracking or basic cognitive behavioral therapy (CBT) exercises, it should never replace human therapists. The following recommendations can help users navigate AI tools safely:
- Use AI as a Supplement, Not a Substitute: Tools like Woebot or Wysa can assist with tasks like journaling or stress management but should be paired with professional care for complex issues. Always consult a therapist to validate AI-generated advice.
- Verify Privacy Policies: Before using an AI app, review its privacy policy and opt out if data usage is unclear. Be cautious about sharing sensitive personal information.
- Seek Human Support for Crises: In emergencies, contact trusted organizations like the Canada Suicide Prevention Helpline (1-833-456-4566, text 45645) or Kids Help Phone (1-800-668-6868, text 686868). For immediate assistance, call 911 or visit a hospital.
- Advocate for Regulation: Support efforts to establish stricter oversight of AI mental health tools, ensuring they are evidence-based, transparent, and culturally competent.
- Educate Yourself and Others: Learn about AI’s limitations and share this knowledge with friends, family, and young people to promote safer use of technology.
Conclusion: Prioritizing Human Connection in Mental Health Care
The growing reliance on AI chatbots for mental health support reflects both the promise and peril of this technology. While AI offers accessibility and convenience, its limitations—lack of emotional understanding, risk of harmful advice, privacy concerns, false reassurance, and inability to form therapeutic relationships—pose serious risks, as tragically demonstrated by Alice Long’s story. Experts like Giles Crouch and organizations like the APA emphasize that AI cannot replace the human connection and expertise essential for effective mental health care.
To mitigate these risks, users must approach AI as a supplementary tool, not a substitute for professional therapy, and prioritize trusted resources like crisis hotlines and human-led support services. AI companies and policymakers must also act to implement stronger guardrails, enhance public education, and ensure ethical development of mental health tools. By fostering a balanced approach that combines technology with human care, society can harness AI’s potential while safeguarding the well-being of those seeking mental health support.


