Human–Artificial Intelligence Collaboration and Mental Health Support

AI chatbots can offer detailed, relevant information and, at times, responses that people might not be able to provide. However, they often lack qualities such as warmth, individuality, and emotional sensitivity (Balcombe, 2023). To help bridge this gap, researchers have proposed human–artificial intelligence (human–AI) collaboration, in which people and AI systems complement one another and work toward shared goals. This approach aims to create outcomes that are safer, more efficient, and more meaningful for daily life and work, and it aligns with ideas promoted by the Center for Humane Technology (2023), which calls for AI systems built on values such as empathy, responsibility, and care.

Chatbots are increasingly used across many areas of everyday life, particularly in situations where people prefer not to or are unable to interact directly with another person (Adamopoulou & Moussiades, 2020). Key advantages include continuous availability, lower-cost assistance, adaptability to individual needs, and reduced fear of judgment. As a result, AI-supported chatbots in digital mental health programs may help ease access barriers, prompting people to seek support sooner while also producing information that can guide future research and planning (Boucher et al., 2021).

Furthermore, a recent systematic review and meta-analysis found that AI chatbots are generally acceptable to users across a wide range of mental health conditions (He et al., 2023). For example, Woebot, a fully automated conversational agent, has shown promise as an engaging way to deliver cognitive behavioural strategies to young adults experiencing symptoms of anxiety and depression (Fitzpatrick et al., 2017).

Despite their potential, AI chatbots have important technical limitations. Because natural language understanding remains imperfect, chatbot responses may be incomplete, inappropriate, or misinterpreted, particularly when users are unaware that they are interacting with an automated system (Balcombe, 2023). This highlights the need for clear communication and user education when deploying AI in mental health contexts (Balcombe & De Leo, 2022).
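
To make this concrete, the sketch below illustrates two safeguards this implies: disclosing up front that the user is talking to software, and declining to guess when the system's understanding of a message is uncertain. It is a minimal Python illustration; the toy classifier, reply table, and 0.6 confidence threshold are assumptions for this example, not features of any cited system.

```python
# Minimal sketch (not any cited system) of two deployment safeguards:
# telling users up front that they are talking to software, and refusing
# to guess when the system's understanding of a message is uncertain.

DISCLOSURE = ("Hi, I'm an automated chatbot, not a human clinician. "
              "If you are in crisis, please contact local emergency services.")

REPLIES = {
    "greeting": "Hello! How are you feeling today?",
    "mood_low": "I'm sorry to hear that. Would you like to talk about it?",
}

def classify(message: str) -> tuple[str, float]:
    """Toy stand-in for a real NLU component: returns (intent, confidence)."""
    text = message.lower()
    if "hello" in text or "hi" in text:
        return "greeting", 0.9
    if "sad" in text or "down" in text:
        return "mood_low", 0.8
    return "unknown", 0.2

def respond(message: str, threshold: float = 0.6) -> str:
    intent, confidence = classify(message)
    if confidence < threshold or intent not in REPLIES:
        # Acknowledge the limit instead of risking an inappropriate reply.
        return ("I'm not sure I understood. Could you rephrase, or would "
                "you like me to connect you with a human supporter?")
    return REPLIES[intent]

print(DISCLOSURE)
print(respond("I've been feeling really down lately"))
```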

A major challenge lies in chatbots’ difficulty processing emotional complexity. Human mental health communication often involves subtle emotional cues, metaphors, and ambiguous language, which AI systems may fail to interpret accurately. Research on suicidal communication illustrates how nuanced and context-dependent such messages can be, posing significant challenges for automated interpretation (Ireland & Bradford, 2021). Although computational tools may assist in detecting patterns such as negative tone or restricted language, these systems cannot replace human judgment (Tamim, 2023).
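
As a rough illustration of the triage role described above, the sketch below flags surface patterns (clusters of negative words, very short "restricted" replies) for human review without making any decision itself. The word list and thresholds are invented for this example and are far too crude for real use.

```python
# Hedged illustration: simple computational screens can flag surface
# patterns for a human to review, but a flag is a triage signal only,
# never a clinical judgment.

NEGATIVE_WORDS = {"hopeless", "worthless", "alone", "trapped", "burden"}

def screen_message(message: str) -> list[str]:
    """Return a list of triage flags; an empty list means no flag raised."""
    words = message.lower().split()
    flags = []
    if sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words) >= 2:
        flags.append("negative tone")
    if 0 < len(words) <= 3:
        flags.append("restricted language")
    return flags

for msg in ["I feel hopeless and alone.", "Fine.", "Had a good day at work!"]:
    # Flags route the message to a human reviewer; they decide nothing.
    print(f"{msg!r} -> {screen_message(msg) or 'no flags'}")
```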

Beyond technical constraints, the use of AI in mental health raises broader ethical concerns. These include risks related to privacy, data protection, fairness, and the handling of highly sensitive personal information (Lee & Kruger, 2023). Accordingly, the World Health Organization emphasizes that generative AI should only be introduced into health care following rigorous testing, expert oversight, and safeguards to minimize potential harm (Morrison, 2023).
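
On the data-protection point specifically, one common mitigation is to minimize what is stored, for example by redacting obvious direct identifiers before a transcript is logged. The sketch below assumes simple regular-expression rules; real de-identification is considerably harder and needs expert and legal review.

```python
# Minimal sketch of data minimization: redact obvious direct identifiers
# before a transcript is stored. These two illustrative patterns would
# not catch all personal information.

import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace matched identifiers with placeholders before logging."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("You can reach me at jane.doe@example.com or 555-123-4567."))
# -> "You can reach me at [EMAIL] or [PHONE]."
```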

To address these challenges, human–AI collaboration frameworks have been proposed, in which AI systems support rather than replace human decision-making. Such approaches emphasize shared responsibility and ongoing supervision, reducing reliance on fully automated systems. Future research should examine which forms of human involvement are most effective across research, clinical practice, and policy, alongside the development of clear governance and regulatory guidelines (Dwivedi et al., 2023).
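
One way such a framework can be operationalized, sketched here under assumed names and thresholds rather than any published design, is to have the AI draft a reply together with its own risk estimate, and to queue anything above a review threshold for a human whose decision is final.

```python
# Hedged sketch of human-in-the-loop supervision: the AI proposes, the
# human disposes. Names and the 0.5 threshold are assumptions.

from dataclasses import dataclass

@dataclass
class Draft:
    reply: str   # what the AI proposes to send
    risk: float  # the AI's own risk estimate, 0.0 (low) to 1.0 (high)

REVIEW_THRESHOLD = 0.5

def handle(draft: Draft, human_review) -> str:
    """Send low-risk drafts directly; defer the rest to a human reviewer."""
    if draft.risk >= REVIEW_THRESHOLD:
        return human_review(draft)  # human may edit, replace, or escalate
    return draft.reply

# Toy reviewer: in practice this would be a clinician working from a queue.
approved = handle(
    Draft(reply="That sounds very hard. I'm here to listen.", risk=0.7),
    human_review=lambda d: d.reply + " A member of our team will also reach out.",
)
print(approved)
```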

Taken together, existing evidence suggests that AI chatbots hold considerable promise for mental health care, yet important questions remain. Continued research, careful implementation, and cross-sector collaboration are essential to reduce risks and maximize benefits.

References:

  • Adamopoulou, E., & Moussiades, L. (2020). Chatbots: History, technology, and applications. Machine Learning with Applications, 2, 100006.

  • Balcombe, L. (2023). AI Chatbots in Digital Mental Health. Informatics, 10(4), 82. https://doi.org/10.3390/informatics10040082

  • Balcombe, L., & De Leo, D. (2022). Human-Computer Interaction in Digital Mental Health. Informatics, 9(1), 14. https://doi.org/10.3390/informatics9010014

  • Boucher, E. M., Harake, N. R., Ward, H. E., Stoeckl, S. E., Vargas, J., Minkel, J., ... & Zilca, R. (2021). Artificially intelligent chatbots in digital mental health interventions: A review. Expert Review of Medical Devices, 18(sup1), 37–49.

  • Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., ... & Wright, R. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.

  • Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e7785.

  • He, Y., Yang, L., Qian, C., Li, T., Su, Z., Zhang, Q., & Hou, X. (2023). Conversational agent interventions for mental health problems: Systematic review and meta-analysis of randomized controlled trials. Journal of Medical Internet Research, 25, e43862.

  • Ireland, D., & Bradford, D. K. (2021). Pandora’s Bot: Insights from the syntax and semantics of suicide notes. In Healthier Lives, Digitally Enabled (pp. 26–31). IOS Press.

  • Lee, M., & Kruger, L. (2023). Risks and ethical considerations of generative AI. Deloitte. https://ukfinancialservicesinsights.deloitte.com/post/102i7s2/risks-and-ethical-considerations-of-generative-ai (accessed 23 August 2023).

  • Morrison, R. (2023). WHO urges caution over use of generative AI in healthcare. Tech Monitor. https://techmonitor.ai/technology/ai-and-automation/ai-in-healthcare-who (accessed 23 August 2023).

  • Tamim, B. (2023). Belgian woman blames ChatGPT-like chatbot ELIZA for her husband’s suicide. Interesting Engineering. https://interestingengineering.com/culture/belgian-woman-blames-chatgpt-like-chatbot-eliza-for-her-husbands-suicide (accessed 23 August 2023).

  • Center for Humane Technology. (2023). Align technology with humanity’s best interests. https://www.humanetech.com/ (accessed 19 August 2023).
