What a Systematic Review Reveals About AI’s Expanding Role in Mental Health Care
Dehbozorgi, R., Zangeneh, S., Khooshab, E., Hafezi Nia, D., Hanif, H. R., Samian, P., Yousefi, M., Haj Hashemi, F., Vakili, M., Jamalimoghadam, N., & Lohrasebi, F. (2025). The application of artificial intelligence in the field of mental health: A systematic review. BMC Psychiatry, 25(132). https://doi.org/10.1186/s12888-025-06483-2
Introduction
Artificial intelligence (AI) is rapidly transforming the mental health landscape. As rates of mental illness continue to rise globally, there is growing interest in how AI might enhance care delivery, especially for underserved populations. Dehbozorgi and colleagues (2025) conducted a systematic review to assess the breadth of AI applications in mental health, focusing on their effectiveness, ethical challenges, and methodological rigor.
Methods
Using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, the authors conducted a multi-database search including PubMed, ProQuest, Scopus, Web of Science, and several Persian databases. From 2,638 initial records, 15 studies met the final inclusion criteria after multiple rounds of screening for relevance, methodological quality, and availability. Quality appraisal tools such as the Mixed Methods Appraisal Tool (MMAT), Newcastle–Ottawa Scale (NOS), and Joanna Briggs Institute (JBI) checklists were used to assess the studies’ rigor.
Results
The included studies reflected a wide range of AI tools, including chatbots, emotion-recognition algorithms built on Long Short-Term Memory (LSTM) or Convolutional Neural Network (CNN) architectures, and wearable-integrated systems. These technologies were applied across diverse populations, including college students, older adults, and global users.
Overall, the tools demonstrated moderate to high effectiveness in early detection, emotional support, and engagement. For instance, the Wysa chatbot led to statistically significant improvements in depressive symptoms, and machine learning models accurately predicted mental health risks in youth. Acceptability was generally high, particularly for chatbots and mobile applications. Yet, concerns remained about algorithm transparency, cultural sensitivity, and data privacy.
Discussion
The review highlights the strong potential of AI to personalize and scale mental health interventions. However, the authors caution that many studies lacked methodological transparency or carried a moderate risk of bias. Ethical issues, especially those surrounding user consent, algorithmic bias, and surveillance, are not yet consistently addressed in the literature. Stakeholder involvement, including mental health practitioners and patients, was rare, yet the authors argue it is necessary for building trustworthy AI tools. They call for future studies to strengthen technical transparency and ethical safeguards, particularly as AI continues to expand into clinical and community-based mental health settings.
Implications for Mentoring Programs
AI’s ability to detect distress, offer timely support, and personalize communication holds promise for youth mentoring programs, especially in digital or hybrid models. Chatbots could supplement mentors by providing 24/7 emotional support, identifying early warning signs, and guiding mentees to resources. However, mentors and program leaders must be trained in how AI works, when to intervene, and how to protect mentee data. Importantly, human connection remains irreplaceable: AI should enhance, not replace, the trust-based relationships on which mentoring depends.
Read the full piece here