A Concerning New Study on Young People and AI
By Jean Rhodes
A young person is struggling. They could text a friend, talk to a parent, or reach out to a school counselor. Instead, they open ChatGPT. This scenario is no longer hypothetical. A new nationally representative study from Surgo Health, The Jed Foundation, and Young Futures (2026) found that 12 percent of young people ages 13 to 24 who report mental health struggles have turned to generative AI tools for support during periods of distress. Over 40 percent of those who used AI chatbots for mental health support said the chatbot never encouraged them to seek professional help or crisis services. The report’s lead researcher called this a “glaring red flag,” and it should be one for the mentoring field as well.
The question is not simply whether young people are using AI. It is why they are choosing it over people. The Surgo Health report found that youth who turned to AI during mental health struggles reported greater barriers to professional care, including cost, lack of caregiver support, and not knowing that help was available. They were 2.3 times more likely to cite a lack of support from parents or caregivers as a barrier to professional services. Many described using AI because it felt easier than talking to people in their lives and helped them avoid burdening others. This pattern fits a broader and deeply concerning trend. Emotional support has become the leading use case for generative AI, surpassing organizational and reference tasks (Zao-Sanders, 2025). As I have argued elsewhere, young people who initially turn to chatbots for routine questions about financial aid or course registration can gradually develop patterns of emotional support-seeking and reliance, a drift that occurs almost imperceptibly as the ever-present, judgment-free chatbot becomes the path of least resistance.
The risks of this drift are not trivial. Research has linked reliance on ChatGPT to higher levels of procrastination, self-reported memory loss, lower grades, and cognitive disengagement (Abbas et al., 2024; Kosmyna et al., 2025). Luo et al. (2024) found a feedback loop in which depression predicted increased chatbot use for companionship, mediated by loneliness, suggesting that initial social isolation drives students toward AI substitutes that in turn further reduce real-world social engagement and deepen psychological distress. And the Surgo Health report confirms that while short-term emotional relief after AI use was common, neutral or negative experiences were more frequent when AI functioned as a substitute for care rather than as part of a broader support system. The implications extend beyond individual wellbeing. As I argue in my Human-at-the-Helm framework (Rhodes, 2026), chatbots cannot make the introductions, referrals, and network connections that are critical for educational and career advancement. A chatbot might provide information about an internship, but it cannot vouch for a student’s character or work ethic to a potential advisor or employer. Since around half of jobs are secured through social networks (Mouw, 2003; Rajkumar et al., 2022), this limitation is particularly consequential for first-generation and underrepresented students who already face significant disadvantages in building professional social capital (Hagler et al., 2021).
These findings point toward a structural problem that mentoring is uniquely positioned to address. The Surgo Health report calls for an “ecosystem approach that prioritizes social connection, trusted mentorship, and adaptive support structures that can meet young people where they are.” But ecosystems do not build themselves. They require intentional investment in relationships and the infrastructure to sustain them. This is precisely the argument behind the human-at-the-helm model I have proposed as an alternative to the current approach of deploying student-facing chatbots as substitutes for human support (Rhodes, 2026). Rather than positioning AI as the primary agent of interaction, with humans in peripheral supervisory roles, the human-at-the-helm approach positions AI behind the scenes as a cognitive assistant that aggregates student data, surfaces evidence-based insights, and reduces the administrative burden on mentors. This frees mentors to be more present and more informed in their actual conversations with young people, leveraging teachable moments at the point of emotional salience rather than scrambling to pull together notes from siloed systems (McBride et al., 2003).
The approach matters because the current dominant model, human-in-the-loop, is fundamentally inadequate. When systems handle millions of interactions daily, human reviewers can monitor only a small fraction, and they do so retrospectively, leaving the vast majority of exchanges unobserved (UNESCO, 2025). Elish (2019) calls this the creation of “moral crumple zones” in which humans bear the moral and legal responsibility for system failures while technology vendors deflect accountability. The Surgo Health finding that over 40 percent of youth received no referral to professional help from their AI interactions is a predictable consequence of this architecture. The chatbot was never designed to care, and no amount of after-the-fact human review can change that.
By contrast, a human-at-the-helm approach could transform how mentoring programs respond to the very patterns the Surgo Health report documents. Consider a mentor who learns through an AI-assisted summary that a mentee has been logging in at unusual hours, has missed several advising appointments, and in their last conversation expressed uncertainty about continuing in school. Rather than discovering these signals piecemeal or not at all, the mentor enters the conversation already informed and can ask the kinds of context-specific follow-up questions that build trust and surface what is actually going on. The AI handles the cognitive labor of integration and pattern detection, but the mentor provides the human judgment, empathy, and relational investment that a chatbot cannot simulate in any meaningful way.
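To make that division of labor concrete, here is a minimal sketch of the kind of behind-the-scenes aggregation described above. Everything in it is a hypothetical illustration, not part of any deployed system: the `Signal` class, the `mentor_briefing` function, the signal types, and the 1-to-5 a.m. off-hours window are all assumptions introduced for this example, and a real implementation would also require student consent, privacy safeguards, and validated risk indicators. The design point is visible in the code itself: the output is a briefing addressed to the mentor, never a message sent to the student.

```python
from dataclasses import dataclass
from datetime import datetime, time

# Hypothetical record of one raw engagement signal. Field names are
# illustrative; nothing here corresponds to a real product or dataset.
@dataclass
class Signal:
    kind: str            # "login", "missed_appointment", or "conversation_note"
    timestamp: datetime
    detail: str = ""

def is_off_hours(ts: datetime, start: time = time(1, 0), end: time = time(5, 0)) -> bool:
    """Treat activity between 1:00 and 5:00 a.m. as off-hours (an arbitrary cutoff)."""
    return start <= ts.time() <= end

def mentor_briefing(signals: list[Signal]) -> list[str]:
    """Condense raw signals into short notes addressed to the mentor.

    The AI layer integrates and detects patterns; the mentor, reading this
    briefing, supplies the judgment about whether and how to follow up.
    """
    notes: list[str] = []

    off_hours = [s for s in signals if s.kind == "login" and is_off_hours(s.timestamp)]
    if len(off_hours) >= 3:  # arbitrary threshold, for illustration only
        notes.append(f"{len(off_hours)} logins between 1 and 5 a.m. this period.")

    missed = [s for s in signals if s.kind == "missed_appointment"]
    if missed:
        notes.append(f"{len(missed)} advising appointment(s) missed.")

    for s in signals:
        if s.kind == "conversation_note" and "continuing" in s.detail.lower():
            notes.append(f"Remark flagged on {s.timestamp:%b %d}: {s.detail!r}")

    return notes

if __name__ == "__main__":
    sample = [
        Signal("login", datetime(2026, 2, 3, 2, 14)),
        Signal("login", datetime(2026, 2, 5, 3, 40)),
        Signal("login", datetime(2026, 2, 8, 1, 55)),
        Signal("missed_appointment", datetime(2026, 2, 6, 10, 0)),
        Signal("conversation_note", datetime(2026, 2, 9, 16, 0),
               "Not sure about continuing in school next term"),
    ]
    for note in mentor_briefing(sample):
        print("-", note)
```

Everything in this sketch stops at the mentor’s desk; deciding what the signals mean, and what to say to the student, remains a human task.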
This model also has implications for who provides mentoring. Given the shortage of professional advisors and counselors on college campuses, and the length and cost of professional training, I have argued that peer mentoring programs have a potentially important role to play in a continuum of care (Rhodes, 2026). With adequate training and supervision, more advanced college students serving as peer mentors could extend access to lower-stakes support, answering routine questions, helping students navigate support systems, and making introductions, while escalating to professional advisors when indicated. This aligns with a stepped-care service model in which students start with the least intensive approaches and move toward more intensive professional services only when needed (McQuillin et al., 2021). Human-at-the-helm AI tools could extend these task-shifting principles by equipping peer mentors with just-in-time guidance and evidence-based strategies that would otherwise require years of professional training to access.
The equity dimensions of the Surgo Health findings are especially important here. The report found that patterns of AI engagement differ across demographic groups, and that the youth facing the greatest barriers to professional care are the most likely to turn to AI as a replacement for it. This mirrors longstanding disparities in mentoring. Although chatbots are often marketed as democratizing access to support, they risk creating what I have described as a two-tier system in which economically advantaged students continue to rely on well-connected friends and family, while less privileged peers rely increasingly on chatbots in ways that deprive them of opportunities to build social capital (Rhodes, 2026). Mentoring programs that adopt human-at-the-helm approaches have the potential to counteract this drift by ensuring that the students with the fewest existing connections receive the most intentional relational investment.
None of this means that mentoring programs should ignore AI or treat it as the enemy. The Jed Foundation’s chief medical officer, Dr. Laura Erickson-Schroth, noted that if AI can “recognize that a young person is seeking out help and send them in the right direction towards caring adults, it could be a great tool.” I agree, and that is exactly what human-at-the-helm systems are designed to do. They treat AI as a tool for scaling human connection rather than replacing it. The question the Surgo Health report forces us to confront is whether we will build systems that route struggling young people toward relationships or away from them.