Mentoring Programs in the Age of AI: Perils and Promise
By Jean Rhodes
Mentoring programs are entering an era in which many young people already treat AI chatbots as quasi‑helpers, whether adults like it or not. National survey data show that about one in eight U.S. adolescents and young adults has turned to large language model chatbots such as ChatGPT for mental health advice, with usage even higher among 18‑ to 21‑year‑olds and among those with elevated symptoms of distress. Qualitative research finds users “shaping ChatGPT into my digital therapist,” turning to it for emotional processing, perspective‑taking, and companionship when human support feels out of reach. For mentoring programs, the question is no longer whether AI will be part of the landscape, but how to respond in ways that protect and strengthen, rather than erode, human relationships.
The first step is clarity about what chatbots are and are not. General‑purpose systems are optimized for fluent conversation, not for developmental mentoring. They tend to affirm and mirror users’ perspectives, which may create seductive validation loops for adolescents who are already prone to egocentric and self‑focused thinking. Generative models are also nondeterministic; the same prompt can yield different responses across sessions, a variability that makes it difficult for programs to guarantee consistent, accountable guidance.
At the same time, AI can help mentoring programs solve real problems if it is kept firmly in a support role. Platforms already use AI to improve mentor‑mentee matching by analyzing interests, goals, and backgrounds to identify more compatible pairs, a task that becomes increasingly complex at scale. Other tools can synthesize surveys, attendance data, and brief notes into concise dashboards that help mentors come into each meeting better prepared, much as AI scribes are beginning to relieve clinicians of some documentation burden so they can focus attention on the encounter itself. Properly designed, AI can handle logistics and pattern recognition so humans can devote their energy to listening, challenging, encouraging, and advocating.
A useful guiding principle is to keep mentors “at the helm.” In a human‑at‑the‑helm model, AI operates in the background: surfacing relevant information, suggesting evidence‑based prompts, and flagging potential concerns for follow‑up, while mentors remain the visible, accountable agents in young people’s lives. The aim is not to make AI sound more human, but to make humans more fully present.
Finally, mentoring programs should treat AI literacy as part of their developmental mission. Young people need trusted adults who can help them understand what chatbots can and cannot do, how to interpret their advice cautiously, and why certain kinds of struggles still call for human conversation. Emerging work underscores both the potential and the risks of these tools, particularly for youth with more intensive needs. Mentors are well positioned to help mentees navigate this new terrain, modeling critical thinking, boundary‑setting, and help‑seeking offline.
If mentoring programs harness AI to extend reach, sharpen match support and mentor preparation, and support mentors in doing the irreducibly human work of showing up, noticing, and caring, they can meet this technological moment with both realism and hope.