Human-at-the-Helm: Imagining AI’s Role in Mentoring Programs

By Jean Rhodes

 

As artificial intelligence (AI) chatbots proliferate across academic and mental health services, mentoring programs face a critical choice: Will we let AI systems take the wheel, or will we keep humans firmly at the helm? The dominant approach emerging in educational and therapeutic contexts is what technologists call “human-in-the-loop” (HITL) design, where AI systems handle the primary interactions while humans serve as supervisors, stepping in only when algorithms detect problems. While this model may work for transactional services, it fundamentally misunderstands what makes mentoring effective.

Why Human-in-the-Loop Falls Short for Mentoring

Research consistently demonstrates that effective mentoring relationships are built on authentic human connection, genuine empathy, and the development of social capital (the resources embedded in real relationships that open doors and create opportunities). When we position AI as the primary agent of support, we risk undermining the very mechanisms that make mentoring transformative.

Consider what happens when a young person shares a struggle with an AI chatbot versus a human mentor. The AI may provide technically appropriate responses, even sophisticated ones that simulate empathy. But it cannot offer what mentees most need: the experience of being truly heard and understood by another human being who is willing to invest precious time and emotional energy in their success.

Moreover, AI systems cannot make the introductions, referrals, and network connections that research shows are crucial for educational and career advancement. A chatbot might provide information about internship opportunities, but it cannot make the phone call that leads to an interview or vouch for a student’s character to a potential employer.

Human-at-the-Helm

Instead of relegating humans to supervisory roles, I propose a “human-at-the-helm” approach that positions AI as a sophisticated tool to enhance rather than replace human mentors. This paradigm preserves the relational foundations of effective mentoring while leveraging AI’s capabilities to make mentoring more accessible and evidence-based. Rather than replacing human mentors with chatbots, AI should serve as an intelligent assistant that helps mentors provide more effective support. For example, my colleagues and I are developing a system that analyzes mentee data from multiple sources, provides mentors with research-backed suggestions tailored to specific situations, and connects them with relevant resources while preserving the human mentor as the primary relationship agent. By design, this system doesn’t generate messages for mentors to send. Instead, it provides insights and suggestions that mentors use to craft their own authentic responses.

Practical Implications for Programs

For mentoring programs considering AI integration, the human-at-the-helm approach offers several key principles. First, any AI implementation should strengthen rather than replace human connections. Second, mentees should understand when and how AI assists their mentors, preserving trust and authenticity in the relationship. Third, critical decisions affecting mentee welfare must remain under human control, with AI serving an advisory rather than determinative role. Finally, AI should give mentors access to research and best practices they might not otherwise have, democratizing expert knowledge while preserving human judgment.

Conclusions

As the world races to adopt AI solutions and companions, let’s resist the temptation to sacrifice authentic human connection for operational efficiency. The young people in our programs need to talk with humans, not bots. They need genuine relationships with caring adults who can provide the empathy, guidance, and social capital that no algorithm can replicate. By keeping humans at the helm and leveraging AI to enhance mentor effectiveness, we can create programs that truly scale human connection.