Sam vs. a Caring Peer: What a Simple Study Reveals About the Limits of Chatbots

By Jean Rhodes

In a new study, published in the Journal of Experimental Social Psychology, Ruo-Ning Li and colleagues recruited 296 first-year college students and randomly assigned them to one of three conditions over two weeks: daily text exchanges with a chatbot called Sam, daily text exchanges with a randomly assigned fellow first-year student, or brief daily journal entries as a control condition. The chatbot was designed using relationship science principles, instructed to be warm and supportive without pressing for self-disclosure. Both the AI and human text conditions improved students’ mood during the interactions. But only the human condition produced reductions in loneliness that held after the conversations ended. The researchers described the chatbot’s emotional relief as “not durable enough to shift people’s overall sense of loneliness over time.”

These findings suggest that feeling better during a pleasant exchange and actually feeling less lonely over time are very different things. From a developmental standpoint this makes sense. A young person who is genuinely less lonely has changed something about how they understand themselves in relation to other people. That kind of shift requires a relationship; responsiveness alone, however skillfully designed, does not produce it. This is why chatbots cannot substitute for what mentors do, even when they perform well on surface measures. A chatbot can generate a technically appropriate response, even a sophisticated one that simulates empathy. But it cannot detect the concern behind a question, offer the experience of being truly heard by another person, share a similar experience, or make the kind of investment of scarce time and attention that signals genuine care. Empathy is conveyed in part through the willingness to spend limited cognitive and emotional resources on another person. A chatbot’s attention is essentially cost-free, and young people seem to register that.

Yet chatbots are increasingly being offered to students as substitutes for advisors and mentors. The developmental stakes extend beyond emotional well-being. The mentors and advisors who populate young people’s lives are also the people who can open doors. A trusted adult who makes a phone call, writes a specific letter, or offers a personal introduction provides something no algorithm can replicate. Research consistently shows that around half of jobs are secured through social networks. A chatbot might provide information about an opportunity. It cannot vouch for a young person’s character, persistence, or potential to someone with the standing to act on that judgment. That vouching function, the willingness to stake one’s own reputation on a young person’s readiness, is one of the most consequential things a mentor can do, and it depends entirely on having watched that person navigate real challenges over time.

It’s important to note that these students did not find each other organically. They were assigned, given a clear protocol, and held to a schedule. Evidence-based matching, training, and consistent monitoring are the backbone of good programs, and they are what make the developmental benefits of mentoring possible. As institutions increasingly deploy chatbots to manage caseloads and reduce staffing costs, they risk eliminating this scaffolding, leaving young people, particularly those from less advantaged backgrounds who are least likely to build networks on their own, with fewer options for connection.

The field sometimes finds itself on the defensive in conversations about AI, as though it needs to justify why human mentors still matter. This simple two-week study is a useful reframe. The question was never whether a chatbot could be warm or helpful in the moment. It was whether a chatbot could do what mentoring programs exist to do. The evidence here says no.