To Bot or Not to Bot? How AI Companions Are Reshaping Human Services and Connection

Reprinted from Stanford Social Innovation Review (SSIR)

By Julia Freeland Fisher Jan. 21, 2025

Last year, a Harvard study on chatbots drew a startling conclusion: AI companions significantly reduce loneliness. The researchers found that “synthetic conversation partners,” or bots engineered to be caring and friendly, curbed loneliness on par with interacting with a fellow human. The study was silent, however, on the irony behind these findings: synthetic interaction is not a real, lasting connection. Should the price of curing loneliness really be more isolation?

Missing that subtext is emblematic of our times. Near-term upsides often overshadow long-term consequences. Even with the important lessons learned about the harms of social media and big tech over the past two decades, optimism about AI’s potential is now soaring, at least in some circles.

Bots present an especially tempting fix to long-standing capacity constraints across education, health care, and other social services. AI coaches, tutors, navigators, caseworkers, and assistants could overcome the very real challenges—like cost, recruitment, training, and retention—that have made access to vital forms of high-quality human support perennially hard to scale.

But scaling bots that simulate human support presents new risks. What happens if, across a wide range of “human” services, we trade human connections for access to more services?

Emerging Models of One-Sided Relationships

AI companions are quickly becoming a new source of parasocial capital—offering on-demand support and resources through one-sided relationships with cheerful, encouraging bots. That’s not bad per se, but it’s a sharp departure from traditional forms of social capital—the resources that flow through and across our human networks. The opportunities and limitations of relying on parasocial capital are not yet well understood.

In a new report, Navigation and Guidance in the Age of AI, my Clayton Christensen Institute colleague Anna Arsenault and I explore this phenomenon in the context of one of US education’s most capacity-constrained services: college and career guidance. On average, there’s one guidance counselor for every 385 high school students in the United States. For every career services employee, there’s an average of 2,263 college students. To manage these punishing ratios, the field is starting to explore how chatbots can step in.

We interviewed leaders and advisors at 30 tech companies and advising organizations spanning the education-to-career continuum. By and large, there was consensus that bots should take on informational and administrative tasks that could free up advisors to spend more time with students. Bots are also being built to nudge students to check in with their advisors more frequently. But in tandem with deploying bots to make social support more efficient, we also heard that bots are being engineered to provide social support directly to students.

Much like the AI companions in the Harvard study, college and career bots are being scripted and trained to present as caring. “The emotional tone for Coco right now is very encouraging. The prompt we gave it was to be like the cool older cousin that supports you in your process,” said Christine Cruzvergara, who heads strategy at Handshake, one of the largest job platforms in the college space.

Imbuing bots with warmth is key to driving engagement. “Part of building the bot is giving the bot a persona and building that relationship between the bot and the students as well,” said Sarah Place, the chief program officer at Bottom Line, a nonprofit that supports students from low-income households to and through college. “Our bot, Blu, is a cheerleader.”

In a number of cases, the human elements of bots went beyond just tone. Despite the common refrain that bots should perform technical tasks so humans can perform “human” ones, over half of the leaders we interviewed said bots should ideally provide esteem, emotional, or motivational support. Some expressed surprise at just how well bots built with Gen AI perform on more human dimensions: “The thing that I’m struck by is the emotional intelligence of these chatbots,” said David Ma, CEO of the nonprofit Hope Street Group, who’s putting his decade-plus of building tech at Facebook toward building Hope, an AI career coach for high schoolers. “You look under the hood and see how the neural network’s mapped, and you see there’s a region for emotional recognition.”

These capabilities have started to push on leaders’ core assumptions. “The bot is already able to help with some of the things that felt like they were going to be inherently human,” said Kanya Balakrishna, cofounder and CEO of Future Coach, a coaching platform that helps young people explore their purpose with both human coaches and a bot.

Whether it’s cheerleading, motivational interviewing, or emotional support, bots trained on these capabilities could be a boon to driving engagement and impact at scale. But they raise bigger questions about the degree to which bots will be built to foster greater human interaction or replace it outright.

The Costs of Cutting Relationships Out of Interventions

Some leaders expressed strong opinions about those tradeoffs. “AI should drive me to a human, not be the human,” said Tiffany Green, founder and CEO of Uprooted Academy, a hybrid advising program for first-generation college students.

Green’s distinction is critical. The more bots are built to “be” the human, the more they will be poised to take on interventions historically based on and in relationships.

Making interventions less relationship-intensive could produce short-term gains and significant efficiencies in delivering those gains. But it could have hidden costs. In our society, “who you know” matters immensely. Research shows that relationships shape a wide array of individuals’ life outcomes across learning, development, economic mobility, well-being, resilience in the face of disasters, and longevity. In short, social capital is a core variable in the opportunity equation. The fewer relationships gained in the course of human services, the less people will be able to bank on others to get by and get ahead.

Leaders in the navigation and guidance space are wrestling with those tradeoffs. One commonly cited risk is the atrophy of the skills needed to interact with real humans, whose responses are not programmed by an algorithm. These social skills are immensely valuable, not just in the job market, but as a foundation of human development.

Jean Rhodes, a leading mentoring researcher, worries that bots could distort the very foundation of our ability to connect with one another. “In order to be in relationship with somebody, you need an ontological framing—you need to understand their context, their history, their flaws, what makes them laugh. All of those things require what we are evolutionarily designed to do: to connect,” she said. “Strip away all that context, all that need for empathy and attunement, and the bot’s just doing the work for you.”

Alex Bernadotte, founder and CEO of the nonprofit Beyond 12, which takes a “high-tech, high-touch” approach to supporting first-generation college students, shares some of those concerns. Although the organization is currently building Gen AI tools into its tech stack, Bernadotte is holding tight to her commitment to human coaching. She described what gets lost if human coaches are left out. “It’s not just about the methodology of coaching,” she said. “It’s the implementation of the methodology through the practice with a human who is also dancing in the moment.”

Bernadotte spoke to a longer-term upside as well. She recounted the story of a former coach on her staff, Calvin, who, long after his time with Beyond 12, helped his former student, Kiara, to secure a job at Morgan Stanley where he now worked. “AI cannot recall when it stood in the shoes of a young adult with impostor syndrome on the doorstep of a big and possibly scary future. It cannot personally introduce students to a colleague or employer, or extend its social capital like Calvin did for Kiara,” Bernadotte wrote.

That’s more than just anecdotal in a labor market where an estimated half of internships and jobs come through personal connections. In fact, as many leaders we interviewed pointed out, one of the human things bots still can’t do is broker warm introductions that unlock jobs. “The copilot will prompt students to connect with people and resources in their network. It stops short of a nice warm hand-off,” said Patrick O’Donnell, CEO of the Making Waves Education Foundation, which launched an AI advising tool last year.

Bots also stop short of providing the opportunities that can open up when relationships outlast interventions, like Calvin and Kiara’s. “I think about my students that I worked with 20 years ago,” said Tobi Kinsell, the chief impact officer of College Advising Corps. “They still contact me like ‘hey, do you know someone in such and such field?’…And I don’t know that the chatbot can do those kinds of things.”

In other words, users relying on bots could have access to more perfect information and more plentiful advice but still end up lacking the networks they need to get the jobs they want.

Markets and Metrics Will Shape the Future of Connection

The dynamics in the navigation and guidance market reflect a larger and growing tension for all sorts of institutional and nonprofit leaders in the age of AI.

The question is not whether bots should be enlisted to scale effective interventions, but whether relationships—and their lasting power—figure clearly in how those interventions and their intended impact get framed, measured, and funded.

In the case of our research, we found that much of the college and career guidance market doesn’t discriminate between humans and bots. That’s not because anyone is overtly anti-human, but because success metrics are about making progress, not connecting with people. Most logic models and funding streams in the space treat relationships as inputs to that progress rather than outcomes in their own right.

In the coming years, the promise of conversational, seemingly empathetic bots will collide head-on with that calculus. If we’re not careful, we could architect a whole system of human services where humans are increasingly isolated; a system where people make gains in their education, health, or professional lives, but then have fewer people to help them maintain and build on those gains in the long run. We could see massive investment in AI solutions alongside underinvestment in social capital.

That doesn’t have to be the case. In our research, we surfaced a range of proactive steps that tech entrepreneurs were taking to build pro-social tech. Technology can serve as a conduit to connection: easing students’ outreach to their support networks, nudging them to seek help, and providing space to build confidence in their networking skills. For example, Uprooted Academy, a virtual, AI-enabled community center that supports under-resourced students applying to college, asks students to identify up to five supportive individuals in their lives and automatically texts those individuals updates on students’ progress and needs every two weeks. Another nonprofit, CareerVillage, operates a chatbot called Coach, which guides students through practicing networking and conducting informational interviews with working professionals. It’s noteworthy, however, that many of these social features were born not of demand-side pushes but of supply-side ingenuity. If we care about human connection, the market—as much as the technology itself—needs to course correct.

To Bot or Not to Bot?

The immense advances that AI has made toward emulating—and even exceeding—human capabilities are breathtaking. In this new world, insisting that human connection should be central to a given intervention can sound sentimental, not to mention expensive. If a bot can help someone get what they need at the press of a button, it would be draconian to limit its impact.

But to curb the risks of minimizing human support, we need metrics that gauge and safeguard human connection alongside other outcomes. Those metrics can help us to know, at more regular intervals, whether people’s networks are growing or contracting; whether the quality of their human connections is deepening or deteriorating; and whether their muscle to interact with others is strengthening or atrophying.

With those guardrails in place, the social and public sectors can unleash the immense potential of AI without sacrificing access to the social capital vital to an equitable and thriving society—and one committed to remaining human as bots push the boundaries of what makes us so.
