Addressing the Ethics of AI-Supported Mentoring in Higher Education
Köbis, L., & Mehner, C. (2021). Ethical Questions Raised by AI-Supported Mentoring in Higher Education. Frontiers in Artificial Intelligence, 4, 624050. https://doi.org/10.3389/frai.2021.624050
Introduction
As artificial intelligence (AI) becomes increasingly embedded in higher education, its use in mentoring demands critical ethical scrutiny. Mentoring has long been viewed as a deeply personal, trust-based process that supports students’ academic, career, and personal development. AI tools capable of automating feedback, designing personalized learning paths, and recommending career opportunities, however, introduce complex ethical dilemmas. Köbis and Mehner (2021) call attention to the lack of guidance specific to AI-supported mentoring and urge a closer examination of how traditional mentoring ethics and AI ethics intersect, diverge, and can be harmonized.
Methods
This conceptual analysis synthesizes ethical principles from two distinct domains: mentoring and AI. Drawing on existing guidelines in business ethics, psychology, education, and AI development, the authors compare key ethical tenets from each field, identifying overlaps and gaps. From this juxtaposition they propose an ethical framework tailored to AI-supported mentoring and examine its implications through a hypothetical use case involving an AI career guidance tool.
Results
The analysis reveals substantial overlap between mentoring ethics and AI ethics (e.g., beneficence, fairness, autonomy, transparency), but also highlights concerns distinct to each field. AI ethics emphasizes technical robustness, data agency, and sustainability, areas traditionally overlooked in face-to-face mentoring; mentoring ethics, conversely, foregrounds loyalty, emotional support, and boundaries. The authors also present four applied ethical dilemmas that illustrate how these issues play out in practice, including data privacy, algorithmic bias, transparency of recommendations, and the potential erosion of mentee autonomy.
Discussion
Köbis and Mehner argue that ethical mentoring via AI must be understood as an interdisciplinary challenge. Developers, educators, and policymakers must collaboratively develop frameworks that uphold both technological integrity and the human-centric values foundational to mentoring. The authors caution against assuming AI objectivity, pointing out that data-driven decisions can perpetuate inequalities and biases if not designed and implemented carefully. Importantly, the discussion reaffirms the enduring value of human mentorship, while recognizing that AI could extend mentoring access and democratize support if ethically applied.
Implications for Mentoring Programs
For mentoring programs, this research underscores the necessity of integrating ethical considerations into the design, deployment, and evaluation of AI tools. Programs should engage diverse stakeholders, including students, in conversations about consent, data governance, and fairness. Mentors using AI-supported systems must remain transparent about how decisions are made and ensure mentees retain agency over their personal and professional choices. Additionally, programs must invest in training staff to navigate the ethical tensions between personalization and privacy and between automation and autonomy. Ultimately, ethically aligned AI could enrich mentoring, but only if it is designed to complement, not replace, human connection.