Thorny ethical questions: New study explores AI and mentoring
Köbis, L., & Mehner, C. (2021). Ethical Questions Raised by AI-Supported Mentoring in Higher Education. Frontiers in Artificial Intelligence, 4, 624050. https://doi.org/10.3389/frai.2021.624050
Background

Across universities, AI is being used for everything from grading essays to providing personalized tutoring. One area where AI could have a profound impact is in mentoring students – those highly personal relationships where an experienced mentor guides a student’s academic and career journey.
But as researchers Laura Köbis and Caroline Mehner of Leipzig University argue in a recent paper, the adoption of AI mentoring systems raises thorny ethical questions that have not been adequately addressed. At the heart of the issue is the vast amount of sensitive personal data that would be required to power such AI mentors.
“Mentoring in higher education requires one of the highest degrees of trust, openness and social-emotional support,” the authors write. Students share deeply personal information with their mentors about their goals, struggles, and life circumstances – information that could be used to train AI systems to provide tailored advice and recommendations.
However, the authors point out that collecting and using such data for AI purposes could violate principles of privacy, confidentiality and autonomy that have long been pillars of the mentor-mentee relationship. There are also concerns about bias being baked into the algorithms, and a lack of transparency around how the AI actually arrives at its guidance.
The researchers don’t argue against using AI for mentoring altogether. But they say much more public discourse and ethical scrutiny are needed before universities move ahead with adopting technologies that could profoundly reshape one of the most sensitive relationships in academia.
“How can ethical norms and general guidelines be respected in these complex digital mentoring processes?” they ask. “This article strives to start a discourse on the relevant ethical questions and raise awareness for the ethical development and use of future data-driven, AI-supported mentoring environments.”
The authors provide a checklist of ethical principles – including privacy, fairness, transparency, and respect for human rights – that should be carefully considered as AI mentoring systems are developed. They also outline hypothetical scenarios that illustrate the potential ethical pitfalls, such as a university surreptitiously using student data to train its AI mentor for marketing purposes.
Köbis and Mehner emphasize that interdisciplinary collaboration between ethicists, developers, students and policymakers will be key to finding the right balance. Blindly adopting AI mentors without this oversight could undermine the trust and human connection that are so vital to the mentoring relationship.
“Mentoring can benefit from considering AI ethics principles that raise more awareness of possible sensitivity of data, which is the basis for all mentoring relationships,” they write. “The security aspects of the data that a mentee shares with mentors go beyond privacy and confidentiality.”
As AI capabilities continue to grow, universities must proactively grapple with these ethical concerns around AI mentoring before problems arise. Failing to do so could damage the institution of mentorship and betray the trust placed in human mentors by students at pivotal points in their lives.