Can AI Feedback Support More Responsive Teaching?
Demszky, D., Liu, J., Hill, H. C., Jurafsky, D., & Piech, C. (2023). Can Automated Feedback Improve Teachers’ Uptake of Student Ideas? Evidence From a Randomized Controlled Trial in a Large-Scale Online Course. Educational Evaluation and Policy Analysis, 46(3), 483–505. https://doi.org/10.3102/01623737231169270
Introduction
High-quality teaching depends not only on what teachers say but also on how they respond to students’ ideas. One especially powerful practice is uptake: acknowledging and building on student contributions. Uptake supports engagement and learning, yet it has proven difficult to improve through traditional coaching. Demszky and colleagues (2023) asked whether automated, artificial intelligence (AI)–based feedback could help teachers strengthen this complex practice at scale.
Methods
The authors developed M-Powering Teachers, a tool that uses natural language processing to analyze classroom transcripts and provide formative, non-evaluative feedback on teachers’ uptake. The tool was tested in a randomized controlled trial (RCT) embedded in Code in Place, a large online introductory computer science course. A total of 1,136 instructors were randomly assigned either to receive weekly email prompts encouraging them to review personalized AI feedback or to a control condition. Transcripts from recorded sessions were analyzed to measure uptake, questioning, repetition, and talk time, along with student assignment completion and course satisfaction.
Results
Instructors who engaged with the automated feedback demonstrated a roughly 10% increase in uptake of student ideas, driven primarily by more frequent and higher-quality follow-up questions. There were no meaningful changes in simple repetition or overall talk time. Student outcomes showed suggestive benefits, including higher completion of one assignment and greater course satisfaction.
Discussion
The study provides rare causal evidence that AI-generated feedback can change a high-leverage teaching practice that has historically been resistant to improvement. Effects were strongest among instructors who reviewed the feedback and among certain subgroups, including returning instructors and those teaching outside the United States. While conducted in an online setting, the findings highlight AI’s promise as a low-cost complement—not a replacement—for human coaching, alongside important cautions about equity, privacy, and contextual fit.
Implications for mentoring programs
Mentoring programs can draw on this work by using AI-supported reflection tools to help mentors strengthen responsive listening, follow-up questioning, and affirmation of mentee voice. When paired with human supervision and ethical safeguards, automated feedback may accelerate mentor skill development and improve relationship quality at scale.
The full paper is available via the DOI in the citation above.


