It’s time to shed light on the “black box” of mentoring programs

By Jean Rhodes

For the most part, the field of mentoring has not yet specified the precise conditions under which different approaches to mentoring “work.” Psychologist Patrick Tolan and his colleagues have argued that the mentoring field’s resistance to identifying, implementing, and adhering to standards, including specifying how program inputs relate to outcomes, stems from its firmly held belief that the benefits of mentoring interventions flow mainly from close, enduring relational bonds. Tolan et al. (2016) note that, for youth mentoring, “the body of research is remarkable in the limited emphasis on systematic description of intervention content, description of intended processes through which effects are expected, and in important features of implementation and providers. There seems to remain limited commitment and perhaps even some reluctance to aim for continuity across the field or specificity in applying and describing mentoring efforts that might facilitate scientific understanding of effects.” Yet it is only through this sort of conceptual rigor that the field will reach its true potential.

Psychologists Sam McQuillin and Michael Lyons have also bemoaned the lack of both theories and studies connecting mentoring activities and discussions with the outcomes typically used in mentoring program evaluations. They recently identified “a large discrepancy in how [mentoring] treatments are specified compared to other volunteer interventions.” In their review of fifty-seven published school-based mentoring program evaluations, the researchers found that fewer than half of the studies even discussed the activities that occurred between mentors and mentees, and fewer than a quarter reported either prescribed practices or guidelines for the meetings. Additionally, only 7 percent of the evaluations actually measured and reported the specific activities that mentors were expected to do with mentees. As they note, this absence of tracking “preclud[es] any legitimate understanding of what occurred between mentors and mentees” and contrasts with the extensive documentation of program content common to evaluations of other volunteer-based educational interventions.

Without information on what mentors and mentees actually do together, it is impossible to determine the extent to which programs are following recommended practices. Nor is it possible to know what levers to pull to improve disappointing outcomes. One study found that mentoring programs that monitor how they are implemented obtain effect sizes three times larger than programs that report no such monitoring (DuBois et al., 2002). Similarly, when researchers examined data from nearly five hundred studies across five meta-analyses of youth prevention programs, they found that effect sizes were double and sometimes triple the average when programs were carefully implemented (Durlak & DuPre, 2008).

Mentoring programs do not necessarily need to follow guidelines to the letter; research on social programs suggests that implementation can diverge from overarching guidelines by as much as 40 percent and still achieve intended outcomes. However, making such determinations requires that programs actually specify and measure what they are doing in the first place.

In a new study, highlighted in this issue, researcher Michael Lyons and his colleagues specified how conversation topics between mentors and mentees led to different outcomes. Their findings are interesting and draw attention to the need to continue studying the particulars of mentoring interventions in greater depth.