Predicting the future of mentoring programs

by Jean Rhodes

I predict that formal mentoring programs will become increasingly specialized, professionalized, and evidence-based in the years ahead. This is a positive development, particularly given that our field’s two most important barometers of success—the number of adults willing to serve as volunteer mentors and the effectiveness of these efforts—have not changed in the past decade. Indeed, despite the appeals of organizations, funding agencies, celebrities, researchers, and others, there appears to be a hard upper limit to the number of adults who are willing to give up their time in the service of a stranger: around 2.5 million, or roughly 1% of American adults (Raposa et al., in press). If we assume that some of these adults engage in group mentoring, we can roughly estimate that about 3.5 million, or close to 8%, of the 45.7 million American youth between the ages of 6 and 17 receive volunteer mentoring each year. Even if the share of adults who volunteer somehow doubled, it would still be only around 2%. In other words, we will never close the elusive mentoring gap through recruitment efforts alone. So, in addition to continuing to encourage volunteerism and expanding approaches to natural mentoring, we need to double down on our efforts to get the most possible good out of those generous adults who are willing to step forward.
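For readers who like to check the arithmetic, here is a quick back-of-the-envelope sketch in Python. The inputs are simply the figures cited above; the adult population is the approximate value implied by the 1% figure, and the youth-per-mentor ratio is derived from the other numbers rather than separately reported.

```python
# Back-of-the-envelope check of the mentoring coverage estimates above.
# All inputs are figures cited in the text; nothing here is new data.

volunteer_mentors = 2.5e6   # adults serving as volunteer mentors (Raposa et al., in press)
us_adults = 250e6           # approximate U.S. adult population implied by the "1%" figure
mentored_youth = 3.5e6      # estimated youth receiving volunteer mentoring each year
us_youth_6_to_17 = 45.7e6   # American youth between the ages of 6 and 17

print(f"Mentors as a share of adults: {volunteer_mentors / us_adults:.1%}")      # ~1.0%
print(f"Youth receiving mentoring:    {mentored_youth / us_youth_6_to_17:.1%}")  # ~7.7%
print(f"Implied youth per mentor:     {mentored_youth / volunteer_mentors:.1f}") # ~1.4
```

The implied ratio of about 1.4 youth per volunteer is what makes the 3.5 million estimate plausible once some group mentoring is assumed.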

And here’s where we need to take another look at the data. Our recent, comprehensive meta-analysis of youth mentoring programs (Raposa et al., in preparation), like the comprehensive meta-analyses that preceded it (DuBois et al., 2011; DuBois et al., 2002), suggests that the effects of mentoring have hit an upper limit as well. In particular, the overall effect size of mentoring remains at .21, which is indicative of a “small” effect according to common conventions (Cohen, 1988). Put differently, the average youth in a mentoring program scores about nine percentile points higher on indices of improvement than the average youth in the non-mentored control or comparison group (Cooper, 2010; DuBois et al., 2011). That’s not bad and, of course, there remains considerable variation across types of youth, mentors, and programs, which helps us target and improve our efforts. But before we go too far into those weeds, we need to internalize the simple fact that the overall effectiveness of youth mentoring has remained stable despite the rapidly growing base of research and evaluation findings in the field (Blakeslee & Keller, 2012). And, when we compare mentoring to other youth interventions, some of which require a lighter investment in human and capital resources, we see that mentoring falls in the low to middle range of effectiveness (Gutman & Schoon, 2015).
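A quick note on where that percentile translation comes from: if outcomes are roughly normally distributed, an effect size d places the average program youth at the Φ(d) quantile of the comparison-group distribution (Cohen’s U3), where Φ is the standard normal cumulative distribution function. Here is a minimal sketch in Python; the 0.30 in the second example is not a figure reported above, but the effect size implied by the 62% statistic in the Weisz comparison discussed below.

```python
from statistics import NormalDist  # standard library, Python 3.8+

def percentile_standing(d: float) -> float:
    """Percentile of the comparison-group distribution at which the average
    treated youth lands, assuming normal outcomes (Cohen's U3)."""
    return NormalDist().cdf(d) * 100

print(f"{percentile_standing(0.21):.1f}")  # 58.3 -> roughly 8-9 points above the 50th percentile
print(f"{percentile_standing(0.30):.1f}")  # 61.8 -> "better off than ~62%" of the comparison group
```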

This is not to say we should give up on volunteer mentoring programs. Far from it. But, if we’re going to improve that metric (and we most certainly can), we will have to become more disciplined in our approach and make better use of the opportunities that the 2.5 million volunteers present to us each year. And, since volunteer training is the most direct and efficient medium through which mentoring science is transferred into practice, evidence-based training is our best hope. Programs may pull from time-tested ideas, but few trainings are evaluated against counterfactuals before they are disseminated. Such homegrown trainings may be more economical than expert trainings, but when the opportunity costs of under-realized volunteer and youth potential are factored in, the calculus of such arguments breaks down. Psychologist John Weisz and his colleagues made this point in their analysis of the effects of child and adolescent therapeutic approaches that were grounded in evidence (EB) versus therapist intuition (or what he called “usual care,” UC). The former had robust effects while the latter hovered around zero. And when he and his colleagues compared EB to UC head to head across 32 studies, they found that EB outperformed usual care: the average youth treated with an EB approach was better off after treatment than 62% of youths who received UC. Evidence-trained therapists were more likely to use treatment manuals and to apply techniques that had shown efficacy. Mentoring researchers recently uncovered similar results in a study of mentors who were provided with expert versus usual-care pre-match training; the former showed superior performance on a number of indicators (Kupersmidt et al., in press). And why shouldn’t this be the case? We know that mental health professionals, teachers, home-visiting nurses, Social Emotional Learning (SEL) practitioners, and others are all more effective when they follow carefully stipulated evidence-based practice. Why should mentoring be any exception?

EB trainings can consistently impart the core competencies and raise the bar. And, as we move forward, there is also a need for EB trainings that are specialized to target populations, e.g., the Fostering Healthy Futures program (Taussig, 2015), among others. To accomplish this, mentoring experts need to move from a “what works” mentality to one of “what works, for whom, and under what circumstances?” The National Mentoring Resource Center provides excellent direction in that regard. And all training, no matter how specific, should be built carefully on theory and evidence, rigorously evaluated, thoroughly reviewed, and then, and only then, widely disseminated. That’s how evidence makes its way into mentoring practice and that’s how program effectiveness improves.