“The truth will set you free, but first it will piss you off.”
– Gloria Steinem
by Jean Rhodes
Over the course of my career, I have met many practitioners who have launched what sound like amazing mentoring programs. The practitioners are often new to the field—passionate about their mission and eager to join the broader mentoring movement. As much as I find their commitment inspiring, I also confess to occasional skepticism about the return on their investment of time and resources. My doubts are fueled by somewhat sobering data that, when plotted over the course of a long career, emerge as an irrefutable trend line.
More than 15 years ago, DuBois et al. (2002) conducted a comprehensive meta-analysis of youth mentoring programs that showed disappointingly small effects. Since then, at least five subsequent meta-analyses of youth mentoring have yielded strikingly similar results. Although there remains considerable variation across studies, the average effect size remains small by conventional benchmarks (Cohen, 1988). This is despite a rich and growing body of research and evaluation studies, a strong commitment on the part of MENTOR and other organizations to evidence, and advances in training and classification through the National Mentoring Resource Center. The resistance to improvement is particularly humbling when we compare mentoring program effect sizes with those found in meta-analyses of other prevention programs for children and adolescents. For example, a meta-analysis of 177 prevention studies found effects ranging from moderate to large, depending on program type and target population (Durlak & Wells, 1997). Likewise, meta-analyses of youth psychotherapy, encompassing hundreds of studies, have reported strong mean effects (Weisz, Sandler, Durlak, & Anton, 2005). Interestingly, however, Weisz et al. (2005) note that in settings in which therapists were able to use their clinical judgment to deliver treatment as they saw fit, not constrained by evidence-based interventions or manuals, and in which their treatment was compared to a control condition, the results were far less encouraging: “Meta-analyses of these studies of usual clinical care have found effect sizes averaging about zero (see, e.g., Weisz, 2004; Weisz, Donenberg, Han, & Weiss, 1995), indicating no treatment benefit.” I’ll spare you more data, as your confirmation biases may have already kicked in. You may already be thinking, “Mentoring does work; the data must be wrong!” Such biases are understandable, as they are grounded in your lived experiences with caring adults.
But there is often a gulf between the influence of a natural mentor and that of a typical volunteer in a typical program.
There are many factors that have contributed to this state of affairs. One is the field’s uneven adherence to evidence-based practice. A recent study by Kupersmidt et al. (2016) revealed that only around 40% of programs consistently follow the Elements of Effective Practice (EEP), despite research showing fewer unexpected match closures in programs that adhere to them. And it’s not just the EEP. There are promising strategies that, when carefully exported from other fields, can raise the bar. McQuillan et al. (2016), for example, have demonstrated quite impressive effect sizes for their mentoring approach, as have others. Despite their availability, many programs continue to rely on home-grown approaches and curricula.
There is also a lack of specificity. Many programs endorse the general goals of promoting healthy development, reducing risk, and ameliorating youth dysfunction and disorder. But those are tall and rather unspecific orders. Depending on their circumstances, youth need dramatically different things from their mentoring experiences. Nonetheless, many programs continue to embrace a one-size-fits-all friendship approach. In doing so, they are akin to a pediatrician who, irrespective of presenting problem, prescribes two aspirin to every child who visits. Children with low-grade fevers and muscle aches are likely to benefit from this generalist approach, and they may even be held up as anecdotal evidence that the treatment works. But, in the long run, this cherry-picking of positive effects ignores the many for whom aspirin had little or no lasting effect.
A more effective approach would be to first understand the needs of the youth and then provide targeted, intensive training to the volunteer or paraprofessional. This targeted, evidence-based approach will require precision, fidelity, and, perhaps most importantly, humility. Precision in identifying the needs or goals of the target population and the best strategies for addressing them; fidelity in implementing strategies that adhere as closely as possible to the program and conditions in which they were designed and evaluated; and the humility to refrain from reinventing the wheel when there are already rigorously developed and validated strategies.