What we talk about when we talk about evidence


By Jean Rhodes

“Don’t accept your dog’s admiration as conclusive evidence that you are wonderful.” - Ann Landers

The summit is upon us, and its optimistic tagline is “Mentoring Works.” Despite this assertion, we researchers in attendance will, no doubt, be wringing our collective hands about the relatively small effect sizes that have emerged in recent meta-analyses, the lack of clear evidence for specific mentoring approaches (group, e-mentoring, etc.), and the need for additional studies in the field. Indeed, although there have been a few large-scale evaluations of mentoring in recent years, the overall base of evaluation findings on which policy and practice decisions rest remains thin. Since the vast majority of programs are implemented through small community-based organizations, rigorous evaluations are often impractical. And when large-scale experimental evaluations have been undertaken, the results have ranged from confusing to disappointing. Findings rarely provide the clear and simple answers that practitioners are looking to share with their funders, and when one set of findings contradicts an earlier conclusion, the field’s “best practices” in mentoring can seem suddenly up for grabs. In fact, given our often inscrutable presentations, and how far “off message” we sometimes stray, it’s little wonder that researchers are often consigned to the back recesses of the ballroom at mentoring conferences, where we can absorb each other’s complicated messages, unwelcome caveats, and frustrating ambiguity.

Yet clear-eyed, nuanced calculations of what it takes to deliver high-quality, effective youth mentoring are essential to improving the field. They could lead to allocations for program enrichments that would yield a higher return on investment. Effective (and cost-effective) solutions are in everyone’s best interest, and wishful thinking that all mentoring works may foster complacency and, ultimately, less effective interventions. Since mentoring fits within the broader field of prevention science, it should strive to align itself more closely with that field’s standards of evidence. This means that evaluations employing sound measures and rigorous methods will always be needed to determine the efficacy of the many new approaches to mentoring. Several high-quality random-assignment evaluations of community- and school-based programs have been completed or are currently underway. Their findings have fallen on fertile soil, providing grist for subsequent meta- and secondary analyses. As the most efficacious approaches are identified (and they are being identified), they should be carefully disseminated and supported through ample, ongoing supervision (Flay et al., 2005).

Much remains to be done to understand the complexities of mentoring relationships and to determine the circumstances under which mentoring programs make a difference in the lives of youth. At this stage, we can safely say that, yes, mentoring works. It is, by and large, a modestly effective intervention for youth who are already coping relatively well under somewhat difficult circumstances. In some cases it can do more harm than good; in others it can have extraordinarily positive effects. The balance can, and should, be tipped toward the latter. A deeper understanding of mentoring relationships, combined with high-quality programs, enriched settings, and a better alignment of research, evaluation, and practice, will position us to harness the full potential of youth mentoring.