Evidence Corner: Is Mentoring Worth the Investment? The Jury is Out
By David DuBois
It is intuitive to many that mentoring is a worthwhile investment. Indeed, we do have robust evidence that youth can and do benefit in important ways from mentoring, such as improved behavior, social relationships, and school performance. Why, then, would I argue that the “jury is out,” which is to say that the question of whether mentoring is a good investment remains an open one? Let’s begin with the conclusion of Michael Foster, a noted expert in the economic evaluation of prevention programs, in a commentary that he wrote to accompany a report that Marc Wheeler, Tom Keller, and I authored a couple of years ago on the effectiveness of school-based mentoring:
“In sum, existing research demonstrates that the costs of mentoring programs are variable and rather modest, at least in comparison to the high costs of delinquency, school dropout and substance abuse (Cohen, 1998). Simple comparisons reveal that the latter dwarf the former. Nonetheless, at this point, the literature does not offer the sorts of comprehensive economic analysis that are required to judge mentoring cost-effective and to suggest that public funds should be expended on such programs.” (Foster, 2010, p. 24)
What would a comprehensive economic analysis entail? Well, for starters, any such analysis needs to rest on a rigorous assessment of the benefits of mentoring, of the kind that would normally be obtained from a large-scale random assignment evaluation. The landmark Public/Private Ventures (P/PV) randomized controlled evaluations of the Big Brothers Big Sisters (BBBS) community- and school-based mentoring programs are good exemplars here. So if a mentoring organization or program makes strong claims about how much money it is saving the community (as I noticed one doing just this past week) but lacks this type of evidence base, it is on shaky ground at best.
Let’s assume that such evidence exists and that it includes some favorable outcomes, as is the case with both of the P/PV evaluations of BBBS programs noted above. There would still be many hurdles to overcome before one could marshal the type of strong, evidence-based argument that experts in the science (and art!) of economic evaluation would find compelling. First, it would be necessary to convert outcomes to their economic (“dollar”) value. This is no easy undertaking under any circumstances. It becomes inordinately difficult, however, if the outcomes with the most established direct economic value are not even measured in evaluations of the program. Such outcomes include high school graduation, employment, incarceration, use of health care and social services, and so on. No such outcomes were assessed in the P/PV evaluations, for example, nor have they been examined systematically in any rigorous evaluation of a traditional, “stand-alone” mentoring program. Lacking such data, those attempting cost-benefit analyses of mentoring programs have been forced to extrapolate or project what they might be – for example, projecting the likelihood of high school graduation from youth reports of improved grades in the P/PV evaluation of the BBBS community-based program. To call such projections “estimates” may be too generous; more appropriately, we might refer to them as “guesstimates.” Consider, for example, the P/PV finding of improved teacher reports of academic performance for youth participating in the BBBS school-based mentoring program. It would be tempting to extrapolate this difference to an expected difference in the likelihood of graduating from high school. Yet to do so would overlook the finding, from the same study, that by the very next school year there was no evidence of better school performance for mentored youth relative to those in the control group – that is, the program effect had disappeared. As this illustrates, unless one has relatively direct measurement of the outcomes on which one is basing conclusions about economic benefits, there is good reason to be concerned about any estimates that are proffered.
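To make concrete just how many assumptions such a projection stacks on top of one another, here is a minimal sketch. Every input value below is a hypothetical placeholder of my own, not a figure from the P/PV evaluations or from any published cost-benefit analysis; the point is simply that a small change to any one assumption (especially how long the effect persists) swings the final dollar figure dramatically.

```python
# A minimal sketch of the kind of back-of-envelope projection described above.
# Every number here is a hypothetical placeholder, NOT a figure from the P/PV
# evaluations or any published cost-benefit analysis.

def projected_benefit_per_youth(
    effect_on_grades,            # assumed effect of mentoring on grades (standardized units)
    grades_to_graduation_slope,  # assumed change in graduation probability per unit of grades
    value_of_graduation,         # assumed lifetime economic value of a diploma (dollars)
    persistence,                 # assumed share of the effect that lasts until graduation age
):
    """Chain the assumptions into a single dollar figure per youth."""
    change_in_grad_probability = effect_on_grades * grades_to_graduation_slope * persistence
    return change_in_grad_probability * value_of_graduation

# Illustrative placeholders only:
estimate = projected_benefit_per_youth(
    effect_on_grades=0.10,
    grades_to_graduation_slope=0.15,
    value_of_graduation=200_000,
    persistence=0.5,  # the school-based finding above suggests this could even be near zero
)
print(f"Projected benefit per youth: ${estimate:,.0f}")
```

If the effect on grades fades entirely by the next school year (persistence of zero), the projected benefit collapses to nothing – which is precisely why I hesitate to call such figures anything more than guesstimates.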
Further hurdles await those who do have reliable estimates of economic benefits. One of the most fundamental of these is that even a favorable benefit:cost ratio (i.e., benefits exceed costs) does not translate into an indisputable case for investment in a mentoring program. The reason is that there are always many alternative uses for public dollars and, in the case of mentors, for their personal time. Some of these alternatives – whether programs for youth or other potentially laudable causes – may offer a much better return on investment (“ROI”) than the mentoring program. To illustrate, in a comprehensive analysis of the costs and benefits of prevention and early intervention programs, Aos and colleagues (2004) estimated – using findings from the P/PV evaluation of the BBBS community-based program – that for each youth served there was a modest net benefit of $48 or, framed differently, a return of $1.01 for each dollar invested. Eighteen other youth development or substance abuse prevention programs were also examined in the report. All but three of these programs were estimated to provide a better ROI than the BBBS program, in some cases by a large margin – for example, five of the six youth development programs offered estimated benefits of between $3 and $28 per dollar of cost. The BBBS program fared better when the value of volunteer time was excluded from costs (I will set aside for the present discussion the debatable issue of whether such an exclusion is appropriate, except to say that my general understanding is that most economists would not consider it to be, since it essentially assumes that, had volunteers not served as mentors, there was no substitute activity of benefit to society or youth that they would have pursued with their available time). Even in this comparison, though, three youth development programs were estimated to offer more than twice the net benefit (i.e., benefits minus costs per youth served) of the BBBS program.
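For readers less familiar with these terms, the arithmetic behind a net benefit and a benefit:cost ratio is straightforward. The sketch below uses illustrative round numbers of my own choosing – not the actual estimates reported by Aos and colleagues (2004) – to show how a program can clear the one-dollar-per-dollar bar and still trail an alternative by a wide margin.

```python
# Simple arithmetic behind the two summary figures used above. The dollar
# amounts are illustrative round numbers, not the actual estimates reported
# by Aos and colleagues (2004).

def summarize(benefits_per_youth, costs_per_youth):
    net_benefit = benefits_per_youth - costs_per_youth  # benefits minus costs, per youth served
    bc_ratio = benefits_per_youth / costs_per_youth     # return per dollar invested
    return net_benefit, bc_ratio

# A hypothetical program that barely breaks even...
print(summarize(benefits_per_youth=4_850, costs_per_youth=4_800))   # -> (50, ~1.01)

# ...versus a hypothetical alternative with a much higher return on investment.
print(summarize(benefits_per_youth=15_000, costs_per_youth=3_000))  # -> (12000, 5.0)
```

Both hypothetical programs “pay for themselves,” but a funder weighing one against the other has an obvious reason to prefer the second.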
My point in highlighting these data is not to argue that mentoring programs like BBBS are in fact less worthwhile investments than other programs. Just as with the BBBS program, the calculations of the monetized benefits of the other programs undoubtedly suffer from limitations that make them prone to error and uncertainty. Rather, I simply wish to illustrate how challenging it can be to construct a compelling argument for investment in mentoring programs amid an array of competing options for the same resources. This reality was illustrated just this week, when President Obama, in a speech here in Chicago, called for universal preschool education as a marquee feature of his proposed agenda to curb violence. In making this recommendation, he cited the relatively robust evidence that high-quality preschool can yield up to $7 in cost savings per dollar invested as recipients enter adulthood. He did so despite youth themselves repeatedly stating, in a variety of venues, that they see mentoring as a top need for ensuring their safety and their ability to have positive futures (one example can be found in the conversations with teens that took place – again, here in Chicago – a few years ago in the wake of the much-publicized tragic death of Derrion Albert).
Where does this leave us? To be clear, I personally believe that mentoring programs for youth are a highly worthwhile investment. That said, if I were a member of a duly sworn “evidence jury,” I could not yet vote that, beyond a “reasonable doubt,” mentoring programs merit societal investment. To continue the analogy, as a field we need better investigative work and trial preparation in order to present a strong case to the jury (the members of which may include, for example, government funders, philanthropists, taxpayers, and those deciding whether to volunteer their time to mentoring among other options). One key priority should be to expand evaluations of mentoring programs to include measures of outcomes that capture the monetized benefits of program participation. Doing so will require not only much longer-term follow-up assessments to capture effects on outcomes such as high school graduation, employment, and criminal offending during adulthood, but also expanded measurement of more immediate outcomes that can be monetized, such as reduced involvement in the juvenile justice system, utilization of health care and social services, and need for special education within schools. We also need to push the envelope as to what can be counted as a monetized benefit of mentoring – for example, if youth as a result of mentoring are happier and more content with their lives (“life satisfaction”), shouldn’t there be a way to attach some value to this? And, certainly, we need to do much more to factor in the ancillary benefits that mentoring programs may offer to mentors, to the parents of youth (e.g., through reductions in parental stress), and to communities as a whole (e.g., through increased availability of positive peer role models for other youth), none of which will be reflected in simple summations of benefits to participating youth. I’m hopeful that significant inroads can be made in these areas in the near future, thus giving the jury more than enough information to feel confident in reaching a verdict regarding the merits of investment in mentoring.
References:
Aos, S., Lieb, R., Mayfield, J., Miller, M., & Pennucci, A. (2004). Benefits and costs of prevention and early intervention programs for youth. Olympia, WA: Washington State Institute for Public Policy. Retrieved from http://www.wsipp.wa.gov/rptfiles/04-07-3901.pdf
Foster, E. M. (2010). Commentary: What we know about the cost-effectiveness of mentoring. Social Policy Report, 24(3), 23-24. Retrieved from http://srcd.org/sites/default/files/documents/spr_24-3.pdf