Messing with Mr. InBetween: Prevention science and the road to rigor in youth mentoring
By Jean Rhodes
You’ve got to accentuate the positive,
Eliminate the negative,
Latch on to the affirmative,
Don’t mess with Mr. InBetween. (Johnny Mercer)
Who could disagree? Well, for starters, a growing number of researchers and practitioners who are reaping the benefits of a more balanced approach to youth mentoring–and reckoning with the field’s deep alliance with positive youth development. Specifically, as the mentoring movement was expanding in the late 1980s and early 1990s, psychologists pushed for a more empowering, less problem-focused view of youth. Two fields—positive youth development and prevention science—emerged concurrently from the recognition that many problems youth face share both similar risk factors (e.g., stress, trauma) and similar protective factors (e.g., extracurricular activities, caring adults). Although both fields sought to strengthen protective factors, prevention scientists also squarely addressed risk factors, developing interventions that targeted those risks and carefully stipulating guidelines for program implementation and evaluation.
The strengths-based focus of positive youth development grew both from valid arguments that young people should not be defined by their problems and from early research showing that youth are more likely to thrive when their strengths are aligned with the resources and opportunities in their environments. Although after-school, recreational, and other activity-based programs and settings (e.g., 4-H, Boys and Girls Clubs, Scouts, YMCA, and athletic leagues) fit naturally into this positive youth development framework, formal mentoring interventions did not. The latter have far more in common with paraprofessional helping relationships and operate in ways that are somewhat detached from youths’ broader contexts.
I suspect that it was the focus on supportive, intergenerational relationships that led positive youth development programs to find common ground with formal mentoring programs, in ways that helped to rationalize the field’s alignment with positive youth development and the adoption of a more recreational approach. Although bound together by shared terminology, the two types of relationships are actually quite different. Natural mentoring relationships are not developed in response to any particular program or funding prerogatives, logic models, or time frames. The same cannot be said of formal mentoring programs, which share far more in common with the structures and imperatives of professional helping interventions. Relationships in formal mentoring programs are relatively short-term (e.g., only around 5.8 months in school-based mentoring) and, although they occasionally take on the contours of natural mentoring, this is not the norm. Granted, formal mentors sometimes meet with their mentees in after-school and other positive youth development settings, but that does not make mentoring a positive youth development program. After all, meeting at Starbucks doesn’t make the coffee shop a positive youth development program. Likewise, strong connections between youth and staff may arise naturally in after-school and other positive youth development programs, but that does not make those settings formal youth mentoring programs. In fact, youth’s connections with staff tend to be distributed across several adults. In one ethnographic study of natural mentoring relationships in after-school programs, researchers observed that groups of staff members engage in “collective mentoring,” sharing the responsibility to cultivate students’ strengths and talents. Youth similarly tend to distribute their natural mentoring needs across many adults.
For example, one study found that youth can have as many as five natural mentors at a time, each filling different roles and needs.
Whatever the reasons, this placement of mentoring programs under the umbrella of positive youth development has had major implications for the precision and rigor with which programs were developed. Since a major focus of positive youth development was on creating settings that were developmentally aligned with, and responsive to, youths’ diverse strengths and interests, the field developed “without emphasis on specific intervention techniques or prescribed dosage and methods.” Positive youth development researchers instead documented settings’ organizational features, opportunities, relational processes, and alignment with youths’ interests and skills. This worked for after-school and other programs, but was problematic for youth mentoring programs, which benefit from targeted, evidence-based approaches. The relatively light emphasis on risk, and heavier focus on activities and friendship, also meant that mentoring volunteers were not always adequately trained to take on the difficulties that so many youth were facing.
Researchers have argued that positive youth development’s relatively imbalanced focus on promoting positive outcomes rendered programs less effective than those that more fully encompassed risk and followed the norms of prevention science. And, although positive youth development rests on a rich conceptual framework, the appropriation of its terminology by mentoring programs rarely included careful operationalization, implementation, and measurement of its core constructs. Instead, mentoring programs largely drew on positive youth development’s sunny, upbeat slogans, such as finding one’s “spark,” building “developmental relationships,” and cultivating the “Six Cs” (competence, confidence, connection, character, caring, and contribution), in ways that were rarely operationalized and could not fully mitigate the multiple risk factors facing the youth they were serving.
The construal of formal mentoring as a positive youth development program shifted the focus to settings and strengths, and away from specific risks and vulnerabilities. Given this focus, few mentoring programs carefully tracked what their mentors were doing with their mentees, or specified how particular activities would bring about particular outcomes. In fact, in its loose specification of activities, goals, and outcomes, as well as its overall absence of standardized, evidence-based training of volunteers and youth, mentoring is something of an outlier in the broader field of volunteer interventions, largely eschewing that field’s theoretical, evidentiary, and implementation standards.
Attempts have been made to bring greater precision to youth mentoring programs and to draw from intervention science, with some mentoring programs now incorporating manuals, monitoring, and evaluation. Yet the field has largely resisted such efforts. It has also resisted efforts to move from omnibus, difficult-to-falsify conceptual models that explain how “mentoring works” toward a more precise specification of the conditions under which different approaches to mentoring might work for different youth. Psychologist Patrick Tolan and his colleagues have argued that the mentoring field’s resistance to identifying, implementing, and adhering to standards, including specifying how inputs relate to outcomes, stems from its firmly held belief that the benefits of mentoring interventions flow mainly from close, enduring relational bonds. Tolan and his coauthors note that, for youth mentoring, “the body of research is remarkable in the limited emphasis on systematic description of intervention content, description of intended processes through which effects are expected, and in important features of implementation and providers. There seems to remain limited valuing of and perhaps even some reluctance to aim for continuity across the field or specificity in applying and describing mentoring efforts that might facilitate scientific understanding of effects.” Yet it is only through this sort of conceptual rigor that the field will reach its true potential.
Another team of researchers led by psychologist Sam McQuillin has also bemoaned the lack of specificity in how activities relate to outcomes in youth mentoring. They recently identified “a large discrepancy in how [mentoring] treatments are specified compared to other volunteer interventions.” In a review of fifty-seven published school-based mentoring program evaluations, McQuillin and his co-authors found that fewer than half of the studies even discussed the activities that occurred between mentors and mentees, with less than a quarter reporting either prescribed practices or guidelines for the meetings. What’s more, only 7% of the evaluations actually measured and reported the specific activities that mentors were expected to do with mentees. This absence of tracking “preclud[es] any legitimate understanding of what occurred between mentors and mentees” and contrasts with the extensive documentation of program content common to evaluations of other volunteer-based educational interventions. Without information on what mentors and mentees actually do together, it is impossible to determine the extent to which programs are actually following recommended practices. Nor is it possible to know what levers to pull to improve disappointing outcomes. One study found that mentoring programs that monitor their implementation obtain effect sizes three times larger than those of programs that report no such monitoring. Similarly, when researchers examined data from nearly 500 studies across five meta-analyses of youth prevention programs, they found that effect sizes were double and sometimes triple the average when programs were carefully implemented. Mentoring programs do not necessarily have to follow guidelines to the letter; research on social programs suggests that implementation can diverge and be customized as much as 40% from overarching guidelines and still achieve intended outcomes. However, making such determinations requires that programs specify and measure what they are doing in the first place.
Taken together, a collective underestimation of risk and a related overemphasis on strengths have led the field down the path of imprecise models that produce weak effects. Cognitive biases and miscommunication, which I’ve discussed in previous columns, have rendered the field somewhat immune to counter-narratives and to compelling data supporting more targeted approaches.