McQuillin, S. D., Lyons, M. D., Clayton, R. J., & Anderson, J. R. (2020). Assessing the impact of school-based mentoring: Common problems and solutions associated with evaluating nonprescriptive youth development programs. Applied Developmental Science, 24(3), 215–229. https://doi.org/10.1080/10888691.2018.1454837
Summarized by Ariel Ervin
Notes of Interest:
- Although youth development and mentoring programs support youth during the transition from adolescence to adulthood, most do not prescribe practices that target specific outcomes.
- This brief review examines how researchers have described treatments in published school-based mentoring (SBM) evaluations.
- More specifically, it assesses the following: the rationale for being more specific about treatment implementations; the specification of treatments in published studies; potential solutions to this issue; the complications of testing treatment constructs, like school-based mentoring; and how the field can advance in the future.
- Many of the studies examined here did not describe various aspects of treatment constructs in-depth.
- For instance, while academic mentoring activities were the most common type of activities in SBM programs, there aren’t many details about what the mentors specifically did.
- Although some programs described what activities mentoring dyads participated in, a majority of the examined studies did not provide measurements of those activities.
- Researchers need to:
- create and utilize new measures that assess which activities mentoring dyads participate in; and
- design studies that explore how varied mentoring interactions affect youth outcomes.
Introduction (Reprinted from the Abstract)
Like many youth development programs, most youth mentoring programs do not have prescribed practices that target specific outcomes. Because the construct of mentoring represents a broad range of potential activities, researchers face a conundrum when making generalizable causal inferences about the effects of this and similar services. On the one hand, researchers cannot make valid experimental inferences if they do not describe what they manipulate. On the other hand, experiments that include prescribed protocols do not generalize to most mentoring programs. In most cases, researchers conducting school-based mentoring program evaluations err on the side of not sufficiently specifying treatment constructs, which limits the field’s ability to make practically or theoretically useful inferences about this service. We discuss this reality in light of the fundamental logic of the experimental design and suggest several possible solutions to this conundrum. Our goal is to empower researchers to adequately specify treatments while still preserving the treatment construct validity of this and similar interventions.
Implications (Reprinted from the Discussion)
Over a decade ago, DuBois, Doolittle, Yates, Silverthorn, and Tebes (2006) wrote:
For the most part, [specification of treatments] have been centered on program criteria for minimally acceptable levels of the frequency of mentor–youth contact and the duration of relationships … Much less attention has been given to specification of [treatment constructs] that pertain to other potentially important dimensions of mentoring relationships… (p. 669).
Based on our review of the SBM literature, we draw a similar conclusion. The results from our evaluation indicate that researchers, on average, are not specifying aspects of treatment constructs that are presumed to be the essential elements of mentoring relationships. We cannot, from this review, state that the Elements were absent from the treatments used in the studies; rather, that they were not reported. This is an important distinction. For example, it is reasonable to assume that in every program mentors did something with mentees, or that programs matched mentors and mentees using some criteria, though the absence of reporting and measurement of these events precludes us from understanding the origin of, or the lack of, any observed causal inferences; that is, the treatment constructs were not well specified.
We were particularly interested in how mentor–mentee activities were discussed in the studies. We were not surprised to see that academic-focused activities were the most commonly reported activity in SBM programs, although we were surprised to see how little detail was provided on what mentors did in terms of academic mentoring activities (e.g., “helping mentees with academic work”; Mboka, 2012). Though some programs discussed activities that occurred between mentors and mentees, measurements of these activities or specific descriptions of what occurred were notably absent from most studies. Thus, even if the contact events were discussed at a general level, the level of detail in most programs precluded any legitimate understanding of what occurred between mentors and mentees. In contrast, evaluations of other volunteer-based educational interventions extensively document and measure the treatment construct. For example, in describing one aspect of a tutoring intervention, Fuchs et al. (2008) stated:
In sessions 1-4, problem-solution instruction was delivered, with problems that varied only in their cover stories. A poster listing the steps of the solution method was displayed in the classroom. In session 1, RA teachers addressed the underlying concepts and structural features for the problem type, presented a worked example, and as they referred to the poster, explained how each step of the solution method had been applied in the example… (p. 498).