Five Common Methodological Challenges in Mentoring Research (and How to Address Them)

By Matthew Hagler

To increase the effectiveness and availability of mentoring interventions for our youth, researchers and practitioners must form dynamic, bidirectional collaborations. Even beyond such partnerships, it is important that all stakeholders in mentoring programs be able to understand and critique mentoring studies, including their methodological limitations. I’m currently teaching an undergraduate research methods course, which has inspired me to revisit common methodological challenges in mentoring research and how to address them.


  1. Lack of Comparison Groups

Many mentoring programs undergo some form of program evaluation. Often, programs use a pre- and post-test design, in which youth characteristics of interest (e.g., behavioral problems, self-esteem, academic engagement) are assessed before the intervention starts and again at the end of the program. Changes in these variables are then attributed to the intervention. However, this attribution is not necessarily warranted; these variables may change naturally with time and/or age, and observed changes may have little to do with the intervention.

To isolate the impact of the program, we need a comparison group. That is, we need to identify a group of similar youth who do NOT receive the intervention within the study's time frame (often those placed on a waitlist) and compare changes in the variables of interest between youth who participated in the program and those who did not.
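
To make the contrast concrete, here is a minimal sketch in Python, using simulated data (all numbers and variable names are hypothetical), of how an analyst might compare pre-to-post changes between a program group and a waitlist comparison group, rather than looking at pre/post change in the program group alone:

```python
import numpy as np
from scipy import stats

# Simulated pre/post scores (e.g., academic engagement) for a program
# group and a waitlist comparison group. All numbers are made up.
rng = np.random.default_rng(42)
program_pre = rng.normal(50, 10, 80)
program_post = program_pre + rng.normal(5, 8, 80)    # assumed program effect
waitlist_pre = rng.normal(50, 10, 80)
waitlist_post = waitlist_pre + rng.normal(1, 8, 80)  # natural change over time

# The key move: compare CHANGE between groups, not just pre vs. post
# within the program group.
program_change = program_post - program_pre
waitlist_change = waitlist_post - waitlist_pre
t, p = stats.ttest_ind(program_change, waitlist_change)
print(f"Program change: {program_change.mean():+.2f}, "
      f"waitlist change: {waitlist_change.mean():+.2f}, p = {p:.4f}")
```

If the waitlist youth improve nearly as much as the program youth, the pre/post gains that a single-group design would have credited to the program were largely just change over time.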


  2. Selection Bias

Say we have access to students from an entire school, some of whom sought out or were referred to a school-based mentoring program. If we compare youth who participated in the program to those who did not, then we can conclude that any differences are due to program effects, right? Not quite. Youth who chose to participate in the program (or were referred by teachers or parents) may have differed in some way from youth who did not participate. For example, participating youth may have had lower initial academic performance or may have been more likely to come from single-parent households. We call this selection bias.

To combat selection bias, we need to randomly assign participants to conditions in an experimental study. If we start out with a group of potential participants and randomly place them in the mentoring group or the comparison group, then it is unlikely that the two groups will significantly differ on any given variable.
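
In its simplest (unstratified) form, random assignment amounts to shuffling the participant pool and splitting it; the function below is purely illustrative, not drawn from any particular program:

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle a participant pool and split it into two arms."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]  # (mentoring, comparison)

mentoring_group, comparison_group = randomly_assign(range(100), seed=2024)
```

In practice, trials often use stratified or blocked randomization instead, to guarantee balanced group sizes across sites or key baseline characteristics.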

In some cases, random assignment is not ethical or possible. For example, if we are interested in natural mentoring relationships, assigning youth to mentors would violate the very definition of a natural mentor. Even with formal mentoring, programs may not want to temporarily withhold services from a large group of young people in order to form a waitlist control group. In these cases, a statistical technique called propensity score matching allows us to find youth who are highly similar on a range of variables EXCEPT whether they had a mentor. By comparing matched pairs, we are better able to isolate the effects of mentoring and reduce selection bias.
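
Here is a rough sketch of the two core steps using scikit-learn and simulated covariates (the data and variable names are hypothetical; real applications also check covariate balance after matching and often restrict matches with calipers):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Simulated data: X holds baseline covariates (e.g., age, GPA, household
# structure); `mentored` flags who reported a natural mentor.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
mentored = rng.integers(0, 2, 500).astype(bool)

# Step 1: model each youth's probability of having a mentor (the
# propensity score) from the baseline covariates.
propensity = LogisticRegression().fit(X, mentored).predict_proba(X)[:, 1]

# Step 2: match each mentored youth to the non-mentored youth with the
# closest propensity score (1:1 nearest-neighbor matching, with replacement).
controls = np.where(~mentored)[0]
nn = NearestNeighbors(n_neighbors=1).fit(propensity[controls].reshape(-1, 1))
_, matches = nn.kneighbors(propensity[mentored].reshape(-1, 1))
matched_controls = controls[matches.ravel()]
```

Outcomes are then compared between the mentored youth and their matched counterparts, rather than between the full (and likely unbalanced) groups.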


  3. Study Attrition

Attrition refers to study drop-out: when participants who completed baseline study measures, for whatever reason, do not complete follow-up measures. Participants drop out of studies for a range of reasons, including lack of interest, geographic relocation, changed contact information, and stress. Regardless of the reason, attrition creates a problem because the final group you end up with may differ from your initial group, and participants with certain characteristics may be underrepresented in the group whose data you actually analyze.

What do we do about study attrition? We can try to prevent it by increasing incentives to participate in follow-ups, collecting and using multiple forms of contact information, and making follow-up participation as easy as possible (e.g., by phone or online rather than in person). If we do have attrition, we should at least compare the characteristics of the initial sample to those of the final sample to assess the extent to which attrition is creating bias. We might also consider a missing data procedure, like imputation, which estimates plausible values for missing data in a way that reduces, but does not eliminate, bias.
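
The bias check is straightforward to run: compare completers and dropouts on their baseline measures. A minimal sketch with simulated data (all variable names are hypothetical):

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated baseline data with a flag for who completed follow-up.
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "baseline_engagement": rng.normal(50, 10, 200),
    "completed_followup": rng.random(200) > 0.25,
})

# Compare completers to dropouts on baseline characteristics; a
# significant difference suggests attrition is selective, not random.
completers = df.loc[df["completed_followup"], "baseline_engagement"]
dropouts = df.loc[~df["completed_followup"], "baseline_engagement"]
t, p = stats.ttest_ind(completers, dropouts)
print(f"Completers M = {completers.mean():.1f}, "
      f"dropouts M = {dropouts.mean():.1f}, p = {p:.3f}")
```

For the imputation step itself, multiple imputation (e.g., the MICE implementation in statsmodels) is generally preferred over single imputation, because it better reflects uncertainty about the missing values.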


  4. Small Sample Size

Regardless of attrition, mentoring studies sometimes have very small sample sizes to begin with. This is a problem because small samples are highly susceptible to sampling error and are unlikely to be representative of the population to which you hope to generalize your findings. When we analyze small samples, we also lack statistical power, which makes it difficult to detect meaningful effects.

Obviously, the way to address this is to strive for larger samples, which, of course, is not always possible or practical. For very small samples (roughly 50 participants or fewer), findings should be interpreted with caution, and researchers may be better served by qualitative methods.
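
A quick power analysis makes the trade-off concrete. The sketch below uses statsmodels to ask how large a sample a two-group comparison needs to detect a medium-sized effect, and how much power a small sample actually has (the assumed effect size is an illustration, not a finding from the mentoring literature):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many participants per group are needed to detect a medium effect
# (Cohen's d = 0.5) with 80% power at alpha = .05?
n_per_group = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"Required n per group: {n_per_group:.0f}")  # roughly 64

# Conversely, how much power does a study with 25 youth per group have?
power = analysis.power(effect_size=0.5, nobs1=25, alpha=0.05)
print(f"Power with n = 25 per group: {power:.2f}")  # roughly 0.41
```

In other words, a study with 25 youth per group has less than a coin flip's chance of detecting a genuine medium-sized effect, which is why null results from small samples are so hard to interpret.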


  5. Self-Report Bias

Among the most common ways of measuring outcomes are self-report questionnaires. These measures simply ask youth to report on how happy, healthy, and academically engaged they are. Although useful (and cheap!), these questionnaires can be subject to self-report bias. Of course, this phenomenon is not unique to young people. Universally, self-judgment and self-perception are highly subjective, and may be influenced by the way we want to see ourselves rather than the objective truth.

To address self-report bias, researchers should rely on multiple observers when possible. For example, in addition to the youth, we can ask mentors, teachers, and parents about the youth's academic engagement. Of course, all of these reporters might be biased. To reduce reporter bias, researchers might consider utilizing observational techniques when possible. For example, trained raters might observe and rate youth's engagement in the classroom. Finally, researchers should seek out more objective ways of measuring certain phenomena. For example, rather than asking youth how they are doing in school or how often they have been involved in crime, researchers might seek permission to obtain report cards and criminal records, respectively.
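
One simple way to examine convergence across reporters is to correlate their ratings of the same youth. The sketch below uses made-up ratings purely for illustration:

```python
import numpy as np

# Made-up engagement ratings of the same ten youth from three reporters.
youth_report   = np.array([4, 3, 5, 2, 4, 3, 5, 1, 2, 4])
teacher_report = np.array([3, 3, 4, 2, 5, 2, 4, 1, 3, 4])
mentor_report  = np.array([4, 2, 5, 3, 4, 3, 5, 2, 2, 5])

# Cross-informant correlations: strong agreement across reporters suggests
# the ratings track the youth's actual behavior rather than any single
# reporter's bias.
for name, ratings in [("teacher", teacher_report), ("mentor", mentor_report)]:
    r = np.corrcoef(youth_report, ratings)[0, 1]
    print(f"youth vs. {name}: r = {r:.2f}")
```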


Conclusion

Every research study is flawed in some way. There is simply no practical way to conduct a perfect research study. However, it is vital that we acknowledge and discuss the impact of study limitations, and, when possible, employ the techniques discussed above to boost the rigor of our methods.