How do we know that mentoring works?: The many benefits of experimental designs
by Adar Ben-Eliyahu, Ph.D. Senior Lecturer (Assistant Professor), University of Haifa
How do we know that mentoring works? How do we know that a child would not have improved anyway? The best way to understand whether programs affect youth outcomes is to compare groups of children who are the same in every way, except that some have been randomly assigned to receive mentoring while the others are placed on a waiting list and not mentored. This type of comparison is often referred to as an experimental design.
What is an experimental design?
We will focus on a simple two-group experimental design, although a similar idea can be applied to more than two groups. In our case, we will focus on a mentored group and a non-mentored group. An experimental design rules out the possibility that something other than the mentoring program (or reading intervention, medical treatment, etc.) accounts for the changes observed over time. This is done by comparing two groups:
- the Experimental Group: those who receive the intervention
- the Control Group: those who are similar to the experimental group on all background characteristics except that, by random assignment, they did not receive the intervention.
In the figure below, I map the differences between an experimental and a control group. Noticeably, the green arrow in the background is constant and identical for both groups – this depicts the shared passing of time. In a perfect design, the only difference between the experimental and control groups is the “experiment,” or intervention – in our case, the mentoring program. Because this is the only difference, when significant differences are found between the experimental and the control group, we can confidently conclude that our program works.
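What does it mean for a difference between the groups to be “significant”? A minimal sketch, using made-up well-being scores (not data from any real study), is a permutation test: we repeatedly re-shuffle all participants into two groups of the original sizes and ask how often a difference as large as the observed one arises by chance alone.

```python
import random
from statistics import mean

def permutation_test(experimental, control, n_permutations=10_000, seed=0):
    """Approximate two-sided p-value for the difference in group means.

    Re-shuffles all scores into two groups of the original sizes many
    times, counting how often the shuffled difference is at least as
    large as the observed one. A small result suggests the observed
    difference is unlikely to be chance alone.
    """
    rng = random.Random(seed)
    observed = mean(experimental) - mean(control)
    pooled = experimental + control
    n_exp = len(experimental)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = mean(pooled[:n_exp]) - mean(pooled[n_exp:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_permutations

# Hypothetical well-being scores for illustration only:
mentored = [72, 85, 78, 90, 69, 81, 77, 88]
waitlist = [65, 70, 62, 74, 68, 60, 71, 66]
p = permutation_test(mentored, waitlist)
```

In practice, researchers usually rely on standard statistical tests (e.g., a t-test) rather than coding this by hand, but the logic is the same: rule out chance before crediting the program.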
What is random assignment and how is it done?
In an ideal design, group assignment is random. This means that, when conducting a two-group design, being assigned to the experimental or control group is as likely as getting heads or tails when flipping a coin. Of course, we do not actually flip a coin for each participant; instead, we use software (e.g., Excel) to determine random group assignment. Most data-processing programs have a built-in function that can be used to generate random group assignments.
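As a minimal sketch of how such a built-in function can be used (the participant names are invented for illustration), randomly shuffling the participant list and splitting it in half gives every youth the same chance of landing in either group while keeping the groups equal in size:

```python
import random

def assign_groups(participants, seed=None):
    """Randomly split participants into experimental and control groups.

    Shuffling and splitting in half is equivalent to a fair coin flip
    for each participant, constrained to produce equal-sized groups.
    """
    rng = random.Random(seed)
    shuffled = participants[:]   # copy so the original list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "experimental": shuffled[:midpoint],  # receives mentoring
        "control": shuffled[midpoint:],       # waiting list
    }

# Example: assign eight (hypothetical) youth to the two groups.
roster = ["Ava", "Ben", "Carmel", "Dana", "Eli", "Faye", "Gil", "Hila"]
groups = assign_groups(roster, seed=42)
```

The `seed` argument is only there to make the example reproducible; in a real evaluation the assignment would be generated once and then locked in.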
What is a Quasi-Experimental Design?
Experimental evaluations can be costly and difficult to construct, particularly when we want to understand naturally occurring events. For example, although we might be interested in the effects of divorce or birth order, it is impossible (not to mention unethical!) to assign youth to a single-parent or a two-parent home. Likewise, we may be interested in the effects of an afterschool program, but it is hard to contrive a good control group for a particular afterschool program because waitlists are typically unfeasible. In these cases, we cannot conduct a highly controlled experiment. Instead, we can conduct what has been termed a Quasi-Experiment. Quasi-experiments build on naturally occurring groups to “assign” participants to the experimental or control group. In this case, it is important to check and control for relevant characteristics when applying statistical analyses.
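One simple way to check whether naturally occurring groups are comparable is to compute a standardized mean difference on each baseline characteristic. The sketch below uses invented ages; the rule of thumb that values below about 0.1 indicate acceptable balance is a common convention, not a hard cutoff.

```python
from statistics import mean, stdev

def standardized_difference(group_a, group_b):
    """Standardized mean difference for one baseline characteristic.

    Divides the difference in group means by the pooled standard
    deviation, putting the gap on a scale-free footing. Small absolute
    values (conventionally below ~0.1) suggest the two non-randomized
    groups are reasonably balanced on this characteristic.
    """
    pooled_sd = ((stdev(group_a) ** 2 + stdev(group_b) ** 2) / 2) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical baseline ages in two naturally occurring groups:
afterschool_ages = [10, 11, 12, 11, 13, 10, 12]
comparison_ages  = [11, 12, 10, 13, 11, 12, 10]
d = standardized_difference(afterschool_ages, comparison_ages)
```

Characteristics that remain imbalanced after such a check would then be controlled for statistically (e.g., as covariates) when comparing outcomes.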
Examples of experimental and quasi-experimental designs from mentoring research
As Rhodes described in a recent column, “the recently released evaluation of the Mentoring At-Risk Youth project, which was initiated by P/PV in 2007 and released this month by MDRC. This research includes the first multi-agency randomized controlled evaluation of the Big Brothers Big Sisters of America community-based mentoring program since the landmark P/PV study of 1990. Researchers Carla Herrera, David DuBois, and Jean Grossman followed 1,310 youth who ranged in age from 8 to 15 and who were deemed “higher-risk” by virtue of either individual or environmental risk (or both). In the largest two programs, half the youth were assigned to a waitlist (controls) while the other half was matched right away (experimental design). Across another five programs, all youth were provided with mentors and then compared to the control group of the random assignment portion (quasi-experimental design). At 13 months, youth from the latter, quasi-experimental portion were doing better than the non-mentored youth: they had fewer depressive symptoms, greater acceptance by peers, more positive academic self-perceptions, and better grades. In the random assignment portion, mentored youth showed fewer depressive symptoms and were doing better on an aggregate measure of positive change.” In the random assignment portion, because assignment to the treatment (or experimental) group and the non-mentored control group was done randomly, the researchers could conclude that the differences between the two groups were a result of the “treatment,” or mentoring program, suggesting that youth mentoring reaps benefits.
Challenges of an experimental design
- costly
- requires many resources to design and execute
- may not be feasible for certain questions
Benefits of an experimental design
An experimental design allows us to follow two or more groups of youth, one that is in a mentoring program and one that is not. We can therefore consider how their experiences in a mentoring program influence their development, in comparison with a group of youth who were not in a mentoring program, allowing us to draw conclusions about causation. The ability to randomly assign participants to the experimental or control group allows us to rule out the possibility that the passing of time alone led to improvements in youth development, making it reasonable to attribute group differences to the treatment.