From equal footing to lost ground: How can we achieve stronger effects?

by Jean Rhodes

In 1979, a young assistant professor named Joe Durlak published a controversial study in Psychological Bulletin that sent ripples through the field of clinical psychology and provided momentum for what eventually became the mentoring movement. He set out to combine all of the published studies that had compared the outcomes of experienced psychologists, psychiatrists, and social workers with those of paraprofessionals (i.e., minimally trained community volunteers and helpers). His analysis of 42 evaluations led to a provocative conclusion: almost across the board, paraprofessionals were more effective than trained professionals. In fact, in only one study were professionals significantly more effective than paraprofessionals in promoting positive mental health outcomes. In most, paraprofessionals were comparable to trained mental health professionals, and in 12 they were actually superior. As he concluded, “professionals do not possess demonstrably superior therapeutic skills, compared with paraprofessionals. Moreover, professional mental health education, training, and experience are not necessary prerequisites for an effective helping person” (Durlak, 1979, p. 6). Such data challenged mental health professionals to look more closely at the nature and efficacy of mental health practices.

Over the next five years, researchers using more sophisticated meta-analytic procedures were able to further test these promising trends. One such study (Hattie et al., 1984) replicated the findings, even after controlling for the difficulty of the patients with whom professionals were working. As the authors concluded, “the average person who received help from a paraprofessional was better off at the end of therapy than 63% of persons who received help from professionals” (1984, p. 536). Importantly, those paraprofessionals with more experience showed the strongest effects relative to professionals.
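That 63% figure is a conventional way of translating a mean effect size into a percentile (Cohen's U3 statistic): assuming roughly normal outcome distributions, the proportion of the comparison group falling below the average member of the treated group equals the standard normal cumulative distribution function evaluated at the effect size d. As a rough back-calculation (not a figure reported by the authors themselves):

$$\Phi(d) \approx 0.63 \quad\Longrightarrow\quad d \approx \Phi^{-1}(0.63) \approx 0.33$$

In other words, the 63% corresponds to a mean advantage of about one-third of a standard deviation favoring paraprofessionals.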

In the nearly thirty years since, as psychology has embraced evidence and fine-tuned its treatment strategies, effect sizes have dramatically improved. With greater specificity about the populations served, the approaches used, and the problems addressed, studies have begun to show much stronger effects for evidence-based therapy relative to paraprofessional help (including volunteer mentoring). Indeed, whereas meta-analyses of volunteer youth mentoring tend to show effect sizes in the 0.20 (i.e., small) range of impact, meta-analyses of youth psychotherapy, encompassing hundreds of studies, have reported much stronger mean effects, ranging from 0.71 to 0.88 (i.e., large), depending on the age of the children being treated (Weisz, Sandler, Durlak, & Anton, 2005).

But, importantly, while the overall effect size of mentoring programs is modest, there is substantial variation in the effectiveness of different programs across studies. In meta-analyses of youth mentoring, more structured programs, with clear expectations, an embrace of evidence-based practice, and ongoing support for volunteer mentors, have yielded notably stronger effects. Interestingly, a similar pattern has emerged in meta-analyses of youth psychotherapy. Weisz et al. (2005) note that, in studies of “treatment as usual in settings in which therapists were not constrained by evidence-based interventions and in which there was a comparison of their treatment to a control condition,” effect sizes were close to zero (see, e.g., Weisz, Donenberg, Han, & Weiss, 1995), indicating no treatment benefit.

Catching up with evidence-based professional helping relationships is within our reach. And as our effect sizes go up, so will the life chances of the many children we serve. But more fully embracing evidence is the only way we will close this gap.