A case for embracing the science of mentoring

by Jean Rhodes

Despite significant strides in establishing an evidence base in the field of mental health, relatively few practitioners ground their work in clinically proven strategies. There are now proven strategies for treating such problems as anxiety, depression, PTSD, and eating disorders, including cognitive-behavioral therapy (CBT), dialectical behavior therapy, and family-based treatment programs. Yet few patients actually receive evidence-based treatments. In fact, despite numerous (and compelling) clinical trials demonstrating the effectiveness of CBT in treating common disorders, most therapists deploy evidence-based strategies only sporadically, in combination with other, unproven treatments. Reporter Harriet Brown explored this issue in the New York Times article, “Looking for Evidence That Therapy Works,” concluding that one of the key reasons mental health professionals do not rely on proven strategies is that they see the establishment of caring, helpful relationships as more of an art than a science. Similar trends exist in the field of youth mentoring, so it is worth exploring this issue further: understanding why therapists (and their supervisors) resist therapy research and evaluation findings can help explain why mentoring programs prefer instead to rely on mentors’ and match support specialists’ intuitive sense of what works.

In her review, Brown quotes Glenn Waller, chairman of the psychology department at the University of Sheffield and one of the authors of a major meta-analysis on treatment effects, who notes that a “large number of people with mental health problems that could be straightforwardly addressed are getting therapies that have very little chance of being effective.” Indeed, fewer than 20% of surveyed psychologists use treatments that have been proven effective for PTSD, and a recent study showed that, to a large extent, research findings did not influence whether mental-health providers learned and used new treatments. Far more important was whether a new treatment could be integrated with the therapy that the providers were already offering.

As mentioned, although some providers view helping relationships as needing to be grounded in science and proven effective in both research and clinical trials, others view their work more as a delicate art. They claim that evidence-based guidelines can undermine the relationship factors (e.g., empathy, warmth, and communication) that help to build a strong working relationship. A helpful relationship, they argue, stems from the particular alchemy of personalities and other subtle factors in the relationship that structured programs undermine. “The idea of therapy as an art is a very powerful one,” Brown notes. “Many psychologists believe they have skills that allow them to tailor a treatment to a client that’s better than any scientist can come up with with all their data.”

It is important to note, however, that the research suggests otherwise. Brown points to a study published last year, which concluded that clients working with therapists who did not use an evidence-based treatment, or who combined evidence-based treatment with other techniques, tended to have poorer outcomes than those who received a more standardized treatment. Moreover, many experts believe the empathy-versus-guidelines argument is a false dichotomy. Those adhering to evidence are not robotic followers of guidelines; they too pay close attention to establishing a close tie. Indeed, nobody would argue that a close relationship isn’t vitally important. The question is whether it is enough.

There is also little incentive for therapists to change what they are doing if they believe it works. Yet, in the absence of outcome data on the immediate and long-term benefits, it is easy to overestimate effectiveness.

On the bright side, there is evidence to suggest that mental health workers are increasingly relying on evidence-based practice. A similar pattern has occurred in medicine, which over the years has steadily embraced evidence-based guidelines. Perhaps mentoring is not far behind. Right now, however, despite substantial investments, relatively few programs have achieved the highest rating on registries such as the Office of Justice Programs’ (OJP’s) CrimeSolutions, the Department of Education’s What Works Clearinghouse, and the National Registry of Evidence-based Programs and Practices (NREPP).

This variability across mentoring programs is disconcerting, particularly because researchers are increasingly converging on a core set of practices that, when faithfully applied, can yield dramatically larger effects (Rhodes & Lowe, 2005; DuBois et al., 2011). Likewise, organizations like the Office of Juvenile Justice and Delinquency Prevention (OJJDP) have worked to incentivize practitioners toward greater adherence to evidence-based practice. Yet the return on the substantial investment of effort, expertise, and funding that has gone into generating such knowledge has remained limited. The same arguments heard in the therapy debates (relationships are an art, we should rely on intuition, we are already seeing great outcomes, why change?) can be heard here as well.

An additional explanation for this inconsistent application of evidence, and for the failure to fully optimize evaluation findings, is that, during the past decade, advocacy organizations and funders have placed considerable emphasis on growth and expansion goals. Indeed, widespread exuberance about mentoring often leaves little motivation for tweaking around the edges. As a result, the field has prioritized launching new matches, programs, practices, or partnerships over more rigorous, deliberate, and iterative efforts to implement evidence-based practices with fidelity and thereby strengthen the quality of existing programs. Many have championed untested innovations and models and, contrary to recommendations, relaxed minimum volunteer screening, commitment, and training requirements. Indeed, data from a recent meta-analysis reveal little evidence of a trend toward greater use of evidence-based practices across the decade encompassed by the review (DuBois et al., 2012).
Moreover, despite the growing availability of evidence-based programs and resources, the tools and websites featured by mentoring organizations and deployed by training and technical assistance (T/TA) providers include a multitude of untested trainings, webinars, and resources, leaving consumers to determine their merits.

The problem also stems from researchers’ inconsistent grounding of their work and recommendations in the everyday needs and constraints of practitioners and local settings. As noted above, an important factor in therapists’ adoption of new practices was whether those practices could be integrated with the therapy the providers were already offering. Yet it remains rare for researchers to seek input directly from practitioners regarding what questions they would like researchers to address. Findings are often reported in ways that are decontextualized, leaving practitioners to determine their relevance and application. There are also basic gaps in how researchers, practitioners, and policymakers define research and evidence (Caplan, 1979). Whereas researchers often employ the two terms interchangeably to mean “findings derived from scientific methods” (Tseng, 2012, p. 6) or “demonstrated by causal evidence, generally obtained through one or more outcome evaluations [which] … depends on the use of scientific methods” (OJP), studies of research uptake suggest that practitioners tend to define evidence more broadly, as stemming not only from scientific methods but also from expert testimony; practitioner wisdom; consumer satisfaction surveys; untested manuals; parent, youth, and community feedback; and more (Honig & Coburn, 2008). Likewise, although researchers tend to qualify evidence in terms of instruments, experimental designs, and methods, practitioners are often more concerned with its application to settings (Tseng, 2012). Thus, even when evidence-based strategies are employed, wide variations in local adaptations can undermine fidelity.

As mentoring continues to expand, there is an urgent need for a more integrated and accelerated effort to strategically enhance not only the production but also the uptake of evidence-based practices. The mentoring field is ripe for such innovation and has already made notable efforts to bridge research, practice, and policy. This can be seen in the new web-based mentor training course based on the Elements of Effective Practice for Mentoring, 3rd edition (EEPM), hosted by Innovation Research & Training (iRT) on the Mentoring Central learning management system (Kupersmidt & Rhodes, 2013). The Mentoring Central training, which was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) and took more than five years to develop, has been used by hundreds of programs and thousands of mentors. In fact, a customized version of the training has been licensed by Big Brothers Big Sisters of America (BBBS) and is currently being rolled out to more than 400 affiliates (representing 100,000 volunteers). We are currently conducting a randomized controlled trial of the efficacy of the web-based training program, after which the program will be submitted for review by NREPP, CrimeSolutions.gov, the OJJDP Model Programs Guide (MPG), and the new National Mentoring Resource Center (NMRC). More generally, the embrace of evidence can be seen in the growing interest among advocacy organizations, practitioners, and policymakers in building on research findings.

In essence, these and other efforts are grounding the field of mentoring in evidence and helping to develop a systematic, scientifically rigorous approach to establishing effectiveness and disseminating evidence-based practice. Such efforts will better align youth mentoring with recent advances made in the broader fields of mental health practice, medicine, positive youth development, and prevention science.