Five factors that explain our field’s resistance to evidence-based practice
Over the past few decades, considerable federal, state, and foundation resources have been invested in research and evaluation aimed at improving the evidence base of youth mentoring. This can be seen in the rigorous evaluations of Big Brothers Big Sisters of America, Big Brothers Big Sisters of Canada, the National Guard Youth ChalleNGe Program, Friends of the Children, Communities in Schools, the Washington State Partnership affiliates, and others—all of which have pointed to the conditions and populations for whom mentoring is most helpful. Yet the return on the substantial investment of effort, expertise, and funding that has gone into generating this knowledge has remained limited. Data from a recent meta-analysis reveal little evidence of a trend toward greater use of evidence-based practices across the decade encompassed by the review (DuBois et al., 2012). Consequently, relatively few mentoring programs have achieved the highest rating on registries such as the Office of Justice Programs' (OJP's) Crime Solutions, the Department of Education's What Works Clearinghouse, and the National Registry of Evidence-based Programs and Practices (NREPP).
Why the disconnect?
There are at least five factors that explain our field’s inconsistent embrace of evidence-based practice.
1. Art over Science. Most mentoring programs are centered on building close, supportive relationships between caring adults and vulnerable youth. As such, the field's providers share much in common with mental health workers. Reporter Harriet Brown explored this issue in "Looking for Evidence That Therapy Works," and concluded that one of the key reasons mental health professionals do not rely on proven strategies is that they see the establishment of caring, helpful relationships as more of an art than a science. Helpful relationships, they argue, stem from the particular alchemy of personalities and other subtle factors. What's more, they contend that structured, evidence-based guidelines are antithetical to relationship building because they mute expressions of empathy and warmth. "The idea of therapy as an art is a very powerful one," Brown notes. "Many psychologists believe they have skills that allow them to tailor a treatment to a client that's better than any scientist can come up with with all their data." It's important to note, however, that the research suggests otherwise. Brown points to a study published last year, which concluded that clients working with therapists who did not use an evidence-based treatment, or who combined evidence-based treatment with other techniques, tended to have poorer outcomes than those who received a more standardized treatment. Moreover, the empathy-versus-guidelines argument is something of a false dichotomy. Those adhering to evidence are not robotic followers of guidelines; they too pay close attention to establishing close ties. Nobody would argue that a close relationship isn't vitally important; the question is whether it is enough.
2. Few Incentives. There is little incentive for practitioners to change what they are doing if they believe it works. Yet, in the absence of outcome data on immediate and long-term benefits, it is easy to overestimate effectiveness. Indeed, when large-scale experimental evaluations have been undertaken, the results have ranged from confusing to disappointing. Findings rarely provide the clear and simple answers that organizations are looking to share with their funders and, when one set of findings contradicts an earlier conclusion, the field's "best practices" in mentoring can seem suddenly up for grabs. As novelist Don DeLillo observed, "The deeper we delve into the nature of things, the looser our structure may seem to become." In the broader field of mental health, fewer than 20% of surveyed psychologists use treatments that have been proven effective for PTSD, and a recent study showed that, to a large extent, research findings did not influence whether mental health providers learned and used new treatments. Similar trends exist in the field of youth mentoring, resulting in uneven adherence to best practices. This variability across mentoring programs is disconcerting, particularly because researchers are increasingly converging on a core set of practices that, when faithfully applied, can yield dramatically larger effects (Rhodes & Lowe, 2005; DuBois et al., 2011). Mentoring relationships can offer a range of developmental benefits for our nation's youth, and findings from rigorously implemented initiatives provide some support for this viewpoint. Yet the overall pattern of effects of mentoring programs remains relatively modest and inconsistent (DuBois et al., 2011; Eby, Allen, Evans, Ng, & DuBois, 2012), particularly in comparison to youth prevention programs that have more fully embraced evidence-based practices (Durlak & Wells, 1997).
3. Incompatible Priorities. An additional explanation for the inconsistent application of evidence and the failure to fully capitalize on evaluation findings is that, during the past decade, advocacy organizations and funders have placed considerable emphasis on growth and expansion goals. Indeed, widespread excitement and compelling anecdotes about the power and glory of mentoring often leave little motivation for investments in refining existing practice. As a result, we have prioritized launching new matches and programs over more rigorous, deliberate, and iterative efforts to implement evidence-based practices with fidelity in ways that would strengthen the quality of existing programs. Contrary to recommendations, some programs have relaxed minimum volunteer screening, commitment, and training requirements. And despite the growing availability of evidence-based programs and resources, the tools and websites featured by many mentoring organizations include a multitude of untested resources alongside evidence-based guidelines, leaving consumers to determine their merits.
4. Researcher Disconnect. The problem also stems from researchers' inconsistent grounding of their work and recommendations in the everyday needs and constraints of practitioners and local settings. Research on psychotherapy has shown that a key factor determining adoption of evidence-based practices is whether a new treatment can be integrated with the therapy that providers are already offering. Yet it remains rare for researchers to seek input directly from practitioners regarding the questions they would like researchers to address. Findings are often reported in ways that are decontextualized, leaving practitioners to determine their relevance and application.
5. Divergent Definitions of "Evidence". There are basic gaps in how researchers, practitioners, and policymakers define research and evidence (Caplan, 1979). Whereas researchers often employ the two terms interchangeably to mean "findings derived from scientific methods" (Tseng, 2012, p. 6), studies of research uptake suggest that practitioners tend to define evidence more broadly: as stemming not only from scientific methods, but also from expert testimony; practitioner wisdom; consumer satisfaction surveys; untested manuals; parent, youth, and community feedback; and more (Honig & Coburn, 2008). Likewise, although researchers tend to qualify evidence in terms of instruments, experimental designs, and methods, practitioners are often more concerned with its application to particular settings (Tseng, 2012). Thus, even when evidence-based strategies are employed, wide variations in local adaptations can undermine fidelity.
As mentoring continues to expand, there is an urgent need for a more integrated and accelerated effort to strategically enhance not only the production but also the uptake of evidence-based practices. The field is ripe for such innovation and has already made notable efforts to bridge research, practice, and policy. Such efforts will better align youth mentoring with recent advances in the broader fields of mental health practice, medicine, positive youth development, and prevention science.