FORUM: What should mentoring programs keep in mind as they think about evaluation?

As a training and technical assistance provider who has worked with many diverse mentoring programs and initiatives over the years, I can honestly say that I’ve not seen a topic generate as much consternation, confusion, and near panic as program evaluation. As with many types of nonprofit and human services, mentoring programs are increasingly caught up in the push for “data-based decisions” and “evidence-based programming.” They often feel pressure from funders to deliver better results with fewer resources, or are even faced with the daunting prospect of justifying their very existence. Program staff see innovations and compelling new research emerging in the field and worry that they are being left behind if they aren’t producing similar “evidence” supporting their program’s approach and results. Programs worry about the considerable time, effort, and cost of undertaking a big evaluation, and about whether the results will be valid, but often feel they have no choice if they are to remain sustainable. Even if a program’s evaluation is simply grounded in a desire to improve services, there is always the fear that the results might not be what they hoped, shaking stakeholder confidence and creating a perception of “failure.” In many ways, program evaluation is both the best thing a program can do and the riskiest and least understood.

So with this in mind, we reached out to several prominent researchers and evaluators and asked them what advice they might have for mentoring programs that are thinking about undertaking a program evaluation. What are the mistakes to avoid? What mindsets or strategies can yield the most useful information? And when should a program scale back its evaluation ambitions to avoid self-inflicted damage?

Our experts each offered their best advice. Feel free to share your evaluation tips and experiences in the comments below!

 

David DuBois – School of Public Health, University of Illinois at Chicago

This is a topic that I thought a lot about when writing the chapter on program evaluation for the recent Handbook of Youth Mentoring. In that chapter, we cover a lot of the basic framework for conducting and reporting evaluation results, but most of my best advice can be found in a nice table that summarizes recommendations for good practice in evaluating mentoring programs. Highlights include:

  • Conduct the evaluation in an ethical way – The Joint Committee on Standards for Educational Evaluation has widely accepted standards one can follow. Of these, making sure you have institutional review board approval is one of the most important.
  • Planning is critical – Most mistakes with evaluations happen before any of the data are collected. Focus on process evaluation if your program is relatively new; you can always examine outcomes once your program has worked out the kinks and is operating at peak efficiency. Be sure you have adequate resources and stakeholder buy-in to implement the evaluation.
  • Really know your model before examining process – A process evaluation essentially examines how well you are implementing a program plan (adherence). But you can’t design an effective evaluation of this kind if you are unsure about how services are being delivered or if there are no established benchmarks for what the program should look like in action. Be sure that process evaluations also examine responsiveness (how clients and staff experience the services) and program differentiation (whether you can distinguish what your program does from other, similar programs or services).
  • Outcomes need context to have meaning – At the very least, an outcome evaluation needs a comparison group that is as similar as possible to the youth receiving your services. Everyone wants to aim for the gold standard of a random-assignment design, but this is very expensive and requires large numbers of participants for good statistical power (see the illustrative sketch after this list). Obviously, those two things will be hard for most grassroots mentoring programs to supply. But almost any program can come up with a reasonable comparison group of youth to compare the mentored youth against. Without this, there is no frame of reference for judging whether your results are good or bad. In fact, the landmark study of BBBS in the 1990s might have concluded that the program was fairly ineffective had there not been a control group of youth who were faring worse than the mentored youth. This is a critical aspect of any good outcome evaluation.
  • Use reliable, valid data collection instruments – A homegrown survey is unlikely to give you accurate or valid results. Try to identify data collection instruments that have been used in previous research with a similar population to minimize the risk that your findings are simply noise. And exercise caution when deciding what to measure: think carefully about your program’s theory of change and choose outcomes that make sense, both in terms of how mentors and mentees spend their time and how long the mentoring lasts.
  • Don’t let your evaluation become a “thumbs up/thumbs down” judgment on your program’s efficacy – While you should share your findings honestly and openly with your stakeholders, don’t let one evaluation define your program for all time. Take the time to explain the context of the findings, alternative explanations for the results, limitations of the data and conclusions, and potential implications for future program improvement. Providing this nuance and context is easier if the design of your evaluation allows some linking of process and outcome. “Our program is more effective when we do these practices” is a far better message than “Our program is not very effective but we don’t know why.” So think about how you can connect your process and outcome evaluations to better understand the how of your results.
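
To put the point about statistical power in concrete terms, here is a minimal sketch (not drawn from Dr. DuBois’s chapter) of the sample-size math for a randomized two-group design. It assumes a small effect size of d = 0.2, roughly the magnitude meta-analyses of youth mentoring programs have reported, and uses the Python statsmodels library; the numbers are illustrative only.

```python
# Illustrative sketch only: rough sample-size estimate for a two-group
# (mentored vs. comparison) outcome evaluation. The effect size of d = 0.2 is
# an assumption, roughly the magnitude reported in mentoring meta-analyses.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
n_per_group = power_analysis.solve_power(
    effect_size=0.2,   # assumed (small) standardized mean difference
    alpha=0.05,        # conventional significance level
    power=0.80,        # conventional target power
)
print(f"Youth needed per group: {round(n_per_group)}")  # on the order of 390-400
```

Seeing that a rigorous random-assignment design can require several hundred youth per group helps explain why a well-chosen comparison group is often the more realistic route for a grassroots program.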

 

Carla Herrera – Independent Evaluation Consultant

  • Make sure you start by creating a good, solid logic model – What are the goals of your program, and how do you think you are achieving those goals? That will make it easier to outline how you should shape your evaluation efforts.
  • If you are conducting an outcome evaluation, make sure you limit your outcomes to those that are central to your program – Include those that are closely in line with your logic model and that can realistically be achieved. For example, if you have a short-term, academically focused program, examining the amount of time spent on homework is a fairly realistic outcome, whereas decreases in drug use or changes in standardized test scores probably aren’t. You might consider a couple of “stretch” outcomes that you hope to achieve but that aren’t as closely tied to your work…but if you ONLY include these, you may end up concluding (incorrectly) that your program doesn’t work. (The sketch following this list illustrates checking proposed outcomes against a simple logic model.)
  • Don’t dive into a hard-core evaluation before you are ready (e.g., make sure your program has several years of experience and that you already  have preliminary evidence that the program works).
  • Get the advice of a seasoned evaluator that you trust – Even if you are not ready for an external evaluation, a good researcher can help you avoid major pitfalls in designing and implementing your own evaluation and even help you select an external evaluator that meets your needs when you are ready.
  • Make sure that the entire organization has bought into conducting an (internal or external) evaluation.
  • Consider collecting data on measures other than outcomes to strengthen your program before you conduct an outcome evaluation (e.g., collect data on participant satisfaction, quality of training and support provided, match challenges, etc.).
  •  Be ready and willing to invest in program improvements based on what you find.
  • Give voice to all of your matches, not just those that succeed – Some of the most valuable data you can get are from those matches that don’t work as you had hoped, and the strongest evaluations include all perspectives.
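
To illustrate the first two points above, here is a minimal, hypothetical sketch of a logic model captured as a simple data structure, with proposed outcome measures checked against it. The program details and measure names are invented for the example and echo the short-term, academically focused program Dr. Herrera describes.

```python
# Illustrative sketch only: a logic model captured as a simple data structure,
# used to check that each proposed outcome measure maps back to the program's
# theory of change. All entries are hypothetical examples.
logic_model = {
    "inputs":     ["trained volunteer mentors", "program coordinator", "school partnership"],
    "activities": ["weekly one-hour homework-help sessions over one school year"],
    "outputs":    ["number of matches made", "average meetings per month"],
    "outcomes":   ["time spent on homework", "homework completion", "school engagement"],
}

proposed_measures = ["time spent on homework", "standardized test scores", "drug use"]

for measure in proposed_measures:
    status = "core outcome" if measure in logic_model["outcomes"] else "stretch or off-model"
    print(f"{measure}: {status}")
```

The point is not the code itself but the discipline it enforces: any outcome you plan to measure should trace back to an element of your logic model, or be explicitly labeled a “stretch” outcome.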

 

Janet Forbush – Independent Evaluation Consultant

While much progress has been made over the past several years, there are still far too many programs that are continuing to struggle with evaluation.  These are the things that have ‘bubbled up’ in the work I have done with mentoring programs over the years:

  • Program staff benefit from revisiting their overarching mentoring program design and associated goals and objectives on a regular (e.g., annual) basis. Evaluations should be aligned with program goals and objectives, and unless there is an intentional effort to revisit these regularly, it is possible findings will not demonstrate anticipated outcomes at either the individual or the program level. Fidelity to any program design needs attention in order to be maintained.
  • Understand the terminology and language around evaluation. If something doesn’t make sense to you in the context of your program, ask questions of funders and/or agency leadership. This is not an easy assignment, because the terminology used across youth programs funded by diverse government and private sector agencies and organizations is not uniform. The result is that mentoring programs that apply for competitive awards and are asked to participate in an evaluation may or may not know the level of rigor the funding requires. Some funders require ‘performance measurement’ tied to the actual number of mentees/mentors participating in program activities; others require pre/post measurement of mentees, mentors, and parents/caregivers around a particular aspect of the mentoring intervention. It is important to document both process and outcome features of your program, including your clients’ outcomes against the goals/objectives that undergird the effort. More modest levels of effort might involve adopting one or two validated pre/post instruments that address a particular behavior or characteristic of youth participants, and/or an instrument that examines match relationship quality (a minimal sketch of a simple pre/post comparison follows this list). The important take-away is that you should tie one or more of these less rigorous assessments to something you are actually trying to address through your mentoring program design.
  • Evaluation takes resources…money that will support personnel, staff training, data collection/management, and reporting. Everyone is trying their best to work in the context of the ‘new normal,’ which means fewer dollars; however, tight budgets shouldn’t ‘squeeze’ evaluation out. Rather, find ways to ‘squeeze evaluation into the picture.’ Identify the most cost-effective way of staffing the evaluation: someone on the agency staff could be tasked with the mentoring program evaluation in addition to other agency-wide evaluation responsibilities, or you might engage an independent evaluator or a local community college faculty member or graduate student to assist with this important part of your work.
  • Do not be afraid of evaluation results…evaluation isn’t a ‘gotcha game’ but rather an essential window on what is working in your program, along with useful insight on where adjustments or modifications need to be introduced. There continues to be palpable nervousness around evaluation, though it has diminished in the last few years. That said, evaluation reports, which should include an executive summary, should be shared and reviewed with staff, board members, stakeholders, funders, and prospective funders; highlight findings on your website, in e-newsletters, and through social media. Findings and results are an often-used path to sustainability with your current funders. If you are disappointed in the results, meet with key staff and the evaluator(s) to identify next steps. In addition to formal reports, anecdotal feedback from youth participants, mentors, and parents/caregivers is extremely helpful.
  • As your evaluation continues to be strengthened, begin to think of ways in which you and your team can partner with researchers in your community (or who are represented here on The Chronicle) to find opportunities to present and/or publish findings. Share the good news about your results; you might be at the forefront of doing something innovative that just hasn’t been recognized yet.
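
As a small illustration of the ‘more modest level of effort’ Janet describes, the sketch below runs a simple pre/post comparison of scores from a single validated instrument using a paired t-test. The scores and sample size are hypothetical, and in practice the instrument, timing, and analysis would be chosen with your evaluator.

```python
# Illustrative sketch only: pre/post comparison on one validated instrument.
# Scores below are hypothetical; in practice they would come from the same
# youth measured at program entry and again at follow-up.
from scipy import stats

pre_scores  = [2.8, 3.1, 2.5, 3.4, 2.9, 3.0, 2.7, 3.2]   # baseline (hypothetical)
post_scores = [3.2, 3.3, 2.9, 3.6, 3.1, 3.4, 2.8, 3.5]   # follow-up (hypothetical)

mean_change = sum(post - pre for post, pre in zip(post_scores, pre_scores)) / len(pre_scores)
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

print(f"Mean change: {mean_change:.2f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

Keep in mind that a pre/post change shows whether youth improved, not whether the program caused the improvement; as noted elsewhere in this forum, a comparison group is needed for that stronger claim.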

 

Jean Rhodes – Director, The Center for Evidence-Based Mentoring, University of Massachusetts-Boston

You’ve touched on an important issue, Mike. Evaluators are often called in after programs have been developed, and often after the programs have been widely rolled out to the field! Instead, evaluators and practitioners should work together to specify the goals and procedures of the particular approach to mentoring. Adar Ben-Eliyahu has written a nice, clear piece on the topic of experimental designs. But, in essence, I would recommend that, whenever possible, experimental designs be used and data from multiple sources and methods be collected. And, in the course of designing the evaluations, I would attend to the following:

1. Understand Variation

  • Even the best models are likely to be more helpful in some contexts than others, and for some groups than others.
  • Systematic comparisons of practices of differing type and intensity are needed within all relevant program areas, including recruitment, training, matching, supervision, and mentor/mentee activities. Comparing traditional approaches to newer models would help to advance practice.
  • Also necessary is information regarding the core elements of successful mentoring relationships, and how these might vary as a function of the needs and characteristics of particular youth. Such information has become increasingly important, particularly as programs are encouraged to serve specialized populations or are implemented in new settings. There is growing evidence, for example, that boys and girls experience and benefit from the mentoring process in different ways, with girls reporting more troubled maternal relationships at baseline and being more reactive to relationship disruptions. The same may hold true for younger versus older youth and for youth from differing ethnic backgrounds. Similarly, many of the young people served by mentoring programs have special needs. They may be in foster care, have learning disabilities, have a parent who is incarcerated, etc. (Screening tools that permit greater specification of youths’ baseline risk, strengths, and circumstances, as well as the strengths of their families, are likely to be particularly helpful in this regard.)
  • Since mentoring is often included as part of a larger youth development program that has several different components, evaluators should compare stand-alone mentoring programs to those that integrate mentoring with other services, and examine the extent to which mentoring adds to the effectiveness of programs with multiple components.

2. Understand Quality and Duration

  • Although policy makers are increasingly calling for quality mentoring programs, exactly how quality is defined and measured remains somewhat unclear.
  • Systematically assessing program quality across a range of relationships (youth-volunteer, youth-staff, volunteer-staff, staff-administration) and relating these assessments to outcomes can provide an empirical rationale for supporting enhancements in mentoring programs.
  • Moreover, research to date has focused predominantly on the effects of mentoring over a relatively short period of time. The more substantial benefits that may be associated with longer-term (i.e., several year) relationships have yet to be examined.
  • Another important consideration may be whether relationships are continued for the full duration of whatever expectations were established, even if for only a short period of time.
  • Research on the role of duration and intensity, including the minimum required dosage to achieve various outcomes, the role of expectations, and the effects of long waiting lists is needed.

3. Assess the Underlying Processes of Mentoring

  • As Dr. Carla Herrera recently noted, there is a need for program developers to articulate the goals and the models of change that guide their approach, including the processes that are thought to mediate outcomes and their temporal ordering. Indeed, although a relationship between a caring adult and a young person lies at the heart of mentoring, little is known about how such relationships actually influence youth outcomes.
  • By more thoroughly examining relationship processes, evaluators can help mentoring programs develop more effective strategies for training and supervising mentors. Evaluators should investigate relationship processes from both the mentors’ and mentees’ perspectives with attention to the broader influences of families, schools, and communities.
  • Qualitative research, which provides in-depth descriptions of how relationships develop and why they sometimes fail, and longitudinal studies of outcomes both have a vital role to play in theory development.

There are a lot of suggestions here, but the bottom line is that collaboration between evaluators and mentoring practitioners will help to advance the field!
So what do you think? Share your evaluation tips, questions, and stories in the comments below!