New, Exciting…and Effective? Tips on Evaluating Innovative and New Programming
Organizations across the mentoring field are evolving rapidly, often adapting or launching new and innovative programs to meet local needs (and leverage local assets). Within the Big Brothers Big Sisters network, agencies are adding to or enhancing our core model at a surprising rate: a recent survey found that 86% of our agencies are currently offering innovative programming (such as workforce development, literacy, or STEM).
While the philanthropic community often rewards such ingenuity and focus, it also tends to hold high expectations for outcome measurement and impact. Yet evaluating new and emerging programming can be challenging. For one thing, a new program is a work in progress. Taking a concept to operation necessarily means making mistakes and taking corrective action, and it requires staff to try new approaches and to adjust assumptions with some frequency. This dynamic, iterative process flies in the face of traditional evaluation methods aimed at answering questions such as, ‘Does this program work?’ or, ‘Is this program more effective than business as usual?’
At Big Brothers Big Sisters of America, we’ve learned that rushing into a randomized control trial too early in the development of a new program can be a frustrating (and costly) experience. This type of study – the “gold standard” in human service evaluation – demands a high degree of programmatic consistency and fidelity during a period when program staff are rapidly learning what isn’t working and what needs to be improved.
As a result of these experiences, the Research, Innovation, and Growth team at Big Brothers Big Sisters of America has created a phased approach to developing and evaluating new programming. Although the approach is intuitive, we believe that a concerted effort to follow the steps below will result in stronger programming with more demonstrable outcomes.
Phase One: Planning & Program Design
Although this phase is often the hardest to raise funds for, we believe it is the most crucial step in developing strong new or enhanced programs that require minimal re-work and adjustment to reach targeted outcomes. We do our best to dedicate at least six months to the following key steps:
- Understanding the need: the best source of information is often the intended service recipients themselves. Focus groups and surveys can be powerful tools to identify key needs, obstacles, and recommendations for services.
- Learning from others: no matter how innovative your idea, it is likely that someone has tried it already and will have the scars to prove it. Interview similar programs, experts, and key stakeholders (such as school administrators or college admissions staff).
- Conducting a literature review: if your concept has been piloted before, chances are it has also been evaluated. If you don’t have staff with access to (or interest in) scholarly journals, see if you can hire an independent evaluator to conduct a literature review for you. The research she unearths will help you build a stronger program and make a stronger case to funders.
- Drafting a theory of change and/or logic model: these documents are where the serious program design begins. They help you identify specific outcomes and the activities needed to achieve them. Again, if your staff has limited experience developing these conceptual models, consider working with an independent evaluator.
- Sketching a program design, timeline, and budget: this is where you answer questions such as, ‘What are the major activities our innovation will include?’ and, ‘Approximately how much will they cost?’
Phase Two: Program Development & Evaluation Design
Although this phase can be blended into the first for fundraising and budgeting purposes, we often prefer to group it separately. After all, until you’ve completed the program design, you really have no idea how expensive or time-consuming the program development will be.
- Developing necessary materials: when I ran a new mentoring and career development program in New York, we often built the plane while flying it – creating programmatic materials a week or two before their use. If you can attract sufficient investment, however, it is ideal to develop most of the marketing and training materials you are going to need before you launch the program (easier said than done, I know).
- Obtaining input from key stakeholders: obtaining input early and often can save you a lot of time, money and frustration later. Depending on your organization and program, this might include obtaining input on your program design and draft materials from your board, from anticipated service recipients (or their parents), and from other experts in the field.
- Determining key performance indicators and data collection methods: during the program development phase, decide what data you’ll want to collect once the piloting begins and how you’ll collect it. This will likely include outputs (such as youth attendance, number of events held, frequency of match support provided) that will tell you whether the program is being implemented as expected, and customer satisfaction data from participants. You may also want to work with an independent evaluator to collect pre/post outcomes data to see how youth attitudes, knowledge, or behavior are changing over time. Keep in mind that the standard questions or data you collect may need to be revised for your next program year as you learn more about participant experiences and as you tweak program delivery.
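To make the pre/post idea concrete, here is a minimal sketch of how an evaluator might summarize change in a youth outcome measure once two waves of survey data are in hand. The file name and column names (participant_id, wave, score) are hypothetical placeholders rather than any standard Big Brothers Big Sisters data format, and a real evaluation would involve a more careful analysis.

```python
# Minimal sketch: summarizing pre/post survey scores for a pilot cohort.
# Assumes a CSV with one row per participant per survey wave, with columns
# participant_id, wave ("pre" or "post"), and score. These names are
# illustrative placeholders, not a prescribed data format.
import pandas as pd

surveys = pd.read_csv("pilot_surveys.csv")

# Reshape so each participant has a 'pre' and a 'post' column, then compute change.
wide = surveys.pivot(index="participant_id", columns="wave", values="score")
wide["change"] = wide["post"] - wide["pre"]

print(f"Participants with both waves: {wide['change'].notna().sum()}")
print(f"Average pre-program score:  {wide['pre'].mean():.2f}")
print(f"Average post-program score: {wide['post'].mean():.2f}")
print(f"Average change:             {wide['change'].mean():.2f}")
```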
Phase Three: Piloting & Formative Evaluation
When piloting a new enhancement or program, the key question you or your evaluators should be asking isn’t, ‘Does it work?’ but rather, ‘How can it be optimized?’ No program works seamlessly in its first iteration. Anticipate changes by developing a continual feedback loop – collecting data that can inform further programmatic refinement and improvement. This initial data will also inform more rigorous evaluations in the future.
- Piloting the program or enhancement: this is where the rubber meets the road. Create a culture of learning by regularly debriefing with your team, focusing on process improvements and lessons learned. If your team understands that they aren’t expected to get it exactly right the first time around, they’ll be more likely to keep an open and attentive mind regarding potential improvements.
- Collecting implementation and feasibility data: be a stickler about data collection. Create a simple dashboard using line graphs to track key measures (such as attendance or retention) over time; a minimal sketch of such a dashboard follows this list. Discuss the data with your team at least quarterly to help prompt creative ideas and solutions.
- Collecting customer satisfaction data: when I was running a new series of workshops, we asked participants via survey after each workshop what they liked, what they recommended we change, and whether they would recommend that workshop to a friend. We also conducted annual customer satisfaction surveys for both mentors and mentees, and periodic focus groups to learn more about what was working and what wasn’t.
- Revising the program as needed: use the data you are collecting to make course corrections. Foster open and honest dialogue with your staff and your customers by keeping the focus on learning. Ultimately, both staff members and program participants will be proud to help shape a new program that will impact many youth in the future. Be careful to document lessons learned and decisions made, as staff transitions during this phase can set the entire project back significantly without proper documentation.
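As a concrete (if deliberately oversimplified) illustration of the line-graph dashboard mentioned above, the sketch below plots a pilot’s monthly attendance rate. The figures are invented for illustration only; in practice they would come from your own attendance records, and a spreadsheet works just as well as code.

```python
# Minimal sketch: a one-chart "dashboard" tracking monthly attendance during a pilot.
# The numbers below are hypothetical placeholders; replace them with figures
# pulled from your attendance records each month.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
attendance_rate = [0.72, 0.68, 0.75, 0.81, 0.79, 0.84]  # share of enrolled youth attending

plt.figure(figsize=(6, 3))
plt.plot(months, attendance_rate, marker="o")
plt.ylim(0, 1)
plt.ylabel("Attendance rate")
plt.title("Pilot attendance by month")
plt.tight_layout()
plt.savefig("attendance_dashboard.png")  # share at quarterly team debriefs
```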
The length of time your organization dedicates to piloting and enhancing your program will vary widely. Some evaluators recommend staying in this phase indefinitely – taking advantage of the virtuous cycle of continuous improvement to drive ever-better outcomes and more efficient services. For outcomes evaluation (or funder appeasement) purposes, however, you may wish to aim for a level of relative programmatic stability and formalize a program model so that you can conduct a randomized control trial and definitively answer that critical question: ‘Does this program work?’
Many of you may be thinking that this all sounds great, but wondering when you would ever have the time or money to execute it. The truth, I’ve found, is that when these steps are clearly articulated, funders get excited about supporting a new or “signature” initiative that is conscientious about quality and driven by data.
– Venessa Marks, Vice President of Research, Innovation & Growth, Big Brothers Big Sisters of America