On Methods: What’s a meta-analysis, anyways?
There is often considerable fanfare when a new meta-analysis is published. What’s the excitement about anyways? Don’t most meta-analyses seem to be saying things we already know from previous research? This is somewhat true, as meta-analyses summarize previous research findings. However, in contrast to a traditional narrative review, a meta-analysis uses statistical analyses to systematically aggregate the findings.
Prior to conducting a meta-analysis, researchers locate all relevant research through databases or by contacting researchers directly (via email, for example) to request manuscripts that may not be published but may be relevant to the research question (e.g., conference presentations). The findings of each study are translated into an effect size (a standardized measure of the magnitude of impact), and an overall effect size is calculated. Confidence intervals may also be calculated: ranges of values that, with a stated level of confidence (e.g., 95%), are expected to capture the true effect.
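To make “effect size” and “confidence interval” concrete, here is a small Python sketch using Cohen’s d (the standardized mean difference), one common type of effect size. The group means, standard deviations, and sample sizes below are hypothetical, for illustration only.

```python
import math

def cohens_d(mean_tx, mean_ctrl, sd_tx, sd_ctrl, n_tx, n_ctrl):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_tx + n_ctrl - 2))
    return (mean_tx - mean_ctrl) / pooled_sd

# Hypothetical mentoring-program study: mentored vs. comparison group
n_tx, n_ctrl = 100, 100
d = cohens_d(52.0, 50.0, 10.0, 10.0, n_tx, n_ctrl)

# Approximate sampling variance of d, and a 95% confidence interval
var_d = (n_tx + n_ctrl) / (n_tx * n_ctrl) + d**2 / (2 * (n_tx + n_ctrl))
ci = (d - 1.96 * math.sqrt(var_d), d + 1.96 * math.sqrt(var_d))
```

Note that with only 100 participants per group, the interval around a small effect is wide and spans zero, which is why aggregating across many studies is so valuable.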
Additionally, different characteristics or variables (i.e., moderators) that may lead to differential influences on the findings can be analyzed. For example, a mentoring program’s practices may influence (or moderate) the program’s success.
Steps in conducting a meta-analysis
Before beginning to analyze the data, researchers must decide on a number of steps or rules to guide their meta-analysis. First, inclusion criteria are outlined. For example, does the research need to be a randomized controlled study? Are there certain criteria the participants from the study must meet?
After outlining these criteria, a comprehensive review is conducted and codes or categories are created for each component of the reviewed studies. For example, how matching was conducted in the program may be coded.
In the final step, researchers standardize the effect sizes of the studies so that an overall effect size can be computed. Given that there are several types of effect sizes, it is crucial to choose a single type so that the results can be aggregated into an overall effect size.
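The aggregation step can be sketched in a few lines of Python. This is a minimal illustration of fixed-effect, inverse-variance weighting, one common pooling approach; the study effect sizes and variances are made up for the example.

```python
import math

# Fixed-effect (inverse-variance weighted) pooling of standardized effect
# sizes. The effects and variances below are hypothetical, for illustration.
effects   = [0.30, 0.15, 0.25, 0.10]   # one standardized effect per study
variances = [0.04, 0.02, 0.05, 0.03]   # sampling variance of each effect

# Each study is weighted by the inverse of its variance, so larger,
# more precise studies contribute more to the overall estimate.
weights = [1 / v for v in variances]
overall = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Standard error and 95% confidence interval of the pooled effect
se = math.sqrt(1 / sum(weights))
ci_low, ci_high = overall - 1.96 * se, overall + 1.96 * se
```

Inverse-variance weighting is the design choice here: rather than simply averaging the four effects, it lets the most precise studies pull the overall estimate toward their values.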
Often, a further step in conducting a meta-analysis is to investigate whether certain study characteristics lead to differential outcomes; these are known as moderator analyses.
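A moderator analysis can be sketched as a subgroup comparison: pool the effects separately for studies with and without a given characteristic, then compare the pooled values. All study data below are hypothetical, for illustration only.

```python
# Simple moderator (subgroup) analysis: pool effect sizes separately within
# each level of a study characteristic, then compare the pooled values.
studies = [
    {"effect": 0.40, "variance": 0.04, "matched_on_interests": True},
    {"effect": 0.45, "variance": 0.05, "matched_on_interests": True},
    {"effect": 0.18, "variance": 0.03, "matched_on_interests": False},
    {"effect": 0.22, "variance": 0.04, "matched_on_interests": False},
]

def pooled(subset):
    """Inverse-variance weighted mean effect for one subgroup."""
    weights = [1 / s["variance"] for s in subset]
    return sum(w * s["effect"] for w, s in zip(weights, subset)) / sum(weights)

matched     = pooled([s for s in studies if s["matched_on_interests"]])
not_matched = pooled([s for s in studies if not s["matched_on_interests"]])
difference  = matched - not_matched  # the moderator's apparent impact
```

A full moderator analysis would also test whether the subgroup difference is statistically reliable, but the core idea is this split-and-compare logic.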
An example from mentoring research
Dubois et al. (2011) conducted a meta-analysis on the effectiveness of mentoring programs based on 73 independent evaluations of mentoring programs published from 1999 to 2010, drawn from 82 studies. Their inclusion criteria included “A program or intervention that is intended to promote positive youth outcomes via relationships between young persons (18-years-old and younger) and specific non-parental adults (or older youth) who are acting in a nonprofessional helping capacity.” They limited their analysis to studies that had both mentoring programs and comparison groups. In doing so, Dubois et al. were able to rule out natural development and the passing of time as explanations for change in a variety of outcomes (e.g., social relationships, emotional well-being, conduct problems, physical health, and academic/school-related outcomes). They found that these outcomes improved more for youth in mentoring programs in approximately half (52%) of the studies they analyzed. Collapsing across all of these outcomes and studies, the overall effect size was .21, suggesting that mentoring programs have small but meaningful effects on youth outcomes.
To better understand what leads to these effects, Dubois et al. also conducted moderator analyses, in which they compared the effects of mentoring programs based on seven different program characteristics. The overall findings suggest that mentoring programs that match mentors with mentees on these characteristics (moderators) yield better youth outcomes. For example, youth who were matched with their mentor based on shared interests had an effect size of .41 for the improvement in youth outcomes, whereas those who were not had an effect size of .20. In other words, when mentors and youth are matched on shared interests, the effects of the mentoring program are greater.
In summary
A meta-analysis is a statistically based review of research findings. It is built on a comprehensive review of the literature and summarizes the current empirical state of the research. Importantly, meta-analyses can also examine characteristics that may influence the findings (i.e., moderators).
Some strengths and cautions
Strengths
– the more studies available, the more accurate the meta-analysis will be
– can empirically test the overall effects across a number of studies
– results can be generalized to the larger population
– can quantify and analyze inconsistency of results across studies
– can consider characteristics that may explain variation across studies (moderators)
Cautions
– when only a few small studies are available, a meta-analysis is less indicative of what a large-scale study would find
– the meta-analysis cannot account for weak design of studies
– studies that are not published (e.g., conference presentations) are often harder to locate and may be excluded; when published findings systematically differ from unpublished ones, the resulting distortion is referred to as “publication bias”