“If I look at the mass, I will never act. If I look at one, I will”: Why statistics are no match for a good story

by Jean Rhodes

In an interesting study (Kogut & Ritov, 2005), people were asked to donate money to help Rokia, a poor seven-year-old girl from Mali. Many were so moved by her story that they gave generously. But when another group was told the same story along with statistics about the scope and effects of poverty in Africa, they were less inclined to give. As Yale psychologist Paul Bloom has noted, “When it comes to eliciting compassion, the identified individual victim, with a face and a name, has no peer. Psychological experiments demonstrate this clearly but we all know it as well from personal experience and media coverage of heroic efforts to save individual lives.” As Mother Teresa once said, “If I look at the mass, I will never act. If I look at one, I will.”

This so-called “identifiable victim effect” was observed nearly a half century ago by economist Thomas Schelling, who wrote: “Let a six-year-old girl with brown hair need thousands of dollars for an operation that will prolong her life until Christmas, and the post office will be swamped with nickels and dimes to save her. But let it be reported that without a sales tax the hospital facilities of Massachusetts will deteriorate and cause a barely perceptible increase in preventable deaths—not many will drop a tear or reach for their checkbooks.”

In addition to eliciting more generosity, stories with an “identifiable victim” mute our tendency to think critically and scrutinize. Decades of somewhat disappointing data about mentoring program effectiveness have simply been no match for an emotionally appealing story of a self-sacrificing volunteer who helps to transform a young life. As cognitive psychologist Bahador Bahrami described, “The more emotionally engaged, the more gripping and vivid the story is, the less attention we’re paying to sort of the apparatus of this story and questioning and wondering and being on guard and monitoring these questions about, should I trust this source? What are the discrepancies here and so on? You’re just immersed in that perspective.” This tendency runs deep in the field of mentoring, where stories of lives transformed by devoted mentors abound. Intellectually, we may know that the young person and her mentor are outliers (the Facebook equivalent of the perfect family on the perfect vacation), but emotionally, we feel the tug.

Moreover, once anchored, the intuitively appealing framing of formal mentoring relationships as capable of routinely delivering transformative experiences has fueled confirmation biases. Such biases lead us to elevate facts that confirm our beliefs and to instinctively discount or ignore alternative views. Confirmation biases have an evolutionary advantage. It’s a “basic human survival skill…we push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself” (Lupia, 2015). Or as Stanford psychologist Leon Festinger wrote in the 1950s, “A man with a conviction is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point.” This makes it difficult for nuanced mentoring findings to break through and falsify our beliefs. The cranky researcher with reams of data may be telling us one thing, but the high-production video of the selfless mentor rings far truer.

There’s also the problem of equity bias, the tendency to weigh all opinions (and by extension research findings) as equally valid, irrespective of the opinion holder’s or program developer’s expertise. This bias runs deep in mentoring and cuts both ways: people with relatively lower expertise often think they know as much as everyone else, while experts tend to rate themselves on par with everyone else (the Dunning–Kruger effect). Moreover, people tend to favor their own opinion over expert advice, even when they might benefit from following the advisor’s recommendation. In one study, cognitive psychologist Bahador Bahrami (2015) found that participants assigned nearly equal weight to their own opinions as they did to those of people with more expertise. This tendency persisted even after participants were told about the expertise gap and even when they had a monetary incentive to maximize collective accuracy! The belief that everyone’s viewpoints should be weighed equally (or that every data point deserves equal weight regardless of its source) complicates decisions about how best to invest in mentoring programs. This overconfidence in personal expertise is particularly rampant in the field of mentoring, where the familiar, easy-to-visualize concept of “mentoring” leads well-meaning philanthropists to equate their success in one arena (e.g., technology, finance, sports, politics) with their likely success and know-how in what seems to be a pretty straightforward approach to helping youth.

To complicate matters, researchers, practitioners, and policymakers actually define “research” and “evidence” quite differently. Whereas researchers often employ the two terms interchangeably to mean “findings derived from scientific methods” (Tseng, 2012, p. 6), studies suggest that practitioners tend to define evidence more broadly as stemming not only from scientific methods, but also from consumer satisfaction surveys; parent, youth, and community feedback; and more (Honig & Coburn, 2008). Consequently, they may weigh a tally of responses from a non-representative sample as being on par with the claims of a large, federally funded, peer-reviewed randomized trial. Researchers, however, would find evidence only in the latter.

Still, to an untrained eye, a finding is a finding, and technical differences in study designs are, well, let’s just say, academic. An added complication is that there is a growing antipathy toward experts and, in the face of claims about “fake news,” growing suspicion that experts may have a hidden agenda. A recent article in the Guardian noted that, “Not only are statistics viewed by many as untrustworthy, there appears to be something almost insulting or arrogant about them….People [also] assume that the numbers are manipulated and dislike the elitism of resorting to quantitative evidence.”

To be sure, researchers like myself share some of the blame. Practitioners with pressing program and funding concerns can easily grow weary of absent-minded professors’ tendency to poke and prod topics that seem arcane and irrelevant. Furthermore, our communications are often inscrutable and riddled with caveats, nuances, and frustrating ambiguities. Anyone who has ever sat through a research presentation is familiar with such equivocation, e.g., “Yes, the program had a small positive effect on 7th grade boys’ attendance, but only at wave two, and it had the opposite effect on 9th grade girls’.” It is this messy context that sometimes gives data meaning, enables us to draw meaningful conclusions, and adds nuance and humility to our claims. But, as H. H. Munro once observed, “A little inaccuracy sometimes saves tons of explanation.” Ironically, efforts to be complete and transparent can fuel misinterpretation and doubt, thereby driving the audience to reach for the boldly bulleted findings that have not undergone the rigors of peer review or the outlier anecdotes that are construed as a stand-in for all mentoring relationships.

All of these tendencies will, no doubt, be on display next week at the Mentoring Summit: the nerdy presentations with complicated statistics, the heartwarming stories of the six-year-olds with brown hair, and the glossy, data-light PowerPoints given as much credence as a peer-reviewed study. We may not be able to change our confirmation and equity biases, but we can at least enjoy the show.