Evidence Corner: Looking for Evidence in All the Right Places

by David DuBois

I am very pleased and honored to have the opportunity to contribute to the Chronicle of Evidence-Based Mentoring as the editor of the site’s Evidence Corner. In the months to come, I look forward to sharing and discussing the evolving evidence base on a wide range of topics that inform youth mentoring practice and policy. For this inaugural post, however, it seems most appropriate to take a step back and briefly consider what we actually mean by evidence. My impression is that this term is often taken to be more or less synonymous with research.* To view evidence and research as one and the same, however, is in my estimation both problematic and limiting. I would much rather our field embrace a more inclusive viewpoint that values evidence from multiple sources and modalities. Viewed from this perspective, the focus can shift to the essential features and characteristics of evidence in general rather than its specific source or origins.

A group of like-minded scholars in the UK (Pawson and colleagues, 2003) has proposed that useful and reliable knowledge to inform social programs (such as mentoring) can come from several different sources, including:
– Service Users/Care Providers
– Practitioners (Line & Management Staff)
– Organizations
– Policy Community
– Research

Applied to youth mentoring, important information and insights may thus be distilled not only from research, but also from youth, mentors, and parents (among other potential stakeholders or participants in any given mentoring initiative), from program staff and leadership, and from legislative or other policy-related initiatives. Our existing knowledge base includes several laudable efforts by researchers to obtain and synthesize input from these latter sources; Renee Spencer’s ground-breaking investigations into how mentoring relationships are experienced by mentors and youth in their own words are one prime example. A truly democratized approach to knowledge development, however, would be one in which practitioners, mentors, youth, policy-makers, and others are also able to share their voices directly, unmediated and unfiltered by the dictates of the research paradigm.** This might seem a subtle or even esoteric distinction. Yet consider the uproar that would undoubtedly ensue from researchers if they were restricted to sharing knowledge only at the pleasure of practitioners or others outside of the academic community.

These same UK authors go on to propose general criteria and questions for evaluating what constitutes viable knowledge from any source. These guidelines, captured in the acronym TAPUPAS, are as follows:
– Transparency – is it open to scrutiny?
– Accuracy – is it well grounded?
– Purposivity – is it fit for purpose?
– Utility – is it fit for use?
– Propriety – is it legal and ethical?
– Accessibility – is it intelligible?
– Specificity – does it meet source-specific standards?

Unfortunately, with the exception of research, the source-specific standards referred to in the last of these criteria are not well developed. As such, the guidelines needed to separate anecdotes and casually formed (or agenda-driven) opinions from well-grounded insights and information in areas such as practitioner or program participant knowledge are largely lacking. It would be wonderful to see the field of youth mentoring be at the forefront of advancing such standards. This work, in my view, will be essential for ensuring that more diverse forms of knowledge can be put forth in a manner that is responsible and most likely to advance the field. Developing these types of standards is also likely to be critically important for advancing the status and credibility of knowledge that emerges from outside the realm of research.

Historically, viable outlets for sharing and disseminating non-research forms of knowledge have been limited as well. If knowledge was not codified in an academic journal article or report, its prospects of reaching beyond the locality where it emerged were slim. Recent years, however, have seen the advent of information technology and web-based resources like the Chronicle, coupled with a seemingly ever-growing array of state, regional, and national conferences and other venues for sharing practitioner and stakeholder knowledge. As such, the field now has unprecedented opportunities for the diffusion of diverse forms of knowledge. This is not to say, of course, that the playing field is level. The number and accessibility of outlets for research, and the status and credibility attached to them by many (including funders), continue to far outpace those associated with less traditional forms of evidence.

As I embark on my stewardship of this corner of the Chronicle, I ask for your help with advancing the pluralistic framework of knowledge development that I have attempted to briefly articulate in this opening contribution. Put more simply, please help me not only “talk the talk,” but also “walk the walk.” I look forward to the journey!

Notes:
*Increasingly, it seems, evidence in some quarters is being conceived of even more narrowly as consisting primarily, or at least preferentially, of just one type of research: controlled evaluations of program effectiveness. I will leave commenting on this trend, which I also find troubling, to another time.

**The stakes involved in whether we are able to make a shift toward a broader and more egalitarian viewpoint on what constitutes legitimate evidence to inform practice and policy were articulated with eloquence by Marston and Watts (2003):
There is a risk that evidence-based policy will become a means for policy elites to increase their strategic control over what constitutes knowledge about social problems in a way that devalues tacit forms of knowledge, practice-based wisdom, professional judgment and the voices of ordinary citizens…We have tried to assert a more encompassing and ultimately democratic definition of what can count as ‘evidence’ (Marston & Watts, 2003, pp. 158-159).

References:

Marston, G., & Watts, R. (2003). Tampering with the evidence: A critical appraisal of evidence-based policymaking. The Drawing Board: An Australian Review of Public Affairs, 3, 143-163. Retrieved from http://pandora.nla.gov.au/pan/13501/20030611-0000/www.econ.usyd.edu.au/drawingboard/journal/index.html

Pawson, R., Boaz, A., Grayson, L., Long, A. F., & Barnes, C. (2003). Types and quality of knowledge in social care. London: Social Care Institute for Excellence. Retrieved from http://www.scie.org.uk/publications/knowledgereviews/kr03.pdf