As some of you may already know, I describe myself as a "current explorer of realist evaluation" and am working on a realist review and evaluation of the Violence Prevention Education for front line healthcare staff in emergency departments in British Columbia, with Sharon Provost through the Partnership for Work, Health & Safety at the University of British Columbia (much more to come on this in subsequent blog posts).
I first learned about the realist methodology during a course I took during my graduate studies at the University of Melbourne entitled Debates in Evaluation with Dr. Brad Astbury. In this course, we examined the origins and evolution of evaluation theories, models and approaches, in particular the role of evaluation theory; pioneering figures and major debates in the field of evaluation; approaches to classifying evaluation theories; as well as the relationship between evaluation theory and contemporary practice (https://handbook.unimelb.edu.au/2019/subjects/educ90715). I loved the whole course (in fact, I really think this course should be a requirement of the program, as it is so foundational to the way we think about evaluation...but I digress) and I took a particular interest in the topic of the realist approach to evaluation. For me, the most important and useful aspect of realist evaluation was its ability to tell us not just whether a program 'worked' or not but how and why it worked. Put more concisely, a realist approach to evaluation aims to identify the underlying generative mechanisms that explain how specific outcomes are caused in certain contexts.
Here is my brief overview of the realist evaluation approach (excerpted from an article I wrote for the Debates in Evaluation class):
Realist evaluation is rooted in the realist philosophy of science. As realists, Pawson and Tilley (1997) contend that there is a reality which exists independently of the human mind; however, human experiences and understandings are always filtered through cognitive structures, including experiences, beliefs and assumptions, as well as through language and culture. Consequently, objective and absolute knowledge is not possible because our understanding of reality is filtered through these perceptual and cognitive structures (Archer, 1995; Bhaskar, 1975). The goal then, according to realists, is to incrementally improve our understandings of the world through the continual process of testing hypotheses against reality.
This grounding in realist philosophy has important implications for the conceptualization of social programs. For Pawson and Tilley (2004), the nature of programs can be characterized by four features: they are embedded in layered social systems; they are theories or hypotheses of the human imagination about how to address a social problem; they are active in that their intended effects work through the reasoning and volition of their subjects; and they are open systems which are impacted by a variety of externalities (pp. 3-5).
Based on this conceptualization, Pawson and Tilley (1997) explain how a program brings about change through three fundamental concepts. First, a mechanism, which is often hidden, describes what it is about an intervention that brings about change. Second, the context is the set of conditions within which a program operates that is relevant to the operation of program mechanisms. Third, outcome patterns are the variations in outcomes due to differences in contexts and mechanisms. These three concepts form the basis for the realist formula of causation:
Mechanism + Context = Outcome (Pawson & Tilley, 1997)
This CMO configuration (CMOC) formula is grounded in Harré’s (1972) generative model of causation, which posits that a particular set of contextual factors can trigger a mechanism, resulting in change. Conversely, certain aspects of the context may also prevent particular mechanisms from being triggered, resulting in different outcomes.
Guided by this understanding of how an intervention brings about change, the realist evaluator develops a hypothesis about why a program works, for whom and in what circumstances and tests it empirically to explain how the causal influences generated by the program resulted in particular outcomes in specific contexts. The result is a more refined program theory which can then be further tested and developed. Figure 1 provides an overview of this realist evaluation cycle.
Figure 1: The Realist Evaluation Cycle
Source: Pawson and Tilley (1997, p. 85)
While complete and definitive knowledge about a program can never be attained, realist evaluation reflects the Popperian view that this continual testing and refining of program theories enables a better approximation of truth and a continual improvement in social programs. For both Pawson and Tilley, a close relationship ought to exist between evaluation and social reform (Pawson, 2011, pp. 195-196; Tilley, 2000, p. 2).
There are, of course, plenty of really great online resources on realist evaluation, if you are interested in learning more, including (but certainly not limited to):
The RAMESES Project, a website filled with realist resources, including quality standards and training materials: https://www.ramesesproject.org/
An interview with Ray Pawson himself describing the basics of the approach:
A solid introduction to the approach on Better Evaluation:
Over the coming months, I will be blogging about my experiences with both a realist review and a realist evaluation of the Violence Prevention Education for front line healthcare staff in emergency departments in British Columbia - or what I like to refer to as my "realist journey." It has been a fun and exciting journey, complete with mountains and valleys, wherein I have had the opportunity to experience and explore the relationship between realist evaluation theory and practice.
I invite you to join me in this learning journey not only by reading this blog series but by engaging in this expedition - ask questions, post comments, share with colleagues, challenge something I say!