Danielle: Okay, so today we have an interview with Robyn Sachs, who specialises in research and evaluation. I'm really excited to have you, Robyn, for this chat today to talk more about evaluation and the bigger picture of what it can do for us as social entrepreneurs, and the potential it has to help us plan and navigate through our social enterprise work. So let's just start off: if you can please introduce yourself and share a little bit of your background and experience working in research and evaluation.
Robyn Sachs: Great, thanks Danielle. I'll say I'm very excited to speak with you as well and to share some of my excitement about evaluation as a field. I started out my undergrad degree studying psychology and clinical science and was constantly searching for something that met my areas of passion. I was very surprised to find out that I actually liked doing research-type skills, but was not really drawn to go into research in an academic setting, because it didn't feel like it served the kind of purpose I wanted to serve, which is to get out of an academic setting and into the real world, dealing with people who are actually addressing some of the key issues in the world. I think research has a place, but I feel like its place is really kept in an academic setting, in academic journals and that kind of thing. So when I discovered evaluation, I was in my last year of my undergrad and was immediately drawn to it. What I felt it did was take research skills and apply them to a real-world setting, and the other thing that was really appealing to me about it was that it was a very interpersonal endeavour. So it really requires some skills of working with... Building my research and evaluation skills, I spent a few years doing some programming in hospitals as a project coordinator, doing some smoking cessation work in hospitals, and that helped me build some knowledge around how programs work, how you design programs, what kinds of things you have to think about to do that. I also did some evaluation as part of that job, and then started a Masters in health promotion, a field that really has a lot to do with community development, promoting, let's say, the best in people and in communities and individuals. It's not really about physical health per se, but health systems.
I really tried to build my own research skills while doing that Masters, focusing especially on qualitative work, which is a really important part of evaluation.
Danielle: Great, that's fantastic. So while your schooling and experience are in the healthcare world, you've also worked with a few other community-based organizations, or organizations with social impact, as well, right?
Robyn Sachs: Absolutely, yeah. So I started out in things that looked more like health promotion: smoking cessation, some physical activity, those types of things, as well as mental health and addictions. But it's always been a goal of mine to move more into community development, community-based programming, human services... that type of thing. So I'm pretty excited this year, for example, to be working with Community-wise Resource Centre, which is where you and I met. They work with up to eighty different nonprofit organizations who are doing various different things out in the community and in the world, so that's a really great opportunity.
Danielle: Yeah, that's fabulous, excellent. That's a good background to kick us off, so now we can dive into more of what we want to talk about today. In our past conversations, what really struck me as unique about the way you talk about evaluation was your perspective on going into the work. You've talked a little bit about the interpersonal relationships, but can you share a bit more about your point of view on the goals and objectives of evaluation work, and why it's important for anybody running a program or an initiative that is hoping to achieve a certain goal or certain long-term impacts?
Robyn Sachs: Sure... so the day that we met was at a community conference where I spoke about my work for four minutes or less, and I had a lot of people coming up to me and saying, I've never seen anyone so enthusiastic or excited or happy about evaluation.
Danielle: It was such a pleasant surprise, it was really exciting.
Robyn Sachs: And unfortunately, you know, I think it speaks to the fact that evaluation has become something that currently looks a little bit more like accountability or reporting. Maybe that's because the main chance people have to do evaluation is when funders ask for it, and that creates a certain dynamic of someone saying, are you doing what you said you were going to do, we gave you the money, is what you're doing really making a difference, is it worth us continuing to give you this money? That creates a bit of stress on the part of projects, and it's kind of the opposite of what I think evaluation could or should be doing, which is helping people who are doing projects or initiatives really look critically at what they're doing, to be excited to find out if what they're doing is making a difference, to have a curious eye on their work all the time. Whereas when you're doing reporting or accountability, in some cases you're trying to do exactly what you said you would do, as opposed to innovating and being curious about what you could do better. So that's really the approach I try to emphasize, and to be honest, I'm thankfully not that different from other evaluators I have rubbed elbows with here in Calgary. I've met other people who have similar viewpoints, and whenever I go to an evaluation conference I'm surrounded by people who have this view; this debate comes up quite a bit, accountability versus learning evaluation. So lots of people are thinking this way, and I think more and more are getting a chance to actually use that way of thinking when we work with people. Of course, structurally, if we're always brought in to do funder-driven evaluation, that may be the work we are relegated to do.
I think that is happening less and less, and there's more of an appetite, including on the funder side but especially on the program side, to say, really, what are we doing, are we making a difference? So that's the biggest part.
Danielle: Yeah, I love that perspective too, because the accountability and the reporting are important, but I think as a leader managing your strategy and the bigger picture of what you're trying to achieve, looking at the evaluation process and the tools that evaluation gives us as a learning opportunity, and as checks and balances in the work to make sure we're actually on track toward what we think we're trying to achieve, that is a huge mindset shift that could be so valuable for leaders and entrepreneurs as they assimilate all this outside information and make decisions from it.
Robyn Sachs: Exactly. I think another big part of it is the idea that an expert, or even an outside person, has to come in to do evaluation. The idea that you can't have a critical eye on your own project without bias, that you might just be very protective of your program and not be able to see any issues or flaws or opportunities, I think that's really a misguided way to look at things. There's certainly a place for outside evaluation people to help you think of things in different ways, but I think there's a lot to be said for people, within the scope of their own work, really looking at it with a curious and critical eye.
Robyn Sachs: They are really the experts at the end of the day. The people doing the projects are going to be the best people to know the ins and outs of how they work and how to improve them.
Danielle: Right. And working with those experts like yourself, at the times when it makes sense, just grows that knowledge base so that maybe... we can be more conscious of those biases and still keep the critical thinking, without having, hopefully, as much of a skewed perspective. Like you mentioned, that issue can come up with, you know, loving your idea, which is great. We need leaders to love their ideas, but it's balancing that when you're receiving information as well, right?
Robyn Sachs: Exactly, and in terms of loving ideas, trying to stay more in love with the changes that you want to see than with the way you think those changes will happen. So being open to the fact that maybe your initial thought of how it might work is not really the best way to get there. That can be one of the hardest things for people, I think, but it is exceedingly important.
Danielle: Fantastic. So another aspect that I'm excited to talk with you more about, which did come out in a few of our previous conversations, is how interpersonal skills are woven throughout the evaluation process, and your awareness of the importance of that component of the entire process and how you approach evaluation. So I'm hoping you can speak even more to that idea.
Robyn Sachs: Sure. So even with just the thing I mentioned, in terms of people being attached to certain things and certain ways of thinking, I see my role as an evaluator as coming in with a different perspective and helping bring people along to that perspective in a really gentle, encouraging, and inspiring way.
Danielle: Right, not in the “slap you on the wrist kind of way”.
Robyn Sachs: Absolutely, because it is understandable why people think the way they do, why they're attached to the way they do things, why they might think it's working without maybe seeing the full picture. And again, I want to emphasise that I don't see myself as an expert, just as a different set of eyes, so that's one big part of it. The other big part is the interpersonal skills really needed to understand how people are working; to have some curiosity as an outsider about how it actually plays out: who comes, what time do they come, what kind of paper records do you have to keep to keep this going, who is involved? All of that kind of understanding comes about mostly from conversation. There may be some documentation involved, but really getting a sense of how people work, and especially their motivations, is a negotiation process. That type of negotiation has to happen throughout an evaluation project as well, when you're deciding on questions, methods, and reporting. You really have to be a reflective listener, and by that I mean you are listening very intentionally and also reflecting things back to the people you're talking to so that you make sure your understanding makes sense.
Danielle: So that whole back-and-forth communication is really important throughout the entire process, even in designing what you want to evaluate?
Robyn Sachs: Exactly. It really speaks to the need to have those interpersonal skills in order to design something that will work, that will be useful and feasible and simple and efficient, hopefully all of those things... it is hard to do that without really having that back-and-forth process.
Robyn Sachs: And another thing I know we talked about last time we spoke was this idea of translation, of being a translator. Almost every program I've gone into, and I'm sure this will not change, has stakeholders that come from different backgrounds and bring different experiences to the table. This can be quite extreme: in some programs I've done, you've got strictly business people, and you've got people that are a little bit more on the, I won't use the word hippie, but you can say it with tongue-in-cheek, the people that maybe don't do things in a straightforward, bottom-line business way but are actually looking more at longer-term goals. And there are people coming from different disciplines, people using medical language versus people using more community development or social work language. All of those things mean that you often have to sit down and flesh out what you're actually talking about. I'll often say that people are using the same terms and meaning very different things by them, or using very different terms and meaning very similar things. The interpersonal skill needed here is really to flesh out how people are using language and make sure some consensus is reached: what are we talking about, and are we agreeing not just on the language, but also on what we're doing, how to get to the goals we're talking about, are we talking about the same goals, that type of thing.
Danielle: Now that's a really good point, and those are the kind of more touchy-feely components of this type of work, or any type of work. The interpersonal and communication layer is huge and applies to everything, but it's not what we focus on first, unfortunately, because it's kind of that extra layer on top of the more tangible work we're doing, right?
Robyn Sachs: Absolutely, and it definitely applies to my work as an evaluator, but really this idea of interpersonal skills applies to any type of work in the business world and in the social impact world.
Danielle: Yeah, even at home, right? Even with friends, every day so...
Robyn Sachs: Absolutely.
Danielle: So also, what I wanted to chat with you about is something I find really interesting: the topic of evaluation, where most people immediately go straight to the dry data collection and statistical analysis. I shouldn't say most people, but in my experience that tends to be the common understanding of evaluation. So I was really excited to hear that you're even more specifically exploring the value and the ideas that are emerging around using qualitative approaches, which could include things like storytelling and narratives, as sources for assessing contributions to social impact. So maybe you can touch on contribution and attribution as well, a little bit of evaluation terminology for us.
Robyn Sachs: Sure... I'll start by saying that qualitative evidence has certainly been used in evaluation for a long time, as far as I understand it, but it seems it is very often used more for process or context understanding, and not really seen as a good way to understand impact. If people, for example, are doing evaluation and they only get perceptions of participants, or perceptions of people running the program, many people wouldn't count that as strong evidence that a program really worked. What people often want to see is some kind of experimental study with some kind of measurement, maybe even rating scales; even if those are perception-based, people will often count them as stronger evidence than something like a story or a qualitative look at things. Sometimes people even count outputs, which shouldn't be counted as outcomes but often are, you know, how many people attended a training, how many training sessions were given? And some people count that more as evidence than qualitative understanding. Some of it makes sense: people are reacting to the fact that anecdotal evidence isn't really strong. To come out of a whole program and say, well, Joe said it was really good, so we're going to keep going with it, or the person doing the training said they felt really good doing it, so we should keep going. I think we really shouldn't only be looking at those kinds of anecdotes to decide whether or not to continue with a program; we need to make sure we have the same level of systematic understanding of what we're looking for and what impact a program has had. There is also an underlying idea that numbers are in some way more valid than perceptions or experience.
I think there's a continuum of where people sit on this, but some people think that perceptions and experience, the human experience, are more touchy-feely, and they want something more tangible to "prove whether a program has worked". So all of those things make sense. What is nice, still, is that people seem to be realizing more and more that there is value in different types of evidence, and that often people react most strongly to stories, and this makes sense. I mean, they appeal to emotions, which of course we all have. They might stick most strongly in our memory, as opposed to a stat or a number. What I really like about qualitative evidence in general, and storytelling as part of that, maybe going farther than just qualitative interviews and that kind of thing, is that it really helps us understand the context a program happens in, in relation to the context of the lives of the people who take part in it, with a lot of nuance. It really gets to the full story of a program, how it's placed in the world and how it's hopefully leading to some impact. So I'm really glad to see more and more being written on evaluation blogs and in evaluation journals about how to capture this storytelling piece.
Danielle: I love how you talk about the importance of this providing the context and the kind of bigger picture: the context of the people experiencing the program, and the program itself within the environment it operates in. I mean, that idea of the broader context, I think it's so applicable and important to remember, acknowledge, and work with, especially in social change work, because there's essentially no social change project I have ever known of that operates in a bubble, you know. It comes with the territory, and I love the idea of embracing that, learning how to work with it in the evaluation sphere and, you know, making the most of the reality that it's complicated.
Robyn Sachs: Absolutely, and to be frank, we live in a world that is very complicated, but we're not always good at knowing how to process that. As humans we're still often very linear thinkers, and the dominant way of looking at the world, especially within a field like evaluation, is the scientific view, which is a little bit linear and a little bit black and white... I think we're still really just learning how to bridge that gap, how to have a more complex understanding, a more systems-approach understanding, and I think storytelling is a key method to bridge that gap.
Danielle: Yeah, I think you're right, we're definitely still learning this skill as a broader population; it takes effort to keep all of those things in our minds, because I think you're right, we tend to simplify things and go black or white and linear because it's easier to handle.
Robyn Sachs: Exactly, and I'm trying to remember what that phrase exactly is... we notice what is measured...
Danielle: What gets measured gets noticed.
Robyn Sachs: Yes, yes, and I never know which way it goes, because it's one of those things you can turn around.
Danielle: I only know that because I included it in our workbook, in the very section where we talk about evaluating and measuring our impact, and recognising that the very act of consciously planning out how we're going to measure and evaluate something will help us notice really important pieces of information that will help us make decisions later on.
Robyn Sachs: Absolutely. I think the other thing to really be aware of, which relates to what we were talking about here, is what we mean by "measure", and trying to expand our view of that beyond these maybe easier or more typical or more traditional things that we can measure. So getting beyond numbers and into more in-depth understanding, reflections, and stories as well.
Danielle: Fantastic. So one thing I did want to ask, because we might have a few people listening who are just starting out with a project or might be in the very early stages of a new project, and I'm thinking that this evaluation process is something that is important to work on further, potentially bringing in someone else with more expertise in that area to help formulate the ideas and the conversation. So if someone has limited resources starting out but wants to take some low-cost initial steps to work on their evaluation strategy and begin assessing their social impact, what would you recommend for them? In other words, what might a minimal evaluation process look like?
Robyn Sachs: So I think this is a question that should be asked more often, because evaluation doesn't need to be an expert-driven thing, and it doesn't really require as many resources as people think it does. That's largely because it's really an exercise in being systematic; people in the evaluation world call it evaluative thinking, which is sort of what I've been referring to already: being critical, having a critical eye on the work you're doing, and having a curious eye as well. So, some of the steps involved in a minimal evaluation process:
First of all, it's really having a good idea of your theory of change. That's a term that sounds really fancy, but all it really means is having an understanding of the activities you're doing, how you think they're going to lead to immediate outcomes, and how, in turn, you expect those immediate outcomes to lead to the longer-term impacts or changes you'd like to see. These are usually done as a visual, a theory of change visual, and it's sort of the same idea as a logic model; those terms are somewhat interchangeable. Some people swear they are completely different, and I tend to think they're pretty similar, but it doesn't really matter what term you're using; we're talking about the same thing. It can also be helpful to just write down some statements: if I do this, then I expect that this will happen; if that happens, then I expect this other thing will happen. That's really the underlying logic of a theory of change, and a lot of it can be done from a common-sense perspective. An individual who is involved in a project can just sit down and really think it through, and it is really surprising to find out that there may be gaps in the logic.
Just being able to think through the activities you're doing and that long-term impact, and to think through the steps between them, it can happen fairly regularly that you notice, oh, how do I really think that is going to help? Try to be, again, as systematic as you can: what are all of these steps, am I really doing all the activities that I can be, or should be, to lead to these outcomes? So, starting with that common-sense individual process, then expanding out to talk to as many people as possible about the logic, especially those who are immediately involved in the program, or people who are involved in similar programs or have done similar work and have seen some of the links or logic you might expect to see. You can strengthen it even further by going to whatever research evidence or literature is out there; that's not something that always has to be done, but it certainly does strengthen a theory of change.
Danielle: Right, and I imagine during that process you might even uncover some really great sources of wisdom and insight that you might not have thought of incorporating or integrating into the program, or things that other people have already learned that you might not have come across yet, just by going through that research phase to support your idea of what happens sequentially.
Robyn Sachs: That's exactly it... One example springs to mind: one of the first evaluations I was involved in was looking at a project management office that operated within a health promotion department in health services. We had a few different people at the table, some health promotion people, community development people, and some business people, and just going through the process of sorting through the language, really thinking through what are we trying to do, how do we get there, and realizing some gaps in that logic... they told me that even if we hadn't had the data collection process and the evaluation report at the end of it, they would have learned so much, and they really started to do their work differently.
Danielle: Right, just from that initial foundational piece of thinking it through.
Robyn Sachs: Exactly, and the neat thing about it was that it wasn't just those changes in understanding; it was also the relationships that were built through those conversations, how people were able to understand where each other was coming from, and actually had a lot of respect for each other as well.
Danielle: So it's kind of a team-building exercise, going through that process and having those conversations too?
Robyn Sachs: Yeah, exactly. What I want to emphasise about the theory of change is that it shouldn't be only in the realm of evaluation. It has become sort of an evaluation thing because, without that solid understanding of how we think a program is working, the theory of change, it's hard to design an evaluation; it's hard to know what data you might want to collect to test that theory of change, to test what is happening. But it really should be done within the context of programs themselves. So on one hand it should be done within programs themselves; on the other hand, it really helps with that idea of systematic and evaluative thinking as well.
Danielle: Right, so it's kind of a jack-of-all-trades tool; no matter what stage you're at, everybody should be thinking about going through that thought process?
Robyn Sachs: Exactly, and certainly it can help to bring in someone who has done it before, because it can be difficult to know the nuts and bolts of how detailed you need to be, what kind of computer tools might help you get this down on paper in a way people can talk about, that kind of thing. But there's still a lot that can be done just through conversation, trying to keep it as simple as possible and identify, again, what you're doing and how you think it will lead to those other outcomes.
Danielle: Okay, so that covers the first step for somebody maybe with limited resources or just starting out, and what might be the best bang for their buck, or bang for their time, to start thinking about and working on this evaluation strategy.
Robyn Sachs: So a second one is identifying clear and answerable evaluation questions. After you have some kind of theory of change figured out, you know what you think is going on and you have a good understanding of what you're doing, it is really narrowing down and saying, okay, what do I want to know? Then moving toward: what information do I maybe not have right now that I really want or need to know, to figure out if I'm doing the right kinds of things in my project and if I'm making an impact. One of my favourite evaluators out there doing work in the world is Jane Davidson; she is out of New Zealand, and she is extremely pragmatic and really good at cutting through the things people get bogged down by, the overly complicated things, and just saying, let's cut through this and figure out what we're trying to do. In terms of questions, she really emphasizes having big-picture questions. Some people, when they're designing evaluations, and this includes evaluators, people who are experts in evaluation, go right to the level of indicators or measurement tools rather than asking bigger-picture questions about what they're doing. The other part she emphasizes is really the value part. So instead of a descriptive question like, how many people were reached by this program, going right to the value and asking, did we reach enough people, did we reach the right people? And similarly, when talking about outcomes, not just asking, did this program achieve our intended outcomes, that's a pretty boring question, it doesn't say much, but rather, did this program make a difference in people's lives? It's really making sure we're getting at value, the value of the program, which can be scary for the people doing the program itself, but it's also, I think, very necessary.
Really, we should be asking those questions of ourselves when we're spending our days doing work or designing programs, you know?
Danielle: Yeah, definitely. It's kind of facing the facts and being open to receiving whatever the facts are and then moving on; you'll be better off having done it than avoiding the process altogether.
Robyn Sachs: Absolutely.
Danielle: Even though it can be scary like you said, it's definitely worth it. It sounds like the process of designing your evaluation questions is a bit of an art; it sounds like there could be, you know, wildly different questions proposed for very similar projects. I wonder, you know... are there better questions and worse questions, or can you not really go wrong if you're just starting out?
Robyn Sachs: Well, I think the biggest chance of going wrong is not sticking to the things I've just mentioned: staying too much in the details and not being value-focused. That being said, the wording doesn't really matter beyond just trying to be clear and making sure lots of people understand. The other thing that's important to think about, not really about going wrong, is making sure you have as many people as possible involved in helping identify the questions, because let's say you're the project coordinator, the person designing a social enterprise: you're going to have a particular viewpoint that might not take into account everybody's viewpoint or the whole big picture. So when you're talking about the art of developing evaluation questions, that is certainly the case, both at the level of individuals thinking as creatively and big-picture as possible, and in bringing a number of people together and doing that consensus process of identifying questions, then narrowing in on the questions that should be focused on and can be answered in an evaluation, or within the scope of a project. To make things maybe a little less daunting as far as identifying questions, one thing I often think about is trying to see questions in terms of three major areas. One is thinking about whether a program is appropriate, and that can be thought of in terms of, you know, does it fit into this context? Is the program designed in a way that will actually achieve the outcomes we think we want? A second area is process, or implementation.
So have we designed the program, and are we carrying out the program, in a way that will produce outcomes, or are there any flaws in the way it is being implemented that may get in the way? The third area is outcomes, and that's of course where people usually jump to from the get-go: are we making a difference, are we making a difference in people's lives? Those three may not cover all of it, but they're three easy areas. I'll add one more, which is the idea of efficiency: is the effort we're putting in worth the outcomes we're seeing? We may be making a difference on some level, but are we spending too many resources to get there? Or are there ways we can save money or time, or make things more efficient overall?
Danielle: Great, so that's awesome to have those four areas to start with and, like you said, you can definitely grow from there, but at least it's a starting point and a bit of a framework to frame the discussion early on.
Robyn Sachs: Exactly, and I'll make one more point about the development of questions, which is that there are many different points along the project lifecycle where it makes sense to concentrate on different questions. So the question of appropriateness, you'd hope people would be asking that very early in the process, though that isn't always the case, and as you move along, of course, process and implementation become more important to think about, and then the further along you are, the more you want to be thinking about outcomes. So it tends not to make sense to talk about outcomes until you're really solid on your process. If you just start a program and run right to measuring outcomes and you don't see an effect, for example, you may not know whether you simply didn't implement the program in the way you intended, unless you look at that process piece. So it can be more efficient, from a data collection and evaluation perspective, to say let's first concentrate on making sure a program is feasible, that we're implementing it properly and working out those bugs, before we jump to that question about outcomes, because outcomes are generally a little more time intensive or resource intensive to evaluate. Process is a little less so.
Danielle: Great, that's actually a good note, and that's the type of thing that is why I love talking to people like you about these different areas of expertise and knowledge, so that we can save other people from jumping right to the spot that is not the ideal place to start. So it's about getting our minds around the big picture of evaluation and what makes sense in terms of order, where to put our thoughts and resources initially, and understanding how we can move through that learning process over time.
Robyn Sachs: Absolutely, just one note about that. I think it has to be a balance of making sure you are keeping an eye on outcomes, if that makes sense. You don't want to ever forget that you're trying to achieve outcomes and only concentrate on your process.
Danielle: It's about staying clear on that big picture of the impact that you want to achieve.
Robyn Sachs: Yeah, not losing sight of that, and of course there have to be processes to get there, and those processes should be as optimal as possible, right?
Danielle: As with all things entrepreneurial, it's about balancing many different thoughts and priorities and objectives alongside each other, and managing that from a bunch of different levels, all the way from the big picture down to daily actions.
Robyn Sachs: Exactly, and again, to help things seem a little less daunting, some of it really just comes down to always being clear on what you're doing and why you're doing it, and that's both at the project level and at the evaluation level. You know: what are we looking at right now? Does it make sense to look at it right now? Why are we looking at it, and what are we hoping to get out of it?
Robyn Sachs: So a third piece, aside from identifying the theory of change and identifying evaluation questions, is moving to that step of findings, or data, and what I want to emphasize here is that data doesn't have to look like what we often think it is. I think people jump; the first thing people actually do is surveys, you've got to do surveys to figure out if a program is working, and surveys are good, though sometimes they are overused, I would say, and they're often not very well designed. Interviews and surveys both have their place, but the first thing to know is that the most important part of any sort of data collection or measurement strategy is mapping it to your questions. So you have your questions that are well designed, and you do a simple chart saying: what do I want to find out, and what are the ways that I can find this out, being creative about how you can answer that question and thinking beyond interviews, surveys, and quantitative measurement tools. There can be some creative and low-tech ways to gather information. A lot of it can be done by reflecting on the program, just individual reflection as a program manager; by going in and observing activities that are involved in your enterprise or your program; by talking to people more informally, it doesn't have to be a formal interview; and by looking at past information or records to help you answer your questions. And of course sometimes those things don't work and you do need to design more formal measurement strategies, but even these don't have to be crazy big high-tech operations. You can do something simple; I did something recently as simple as putting a big sheet of paper on the wall at a conference, writing a question of, you know, what do you think is important in the sector, and having people write...
Or having a comment box with a particular question that you want participants or clients to answer.
Danielle: That's a great idea. It's going to people where they are, right as they're experiencing it.
Robyn Sachs: More and more in my work, I'm aiming for this, and clients are asking for this: for people to be engaged in how data is being collected, making it easy for them, doing it in a way that will help them stay enthusiastic. To me, it's always about trying to get away from surveys, or if you are using surveys, keeping them short, keeping them, you know, snappy; not just Likert scales, 1 to 5, do you like it or dislike it, strongly agree or strongly disagree. Some of that becomes very rote, even for evaluators, going back to, okay, what's our survey going to say, rather than trying to think outside the box; again, focusing on answering the evaluation question: what is the best evidence we need to get at that? And very often it's important as well not to rely on just one line of evidence to answer an evaluation question; having a few different pieces can help you fill in the gaps. One line of evidence, say one survey from a group of people, may not answer all your questions related to that evaluation question, or it might be biased in some way or have some limitations, and rounding it out with a few different ways of looking at things is usually the best way to address those limitations.
Danielle: Yeah, it definitely makes sense to come at a conclusion from a few channels when you're able to, but it's good to know, like you suggested, that not every channel of information needs to be a big research and reporting initiative; it can be as simple as using your past records, doing observation, and asking those informal questions too, to support and back up hunches or fill in the gaps like you said.
Robyn Sachs: Exactly, and I think where people are often uncertain when they take this on is, on one hand, when they might need to bring in an external evaluator, and, in a related way, how to avoid bias; so, it's my program, how do I avoid being un-objective about how I'm collecting the information? I will say about research professionals and evaluation professionals that this is really where a lot of our training lies: how to write questions that aren't leading in some way, and how to make sure we are collecting information in as unbiased a way as possible. I'll say about that, though, that we're not always as good as we think we are; we have to work at it as much as anyone else. The other thing is that sometimes we get too caught up in what we know about research and think that everything has to look exactly as it did when we were studying the social sciences, getting away from that practical, quick-and-dirty understanding that can happen in a lot of different ways.
Danielle: So as with all things it comes down to balance, right?
Robyn Sachs: Yeah, exactly. So I think a lot can be gained for people who are doing a social enterprise or a project by bringing an evaluator in and trying to get a better understanding of how to avoid bias, but balancing that out with their own pragmatic, lived understanding of what they're doing and how they're doing it.
Danielle: Right. Fantastic, that's great. So that's a mini crash course on what to look out for when initially starting out with an evaluation strategy, and a great starting point for people. Can you just summarise that one last time for us? We started with the theory of change, and then...
Robyn Sachs: Questions. Basically evaluation questions, and then designing, for lack of a better word, data collection and measurement strategies to answer those questions. All of that is really about trying to be systematic and intentional about each of those pieces.
Danielle: Fantastic, great! So those are kind of the major things that I wanted to ask you about, but I'd like to wrap up: do you have any final thoughts or considerations you want to share with the group about building this mindset for evaluation, building these ideas into their social enterprise initiatives from the beginning, and being aware of what the potential is down the road as well? If anyone is on the fence about why it's important, do you have something to say to push them over to the side, if they're ready to move?
Robyn Sachs: I find it so hard to believe anyone would be on the fence, though I know I'm biased as an evaluator. I guess partly what I would say is that evaluation isn't maybe what you think it is. If, as you're describing, you're thinking it's about reporting and statistics and reporting back to others about the work you're doing, really think about it more as just critical, curious thinking, trying to make sure you're doing the best work you can do, and I sincerely hope that everybody tries to work that way and uses the tools they have at their disposal to improve that way of thinking and working. That's really what evaluation, at the heart of it, is all about for me: bringing some systematic ways of thinking about one's work into the everyday workday, I guess. So that would be the biggest thing for me, and also that this idea of data and measurement doesn't have to be the typical things we think of. It doesn't necessarily have to be interviews; it can be more informal conversations, it can be just reflecting on what's happening where you work, careful observation, and all of those things done with the evaluation questions that you've identified in mind. That can be extremely powerful when you're being systematic about what you're looking for, as well as keeping track of those reflections or observations that you're doing.
Really the final thing I would say is just to reemphasize the importance of being critical and curious about your work, and trying to be intentional about what you can do on your own versus where you might need somebody who has done this work in a more professional way, whether to bring them in to talk things through with you or to help you directly.
Danielle: We can just remind people listening that when in doubt, reach out and ask someone for a little bit of guidance, pick their brain: you know, here's what I'm thinking, do you think this is a good way to move forward, or am I ready to bring somebody external in to work with me in more detail, or what do you think?
Robyn Sachs: Yeah, and unfortunately a gap there is that people don't always know who they can turn to with that kind of simple question, or even who to turn to if they do think they want an external evaluator brought in. That's a gap that's kind of close to my heart; within Calgary, for example, I want to make sure that people have a better idea of who's out there, and even to sometimes be that person people can just ask, can you talk to me for five or ten minutes about my project? But that's a longer-term thing.
Danielle: Right. So I guess initially, for somebody wondering who to reach out to, is a simple Google search for evaluators in your area a good way to start, or do you have any other suggestions on how to start?
Robyn Sachs: Sure, I can send you a link to the list that's kept by the Canadian Evaluation Society; they've put together a list of evaluators in Alberta. I don't know how recent it is. Incidentally, I'm not on there... but the other thing I'd say is to talk to other people who you know have done evaluation work. Word of mouth is the biggest way I personally get business, and I know that's true for a lot of other people as well. If you know somebody who has worked with somebody, then they can vouch: this person was reasonable to work with, they understood where I was coming from, and they helped me design something that was useful.
Danielle: Okay, great! Yeah, we'll share that link with the group along with the video, and I'm sure for those outside of Canada there would be similar associations or links that we could search for on Google, or, like you said, probably the best way, as with any professional, is to ask your colleagues and friends: who have you worked with, and how was it? So that's also a good starting point if you have those people in your network already.
Robyn Sachs: There are associations or societies in most countries around the world. I know there's an African Evaluation Association, an American Evaluation Association, and there's one in South Asia, a community of evaluators that includes India, Pakistan, and Sri Lanka. There's a huge list of them. So if I can find a link as well, Danielle Douglas, with the different associations, I can send that to you too.
Danielle: Oh great! Yes, that would be fabulous, and people who are ready to dive into this work with their evaluation thinking in a bit more detail can also access those resources for more insight. So thanks again, Robyn, so much for taking the time to have this interview with me and for making yourself available to share your insight and expertise with the group. I'm sure everybody will get lots of good knowledge and insight out of it, and hopefully they'll apply some of these ideas right away. So thanks for your time, and it's been really fun to have you.
Robyn Sachs: Thanks, it’s great to be here.