A manager watches a webinar on program evaluation on their laptop.

Through the Nonprofit Learning Lab, I recently presented a webinar on Program Evaluation. In the webinar I explored the role of program evaluation in organizations, including the different types of program evaluation, logic models, and how all of this ties into your strategic plan. I also included a rather surprising case study from Google!

Learning Goals

The learning goals of the webinar were:

  • Understand the 3 Types of Program Evaluations
  • Define a Theory of Change
  • Articulate the Core Elements of a Logic Model
  • Define the Program Evaluation Cycle

You can listen to and view the entire webinar below. Over the next few months I’ll be writing a series of articles that further expand on some of the points I cover in the webinar including the types of program evaluation and the cycle of program evaluation.


A slightly modified transcript follows:


Payal: Hi, good morning or afternoon depending on where you are. And thank you for taking the time to join me today to learn about program evaluation. I’m Payal Martin with Brighter Strategies and we are an organizational development firm specializing in evaluation, strategic planning, process improvement and leadership development for nonprofits and the public sector. I am delighted to be here talking with you about program evaluation today.

What Program Evaluation Isn’t

Before diving into what program evaluation is, I want to start with what it is not, because I think there are a lot of myths and fears out there about evaluation that often keep people from engaging in a highly productive and informative process. Program evaluation is not a hard-to-learn, complex science. It is definitely not a one-time linear event focused on outcomes, and we’ll talk more later about the cycle of program evaluation. It’s not a whole new set of activities requiring a lot of time and resources to support, and we can talk a little bit about how to make evaluation simpler. And it is not intended to report outcomes only to program staff. Today we are going to debunk these myths and talk about different ways of conducting and using program evaluation.

Learning Objectives

So let’s dive in. Our learning objectives today focus on five different aspects of program evaluation. One is to explore the role of evaluation at a higher level. We will talk about three different types of program evaluation. We will define a theory of change, articulate the core elements of a logic model, and discuss when it is useful. And then finally, we will define the program evaluation cycle with specific insights on data collection, management, and analysis.

The Role of Evaluation

Let’s start by exploring the role of evaluation, which we consider an organizational best practice. Evaluation is part of a broader strategic management process. Strategic management is the systematic analysis of external organizational factors, such as clients, stakeholders, and competitors, and the internal environment, such as employees, organizational systems, and processes, to achieve better alignment of organizational policies and procedures with strategic priorities. Too often, I find clients thinking of program evaluation as something sitting on its own island. In fact, program evaluation is part of a larger strategic management process which includes many organizational components. Effective strategic management requires the proper distribution of organizational resources to different business goals, which may include HR management, technology, risk management, and financial management, as you see here, according to the organization’s overarching strategic goals. The role of program evaluation in this broader process is to serve as a tool to help assess the relationships between these elements, the arrows, if you will. Thus, to my earlier point, program evaluation impacts many more people in the organization beyond program staff.

Before digging into the details of program evaluation, I want us to start with one important role of any evaluation, and that is asking questions and getting feedback. I like to use the example of Google to introduce this idea. Google’s mission is to organize the world’s information and make it universally accessible and useful. With this mission, Google is very serious about using information and asking questions to inform its decisions. For example, one question Google wanted to answer was, do managers actually matter? This is a question Google has been wrestling with from the outset, when its founders were questioning the contribution managers make.
At some point, they actually got rid of all managers and made everyone an individual contributor, which didn’t really work, and then managers were brought back in. They ask questions and use the answers to steer their course. What is notable to me is that they have only 30 questions, and they are a big company. As nonprofits, we often ask too many questions and struggle to find the resources to answer them. We need to be strategic in the questions we ask. If we are strategic in the questions we ask, then evaluation does not become the overwhelming data collection and analysis process I think many fear.

The Google Example

So, going back to Google, a team was formed to look at data sources that already existed: performance reviews, which are top-down reviews by managers, and employee surveys, which are bottom-up reviews of managers. They took this data, plotted it on a graph, and then, using regression analyses, found that there was a big difference between the top and bottom quartiles of managers in terms of team productivity, employee happiness, and employee turnover. So, they learned that teams with better managers were performing better, and employees were happier and more likely to stay. While this confirmed that managers do actually make a difference, it didn’t allow Google to act on the data. So, they had to ask more questions. The next question they needed to answer was, what makes a good manager at Google? Answering this question would provide much more usable insights. So, the team introduced two new data collections. The first was the Great Manager Award, through which employees could nominate managers they felt were particularly good. As part of the nomination process, employees had to provide examples of behaviors that they felt showed the managers were indeed good managers. The second piece of data collection was interviews with the managers in each of the two quartiles, the bottom- and the top-performing managers, to understand what they were doing. The managers didn’t know which quartile they were in. The data from the interviews and from the Great Manager Award nominations were then coded using text analysis. Based on this analysis, the team was able to extract the top eight behaviors of high-scoring managers, as well as the top three reasons managers were struggling in their roles. All of this stemmed from one question Google asked: do managers matter?
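As a rough sketch of the kind of top-versus-bottom quartile comparison described above, here is a minimal Python illustration. All names and numbers are hypothetical and stand in only for the shape of the analysis, not for Google’s actual data or methods.

```python
import statistics

# Hypothetical manager ratings (from employee surveys) paired with a team
# metric, here retention rate. These numbers are illustrative only.
managers = [
    {"rating": 4.8, "retention": 0.95},
    {"rating": 4.5, "retention": 0.91},
    {"rating": 4.1, "retention": 0.88},
    {"rating": 3.9, "retention": 0.84},
    {"rating": 3.4, "retention": 0.80},
    {"rating": 3.1, "retention": 0.78},
    {"rating": 2.7, "retention": 0.71},
    {"rating": 2.2, "retention": 0.65},
]

def quartile_gap(records, metric):
    """Mean difference on `metric` between the top- and bottom-rated quartiles."""
    ranked = sorted(records, key=lambda m: m["rating"], reverse=True)
    q = max(1, len(ranked) // 4)  # size of one quartile
    top = statistics.mean(m[metric] for m in ranked[:q])
    bottom = statistics.mean(m[metric] for m in ranked[-q:])
    return top - bottom

print(f"Retention gap, top vs. bottom quartile: {quartile_gap(managers, 'retention'):.2f}")
```

A large gap on metrics like this is what told the team that manager quality mattered; it took the follow-up interviews and nomination data to learn *why*.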

Evaluation Questions

The questions you ask in an evaluation are essential in determining every component of the evaluation, and that also applies to program evaluation. So what kinds of evaluation questions do we ask in a program evaluation? They may include: could we do better? How well are we using the program resources, or what is the efficiency of the program? What is the impact of the program on the community? Maybe you need data to justify the existence of your program or to support the argument to increase resources for staffing, say. And finally, maybe your purpose is to identify outcomes or specifically highlight the impact of your program on the community in terms of strong outcomes. There are many types of evaluation questions, and the ones you choose will guide your evaluation and define the type of evaluation you do. We will talk more about those in a minute. But first, a definition. Evaluation takes an in-depth look at a program based on focused evaluation questions. Program evaluation answers those questions by collecting data using specific methodologies and thus provides tools to manage the program, identify program gaps, develop implementation plans, show outcomes, and create stakeholder feedback reports.

My experience is that a lot of clients, including donors, use evaluation primarily for the last item, creating stakeholder reports. There is nothing wrong with this, but you would benefit in a major way by starting the evaluation process a little earlier and thinking about how to use the data to better manage and implement the program and therefore improve the program’s impact. This is often a missed opportunity and one I encourage organizational leaders to take advantage of. So, I mentioned types of evaluation. Once you have your evaluation questions, you’ll find that answering them will lead you into conducting one of three types of evaluations. The first is a formative evaluation. This may include a goal-based evaluation and/or a needs assessment. A formative evaluation is conducted to increase the likelihood the program will achieve its goals. You would benefit from a formative evaluation when you are entering a new phase of program planning, starting a new program, or applying your program in a new way, which could mean in a new place or with a new population. In terms of questions, a formative evaluation may answer questions such as: Is the program addressing the needs of the population? What is the program’s current status in regard to meeting its goals? Are resources and program components allocated appropriately to ensure the goals are met? Are proposed program elements likely to be needed, understood, and accepted by the population you want to reach? While a formative evaluation focuses on the goals and needs of the program population, a process evaluation focuses more on implementation. Specifically, a process-based evaluation is one that determines how a program operates and achieves its results and how well the program was implemented.
Process-based evaluations are beneficial when long-standing programs have experienced recent or ongoing changes, when programs appear to contain major inefficiencies, or when the program needs to be illustrated to external audiences. All of these are reasons to use process evaluations, and the questions a process evaluation may answer are: how do we replicate the program, what does the lifecycle of this program look like from beginning to end, and how much of the planned program is actually being implemented?

Outcome Evaluation

Finally, the third type of evaluation is an outcome evaluation. An outcome-based evaluation determines to what extent your program is delivering the outcomes it is designed to achieve. Outcome-based evaluations are beneficial when you need to justify the existence of your program to external stakeholders or when you want to track the performance of your program over time. Questions that an outcome-based evaluation might address include: what indicators best measure the extent to which your program is meeting its intended outcomes, and how effectively is the program meeting those desired outcomes?

Before moving on from types of evaluations, I want to take one more minute to talk about a needs assessment, which may occur as part of a formative evaluation. A needs assessment may be necessary early in a formative evaluation to identify what needs you are trying to address and what outcomes are necessary to address them. For example, if you’re working on a community obesity intervention, you might research how prevalent overweight is in your community and what the community and individual risk factors or behaviors are. You might then determine that an outcome such as eating more fruits and vegetables is necessary. This assessment informs not only your outcomes but your program elements and thus is an important component of a formative evaluation. If you want to conduct a robust community assessment, you can find more details in a free e-book on our website.

Theory of Change

My favorite definition of a theory of change is a description of how and why a set of activities, be they part of a highly focused program or a comprehensive initiative, are expected to lead to early, intermediate, and long-term outcomes over a specified period. Basically, a theory of change is a description of why your program works and provides the core hypothesis of the program you are testing. In your theory of change, you describe what change you expect and how, with a series of if-then statements. Here is an example: if obesity prevention programs are targeted at multiple locations, including schools, communities, or within the family, then they are more likely to produce results. Another example might be: if community members experience programs to increase intake of fruits and vegetables, then they are less likely to be overweight or obese. These are two bullets in a theory of change related to obesity, an example I’ll use throughout this webinar. You can imagine a couple of other if-then statements we could add, and you could have a theory of change with multiple if-then statements. But the idea here is that they really summarize what you’re trying to accomplish and how.

Logic Model

Now I can introduce a logic model, a core evaluation tool that relates to a theory of change. A logic model is a framework that describes, in more detail than your theory of change, what components of a program are leading to your outcomes. I believe a logic model is essential because it frames your theory of change, and from it you derive the qualitative and quantitative indicators needed to measure the most important aspects of your program.

There are a few elements that I consider core to a logic model. One is inputs. These are the resources used to meet the program’s needs, be it money, time, materials, or equipment. Activities are what happens on a daily basis to comprise the work of your program; these are what you would consider the program elements. Outputs are the direct and tangible products of the program’s activities but have little inherent value to the program participants. We’ll talk a little more about outcomes versus outputs in a minute. Outcomes are the benefits or changes for individuals as a result of participation in the program. This can include a change in behavior, skills, knowledge, or attitudes. We’ll talk a little more in a second about the difference between short-term, intermediate, and long-term outcomes, but first, a bit about outcomes versus outputs.
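To make these core elements concrete, here is a minimal sketch of a logic model represented as a plain Python dictionary. The specific entries are hypothetical, loosely following the obesity-prevention example used throughout this webinar.

```python
# A logic model as a simple data structure. The example entries are
# illustrative, loosely following the webinar's obesity-prevention example.
logic_model = {
    "inputs": ["funding", "staff time", "curriculum materials"],
    "activities": ["school cooking classes", "community workshops"],
    "outputs": ["12 workshops delivered", "300 participants reached"],
    "outcomes": {
        "short_term": ["leaders believe prevention programming is important"],
        "medium_term": ["leaders implement a prevention curriculum"],
        "long_term": ["community members eat more fruits and vegetables"],
    },
}

def core_elements(model):
    """Return the core element names present in the model, alphabetized."""
    return sorted(model)

print(core_elements(logic_model))
# → ['activities', 'inputs', 'outcomes', 'outputs']
```

Even this toy version makes the distinction visible: the outputs are tangible products (workshops, participant counts), while the outcomes describe changes in attitudes, behavior, and the big-picture result.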

Outcomes vs Outputs

Sometimes people get confused about the difference between these, especially after so much work goes into developing an output. I once worked with an organization that had a lot of programming around developing reports. The report itself, however, is an output. The education and behavior change that come from the report are outcomes, and measuring them may communicate how valuable the report was. So, an outcome indicates program effectiveness, while an output may indicate program efficiency. An outcome describes a change in knowledge, skills, behavior, or stakeholder experience as a result of the program, while an output is more of a tangible product produced by the program.

Developing Outcomes

Some questions you might ask when developing your outcomes may focus on both the change you are trying to make and your ability to attribute that change to your program. Specific questions might include: What are you trying to accomplish? What are the desired results we expect? Do you know what works already? What is the benefit to the program participants? Can your program influence the outcomes even if you can’t control them? Does measuring these outcomes help you identify programmatic success or areas of need? And finally, will stakeholders accept the outcome as a valid representation of your program? The last question is important because many factors often impact an outcome. Which outcomes directly tie to your program, and whether you need to use comparison groups in your analysis to account for confounding variables, will all be informed by your answers to these questions. So, we talked a little bit about outcomes, and I specifically like to divide them into short-term, medium-term, and long-term outcomes.

Short Term Outcomes

A short-term outcome usually lines up with knowledge, skills, and attitudes: knowledge being what participants learn as a result of the program, skills being the development of a new skillset or the improvement of a skillset over time as a result of your program, and attitudes being how participants’ perceptions or feelings about a topic change as a result of the program. There are often many short- and medium-term outcomes that lead to a long-term outcome. The medium-term outcome, I think, is often a behavioral change, a change in participants’ actions as a result of a program, and the long-term impact may be the big-picture change you wish to see. Following the theory of change example I gave you earlier, if prevention programs are targeted at multiple locations, then they are more likely to produce results. So, a short-term outcome in that theory of change might be that community leaders and teachers believe implementing prevention programs is important. The word “believe” makes this one about a change in attitude.

Medium Term Outcomes

A medium-term outcome might be community leaders implementing prevention curriculum and programming focused on eating more fruits and vegetables. The behavior of implementing the curriculum, in this case, is a mid-term outcome. And a long-term outcome for this theory of change might be that community members eat more fruits and vegetables and have lower rates of obesity. Again, those are simple short- and medium-term outcomes; in a larger evaluation, you would have several of them leading to your long-term outcome. So, while your theory of change describes the links between inputs, activities, and outcomes, your logic model articulates each of these. We talked about each of these elements, and this is one way to frame a logic model. Although all logic models have the same core elements, they might look very different depending on the program, organization, and evaluator.

Logic Models

This one is a simple model with all of the core elements. Sometimes there is confusion between a theory of change and a logic model. A logic model articulates the inputs, activities, outputs, and outcomes, while a theory of change uses “if then” statements to describe the arrows that connect one element to the next. Both the theory of change and the logic model help articulate what the program is meant to achieve. Here is a logic model framework I borrowed from the University of Wisconsin. This one includes the core elements I showed you earlier plus a few others, which I think can be very useful to think through and include with your logic model if you have the space to do so. But I’m showing you this one because I like that the outcomes are broken up into short-term, medium-term, and long-term. When developing your logic model, and even the complexity you present on a page, I’d like to remind folks to think about your audience. I remember in my first job out of graduate school I presented a giant logic model on an extra-large piece of paper with lots of boxes and arrows to my boss, who happened to be a lawyer. His response was, “This is very academic. Go back and give me something simpler.” That was a good lesson and a reminder for me that the framework is useful to identify indicators, but if you’re using the framework as a communication tool, even internally, you have to know your audience. I mentioned indicators a little bit. One of the most useful elements of a logic model is that it helps you articulate what you can measure.

One reason I like to include short-, medium-, and long-term outcomes is so that you can articulate indicators to measure against over a shorter time window, thereby demonstrating program progress, especially when the long-term outcomes might be several years in the making. Following our example from earlier, here are a few indicators stemming from the short-, medium-, and long-term outcomes. The short-term outcome, “Community leaders and teachers believe implementing prevention programs is important,” might lead to indicators such as surveys assessing community leader or teacher attitudes and knowledge about implementing an obesity prevention program. Indicators for the mid-term outcome might include the availability of fruits and vegetables in local grocery stores or the development of curriculum on how to cook vegetables easily. And a long-term indicator for community members eating more fruits and vegetables and having lower rates of obesity might be a food and nutrition survey of community members, or self-reported or physician-provided measurements of BMI. Note that this last one is a very long-term outcome that would have to be measured over the course of possibly many years, where the short- and medium-term ones could be measured more quickly to demonstrate the early progress of your program. I’m going to stop there. I know we covered a lot. Are there any questions about theory of change, logic models, indicators, or the role of evaluation?

Questions on Logic Models

Question:  How often would you suggest we do this evaluation?

Answer: Well, that depends on your program, and I think you can infuse evaluation throughout your program cycle. As I said earlier, evaluation isn’t a one-time event where you say, okay, we did a year of programming and now let’s look at the outcomes. You might want to start with a formative evaluation to ask: what makes sense here? Are we getting to the right population? Do our programs address the right goals? Are we working towards the right goals? That can be done early in the process. A process evaluation can occur throughout the program, particularly if you’re looking to make sure that the program elements are being implemented. And for outcome evaluations, I would measure them as the indicators come up, depending on what your short-term, mid-term, and long-term goals are. If you’re measuring change, say, year over year, you have to choose a measurement period that makes sense, because it’s hard to demonstrate certain kinds of changes over a period of, say, four or five months, whereas other kinds of changes you might be able to demonstrate in that period. So I would say it really depends on your program.

Program Evaluation Cycle

We’re going to dive a bit now into the program evaluation cycle, which builds on the question that was asked. We’ve developed a nine-step cycle. As I mentioned earlier, program evaluation is not a one-time outcome-based event that is either prescriptive or linear, although it does have specific steps. It’s cyclical because it supports a continuous quality improvement approach. Each program enters the cycle in a different spot. Your evaluation cycle might start with the data you already have or with the logic model. It all depends on where you are with the program and when you decided to engage in the evaluation process. There are nine steps in our cycle. The first is to assess your readiness. How prepared is your organization for an evaluation? Do you have a culture of data-driven decision making? Without a data-driven culture, it’s hard to justify the investment in an evaluation to organizational leadership or to use the results to inform program changes. If you don’t have a data-focused culture, you might think about how to work with your organization so that when you do the evaluation, the results are valued. Second, start with the end in mind by identifying clear program outcomes. I sometimes want to invert my logic model so that the outcomes are to the left, which seems like a natural starting point. Too often we think about programs we want to do, or that we think our donors like, or that just feel good, without necessarily linking them to what we want to achieve. When thinking about an evaluation, it is important to start with outcomes and then to identify the best programs to achieve those outcomes, or at least make sure that there is a clear connection between your program and your outcomes, and possibly modify the programs as part of your evaluation process to get out of the rut of creating programs that feel good but don’t have a logical flow or evidence behind them.

Data Collection Plan

The third step is to develop a data collection plan, choosing data gathering methods that support your program’s needs. We will talk a little more about that in a minute. The fourth is to identify resources. The fifth is to determine program outcomes, which we discussed a bit, and the sixth is to review and analyze data; we’ll cover some tips for data analysis in a minute. Another step is to create a logic model. And then the step of tracking and using outcomes and evaluation results to inform strategic management is critical if an evaluation is to inform the resources going into a program, as is integrating the results into strategic planning, meaning that if your program doesn’t seem to work, you can change it using the results of your evaluation. Brighter Strategies has a free e-book with resources and worksheets to help you get through all nine of these steps on your own, and we have lengthier training available.

For now, however, I want to focus on data collection because that is the component of this cycle we get the most questions about. Once you have figured out which indicators you want to measure, you’ll need to create a data collection and management plan. It doesn’t have to be overly onerous, and I want to emphasize that. It is often helpful to partner with organizations that house data. You can get them invested in your project by explaining its purpose and why you would like to use their data or partner with them. It’s often the case that government agencies, sister organizations, and community forums are interested in your outcomes too and are willing to share their data and measurement tools where appropriate. And finally, you can use formal or informal data depending on what you’re measuring. For example, sometimes at a conference you might see large pieces of paper on a board that say, “What did you learn?” That is an evaluation tool. It’s a fun, simple way to collect data from conference participants that gets people out of the grind of filling out a survey. And it gets people up and networking.

Other Tools

Other times I’ve seen clients wanting measurement tools supported by peer-reviewed journals to give them credibility with their audience. You can be creative here, or formal, as long as you are clear about the indicators you’re measuring and the audience with whom you’re communicating your data. I have a couple of data collection tips here, based around the idea of not forgetting about data you already have; we collect data all the time. You might have it stored somewhere on a drive, and we want to reduce the burden of over-collecting redundant data. Also, think about natural data collection points. For example, where participants are already waiting around for a piece of your program or filling out another form, it might be easy to collect data at those points. Another important element of data collection I’d like to recommend is collecting both quantitative and qualitative data. While you can often get feedback from more participants more quickly with quantitative data, the results may seem shallow and dry without qualitative elements.

Data Collection and Audience

Think about your audience: will they respond better to charts and graphs, verbal anecdotes, or a combination of both? Quantitative data includes surveys with closed-ended questions, checklists, and organizational statistics. It is often referred to as hard data that can be measured and communicated in numbers, charts, and graphs. Qualitative data is sometimes considered soft data, but I don’t think that word gives it enough credit. The insights from surveys with open-ended questions, interviews, focus groups, and case studies are extremely informative and can be hard to articulate graphically. I think these two data types often inform each other. You might do a survey and follow it up with a stakeholder interview to understand the results of your survey better, or you might do a focus group, identify a multitude of perspectives, and then turn those perspectives into survey questions to figure out how pervasive each of those perspectives really is in your target population. Once you have collected your data, it is time to review and analyze.

Analyzing Data

When analyzing your data, lots of interesting findings may pop up, but this is a good time to remember to focus on answering your evaluation questions. For quantitative data, you might perform various statistical calculations such as ranking, mode, mean, median, and so forth. For qualitative data, you might group comments into categories and themes highlighting program strengths and areas of improvement. When thinking about what kind of analysis to run, there are a few items to consider, including comparing data to established targets for each indicator, thinking about where you are now versus where you want to be in the future. You might want to compare your data to local, state, or national benchmarks to show whether you’re doing better or worse than those benchmarks. And you might compare your most recent results to those of previous reporting periods if it’s not your first round of data collection; you might be comparing your current results to a baseline, midline, or endline depending on your project length. Also consider looking at data trends, such as differences between major demographics within your target population or differences between your results and those of similar programs within your agency, organization, or community. Finally, I think it’s important to make inferences and draw conclusions by looking at multiple data points when possible rather than focusing too heavily on a single data point. Sometimes different data points have conflicting results, and you have to think through the impact of reporting errors or bias to decide which data points reflect results most accurately. So what do you do with the data once you’ve analyzed it? Here are a few different ways you can use the data. You can form small working groups to examine the reasons for unusual outcomes. You can use data to identify where improvements are needed.
You might hold “how are we doing?” meetings with program managers and staff to brainstorm possible program modifications that may help you better achieve outcomes, again taking your data into that cycle of quality improvement. The results of your process evaluation might determine how effectively any implemented program modifications improved the services you delivered, or you might use the evaluation as an opportunity to identify high-performing staff, recognize them, and celebrate your results. Finally, you’ll probably use your outcome data to improve fundraising and community relations by disseminating the evaluation report and the data points you crafted.
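The quantitative side of the analysis described above, basic statistics plus a benchmark comparison, can be sketched in a few lines of Python using only the standard library. The survey scores and the benchmark value here are hypothetical, for illustration only.

```python
import statistics

# Hypothetical post-program survey scores (1-5 scale) for one indicator.
scores = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

# Basic descriptive statistics mentioned in the webinar.
summary = {
    "mean": statistics.mean(scores),
    "median": statistics.median(scores),
    "mode": statistics.mode(scores),
}

# Compare against an assumed benchmark, e.g. a prior reporting period's mean.
benchmark = 3.5  # hypothetical baseline value
summary["vs_benchmark"] = summary["mean"] - benchmark

print(summary)
```

The same shape works for comparing against a baseline, midline, or a local or national benchmark; only the benchmark value changes.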


That is the end of our presentation material. I know we covered a lot of points, and I want to leave you with a summary slide to reiterate a few of them. One, evaluation does not sit on an island but is part of a broader strategic management process. Two, program evaluation starts with asking questions, but, like Google, not too many. Next, key elements of a program evaluation include developing the theory of change, which is a series of if-then statements summarizing how the program works, and a logic model, which articulates at a minimum the inputs, activities, outputs, and outcomes of your program. I recommend you consider using both qualitative and quantitative data, but remember to think about your evaluation questions and target audience, or who you wish to communicate the data to, when deciding which kinds of data to collect and communicate. If possible, draw conclusions using multiple data points instead of just one. And finally, use the data to identify problems, make program modifications, celebrate success, and inform fundraising and communications about the program.

Thank you for taking the time to join me to talk about program evaluation today. I hope the session was helpful as you think through, plan, and execute your program evaluation. If you’re interested in downloading our free e-book with worksheets to guide you through the cycle of program evaluation, please visit our website or contact me if you have additional questions. Thank you so much.

Thank you. This workshop has come to a close. Thank you so much for participating in Implementing Program Evaluations for Your Nonprofit.