Program Evaluation pt. 1: First Steps

Word association:  When you think of the term “Program Evaluation,” what’s the first word that comes to your mind?

It was “fun,” wasn’t it?  Of course it was.

Ok, maybe it wasn’t.  Very few program leaders seem to enjoy engaging in an evaluation on a regular basis (at least not a very rigorous one).  There are plenty of reasons why.

  • It takes a good chunk of time and energy to do well.
  • Sometimes it feels like they just confirm what you already know.
  • They can be unhelpful (e.g. it’s really easy to create a bad survey).
  • Nothing really changes after an evaluation.  Implementation is hard.

Evaluations can also feel really negative sometimes.  They tend to hold the door wide open for criticism (now that’s fun – sign me up for more of that).

On the other hand, evaluations can sometimes feel too positive as well.  One of the more frustrating tendencies I’ve experienced is that an evaluation is used simply to confirm or justify what is already being done.  It’s a thinly veiled attempt at patting ourselves on the back.

The problem is that you can identify both strengths and weaknesses in just about every program.  There will always be areas to improve and there will always be success stories, even in programs that are shown to be largely a waste of resources (which, unfortunately, includes most social programs).

Seasoned evaluators Linfield and Posavac put it this way:

“Without an informed evaluation, cynical stakeholders will claim evidence for failure, although others may see evidence for success.”

What this means, practically, is that you will face resistance no matter what you do with the results of an evaluation.  There will always be someone who doesn’t appreciate what the data reveals. 

Evaluation highlights the need for change.  This can be threatening.

Sometimes you may even need to make the tough call of ending a program in order to try something new. 

People generally aren’t going to like you for this. 

Regardless of how solid the rationale is, change feels like loss, and people don’t like to lose things (it’s called loss aversion, and it’s one of at least four reasons why innovation is hard).

Asking the hard yet necessary questions takes courage.  It takes leadership.

Effective program evaluation is not just about methods and measures; it’s about leadership.

If you’re ready to shine a light on what you do and lead your team forward, this series will cover all the major bases for you – from planning and design, to implementation and analysis, all the way to communicating and utilizing your results.

Let’s jump in. 

Here are four points to consider before you start designing your next evaluation.

1. Determine the Why

The first step is to determine why you need to do an evaluation.  A vague notion of wanting to be more effective is fine, but you need to drill deeper before you begin.

Evaluations take up a considerable amount of time and resources when done well.  Why is it worth it for your team or organization to invest in this right now?  Be specific.

  • What questions need answering?
  • What decisions need to be made?
  • What frustrations or “pain points” are triggering the need to look closer?

Think of evaluations like doctors’ tests.  A doctor isn’t going to send you off for an MRI or a complete blood count just for fun.  They’re going to choose tests based on the symptoms you’re displaying.

Similarly, identifying your rationale for the evaluation will help you later when you decide exactly which “tests” you’ll need to be using (i.e. your methods and measures).

Types of Evaluations

To help you think through why you might need an evaluation, Linfield and Posavac lay out four common types of evaluations that can narrow your focus.

They are need, process, outcome, and efficiency:

Need

Start at the beginning.  Which needs is this program meant to address?  While this form of evaluation needs to happen during the program planning phase, it is also important to revisit once the program is up and running.  Perhaps the needs have changed, or your understanding of them has deepened, since you first started.

Process

This type of evaluation examines the actual process being used to meet the needs.  In other words, what is happening in the program currently?  For example, when evaluating a church’s youth ministry, you might ask:

  • How often do you meet?
  • What is the content of a typical night?
  • Who is showing up?
  • How many contacts are leaders making with teens outside of program times?
  • How are parents being communicated with?
  • What is the ratio between teens and leaders?

One of your main goals in this type of evaluation is to see whether the program is being implemented as planned.  Sometimes our programs look really great on paper, but in reality, we don’t have enough leaders, they’re improperly trained, communication isn’t happening well, we don’t have the funds to do everything we hoped, etc.  Another name for this type of ongoing evaluation is simply monitoring.

Outcome

This is what most people think of when they use the term “evaluation.”  What change is actually occurring as a result of this program?  Has it made a difference in the lives of the participants?  This is notoriously difficult to gauge in ministry or human service programs and most evaluations aren’t rigorous enough in this regard.  It’s one thing to identify that a change occurred; it’s another to prove that the change occurred as a result of the program and not some other factor (e.g. the influence of other friends and family or even the simple passage of time). 

One way to track change is to ask people to complete an assessment when they first join your program and compare that to a similar assessment after a set time.  Another, more difficult method would be to compare the change of program participants with those outside of the program.  To use the youth ministry example again, it would be enlightening to be able to compare the development of teens in the weekly program vs. those teens who are part of the church but never attend the various youth events.
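
To make that comparison concrete, here is a minimal sketch in Python of the “compare the change” idea, sometimes called a difference-in-differences estimate.  All the scores below are invented purely for illustration; swap in whatever assessment your program actually uses.

```python
# Hypothetical pre/post comparison between program participants and a
# comparison group. All scores are invented purely for illustration.
from statistics import mean

# Each record: (score when joining, score after a set time)
participants = [(3.1, 4.2), (2.8, 3.9), (3.5, 4.0), (2.2, 3.8)]
non_participants = [(3.0, 3.2), (2.9, 3.1), (3.4, 3.3), (2.5, 2.9)]

def mean_change(records):
    """Average post-minus-pre change across a group."""
    return mean(post - pre for pre, post in records)

change_in = mean_change(participants)
change_out = mean_change(non_participants)

print(f"Participants changed by {change_in:+.2f} on average")
print(f"Non-participants changed by {change_out:+.2f} on average")

# The gap between the two changes hints at the program's effect, though
# only a stronger design (e.g. random assignment) can rule out other factors.
print(f"Estimated program effect: {change_in - change_out:+.2f}")
```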

Efficiency

Even if we are meeting needs, the program is being run well, and we can identify positive outcomes, we still need to ask the question, “Is it worth it?”  Given the time, energy, and money that goes into programming, at some point we have to determine if the costs are outweighing the benefits.  For example, some churches have one full-time youth pastor to lead a program of over a hundred teens, while other churches have full-time youth pastors for programs of only 10-15 teenagers.  One of these programs is vastly more expensive than the other.  This would be a good time to ask the question of “opportunity cost” – what else could we be doing with our resources that may be more effective than the current strategy?
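
If it helps to see the arithmetic, here is a small sketch of a cost-per-participant comparison.  The salary figure and group sizes are hypothetical; plug in your own numbers.

```python
# Hypothetical cost-per-participant comparison. The annual cost and
# group sizes below are invented purely for illustration.
annual_cost = 55_000          # assumed cost of one full-time youth pastor

teens_large_program = 100     # church with a large weekly program
teens_small_program = 12      # church with a much smaller program

cost_per_teen_large = annual_cost / teens_large_program
cost_per_teen_small = annual_cost / teens_small_program

print(f"Large program: ${cost_per_teen_large:,.0f} per teen per year")
print(f"Small program: ${cost_per_teen_small:,.0f} per teen per year")

# The raw numbers don't settle the question by themselves; they simply
# frame the opportunity-cost conversation: what else could those dollars do?
```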

There is a logical sequence here.  There isn’t much reason to evaluate outcomes if we don’t even know which needs we’re trying to meet, and there is no need to gauge efficiency if the outcomes aren’t what we intended.

Each type of evaluation answers a different set of questions and helps you make different decisions.  Perhaps you only need to focus on one type right now, or maybe you need to dip into all four.  What do you need to focus on to move forward?

One caution:  If you try to accomplish too much with one evaluation effort, it is easy to lose focus and end up with a pile of data you don’t know what to do with.  Narrow your focus and only measure what you need to measure.

 

2. Ensure Support for the Evaluation

Right from the beginning, you need to be thinking about everyone who needs to be involved in the evaluation efforts and how to get buy-in from each group.

Neglecting this step is a surefire way to make sure that the results of your evaluation never get used.

The primary groups you’ll need to be thinking about include:

Program leaders

If the primary leaders are uninterested in or resistant to the evaluation, the likelihood that they will pay attention to the results or support the recommended improvements is low.  If leaders feel threatened by the evaluation (i.e. a program failure feels like a personal failure to them), or see it as a waste of time, it is easy for them to dismiss the findings or become overly critical of the methodology – killing the chance of any meaningful change happening as a result of the effort.

Program participants

Some people naturally love surveys and being asked to share their opinions.  Others, not so much.  When considering participants, ask yourself,  “How will we prepare them for the evaluation?  Is it clear what is being asked of them?  What will motivate them to be engaged in the process?”  Without buy-in from this group, the quality of your data will suffer.

Other stakeholders

There are many voices that you may need to include.  Make a list of all your stakeholders.  Some of them will only need to be informed of the evaluation, while others will need to be consulted and actually help shape the effort.  Examples of other people you may want to communicate with include fellow staff members, senior leadership, your board, volunteers, parents (if the program involves minors), donors, or even the church at large (or your organization’s members, if you’re not in a church setting).

Casting vision and getting support early on will greatly increase the odds of your evaluation results being taken seriously and producing meaningful change.

 

3. Clarify the Foundations

What is the theory behind how the program is supposed to work?

In other words, if you do X and Y, on what grounds can you expect to get Z?

Many programs are based on implicit assumptions about how the specifics of what’s being done will lead to the right outcomes.  If these assumptions are unknown, or faulty, it becomes very difficult to properly evaluate the effectiveness of a program.

Evaluating a program without knowing the theory behind how it should work is a little like a doctor administering tests without really knowing how the human body works.  She could take your temperature, but if she doesn’t know what it should be, what’s the point?  Even if she does know that much, a high temperature would indicate a problem but say nothing about what is causing it.  A theory connects the dots.

Similarly, there is no point in issuing surveys and conducting interviews for an evaluation if it is unclear how the program is expected to produce results.

Linfield and Posavac provide a good list of questions to ask during this part of the planning phase:

  • What is intended to happen to those who complete the program?
  • How are they to change initially?
  • What program experiences are to lead to that change?
  • What are the essential elements/activities in the program?

Quite simply, “Why, specifically, do we run this program this way, this often, at this time, for this group of people?” 
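
One practical way to force this clarity is to write the program theory down as a simple logic model (inputs, activities, outputs, outcomes), a format popularized by guides like the W.K. Kellogg Foundation resource listed at the end of this post.  Here is a minimal sketch; every entry is an invented youth ministry example, not a prescription.

```python
# A minimal sketch of a program theory written down as a logic model.
# All entries are invented youth ministry examples.
logic_model = {
    "inputs":     ["trained volunteer leaders", "weekly budget", "meeting space"],
    "activities": ["weekly gatherings", "small-group discussion",
                   "leader contact with teens between meetings"],
    "outputs":    ["30 teens attending weekly", "1 leader per 8 teens"],
    "outcomes":   ["teens build trusted relationships with adults",
                   "teens grow in faith and character"],
}

# Reading the model top to bottom states the theory explicitly:
# IF we provide these inputs and run these activities,
# THEN we expect these outputs, which should lead to these outcomes.
for stage, items in logic_model.items():
    print(stage.upper())
    for item in items:
        print(f"  - {item}")
```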

If you haven’t clearly identified this, chances are good that there are a lot of different theories floating around among your leadership.

If there is not some sort of consensus here, this may be one of the more difficult tasks you encounter during your program evaluation.  It is worth the work, though. 

When people are polarized on why you do what you do, even a well-planned program evaluation is unlikely to be useful.

Once you’ve clarified the theory behind how the program is supposed to work, there are two major questions to ask:

  1. Is the theory plausible (i.e. does it make sense)?
  2. How well is the theory actually being implemented by the program?

To give you a better understanding of why these questions are important, take a look at these two case studies (paraphrased examples from Program Evaluation by Linfield and Posavac):

Caring for Alligators – An implausible theory

At one point, owners of swampland in Florida were selling alligator hides to shoe manufacturers at an alarmingly high rate.  To protect alligators and ensure their continued survival, legislators passed a law stating that it was illegal to sell alligator hides.  Problem solved, right?  The theory behind this was that if alligators were no longer marketable, we would leave them alone and they could safely continue on with their creepy reptilian lives.  However, the actual outcome was that landowners no longer had an incentive to maintain their land as alligator habitat.  The swamps were drained and developed into farmland, seriously reducing the amount of natural habitat for alligators.  Did the law reduce the sale of alligator hides?  Likely.  Did it actually help alligators thrive in Florida?  Not so much.

Part of identifying whether your program has a plausible theory is unearthing the hidden assumptions around it.  In this case, the assumption was that landowners would maintain the swampland, which turned out to be incorrect.  Which assumptions does your program or initiative rely on in order to be effective?

Community Mental Health Centres – Implementation gone wrong

In the US, a community mental health program was proposed that would allow mental health patients to be treated effectively while living in their communities rather than being confined to large state mental hospitals.  An admirable goal.  As part of this new initiative, patients would take certain medications and regularly check in at local community mental health centres for continued care and guidance on these medications.  However, many patients rejected this new system and never visited these health centres.  Additionally, the funding for this program was not as generous as expected, meaning that the program could never really be implemented according to the original vision.  As a result, many people struggling with serious mental health issues ended up homeless, living on the streets or under bridges in urban centres.

Sometimes we can have a great theory as to how our idea will make a difference for people, but the program fails because we didn’t properly plan out the execution.  Like the alligator law in Florida, this mental health initiative rested on some pretty important assumptions – namely, that patients would actually show up at these health centres.  If your target audience is rejecting your services, that’s a pretty good indicator that it’s time to rethink what you do.

If the people you’re trying to reach don’t want to show up to your program, it’s time to rethink your strategy.

 

4. Count the Costs

Before we move on to the next part in this series – actually designing the evaluation – we have to take stock of the resources available to us for this effort.

Evaluations can quickly eat up far more time than you anticipated, so there is a need to be realistic and focused in your efforts.

  • Do you have a budget for this?
  • Is there a team in place, or is this evaluation a solo effort?
  • How many hours do you have available to oversee the work?

If your resources are tight, there are still plenty of options available to you.  For example, a brief survey takes far less time than conducting focus group interviews. 

Why you’re conducting an evaluation in the first place will also affect how in-depth you go (e.g. being simply curious about one element of the program is a very different situation from donors threatening to pull their funding unless you can verify certain outcomes).

This step is where I’ve seen most evaluations falter.  We’re all busy in our jobs.  We’re already working 40+ hours without the additional work needed for an evaluation.  Most people have a hard time leading an evaluation well while simultaneously continuing to run all the programs that are under their umbrella.  I get it; this is hard. 

Plan ahead and enlist the help of others so that you don’t start strong only to end up with an evaluation that hasn’t helped in any tangible way.

If you know you need a top-to-bottom reassessment of a program, you may want to consider finding an outside consultant.  A thorough evaluation requires sustained attention over a period of time; if you can’t provide that, get outside help.  A good consultant will not only help you with your current evaluation but will also help create tools and build your capacity for future evaluations as well.

 

Next Steps

Let’s recap.  Here’s the groundwork we’ve laid so far: 

  • We know why we need an evaluation and which decisions it will help us make.
  • We’ve identified which voices need to be involved at each stage and have a plan for communicating and casting vision for this evaluation.
  • We’ve clarified the theoretical foundations for the program and have some level of consensus around how and why it should work.
  • We’ve made an honest assessment of the resources available for this evaluation.

Program evaluation is important work.  The alternative is to continue to sink time and resources into programs of unknown value.

If we aren’t willing to question what we do, we will continue to run programs of questionable value.

If you’re ready to move forward, head on over to part 2 in this series, where we look closer at the actual measurements you’ll use in your evaluation.

Thanks for being here,

Dan

 

Resources

Linfield and Posavac (2019). Program Evaluation: Methods and Case Studies.

W.K. Kellogg Foundation.  “A Step-by-Step Guide to Evaluation.”

Carter McNamara.  “Basic Guide to Program Evaluation.” 

US Department of Health and Human Services.  The Program Manager’s Guide to Evaluation  (2nd ed.).

United Way.  Evaluation Guide.
