What is evaluation?
Evaluation means assessing how effective the extension programme has been in delivering its intended goals and outcomes. This assessment is done by collecting data on whether the participants:
- were satisfied with the deliverables of the programme
- learned something from it
- are able to apply those learnings to their own situations on farm.
Evaluation can be performed at a number of different levels, using a range of different tools. In this section we’ll look at a couple of the key tools you could choose to use to evaluate an extension programme, and how to use the information you get from evaluation to communicate the results of your programme.
Note – in this section we are talking about evaluation at the programme level. We’ll look more specifically at evaluating individual extension activities in the final section of these guidelines.
Why should we evaluate extension programmes?
Having an effective evaluation plan:
- identifies what success will look like – what impact and outcomes are desirable, and how we will know when they have been achieved
- gives the whole team involved with the programme a clear guide as to the direction everyone should be heading
- keeps us accountable – encourages us to think carefully about how we can get the best outcomes for the resources we have, and how to work efficiently
- allows us to identify strengths and weaknesses within our current programme
- gives us information as to how we can improve the quality of our programmes in the future
- helps us show our stakeholders (including the farmers) what has been achieved with the resources invested
- provides robust data on which to base decisions about whether a programme should be continued, expanded or terminated.
When should you plan your evaluation?
Ideally, you should plan the evaluation you intend to undertake at the same time as you plan the extension programme itself. This will allow you to gain the maximum benefits from the time and resources you invest in evaluation, as well as provide early learnings about what’s working well and what could be done better in the later phases of your programme.
However, it is never too late to start with evaluation. Introducing evaluation partway through a programme will still add value, so even if your extension programme is well underway already, don’t let that put you off investing time in evaluation moving forward. You will still be able to gain useful information about your programme, which will enable clear communications with interested parties as to how things have gone and where improvements can be made.
How do you design an evaluation plan for your programme?
Anybody developing and managing an extension programme should be able to design a simple evaluation plan to capture the indicators that can be used to monitor and evaluate success.
You will need to:
- choose an evaluation method that works best for your situation – we’ll look at two key ones in these guidelines (Program Logic and Bennett’s Hierarchy), but you may know of or prefer an alternative method that will work equally well
- map out a plan for evaluation using your chosen method
- implement your evaluation plan – some evaluation activities may need to be undertaken while the programme is in progress; others will only be undertaken after a programme is complete
- communicate the results of your evaluation to all interested parties.
Key evaluation tools and techniques
There is no ‘one best way’ to evaluate an extension programme. Some approaches are probably better than others for addressing particular types of questions or concerns, but all approaches have their own strengths and weaknesses. We’ll introduce you to two options here:
- Program Logic
- Bennett’s Hierarchy.
A Program Logic model (also called an outcome logic model) is a visual way of representing your extension programme plan that also captures key measurable outcomes, which can then form the basis of your evaluation plan. It shows the relationships between the programme inputs, goals and activities, its operational and organisational resources, the techniques and practices used, and the expected outputs and effects.
Here’s an example of a Program Logic used for the Red Meat Profit Partnership Programme as a whole. It may look complicated, but don’t be put off – work from the lowest level (problems and opportunities) up to the highest level (long-term outcomes 2025) and it will all make sense.
Watch this short video (2:07 minutes) to see a basic introduction to Program Logic from AgResearch - https://www.youtube.com/watch?v=cg4lV5pwiMQ
AgResearch also make a Program Logic model template, available as a Word document, that you could adapt for your programme if you choose to use this approach.
Advantages and limitations of Program Logic
How do you develop a Program Logic?
There are several elements in a Program Logic. It makes sense when planning and thinking through a programme to begin with the issue/opportunity statement and then work backwards from your desired long-term outcomes. Once you have the major elements in place, consider the assumptions and external factors as a way to identify potential risks that may prevent you from achieving your outcomes. The different elements are laid out in this order below.
- Issue/opportunity statement
Developing the issue/opportunity statement that your programme is going to address is always the first step. This statement should be both targeted and specific.
If you need to refresh your memory on how to go about this, revisit the section of these guidelines on Identifying the issue/opportunity. You may also find it helpful to revisit the section on Defining SMART goals for the programme at this point.
- Long-term outcomes
The long-term outcomes should resolve the issue identified in your issue/opportunity statement, and they should fit with your overall goal. Long-term outcomes are sometimes called 'impact outcomes'. They usually take a long time to be seen (sometimes up to ten years) and will be influenced by factors that are outside of your control.
- Short-term outcomes
Short-term outcomes are the changes you expect to see on completion of your programme. These are the easiest to measure, and the timeframe will usually be the length of your programme. Short-term outcomes are most often changes in skill level, or knowledge about a particular technology or practice.
- Medium-term outcomes
Medium-term outcomes are what you would expect to follow on from the short-term outcomes you have identified. So if you have identified an increase in farmers’ skills in accurately body condition scoring sheep as a short-term outcome, the medium-term outcome is likely to be the application of that skill, for example a change in practice whereby farmers undertake more frequent body condition scoring of their flocks.
- Inputs
Inputs are the resources you are able to draw on to address the issue/opportunity your programme is targeting. It is good to think of both the material resources (e.g. funding, physical spaces) and the non-material resources (e.g. SME knowledge, time available from people providing adoption support).
- Outputs: Activities
The activities are the things that you do. This could include a range of extension activities, and must also cover adoption support.
- Outputs: Participation
This describes who will be involved. It is good to clearly define the target audience for your programme and include relevant information about this group (for example age, farming systems used). As well as the target audience, you should include information about others who may be involved (e.g. connectors, facilitators, SMEs, people providing adoption support).
- Assumptions
Making assumptions explicit is a really important part of the logic approach. Assumptions are the beliefs we hold about our programme, the people involved, and how it will work. Unexamined assumptions are a big risk to programme success. Experts in the Program Logic approach suggest asking "what is known, and what is being assumed?". It is worth spending some time on this section, and asking a range of people involved in the programme to help you identify a full list of assumptions so you can address them.
Examples of common assumptions that may be worth considering at the outset include:
- that sufficient funding for implementation of the programme as planned will be made available
- that the target audience will be able to participate in all critical components of the programme, these being….
- that the additional resources required (facilitators, SMEs and adoption support providers) will be available and willing to contribute in the desired way to the programme.
- External factors
This element of a Program Logic requires you to consider the environment in which your programme is being delivered. Economic, political, cultural, historical, environmental and social contexts all impact on the way your programme is delivered, and the outcomes that you can achieve. Likewise, your programme has the potential to impact some of these factors too. For example, a change in the demographics of an area may mean you need to reconsider the target audience for your programme.
Examples of external factors you may need to consider include:
- resistance to evaluation
- lack of co-operation amongst different parts of your organisation, or between your organisation and others involved, that may affect the implementation of your programme
- wide-scale industry change that affects the climate within which your programme is being implemented
- significant environmental issues (e.g. flooding or drought) that might affect the capacity of the participants in the programme to maintain necessary focus on or commitment to the planned activities.
A note about revising and updating your Program Logic
If your Program Logic includes consideration of assumptions and external factors that may change during the course of the programme’s implementation, it will be important to regularly revisit the Program Logic and revise and update it as needed.
How do you use a Program Logic to guide evaluation?
A Program Logic can help you identify:
- focus – what aspects of the programme will you evaluate?
- questions – what do you want to know?
- evidence – how will you know it?
- timing – when should you collect the data?
- data collection – sources (who will have this information?), methods (how will you gather this information?), samples (who will you question?) and instruments (what tools will you use?).
The outcomes section of your Program Logic is the part that will guide your evaluation activity, because it records what you should be trying to measure.
You should aim to measure both short- and medium-term outcomes.
A note about measuring long-term outcomes
It is often more difficult to measure the long-term outcomes, as the impact usually takes a long time to be realised, and there are many external factors that affect it. This can make it challenging to establish how much of the long-term or impact outcome was the result of your programme, and how much was the result of external factors.
RMPP has engaged specialist evaluation support to measure the long-term outcomes of the extension programmes being run as part of its overall Partnership Programme from 2013 to 2020.
Keep in mind that you can’t evaluate everything, so you will need to prioritise. Which aspects of a programme you choose to evaluate can often be guided by what your stakeholders want to know, while keeping in mind that any evaluation activities planned need to be realistic and ‘doable’.
Want to know more about Program Logic and evaluation?
How to develop a Program Logic for planning and evaluation is an excellent, in-depth collection of information, links to other resources, and templates on this topic.
University of Wisconsin Extension has a useful collection of Program Logic templates, examples and references, as well as an in-depth handbook titled Enhancing Program Performance with Logic Models (pages 158-178 focus on Using Logic Models in Evaluation).
Bennett’s Hierarchy is a 7-level model. Like Program Logic, it can be used for both programme planning and programme evaluation. Here’s a summary of the seven levels.
Advantages and limitations of Bennett’s Hierarchy
How do you use a Bennett’s Hierarchy to guide evaluation?
It’s best to work from Level 1 to Level 7 when it comes to evaluation.
- Levels 1-4 (inputs, activities, participation and reactions) focus on evaluating the processes used. They allow you to measure the extent to which the programme is operating as intended.
- Levels 5-7 (KASA change, practice change, and end results – SEE: social, economic and environmental conditions) focus on evaluating the outcomes of the programme. They allow you to measure programme outcomes in terms of benefits to participants and the wider industry.
In evaluating the process, you will be assessing:
- How well is the programme working?
- Is it reaching the intended audience(s)?
Here are some questions that can be used to evaluate at each of these levels.
In evaluating the outcomes, you will be assessing short-, medium- and long-term outcomes and impacts that have been brought about by the programme.
Here are some questions that can be used to evaluate at each of these levels.
Want to know more about Bennett’s Hierarchy?
Viewing Bennett's Hierarchy from a Different Lens: Implications for Extension Program Evaluation is a short article from the Journal of Extension. It looks at previous uses of Bennett’s Hierarchy in extension programme evaluation. It suggests a 4-step framework for identifying costs for short-, intermediate- and long-term outcomes.
Targeting Outcomes of Programs: A Hierarchy for Targeting Outcomes and Evaluating Their Achievement is another useful article. Pages 25-45 in particular deal with using Bennett’s Hierarchy for evaluation.
Evaluating Success in Achieving Adoption of New Technologies is a paper written for presentation at the NSW DPI and Beef Co-operative Research Centre Conference ‘Moving from Research to Adoption’, held in 2005. It includes a modified Bennett’s Hierarchy to help in framing an evaluation and draws on some practical examples of how to collect and collate relevant data.
How do you present evaluation results?
One of the main reasons for evaluation is to communicate to key stakeholders the extent to which the extension programme has met its intended outcomes, and what could be improved. It’s important, then, that you consider how best to present the evaluation results so that interested parties get the information they need.
Some key points to keep in mind:
- Where possible, involve some of your stakeholders in the evaluation process, by getting them to participate in the process of working through a Program Logic or Bennett’s Hierarchy.
- Brief your stakeholders throughout the process, rather than waiting until the end of the programme. Try to avoid surprising them with the results.
- Create a plan for how you will communicate the results. Identify the various audiences that need to see the results (including the target audience for the programme), what information would be most useful to them, and how to get it into their hands.
- The quality of the evaluation and relevance of the findings matters. If the evaluation design is logically linked to the purpose and outcomes of the project, the findings are far more likely to be put to use.
- Use the most appropriate format for each audience (for example, public presentation, social media, flyers, reports).
- The timing is also critical. If a report is needed for a funding decision, but isn’t ready in time, then the chances of the data being used drop significantly.
- The way in which findings are reported, including layout, readability and user-friendliness, all make a difference.
- Always use simple, plain English. Regardless of who the audience is, they want to be able to quickly and easily assimilate what you’re trying to tell them.
- Where possible, present results visually. Pie charts are useful to show what percentage of respondents gave a particular response; line graphs are useful for showing trends through time.
- Don’t exclude or ignore negative or unexpected results. Instead, use these as an opportunity to inform future efforts.
- Consider competing information. For example, are there results from similar programmes that confirm or contradict your results?
- Make sure your conclusions and/or recommendations are clearly spelt out. Don’t leave your audience wondering “So what?”