Why is it important to evaluate individual extension activities?
As well as planning how you will evaluate the overall extension programme, there will be occasions when you want to evaluate individual extension activities. Evaluation at this level helps us to:
- further develop the activity so it’s as effective as it can be in meeting the needs of the audience and delivering on the desired outcomes
- improve facilitation skills
- contribute to the overall programme evaluation – for example, if you’re using Bennett’s Hierarchy, evaluating individual extension activities captures information for Levels 2-4 (activities, participation, reactions).
Which activities should you evaluate?
In any extension programme, a large number of different activities are likely to be developed and run. These could range from field days and workshops to webinars, discussion groups and more. However, the evaluation resources you have will be limited, so it’s important to put your evaluation efforts where you’ll get the greatest return on the time and money you invest.
Factors that you should consider when selecting which activities to evaluate include:
- Is this a new kind of activity that you haven’t used before, or that hasn’t been used in a while? If so, it’s probably worth evaluating, so you can gauge how successful it was and how it can be improved.
- Is this an activity that targets critically important objectives for the programme? If it is, then it should probably be evaluated.
- Does this activity involve facilitators or presenters who haven’t worked within your programme before? If so, an evaluation can provide both you and them with useful feedback on their skills.
- Have the participants involved in the activity already been asked to evaluate a lot of different activities recently? People can start to get ‘evaluation fatigue’ if asked to evaluate every activity they participate in.
When should you evaluate activities?
Often activities are evaluated immediately at the end of a session. The advantage of doing this is that you have a ‘captive audience’ – participants can easily be encouraged to complete an evaluation before they leave. However, their reactions will be limited to their experience at the time, and won’t give you the opportunity to find out how useful they have found the learning since returning to their farm, or how successful they have been in implementing any new ideas.
Another approach is to evaluate the activity some time after the event itself. This could take the form of a follow-up form posted out, an email, a link to an online survey, or a phone call to the participants to ask for their reflections on the event. The benefit of this approach is that participants can give you more information about how they have been able to apply new ideas and skills gained from the event back on the farm, as well as any challenges they have faced and what they’ve been doing to tackle them. You can then use all of this information to inform and strengthen future activities. However, people lead busy lives and may find it challenging to find time to complete an evaluation or talk with you when other priorities are calling on their time.
Whenever and however you choose to evaluate an extension activity, make sure you do it in a way that:
- captures the information you most need from the process
- allows participants to share honest feedback
- ideally, prompts participants to plan for and/or reflect on their implementation of learnings from the event.
Want to know more?
Obtaining and Using Feedback from Participants is a useful summary of the why, who, when and how of evaluation feedback.
Review events or group process offers an overview of the reasons for, and methods of, evaluation, as well as links to some specific evaluation tools.
Better evaluation is a library of information about choosing and using evaluation methods and processes.
Key activity evaluation tools and techniques
Here is a selection of easy-to-use tools for evaluating an extension activity:
- Participant feedback sheets
- Online surveys
- Postcard feedback
- Evaluation dartboard
- Goal post evaluation
- Phone interviews
- Group evaluation.
A note about revising and updating your Program Logic
If your Program Logic includes consideration of assumptions and external factors that may change during the course of the programme’s implementation, it will be important to regularly revisit the Program Logic and revise and update it as needed.
Participant feedback sheets
Participant feedback sheets are usually one- or two-page sheets that are distributed to and completed by participants at the end of an event. They can be used to gather participant reactions to the facilities, content, and facilitation (if relevant).
These sheets can include a mix of question types:

|Type of question|Data that will result|Examples|
|---|---|---|
|Rating-scale questions|Quantitative data that can be tallied and compared|The purpose of today was XYZ. Overall, how well was this objective met?<br>How satisfied were you with the handouts you received?|
|Open-ended questions|Qualitative comments and suggestions|Which session from today was most useful to you? Why?<br>What improvements can you suggest for future field days?<br>What topics would you like to see covered at future discussion groups?|
Here are some collections of example evaluation sheets that you can use or adapt to your needs:
- Training toolkit – Evaluation forms and questionnaires
- JotForm – a collection of free evaluation form templates.
Online surveys
Sending out a link to an online survey for participants to complete can be a quick way to gather and analyse evaluation feedback. You can ask the same kinds of questions you would on a printed participant feedback sheet, since most online survey tools allow users to set up questions using a rating scale or to enter free text.
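Most online survey tools let you export responses as a spreadsheet or CSV file. As a minimal sketch of how rating-scale responses might then be tallied (the question names and ratings below are invented for illustration):

```python
from collections import Counter
from statistics import mean

# Hypothetical rows from an online survey export: one dict per participant,
# with each rating question scored 1 (lowest) to 5 (highest).
rows = [
    {"objective_met": 4, "handouts": 5},
    {"objective_met": 3, "handouts": 4},
    {"objective_met": 5, "handouts": 4},
]

def summarise(rows, question):
    """Return the mean rating and the count of each rating for one question."""
    ratings = [r[question] for r in rows if question in r]
    return mean(ratings), Counter(ratings)

for question in ("objective_met", "handouts"):
    avg, counts = summarise(rows, question)
    print(f"{question}: mean {avg:.1f}, distribution {dict(counts)}")
```

Free-text answers can’t be summarised this way – they still need to be read through and grouped into themes by hand.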
Want to know more?
There are a number of easy-to-use, free online tools that allow you to set up, send out and analyse basic online surveys, including:
Postcard feedback
Postcard feedback is a very quick method of gathering feedback from an event. It’s a technique you may like to use if you think participants may be unwilling to complete a longer feedback sheet, or there isn’t sufficient time for them to do so.
It involves giving each participant a card the size of a postcard (about A5 size). Participants are asked to fill out one side of the card at the start of the event, noting three things they wish to learn or achieve at the session. At the end of the event, participants are asked to flip the card over and note to what extent the event helped them achieve these goals.
Evaluation dartboard
The evaluation dartboard is another quick and easy way to gather evaluation feedback from participants at an event. It consists of concentric circles drawn on a whiteboard or piece of flipchart paper, with the middle (‘bullseye’) representing the highest/most favourable response and the outer ring representing the lowest/least favourable response. Each sector of the ‘dartboard’ can then be used to evaluate a different question; four quadrants is the usual format, but more can be used as long as you don’t make things too cluttered.
Participants are then asked to place a sticky dot or mark on each sector of the dartboard to indicate their response to each of the evaluation questions or statements, as shown in the example below.
Looking at the diagram above yields the following results:
- item 2 shows the greatest variation – the participants vary quite widely in terms of their levels of confidence in implementing the farming practice covered; further adoption support would help those feeling less confident.
- item 3 shows the most positive responses – most participants feel that the process used in the workshop worked well for them
- item 4 shows the least positive responses – most participants finished the workshop not feeling especially motivated to make the practice change back on their own farm. Further work might need to be done to help the group see the potential benefits of making the change.
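If you record each dot’s position as a score (say, 1 for the outer ring up to 5 for the bullseye), a quick calculation shows which items drew the most favourable responses and which had the widest spread. The item labels and scores below are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical dot positions read off a four-sector dartboard,
# scored 1 (outer ring, least favourable) to 5 (bullseye, most favourable).
responses = {
    "item 1": [4, 4, 5, 3, 4],
    "item 2": [1, 5, 2, 5, 3],  # widest spread: confidence varies across the group
    "item 3": [5, 5, 4, 5, 5],  # most positive responses
    "item 4": [2, 1, 2, 2, 1],  # least positive responses
}

for item, scores in responses.items():
    # mean = overall favourability; stdev = how much the group's responses vary
    print(f"{item}: mean {mean(scores):.1f}, spread {stdev(scores):.1f}")
```

The mean flags the strongest and weakest items, while the spread flags items where the group is divided and a single average would be misleading.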
Goal post evaluation
Like the evaluation dartboard, the goal post evaluation is a visual way to gather feedback from a group. Here are the steps to use:
1. Draw some goal posts on a whiteboard or flipchart page. Write 0 at one end of the cross-bar and 10 at the other end.
2. Get each person to place a mark on the crossbar to indicate where they would rate the event between 0 and 10.
3. Give each person three sticky notes to write down three things they regard as positive about the event. Stick these on the left-hand side of the goal post.
4. Using three more sticky notes (a different colour provides a visual contrast), ask each person to write down a maximum of three negatives associated with the event. Stick these on the right-hand side of the goal posts.
5. Taking into account the positive and negative points, ask the group to assign an overall mark (out of 10) as to how they would rate the event. Write this number above the crossbar. Note: It is possible for the group mark to vary considerably from first round marks.
6. Ask the group to suggest ways in which future events could be improved. List the collective ideas from the group below the crossbar.
Phone interviews
In addition to, or instead of, written or online surveys, you could choose to follow up and interview participants by phone after the event. If evaluating by phone, keep in mind that you are more likely to focus on open-response questions than rating-style questions.
The benefit of a phone interview over seeking written responses is that when an issue is raised you can clarify and drill down further using follow-up questions as needed.
Group evaluation
Not all evaluation needs to be done as an individual activity – you can make evaluation a group activity. Here are some options for doing this.
- Pair evaluation
You can ask people to work in pairs to complete an evaluation sheet together. Discussion can add to the depth of the feedback you will gain, although it can be challenging for two people to come to a consensus about a response.
- Team discussion
You can post four or five evaluation questions on pieces of flipchart paper and post them around the room. Then divide the group into four or five teams and ask them to discuss and note down a group response to each question as they circulate around the room.
The advantage of this approach is that everyone will remember different aspects of the event and bring different perspectives, so a team discussion allows the event to be reviewed more fully and can deepen some of the learnings.
- The ORID Technique
ORID stands for Objective, Reflective, Interpretive, Decisional. An evaluation using this technique would involve asking the group questions under each of the four headings, as shown below:
- Objective: questions that analyse facts and reality.
- Reflective: questions that focus on participants’ personal reactions.
- Interpretive: questions that focus on the ‘what does this mean for me?’ for participants.
- Decisional: questions that focus on finding out what participants intend to do as a result of what they have learned, seen or heard.
The ORID technique offers a particular way of structuring the questions used in a facilitator-led evaluation conversation, ensuring you capture information on each of these aspects. When using this technique, it’s important that the questions are posed sequentially (O-R-I-D): the process relies on the idea that once the facts and the participants’ reactions have been clarified, participants are in a better position to understand and interpret the meaning of the information and ideas, and therefore to make better decisions about it.