Evaluating program relevance
But before you start, it will help to review the following characteristics of a good evaluation, adapted from a resource formerly available through the University of Sussex Teaching and Learning Development Unit Evaluation Guidelines and from John W. Evans' Short Course on Evaluation Basics:
Your evaluation should be crafted to address the specific goals and objectives of your EE program. However, it is likely that other environmental educators have created and field-tested similar evaluation designs and instruments. Rather than starting from scratch, looking at what others have done can help you conduct a better evaluation.
Input should be sought from all of those involved in and affected by the evaluation, such as students, parents, teachers, program staff, or community members. This ensures that diverse viewpoints are taken into account and that results are as complete and unbiased as possible.
One way to ensure your evaluation is inclusive is by following the practice of participatory evaluation.

Evaluation results are likely to suggest that your program has strengths as well as limitations.
Your evaluation should not be a simple declaration of program success or failure. Evidence that your EE program is not achieving all of its ambitious objectives can be hard to swallow, but it can also help you learn where to best put your limited resources.
A good evaluation is one that is likely to be replicable, meaning that someone else should be able to conduct the same evaluation and get the same results. The higher the quality of your evaluation design, data collection methods, and data analysis, the more accurate its conclusions will be and the more confident others will be in its findings.
Making evaluation an integral part of your program means evaluation is a part of everything you do. You design your program with evaluation in mind, collect data on an on-going basis, and use these data to continuously improve your program.
Developing and implementing such an evaluation system has many benefits. As you set goals, objectives, and a desired vision of the future for your program, identify ways to measure these goals and objectives, and consider how you might collect, analyze, and use this information. This process will help ensure that your objectives are measurable and that you are collecting information you will actually use.
Strategic planning is also a good time to create a list of questions you would like your evaluation to answer. See Step 2 to make sure you are on track. Update these documents on a regular basis: add new strategies, change unsuccessful strategies, revise relationships in the model, and add unforeseen impacts of an activity (EMI).

A related resource describes features of an organizational culture that supports evaluation, and explains how to build teamwork, administrative support, and leadership for evaluation.
It discusses the importance of developing organizational capacity for evaluation, linking evaluation to organizational planning and performance reviews, and unexpected benefits of evaluation to organizational culture.
If you want to learn more about how to institutionalize evaluation, check out the following resources on adaptive management.

Phase 3: Marketing Strategy

Step 3. Select your target audience segment(s). The selection of target audience segments has major implications for the evaluation, ranging from interview methods to cost. Evaluators can also tell you whether a potential segment can be distinguished from others, an essential element in such evaluation tasks as measuring the proportion of the target audience segment that is eventually exposed to program components.
Define current and desired behaviors for each audience segment. The desired behavior is the major program outcome to be measured by the evaluation.

Describe the benefits you will offer. Benefits should be measured in the evaluation as a behavioral determinant.

Write your behavior change goal(s). The behavior change goal for the entire program is a general directional statement about behavior. It is at the top of a pyramid of more specific standards against which program performance can be compared.
The goals of the component intervention activities must add up to the overall behavior change goal. An evaluator will determine whether the program has a detectable effect in the intended direction, as stated in this overall program goal.

Select the intervention(s) you will develop for your program. Decisions made here must be documented and shared with evaluators so that they understand the rationale for the intervention activities that they will evaluate.
The selection of interventions has enormous bearing on the design of the outcome evaluation; this consideration may factor into your intervention choice.

Write the goal for each intervention. Evaluators will compare the effects of the program against each of these subgoals to determine whether each component intervention is having a detectable effect in the intended direction.
They can help you articulate clear subgoals that add up to your overall program goal.

Phase 4: Interventions

Step 4. Select members and assign roles for your planning team. Having contributed to program strategy, an evaluator continues to be a valuable member of the intervention planning team.
Write specific, measurable objectives for each intervention activity.

Throughout this planning, weigh your choices against the program evaluation standards:

Feasibility: Are the planned evaluation activities realistic given the time, resources, and expertise at hand?

Propriety: Does the evaluation protect the rights of individuals and the welfare of those involved? Does it engage those most directly affected by the program and by changes in the program, such as participants or the surrounding community?

Accuracy: Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results?
Sometimes the standards broaden your exploration of choices. Often, they help reduce the options at each step to a manageable number.
For example, when deciding how to engage stakeholders (Step 1), the standards prompt questions such as:

Feasibility: How much time and effort can be devoted to stakeholder engagement?

Propriety: To be ethical, which stakeholders need to be consulted: those served by the program, or the community in which it operates?

Accuracy: How broadly do you need to engage stakeholders to paint an accurate picture of this program?

Similarly, there are unlimited ways to gather credible evidence (Step 4). Asking these same kinds of questions as you approach evidence gathering will help identify the approaches that will be most useful, feasible, proper, and accurate for this evaluation at this time. Thus, the CDC Framework approach supports the fundamental insight that there is no such thing as the right program evaluation.
Rather, over the life of a program, any number of evaluations may be appropriate, depending on the situation.

Good evaluation requires a combination of skills that are rarely found in one person.
The preferred approach is to choose an evaluation team that includes internal program staff, external stakeholders, and possibly consultants or contractors with evaluation expertise. An initial step in the formation of a team is to decide who will be responsible for planning and implementing evaluation activities. One program staff person should be selected as the lead evaluator to coordinate program efforts. This person should be responsible for evaluation activities, including planning and budgeting for evaluation, developing program objectives, addressing data collection needs, reporting findings, and working with consultants.
The lead evaluator is ultimately responsible for engaging stakeholders, consultants, and other collaborators who bring the skills and interests needed to plan and conduct the evaluation.
Although this staff person should have the skills necessary to competently coordinate evaluation activities, he or she can choose to look elsewhere for technical expertise to design and implement specific tasks. However, developing in-house evaluation expertise and capacity is a beneficial goal for most public health organizations. The lead evaluator should be willing and able to draw out and reconcile differences in values and standards among stakeholders and to work with knowledgeable stakeholder representatives in designing and conducting the evaluation.
Seek additional evaluation expertise in other programs within the health department or through external partners. You can also use outside consultants as volunteers, advisory panel members, or contractors. External consultants can provide high levels of evaluation expertise from an objective point of view. Important factors to consider when selecting consultants are their level of professional training, their experience, and their ability to meet your needs.
Be sure to check all references carefully before you enter into a contract with any consultant. To generate discussion around evaluation planning and implementation, several states have formed evaluation advisory panels.
Advisory panels typically generate input from local, regional, or national experts who would otherwise be difficult to access. Such a panel can lend credibility to your efforts and prove useful in cultivating widespread support for evaluation activities.
Evaluation team members should clearly define their respective roles. For some teams, informal consensus may be enough; others prefer a written agreement that describes who will conduct the evaluation and assigns specific roles and responsibilities to individual team members. Either way, the team must clarify these roles and responsibilities and reach consensus on them.

This manual is organized by the six steps of the CDC Framework.