Saturday, 30 April 2022

Program Evaluation with CDC's Framework

Evaluation: A systematic method for collecting, analyzing, and interpreting data in order to assess the effectiveness and efficiency of programs and, more importantly, to contribute to continuous program improvement.

Program: Any organized public health action; any set of related activities undertaken to achieve an intended outcome. The CDC's definition of program includes policy, intervention, environmental, systems, and media activities, as well as other endeavors. It also covers preparedness initiatives, research, capacity building, and infrastructure.

Purpose:

  • The Centers for Disease Control and Prevention (CDC) has a long-standing commitment to data-driven decision-making, as well as a responsibility to account for the impact of its public health dollars.
  • Strong program evaluation can help us identify our best investments and determine how to expand and sustain them as best practices.
  • The purpose is to promote the use of evaluation data across the agency for ongoing program improvement.

When it comes to evaluation, research, and monitoring, what's the difference?

  • Evaluation: The goal is to determine the effectiveness of a specific program or model and to understand why it is or is not working. The aim is to improve programs.
  • Research: The goal is to test hypotheses and produce generalizable knowledge. The aim is to contribute to the body of knowledge.
  • Monitoring: The goal is to track implementation progress through routine data collection. The aim is to provide early indicators of progress (or the lack of it).
There are also similarities:

  • Research and evaluation frequently use the same data collection and analysis methods.
  • Monitoring and evaluation (M&E) together form a process of measuring and assessing performance in order to improve it and achieve goals.
The CDC's Evaluation Framework
Effective program evaluation uses procedures that are useful, feasible, ethical, and accurate in order to improve and account for public health initiatives. The framework helps public health professionals use program evaluation effectively. It is a practical, nonprescriptive tool for summarizing and organizing the essential elements of program evaluation.
Centers for Disease Control and Prevention. Framework for program evaluation in public health. MMWR 1999;48(No. RR-11).


The framework's objectives are to:
  • outline the most important aspects of program evaluation,
  • establish a framework for conducting successful program evaluations,
  • explain the steps in program evaluation,
  • review the standards for successful program evaluation, and
  • clear up misconceptions about the goals and procedures of program evaluation.

When the emphasis is on practical, ongoing evaluation that engages all staff and stakeholders, not just evaluation experts, evaluation can be tightly linked to everyday practice. People who ask questions and reflect on feedback as part of their routine professional duties are conducting informal evaluations all the time. When the stakes are modest, such informal evaluation is sufficient. When the stakes of a situation rise, however, it is critical to use formal, transparent, and defensible evaluation methods.

Answering the following questions can help you assign value to a program and make evidence-based decisions about it:
  • What will be assessed? (i.e., what is "the program" and in what context does it operate)
  • When judging program performance, what parts of the program will be taken into account?
  • What criteria must be met in order for the program to be deemed successful?
  • What evidence will be utilized to assess the program's success?
  • By comparing the available facts to the chosen standards, what judgments about program performance can be drawn?
  • How will the findings of the evaluation be used to improve the program's effectiveness?
Evaluation Steps with CDC's Framework
The framework's six interconnected steps serve as a starting point for tailoring an evaluation to a specific program at a specific point in time. The steps are interdependent and may be taken in a nonlinear order, yet there is a natural sequence: earlier steps lay the groundwork for later ones. Decisions about how to carry out a step are therefore iterative and should not be finalized until the preceding steps have been thoroughly addressed. The steps are:
  • Engage stakeholders.
  • Describe the program.
  • Focus the evaluation design.
  • Gather credible evidence.
  • Justify conclusions.
  • Ensure use and share lessons learned.
Engage stakeholders
The first step in the evaluation process is engaging stakeholders, i.e., the persons or organizations that have an investment in what will be learned from an evaluation and what will be done with that knowledge. Almost all program work is done in partnership, so any evaluation of a program must take the partners' value systems into account. Stakeholders must be engaged in the evaluation to ensure that their perspectives are understood. When stakeholders are not engaged, evaluation findings may not address their questions or values and may therefore be ignored, criticized, or resisted. Once engaged, stakeholders help carry out the remaining steps. It is vital to identify and engage the following three main groups:
  • Those involved in program operations: Sponsors, collaborators, coalition partners, funding officials, administrators, managers, and staff are just a few examples.
  • Those who have been served or are affected by the program: Clients, family members, neighborhood organizations, academic institutions, political officials, advocacy groups, professional associations, skeptics, opponents, and staff of related or competing agencies are examples of people who might be involved.
  • The evaluation's primary users: for example, those in a position to do or decide something about the program.

Describe the program
Program descriptions serve as the foundation for all subsequent evaluation choices. The description permits comparisons with other programs and attempts to link program components to their effects. Furthermore, stakeholders may have varying perspectives on the program's goals and objectives. Without consensus on the program definition, evaluations are likely to be of limited utility. Negotiating with stakeholders to develop a clear and logical description can sometimes pay off before data is available to assess program efficacy. The following are elements to include in a program description:
  • Need: What problem or opportunity does the program address? Who is affected?
  • Expected effects: What are the expected changes as a result of the program? What must the program achieve in order to be deemed successful?
  • Activities: What are the program's steps, methods, or actions for bringing about change?
  • Resources: What resources (time, talent, technology, information, money, etc.) are available to carry out program activities?
  • Stage of development: How mature is the program (i.e., is it primarily in the planning, implementation, or effects stage)?
  • Context: What is the environment in which the program operates? How might environmental factors (e.g., history, geography, politics, social and economic conditions, and secular trends) influence the program and its evaluation?
  • Logic model: What is the likely sequence of events that will bring about change? How do program elements fit together to form a plausible picture of how the program is supposed to work? (A brief illustration follows this list.)
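As a concrete illustration of the last element, the sketch below represents a logic model as a simple data structure in Python. The program (a community immunization outreach effort), the field names, and the example entries are hypothetical and purely illustrative; they are not part of CDC's framework.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal representation of a program logic model."""
    inputs: list = field(default_factory=list)       # resources invested in the program
    activities: list = field(default_factory=list)   # what the program does with those resources
    outputs: list = field(default_factory=list)      # direct products of the activities
    outcomes: list = field(default_factory=list)     # short- and long-term changes expected

    def describe(self) -> str:
        """Render the expected sequence of events as a readable chain."""
        stages = ["inputs", "activities", "outputs", "outcomes"]
        return "\n".join(f"{stage}: {', '.join(getattr(self, stage))}" for stage in stages)

# Hypothetical example: a community immunization outreach program.
model = LogicModel(
    inputs=["nursing staff", "vaccine supply", "clinic space"],
    activities=["run mobile clinics", "send reminder messages"],
    outputs=["clinics held", "doses administered"],
    outcomes=["higher immunization coverage", "fewer vaccine-preventable cases"],
)
print(model.describe())

Laying the model out this way makes it easier for stakeholders to spot gaps between what the program does and the changes it is expected to produce.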

Focus the evaluation design
The evaluation's direction and process must focus on the issues of greatest concern to stakeholders while making the best use of time and resources. Not all design options are equally effective at meeting stakeholders' information needs. Even if better approaches become apparent after data collection begins, changing procedures may be difficult or impossible. A well-planned evaluation anticipates its intended uses and adopts a design that is most likely to be useful, feasible, ethical, and accurate. Things to consider when focusing an evaluation include the following (a brief sketch of how these choices might be recorded appears after the list):
  • Purpose: What is the purpose of the evaluation (for example, to acquire insight, modify practice, analyze impacts, or affect participants)?
  • Users: Who will receive the evaluation findings and who will benefit from participating in the evaluation?
  • Uses: What will each user do with the data or experiences gathered throughout the evaluation?
  • Questions: What are the questions that the evaluation should address? What boundaries will be created in order to provide the evaluation a viable focus? What level of study (a system of related programs, a single program, a project inside a program, a subcomponent or process within a project) is appropriate?
  • Methods: What techniques will give the necessary information to answer stakeholders' queries (i.e., which research designs and data gathering procedures are most appropriate for the primary consumers, uses, and questions)? Is it possible to combine methods to get around the limits of a single method?
  • Agreements: How will the evaluation plan be carried out using the resources available? What obligations and functions have the stakeholders agreed to? What measures are in place to ensure that requirements, particularly those pertaining to the protection of human subjects, are met?
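One way to keep the design focused is to record, for each evaluation question, who will use the answer, for what purpose, and with which methods, and then check that nothing is left dangling. The Python sketch below illustrates that bookkeeping with hypothetical questions, users, and methods; it is not a prescribed CDC format.

# Each entry links an evaluation question to its intended users, intended use, and methods.
# The questions, users, and methods below are hypothetical examples.
evaluation_plan = [
    {
        "question": "Did reminder messages increase clinic attendance?",
        "users": ["program manager"],
        "use": "decide whether to expand the reminder system",
        "methods": ["clinic attendance records", "before/after comparison"],
    },
    {
        "question": "How do families experience the mobile clinics?",
        "users": ["outreach staff", "community advisory board"],
        "use": "adjust clinic scheduling and outreach materials",
        "methods": [],  # not yet assigned -- flagged by the check below
    },
]

# Flag questions that lack an intended user, a stated use, or a data collection method,
# since such questions cannot yet guide a useful and feasible evaluation.
for item in evaluation_plan:
    missing = [key for key in ("users", "use", "methods") if not item[key]]
    if missing:
        print(f"Incomplete focus for {item['question']!r}: missing {', '.join(missing)}")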
Gather credible evidence
Those conducting an evaluation should strive to gather evidence that conveys a well-rounded picture of the program and that the evaluation's primary users will regard as credible. Stakeholders should see the information as believable and relevant to their questions. Such judgments depend on the evaluation questions being asked and the motives behind them. Credible evidence strengthens evaluation conclusions and the recommendations that follow from them. Although every source of data has limitations, using multiple procedures for gathering, analyzing, and interpreting data can increase an evaluation's overall credibility. When stakeholders help define and gather data that they consider credible, they are more likely to accept the evaluation's conclusions and act on its recommendations. The following aspects of evidence gathering typically affect how credible an evaluation is perceived to be:
  • Indicators: How will general concepts about the program, its context, and its expected outcomes be translated into specific, interpretable measures? Will the chosen indicators provide systematic, valid, and reliable data for the intended uses? (A worked example follows this list.)
  • Sources: What sources (people, documents, observations) will be used to gather evidence? How will multiple sources be integrated, particularly those that provide narrative and numerical data?
  • Quality: Is the information dependable (i.e., reliable, valid, and informative for its intended use)?
  • Quantity: How much information is required? What level of confidence or precision can it support? Is there sufficient power to detect effects? Is the burden placed on respondents reasonable?
  • Logistics: What methods, schedules, and physical infrastructure will be employed to collect and handle evidence?
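To make the idea of an indicator concrete, the sketch below translates a general concept ("children reached by the program are up to date on their immunizations") into a specific, interpretable measure computed from hypothetical records, along with a rough sense of precision, which speaks to the quantity question above. The data and sample size are invented for illustration only.

import math

# Hypothetical records: one entry per child reached by the program,
# with a flag indicating whether their immunizations are up to date.
records = [
    {"child_id": i, "up_to_date": i % 10 != 0}  # invented data: roughly 90% up to date
    for i in range(1, 251)
]

# Indicator: proportion of children reached who are up to date on immunizations.
n = len(records)
covered = sum(r["up_to_date"] for r in records)
coverage = covered / n

# A crude 95% confidence interval shows how much precision this sample size supports.
half_width = 1.96 * math.sqrt(coverage * (1 - coverage) / n)

print(f"Coverage indicator: {coverage:.1%} "
      f"(95% CI roughly {coverage - half_width:.1%} to {coverage + half_width:.1%}, n={n})")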
Justify conclusions
Evaluation conclusions are justified when they are linked to the evidence gathered and judged against the values or standards agreed upon by the stakeholders. Stakeholders must agree that the conclusions are justified before they will trust the evaluation results. Conclusions are justified on the basis of evidence using the following five elements (a sketch of the comparison against standards appears after the list):
  • Standards: Which stakeholder values serve as the foundation for making judgments? What type or level of performance must the program reach for it to be judged a success?
  • Analysis and Synthesis: What methods will be utilized to assess and summarize the findings of the evaluation?
  • Interpretation: What do the findings imply (i.e., what practical implications do they have)?
  • Judgment: Based on the available information and the chosen standards, what assertions about the program's merit, worth, or relevance are justified?
  • Recommendations: What actions should be considered as a result of the evaluation? [Note: Making recommendations differs from making judgments in that it requires a thorough understanding of the context in which programmatic decisions are made.]
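The step from evidence to judgment can be made explicit by writing down the agreed-upon standards and comparing each indicator against them, as in the Python sketch below. The indicators, observed values, and thresholds are hypothetical; in practice the standards come out of negotiation with stakeholders.

# Agreed-upon standards: for each indicator, the performance level stakeholders decided
# the program must reach to be judged successful. All values here are hypothetical.
standards = {
    "immunization_coverage": 0.85,   # at least 85% of children reached are up to date
    "clinics_held_per_month": 4,     # at least 4 mobile clinics per month
}

# Evidence gathered during the evaluation (hypothetical results).
evidence = {
    "immunization_coverage": 0.90,
    "clinics_held_per_month": 3,
}

# Judgment: compare the evidence against each standard and state which standards were met.
for indicator, threshold in standards.items():
    value = evidence[indicator]
    verdict = "meets" if value >= threshold else "falls short of"
    print(f"{indicator}: observed value {value} {verdict} the agreed standard of {threshold}")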
Ensure use and share lessons learned
It is naive to assume that lessons learned during an evaluation will automatically translate into informed decision-making and appropriate action. Deliberate effort is required to ensure that the evaluation's processes and findings are used and disseminated appropriately. Preparing for use involves strategic thinking and continued vigilance, both of which begin early in the stakeholder engagement step and continue throughout the evaluation. The following five elements must be present in order to ensure use:
  • Design: Is the evaluation planned from the outset to accomplish the primary users' goals?
  • Preparation: Have steps been taken to rehearse the eventual use of the evaluation findings? What have stakeholders done to ensure that new understanding is translated into appropriate action?
  • Feedback: What kind of communication will take place between the evaluation's participants? Is there a trusting environment among stakeholders?
  • Follow-up: How will users' technical and emotional needs be supported? What will prevent lessons learned from being forgotten or ignored in the midst of difficult or politically charged decisions? What safeguards are in place to prevent the evaluation from being misused?
  • Dissemination: How will the evaluation's methodologies or lessons learned be disseminated to key audiences in a timely, objective, and consistent manner? What methods will be used to adapt reports to different audiences?
