Monitoring, Evaluation, Research, Learning and Adapting
MERLA is the intentional application of results-focused monitoring, evaluation, and research tools and methodologies to inform continuous evidence-based learning that is purposefully used to adapt program and policy decision-making. It is founded on a desire for more meaningful and sustained results.
In our experience, this requires coordinated and collaborative development efforts that continuously test promising and innovative approaches and strive for improvements grounded in evidence. Through a MERLA approach, RTI guides performance management, helping donors, governments, program staff, partners, and other stakeholders meet rigorous and dynamic data and knowledge needs.
MERLA 101 Course
The MERLA Course is geared towards international development programs. It is designed for individuals with varying levels of MERLA experience and teaches skills important for all levels of project management and technical contribution. It consists of five modules (linked below), covering a range of topics, including:
- The purpose and different types of MERLA
- How MERLA integrates into the full project cycle
- Developing data collection instruments and managing data for performance monitoring and evaluation
- Designing performance indicators
- Developing a program logic (Theory of Change, Logical Framework, Results Framework)
- Understanding the costs of implementing MERLA
Introduction and Basic Concepts: Purpose of MERLA; differences between monitoring, evaluation, and learning activities; how M&E leads to learning and adapting; how MERLA integrates into the full project cycle.
Fundamentals of Project Design: How MERLA concepts fit into the project design phase; identify client MERLA requirements; develop a program logic (Theory of Change, Logical Framework, Results Framework); identify and analyze assumptions and risks.
MEL Plans and Indicator Development: Identify the components of a comprehensive monitoring, evaluation and learning plan; develop S.M.A.R.T. performance indicators; identify steps and best practices for establishing baseline values and targets for indicators; understand the costs of implementing MERLA.
Collecting and Managing Data for Performance Monitoring: Organize indicators into a ‘data map’ to plan for efficient data collection; develop data collection instruments; technology for, and management of, data collection.
Evaluations: Differentiate between different types of evaluations; distinguish between MERLA components and activities; plan for different types of evaluations depending on project needs; choose among evaluation methods for data collection purposes.
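To make the indicator and data-map concepts from the third and fourth modules above more concrete, here is a minimal, hypothetical sketch in Python of how a single performance indicator and its data-map fields might be represented. The field names are illustrative only and are not an RTI template.

```python
# Illustrative sketch only: one way to record a performance indicator together
# with the "data map" details used to plan its collection. All names are
# hypothetical, not drawn from the MERLA 101 Course materials.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    definition: str            # precise numerator/denominator wording
    baseline: float
    target: float
    data_source: str           # where the data come from
    collection_frequency: str  # how often they are collected
    responsible: str           # who collects and reports

example = Indicator(
    name="Percent of trained health workers applying the new protocol",
    definition="Numerator: workers observed applying protocol; denominator: workers trained",
    baseline=0.0,
    target=80.0,
    data_source="Supervision checklist",
    collection_frequency="Quarterly",
    responsible="Project M&E officer",
)
print(example.name, f"(target = {example.target}%)")
```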
Have a Question?
We're happy to help answer any questions or listen to your feedback regarding the MERLA 101 Course.
Watch Recording of Past Webinar on the MERLA 101 Course.
Vaccines (Basel)
Monitoring and Evaluation of National Vaccination Implementation: A Scoping Review of How Frameworks and Indicators Are Used in the Public Health Literature
Manar Marzouk
1 Saw Swee Hock School of Public Health, National University of Singapore, National University Health System, 12 Science Drive 2, Singapore 117549, Singapore
Maryam Omar
2 Barts Health NHS Trust, Newham University Hospital, London E13 8SL, UK
Kanchanok Sirison
3 Health Intervention and Technology Assessment Program, Ministry of Public Health, Nonthaburi 11000, Thailand
Aparna Ananthakrishnan
Anna Durrance-Bagale
4 London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, London WC1H 9SH, UK
Chatkamol Pheerapanyawaranun
Charatpol Porncharoen, Nopphadol Pimsarn, Sze Tung Lam, Mengieng Ung, Zeenathnisa Mougammadou Aribou, Saudamini V. Dabak, Wanrudee Isaranuwatchai
5 Institute of Health Policy, Management and Evaluation, University of Toronto, 155 College St., Toronto, ON M5T 3M6, Canada
Natasha Howard
Associated Data
Please contact the corresponding author for data requests.
An effective Monitoring and Evaluation (M&E) framework helps vaccination programme managers determine progress and effectiveness for agreed indicators against clear benchmarks and targets. We aimed to identify the literature on M&E frameworks and indicators used in national vaccination programmes and synthesise approaches and lessons to inform development of future frameworks. We conducted a scoping review using Arksey and O’Malley’s six-stage framework to identify and synthesise sources on monitoring or evaluation of national vaccination implementation that described a framework or indicators. The findings were summarised thematically. We included 43 eligible sources of 4291 screened. Most (95%) were in English and discussed high-income (51%) or middle-income (30%) settings, with 13 in Europe (30%), 10 in Asia-Pacific (23%), nine in Africa (21%), and eight in the Americas (19%), while three crossed regions. Only five (12%) specified the use of an M&E framework. Most (32/43; 74%) explicitly or implicitly included vaccine coverage indicators, followed by 12 including operational (28%), five including clinical (12%), and two including cost indicators (5%). The use of M&E frameworks was seldom explicit or clearly defined in our sources, with indicators rarely fully defined or benchmarked against targets. Sources focused on ways to improve vaccination programmes without explicitly considering ways to improve assessment. Literature on M&E framework and indicator use in national vaccination programmes is limited and focused on routine childhood vaccination. Therefore, documentation of more experiences and lessons is needed to better inform vaccination M&E beyond childhood.
1. Introduction
Improving national vaccination programme implementation requires collection and analysis of data on relevant vaccination components. Monitoring and evaluation (M&E) or more recent Monitoring, Evaluation, Accountability and Learning (MEAL) frameworks [ 1 ] support decision-making by consolidating available information on agreed indicators, benchmarked targets, and methods to collect, analyse, and report necessary data to strengthen vaccination programmes [ 2 ]. M&E frameworks are usually aggregated into pre-, peri-, and post-vaccination phases, and include elements of vaccine procurement, transport, storage, staff training, communication, coverage, adverse effects, and identification of successes and failures [ 3 ].
Planning effective M&E for national rollout of new vaccines, such as for COVID-19, can be strengthened by learning from previous vaccination experiences, particularly those targeted beyond routine childhood populations [ 4 ]. A virtual expert roundtable, hosted by the Saw Swee Hock School of Public Health in January 2021, identified key M&E framework components to inform COVID-19 vaccination. These included best-practice guidelines, particularly from the World Health Organization (WHO) [ 5 ], but few documented lessons or experiences of using M&E frameworks or of selecting and appropriately benchmarking indicators within vaccination programme M&E. Practical details of these experiences could help governments and technical partners in planning, implementing, and assessing M&E for new vaccine implementation. Lessons learnt from assessment experiences worldwide could help inform national efforts to improve routine and vaccine-specific data collection and analyses and support vaccination programme strengthening, particularly in resource-constrained settings, which aligns with the Immunization Agenda 2030 goal to make vaccination available to everyone, everywhere [ 6 ].
We thus aimed to synthesise the literature on M&E frameworks and indicators used for vaccination implementation. The objectives were to: (i) summarise the scope of existing primary literature on M&E frameworks or indicators used; (ii) identify any useful indicators to inform development or adaptation of M&E frameworks; and (iii) synthesise lessons to inform M&E framework development for national rollouts of vaccination.
2. Materials and Methods
2.1. Study Design
We conducted a scoping literature review using Arksey and O’Malley’s six-stage framework with Levac and colleagues’ revisions and Khalil and colleagues’ refinements [ 7 , 8 , 9 ]. We selected this method because, as Munn et al. suggested, scoping reviews are useful to map and identify evidence in emerging topics and help identify key concepts and gaps [ 10 ].
2.2. Identifying the Research Question (Stage 1)
Our research question was: “What are the scope (i.e., extent, distribution, nature), main findings, and key lessons of literature on M&E frameworks and indicators for vaccination implementation?” Table 1 provides our study definitions.
Study definitions.
Terms | Definitions |
---|---|
Evaluation | The systematic assessment of an activity, project, programme, strategy, policy, topic, theme, sector, operational area or institution’s performance to determine its relevance, effectiveness, efficiency, impact, and/or sustainability [ ]. |
Framework | Shows how the programme or activity is intended to work by setting out the components of the initiative and the order of the steps needed to achieve the desired results. A framework increases understanding of the programme’s goals and objectives, defines the relationships between factors key to implementation, and articulates the internal and external elements that could affect the programme’s success. |
Immunisation | A process by which a person becomes protected against a disease through vaccination or recovery from infection [ ]. |
Monitoring | The systematic process of collecting, analysing, and using information to track progress toward objectives and guide management decisions [ ]. |
M&E framework | A matrix compiling goal/purpose, outcomes, and outputs, along with the defined and measurable indicators with specified targets/thresholds necessary to achieve success. |
Vaccination | The management and administration of vaccines pre-/peri-/post-vaccination to provide people with the most effective immunisation [ ]. |
Vaccine | A product, usually administered through needle injection, by mouth, or sprayed into the nose, that stimulates a person’s immune system to produce immunity to a specific disease, protecting the person from that disease [ ]. |
2.3. Identifying Relevant Sources (Stage 2)
To ensure breadth, we included multiple electronic databases and websites. First, we searched five databases systematically (i.e., Medline, Embase, Web of Science, Scopus, Eldis). Second, we searched eight relevant websites purposively (i.e., WHO, Australian Department of Health, National Advisory Committee on Immunization Canada, India Ministry of Health and Family Welfare, Philippines Department of Health, Singapore Ministry of Health, UK Joint Committee on Vaccination and Immunisation, US Centers for Disease Control and Prevention). For both databases and websites, we combined search terms for ‘vaccine’ (i.e., vaccin*, immuniz*, immunis*) AND ‘monitoring’ AND ‘evaluation’ (i.e., monitor*, evaluat*, M&E), with related terminology adapted to subject headings.
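For illustration only, one way to compose the Boolean combination described above is sketched below; the exact syntax and subject headings differ by database, and the published search strings are not reproduced here.

```python
# Illustrative sketch: assembling the truncated search terms described above
# into a generic Boolean query ('vaccine' terms AND monitoring AND evaluation).
def or_block(terms):
    """Join synonym terms with OR inside parentheses."""
    return "(" + " OR ".join(terms) + ")"

vaccine = or_block(["vaccin*", "immuniz*", "immunis*"])
monitoring = "monitor*"
evaluation = or_block(["evaluat*", '"M&E"'])

query = f"{vaccine} AND {monitoring} AND {evaluation}"
print(query)
# (vaccin* OR immuniz* OR immunis*) AND monitor* AND (evaluat* OR "M&E")
```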
2.4. Selecting Sources (Stage 3)
We established eligibility criteria based on our research question and discussion with experts ( Table 2 ). We included primary research sources focused on vaccine implementation in national settings and including content on M&E frameworks or indicators. Thus, we also included conference abstracts, commentaries, book chapters, and reviews that provided research data not already included in a research article. We did not exclude on language (if an English abstract was accessible), publication year, study design, or participants.
Eligibility criteria.
Criteria covered: (1) context, (2) topic, (3) outcomes, (4) source type, (5) time-period, (6) language, (7) study design, and (8) participants.
We screened 4288 potential sources using Covidence and EndNote software. After removing 2089 duplicates, all authors first screened 2199 titles and abstracts against eligibility criteria and excluded 1995 ineligible sources. We then screened 204 full texts, excluding another 163 ineligible sources. We added two eligible website sources to 41 eligible database sources, thus including 43 in total.
2.5. Extracting (Charting) Data (Stage 4)
We extracted data from 43 sources into Excel using the following headings: lead author, publication year, source type (i.e., article, abstract, book, report), language, country/ies included, aim, study information (i.e., design, participants, data collection, analysis), and findings (i.e., M&E tool/framework used, indicators included, lessons described). Indicators were subcategorised as coverage (i.e., targeting, population estimation, equity, disaggregation, uptake/coverage, attitude/behaviours), operational (i.e., health service capacity, human resources, vaccine supply chain (e.g., availability, allocation, transport, storage, delivery, wastage, disposal)), clinical (i.e., vaccine safety, vaccine demand), or others (i.e., costs, additional indicators) as described in the WHO-UNICEF monitoring framework for COVID-19 vaccines [ 12 ].
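The charting structure and the WHO-UNICEF-based indicator categories described above can be illustrated as follows. This is only a hypothetical sketch in Python; the authors' actual extraction template was an Excel sheet and is not reproduced here.

```python
# Illustrative sketch of the charting headings and indicator sub-categories
# described in the text; field and category names follow the prose, but this
# is not the authors' actual template.
extraction_fields = [
    "lead_author", "publication_year", "source_type", "language",
    "countries", "aim", "study_design", "participants",
    "data_collection", "analysis", "me_framework_used",
    "indicators_included", "lessons_described",
]

indicator_categories = {
    "coverage": ["targeting", "population estimation", "equity",
                 "disaggregation", "uptake/coverage", "attitude/behaviours"],
    "operational": ["health service capacity", "human resources",
                    "vaccine supply chain"],
    "clinical": ["vaccine safety", "vaccine demand"],
    "other": ["costs", "additional indicators"],
}

def categorise(indicator_name: str) -> str:
    """Return the category whose sub-indicators mention the given term."""
    for category, items in indicator_categories.items():
        if any(indicator_name.lower() in item for item in items):
            return category
    return "other"

print(categorise("equity"))  # coverage
```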
2.6. Collating and Summarising Findings (Stage 5)
First, we summarised the extent (i.e., database/website origin, publication year), distribution (i.e., publication language, countries included), and nature (i.e., type, topic, study design, outcomes included) of sources. Second, we synthesised data thematically under framework and indicator headings guided by our research question and stakeholder consultation.
2.7. Consulting Stakeholders (Stage 6)
We discussed initial review findings with 16 high-level stakeholders in the Thai Ministry of Public Health in December 2021. Stakeholders were asked how findings could be made most useful and for any additional potential sources (none were identified). Inputs informed final synthesis.
3. Results

3.1. Scope of the Literature
Figure 1 provides the PRISMA flow diagram of the 43 eligible sources of 4291 screened. Databases provided 4288 (i.e., 1122 in Medline, 1746 in Embase, 600 in Web of Science, 820 in Scopus, 0 in Eldis) and the UK Joint Committee on Vaccination and Immunisation website provided two [ 13 , 14 ].
PRISMA flow diagram.
Figure 2 shows the extent of sources by publication year. None were found before 1987 or in 1991–2005. From 2006, publications slowly increased, with two notable increases in 2010 and 2012, to a peak of nine in 2018–2019, and then decreased.
Number of sources by publication year.
Forty single-country sources were distributed across 26 countries, while three multi-country sources included Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, United Arab Emirates [ 15 ]; China, Indonesia, Viet Nam [ 16 ]; and Bangladesh, Mozambique, Uganda, and Zambia [ 17 ]. Over half of sources described high-income settings (22/43; 51%), while 13 (30%) described middle-income settings and only 8 (19%) described low-income settings.
Most (41/43; 95%) were published in English, with one each in French and Spanish. Most (40/43; 93%) were journal articles, and three (7%) were conference abstracts. Study designs and methodology were often unclear, but 34 sources (79%) appeared to use primarily quantitative, six (14%) used mixed-method, and three (7%) used qualitative approaches. Methods were somewhat better described and included surveys, document analysis, observations, interviews, and focus group discussions.
3.2. Synthesised Findings
We synthesised findings under: (i) description of any framework and how it was used; (ii) coverage indicators; (iii) operational indicators; and (iv) clinical indicators. As most sources did not detail the specific frameworks or indicators used, we instead reported on ways they were used and any lessons within each sub-section ( Table 3 ).
Synthesised findings by source.
Lead Author, Year | Type | Country/ies | Approach | M&E Framework | Coverage Indicators | Operational Indicators | Clinical Indicators | Lessons Learnt | ||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Targeting/Estimation | Equity | Uptake | Service Capacity | Vaccine Supply | Human Resources | M&E Costs | Vaccine Safety | Vaccine Demand | ||||||
Aceituno, 2017 | Article | Bolivia | Quantitative | X | X | X | X | |||||||
Al Awaidy, 2020 | Article | multi (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, United Arab Emirates) | Quantitative | X | X | X | ||||||||
Alam, 2018 | Article | Bangladesh | Quantitative | X | X | |||||||||
Ashish, 2017 | Article | Nepal | Quantitative | X | X | |||||||||
Bawa, 2019 | Article | Nigeria | Quantitative | X | X | X | ||||||||
Beard, 2015 | Article | Australia | Quantitative | X | X | X | X | |||||||
Bednarczyk, 2019 | Article | US | Quantitative | X | ||||||||||
Bernal, 2021 | Article | UK | Quantitative | X | ||||||||||
Bhatnagar, 2016 | Article | India | Quantitative | X | ||||||||||
Bianco, 2012 | Abstract | Italy | Quantitative | X | X | |||||||||
Carrico, 2014 | Article | US | Mixed | X | X | |||||||||
Checchi, 2019 | Article | UK | Quantitative | X | X | X | ||||||||
Cherif, 2018 | Article | Ivory Coast | Quantitative | X | X | |||||||||
Cutts, 1988 | Article | Mozambique | Quantitative | X | ||||||||||
D’Ancona, 2018 | Article | Italy | Quantitative | X | X | |||||||||
Dang, 2020 | Article | Viet Nam | Quantitative | X | X | |||||||||
Edelstein, 2019 | Article | UK | Quantitative | X | X | |||||||||
Geoghegan, 2021 | Abstract | Ireland | Qualitative | X | ||||||||||
Hall, 2021 | Article | UK | Quantitative | X | X | |||||||||
Hipgrave, 2006 | Article | multi (China, Indonesia, Viet Nam) | Quantitative | X | ||||||||||
Hutubessy, 2012 | Article | Tanzania | Quantitative | X | X | |||||||||
Ijsselmuiden, 1987 | Article | South Africa | Mixed | X | X | |||||||||
Imoukhuede, 2007 | Article | Gambia | Quantitative | X | ||||||||||
Lacapere, 2011 | Article | Haiti | Quantitative | X | X | X | ||||||||
Lanata, 1990 | Article | Peru | Quantitative | X | X | |||||||||
Loughlin, 2012 | Article | US | Quantitative | X | X | |||||||||
Maina, 2017 | Article | Kenya | Mixed | X | X | |||||||||
Manyazewal, 2018 | Article | Ethiopia | Mixed | X | X | X | X | X | X | X | ||||
McCarthy, 2013 | Article | US | Quantitative | X | X | |||||||||
Muhamad, 2018 | Article | Malaysia | Quantitative | X | X | |||||||||
Ozdemir, 2010 | Article | Turkey | Quantitative | X | X | X | ||||||||
Raji, 2019 | Abstract | Nigeria | Qualitative | X | ||||||||||
Richard, 2008 | Article | Switzerland | Quantitative | X | X | |||||||||
Sarker, 2019 | Article | Bangladesh | Quantitative | X | X | |||||||||
Soi, 2020 | Article | multi (Bangladesh, Mozambique, Uganda, Zambia) | Quantitative | X | X | |||||||||
Tanton, 2017 | Article | UK | Mixed | X | X | |||||||||
Tuells, 2010 | Article | Spain | Qualitative | X | ||||||||||
van Wijhe, 2018 | Article | Netherlands | Quantitative | X | X | X | ||||||||
Vivekanandan, 2012 | Article | India | Quantitative | X | ||||||||||
Walker, 2014 | Article | Kenya | Quantitative | X | X | X | X | X | ||||||
Ward, 2017 | Article | Uganda | Quantitative | X | ||||||||||
Watson, 2010 | Article | US | Quantitative | X | ||||||||||
Wattiaux, 2016 | Article | Australia | Mixed | X | X | X | ||||||||
Totals | 5 | 13 | 3 | 16 | 2 | 3 | 5 | 2 | 5 | 0 | 39 |
3.2.1. Frameworks
Five (12%) sources explicitly described using any type of M&E framework, while the rest either did not use one or were unclear about whether or how one was used and what it included. Thus, framework usage was minimally described and heterogeneous, depending on requirements and objectives. For example, Al Awaidy et al. used the WHO M&E framework for hepatitis B in reviewing vaccination in several Gulf countries [ 15 ]. Dang et al. used the mHealth Assessment and Planning for Scale (MAPS) toolkit to assess scale-up of an electronic immunisation registry in Viet Nam [ 18 ]. Hutubessy et al. used the Cervical Cancer Prevention and Control Costing (C4P) tool to determine cost-effectiveness of HPV vaccination in Tanzania [ 19 ]. Ijsselmuiden et al. used the WHO Expanded Programme on Immunisation framework to determine vaccination coverage and cold chain maintenance in South Africa [ 20 ]. Non-WHO M&E frameworks included Manyazewal and colleagues’ use of the Plan-Do-Check-Act (PDCA) cycle with continuous quality improvement, within a prospective quasi-experimental interrupted time-series design, to evaluate the effectiveness of a continuous quality improvement intervention for a vaccination programme in Ethiopia [ 21 ], and Aceituno and colleagues’ use of a logical framework to determine participant engagement in Bolivia [ 22 ].
Four (9%) sources did not specify the use of a formal framework, instead describing an assessment process or method (e.g., use of registers, sampling approaches, surveys). For example, Lanata et al. used lot quality assurance sampling to determine vaccine coverage in Peru [ 23 ], and Tuells et al. used a WHO process for cold chain temperature monitoring in Spain [ 24 ]. Sources seldom explicitly distinguished between routine data collection sources (e.g., national infectious disease surveillance), dedicated disease-specific registers (e.g., for rabies, tetanus, HIV), general surveys (e.g., national-level demographic and health surveys or multiple-indicator cluster surveys), or vaccine-specific (e.g., post-introduction evaluation surveys).
General lessons included incorporating national staff in monitoring meetings to improve M&E ownership and accountability [ 22 ], improving contextual dynamics tailored to the specific vaccination programme to improve coverage [ 21 ], and multi-disciplinary co-production and inclusion of M&E staff during decision-making to improve outcomes [ 17 ].
3.2.2. Coverage Indicators
Most (32/43; 74%) described elements of target population estimation, equity, and uptake, primarily of routine childhood vaccines. Thirteen (30%) included targeting or population estimation, although most did not describe indicators in depth and how they were used differed by setting and resource availability. For example, Lacapere et al. roughly estimated measles-rubella vaccination coverage by dividing the number of vaccine doses given by the estimated population for each district in Haiti [ 25 ], while D’Ancona et al. used an immunisation register to estimate coverage in Italy [ 26 ]. Bianco et al. determined the number of foreign workers eligible for vaccination in Italy through screening [ 27 ]. Manyazewal et al. used WHO Reaching Every Community (REC) mapping of community locations and characteristics to help estimate coverage targets for five vaccines in Ethiopia [ 21 ].
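As a worked illustration of the crude estimate described by Lacapere et al. (the figures below are invented, not taken from the Haiti study), district coverage is simply doses administered divided by the estimated target population:

\[
\text{coverage}_{d} = \frac{\text{doses administered}_{d}}{\text{estimated target population}_{d}},
\qquad \text{e.g.}\quad \frac{8500}{10\,000} = 0.85 = 85\%.
\]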
Only three (7%) included indicators to examine equity or disaggregate data. For example, Sarker et al. compared immunisation coverage among children aged 12–59 months in Bangladesh across socioeconomic and demographic factors, finding disparities by parental education and mothers’ access to media [ 28 ]. Wattiaux et al. considered equity aspects of vaccination rollout by comparing hepatitis B immunisation incidence between indigenous and non-indigenous Australians [ 29 ]. Geoghegan et al. examined whether women received COVID-19 vaccine when pregnant or received routinely recommended vaccines in pregnancy in Ireland [ 30 ].
Sixteen sources (37%) discussed uptake indicators. Most focused on general target populations with minimal discussion on vaccine coverage for migrants and refugees and none on the elderly, people with disabilities, or other potentially vulnerable groups. However, Bawa et al. estimated oral polio vaccination coverage among underserved hard-to-reach communities in Nigeria, defined as those with difficult terrain, located along local or state borders, with scattered households, nomadic, water-logged/riverine, or conflict-affected, and thus requiring outreach services [ 31 ]. More generally, Lacapere et al. calculated numbers of municipalities that reported achieving 95% vaccination coverage for measles-rubella vaccine in Haiti [ 25 ]. Only two sources (5%) described coverage indicators beyond the second year of life. Muhamad et al. calculated total HPV vaccine doses delivered through school-based outreach to evaluate the effectiveness of free vaccination for schoolgirls in Malaysia [ 32 ], while Beard et al. estimated 40–50% coverage of pertussis vaccination among pregnant women in Australia, based on number of births and consent forms returned centrally [ 33 ].
Coverage lessons were varied. Aceituno et al. described challenges of collecting high-quality data in resource-constrained settings [ 22 ]. Soi et al. noted that using feedback loops to guide policy decisions must be pragmatic, as they are often too slow; for example, Gavi’s HPV demonstration project policy required countries to demonstrate adequate coverage before applying for rollout funding, which could take years [ 17 ]. Alam et al. found that automation of EPI scheduling can improve coverage and enhance monitoring, particularly in remote areas [ 34 ]. Edelstein et al. found that data triangulation and inclusion of routine data, in countries with good national records, can help identify vulnerable groups and improve monitoring of vaccine coverage [ 35 ]. Lanata et al. found lot quality assurance sampling helped identify small areas with poorer vaccination coverage in rural areas of Peru with dispersed populations, thus improving coverage and equity monitoring [ 23 ]. Aceituno et al. found staff understanding of cultural-linguistic context improved coverage and vaccination continuity in Bolivia [ 22 ].
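To illustrate the lot quality assurance sampling logic mentioned above, the sketch below applies a generic classification rule; the sample size and decision value are illustrative placeholders, not the parameters used by Lanata et al.

```python
# Minimal sketch of a lot quality assurance sampling (LQAS) classification rule.
# Sample size and decision value are hypothetical, for illustration only.
def classify_lot(vaccinated_flags, decision_value):
    """Flag a lot (e.g., one health area's sample of children) as below the
    coverage target if the number of unvaccinated children exceeds the
    pre-set decision value."""
    unvaccinated = sum(1 for v in vaccinated_flags if not v)
    return "below target" if unvaccinated > decision_value else "acceptable"

# Example: 19 children sampled in one area, 5 found unvaccinated, decision value 3
sample = [True] * 14 + [False] * 5
print(classify_lot(sample, decision_value=3))  # below target
```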
3.2.3. Operational Indicators
Only two (5%) sources mentioned health service capacity indicators. Manyazewal et al. assessed immunisation services availability, regular static immunisation services delivered, adequate outreach sites, catchment area mapped for immunisation, separate and adequate rooms for immunisation services and storing supplies, all planned outreach sessions conducted, health education on immunisation provided, and immunisation services availability in all catchment health posts in Ethiopia [ 21 ]. Walker et al. assessed surveillance feedback reports, timely reporting, and number of districts with populations not receiving immunisation services [ 36 ].
Three (7%) sources mentioned supply chain and logistics indicators. Walker et al. used cold chain and logistics data from facility inventory logs for routine immunisation to identify gaps in vaccine supplies and equipment, such as the number of facilities with insufficient supply of syringes and diluent, and number of facilities with inventory logs consistent with vaccine supply [ 36 ]. Hipgrave et al. reviewed evidence on thermostability of hepatitis B vaccine for pregnant women when stored outside the cold chain in China [ 16 ]. Özdemir et al. assessed cold chain storage and gaps for a hepatitis B vaccine in Turkey [ 37 ]. Manyazewal et al. assessed adequacy of fridge-tag 2 units for temperature monitoring, refrigerator spare parts, vaccine request and report forms, and inventory documents in Ethiopia [ 21 ].
Five (12%) sources mentioned human resource indicators. Hall et al. assessed the number of English health-workers vaccinated against COVID-19 stratified by dose, manufacturer, and day [ 13 ]. Cherif et al. assessed numbers of health-workers participating in vaccination activities, e.g., epidemiological surveillance, adverse event monitoring training, and supervisions in Abidjan [ 38 ]. Carrico et al. assessed numbers of states mandating health-worker vaccination in the US [ 39 ]. Manyazewal et al. assessed numbers of experts assigned for immunisation and numbers of immunisation focal persons to evaluate effectiveness of system-wide continuous quality improvement for national immunisation programme performance [ 21 ]. Walker et al. assessed numbers of supervisory visits conducted, documented in writing, surveillance guidelines observed, surveillance discussed at supervisory visits, and if an operational plan was observed [ 36 ].
Two (5%) sources included indicators for vaccination costing. Hutubessy et al. calculated the incremental costs to the health system of HPV vaccination for adolescent girls through schools, health facilities, and other outreach strategies in Tanzania [ 19 ]. Walker et al. measured the number of districts in Kenya with insufficient financial resources for key surveillance elements for acute flaccid paralysis [ 36 ].
Multiple sources discussed operational lessons. D’Ancona et al. used an observational survey to show that decentralised health systems such as in Italy could result in fragmented immunisation registries and information flow across regions [ 26 ]. Ward et al. described how data improvement teams, allocated to all districts, enhanced the quality of vaccine administrative data in Uganda by helping identify data inaccuracies and providing on-the-job data collection training [ 40 ]. Dang et al. used qualitative research to describe an approach to optimise vaccination information in Viet Nam, which included establishing a partnership between the Vietnamese Ministry of Health and mobile network operators [ 18 ]. Soi et al. described the importance of physically co-locating evaluators from different disciplinary backgrounds, and suggested that including evaluators in decision-making could enrich outcomes [ 17 ].
3.2.4. Clinical Indicators
Five (12%) sources described clinical indicators, primarily counting numbers of adverse events following immunisation (AEFI). For example, Aceituno et al. assessed monthly reporting of adverse events and severe adverse events, details of any deaths, reasons for all withdrawals, and infant and maternal death rates below Demographic and Health Survey rates for Bolivia [ 22 ]. Loughlin et al. conducted a post-marketing evaluation to assess the number of confirmed cases of intussusception or Kawasaki disease among infants who received Rotavirus vaccine in the US compared with historical cohort data from diphtheria-tetanus-acellular pertussis vaccination [ 41 ].
Lessons were relatively limited. Vivekanandan et al. described the positive role of health-workers in assessing vaccine safety indicators in India [ 42 ]. Cherif et al. similarly noted that improving AEFI system performance required improved health-worker training, data analysis, and community engagement [ 38 ]. Ijsselmuiden et al. suggested that vaccination targeting at-risk populations, such as for Hepatitis B, should ensure concomitant disease surveillance to reduce morbidity and mortality [ 20 ]. Similarly, Beard et al. suggested combining AEFI and syndromic surveillance in emergency departments to monitor numbers of pertussis vaccine adverse events and supplementing maternal influenza vaccination AEFI monitoring with mobile phone text messages in Australia [ 33 ].
4. Discussion
This initial review of the use of M&E frameworks and indicators in vaccination highlights the relatively limited literature on this topic. M&E frameworks are important for consolidating selected indicators and are described as essential in the Global Vaccine Action Plan (GVAP) [ 4 ], yet their use was seldom explicit or clearly defined in our sources. Most sources described assessment methods (e.g., survey, lot quality assurance) rather than the use of a formal M&E framework, suggesting overreliance on individual methods without the benefit of an overarching assessment framework. Similarly, while indicators were described more frequently, they were rarely fully defined or benchmarked against targets, and sources focused on ways to improve vaccination programmes without explicitly considering ways to improve assessment. Given that indicators and benchmarked targets are crucial to national vaccination programme M&E, these are noteworthy gaps.
Limited description of M&E framework or indicator use outside routine childhood vaccination was perhaps unsurprising, and some of the lessons identified in our review could inform development of monitoring or evaluation for COVID-19 vaccination, or other vaccines, beyond routine childhood populations [ 43 ]. It is worth reiterating that most sources described high- or middle-income settings, and none described the use of M&E frameworks or indicators in fragile or conflict-affected settings despite the risk of poorer routine vaccination coverage, weakened health system responses, and infectious disease outbreaks in these settings. For example, 14 million ‘zero-dose’ children, who did not receive an initial dose of required vaccines, live in conflict-affected African countries [ 44 , 45 ]. The limited documentation of equity indicators was unexpected given that improving equity in immunisation is both essential [ 46 ] and expected by donors such as Gavi [ 47 ], yet equity monitoring seemed limited and ill-defined. Historical and socio-cultural influences and biases can affect the effectiveness of data collection and assessment on equity, and thus vaccination programme success. For instance, if some high-risk cohorts are not considered acceptable or relevant and hence not recorded (e.g., men who have sex with men in Gulf states), accurate and detailed assessment becomes impossible [ 15 ]. M’Bangombe et al., for example, analysed historical data to augment current data on high-risk populations, expanding the cohort of those identified as being at risk for cholera in Malawi [ 48 ].
Another surprising gap was the limited documentation of operational indicators, given their importance in effective vaccination management. The Organisation for Economic Co-operation and Development (OECD) report on lessons from government evaluations of COVID-19 responses similarly highlighted gaps in cost and health system indicators [ 43 ]. Additionally, we found that sources used different types of routine and ad hoc data, including surveillance, disease registers, household surveys, and vaccine-specific surveys. While this is understandable, depending on setting and programming needs [ 49 ], justification was not always explicit.
Less surprisingly, our review showed a preference for quantitative assessment methods, with only three sources using qualitative and six using mixed methods. However, qualitative and mixed-method social science approaches offer deeper insights into how processes work and are particularly useful for equity analyses. For example, Dutta et al. interviewed vaccination decision-makers in India to examine the importance of engaging with communities to promote health equity, and found this required formulating policies and guidelines that clearly define community engagement and its related evaluation metrics [ 50 ]. Qualitative methods can also help amplify perspectives and groups that may be less visible, which can be particularly important in reaching zero-dose children [ 45 ].
What is often missing in evaluations is the impact of vaccination on the general population, particularly for lower efficacy vaccines (e.g., against malaria, cholera, or influenza) that have relatively low demographic impacts even with high vaccine coverage. It may be worth exploring this demographic impact further, as was observed for smallpox vaccination [ 51 , 52 ]. Therefore, further documentation of assessment methods, experiences, and lessons appears necessary to expand the evidence base and help inform ongoing and future vaccination assessment.
Several potential limitations should be considered. First, while we included five databases and eight websites, we may still have missed relevant sources. It is likely that much of the research on this topic remains unpublished, as evaluations conducted by non-academic bodies (e.g., government, consultants) may not be in the public domain, though sites such as bioRxiv could be useful for manuscripts awaiting peer review. Second, included sources were not assessed for quality, as the purpose was to scope existing literature, and this would have eliminated too many documents. Third, we excluded sources on vaccine development, so may have missed some that discussed vaccine safety. Fourth, many sources only assessed one or more components of the vaccination programme (e.g., financing, equity, personnel), and thus we did not attempt to make direct comparisons of assessments. Fifth, methods were often insufficiently described, so we chose to categorise broadly as “quantitative, qualitative, mixed-methods” approaches rather than trying to provide more detail. Sixth, we focused on the public health literature as most likely to contain M&E frameworks and indicators, rather than conducting a broader search of various social science literatures. We thus may have missed some qualitative frameworks or indicators, e.g., for vaccine hesitancy. Finally, we chose not to include MEAL framework components for advocacy or learning, which could be relevant for future research.
Overall, our review identified minimal literature describing M&E frameworks or indicators for use in vaccination programme implementation. Numbers of relevant publications increased during the past decade, particularly after the 2012 Middle East Respiratory Syndrome epidemic, but numbers are still small and focused on high- and middle-income countries. Further research and documentation are therefore needed to identify additional public health lessons.
Acknowledgments
We wish to thank Russell Burke, Assistant Librarian at the London School of Hygiene and Tropical Medicine, for invaluable help in developing database search structures and Sirikorn Sujinnaprum for helping with title/abstract screening.
Study funding was provided by the Health Systems Research Institute of Thailand (grant number HSRI 64-061). K.S., A.A., C.K., N.P., C.P., S.D., and W.I. are employed by the Health Intervention and Technology Assessment Program (HITAP), a semi-autonomous research unit of the Thai Ministry of Public Health and funded by national and international public funding agencies, including the Access and Delivery Partnership—hosted by the United Nations Development Programme and funded by the Government of Japan. HITAP’s international work is supported by the International Decision Support Initiative (iDSI), funded by the Bill & Melinda Gates Foundation (OPP1202541), the UK Foreign, Commonwealth, and Development Office, and Rockefeller Foundation. Funders were not involved in study implementation, analysis, or preparation of the manuscript.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Data Availability Statement

Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Introduction to Monitoring and Evaluation
Course Description
This course introduces participants to key concepts of Monitoring and Evaluation, to enable them to better understand, participate in, and contribute to the M&E processes of the Commonwealth of Learning. It is a self-directed course that can be taken individually or as part of a team. The course offers flexibility with options for learning the content. You will learn from readings, videos, discussions with other participants, practical activities, and short quizzes. Certification is available for those who complete the quizzes for all units.
Course Contents
Unit 1: Monitoring and Evaluation Overview
Participants will:
- be introduced to Monitoring and Evaluation (M&E) and its key concepts
- understand some of the inter-relationships among key M&E concepts
- be able to identify where a pathway of change may need more development or details
- see how key M&E concepts are manifest in a relevant example
Unit 2: Results-based Monitoring and Evaluation
- understand the key elements of results-based M&E
- be able to distinguish the roles of monitoring and evaluation activities
- be able to identify where there is a breakdown in the logic of a results chain
Unit 3: Theory of Change
- understand key elements of a theory of change (ToC)
- become familiar with the process of constructing a theory of change
- be able to analyse a theory of change to check the logic of pathways of change and the evidence for them
Unit 4: Logical Frameworks
- understand the purpose of a Logframe
- be able to identify the key elements of a logframe
- review the links between different results levels in a logframe
- consider possible indicators for logframes and how results can be measured
- review the appropriateness of assumptions
Unit 5: Indicators and Means of Verification
- understand how indicators can be used as a sign of the results
- learn about the process for developing SMART and gender-responsive indicators
- identify possible evidence (means of verification) that can be used to track progress against indicators
Unit 6: Developing an M&E Strategy
- understand the necessity of and critical components of M&E strategies
- be able to develop the outline for a broad M&E strategy
Unit 7: Data Collection Methods - Overview
- understand the importance of balancing comprehensiveness versus the cost of data collection
- be able to distinguish between primary and secondary data
- identify the strengths and weaknesses of quantitative and qualitative data
- have an overview of a variety of data collection methods and sampling approaches.
Unit 8: Overview of Quantitative Methods
- learn about a variety of quantitative methods used in M&E and types of analysis to use for quantitative data collected.
- understand the difference between descriptive and inferential statistics.
Unit 9: Overview of Qualitative Methods
- learn about a variety of qualitative methods used in M&E
- have an overview of types of analysis to use for qualitative data, with a focus on content analysis.
Unit 10: Overview of Participatory Methods and Approaches to M&E
- understand the importance of a participatory approach to M&E
- be able to identify features of participatory M&E as well as common methods and approaches.
Target audience
This course is intended for COL partner staff who are currently or will be working on the implementation of a COL project. It is also a useful introduction to M&E for COL staff and consultants who will be overseeing or supporting M&E for a COL project.
Background/pre-requisite: This course does not require any prior exposure to Monitoring and Evaluation.
Expected time investment: There are 10 units in this course. It is expected that each unit of the course will take approximately 3-5 hours to complete (30-50 hours total). There is no time limit in which you must complete the course or a particular unit.
Mentoring: This is intended to be a self-directed course. If you are currently working with COL on a project, a COL staff member or consultant may arrange to provide support or feedback to you/your team as you work through the course. They may also arrange supplementary activities related to your specific COL project, aligned to the course. You will be advised of these arrangements prior to enrolling.
Outcomes of this Course
By the end of this course, participants will: Understand key terms and concepts related to M&E; and be able to critically engage with and participate in core M&E processes including contributing to the development and interrogation of theories of change, logframes, indicators and Means of Verification, M&E strategies, and data collection.
Certificates
You will be awarded a Certificate of Completion by COL based on the following criteria:
1. Completion of required readings and videos
2. Completion of quizzes for all units (no minimum score)
Other Information
This course is based on Results-Based Monitoring and Evaluation at the Commonwealth of Learning: A Handbook by Glen Farrel and Addendum by Tristan Measures; and Monitoring and Evaluation: A Reference Guide for Development Practitioners by Damodaram Kuppuswami.
Instructors
Lectures presented by:
Dr Robert Sauder
Mr Tristan Measures
Dr Kirston Brindley
Dr Balasubramanian Raman
Operations Team
Soyeon Kim, Administrative Support
Advisory/course development support provided by:
Dr Robert McCormick
Dr Tony Mays
Dr V. Balaji
Effective Strategies for Conducting Evaluations, Monitoring, and Research
Table of Contents
- Understanding the Importance of Evaluations, Monitoring, and Research
- Key Steps to Conducting Comprehensive Evaluations
- Best Practices for Ongoing Monitoring
- Designing and Executing Effective Research
- Leveraging Technology in Evaluation, Monitoring, and Research
- Analyzing and Presenting Your Findings
- Ethical Considerations in Evaluations, Monitoring, and Research
Understanding the Importance of Evaluations, Monitoring, and Research

Evaluations, monitoring, and research are all fundamental aspects of effective project management and strategic decision-making in virtually every field. Each serves a different, but crucial, purpose and offers unique insights into the performance and potential of a project, a program, a policy, or an organization.
Evaluations allow us to assess the effectiveness of an intervention, a program, or a policy. They involve a systematic collection and analysis of information to judge the merit, worth, or value of something. This can help stakeholders understand the impacts, both expected and unexpected, and identify areas for improvement.
Monitoring is an ongoing, systematic process of collecting data on specified indicators to provide management and stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives. Monitoring is important because it helps track progress, detects issues early, ensures resources are used efficiently, and contributes to achieving transparency and accountability.
Research is a process of steps used to collect and analyze information to increase our understanding of a topic or issue. It provides evidence-based insights, which can help in shaping strategies, formulating policies, and making informed decisions. It offers a way of examining your practice critically, to confirm what works and identify areas that need improvement.
Understanding the importance of evaluations, monitoring, and research, and employing them correctly, can significantly contribute to the success of projects and initiatives. They allow us to verify assumptions, learn, make informed decisions, improve performance, and achieve planned results.
Key Steps to Conducting Comprehensive Evaluations

Conducting comprehensive evaluations involves a series of crucial steps. These steps are designed to facilitate objective assessments and draw meaningful conclusions that can help improve performance and effectiveness.
1. Define the Purpose of the Evaluation: Before you start the evaluation, clearly define what you hope to achieve from it. This might include assessing the effectiveness of a program, identifying areas for improvement, or informing strategic decision-making. The purpose of the evaluation will guide the rest of the process.
2. Identify Stakeholders: Determine who has a vested interest in the evaluation’s results. Stakeholders might include team members, managers, funders, or the program’s beneficiaries. These individuals or groups should be involved in the evaluation process as much as possible to ensure their perspectives and needs are taken into account.
3. Develop Evaluation Questions: Based on the purpose of the evaluation and stakeholder input, develop specific questions that the evaluation will seek to answer. These might include questions about the program’s effectiveness, efficiency, relevance, impact, and sustainability.
4. Determine Evaluation Methodology: Decide on the best methods for gathering and analyzing data to answer your evaluation questions. This might involve quantitative methods like surveys and data analysis, qualitative methods like interviews and focus groups, or a combination of both. Consider the resources available to you and the strengths and limitations of different methodologies.
5. Collect Data: Implement your chosen methods to collect data. This might involve distributing surveys, conducting interviews, or gathering data from existing sources. Be sure to follow ethical guidelines when collecting data, particularly when human subjects are involved.
6. Analyze Data: Once you’ve collected your data, analyze it to draw conclusions. This might involve statistical analysis for quantitative data or thematic analysis for qualitative data (see the brief sketch at the end of this section).
7. Report Findings: Compile your findings into a comprehensive report that clearly presents the results of the evaluation, including answers to the evaluation questions and any recommendations for improvement. Share this report with stakeholders and use it to inform decision-making and strategic planning.
8. Implement Changes: An evaluation is only useful if its findings are acted upon. Use the results of your evaluation to implement changes, improve performance, and inform future strategies.
Remember, a comprehensive evaluation is not a one-time activity, but a continuous process of learning and improvement.
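As a minimal sketch of step 6 above, the following Python snippet summarises a hypothetical post-programme survey: descriptive statistics for a Likert-scale question plus a simple tally of themes coded from open-ended responses. All names and data are invented for illustration.

```python
# Illustrative analysis of hypothetical evaluation survey data:
# descriptive statistics plus a count of coded qualitative themes.
from statistics import mean, median
from collections import Counter

satisfaction_scores = [4, 5, 3, 4, 2, 5, 4, 4, 3, 5]   # 1 = very poor, 5 = excellent
coded_themes = ["training quality", "training quality", "timeliness",
                "materials", "timeliness", "training quality"]

print(f"n = {len(satisfaction_scores)}, "
      f"mean = {mean(satisfaction_scores):.1f}, "
      f"median = {median(satisfaction_scores)}")
print(Counter(coded_themes).most_common(3))
```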
Best Practices for Ongoing Monitoring

Monitoring is a vital component of project and program management. It involves the ongoing assessment of a project’s progress, ensuring it’s on track and aligned with its set goals. Below are some best practices for effective ongoing monitoring:
1. Establish Clear Goals and Objectives: The first step in any monitoring process is to establish what you are aiming to achieve. Set clear, measurable goals and objectives for your project or program.
2. Define Key Performance Indicators (KPIs): KPIs are measurable values that demonstrate how effectively a project or organization is achieving its objectives. Identify relevant KPIs that align with your goals and can provide quantifiable metrics to measure progress.
3. Use a Monitoring Plan: A monitoring plan provides a detailed outline of what will be monitored, how monitoring will take place, who will conduct the monitoring, and when it will happen. This can help ensure the monitoring process is systematic and comprehensive.
4. Regularly Review Progress: Monitoring is an ongoing process, and regular review of progress is essential. Determine a suitable review schedule based on the nature of the project.
5. Utilize Monitoring Tools: There are numerous monitoring tools available, both digital and traditional, that can help with data collection, analysis, and reporting. These tools can automate many aspects of the monitoring process, increasing efficiency and accuracy.
6. Collect and Analyze Data: Data collection is at the heart of monitoring. Collect data related to your KPIs and analyze it to understand the progress and performance of your project or program (a brief sketch follows this list).
7. Involve Stakeholders: Monitoring should be participatory, involving all relevant stakeholders. This includes team members, project beneficiaries, and any external stakeholders. Their perspectives can provide valuable insights into the effectiveness of the project.
8. Respond to Monitoring Results: The purpose of monitoring is to identify any deviations from the plan and address them promptly. When issues are detected, timely action should be taken to correct the course and mitigate any risks.
9. Document Lessons Learned: As you monitor your project or program, document lessons learned and best practices. These insights can help improve future projects and contribute to the organization’s knowledge base.
By integrating these best practices into your monitoring processes, you can ensure your projects stay on track, achieve their objectives, and deliver maximum impact.
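The sketch below illustrates steps 2 and 6: comparing collected KPI values against targets to flag indicators that are off track. The indicator names, targets, and threshold are hypothetical.

```python
# Illustrative KPI-versus-target check for routine monitoring reviews.
kpis = {
    "households reached": {"target": 1200, "actual": 950},
    "training sessions delivered": {"target": 40, "actual": 41},
    "reports submitted on time (%)": {"target": 90, "actual": 72},
}

for name, values in kpis.items():
    progress = values["actual"] / values["target"]
    status = "on track" if progress >= 0.9 else "off track - review"
    print(f"{name}: {progress:.0%} of target ({status})")
```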
Designing and Executing Effective Research

Research is a systematic inquiry to discover and interpret new knowledge. It’s crucial for making informed decisions, developing policies, and contributing to scholarly knowledge. Here are some key steps to designing and executing effective research:
1. Identify the Research Problem: The first step in the research process is to identify and clearly define the research problem. What is the specific question or issue that your research seeks to address? This should be a succinct statement that frames the purpose of your research.
2. Conduct a Literature Review: Before diving into new research, it’s essential to understand the existing knowledge on the topic. A literature review helps you understand the existing body of knowledge, identify gaps in the current research, and contextualize your study within the broader field.
3. Formulate a Hypothesis or Research Question: Based on the research problem and literature review, formulate a hypothesis or research question to guide your study. This should be a clear, focused question that your research will seek to answer.
4. Design the Research Methodology: The research methodology is the framework that guides the collection and analysis of your data. This could be qualitative (e.g., interviews, focus groups), quantitative (e.g., surveys, experiments), or a mix of both, depending on your research question. Consider factors like feasibility, reliability, and validity when designing your methodology.
5. Collect the Data: Based on your methodology, collect the data for your study. This might involve conducting interviews, distributing surveys, or collecting existing data from reliable sources.
6. Analyze the Data: After collecting your data, analyze it to uncover patterns, relationships, and insights. The exact methods will depend on the type of data you’ve collected. You might use statistical analysis for quantitative data or thematic analysis for qualitative data (see the brief sketch at the end of this section).
7. Interpret the Results: Based on your analysis, interpret the results of your study. Do the results support your hypothesis or answer your research question? Be cautious not to overgeneralize your findings beyond your specific context and sample.
8. Write the Research Report: Once your research is complete, write up your findings in a research report. This should include an introduction, literature review, methodology, results, discussion, and conclusion. Be sure to also acknowledge any limitations of your study.
9. Share Your Findings: Finally, share your findings with others, such as through publication in a peer-reviewed journal, presentation at a conference, or application in a practical context.
Effective research requires careful planning, meticulous execution, and thoughtful interpretation of the results. Always adhere to ethical standards in conducting research to maintain integrity and credibility.
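As a minimal sketch of steps 6 and 7 for a quantitative design, the snippet below runs an independent-samples t-test comparing a hypothetical outcome between intervention and comparison groups. It assumes the scipy package is available, and all data are invented for illustration.

```python
# Illustrative inferential analysis: Welch's t-test on two hypothetical groups.
from scipy import stats

intervention = [72, 68, 75, 80, 71, 77, 69, 74]
comparison = [65, 70, 62, 68, 66, 64, 69, 63]

t_stat, p_value = stats.ttest_ind(intervention, comparison, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Interpret cautiously: a small p-value alone does not establish practical
# significance, and findings should not be generalised beyond the sample.
```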
Leveraging Technology in Evaluation, Monitoring, and Research

In today’s digital era, technology plays a vital role in enhancing the efficiency and effectiveness of evaluation, monitoring, and research. It provides new ways to collect, analyze, and interpret data, allowing for real-time updates, greater reach, and better visualization of information. Here are some ways to leverage technology in these areas:
1. Digital Data Collection: Traditional methods of data collection like paper surveys or in-person interviews can be time-consuming and resource-intensive. Digital tools allow for quicker, more efficient data collection. Online surveys, mobile data collection apps, and web scraping tools can streamline this process, reduce errors, and enable real-time data collection.
2. Remote Monitoring and Evaluation: Technology allows for remote monitoring and evaluation, which can be particularly useful when physical access to a site is difficult or impossible. Satellite images, GPS tracking, and internet-based communication platforms can provide valuable data from afar.
3. Advanced Data Analysis: Technology provides sophisticated tools for data analysis, from basic statistical analysis software to advanced machine learning algorithms. These tools can handle large datasets, perform complex analyses, and reveal patterns and insights that might not be evident through manual analysis.
4. Data Visualization: Data visualization tools can help to present complex data in an understandable and accessible format. Interactive dashboards, charts, and maps can make data more engaging and easier to interpret, allowing for more informed decision-making.
5. Improved Communication and Collaboration: Technology enhances communication and collaboration among researchers, evaluators, stakeholders, and participants. Cloud-based platforms allow for real-time collaboration on data collection, analysis, and report writing.
6. Virtual Reality (VR) and Augmented Reality (AR): VR and AR technologies are being used for immersive data presentation, allowing stakeholders to experience data in a novel and engaging way. They are also used in research to simulate environments and scenarios.
7. Artificial Intelligence (AI) and Machine Learning (ML): AI and ML can help automate the analysis of large and complex datasets, predict trends, and identify patterns that humans might not notice (a brief illustration follows this list).
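As a hedged illustration of the point above, the short Python sketch below clusters hypothetical facility-level indicators so that unusual performance patterns stand out. The data file, indicator names, and the choice of k-means with three clusters are assumptions made for the example, not a recommended analysis.

```python
# Minimal sketch of ML-assisted pattern detection -- illustrative only.
# Assumes a hypothetical CSV of facility-level indicators; all names are placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("facility_indicators.csv")  # hypothetical data file
cols = ["coverage_rate", "dropout_rate", "cost_per_beneficiary"]

scaled = StandardScaler().fit_transform(df[cols])  # put indicators on a common scale
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Average indicator profile of each cluster, to see what distinguishes the groups
print(df.groupby("cluster")[cols].mean())
```

In practice an analyst still inspects and interprets the clusters; the algorithm only surfaces candidate patterns.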
While technology provides powerful tools, it’s also important to consider issues like data privacy, digital literacy, and accessibility. Additionally, technology should not replace human judgment and interpretation, but rather, serve as a tool to support these processes. Technology is most effective when it’s used strategically, with clear goals and a good understanding of its capabilities and limitations.
After the data collection and analysis phase in your evaluation, monitoring, or research process, it’s crucial to effectively analyze and present your findings. This step allows you to translate your data into meaningful insights that can inform decisions and drive action. Here’s how to go about it:
1. Review and Interpret the Data: Begin by reviewing your data. Look for patterns, trends, and key points of interest. Interpret these within the context of your research questions or objectives. Always be transparent about any limitations or uncertainties in the data.
2. Compare with Prior Expectations or Benchmarks: If you set hypotheses or benchmarks, compare your findings with these. Do the results align with what you expected, or are there surprising outcomes? Understanding these deviations can provide valuable insights.
3. Develop Key Takeaways: Summarize the most important insights from your data into a few key takeaways. These should be concise, impactful statements that encapsulate the main findings of your work.
4. Create Visual Representations: Data visualization tools, like charts, graphs, or infographics, can help illustrate your findings and make complex data easier to understand. Ensure these visualizations are clear, accurate, and effectively support your key takeaways (a simple charting example follows this list).
5. Write a Clear and Concise Report: Document your findings, methodology, and implications in a report. Make sure it’s structured logically, clearly written, and accessible to your intended audience. Include an executive summary that provides a high-level overview of your findings.
6. Tailor the Presentation to Your Audience: Different stakeholders may have different interests and levels of familiarity with your subject. Tailor your presentation to suit your audience, focusing on the information that is most relevant to them.
7. Use Clear and Simple Language: While presenting your findings, use language that your audience will understand. Avoid jargon and technical terms as much as possible, or clearly define them if necessary.
8. Include Recommendations: If appropriate, include recommendations based on your findings. These can help guide future action or decision-making.
9. Solicit Feedback and Questions: After presenting your findings, invite feedback and questions. This can help ensure your audience understands your findings and can facilitate a more in-depth discussion.
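To make point 4 concrete, here is a minimal Python sketch of one common visual: a grouped bar chart comparing baseline and endline values. The indicator names and numbers are invented placeholders; any real chart should reflect your actual findings and audience.

```python
# Minimal sketch of a baseline-vs-endline bar chart -- illustrative only.
# All indicator names and values below are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt

indicators = ["School enrolment", "Vaccination coverage", "Household income index"]
baseline = [62, 71, 48]   # hypothetical values
endline = [74, 83, 55]    # hypothetical values

x = np.arange(len(indicators))
width = 0.35

fig, ax = plt.subplots(figsize=(7, 4))
ax.bar(x - width / 2, baseline, width, label="Baseline")
ax.bar(x + width / 2, endline, width, label="Endline")
ax.set_xticks(x)
ax.set_xticklabels(indicators)
ax.set_ylabel("Indicator value")
ax.set_title("Baseline vs. endline results (illustrative data)")
ax.legend()
fig.tight_layout()
fig.savefig("key_findings.png")  # or plt.show() for interactive use
```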
Remember, the goal of analyzing and presenting your findings is to communicate your insights clearly and effectively, enabling stakeholders to understand and act upon them.
Ethical considerations are paramount in evaluations, monitoring, and research. They ensure the integrity of your work, protect the rights and wellbeing of participants, and build trust with stakeholders. Here are some key ethical considerations to keep in mind:
1. Informed Consent: Participants should understand what they’re agreeing to when they participate in your research. This includes the purpose of the research, what’s involved, any risks or benefits, and their right to withdraw at any time without penalty.
2. Privacy and Confidentiality: Protect participants’ privacy by keeping their data confidential. Individual responses should not be shared without consent, and data should be reported in aggregate to prevent identification of individuals (a small aggregation sketch follows this list).
3. Avoiding Harm: Your research should not cause harm to participants. This includes physical harm, but also emotional distress, inconvenience, or any other negative impacts. Always consider the potential impacts on participants and take steps to mitigate any harm.
4. Honesty and Transparency: Be open and honest about your research. This includes being transparent about any conflicts of interest, accurately reporting your findings (including negative results or limitations), and not manipulating or misrepresenting data.
5. Respect for Diversity and Inclusion: Ensure your research respects and includes diverse perspectives. This might involve including participants from diverse backgrounds, considering cultural sensitivities, and ensuring your research does not perpetuate bias or discrimination.
6. Data Integrity and Management: Collect, store, and analyze data in a way that maintains its integrity. This includes avoiding practices like data fabrication or falsification, and ensuring secure and appropriate data storage.
7. Compliance with Laws and Regulations: Ensure your research complies with all relevant laws and regulations. This might involve data protection laws, ethical review processes, or sector-specific regulations.
8. Collaboration and Respect: Treat all participants and stakeholders with respect. This includes valuing their time, acknowledging their contributions, and fostering a collaborative relationship.
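As a small, hedged sketch of point 2, the Python snippet below reports participant counts in aggregate and suppresses cells that fall below a minimum size, one common convention for reducing the risk of identifying individuals. The threshold of five and the column names are assumptions for the example; always follow your own organisation's and jurisdiction's data-protection rules.

```python
# Minimal sketch of aggregate reporting with small-cell suppression -- illustrative only.
# Assumes a hypothetical individual-level file; column names and the threshold are placeholders.
import pandas as pd

df = pd.read_csv("participant_records.csv")  # hypothetical individual-level data

# Report counts by district and sex instead of sharing individual records
table = df.groupby(["district", "sex"]).size().reset_index(name="participants")

# Suppress cells small enough to risk identifying individuals (threshold is an assumption)
MIN_CELL = 5
table["participants"] = table["participants"].where(table["participants"] >= MIN_CELL)

print(table)  # suppressed cells appear as missing values
```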
Ethical considerations should be integrated throughout your research process, from planning to reporting. It’s also beneficial to stay updated on ethical guidelines in your field, as they can evolve over time.
Evaluations, monitoring, and research are indispensable tools for understanding the performance, impact, and progress of projects, policies, and programs. Effectively conducting these activities requires careful planning, thorough data collection and analysis, clear communication of findings, and careful consideration of ethical issues.
The choice of method should always be driven by the research questions and objectives. Whether using qualitative methods, quantitative methods, or a combination of both, it’s important to select a methodology that best fits the research purpose and context.
Technology is playing an increasingly prominent role, offering new ways to collect, analyze, and share data. However, it’s essential to use technology judiciously, considering factors such as data privacy, digital literacy, and accessibility.
Presenting findings in a clear, accessible, and engaging manner is crucial. Data visualization tools can help translate complex data into a format that’s easy to understand and actionable. And no matter how the data is presented, maintaining the integrity and transparency of the research process is paramount.
Ethical considerations must be at the forefront of all evaluations, monitoring, and research. Respecting participants’ rights, maintaining privacy and confidentiality, avoiding harm, and adhering to laws and regulations are all vital to the credibility and integrity of your work.
In conclusion, conducting effective evaluations, monitoring, and research is a multifaceted process that requires strategic planning, rigorous execution, and constant learning and adaptation. These activities offer valuable insights that can guide decision-making, inform strategy, and contribute to the broader body of knowledge.
Fation Luli
Topics: Monitoring & Evaluation
Monitoring systems in Africa
Summary: Proving the value of agroecology for farmers and food systems: what methods and evidence do we have?
Learning Links | resources for monitoring and evaluation practitioners and decision-makers
Conducting MEL with digital agriculture service providers – lessons from leveraging MEL to develop farmer-centric digital services
- Agriculture
- Information and Communication Technologies
- Monitoring & Evaluation
Disability inclusion in evaluation
Monitoring and Evaluation Capacity Assessment Toolkit
A study on the status of M&E practices on SDGs in selected countries in the Asia Pacific Region
Addressing environmental sustainability through the OECD DAC Criteria for Evaluation of Development Assistance
Monitoring, evaluation and learning in farmer field school programmes. A framework and toolkit
A lack of learning in the monitoring and evaluation of agriculture projects
“Here we go again” - A lack of learning in the monitoring and evaluation of agriculture projects
Strengthening and measuring monitoring and evaluation capacity in selected African programmes
What type of evaluator are you?
Innovating in monitoring and evaluation to better respond to the needs of decision-makers at national level
- National Evaluation Capacities
Emerging evaluator programme shows that sustainable capacity development initiatives are possible
- Capacity Development
How to Establish a National Evaluation System
Establishing a National Monitoring and Evaluation Policy
Evaluating agroecology: what is your experience?
Tool for Agroecology Performance Evaluation (TAPE) - Test version
Evaluation and performance assessments for agroecology
- Biodiversity
- Environment
- Food Security
Connecting data and research to policies to transform the agricultural sector: the EPA Forum
Adapting evaluations to the needs of decision-makers: the case of Benin
The proof of the pudding is in the eating: five lessons for developing national evaluation capacity
- Evaluation process
Monitoring, evaluation and learning: How can we maximize capacity?
- Evaluation methods
Impact assessment and evaluation tools
The road ahead: Evaluators must promote learning for adaptive development processes
- Knowledge management and communication
How best to evaluate capacity development?
Insights from participatory outcome evaluation with social forest groups in Myanmar
Citizen's monitoring in forestry - Participatory Monitoring and Evaluation for Community Forest Groups
Using Evidence in Policy and Practice – Lessons from Africa
M&E capacity-strengthening approaches and measurement in Africa
How do we move forward on Evaluation Systems in the Agriculture Sector?
Rapid evaluation to measure the impact of the COVID pandemic in mountainous areas
EvalXchange - a festival of learning
How has Covid-19 affected data quality in the agriculture sector? Highlights on a recent e-Panel
A guide: integrating communication in evaluation
How to use Knowledge Management to strengthen the impact of Evaluation on smallholder agriculture development?
Review of monitoring and evaluation capacities in the agriculture sector
Data quality in agriculture and food security in the time of COVID-19
How can evaluation help improve data quality and policies on food security during Covid-19 pandemic?
e-Panel on Leadership and RBM in the African Agricultural Sector
Strengthening monitoring and evaluation for adaptation planning in the agriculture sectors
Can we settle for evaluation alone to ensure that the SDGs are achieved?
Monitoring, Evaluation and Learning – Concepts, principles and tools
The farmer as a key participant of M&E: lessons and experiences from Participatory M&E systems
Evaluation practice in Africa: key trends in the landscape
Youth in agriculture: what lessons can we draw from evaluations?
Webinar: Use of synthesis and meta-analysis in development evaluation
Learning event: 2019 Annual Report on Results and Impact of IFAD Operations
EVALUATION FOR TRANSFORMATIONAL CHANGE
Enhancing funding and service delivery in agriculture: any ideas?
Supporting smallholder agriculture: what are your experiences in using evaluation?
Breaking the Mould. Alternative approaches to monitoring and evaluation
What can we do to improve the quality of development projects?
Recurring errors in public policies and major projects: contributions and solutions from evaluation
E-learning course - Developing a Monitoring and Evaluation Plan for Food Security and Agriculture Programmes
Can agriculture and food security policies be effective when statistical data is unreliable?
What can we do to improve food security data?
DEPTA - Development Evaluation Training Programme in Africa
We need evaluation to support agricultural transformation in Africa
- Rural development
PRiME Impact Evaluation Course
Action and reflection: a guide for monitoring and evaluating participatory research
This paper from the International Development Research Centre (IDRC) was designed to support those involved in participatory research and development projects with monitoring and evaluation strategies.
"The guide is not a blue-print, but addresses issues that are at the heart of making an art of monitoring and evaluating participatory research.2 The guide is organized around six basic, interrelated questions that need to be answered when doing monitoring and evaluation. These questions are:
- WHY do we monitor and evaluate participatory research? (Chapter 2)
- FOR WHOM will we monitor and evaluate? (Chapter 3)
- WHAT will we monitor and evaluate? (Chapter 4)
- WHO will monitor and evaluate? (Chapter 5)
- WHEN will we monitor and evaluate? (Chapter 6)
- HOW will we do it? (Chapter 7)." (McAllister & Vernooy, 2005)
- Introduction: six basic questions
- How to use this guide
- 1. Issues which influence participatory research
- 1.1 About participatory research
- 1.2 The nature of knowledge and information
- 1.3 Types of participation
- 1.4 Influences on the results of participatory research
- 2. WHY do we monitor and evaluate?
- 2.1 Objectives
- 2.2 Efficiency, effectiveness and relevance
- 2.3 Accountability and causality
- 3. FOR WHOM do we monitor and evaluate?
- 4. WHAT do we monitor and evaluate?
- 4.1 Basic concepts: introduction
- 4.2 Participatory baseline analysis
- 4.3 Outputs
- 4.4 Process and methods
- 4.5 Outcomes (short term impacts) and impacts
- 4.6 Reach
- 5. WHO monitors and evaluates?
- 5.1 Role of researchers
- 5.2 Role of the community
- 5.3 Role of external evaluators
- 6. WHEN do we monitor and evaluate? (the project cycle)
- 6.1 Pre-project phase: proposal development stage
- 6.2 In-project phase
- 6.3 Post-project phase
- 7. HOW we monitor and evaluate
- 7.1 Selecting tools
- 7.2 References: Selected readings
- 7.3 Other sources
- Annex 1. Glossary
- Annex 2. About indicators
- Bibliography
McAllister, K., & Vernooy, R. International Development Research Centre, (2005). Action and reflection: A guide for monitoring and evaluating participatory research (Working Paper 3, Rural Poverty and the Environment Working Paper Series). Retrieved from website: https://www.researchgate.net/publication/277213689_Participatory_monitoring_and_evaluation_readings_and_resources
tools4dev: Practical tools for international development
10 Reasons Why Monitoring and Evaluation is Important
Monitoring and evaluation are essential to any project or program. Through this process, organizations collect and analyze data, and determine if a project/program has fulfilled its goals. Monitoring begins right away and extends through the duration of the project. Evaluation comes after and assesses how well the program performed. Every organization should have an M&E system in place. Here are ten reasons why:
M&E results in better transparency and accountability
Because organizations track, analyze, and report on a project during the monitoring phase, there’s more transparency. Information is freely circulated and available to stakeholders, which gives them more input on the project. A good monitoring system ensures no one is left in the dark. This transparency leads to better accountability: with information so readily available, organizations have to keep everything above board, and it becomes much harder to deceive stakeholders.
M&E helps organizations catch problems early
Projects never go perfectly according to plan, but a well-designed M&E system helps the project stay on track and perform well. M&E plans help define a project’s scope, establish interventions when things go wrong, and give everyone an idea of how those interventions affect the rest of the project. This way, when problems inevitably arise, a quick and effective solution can be implemented (a minimal monitoring check is sketched below).
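As a hedged illustration, the small Python sketch below flags indicators that have fallen more than ten percent behind their to-date targets, the kind of routine check that lets a team intervene early. The indicators, numbers, and tolerance are invented for the example; a real M&E plan would define its own indicators, reporting cadence, and thresholds.

```python
# Minimal sketch of a routine "are we on track?" check -- illustrative only.
# Indicators, targets, and the 10% tolerance below are hypothetical placeholders.
import pandas as pd

progress = pd.DataFrame({
    "indicator": ["Trainings held", "Farmers reached", "Reports submitted"],
    "target_to_date": [40, 1200, 12],
    "actual_to_date": [38, 950, 12],
})

progress["achievement"] = progress["actual_to_date"] / progress["target_to_date"]
off_track = progress[progress["achievement"] < 0.9]  # flag anything >10% behind target

print(off_track[["indicator", "achievement"]])
```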
M&E helps ensure resources are used efficiently
Every project needs resources. The funding on hand determines how many people can work on a project, the project’s scope, and what solutions are available if things get off course. The information collected through monitoring reveals gaps or issues, which require resources to address. Without M&E, it wouldn’t be clear which areas need to be a priority, and resources could easily be wasted on an area that isn’t the source of the issue. Monitoring and evaluation helps prevent that waste.
M&E helps organizations learn from their mistakes
Mistakes and failures are part of every organization. M&E provides a detailed blueprint of everything that went right and everything that went wrong during a project. Thorough M&E documents and templates allow organizations to pinpoint specific failures, as opposed to just guessing what caused problems. Often, organizations can learn more from their mistakes than from their successes.
M&E improves decision-making
Data should drive decisions. M&E processes provide the essential information needed to see the big picture. After a project wraps up, an organization with good M&E can identify mistakes, successes, and things that can be adapted and replicated for future projects. Decision-making is then influenced by what was learned through past monitoring and evaluation.
M&E helps organizations stay organized
Developing a good M&E plan requires a lot of organization. That process in itself is very helpful to an organization. It has to develop methods to collect, distribute, and analyze information. Developing M&E plans also requires organizations to decide on desired outcomes, how to measure success, and how to adapt as the project goes on, so those outcomes become a reality. Good organizational skills benefit every area of an organization.
M&E helps organizations replicate the best projects/programs
Organizations don’t like to waste time on projects or programs that go nowhere or fail to meet certain standards. The benefits of M&E that we’ve described above – such as catching problems early, good resource management, and informed decisions – all result in information that ensures organizations replicate what’s working and let go of what’s not.
M&E encourages innovation
Monitoring and evaluation can help fuel innovative thinking and methods for data collection. While some fields require specific methods, others are open to more unique ideas. As an example, fields that have traditionally relied on standardized tools like questionnaires, focus groups, interviews, and so on can branch out to video and photo documentation, storytelling, and even fine arts. Innovative tools provide new perspectives on data and new ways to measure success.
M&E encourages diversity of thought and opinions
With monitoring and evaluation, the more information the better. Every team member offers an important perspective on how a project or program is doing. Encouraging diversity of thought and exploring new ways of obtaining feedback enhance the benefits of M&E. Tools like surveys are only truly useful if they capture a wide range of people and responses. In good monitoring and evaluation plans, all voices are important.
Every organization benefits from M&E
While certain organizations can use more unique M&E tools, all organizations need some kind of monitoring and evaluation system. Whether it’s a small business, corporation, or government agency, all organizations need a way to monitor their projects and determine if they’re successful. Without strong M&E, organizations aren’t sustainable, they’re more vulnerable to failure, and they can lose the trust of stakeholders.
By Emmaline Soken-Huberty
Monitoring and Evaluation Research Papers/Topics
Measuring the Impact of Literacy Projects on Rural Poor in Northern Uganda: The Case of Rural Education Empowerment Project in Nebbi District
ABSTRACT This qualitative study examined the impact of literacy projects on the rural poor in Northern Uganda with specific reference to the Rural Education Empowerment Project (REEP), which the Uganda Programme of Literacy for Transformation (UPLIFT) implemented in partnership with Norwegian Baha’i Office of Social and Economic Development (NorSED) from 2007 to 2009. The study considered literacy as the independent variable and the impact of the project on the rural poor as its dependent ...
Institutional Factors And Evaluation Quality In Non-Governmental Organisations In Uganda: A Case Study Of Fhi360 Uganda
Abstract The study examined how institutional factors affect the quality of project evaluations in non- governmental organizations. It was guided by three research objectives, that is, the influence of management strength, staff competence and resource management on the quality of project evaluation at FHI360. The study used a cross-sectional survey design with a qualitative-focused but mixed method inquiry. The study population involved project team leaders, M&E staff and project implementa...
Performance Contracting And Service Delivery At Directorate Of Co-Operatives In Nairobi County
ABSTRACT The introduction of performance contracting to government ministries and agencies was aimed at improving service delivery. In 2006, the Directorate of Co-operatives adopted and implemented performance contracting; however, its performance on service delivery remains unsatisfactory. The main objective of the research was to analyze performance contracting and service delivery at the Directorate of Co-operatives in Nairobi County. The specific objectives were to assess performance contracting process,...
The Influence Of Cash Transfer Programmes On Socio-Economic Wellbeing Of Recipient Households In Migori County, Kenya
ABSTRACT Conditional cash transfers are increasingly becoming a best practice in the social sector for developing countries. In 2004, the Orphans and Vulnerable Children Cash Transfer was introduced on a pilot basis in Kenya. This was in response to the impact of HIV/Aids on children. A study carried out by the Kenya National Bureau of Statistics (2006) shows that poverty rates tend to be higher among vulnerable groups such as children (53.5%), including orphans and vulnerable children (54.1%). Kenya’s Cas...
Factors Influencing The Performance Of Monitoring And Evaluation Systems In Non-Government Organizations In Lira District, Northern Uganda
ABSTRACT Monitoring and evaluation systems allow for project activities to be measured and analyzed. Unfortunately, there is often a gap in the design of M&E systems; generation of information during the process of M&E and use of this information in future designs. The purpose of this study was to establish the factors influencing performance of M&E systems of NGOs in Lira District. The study was guided by the following research objectives: To determine how M&E structure influenced the perfor...
Factors Affecting Utilization Of Monitoring And Evaluation Findings In Implementation Of Malaria Control Programmes In Mukono District, Uganda
ABSTRACT This study set out to ascertain the factors affecting utilization of Monitoring and Evaluation findings in implementation of Malaria Control Programmes. Its objectives included identifying the implementation factors, decision factors and community factors that affected utilization of Monitoring and Evaluation findings in implementation of Malaria Control Programmes. The study used a survey design in which questionnaires were administered to 120 employees from Monitoring and Evaluatio...
Factors Affecting Data Quality In Private Clinics In Uganda: The Case Of Selected Uganda Health Marketing Group-Supported Clinics In Kampala
ABSTRACT This study sought to establish the factors that affect data quality in UHMG-supported private clinics in Kampala District. The objectives of the study were: to examine the effect of internal factors on data quality in UHMG-supported private clinics; to find out how external factors affect data quality in UHMG-supported clinics and to find out how data quality can be improved in UHMG-supported private clinics. Specific emphasis was put on the effect of internal factors as well as exte...
Factors Affecting Application Of Results Based Monitoring And Evaluation System By Nurture
ABSTRACT Donor countries are concerned about development practices in the developing countries where much of the financial and technical investment has been done with little change. One of the causes the donors identified was under-reporting of project impact on people’s lives. The donors resolved during the Paris Declaration of 2005 that development organizations should use a results-based management approach to implement projects. The purpose of the study was to examine factors affecting the...
Evaluation Capacity Development Processes And Organisational Learning In Ugandan Municipal Local Governments
ABSTRACT The study to establish the relationship between Evaluation Capacity Development (ECD) processes and Organisational Learning (OL) in Ugandan Municipal Local Governments (LGs) was influenced by the Organisational Learning theory (Argyris & Schön, 1978) and used a cross sectional survey design that adopted mixed methods on 62 (sixty two) respondents from four Municipal LGs, two central government ministries and one agency. The researcher employed a questionnaire survey, key informant i...
Effective Role Of Public Sector Monitoring And Evaluation In Promoting Good Governance In Uganda: Implications From The Ministry Of Local Government
ABSTRACT The purpose of this study was to examine the effectiveness of the role of public sector monitoring and evaluation in promoting good governance in Uganda, with a focus on the Ministry of Local Government. Specifically, the study sought to: examine the effective role of M&E Accountability, M&E Management Decision, M&E Organisational Learning in promoting good governance, draw lessons from practice and provide recommendations to better inform the implementation strategy of M&E in the Mi...
Determinants Of Effective Utilization Of Routine Health Information Within Private Health Facilities In Kampala -Uganda
ABSTRACT This study examined the extent to which the identified determinants influence the effective utilization of routine health information in the private health facilities in Kampala. The study was based on the following research objectives: to describe how technical determinants influence utilization of routine health information in private health facilities in Kampala; to determine how organizational determinants influence utilization of routine health information in private health faci...
Decentralized Policy Management And Performance Of Water And Sanitation Development Facility-North, Lango Sub-Region, In Northern Uganda
ABSTRACT The research focused on the influence of decentralized policy management and performance of deconcentrated structures in the Ministry of Water and Environment in Uganda with a case study of Water and Sanitation Development Facility-North. It covered a selected number of districts of Lango sub-region in northern Uganda namely, Oyam, Apac, Dokolo, Lira and Amolatar. The performance was the dependent variable with decentralized policy management being the independent variable (i.e. measu...
Organizational Learning Culture And Utilization Of Evaluation Results By International Development Agencies. A Case Study Of Heifer International Uganda.
The study examined the relationship between organizational learning culture and utilization of evaluation results in international development agencies taking Heifer International Uganda as a case study. The study interrogated organizational learning culture in terms of senior leadership support, staff capacity and structural support system with an interest of finding out how each of these influences utilization of evaluation results at Heifer International Uganda. In an effort to answer the ...
Contract Management Practices And Performance Of The Road Construction Projects In Wakiso District - Uganda
ABSTRACT The study examined the use of contract management practices on performance of the road construction projects in Wakiso district. The objectives of the study were: to examine the role of monitoring intensity in enhancing performance of the road construction projects in Wakiso district; to analyze the relationship between risk management and performance of the road construction projects in Wakiso district; and, to assess the role of evaluation in enhancing performance of the road const...
Institutional Systems-Related Factors And Performance Of National Non-Governmental Organizations In Sudan: Acase Study Of Sibro
ABSTRACT This study sought to examine the influence of institutional systems-related factors on the performance of national NGOs in Sudan with a particular focus on Sibro Organization. Specific emphasis was put on investigating the effect of financial management and human resources on organizational performance as well as establishing the relationship between strategic leadership and performance. The study applied a correlational research design in a case study involving both qualitative and ...
Popular Papers/Topics
- Influence of Monitoring Practices on Projects Performance at the Water Sector Trust Fund
- Assessment of the Impact of Pregnancy School on Pregnancy Outcomes and Skilled Delivery in Nabdam District of Upper East Region
- Use of Routine Health Information for Decision Making among Health Workers at Coast General Hospital, Mombasa County, Kenya
- Role of Logical Framework Adoption on Project Success in Non-Governmental Organizations in Kenya: A Case of Relief Non-Governmental Organizations in Nairobi County
- Assessing the Impact of Maternal and Child Health and Nutrition Improvement Project (MCHNP) on Service Delivery Outcomes in Ghana
- Determinants of Adherence to Exclusive Breast Feeding among HIV Positive Mothers Attending Child Welfare Clinic at Pumwani Maternity Hospital, Nairobi County, Kenya
- Influence of Monitoring and Evaluation on Project Sustainability in Nigeria: A Case Study of Local Empowerment and Environmental Management Project (LEEMP) in Benue State
- Nutritional Status of Free Living and Institutionalized Elderly and Associated Factors in Trans Nzoia County, Kenya
- Assessment of the Triage System at the Emergency Department of the Greater Accra Regional Hospital
- Assessment of the Implementation of One-Household One-Bin Sanitation Intervention in Abokobi Community
Monitoring and evaluation (M&E) topics covered by select institutes of four South Asian countries.
Research to Action
The Global Guide to Research Impact
Monitoring and Evaluation
It’s not easy to measure the impact of development research in bringing about positive change. It’s even harder to show how communications efforts and expenditure help to achieve both research objectives and development outcomes. This section aims to offer key resources and insights to help support better monitoring and evaluation of research uptake activities.
Impact Practitioners
Four strategies to increase policy influence
This 10-page article offers some insights from policy studies and case studies of Oxfam campaigns to promote the uptake of research evidence in policy. Oxfam…