Introduction

DPMG’s quality standards are designed to ensure that its evaluations are of the highest quality. The standards are based on existing best practices and reflect the international consensus of the development evaluation community, including the OECD/DAC and the UN Evaluation Group (UNEG).

The standards apply to proposals, inception reports, and evaluation reports. Inception reports, which are produced after the initial phase of the evaluation, are designed to establish a shared understanding between the evaluation team and the client regarding all aspects of the planned evaluation.  They describe the information collected to date, including lists of persons interviewed, and what the evaluation team has learned during the inception phase. 

 

Quality Standards for DPMG Proposals

1. A high-quality evaluation starts with the selection of a high-quality evaluation team and a proposal that clearly identifies the questions to be addressed and explains how the evaluation proposes to address them.

2. A high-quality proposal demonstrates a clear understanding of the purpose, objectives, and audience of the proposed evaluation.

3. The proposal identifies the theory of change or results chain that explains the causal links between the activities and the achievement of objectives.  

4. A DPMG proposal clearly lays out the specific evaluation questions to be answered, as well as the evaluation’s breadth and coverage (“scope”). 

5. The evaluation methodology is clearly and comprehensively described, including how the evaluation questions will be answered, the analytic methods to be used for each question (quantitative, qualitative, or mixed methods), and why they were chosen.

6. The evaluation design is efficient and capable of generating credible answers to the evaluation questions. The design selected should reflect cost-effectiveness trade-offs among alternative ways to address the evaluation questions. Can a different way of conceptualizing the evaluation save resources? Are more junior, and thus less expensive, staff members adequate to the task? Can the task be properly conducted with fewer staff? The evaluation approach is appropriate to the problem and the purpose. Sometimes established formal methodologies are appropriate, for example experimental designs, regression analyses, validated models, or survey protocols. When the choice of methodology is less obvious, the analytic approach and its rationale are explained and justified. When a new method or mixed methods are used in a novel way, or an existing method is applied to a new kind of problem, the rationale for the selected approach and its advantages over alternatives are explained.

7. The proposal identifies and describes the data requirements and data sources (or collection methods and instruments) that will be used for the evaluation, along with any data limitations (e.g., confidence that data are available or can be collected within cost/time constraints, the extent to which the data are accurate, reliable, and valid for this purpose). 

8. The proposal demonstrates knowledge of previous evaluations and research literature on the topic and explains what is new about the planned evaluation, and how the proposed work will add value.  

9. The proposal is transparent about the strengths and limitations of the evaluation design. It explains which questions the design will and will not allow to be answered. It also acknowledges the possible risks to carrying out the design and the mitigation strategies that could be applied to reduce those risks.

 

Quality Standards for Inception Reports

1. The report sharpens the team's understanding of the scope and purpose of the evaluation, such as the geographic area and time period covered.

2. The report refines the evaluation's data collection and analysis plan, showing how different methods relate to each other and how multiple data sources will be used to substantiate findings and conclusions.

3. The report identifies options for the client, such as trade-offs between alternative selections of countries for desk reviews.

4. The report flags obstacles that might warrant a change in the Terms of Reference and/or the original proposal for the evaluation.

5. The report elaborates on the assignment of tasks among team members, the detailed work plan, and the timelines for major activities and deliverables.

6. The report presents a preliminary outline of the evaluation report.

 

Quality Standards for Evaluation Reports

1. The evaluation is independent, objective, and balanced. Its tone is impartial and constructive, and it presents positive and negative findings fairly. It avoids advocacy or bias with respect to the evaluated activity. Evaluations report findings honestly, even when these are unwelcome to the client. They also highlight alternative interpretations supported by the data, as well as any ambiguities. DPMG’s Conflict of Interest Policy ensures that clients, sponsors, and DPMG’s readership as a whole can trust both the quality and integrity of DPMG’s work.

2. The evaluation clearly lays out its rationale, objectives, and scope. It explains why it was undertaken at the particular time, which stakeholders are expected to use the evaluation, and what accountability systems, decisions, processes, or debates the evaluation aims to inform. A clear understanding of what the evaluation is attempting to accomplish (its objectives) is key to ensuring its utility and future use. The scope of the evaluation (its breadth and depth of coverage) is clearly described and justified.

3. The evaluation describes the program or activity being evaluated, as well as the context in which it operates. The reader is given a good sense of the key social, political, economic, historical, demographic, and institutional factors and the policy and development context(s) within which the activity operates or operated, and how this context relates to the observed results.

4. The evaluation describes the theory of change behind the program or activity being evaluated, including its major underlying assumptions. The objectives of the program, how the program or activity is intended to achieve the objectives, the assumptions being made, and the factors most important for achieving the objectives are all clearly laid out. Many programs do not explicitly articulate a theory of change; in such cases, the evaluation identifies implicit program goals and describes the implicit theory of change underlying the program.

5. The evaluation frames its questions well and explains the evaluation methods used to answer them. For an evaluation to be credible, transparency about evaluation design and methodological approaches is essential. Thus DPMG’s evaluations describe the specific methods used (e.g., qualitative or quantitative inquiry) and are candid about the limitations of the evaluation design and methods and their implications (e.g., data gaps, analytical limitations).

6. The evaluation takes account of relevant literature and past evaluations. A high-quality evaluation cannot be done in intellectual isolation; it necessarily builds on a body of relevant literature and analysis as well as past evaluations. How the evaluation agrees with, disagrees with, or otherwise differs significantly from previous studies is clearly spelled out. The evaluation team's understanding of this past research and of the findings of other relevant evaluations is reflected throughout the evaluation, from how the problem is formulated and approached to the discussion of findings and their implications.

7. The evaluation scrutinizes the reliability and validity of data, and uses multiple sources of data wherever possible. Evaluations assess the quality of the evidence, both quantitative and qualitative, in relation to its intended purpose, and highlight important limitations or caveats. Data anomalies are explained (e.g., whether conflicts or policy reforms account for sharp discontinuities in data series). Data-generation methods and database fields are clearly specified, and data are screened and manipulated using standard social science research methods. When alternative methods converge on the same result, more confidence can be placed in that result; therefore, triangulation among alternative data sources is done whenever possible.

8. The evaluation uses analytic methods appropriate to the nature and quality of the data available. In particular, analytic methods that maximize the information legitimately obtainable from the data are sought, as sophisticated methods rarely compensate for low-quality data. Determining the likelihood that development results are attributable to a particular intervention is especially difficult and DPMG evaluations use state-of-the-art methods to assess attribution as rigorously as possible.

9. Evaluation findings and conclusions are warranted by the data and analyses presented. The findings of the evaluation adhere to strict standards of evidence and inference. The evaluation thoroughly explores the implications of its findings, examines where new and prior knowledge are congruent and where they are not, and considers whether existing theories and conceptual frameworks have been strengthened or must be modified.

10. Recommendations are logical, warranted by the findings, explained with appropriate caveats, and germane to the policy concerns of the client. Making recommendations is a step for which evaluators bear particular accountability, because the resulting client decisions may have important real-life consequences and impacts. Recommendations are evidence-based and follow logically from the evaluation’s findings and conclusions. They include relevant caveats to help ensure that they are not applied to inappropriate cases or with unrealistic expectations. Recommendations are actionable and feasible, and are clearly prioritized in terms of their urgency and timing. Most importantly, they are elaborated in consultation with the client and reflect an understanding of possible institutional or other constraints to implementation.

11. The report is well written and easily understood by its intended audiences. DPMG reports are written for a wide, non-technical audience; their style is clear, straightforward, and free of technical, economic, and social science research jargon.  Abbreviations, acronyms, technical terms and footnotes are kept to a minimum, and defined and explained when used. Graphs, charts, tables and diagrams are creatively used to clarify complex ideas and illustrate findings. These have clear legends and labeling and are easily and accurately interpreted. 

12. The evaluation contains an executive summary that highlights the purpose of the evaluation, its objectives, the evaluation questions, the design and methods, the main findings, and the conclusions and recommendations. The executive summary is brief, self-explanatory, and self-contained, and is written for a non-technical reader. It provides a balanced synthesis of what DPMG evaluated, how DPMG evaluated it, and what DPMG found, including (a) the rationale, objectives, and scope of the evaluation; (b) a description of the activity being evaluated and its context; (c) a clear statement of the evaluation questions and of the criteria by which performance was assessed; (d) a short statement of the evaluation design, methods, and data used, including any limitations and their implications; (e) the key findings; (f) conclusions and an overall evaluative assessment of each evaluation question; and (g) the main recommendations.

 

Sources: Among other documents, DPMG’s Quality Standards draw on:

i) OECD/DAC. Evaluation Quality Standards;

ii) ADB. 2008 and 2009 Annual Reports on Acting on Recommendations;

iii) UNEG. Quality Checklist for Evaluation Reports. 2010. p. 6;

iv) UNEG. Standards for Evaluation in the UN System. 2005;

v) Michael Hendricks and Elisabeth A. Handley. 1990. Improving the Recommendations from Evaluation Studies. Evaluation and Program Planning, Vol. 13, No. 2, pp. 109–117;

vi) USAID. Performance Monitoring and Evaluation. TIPS 17: Constructing an Evaluation Report. 2010;

vii) CIDA. How to Perform Evaluations – Evaluation Reports. 2002;

viii) UNEG. Standards for Evaluation in the UN System. Standard 4.16. 2005;

ix) UNICEF. Evaluation Report Standards;

x) The Program Evaluation Standards: A Guide for Evaluators and Evaluation Users.