Evaluation Methodology
Linda Morra Imas
Linda Morra Imas (Ed.D., Evaluation, University of Virginia) has over 30 years of experience managing evaluations nationally and internationally in developed, developing, and transition countries. Until her recent retirement, she was Chief Evaluation Officer and Evaluation Capacity Building Adviser for the World Bank Group and, earlier, a director at the U.S. Government Accountability Office. She is founder and co-director of IPDET, the International Program for Development Evaluation Training.

She now consults on monitoring and evaluation and provides M&E training to a range of organizations, including non-profit organizations, evaluation firms, national ministries, and bilateral aid agencies. Among her many publications is the textbook The Road to Results: Designing and Conducting Effective Development Evaluations, published by the World Bank and thus far translated into five languages. Dr. Morra Imas is also co-author of Case Study Evaluations, a World Bank publication. Her workshop on Designing and Conducting Case Studies is held annually at IPDET and as a pre-conference workshop at many evaluation venues.

Patrick Grasso
Patrick Grasso (Master's and Ph.D., Political Science, University of Wisconsin) has spent his professional career primarily in the field of evaluation: conducting evaluations, managing evaluation functions, building and maintaining evaluation partnerships, and training in evaluation methods. This work has been carried out in a number of institutional contexts, including the World Bank and other major development organizations, the U.S. Government Accountability Office (GAO), the Pew Charitable Trusts, and universities. He has served as a member of the American Evaluation Association's Task Force on Evaluation Policy, including as its Chair for the past two years.

At the World Bank, he was responsible for leading the Operations Evaluation Department (now the Independent Evaluation Group, or IEG) in overhauling its ex-post project evaluation approach, successfully guiding a team of evaluation staff in developing a new instrument for conducting project reviews. This involved redefining key ratings criteria and indicators, with a sharper focus on project outcomes as opposed to inputs and outputs. In the mid-2000s, he participated in a Bank-wide task force that further revamped and modernized the whole ex-post project evaluation system; in that capacity, he played a key role in identifying and resolving problems in the rating system and data used to assess project accomplishments. In related work, he carried out an evaluation of the Bank's annual self-report to the Board of Executive Directors on the performance of the Bank's project portfolio. The evaluation highlighted both strengths and weaknesses in this key report and made recommendations for improving the Bank's monitoring system. During his time at the World Bank, he also established a Knowledge Management program for the evaluation department and recruited staff to carry it out; it was cited as one of the most successful KM efforts at the Bank, and the function remains in use more than a decade later. In addition to consulting for the World Bank, his recent work includes developing evaluation policies for the Caribbean Development Bank and assessing evaluation use for Norad.

Christine Wallich
Christine Wallich (Ph.D., Economics, Yale University) has over 30 years of experience in the development field, including more than 20 years in senior leadership positions in international financial institutions such as the World Bank and the Asian Development Bank. She has considerable evaluation experience, having led a large number of evaluations. Her sectoral expertise covers infrastructure, energy, finance and banking, public finance, and public sector governance. She also has significant experience in post-conflict reconstruction and fragile states.

Most recently, she served as the Director for Methods and Quality, and Chief Evaluator, of the World Bank's Independent Evaluation Group (IEG), where she developed and pioneered the use of evaluation standards for independent evaluations to enhance their quality, relevance, and rigor. She also developed Good Practice Standards for evaluating private sector guarantee operations of international financial institutions, and elaborated and institutionalized project evaluation standards and protocols for MIGA's operations. She also led the international task force charged with developing an evaluation policy for the World Bank Group. As an evaluator, she managed both thematic and project-specific evaluations, including numerous evaluations of guarantee operations and of financial sector and manufacturing projects worldwide. She has also led several corporate-level evaluations assessing the institutional effectiveness of a major development organization.

Prior to this, she managed several high-profile country programs, including Bangladesh, the World Bank's second-largest concessional program, and post-conflict Bosnia. Her recent work on fragile states addressed project implementation bottlenecks and the design of conflict-sensitive country strategies. As Director for Infrastructure, Energy and Finance at the Asian Development Bank and concurrently Head of ADB's private sector arm, she managed a portfolio of public sector projects as well as ADB's portfolio of private sector investments and private equity funds.

Sue Berryman
Sue Berryman (Ph.D., interdisciplinary program in Political Economy, Social Psychology, and Sociology, Johns Hopkins University) has done significant evaluation work and has extensive expertise in the logic of evaluations, particularly in structuring hypotheses to be tested (e.g., causal chains and theories of change) and in the conditions under which stronger versus weaker conclusions can be drawn from those tests, depending on the rigor of the test methods used.

In her 35-year career, Ms. Berryman taught at the Harvard Business School, worked for 12 years as a behavioral sciences policy analyst with the RAND Corporation, and directed the Institute on Education and the Economy at Columbia University. She has subsequently had 20 years of experience in economic development, primarily as a World Bank staff member and, post-retirement, as a consultant to the World Bank, the Asian Development Bank, and USAID.

Her technical expertise lies in education systems, especially their financing, governance, management, and accountability, and their alignment with a country's skill demands in the public and private sectors. At the World Bank she conducted numerous public expenditure reviews of education systems in emerging economies, analyzed the performance of education systems of developing countries, and designed and supervised lending operations with governments. She managed and was the primary author of the World Bank's first lending strategy for education in Europe and Central Asia.

Under the auspices of the Bank’s Quality Assurance Group (QAG), for 14 years she assessed the performance of the World Bank’s active lending portfolios and analytic activities, including the initial pilots to assess the validity and reliability of the instruments developed for these assessments. In this context she evaluated hundreds of World Bank projects in terms of their likelihood of achieving their development objectives. She assessed the quality of the governance and anti-corruption aspects of projects in all sectors for the World Bank’s Governance and Anti-Corruption Board, chaired assessment teams to evaluate programs in the World Bank’s Global Programs and Partnerships, conducted assessments of the education projects in country-specific portfolios, and was a member of small panels to assess the World Bank’s Education Sector Board and the World Bank Institute. She managed the midterm review of the capacity building programs in Timor-Leste for the World Bank, New Zealand’s Ministry of Foreign Affairs and Trade, and Australian Aid. She has also conducted several ex post facto assessments for the World Bank’s Independent Evaluation Group (IEG).

Kris Hallberg
Kris Hallberg (Ph.D., Economics/Econometrics, University of Wisconsin-Madison) is an experienced development evaluation specialist with 30 years of international experience in project evaluation, industrial policy, financial sector development, and private sector development, including small- and medium-scale enterprises. Before retiring from the World Bank, she was a Lead Evaluation Officer for the Operations Evaluation Department (now IEG), where she managed evaluations of private sector development operations and non-lending services and developed methodologies for assessing the impact of World Bank activities. She led an evaluation of World Bank, IFC, and MIGA activities to improve investment climates in client countries. Since retiring, she has led a number of evaluations of public and private sector projects and programs for clients including the World Bank, Danida, the Asian Development Bank, the African Development Bank, IFAD, and DFID, among others. Recently, she provided assistance to the Evaluation Cooperation Group (ECG), whose members include the evaluation departments of the major multilateral development banks and international financial institutions. Her inputs included a stocktaking of ECG member policies and practices in evaluating public sector operations; revised Good Practice Standards for evaluating public sector operations; and a stocktaking of ECG member policies and practices in evaluating technical assistance operations.

Christopher Gerrard
Christopher Gerrard (Ph.D., Agricultural Economics, University of Minnesota, and M.Phil., Economics, Magdalen College, Oxford University) has considerable expertise in leading and conducting evaluations of agricultural policies, institutions, and related markets. He is a specialist in agricultural policy and institutional reform. Currently, he is involved in several evaluation activities drawing on his evaluation methodology expertise: i) as one of three Quality Assurance Advisers for the Independent Comprehensive Evaluation of the Scaling Up Nutrition (SUN) Movement, commissioned by the Lead Group of the SUN Movement; and ii) as the Team Leader for an Independent Evaluation of the CGIAR Research Program on Policies, Institutions, and Markets, commissioned by the Independent Evaluation Arrangement of the CGIAR.

Prior to 2013, he was a Lead Evaluation Officer and Global Programs Coordinator for the Independent Evaluation Group (IEG) of the World Bank. In this capacity he initiated and built up IEG's program of Global Program Reviews, including supervising the preparation of in-depth reviews of 25 such programs in which the World Bank had been involved.

Qualitative Methods
Andrew Bennett
Andrew Bennett (Ph.D., Public Policy, Harvard University, Kennedy School of Government) is Professor of Government at Georgetown University. He is the author, together with Alexander L. George, of Case Studies and Theory Development in the Social Sciences (MIT Press, 2005; winner of the Giovanni Sartori Award for the best book published in 2005 on qualitative research methods), and co-editor, with Jeffrey Checkel, of Process Tracing: From Metaphor to Analytic Tool (forthcoming in 2014, Cambridge University Press).

Professor Bennett is the co-founder, together with Colin Elman and David Collier, of the Institute for Qualitative and Multimethod Research, which teaches research methods to 200 Ph.D. students each June at Syracuse University. He was the first president of the American Political Science Association section on Qualitative Methods, and he has taught case study methods to graduate students in Norway, Switzerland, Chile, Sweden, Germany, Finland, Italy, and Argentina, as well as the United States. He has served as a consultant on case study research projects for the World Bank, the U.S. Department of Defense, and the U.S. Intelligence Community.

Desha Girod
Desha Girod (Ph.D., Political Science, Stanford University) is assistant professor in the Department of Government at Georgetown University. She specializes in the international and comparative political economy of developing countries, focusing on foreign aid and natural resources. Her book, Explaining Post-Conflict Reconstruction, is forthcoming from Oxford University Press. Her research also appears or is forthcoming in the American Journal of Political Science, Comparative Political Studies, Conflict Management and Peace Science, International Organization, the Journal of North African Studies, and the Quarterly Journal of Political Science. At Georgetown, Professor Girod is a member of the Executive Committee of the Center for Latin American Studies and a faculty affiliate of the African Studies Program. She teaches doctoral students on comparative methodology and on the political economy of developing countries, and undergraduate students on civil war in developing countries and on state building.

Quantitative Methods
Juan Saavedra
Juan Saavedra (Ph.D., Economics and Public Policy, Harvard University) is an expert in advanced program evaluation research designs, including experimental and observational designs, in the context of development economics and the economics of education. He has used random assignment to investigate the impacts and mediating mechanisms of vouchers for private schooling in Colombia, and is currently conducting a long-term experimental evaluation of a conditional cash transfer program for education in Colombia. He is the Principal Investigator for an ongoing randomized basic education charter school effectiveness project in Mexico and an ongoing randomized evaluation of a teacher training and targeted teaching model in elementary schools in Peru. He serves as a referee for multiple journals and organizations, including the Quarterly Journal of Economics, the American Economic Journal, the Review of Economics and Statistics, the Journal of Development Economics, the Journal of Research on Educational Effectiveness, and the International Initiative for Impact Evaluation (3ie).

Survey Design
Albert Weerman
Albert Weerman (M.A.) has expertise in the management and analysis of survey and other types of data. Mr. Weerman left the RAND Corporation in 2013 to join USC as an information technology director. Early in his career he worked at CentERdata at the University of Tilburg and as a software developer at Statistics Netherlands, where he redeveloped the software package "Statline," which is now used as the electronic databank for Statistics Netherlands. The databank contains statistical information on social and economic topics in the form of tables, graphs, and maps, and provides a user-friendly environment that allows end users to easily retrieve and manipulate statistical information. An online version of the software can be found at http://statline.cbs.nl/statweb. At RAND, he invented and was responsible for the technical development of the Multi-Mode Interviewing Capability (MMIC), which integrates various traditional modes of collecting interview data, including telephone, written, and personal interviewing. He is also working on the Metadata Repository (metadata.rand.org), a resource that facilitates the use of different datasets in comparative studies, serves as a repository of information and experience, and may serve as a library of survey questions for aging surveys.

An important activity at CentERdata was his work on SHARE, an EU-sponsored project building up the Survey of Health, Ageing and Retirement in Europe. Eventually, a pan-European interdisciplinary panel data set covering persons aged 50 and over will be created. Scientists from some 11 countries are involved in feasibility studies, experiments, and instrument development. Mr. Weerman has been responsible for the technical layout of the project.

For the project "Internet Interviewing and the HRS," he developed the Internet instrument used to interview a subset of HRS respondents (2,700) willing and able to participate in the experimental Internet study. The assignment included implementing a data-analysis tool that provides real-time access to respondents' answers.