Last year, the Gordon and Betty Moore Foundation commissioned Monitor Institute to conduct a landscape assessment of measurement and evaluation practices and capabilities, and to provide recommendations to the foundation based upon its findings. The assessment was a strategic, targeted analysis whose primary purpose was to uncover innovative or potentially transferable ideas and practices from all sectors.

Effective measurement and evaluation is a core value of the foundation. Our founders, Gordon and Betty Moore, believe the foundation’s approach and work should bring about measurable, durable solutions to specific problems. An emphasis on rigorous measurement and evaluation has shaped the foundation’s selection of focus areas and grant making since its inception.

As the foundation approached its 15th anniversary, Dr. Harvey V. Fineberg—who assumed the presidency of the foundation in January 2015—wanted to understand how best to structure the foundation’s measurement, evaluation and learning function going forward, and to explore how to improve and potentially expand its practices in a manner that respects and reinforces our founders’ intentions. To help realize this vision, Monitor conducted a landscape assessment focused on understanding best practices, tools, applications and research needs in philanthropy, business, academia and the non-profit sector.

To understand best and next practices in philanthropy, Monitor interviewed and conducted research on more than 40 social sector and governmental organizations. They interviewed measurement and evaluation leaders at foundations and other organizations considered high performers, as well as field specialists with experience across a number of organizations or deep expertise in relevant areas. They organized their research within four central categories of measurement, evaluation and learning design: structure, staff, methodology and processes. Monitor also uncovered several design characteristics that seem common to high-performing units: a leader with high positional authority and broad expertise, methodological diversity, a focus on learning and an evaluation focus beyond individual grants.

As a result of this body of work from Monitor Institute, Dr. Fineberg decided the foundation should adopt a new approach to how we structure and practice measurement, evaluation and learning in our work, and this model is now being implemented across the foundation. We invite you to read below a summary of Monitor Institute’s findings, as well as a description of the foundation’s new approach to measurement, evaluation and learning.

We are excited to share these ideas with you, and we welcome your thoughts and reactions.

Overall Design Framework

Throughout the research, Monitor uncovered several design characteristics that seemed common to high-performing units. They also found a number of characteristics for which there was no one-size-fits-all, best-in-class design. For these characteristics, the aim should be to design a function that is the right fit for the organization in terms of its purpose, structure and culture.

To determine the best design for M&E within an organization, it is critical for that organization to be clear on the purpose of its measurement function and clear-eyed about its culture.

Monitor also found that structural, cultural and functional components are not fixed and can create a virtuous cycle. An organization working to strengthen its measurement and evaluation practice can add or deepen functional coverage after proving its value to program staff. Culture is also not static: leaders can, for example, shift their focus to better align incentives in the service of a stronger measurement and evaluation system.

Organizational Structure & Culture

Monitor Institute repeatedly heard from those interviewed about the importance of considering organizational culture in the design of a measurement and evaluation unit. The sometimes explicit, sometimes implicit elements of structure and culture provide opportunities for transformation, create barriers to effective implementation and affect how competing priorities are decided. Key aspects of organizational structure and culture include:

  1. Leadership commitment to data, evidence and learning. An evaluative focus can be uncomfortable for program staff, and learning activities demand time from already busy teams. Leadership support and board clarity around expectations are necessary preconditions for the successful implementation of organizational priorities.
  2. Staff skill and commitment to data, evidence and learning. The level of knowledge and experience staff have around measurement and evaluation will affect how many of the functions can be handled by program staff as opposed to measurement and evaluation specialists. Even when staff have the skills, trade-offs of time may lead to different preferences among program staff about how much of the measurement and evaluation activity they want to "own."
  3. Tolerance for risk and failure. The level of organizational acceptance of risk and failure has implications for both resistance to more evaluative measures and openness to learning. Additionally, incentives can be structured so that there are more or fewer negative consequences associated with "bad" outcomes, and so that learning from failure is or is not rewarded.
  4. Level of centralization. The autonomy of programs and authority of program leaders affect how measurement and evaluation staff interact with program staff and how these priorities are implemented at the program level. The distinctiveness of programs also impacts how uniform processes can and should be.
  5. Size. Size, as reflected in organizational staff and funding levels, affects the overall amount and complexity of the measurement and evaluation work. It also shapes constraints on additional hiring and the use of outsourcing.

Measurement and Evaluation Functions

While the roots of foundation measurement and evaluation lie in grantee evaluations, the practice now covers a much broader range of functions. This range has expanded as organizations look for better ways to get value from their data, link efforts more closely to strategic and organizational decision-making, and introduce activities at earlier points in a project life cycle. Monitor Institute identified three overarching responsibilities for a measurement and evaluation function:

  • Design. This function involves supporting initial program and initiative development and ensuring that foundation-wide quality standards are in place. It includes research for new programs and initiatives, help in developing an evaluative mindset at the outset of programs and initiatives, and the creation of standards and resources.
  • Adaptive Management. This function covers the ongoing iterations that occur throughout a project’s life cycle, including monitoring and evaluation activities as well as efforts to promote organizational learning.
  • Increasing Effectiveness of the Field. This function relates to efforts to promote strong skills and standards and, more broadly, more effective use of data in the field of philanthropy through capacity-building efforts and the creation of public data goods.

Must-Have Characteristics

The landscape assessment found that some best practices appear to hold across different organizational structures, cultures and functional configurations; these are the so-called “must-have” characteristics.

  • Authority and skills of the measurement and evaluation leader. Respect for and inclusion of measurement and evaluation practices throughout an organization are sustained by having a leader with substantial authority. This can be accomplished through formal authority that broadly matches that of program leaders, or through reporting lines (e.g., to the CEO). Formal authority matters for both signaling and decision-making reasons: having the leader be part of the organization's executive team signals to staff that the work is a high priority, and being part of key organizational decision making helps keep measurement, evaluation and learning considerations and perspectives front and center. In addition, narrow methodological expertise alone is insufficient for the measurement and evaluation leader. Regardless of the range of specific functions the unit serves, leading it requires the ability to translate technical information, stand behind the implications of uncomfortable results, and work effectively with staff across the organization. Strong soft skills are essential to play this translational role and to build trust and support for measurement and evaluation activities throughout the program areas. There is no single formula for personal leadership, but two qualities seem particularly useful: an understanding of the challenges facing program staff and an understanding of a broad set of methods and approaches to help weigh evaluation choices. The most effective measurement and evaluation leaders have both positional authority and significant personal leadership qualities.
  • Methodological diversity. Leaders in the field of measurement and evaluation have moved away from a single-minded focus on one gold standard for research (e.g., randomized controlled trials). Instead, they employ a range of methodologies and work to match the level of evidence to the need, including randomized controlled trials when looking to scale models or where appropriate for a policy audience. Even organizations strongly identified with randomized controlled trials are emphasizing smaller, more lightweight and mixed methods (i.e., combining quantitative and qualitative approaches) as well.
  • Focus on learning. The field of measurement in philanthropy has been moving for the past several years toward a greater focus on organizational and strategic learning, and a focus on learning is now firmly rooted in the philosophy and practice of best-in-class measurement and evaluation units. This movement stems from earlier disappointment with outcomes, in which an emphasis on summative evaluation did not enable the timely integration of findings into program activities and the organization more generally. As a result, best-in-class programs have moved beyond data generation for evaluative purposes to include the creation and strengthening of feedback loops between information and decision-making. These programs are also developing activities that promote the dissemination of knowledge throughout the organization. This focus on learning can take many forms, and much experimentation is occurring; a blueprint for the ideal learning system has yet to be developed. What is clear is that learning as a measurement and evaluation practice is driving efforts for rapid-cycle learning, continuous improvement and evidence-based decision making in support of world-class program development inside the organization.
  • Focus beyond individual grants. Measurement and evaluation in philanthropy began with the evaluation of grantee programs for compliance purposes, and this was the sole focus for a number of years. Many programs still direct most of their effort toward grantee evaluation. However, with the ultimate usefulness of many summative grantee evaluations called into question, and with a greater focus on learning in support of organizational and strategic decision making, the high-performing organizations interviewed are paying increasing attention to evaluation and learning efforts at the program and strategy levels. This shift is combined with stricter thresholds for when to conduct grantee evaluations. Interviewees spoke of staying attentive to organizational priorities and determining when grantee evaluations provided important opportunities to answer key questions for learning and decision-making.

Customizable Elements

For some measurement and evaluation design elements there is no one best practice; instead, a best-fit approach is needed. The successful units Monitor Institute researched chose features that were well suited to their organizational structure, culture and functional requirements.

Design Questions

Structure

  • Level of centralization – Should we use a centralized or partially decentralized M&E model?
  • Advisory committee(s) – Should we create an M&E advisory committee?
  • Resources – What should we spend on M&E? Who should control the budget?
  • Evaluative or consultative role – Should the M&E unit have primarily an evaluative or consultative role?

Staff

  • Staff number / use of consultants – How many M&E staff do we need, and when should we rely on outside consultants?
  • Leadership and staff skills – What skills should we prioritize?

Methodology

  • Contribution vs. attribution – Should evaluations focus on contribution or attribution?
  • Methodological rigor – What level of methodological rigor should we require?
  • Program methodology – Should our methodologies and levels of evidence vary by program?

Processes

  • Learning – How should we develop learning processes?
  • Strategic learning – Should we engage in strategic learning?
  • Project lifecycles – Where throughout the project lifecycle should we engage M&E?
  • Transparency – How transparent should we be, and on what subjects?

Assessment Conclusions and Next Steps at the Foundation

Measurement and evaluation is at an inflection point, with considerable interest in, and focus on, the importance of measurement for social impact. Exciting innovation is happening in the philanthropic space and in adjacent fields around data and technology, methodologies and techniques, and ways of integrating key voices. Advances coming from health, education and other socially oriented fields are relevant across program areas. There is also palpable frustration with the limitations of the field in contrast to its promise. This frustration is propelling efforts to improve quality and capabilities across the practice of measurement and evaluation, as well as the search for better ways to develop real-time, lower-cost data in the service of philanthropic decision-making.

There is considerable opportunity for foundations to take a real leadership role in measurement and evaluation. However, there are also significant pitfalls on the path to progress. The assessment outlined four considerations that represented the most common advice from interviewees:

  • Do not over-engineer the system. Measurement and evaluation is ultimately in service of organizational goals: it is a means to achieve better performance and, ultimately, greater impact. However, it takes time and resources, diverting attention from other priorities. Flexibility and lightness of touch should be watchwords. Standardize what needs to be standardized, and allow for adaptation and customization where possible.
  • Getting buy-in across programs takes time. Interviewees stressed repeatedly that, as leaders, they worked carefully and systematically to develop relationships with program staff. Getting buy-in involves gaining the trust of staff members, proving value and showing an understanding of program perspectives, even if measurement and evaluation plays a primarily evaluative role. Interviewees also stressed the importance of allowing time for new practices to be fully accepted and implemented throughout the organization. Continued, active leadership support throughout that process is essential.
  • Change itself is difficult. Because of leadership and staff turnover, as well as changes in strategy, even many of the highest-performing organizations have experienced organizational change around their measurement and evaluation function. Sensitivity to the disruption of change, even change that is directionally sound, matters.
  • Design of measurement and evaluation only gets you partway. There is no perfect design that will solve all organizational challenges, as there are always dynamic tensions to manage (e.g., between accountability and learning, or between spending more on measurement and evaluation and spending more on programs). Notably, when asked to name leading measurement and evaluation departments, field experts in a number of cases cited departments that had been quite active but had declined after the departure of their leader. Continued attentiveness to individual concerns, and ongoing clarity about prioritization from leadership and the board, help to mediate those tensions.

Conclusions from the Foundation

In response to the landscape assessment and recommendations of Monitor Institute, as well as additional feedback from staff, the foundation decided to pursue a ‘hub-and-spoke’ model for its measurement, evaluation and learning function. In addition, the foundation decided to elevate the director of evaluation and learning position to chief evaluation and learning officer, reporting directly to the president and serving on the foundation’s executive committee.

The intent of the hub-and-spoke model is to ensure that, in addition to a strong central measurement and evaluation function, the subject-matter and grant-making expertise in our programs is supplemented by program-specific measurement, evaluation and learning skills and resources. To achieve this goal, the foundation plans to embed evaluation and learning officers within each program, an arrangement that best suits the measurement, evaluation and learning needs of our foundation and keeps this important work closely aligned with the specific needs of our programs.

The foundation believes that having a dedicated leader with a fully staffed team in the measurement, evaluation and learning function is critical for us as we work to do what our founders intended with their philanthropy: “tackle large, important issues at a scale where it can achieve significant and measurable impacts.” We are excited to operationalize this new approach.

Read the full landscape assessment.
