Grant Evaluation Approaches and Methods

One of the greatest challenges in evaluation is that there is usually more than one acceptable way to evaluate a given grant, project, or program.

The form that an evaluation takes and the products that it yields will depend on choices made about the following issues:

  • The purpose of the evaluation
  • The target audience for the evaluation
  • The evaluation questions to be answered
  • The evaluation methods to be used
  • The qualifications and experience of the person or group conducting the evaluation
  • How much to spend

Purposes and Priorities

A foundation must identify its greatest needs and the ways it, or others, will use evaluation results, and then use this information to set evaluation priorities. Most foundations would probably assign a high priority to accountability and to improving their abilities as grantmakers. Beyond this, the reasons for supporting evaluation will depend on the foundation's own goals and programs.

The Audience

When deciding whether and how an evaluation ought to be undertaken, a foundation should also decide on the desired audience for the findings. Possible audiences include foundation trustees and staff, grantees, other program funders, policymakers, the field at large, and the general public. The information needs of these groups can differ considerably, and the costs of obtaining certain kinds of information (notably, the kinds produced by scientifically rigorous evaluation) can be high.

Evaluation Questions

Evaluations are distinguished by the nature of the questions they attempt to answer. The types described below are not mutually exclusive; many evaluation projects combine elements of more than one.

Needs Assessments. These evaluations verify and map the extent of a problem. They answer questions about the number and characteristics of the targeted institutions or individuals. Needs assessments can help design a new program or justify continuation of an existing program.

Monitoring. These activities produce regular, ongoing information that answers questions about whether a program or project is being implemented as planned. They also identify problems and facilitate their resolution in a timely way.

Formative Evaluations. These evaluations answer questions about how to improve and refine a developing or ongoing program. Formative evaluation usually is undertaken during the initial, or design, phase of a project. However, it also can be helpful for assessing the ongoing activities of an established program. Formative evaluation may include process and impact studies. Typically, the findings from formative evaluations are provided as feedback to the programs evaluated.

Process Evaluations. Studies of this kind are directed toward understanding and documenting program implementation. They answer questions about the types and quantities of services delivered, the beneficiaries of those services, the resources used to deliver the services, the practical problems encountered, and the ways such problems were resolved. Information from process evaluations is useful for understanding how program impact and outcome were achieved and for program replication. Process evaluations are usually undertaken for innovative service delivery model projects, where the technology and the feasibility of implementation are not well known in advance.

Impact or Outcome Evaluations. These evaluations assess the effectiveness of a program in producing change. They focus on the difficult questions of what happened to program participants and how much of a difference the program made. Impact or outcome evaluations are undertaken when it is important to know how well a program’s objectives are being met, or when a program is an innovative model whose effectiveness has not yet been demonstrated.

Summative Evaluations. Summative evaluations answer questions about program quality and impact for the purposes of accountability and decision making. They are conducted at the conclusion of a project or program and usually include a synthesis of process and impact or outcome evaluation components.

Evaluation Methods

The following are some primary evaluation methods, presented in the context of the choices described above.

Expert peer review relies on judgment-based information. The reviewer may be an individual or a committee. The review may consist of reading documents; conducting site visits and interviews with project staff, participants, or other individuals; or both. The benefit of this approach is that it can be done quickly and at low cost. The hazard is that it depends entirely on the knowledge, experience, and viewpoints of the experts chosen, and so runs the risk of bias.

Data-based evaluation relies on systematically collected quantitative or qualitative information. A descriptive analysis, for example, uses descriptive statistics to characterize a program, its participants, and attributes of the relevant social, political, or economic environment in order to understand how and why a program works. A case study makes extensive use of descriptive analytic methods.
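
As a simple illustration, the sketch below (written in Python with the pandas library, one of many possible tools) shows how descriptive statistics might summarize a program's participants and their use of services. The records and field names are entirely hypothetical and are included only to make the idea concrete.

    import pandas as pd

    # Hypothetical participant records; field names are illustrative only.
    participants = pd.DataFrame({
        "age": [34, 41, 29, 52, 38, 45],
        "sessions_attended": [8, 12, 5, 10, 9, 7],
        "satisfaction_1_to_5": [4, 5, 3, 4, 5, 4],
    })

    # Counts, means, spread, and quartiles characterize who participated
    # and how services were used.
    print(participants.describe())

    # Average service use is one simple descriptive measure.
    print("Mean sessions attended:", participants["sessions_attended"].mean())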

A comparison-group design is called for when it is important to measure effects and to attribute those effects to a project or program. In such designs, a group of people or institutions that receive an innovative treatment or participate in a new program is compared with a similar group that does not. Differences in prespecified measures of impact or outcome between the two groups are attributed to the intervention.
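
To make that logic concrete, here is a minimal sketch in Python (using the scipy library) of how the difference in a prespecified outcome measure between a treatment group and a comparison group might be estimated and tested. The outcome scores are made-up values for illustration, not data from any actual program.

    from scipy import stats

    # Hypothetical prespecified outcome scores; values are illustrative only.
    treatment_group = [72, 68, 75, 80, 66, 74, 79, 71]   # received the program
    comparison_group = [65, 70, 62, 68, 64, 69, 61, 66]  # did not receive it

    # The estimated program effect is the difference in group means.
    effect = (sum(treatment_group) / len(treatment_group)
              - sum(comparison_group) / len(comparison_group))
    print(f"Estimated effect: {effect:.1f} points")

    # A two-sample t-test indicates whether the difference is larger than
    # chance variation between similar groups would explain.
    t_stat, p_value = stats.ttest_ind(treatment_group, comparison_group)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

In practice, the credibility of such a comparison rests on how similar the two groups really are; that is why these designs require more data collection, as noted in the cost factors below.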

Who Conducts the Evaluation?

Evaluation activities may be conducted by the program itself, foundation staff, one or more outside experts, an independent grantee or contractor, or a combination of these.

How Much to Spend

The cost of an evaluation project can range from a few hundred dollars for an expert-judgment assessment of a completed research grant to a few million dollars for a randomized controlled experiment of an innovative service program at multiple sites. Between these extremes, many evaluations can be conducted for less than $10,000, such as a visiting-committee expert review involving travel to several program sites; a descriptive study of program clients' characteristics, use of services, and satisfaction; a telephone survey of grantees; or a study of the number of publications yielded by a research program. A descriptive evaluation of a program that entails primary data collection at one site might range from $10,000 to $50,000. Process and impact evaluations of programs in more than one community requiring data collection from individuals at two or more points in time might cost between $100,000 and $300,000.

Here are some examples of factors that can increase costs:

  • A desire to attribute causal impact to the program, which means using a comparison-group design (and hence more data collection)
  • Programs that target whole communities rather than specific groups of individuals
  • Multi-site rather than single-site programs
  • Programs that try to make relatively small reductions in problems, so that evidence of impact is hard to discern
  • A need to collect primary data, when suitable records or published statistics are not available
  • Designs that require data collection in person
  • Designs that require collecting data at multiple points in time
  • A need for data that must be collected through highly technical procedures
