Routine Evaluation Activities
Many foundations already do some evaluation in the course of routine proposal review and grant monitoring. They are in an excellent position to gradually expand their evaluation activities. The kind of comparative evaluation involved in the methodical assessment of proposals, together with systematic grant monitoring, can be seen as existing components of a foundation's evaluation portfolio.
This evaluation activity, which falls into the category of expert peer review, is carried out by staff and consultants and is based on review of documents, site visits, and discussions with applicants. The reviewers assess:
- the need for the project
- the appropriateness of the objectives
- the feasibility of the implementation strategy
- the qualifications of the applicant institution and project team
- the adequacy of the proposed timetable and budget
- the likelihood of raising additional funds if required
- the potential contributions or impact of the project
- the potential problems and risks
- the rationale for funding by the foundation
If a project is funded, a written critical analysis of the proposal, together with a clear written statement of the foundation's expectations for the project and the rationale for funding, can serve as the baseline for the postgrant assessment.
Grant monitoring refers to the tracking of grant progress through regular interim reports and a final report from the grantee, the reviewing of these reports by responsible foundation staff, and other communications between foundation staff and the grantee. Monitoring should alert the foundation to important changes in project objectives or implementation strategy at an early date and should be helpful in identifying problems and facilitating their timely resolution. If monitoring is to be useful for problem identification and resolution, a reasonable reporting period is every six months.
Monitoring reports can also provide the foundation with basic process indicators of the grantee's success in accomplishing measurable objectives. Some foundations have developed guidelines that tell grantees what topics their reports should cover. It may turn out that additional funds are required, or that the grantee needs technical assistance to set up a data collection system; if the monitoring process is important enough to warrant the additional expense, the foundation may decide to increase its support of the project.
If grant monitoring ends with the responsible program officer reading a final report and filing it away, the foundation loses a valuable opportunity to reflect and learn. A foundation wishing to maximize the learning opportunity from its grants will build on its grant monitoring capabilities and conduct postgrant assessments.
In a postgrant assessment, the program officer or a consultant reviews the grantee's reports and other deliverables and possibly makes a brief site visit in order to:
- assess the grantee's performance and accomplishments relative to original expectations
- reconsider the adequacy of the original plans, budget, and timetable
- reflect on what made the project successful or not
- comment on the project's potential lasting impact, if any
- identify benefits to the foundation and lessons learned
- recommend whether the foundation should provide future support to the project team, to the grantee institution, or to other similar projects
A further benefit is that foundations can add the postgrant assessment – either written or oral – to their evaluation repertoire with relatively little additional expenditure. The written critical analysis suggested as a part of the proposal evaluation at the outset of the funding process will facilitate preparing the postgrant assessment at the end. Grantee report guidelines that specify the information needed will also make it easier to prepare postgrant assessments.
When is evaluation beyond monitoring called for? Considerations to weigh in making this decision include:
- The importance of the questions that monitoring cannot answer
- The adequacy and practicality of the evaluation options available to provide answers to those questions
- The potential impact of the evaluation results for the foundation, its grantees, and the field
- The cost of the evaluation options
- The competing opportunities for the available funds
When Monitoring is Enough
For many grants, evaluation beyond routine monitoring is not recommended, because either the monitoring itself satisfies the foundation's evaluation questions or the ability to learn from further evaluation is minimal. For other grants, limited resources will suggest that further evaluation would not be cost-effective.
For most foundations, grants unlikely to warrant evaluation beyond monitoring, even if resources were unlimited, include:
- contributions to the general support of grantmaker organizations
- endowments
- good citizenship grants
- small contributions to very large undertakings
- grants intended to yield tangible products (e.g., purchase of equipment)
- start-up funding to help establish a program already demonstrated to work
- general support grants for programs that are not innovative models
- small feasibility studies
- planning grants that are expected to result in another proposal to the foundation
When Evaluation Beyond Monitoring is Desirable
The factors that signal that further evaluation may be in order include the importance to a foundation of the project, grant, or program that would be evaluated; the characteristics of the project, grant, or program that make it a good subject for evaluation; and the potential contributions of the evaluation.
Sometimes the reason for deciding to evaluate derives from the foundation's perspective:
- When the grant represents a sizable investment for the foundation
- When the project has great salience for the foundation's programs or larger goals
- When foundation staff have been especially proactive in designing and initiating the project
- When the foundation will be asked to renew funding for the project and wants to know more about the project's effectiveness and future potential than it can glean from monitoring
- When the information obtained from monitoring will not be sufficient to satisfy the foundation's accountability needs
Sometimes the main signal that evaluation would be a good idea comes from the characteristics of the project or grant that would be evaluated:
- When the project has potential to be a model and evaluation can be a tool of dissemination
- When the project has potential to have a measurable impact on a target group of people or institutions
- When the design of the project is such that a credible evaluation is feasible (that is, if important effects can be measured within a reasonable period of time)
- When the grant can be grouped with similar other grants to form a cluster, so that the grants can be evaluated together
- When the project, once in progress, experiences problems that could compromise its performance
Finally, the potential utility of the evaluation itself may be the deciding factor:
- When evaluation could improve the performance of the project
- When evaluation would enhance the impact of the project
- When evaluation results have potential to influence policymakers in the public or private sector
- When evaluation will yield information that will make an independent contribution to the attainment of the foundation's goals
Who Should Conduct an Evaluation
Evaluation activities may be conducted by foundation staff, grantees, or outside evaluators. What considerations should be taken into account in choosing among these three sources?
Foundation Staff as Evaluators
Most of the time, foundation staff perform routine grant monitoring, although some foundations use consultants to review grantees' final reports and some organize formal site visits by staff and outside experts to evaluate grantee performance.
Some staff undertake a small evaluation project, such as a mail survey of grantees, possibly in collaboration with a local consultant or with the assistance of a student intern or a research assistant. As a rule, foundations do not hire staff to carry out evaluations that require major data collection and research.
However, foundation staff can play a significant role in the design of a major evaluation by specifying the evaluation's purpose, the target audience, the key questions to be answered, and the methods that would provide the kinds of information desired, and by monitoring the evaluation's progress.
Grantees as Evaluators
Grantees' various reports can provide useful evaluation information. Progress reports can contain process information on project implementation, services provided, and people served. With some forethought, grantees can arrange to collect the necessary information from their routine record systems.
Grantees' final reports can take the form of summative evaluations based on judgment and information from program records, in which they describe how projects were implemented and the extent to which their original objectives were attained. Some foundations provide guidelines to help their grantees organize their reports in the ways that will be most informative and useful to the foundation.
Sometimes grantees build, or can be assisted to build, an evaluation component into their project workplans, covering both process and impact questions and carried out by program staff themselves, or by consultants under subcontract. These evaluations can often satisfy a foundation's needs for evaluative information. However, the foundation should recognize its own role as an important stakeholder in the evaluation, and should make sure that its information needs will be met by the built-in evaluation.
The design requirements of a complex evaluation may require more technical expertise, more research capacity, and more dedicated staff than either the foundation or the grantee can reasonably be expected to provide. It then becomes appropriate to look to an outside individual or group to conduct the evaluation.
A related consideration is generalizability. If a purpose of the evaluation is to inform the field, it may be important to use a design that will yield results that can be compared with results from evaluations of other programs or with statistics from national surveys. In such cases, outside evaluators familiar with such studies and surveys may be best qualified to conduct the evaluation.
Another consideration is the need for objectivity. If the purpose of the evaluation is to learn about the effectiveness of a new model, and if its target audiences are scientists and policymakers, the results may carry more weight with those audiences if the evaluation is conducted by an impartial outsider rather than by the staff who are conducting the program being evaluated or by a subcontractor to the grantee.
Sometimes, multiple grants are evaluated at the same time as part of an overall program evaluation. In such cases, it may be most acceptable and practical to have the evaluation performed by a neutral third party.
Reporting and Disseminating Results
Unless the evaluation project is very brief, no one wants to wait until the end to learn what the evaluators found. Interim reports on progress and findings from the evaluation are important to keep the foundation and the grantee informed. Moreover, reading such reports sometimes triggers suggestions for additional questions or topics to be covered; therefore, a schedule for regular reports should be built into the evaluation workplan. At the end, the foundation should expect to receive a complete report on the evaluation methods and findings, accompanied by an executive summary.
When audiences other than a foundation and its grantee are expected to be interested in an evaluation's results, a dissemination plan should be drawn up to target appropriate publication vehicles and to make sure that enough time is allotted to the preparation of manuscripts and to the presentation of findings at professional meetings. Many evaluators are eager to publicize their work among their peers, so this requirement is unlikely to pose a problem when outside evaluators are used.
Many foundations have developed periodic newsletters or special publications to communicate results of their grants to such audiences as policymakers, practitioners, and a foundation's particular constituencies. Some hold press conferences or convene meetings of interested persons to present study results.
A foundation's board of trustees is a special audience for evaluation results. Often the responsibility for communicating evaluation findings to this audience falls to foundation staff, who should be able to synthesize technical material and highlight its policy significance in terms that are appealing to their board.
Suggestions for Getting Started
Foundations that are new to evaluation should avoid trying to do everything at once. Rather than thinking about how to evaluate every new grant or project, it is better to begin slowly with one or two evaluation projects, both to get some experience and to discover what is truly most important for a foundation to learn.
Another suggestion is to take some time to look at evaluations carried out by or for other foundations to get a taste of what is possible.
A final recommendation for doing evaluation is not to overdo it. A foundation should know who has a valid interest in evaluation results and what kinds of evidence will be credible to these people. A foundation should avoid evaluations that are more elaborate than necessary. And a foundation should never spend more money on evaluation than it believes the results will be worth.