35 Keys to Effective Evaluation

As competition for scarce grant dollars grows more intense, so does the need to make certain those dollars are spent as effectively as possible. Hence the question of how to evaluate the results of grant-supported activities has come to the forefront.

First Steps

1. Think of evaluation as a management tool. You and your grantees can use it to obtain feedback that improves programs and stimulates good planning. It also generates essential information to share with important outside audiences - e.g., policymakers, other funders and nonprofits, the media, and other constituencies.

2. Develop in-house knowledge about evaluation. You don't have to conduct your own evaluations, but you do need to put together the right evaluation arrangements and see to it that the results are put to good use. That takes some knowledge of the field.

3. Encourage grantees to develop their own abilities to monitor their work and to evaluate it themselves or have it evaluated by others. Only then will they begin to know what they are doing and be able to capitalize on that knowledge. Perhaps the first step is to persuade grantees to adopt the good recordkeeping practices essential to any effective evaluation.

4. When you do take the initiative in evaluation, find ways to make grantees your partners in the process. Let the results belong to and benefit both of you. Step back and take a broad look at your grantee's field of concern. Evaluate a cluster of grants. Avoid "kill-or-continue" evaluations dominated by the issue of renewed funding.

5. When deciding which programs to evaluate, choose the ones that are really worth it. Consider the following criteria: 

  • The importance of the ideas involved
  • Whether they are innovative
  • Whether the programs affect significant numbers of people
  • How much talent and money are at stake
  • Whether the evaluation itself has the potential to provide new intelligence

6. Make sure the people who can make the most important use of the evaluation are involved as stakeholders in planning and carrying out the evaluation.

7. Do not be overly impressed with what has been done or written so far in the evaluation profession. There's plenty of room for improvement and new approaches.

Good Practices

8. Start early. When evaluation is planned at the same time as the program itself, it can contribute to overall program design. Nothing sharpens program planning better than having to answer the basic evaluation questions: 

  • What are you really trying to do with this program?
  • What is going to happen that can tell you whether or not you have succeeded?
  • How will you know if it happens or not?

9. Don't try to evaluate everything. Try to get everybody to focus on the essential first question: "What is most important for us to find out?" A clear statement of how the evaluation is linked to the purposes and the open questions of the program should be made part of the evaluation's records. Articulating what you choose to evaluate and why is, in itself, a very useful part of the process.

10. Be flexible. Allow for change or expansion in midstream if program objectives change or evaluation data show an important new direction for inquiry. Preliminary findings may show an unexpected program result - a "side effect" that may in the end be one of the most important outcomes of the whole program experience. Such findings deserve more evaluation attention.

11. Insist on evaluations that do not, like a sundial, count only the hours that shine. Evaluations should of course not be hatchet jobs, but neither should they be valentines. It is important to suspend judgment a little and to tolerate disappointments as well as successes.

12. Evaluate at the level of the people who will ultimately be affected by the program. The most trustworthy and useful evaluations are those that get answers directly from those people, rather than from intermediary institutions and professionals.

13. See that your evaluations take the longer view. Longitudinal studies, annual reviews, and follow-ups after an initial study are much more revealing than a one-time snapshot. A cycle of interaction between programs and their evaluations can be set up: program planning, program experience, evaluation, learning, and then back to program planning.

14. Place more value on indications of behavior than on opinions. The most important part of evaluation planning is determining the best available indicators of success; the strongest indicators are those that show behavior.

15. Look not only at the quality of a program (whether it's good or bad) but also at its worth (whether it's needed).

16. Respect previous work. A good evaluation builds on what is already known.

17. Use a variety of evaluation methods for different purposes or sometimes side by side for verifying or contrasting. Combine quantitative and qualitative; get some numbers and some personal, documentary accounts.

18. Squeeze everything you can out of an evaluation. Compare, contrast, break down, and look at the impact of variables in several ways.

19. Use evaluation as a chance to bring grantees together. Evaluate clusters of grants and encourage these grantees to share ideas, information, and reactions.

Finding Evaluators and Working With Them

20. Develop relationships with sources of evaluators - preferably before you need them - and keep scouting around for new and better ones.

21. Find evaluators who are disciplined in their approach to the task, and convivial and tactful in their approach to people.

22. When possible, find an evaluator who has an identifiable personal interest in conducting the evaluation you want. This can give you a better job and often lower costs.

23. Know as much as you can about the professional interests and biases of an evaluator before you start working with him or her.

24. Make sure the evaluation looks at political issues as well as professional ones. If the goals of a program are both to provide services and to make an impact in the field, whether and how that impact happens is crucial to the evaluation. An evaluator locked into a service-delivery perspective alone may miss it.

25. Consider using an evaluation team, which will bring a diversity of viewpoints to the task.

26. Be clear with evaluators as well as with grantees about who is going to be responsible for the design, implementation, and reporting of evaluations. Consider a written agreement or contract. Agree about who owns the evaluation, who releases the reports, and how and when.

27. Make sure the people who are being interviewed or surveyed are indeed representative of the universe of people the evaluation claims to represent. How the evaluators reached these determinations needs to be made clear in the reports.

28. Insist on clarity and precision in both quantitative and qualitative evaluations. Findings need to be clearly based on the available data, and interpretations need to be clearly based on the findings. Case studies and other qualitative methods require just as much precision as methods built on numbers.

Funding

29. Budget adequate funds to do the evaluation. Mediocre evaluations lead to mediocre programs in the future. There is no easy formula for determining how much money to spend on evaluations. A modest program that shows strong signs of success, fresh ideas, and potential influence, for instance, may warrant an evaluation that costs much more than the program itself.

30. On the other hand, don't let anyone tell you that an evaluation can only be scientific or rigorous if you spend megabucks. Creative evaluators can help you find ways to keep costs down.

31. Involve several funders and pool the group's resources. The results should be better and receive more attention.

Reporting and Dissemination

32. Give reporting and dissemination the importance - and the funding - they deserve. Don't accept the first draft of the evaluation report as a sacred, all-purpose document. Chances are it could be better written and organized, the findings could be presented more informatively, irrelevant information has crept in, and pertinent information has been left out. Two reports - one a summary, the other detailed - may be needed. While it will usually be to your advantage to have the independent evaluator serve as the sole author of the report, paying a good editor to help the evaluator may be a smart investment.

33. Use oral reporting opportunities - conferences, staff meetings, and board retreats - to disseminate and discuss information from evaluations.

34. Keep track of the impact of an evaluation. This includes reactions to oral reports, who asks for copies, who responds to the reports with comments, who takes action on the basis of the report, and what happens at meetings where the evaluation is planned, reported, or critiqued. One reason it is hard to find enough money to do evaluations well is that the impacts of evaluations are so seldom documented.

35. Give everyone in philanthropy the opportunity to know what you are learning from evaluation activity. Inform foundations that you know are interested in the evaluated program's field, regional associations of grantmakers, and the Council on Foundations. All should hear about your evaluations, preferably both before and after they happen. To date there has been far too little trading and talking about evaluation experiences.