In This Section
As different as community foundations can be from one another, they all share this: the need to know what works and, especially, what works well. The more community foundations can show how their own grants are making a difference, the more value they bring to their communities.
To know what works, community foundations must evaluate their grants. Evaluation has many benefits. It helps a community foundation assess the quality or impact of funded programs, plan and implement new programs, make future grant decisions, and demonstrate accountability to the public trust.
But how do community foundations evaluate what works? There are many possible approaches, ranging from informal, anecdotal methods to full-fledged program studies. With so many choices, it may seem overwhelming to even think about evaluation. Yet, evaluation doesn’t have to be as complicated as it sounds. As one expert put it, “Evaluation is the process of asking good questions, gathering information to answer them, and making decisions based on the answers.”1
Does this sound simple? It can be. Beyond the fancy terminology and debates over whether qualitative or quantitative assessments are better, evaluation boils down to things you already do every day: Ask questions and get answers.
Evaluation Types — What You Want to Know
There are, of course, different ways to ask questions and different reasons for asking. Let’s take a look at some common types of evaluation and what purpose they serve.2 Some evaluations are categorized according to when they are conducted. For example:
Formative evaluations provide information about how well a project is designed, planned, and implemented, with the goal of improving and refining a developing program. Grantmakers typically look at process (what happened), impact (what changed), or both, giving feedback to newer programs.
Participatory evaluations involve an evaluator working with a grantee's staff and participants. The purpose is to empower grantees by involving them in generating data to be used in the decision-making process. Instead of waiting until the grant ends, the evaluator collects the data throughout the grant period, giving the grantee the opportunity to adjust the project along the way.
Summative evaluations provide summary information about the outcomes of a project. They involve judgments of cause and effect, in particular the effectiveness and worth of a project’s activities in solving specific problems. Summative evaluations are usually conducted for accountability and decision-making purposes.
Other evaluations are determined by the kind of data being collected:
Process evaluations determine if a program is reaching the desired target group and if it is being executed in the way it was planned. It’s common to use an evaluation form to collect this data.
Outcome evaluations (sometimes called impact evaluations) assess how a program produced change. Rather than focusing on the process (what did we do?), they focus on the outcomes (what changed as a result?). Usually, the evaluation addresses harder-to-quantify questions, such as how the program made a difference in people’s lives. Choose this type of evaluation when you want to demonstrate how well grantees met their objectives, highlight an innovative model or program, or evaluate your overall grantmaking program.
Evaluation Methods — How to Collect the Data
Now that you know a little more about different types of evaluation, let’s take a look at the actual activities that evaluations involve. Some primary methods to choose from include the following:3
Grantee self-evaluation is a do-it-yourself model that works well and requires limited resources, although some see it as little more than grant monitoring. At the end or midway point of a grant, community foundations ask grantees to fill out a self-evaluation form. Sometimes the forms are followed up with an interview by a foundation staff member. This method can be a good way to build evaluation skills within grantee organizations.
Case studies are used to collect in-depth information about a program through a single participant or site. For example, a case study might be the story of one person’s experience with a program.
Expert peer review relies solely on judgment-based information. The assigned evaluator may be an individual, an advisory committee of experts, or community members who read documents, conduct site visits, and interview grantees. This method can usually be completed quickly and inexpensively. However, the evaluation depends on the knowledge, experience, and opinions of those chosen, so this method can be biased.
Focus groups encourage people to share their feelings and perceptions. Although led by a facilitator, these conversations often flow naturally and explore unexpected or unintended areas. Evaluators can qualitatively analyze areas in which participants’ responses converge and the tone or feeling that accompanies those responses.
The logic model method follows a linear, logical format that breaks down a project into various components and tracks it from before it begins through after it finishes. The logic model uses a specific set of words to describe what the project intends to achieve, the resources and processes used to conduct the work, and the immediate, mid-term, and long-term results. This form of evaluation can be complex and requires people to be trained in its technique.
Descriptive analysis uses statistics to characterize a program, its participants, and the relevant social, political, or economic environment to understand how and why a program works.
Comparison-group design is used to measure effects and attribute those effects to a project or program. For example, a group of people who receive an innovative treatment or participate in a new program are compared to a group of people who don’t. The study measures the difference between the two groups and attributes those differences to the treatment or program.
Frequently Asked Questions
What is the difference between grant monitoring and grant evaluation?
Grant monitoring requires grantees to set goals and periodically report back to grantmakers on their success or failure. It also requires grantees to perform financial reporting. Grantees submit progress and/or final reports to the foundation, which helps keep them accountable to the project goals as well as the budget. While all grants should be monitored, not every grant is worth evaluating. For projects that receive small amounts of funding, monitoring is the most reasonable expectation. An example of grant monitoring: A community foundation asks a grantee to fill out a self-evaluation form describing the grantee’s progress relative to the goals determined in the proposal and the grant agreement and how it spent the money compared to its budget.
Grant evaluation takes a more in-depth look at the program’s accomplishments, helping the foundation learn what happened because of the grant—how it made a difference. An example of evaluation: A community foundation invites grantees and community members to a focus group where it collects feedback on how the grant or program made a change in the community. From the information collected, the community foundation writes a report, has it reviewed by an expert evaluator for key findings, and uses the information to guide future grant decisions.
What should we think about when considering an evaluation?
There usually is more than one acceptable way to evaluate a given grant, project, or program, which can make it difficult to choose a direction. It’s important to think through what you want out of the evaluation up front. The form it takes—and the results it yields—will depend on the choices you make regarding the:
- purpose of the evaluation
- target audience for the evaluation
- questions to be answered
- evaluation methods to be used
- qualifications and experience of the person or group conducting the evaluation
- cost of the evaluation relative to the grant’s size
What is the most common and simple way to evaluate grantees?
Most community foundations ask grantees to fill out an evaluation form, either midway through a grant period or at its end, as a form of grant monitoring. This is a simple, cost-effective method for gathering data—plus it introduces grantees to the concept of accountability (if they aren’t already familiar with it). One difference between grant monitoring and grant evaluation is what you as a community foundation decide to do with the data you collect. You are monitoring the grant if you use the form to check how the money was spent. If you use the data to assess how the grant actually made a difference, you are moving toward evaluation.
One potential downside to self-evaluation forms is that grantees may be reluctant to admit their project’s weaknesses or may be unsure what to report. Nevertheless, when a relationship of trust between the community foundation and grantee has developed, grantees almost always appreciate participating in the evaluation process and are likely to accept and implement findings.
Keep in mind: Not all of your grants will require the same level of evaluation, and your criteria will vary from grant to grant. Some community foundations don’t ask for evaluation forms for grants below a certain dollar level.
What should a grantee evaluation form look like?
What the actual form looks like doesn’t matter. You can choose from dozens of samples or design your own. What does matter is that you are asking the right questions and that you track the answers. The key is to define—or have the grantee define—what determines “success” for a particular grant or initiative and follow up from there.
Ask grantees up front: What is the change that you want to see happen as a result of the project? How will you know if it happens?
Here’s some advice from your foundation colleagues:
- “On the evaluation form, we copy what they wrote in their grant proposal, telling them: ‘This is what you said you would do, this is what you said would take place, and this is how you said you would measure it.’ Then we ask them to write a narrative comparing the project against what they said would happen.”
- “Be sure to ask grantees about the pitfalls they encountered as well as the successes. Remind them that the ‘negative’ information is just as valuable as the positive.”
- “If foundations have specific expectations of grantees for evaluation or monitoring, they should be cognizant that there may be a cost associated with the activity and be prepared to pay for it as part of the grant.”
A good rule for evaluation forms is to keep them simple. The fewer—but more pertinent—questions you ask, the more useful the answers grantees will give you.
For sample evaluation forms, contact your regional association of grantmakers (find yours at www.givingforum.org) or email the Council at email@example.com.
When should we conduct a formal evaluation?
Evaluations can be simple or elaborate, depending on the size and scope of the grant as well as on your community foundation’s goals and capacity. In simple evaluations, community foundations ask grantees for a final report and perhaps interview the grantee or their clients. If a community foundation has the resources, it might conduct a formal evaluation—particularly if the grant is part of a large-scale initiative.
Formal evaluations can be costly and time-consuming, however. According to one community foundation colleague, the only time a community foundation should do a formal evaluation is for a large, multi-year strategic grant—such as a major grant to a nonprofit trying to bring about social change.
Why should you bother with formal evaluations at other times? “Evaluation is hard work, and the results can be arbitrary,” said a community foundation program director. “You have to be committed at least one, two, or even three years before you are going to see real outcomes from a grant.”
When considering when to conduct a formal evaluation (or if it’s even appropriate for your foundation), it helps to set an evaluation policy to guide you. For example, some community foundations only evaluate major initiatives or grantees that received more than a certain dollar amount.
Who conducts an evaluation?
The board establishes policy on how the foundation will conduct its evaluations and who will be responsible for the task—e.g., the staff, the board, community volunteers, or an outside evaluator. For simple evaluations, the staff will typically be in charge. If you have the resources, you might work with an outside evaluator who can help guide the process. For larger initiatives, you might contract an outside expert to manage the whole project.
Outside evaluators can be objective and may have technical skills and resources that you don’t have in house. However, they can be costly. Fees will vary based on the type and scope of evaluation, but expect to pay between $10,000 and $100,000. Check with your local university research department or ask your foundation colleagues for names of consultants, nonprofit research institutes, or experts in your field of funding.
What data can we collect in an evaluation?
The kind of data you collect depends on the type of project and the purpose of your evaluation. For some grants, you might look at quantitative data on how many people were served and how that made a difference. With others, you might examine the qualitative results of the grant, answering questions such as: “How do clients feel about their circumstances because of the grant?” In any case, criteria should relate to the grant or program purpose and, again, they should be defined at the grant’s onset. (For grant monitoring purposes, you will look at the grant amount given and how closely the grantee adhered to the proposed budget.)
Technology makes it much easier today for community foundations to evaluate grants not only locally but regionally and even nationally. Community foundations can now use online surveys and data tabulation tools, among other resources, making evaluation more accessible and immediate, with less labor and lower costs than in the past.
What do we do with the data once we get it?
Again, this will depend on the type and purpose of the evaluation, the audience, and your own mission and goals. With any evaluation, though, you will use the data to determine if the community foundation’s investment (i.e., funding) was worth it, and if so, how. Here are some examples of how you might use the data:
- Create a grant summary in which you report highlights from the grant: what was awarded, what it accomplished, what was successful about it, what could be improved.
- Report the evaluation results to the board to help inform their future grant decisions.
- Include the grant summary in newsletters to donors and the community.
- Include the grant summary on your website, in your e-newsletters, and in your annual report.
- Create and distribute a news release announcing the grant or initiative results.
- Give the data to an outside expert to draw conclusions.
- Commission a research paper based on the evaluation results.
- Share the evaluation process and results (warts and all) with the community foundation field; sometimes, admitting where you went wrong is the most valuable information.
- Assess how cost effective it was for your foundation to conduct the evaluation.
How can we evaluate the impact of our entire grantmaking program on our community?
It used to be that, if community foundations evaluated their work at all, they evaluated only grants. Now, with changing roles and increased accountability, many want to show how all of their activities make an impact—their programs, leadership, services, and organizational effectiveness. But how should they go about doing it?
Even large private foundations with greater resources struggle with this question. The short answer is: It simply can’t be done—or rather, it can’t be done simply.
According to one community foundation colleague, “A better approach for community foundations may be to redefine ‘impact.’ Rather than focusing on grantmaking effectiveness, it’s more reasonable and useful to focus on mission effectiveness. In other words, instead of asking what difference you are making in the area of [education, etc.], the question becomes how effective[ly] you are meeting the intent of your own mission and purpose.
“Shifting to a mission focus will force you and your board to recognize that you simply can’t meet every need in your community to the degree you would hope. Ultimately, it’s about knowing your community foundation’s place and recognizing your limitations as well as your strengths.”
Other community foundations emphasized the importance of knowing what you can (and cannot) handle. “There are so many levels of evaluation—it all depends on the capacity of your foundation as far as what you can take on. You simply can’t evaluate everything at once; it’s important to prioritize. Start by evaluating individual grants and work your way up from there.”
When developing an evaluation process, consider these questions:
- What do you want to know one, two, three, or five years from now that you don’t know today?
- How do you get the board’s commitment to the evaluation process?
- How do you know you have the in-house capacity to conduct (or oversee) an evaluation?
- What is your budget for such an endeavor?
- Which projects are the most important to measure?
- What are the three most important things you want to learn from a particular grant/initiative?
- Who will you share your evaluation findings with?
- How should you receive evaluation results?
- How will you use the information from the evaluation?
- How can you involve grantees in the evaluation process?
Resources
Evaluation Techniques: A Series of Brief Guides. Each guide explains the basics of one evaluation technique and how some grantmakers are applying it. Visit www.grantcraft.org.
Foundations and Evaluation: Contexts and Practices for Effective Philanthropy. Jossey-Bass, 2004. A good read for both newcomers to evaluation and those with more experience. Visit www.josseybass.com.
Grantmakers for Effective Organizations. GEO’s mission is to maximize philanthropy’s impact by advancing the effectiveness of grantmakers and their grantees. Visit www.geofunders.org.
GrantBenefit. A web resource for demonstrating grantmaking impact, sponsored by the Community Foundations of Canada. The site includes many useful tools on evaluation. Visit www.grantbenefit.org.
Learning Together: Collaborative Inquiry among Grant Makers and Grantees. This guide explores an increasingly popular method called “collaborative inquiry.” Grantmakers define the practice, consider potential benefits, and grapple with common challenges. Visit www.grantcraft.org.
When and How to Use External Evaluators. This publication from the Association of Baltimore Area Grantmakers helps in assessing whether, when, and how program evaluations should be done and why and how to manage external evaluators. Visit www.abagmd.org.
W.K. Kellogg Foundation Evaluation Handbook. This book provides a framework for thinking about evaluation as a relevant and useful program tool. Visit www.wkkf.org.
For further information email firstname.lastname@example.org or call 703-879-0600.