(Editor’s note: This column is an excerpt from Deb Ward’s latest book, Effective Grants Management, 2010: Jones and Bartlett Publishers, Sudbury, Mass., www.jbpup.com. Reprinted with permission from the publisher.)
The following information regarding evaluations was provided by Dr. Matt Rearick, an evaluator who is currently an assistant professor of education, health, and human performance at Roanoke College in Virginia. Dr. Rearick has experience conducting external evaluations for several grant-funded projects.
Q: What are some of the most common types of evaluation tools included in proposals?
1. Logic models
Logic models are general frameworks that answer four questions about a project: What are the inputs (people, places, things)? What are the activities (programs, resources, and equipment)? What are the outputs (what does the grantee expect to see based on the inputs and activities)? And what are the outcomes (what does the grantee expect to see as a result of achieving the project's goals)?
Logic models are visual and often look like a flow chart, so they are often easier to understand than a narrative description in a proposal. A proposal does not need to include a logic model to have structure or a disciplined assessment and evaluation, but a model's inherent clarity, and the discipline that goes into creating one, help everyone involved (proposal reviewers, proposal writers, stakeholders, and evaluators) comprehend the project's logic and flow. Logic models are often the first step in determining what types of evaluation tools are necessary for a project.
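For readers who keep project data in spreadsheets or scripts, the four-question framework above can also be captured as simple structured data. The following is a minimal sketch; the project details (an after-school tutoring program and all of its inputs, activities, outputs, and outcomes) are invented for illustration only.

```python
# A logic model expressed as structured data. Every project detail below
# is hypothetical, included only to show the four-part structure.
logic_model = {
    "inputs":     ["tutors", "classroom space", "laptops"],            # people, places, things
    "activities": ["twice-weekly tutoring sessions", "parent workshops"],
    "outputs":    ["40 students tutored per semester",                 # expected from inputs + activities
                   "90% session attendance"],
    "outcomes":   ["improved reading scores",                          # expected from achieving goals
                   "higher on-time homework completion"],
}

def describe(model):
    """Print the model in the inputs -> activities -> outputs -> outcomes order."""
    for stage in ("inputs", "activities", "outputs", "outcomes"):
        print(f"{stage}: {', '.join(model[stage])}")

describe(logic_model)
```

Laying the model out this way preserves the left-to-right flow of a logic model chart: each stage feeds the next, and each outcome should be traceable back to specific inputs and activities.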
2. Quasi-experimental evaluation designs
Experimental designs (i.e., experimental vs. control groups) are preferred in research. Yet, in most grant programs, this is difficult to implement for three reasons: (1) limited funds; (2) buy-in/adherence; and (3) desire on the part of the grantee to include all participants in the project, rather than excluding some individuals. As an alternative, quasi-experimental designs can be used. A grantee can use comparison groups or use participants as their own control group by pre- and post-testing them.
There are inherent problems in using a quasi-experimental design when trying to ascertain causality. However, its effectiveness for evaluating a grant program, combined with the potential for lowering costs and lessening the full burden of experimental designs, makes this type of design appealing to funders, who are often attracted to "research-oriented approaches."
3. Well-established tools, such as surveys, questionnaires, and examinations
Grantees should use well-respected and research-established surveys and exams whenever possible. These have been tested for reliability and validity and give evaluators, project staff, and funders the greatest degree of confidence when examining data for trends and significant findings.
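One common way the reliability mentioned above is quantified for a survey is internal consistency, often summarized as Cronbach's alpha. The sketch below computes alpha from the standard formula using only the Python standard library; the survey, its four items, and all responses are hypothetical.

```python
import statistics

# Internal-consistency reliability (Cronbach's alpha) for a survey.
# Each row is one respondent's answers to a hypothetical 4-item survey.
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]

k = len(responses[0])                         # number of items
items = list(zip(*responses))                 # scores regrouped by item
item_vars = [statistics.variance(col) for col in items]
totals = [sum(row) for row in responses]      # each respondent's total score
total_var = statistics.variance(totals)

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Well-established instruments publish figures like this from large samples, which is exactly why they give evaluators and funders more confidence than an untested, home-grown survey.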
4. Project-specific surveys, questionnaires, and examinations
Every grant-funded project is unique, and the established tools described in No. 3 might not capture everything that is happening in a particular project. … To increase the scope of the assessment and evaluation, consider using both well-established tools and tools that have been developed specifically for the project.
Part of the evaluation process can, and in many cases should, include interviews with project staff, participants, and other stakeholders. Interviews can be structured or unstructured. As with the tools in items 3 and 4, interviews should not be the sole evaluation tool, but should be seen as complementary to the other assessment tools being utilized.