Session Title: Is There Anything Left to Say About Logic Models?
Panel Session 800 to be held in Baltimore Theater on Saturday, November 10, 12:10 PM to 1:40 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
John Stevenson,  University of Rhode Island,  jsteve@uri.edu
Discussant(s):
Bob Williams,  Independent Consultant,  bobwill@actrix.co.nz
Abstract: This session provides several perspectives on the ways logic models can be beneficial in evaluation and the limitations on their utility. Examples are provided for the federal government context, the large-scale multi-site program context, and the local context. Although there is an extensive literature on how to develop logic models (e.g., Renger & Titcomb, 2002; Rush & Ogborne, 1991) and numerous examples of their application in evaluation studies, this session will offer insights on the circumstances best suited to their use, the limits of their conceptual foundations, and alternative choices for developing and applying these diagrammatic devices. An important theme is the tension between an inductive, emergent, developmental approach and a formally imposed structure. Each can have its place, as examples in the papers will show. Methods for deciding on layout and complexity will be discussed, along with their advantages and disadvantages.
When Does Linear Logic Help?
John Stevenson,  University of Rhode Island,  jsteve@uri.edu
The advantages of applying logic models in small-scale evaluations of local-level programs may seem obvious, but there are a number of important concerns. This paper examines some advantages of using logic models in this context and some reservations about their use. Some advantages: these models can guide the development of measures for both short-term (mediating) effects and longer-term outcomes, supporting a convincing account of the program's causal effects, and they can bring various stakeholders together around the purposes and processes of a program. Some reservations: linear causal logic may not fit the understanding of important stakeholders; there may be dramatic areas of disagreement among stakeholders; an imposed logic may please no one, and inductive development of logic may seem an artificial exercise. Examples will be drawn from local evaluations, considered singly and from the perspective of state-level grant-makers trying to foster coherent local programming.
Multi-site Evaluations and Logic Models: Development Strategies, Uses, and Cautions
Debra Rog,  Westat,  debrarog@westat.com
This paper discusses the role of logic models in multi-site evaluations. Because multi-site evaluations can be implemented in a variety of ways, from those directed by a central evaluation team to highly collaborative efforts, the process of developing logic models and their role in the evaluations varies. These different development processes and roles will be reviewed, including the use of overall models and single-site models, and the use of models for program development, evaluation design, measurement development, analysis, and writing. Examples from these different types of studies, including both quantitative and qualitative multi-site evaluations, will be highlighted. Some of the ways in which models can be overused or misapplied will also be addressed.
A Developmental Approach to Using Logic Models in Evaluation
George Julnes,  Utah State University,  gjulnes@cc.usu.edu
It is well established that different evaluation methods are generally most useful at different points in the life cycle of a project. Some have noted a similar life-cycle dimension to the different uses of logic models in supporting effective evaluations. This presentation will discuss the use of logic models in a random-assignment experimental study of a policy innovation for the Social Security Administration, with an emphasis on their use in guiding the design of the evaluation, the analyses of resulting data, and the dissemination of findings.
Constructing Logic Models of Impact to Guide Evaluation Designs of Multi-level Programs
Robert Orwin,  Westat,  robertorwin@westat.com
This paper will present the logic model of impact developed for the cross-site evaluation of the Strategic Prevention Framework State Incentives Grant (SPF SIG) program. The model depicts the chain of activities that logically links funding of SPF SIG states to community and statewide outcomes, and it articulates a broader theory of impact encompassing not only the SPF SIG elements and the relationships among them, but also non-SPF factors that potentially influence the same processes and outcomes as the SPF (i.e., that threaten internal validity). The model was critical to identifying the design and data elements needed to address how the flow of state- and community-level activities will lead to systems change and epidemiological outcomes in the uncontrolled “open system” in which prevention operates, including measures of processes and outcomes, baseline status, and post-baseline contextual change, each assessed at both the state and community levels. Implications for evaluating multilevel programs in general will also be discussed.