
Session Title: Cost Benefit Analyses in and of Evaluation
Multipaper Session 632 to be held in Liberty Ballroom Section A on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Sarah Heinemeier,  Compass Consulting Group,  sarahhei@mindspring.com
Abstract: One reason evaluations are typically under-funded is that the cost savings evaluation activities can generate are not clearly identified. Cost benefit analyses of evaluation activities demonstrate the economic value of evaluation and highlight the importance of focusing evaluation activities on rigorous methodologies and on the collection of data relevant for decision making. Economic methodologies are becoming increasingly prominent as evaluation broadens from a traditionally retrospective framework to a future-oriented, prospective framework. The prospective framework encourages evaluators and clients to collect data relevant for decision making; cost and benefit data are often included and used to investigate and compare the effectiveness and efficiency of program alternatives. To promote the use of economic methodologies as standard practice, this panel session will discuss two aspects of cost benefit analysis: its application to evaluation itself and its application within evaluation.
The Costs and Benefits of Conducting Evaluations
Sarah Heinemeier,  Compass Consulting Group,  sarahhei@mindspring.com
Amy Germuth,  Compass Consulting Group,  agermuth@mindspring.com
Anne D'Agostino,  Compass Consulting Group,  anne-d@mindspring.com
This paper presentation will share the costs and benefits of conducting evaluation for three projects. In the first project, a cost benefit analysis (CBA) was performed on an evaluation of a community health project; in the second, on a multi-year evaluation of an early childhood organization; and in the third, on a comprehensive evaluation of a workforce development program. In each case, an economic benefit was realized from evaluation activities; not in all cases, however, did the economic benefits exceed the costs. The authors will present their methodology (Boardman et al.'s nine steps) and discuss key CBA challenges, including the need to plan for adequate and accurate cost and benefit data. Key recommendations include suggestions for including cost-benefit estimates in evaluation reporting as a measure of evaluation's merit in improving efficiency and program effectiveness.
Integrating Cost Benefit and Effectiveness Analyses Into Comprehensive Evaluations
Sarah Heinemeier,  Compass Consulting Group,  sarahhei@mindspring.com
There are clear methodologies for conducting cost benefit and effectiveness analyses, and they often overlap with traditional evaluation methodology. Boardman et al. (2005) established nine steps for conducting cost benefit and effectiveness analyses, several of which overlap with traditional evaluation methods, such as identification of stakeholders, full delineation of program benefits, and creation of criteria for assessing value. This paper will present these basic steps in tandem with traditional evaluation methods and provide a case study of the integration of economic evaluation techniques into traditional evaluation activities. The case study focuses on a cost effectiveness comparison of similar early childhood education and child care activities provided by multiple organizations. The utility of cost effectiveness analyses as complements to traditional techniques will be highlighted; limitations and notes of caution regarding the use of cost data will also be discussed.
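For readers unfamiliar with the mechanics behind such comparisons, the sketch below shows one calculation that typically follows from steps like Boardman et al.'s: discounting monetized costs and benefits to present value and comparing them. The figures, time horizon, and discount rate are invented for illustration and are not drawn from the case study.

# Illustrative cost-benefit comparison (hypothetical figures, not case-study data):
# discount yearly costs and benefits to present value, then compute the net
# present value and benefit-cost ratio of an evaluation.
costs = [40000, 15000, 15000]     # evaluation costs in years 0, 1, 2 (assumed)
benefits = [0, 30000, 60000]      # monetized benefits attributed to the evaluation (assumed)
rate = 0.05                       # annual discount rate (assumed)

def present_value(flows, rate):
    # Discount a list of yearly cash flows back to year 0.
    return sum(flow / (1 + rate) ** year for year, flow in enumerate(flows))

pv_costs = present_value(costs, rate)
pv_benefits = present_value(benefits, rate)
print("Net present value:", round(pv_benefits - pv_costs))
print("Benefit-cost ratio:", round(pv_benefits / pv_costs, 2))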

Session Title: Program Theory and Theory-driven Evaluation TIG Business Meeting and Panel: The Use of Evaluation to Promote Learning: A Theory Based Perspective
Business Meeting with Panel Session 633 to be held in Liberty Ballroom Section B on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
TIG Leader(s):
Katrina Bledsoe,  Planning, Research and Evaluation Services Associates Inc,  katrina.bledsoe@gmail.com
Lea Witta,  University of Central Florida,  lwitta@pegasus.cc.ucf.edu
Chair(s):
Katrina Bledsoe,  Planning, Research and Evaluation Services Associates Inc,  katrina.bledsoe@gmail.com
Discussant(s):
Craig Thomas,  Centers for Disease Control and Prevention,  cht2@cdc.gov
Abstract: Theory-based evaluations are conducted in a variety of settings, ranging from formative to summative. Evaluators engage in theory-based evaluation (TBE) for a variety of reasons, but when those reasons are explored at a deeper level, they point to a common outcome: providing a forum that promotes learning on the part of the programs, their associated stakeholders, or both. This session highlights the learning that often occurs when using the TBE approach.
A Theory-based Evaluation Case Study: Learning About Teaching About Learning and Teaching
John Gargani,  Gargani & Company Inc,  jgargani@berkeley.edu
I present a case study that describes three consecutive evaluations of a teacher professional development program--a small quasi-experiment, a small experiment, and a large experiment. Teacher professional development programs are based on 'long-chain' theories that describe how a program's interaction with teachers is believed to impact students who are never directly served by the program. With each evaluation, the program theory evolved to reflect new ideas about teaching and learning. I describe how the program theory helped structure the evaluations, and how the evaluations in turn helped restructure the theory. I argue that in this case, and in others with complicated program theories, validating a program theory is an unrealistic goal. Nonetheless, a theory-based approach has great utility, supporting learning and providing a coherent, consistent, and rational basis for the program designs.
Theory-based Evaluation Promotes Learning About Cultures: Examples From Three Evaluations Focused on Ethnic Communities
Katrina Bledsoe,  Planning, Research and Evaluation Services Associates Inc,  katrina.bledsoe@gmail.com
Cultural competency in evaluation is required not only of the evaluator but also of the evaluation approach. This paper discusses the use of a theory-based approach to promote cultural learning, drawing on evaluation work with three separate ethnic communities. I will demonstrate how a theory-based approach has been helpful in articulating unique and previously unknown cultural mores to the program designers, as well as in highlighting universals shared across majority and minority cultures.
What do we Learn From Program Theory?
Stewart I Donaldson,  Claremont Graduate University,  stewart.donaldson@cgu.edu
This paper focuses on understanding the usefulness of program theory for promoting learning in evaluation settings. Extraction of program theory can appear to be superfluous to an evaluation, until one considers the type of learning and knowledge that such theory can provide. In this paper, I discuss why delineating program theory is a necessary aspect of the evaluative process. I also provide examples of how program theory development and exploration can lead to a greater understanding of the program, as well as of the evaluative process, on the part of stakeholders. This discussion will dovetail with some of my thoughts concerning the expansion of TDE into a program theory-driven evaluation science in the 21st century.

Session Title: What is Systems Thinking?
Panel Session 634 to be held in Mencken Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Derek A Cabrera,  Cornell University,  dac66@cornell.edu
Abstract: Evaluation is one of many fields where "systems thinking" is popular and is said to hold great promise. However, there is disagreement about what constitutes systems thinking. Its meaning is ambiguous, and systems scholars have made diverse and divergent attempts to describe it. Alternative origins include: von Bertalanffy, Aristotle, Lao Tsu or multiple aperiodic "waves." Some scholars describe it as synonymous with systems sciences (i.e., nonlinear dynamics, complexity, chaos). Others view it as a laundry list of systems approaches. Within so much noise, it is often difficult for evaluators to find the systems thinking signal. Recent work in systems thinking describes it as an emergent property of four simple conceptual patterns (rules). For an evaluator to become a "systems thinker," he or she need not spend years learning many methods or nonlinear sciences. Instead, with some practice, one can learn to apply these simple rules to existing evaluation knowledge with transformative results.
The Popularity and Promise of Systems Thinking
Laura Colosi,  Cornell University,  lac19@cornell.edu
There are many ways to think about systems thinking. Some scholars view it as a specific methodology, such as system dynamics, while others consider it a 'plurality of methods' (Williams & Imam, 2006). Some see systems thinking as systems science, others as general systems theory, and still others as a social movement. We propose that systems thinking is conceptual, because changing the way we think involves changing the way we conceptualize. That is, while systems thinking is informed by systems ideas, systems methods, systems theories, the systems sciences, and the systems movement, it is, in the end, differentiated from each of these.
Patterns not Taxonomies
Derek A Cabrera,  Cornell University,  dac66@cornell.edu
Systems thinking is often considered an unwieldy agglomeration of ideas from numerous intellectual traditions. To put some workable limits on this mass of systems theories, we have chosen to define the systems thinking universe as all of the concepts contained in three broad and inclusive sources (Midgley, Francois, Schwartz). By defining the systems universe, one can then begin to think about what features are essential for membership and thereby arrive at a less ambiguous description of systems thinking. Scholars who have attempted to describe systems thinking have often taken a pluralistic approach and offered taxonomic lists of examples of systems thinking. We propose that the question 'what is systems thinking?' cannot be answered by a litany of examples of systems thoughts (or methods, approaches, theories, ideas, etc.). Such a response is analogous to answering the biologist's question 'what is life?' with a long list of kingdoms, phyla, classes, orders, families, genera, and species. A taxonomy of the living does not provide an adequate theory of life. Likewise, a taxonomy of systems ideas, even a pluralistic one, does not provide an adequate theory of systems thinking.

Session Title: Evaluating Online Training for Disaster and Emergency Preparedness
Multipaper Session 635 to be held in Edgar Allen Poe Room  on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Elizabeth Ablah,  University of Kansas School of Medicine,  eablah@kumc.edu
Evaluating Pandemic Influenza Preparedness: The Contribution of an Online Short Course to Local Health Department Preparedness in North Carolina
Presenter(s):
Molly McKnight Lynch,  University of North Carolina, Chapel Hill,  mlynch@rti.org
Richard Rosselli,  University of North Carolina, Chapel Hill,  rrosselli@unc.edu
Kristina Simeonsson,  East Carolina University,  kristina.simeonsson@ncmail.net
Mary Davis,  University of North Carolina, Chapel Hill,  mvdavis@email.unc.edu
Abstract: Measuring the contribution of training programs to increased community preparedness is difficult because criteria for preparedness often lack definition. A pilot online course on pandemic influenza was offered to North Carolina local health department staff in fall 2006. The evaluation, guided by the RAND public health preparedness logic model, linked course activities to participant functional capabilities, which are actions public health workers would take during a pandemic response. Evaluation measures included a retrospective pre-test/post-test design that measured participant confidence to perform eight key functional capabilities and pre- and post-course knowledge assessments. Thirty-seven participants representing 36 health departments completed the course. Evaluation results revealed a significant increase in participant knowledge. Participant confidence to perform specific functional capabilities related to a pandemic response significantly increased for all eight measured capabilities. Nearly two-thirds of course completers plan to modify their pandemic influenza response plans based on information learned in the course.
Emergency Preparedness for Hospital Clinicians: Multi-state Evaluation for Online Modules
Presenter(s):
Elizabeth Ablah,  University of Kansas School of Medicine,  eablah@kumc.edu
Leslie Horn,  Columbia University,  lah2110@columbia.edu
Kristine Gebbie,  Columbia University,  kmg24@columbia.edu
Abstract: Six online modules were developed by the New York Consortium for Emergency Preparedness Continuing Education to train hospital clinicians in various aspects of emergency response based on their roles within their clinical setting. Evaluation of the new modules included a competency-based online evaluation of the modules' contents and an automatically generated and distributed three-month follow-up evaluation. Participants registered electronically, providing basic information used to determine which evaluation tools they would complete. Based on registration information, participants from a pilot state were directed to an identical 10-item knowledge-based pre-test and post-test; an evaluation was completed by all participants upon finishing a module. E-mail addresses collected at registration facilitated the dissemination of follow-up evaluations. Using registration data to tailor evaluations to participants' characteristics enables evaluators to evaluate multiple aspects of a single program simultaneously and streamlines follow-up evaluations.

Session Title: Getting To Outcomes at the Federal, State, County, and Local Levels: Session II
Panel Session 636 to be held in Carroll Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Abraham Wandersman,  University of South Carolina,  wandersman@sc.edu
Catherine Lesesne,  Centers for Disease Control and Prevention,  ckl9@cdc.gov
Abstract: Getting To Outcomes (GTO) is an approach to help practitioners plan, implement, and evaluate their programs to achieve results. The roots of GTO are traditional evaluation, empowerment evaluation, continuous quality improvement, and results-based accountability. GTO uses 10 accountability questions; addressing the 10 questions involves a comprehensive approach to results-based accountability that includes evaluation and much more. It includes: needs and resource assessment, identifying goals, target populations, desired outcomes (objectives), science and best practices, logic models, fit of programs with existing programs, planning, implementation with fidelity, process evaluation, outcome evaluation, continuous quality improvement, and sustainability. GTO workbooks have been developed in several domains (substance abuse prevention, preventing underage drinking, positive youth development) and are currently under development in several others (preventing teen pregnancy, preventing violence, emergency preparedness). The papers in this panel will show how GTO is being used at the federal, state, county, and local levels.
Getting to Outcomes and Systems of Care For Child and Family Mental Health Services
Duncan Meyers,  University of South Carolina,  meyersd@gwm.sc.edu
Greg Townley,  University of South Carolina, 
David Asiamah,  University of South Carolina, 
Sheara Fernando,  University of South Carolina, 
David Osher,  American Institutes for Research,  dosher@air.org
Systems of Care (SOCs) are comprehensive service delivery models that support children and adolescents with severe emotional disturbances (SED) and their families with an array of community-based resources tailored to their unique strengths and needs. While federally-funded SOCs are guided by a philosophy of how care should be provided, local sites must develop innovations at the local level to reflect the context of their community while simultaneously adhering to national requirements. In an effort to provide a process for planning, implementing, evaluating, improving, and sustaining an SOC initiative, the Getting to Outcomes (GTO) framework is being crosswalked with SOC philosophy in a collaborative effort among diverse professionals (e.g., local SOC evaluators, national technical assistance staff, University collaborators). This session will: (a) describe ways in which the GTO framework complements SOC philosophy; (b) describe the process of crosswalking GTO and SOC philosophy; and (c) discuss future directions for this initiative.
Getting to Outcomes in Local Systems Transformations
Rusti Berent,  Children's Institute,  rberent@childrensinstitute.net
Jody Levinson-Johnson,  Coordinated Care Services Inc,  jlevinson-johnson@ccsi.org
Nowadays staff and stakeholders in public and private provider agencies, schools, and school districts often have the desire and motivation to be partners in the evaluation of their local health and mental health systems. Regardless of their experience with evaluation, these individuals share recognition of the value of partnering with evaluators to implement frameworks and develop strategies to document where they are and where they are going. This presentation examines a local community's readiness for and adoption of the Getting to Outcomes framework and its application to the evaluation of a system transformation. The stakeholders include youth, parents, teachers, administrators, and health, mental health and other child serving providers. Implementing the GTO process begins with gaining buy-in, assessing capacity and readiness, and identifying and building upon existing expertise. We illustrate how and why GTO is a flexible framework that is making a positive difference in getting to and sustaining outcomes.

Session Title: Ex Ante Evaluation: Methods for Estimating Innovation and Other Research Outcomes
Multipaper Session 637 to be held in Pratt Room, Section A on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
George Teather,  Independent Consultant,  gteather@sympatico.ca
Ex Ante Portfolio Analysis of Public R&D Programs for Industrial Technologies in Korea: Practices at the Korea Institute of Industrial Technology Evaluation and Planning
Presenter(s):
Yongsuk Jang,  George Washington University,  jang@gwu.edu
Jongman Park,  Korea Institute of Industrial Technology Evaluation and Planning,  jmpark@itep.re.kr
Abstract: Korea Institute of Industrial Technology Evaluation and Planning (ITEP) is in charge of planning and evaluating the lion's share of Korean public investments in research and development for industrial technologies. One of its prominent activities is to carry out portfolio analysis before soliciting research and development proposals in order to fine-tune the technical portfolios of public R&D programs according to national innovation strategies and priorities. This paper will examine the overall scheme, specific procedures, and methodologies adopted for this ex ante portfolio analysis. From the past few years' experiences, it also will discuss the contributions, limitations, obstacles, and challenges of this ex ante evaluation practice at ITEP.
Impact Evaluation in Preliminary Feasibility Analysis of National R&D Programs
Presenter(s):
Jiyoung Park,  Korea Institute of Science and Technology Evaluation and Planning,  jypark@kistep.re.kr
Abstract: Preliminary feasibility analysis of national R&D programs is performed in Korea to decide budget distribution. The purpose of a preliminary feasibility analysis is to verify the feasibility of large public R&D programs through technical, policy, and impact analysis. Recommendations are derived from the analysis through the Analytic Hierarchy Process (AHP), and approval or rejection of newly proposed R&D programs depends largely on the result. To establish the preliminary feasibility analysis system, guidelines were developed for each R&D program category: industrial R&D programs, public health and welfare R&D programs, and basic R&D programs. In this study, the methodologies used to measure the impacts of each R&D program are introduced. The impact evaluation measures economic, social, and technological benefits and appropriateness, and various methodologies are employed to assess the impact of each proposed R&D program.
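As a sketch of the arithmetic behind an AHP recommendation, the fragment below derives priority weights and a consistency check from a single hypothetical 3x3 pairwise comparison matrix; the criteria, judgments, and code are illustrative and are not taken from the KISTEP guidelines.

import numpy as np

# Hypothetical AHP pairwise comparison matrix for three appraisal criteria
# (say, technological, policy, and economic feasibility). Entry [i, j] states
# how much more important criterion i is than criterion j; values are invented.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights: principal eigenvector of A, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights = weights / weights.sum()

# Consistency ratio (Saaty): values below roughly 0.10 indicate coherent judgments.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58   # 0.58 = random consistency index for a 3x3 matrix
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))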

Session Title: Needs Assessment TIG Business Meeting
Business Meeting Session 638 to be held in Pratt Room, Section B on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Needs Assessment TIG
TIG Leader(s):
Catherine Sleezer,  Baker-Hughes,  catherine.sleezer@centrilift.com
Jeffry L White,  Ashland University,  jwhite7@ashland.edu

Roundtable: Building Evaluative Capacity in Israeli Social Change Nonprofits
Roundtable Presentation 639 to be held in Douglas Boardroom on Friday, November 9, 3:35 PM to 4:20 PM
Presenter(s):
Nancy Strichman,  Independent Consultant,  strichman@ie.technion.ac.il
Bill Bickel,  University of Pittsburgh,  bickel@pitt.edu
Abstract: Nonprofit organizations need to continually learn from their experiences and adapt to changing circumstances in order to sustain themselves in today's environment. This 'adaptive capacity', considered one of the essential organizational capacities for enabling nonprofits to achieve their mission, requires nonprofits to nurture organizational learning, using evaluation as a tool to enhance learning and performance (Strichman, Bickel & Marshood, in press; Connolly, 2006; Letts, Ryan & Grossman, 1999). This presentation describes a two-year field testing of evaluation capacity building materials and processes with a set of small social change nonprofits in Israel. The work was sponsored by the One-to-One Children's Fund and conducted under the auspices of Shatil, The New Israel Fund's Empowerment and Training Center for Social Change Organizations in Israel. The authors report on the challenges faced by participant organizations in “growing” their evaluative capacities and the efficacy of the materials and processes used in the work.

Session Title: Emerging Practitioners in an Emerging Subfield: Vexing Issues, Opportunities and Lessons Learned
Multipaper Session 640 to be held in Hopkins Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Qualitative Methods TIG
Chair(s):
Jacqueline Copeland-Carson,  Copeland Carson and Associates,  jackiecc@aol.com
Discussant(s):
Michael Lieber,  University of Illinois, Chicago,  mdlieber@uic.edu
Abstract: Evaluation anthropology has emerged as a transdiscipline blending the theories and methods of anthropology and evaluation. This session will bring together an interdisciplinary group of established and new anthropological evaluators to explore the challenges of working at the nexus of the two fields. Organized around case examples, the papers will address experiences translating anthropology for evaluation; managing the politics of evaluation anthropology projects; indigenous knowledge, diversity, and equity in evaluation; and career development issues.
Translating Anthropology for Evaluation: An Anthropological Critique of A Framework for Understanding Poverty
Carol Hafford,  James Bell Associates,  hafford@jbassoc.com
This paper will provide an anthropological critique of 'A Framework for Understanding Poverty,' a training program that is currently in vogue with educators and human service professionals across the United States, as the panelist learned while conducting an evaluation of a neglect prevention program. The training contends that people in American society can be located in one of three social classes (poverty, middle class, and wealth) and advances the view that people in each of these 'cultures' are largely unaware of the 'hidden rules' of the others. According to this framework, impoverished families share a specific culture that is characterized by self-gratification and self-defeating behaviors that keep them entrenched in generational or situational poverty (e.g., valuing spending over thrift, living day-to-day rather than being future-oriented, etc.). To mitigate the pervasive influence of this 'culture of poverty,' educators, social workers, or pro bono attorneys must impart values and strategies that will enable their poor students and clients to function in mainstream institutions and relationships. 'A Framework for Understanding Poverty' has been criticized as a value-laden, deficit-oriented approach that reinforces gender, racial, and class stereotypes, and fails to take into consideration the causes of poverty or the systemic disparities that contribute to its reproduction. In joining these critical voices, this paper will revisit the limitations of the 'culture of poverty' concept from an anthropological perspective for those who evaluate human service programs and assess culturally competent approaches to service delivery and engagement.
Issues in Participatory Evaluation and Social Change: A Case Study From El Salvador
James G Huff Jr,  Vanguard University,  jhuff@vanguard.edu
While it may be axiomatic that participatory forms of program evaluation are beneficial to the varied stakeholders involved in community initiatives, less is understood about how such forms of evaluation generate cultural and social change. My aim in this paper is to begin to fill these gaps by critically reflecting upon my own work as an evaluation practitioner with a community development organization in rural El Salvador. Two principal questions will be considered in the paper. First, how do the various stakeholders engaged in a planned community intervention - and especially those who are members of the communities that are targeted for change - learn and then put into practice a participatory form of program evaluation? And second, what new conceptualizations of justice and notions of the social good are generated (and how might others be discarded or revalued) as community members engage in participatory program evaluation? A mini-case study of a program evaluation of a potable water project in Las Delicias, El Salvador will serve as the empirical backdrop upon which these questions will be addressed. In a brief, closing discussion I will critically reflect upon the challenges faced by the evaluation practitioner who is at once called upon to provide 'objective' input and to teach stakeholders about the 'value' of participatory program evaluation.
Research, Evaluation, and Program Data: The Politics of Information
Karen Snyder,  Public Health, Seattle and King County,  karen.snyder@metrokc.gov
The shift from academic researcher to contract evaluator involves understanding the many meanings of data, information, analysis, and reporting. An anthropological perspective helps tease out complex interactions of power and view situations from different angles. In this paper, I describe the process of obtaining access to quantitative and qualitative data needed for funder-required process and outcome evaluations in a community-based service agency. I used ethnographic techniques to understand the perspectives of the funder, project director, project staff, and agency management. Unlike much academic research, process evaluation requires recommending strategies for improving programs. In this case, the solution was framed around learning new skills: a curriculum was established on the principles of research, ownership of data, database and statistical software, privacy issues, data collection, entry, analysis, and interpretation. This strategy met the organization's core value of building capacity and honored the skills, abilities, and potential of the multi-cultural staff and management.
Building Evidence in Ethnographic Evaluation
Mary Odell Butler,  Battelle Centers for Public Health Research and Evaluation,  butlerm@battelle.org
Evaluators generally have come to understand the value of context-specific ethnographic approaches in evaluation. However, evaluation anthropologists are still beleaguered by beliefs on the part of clients and potential users that ethnographic data are interesting but not as rigorous as hard quantitative findings. This paper suggests methods that can be employed to present ethnographic results in a way that makes the linkage between data and evidence-building clear and credible. These include demonstrable linkages between evaluation questions and proposed data collection, analytic methods that reflect the relative weight of findings for the population of users, and summary reports that can be easily disseminated and used. Examples from an evaluation of case management of tuberculosis in the US-Mexico border area will be used.
Current Opportunities and Challenges for Anthropologists Developing Evaluation Careers
Eve Pinsker,  University of Illinois, Chicago,  epinsker@uic.edu
An anthropologist working as an evaluator in the fields of public health and community development offers her perceptions of some current opportunities and challenges for anthropologists developing and seeking funding for evaluation projects. Opportunities include: 1) increased funding for translation research, which overlaps with evaluation research; 2) roles for anthropologists in training others in evaluation methods in the contexts of participatory evaluation and professional development; 3) evaluating programs aimed at increasing individual or organizational capacity to deal with cultural diversity. Challenges include 1) increased expectations for outcomes-based evaluation and combining qualitative with quantitative measures; 2) integrating anthropological approaches with program theory and logic models; 3) combining ethnographic methods with systems-thinking based approaches to evaluation, particularly in dealing with multiple-leveled phenomena (individual, organization, community). These challenges, if met, will also result in increased opportunities for anthropologist-evaluators, and contributions to both fields.

Session Title: Evaluation Across Policy Networks: Chronic Disease, Obesity, and Community Design
Expert Lecture Session 642 to be held in  Adams Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Ron Maynard,  University of Washington,  ronmaynard@comcast.net
Presenter(s):
Ron Maynard,  University of Washington,  ronmaynard@comcast.net
Abstract: This session will describe the evaluation of a policy initiative that spans complex partnerships with different legal, regulatory, and community structures; describe how qualitative research methods can be used for policy evaluation across social and organizational networks; and discuss applications of these evaluation approaches in different policy contexts. The Active Community Environments Initiative focuses on the development of community infrastructure that promotes walking, bicycling, and mobility for abled and disabled individuals. Community design elements that include access, neighborhood context, and connectivity play a significant role in promoting physical activity, which is seen as closely related to the prevention of chronic disease and obesity. The focus on upstream factors for health requires the engagement of diverse new partners, including state and regional planning and transportation agencies, community coalitions and task forces, parks and recreation districts, and public health agencies. How does evaluation inform the activities, priorities, and strategies of these partnerships?

Roundtable: Poetic Devices for Evaluation: Found Data Poems From Interviews and Photography to Augment Qualitative Evaluation Reporting
Roundtable Presentation 643 to be held in Jefferson Room on Friday, November 9, 3:35 PM to 4:20 PM
Presenter(s):
Valerie Janesick,  University of South Florida,  vjanesic@tempest.coedu.usf.edu
David Campos,  University of the Incarnate Word, San Antonio,  campos@uiwtx.edu
Abstract: The purpose of this session is to describe and explain how evaluation reports may be augmented with artistic qualitative approaches to evaluation, such as found data poems created from interview transcripts and photography. Qualitative evaluation techniques will be presented and discussed for the purpose of capturing the social context of the evaluation process. In order to make the participants come alive in the study, a phenomenological approach to evaluation will be addressed. Attendees of this session will practice finding poetry in interview data and creating poetry from photographs from a recent evaluation project. Issues regarding photography as evaluation data will be described and analyzed.

Session Title: Conversation Hour With the 2007 AEA Award Winners
Panel Session 644 to be held in Washington Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the AEA Conference Committee
Chair(s):
Jennifer Martineau,  Center for Creative Leadership,  martineauj@leaders.ccl.org
Abstract: Join us as we hear from each of the 2007 AEA Award Winners about their evaluation work, their careers, and other tidbits they’ll share with us related to their awards. After we hear from the award winners the audience will have an opportunity to ask questions and engage them in dialogue. The winners of the AEA awards will be announced at the Friday luncheon and then join us in the afternoon for this session.

Session Title: Why be Normal? Nonparametric Data Analysis Methods as an Important Tool to Analyze and Draw Conclusions From Program Evaluation Data
Demonstration Session 645 to be held in D'Alesandro Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Tessa Crume,  Rocky Mountain Center for Health Promotion and Education,  tessac@rmc.org
Abstract: The opportunity for descriptive and experimental inquiry using qualitative evaluation data is often overlooked due to concerns about small samples, ordinal data, or violations of the normality assumptions that common parametric statistical tests rely upon. Nonparametric methods can be more powerful than parametric methods when the assumptions behind the parametric model do not hold. We will explore a number of practical applications of common nonparametric analysis methods that are appropriate for counts, ordered-categorical data, non-ordered categorical data, small samples, data from one or several groups of subjects, and data collected at multiple time points. Common nonparametric tests will be discussed, including Mann-Whitney, Wilcoxon, Kruskal-Wallis, and Friedman, as well as common nonparametric correlation coefficients, including Spearman's R, Kendall's Tau, and Gamma. We will review the rationale and assumptions underlying each method and discuss their weaknesses and strengths. Examples of applications in program evaluation will be used to illustrate each method.
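As a minimal sketch of what these tests look like in practice, the fragment below runs three of them with scipy.stats on invented toy data; it is illustrative only and not part of the demonstration materials.

from scipy import stats

# Invented toy data: ordinal ratings from two independent groups, and matched
# pre/post scores from one group of participants.
group_a = [3, 4, 2, 5, 4, 3, 4]
group_b = [2, 3, 2, 3, 4, 2, 3]
pre = [10, 12, 9, 14, 11, 13]
post = [12, 13, 9, 16, 12, 15]

# Mann-Whitney U: do the two independent groups differ in location?
u, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
# Wilcoxon signed-rank: did the paired scores shift from pre to post?
w, p_w = stats.wilcoxon(pre, post)
# Spearman rank correlation between pre and post scores.
rho, p_rho = stats.spearmanr(pre, post)

print("Mann-Whitney U = %.1f (p = %.3f)" % (u, p_u))
print("Wilcoxon W = %.1f (p = %.3f)" % (w, p_w))
print("Spearman rho = %.2f (p = %.3f)" % (rho, p_rho))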

Session Title: Applications of Geographic Information Systems in Local and Statewide Evaluation
Demonstration Session 646 to be held in Calhoun Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Susan Voelker,  University of Arizona,  smlarsen@u.arizona.edu
Aunna Elm,  University of Arizona,  aunnae@email.arizona.edu
Michele Walsh,  University of Arizona,  mwalsh@u.arizona.edu
Abstract: Geographic information system (GIS) software available for desktop computers makes it possible for evaluators to incorporate geographically-defined variables into program and outcome data analyses. The presenters use GIS technology in evaluations of local and statewide tobacco prevention and cessation programs by creating interactive maps to organize and display generally available spatial data (e.g. US Census data) and geocoded program and outcome data. Using a laptop computer loaded with GIS software and spatial data from actual projects, the presenters will conduct demonstrations of GIS capabilities and applications in evaluation. The demonstration will include mapping of disparate entities, such as schools and tobacco vendors, and will show how spatial relationships can be identified and measured. There will also be demonstrations on the development and use of thematic mapping and health outcomes mapping. The presenters will discuss how to establish in-house GIS capability and will review technical and training requirements.
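For readers who want a concrete picture of the operations described, the sketch below performs comparable steps with the open-source geopandas library rather than the presenters' desktop GIS package; the file names, layers, and column names are placeholders, not the project data.

import geopandas as gpd

# Placeholder layers: census tracts (polygons) plus geocoded schools and
# tobacco vendors (points); file and column names are hypothetical.
tracts = gpd.read_file("census_tracts.shp")
schools = gpd.read_file("schools.shp").to_crs(tracts.crs)
vendors = gpd.read_file("tobacco_vendors.shp").to_crs(tracts.crs)

# Measure a spatial relationship: vendors within 300 meters of each school
# (assumes a projected coordinate system measured in meters).
buffers = schools.copy()
buffers["geometry"] = schools.geometry.buffer(300)
nearby = gpd.sjoin(vendors, buffers, predicate="within")
counts = nearby.groupby("index_right").size()
schools["vendors_300m"] = counts.reindex(schools.index, fill_value=0)

# Thematic map of a geocoded outcome rate by tract, with schools overlaid.
ax = tracts.plot(column="cessation_rate", legend=True)
schools.plot(ax=ax, color="black", markersize=5)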

Session Title: GIS and QDAS: Technological Tools That Reveal Multiple Perspectives and Unique Data Associations
Multipaper Session 647 to be held in McKeldon Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Vanessa Dennen,  Florida State University,  vdennen@fsu.edu
Evaluation Data Analysis: The Importance of Methodology When Using Qualitative Data Analysis Software
Presenter(s):
Dan Kaczynski,  University of West Florida,  dkaczyns@uwf.edu
Michelle Salmona,  University of Technology Sydney, Australia,  m.salmona@pobox.com
Abstract: This paper explores the use of qualitative data analysis software (QDAS) from three different perspectives: action research, emergent inquiry, and outcome structured inquiry. Each perspective provides a distinct foundation for evaluation design and produces very different results. The intent of this paper is to better understand each approach through the use of QDAS, showing the software as a technological tool that promotes transparency of qualitative methodology and evaluation practice. The examination involves a piece of the analysis process that evaluators rarely discuss in detail: the construction of meaning from qualitative data as seen through the development and use of the code structure. Code structure design is discussed in relation to two key QDAS features: data management and data analysis. Of particular significance in this discussion is the influence that design decisions have upon the methodology and, ultimately, the quality of an evaluation study.
Applications for Geographic Information System Technology in Program Evaluation
Presenter(s):
Janet Lee,  University of California, Los Angeles,  janet.lee@ucla.edu
Tarek Azzam,  University of California, Los Angeles,  tazzam@ucla.edu
Abstract: A Geographic Information System (GIS) is a unique technological tool that integrates and displays spatially referenced information. This paper presents various applications for GIS technology in program evaluation. More specifically, this technology is useful for making associations between disparate data sets by processing information and displaying data referenced by geographic location. A diverse sample of traditional and innovative applications of GIS technology used in actual program evaluations is presented, in order to illustrate its multiple uses and added value to evaluation work. In addition, potential uses and resources for GIS technology are also explored.

Session Title: Evaluating an Apple When You are Among a Bunch of Bananas: Meeting Stakeholders' Needs When Everyone Has Differing (and Conflicting) Expectations
Demonstration Session 648 to be held in Preston Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Special Needs Populations TIG
Presenter(s):
Kimberly Taylor,  Schwab Rehabilitation Hospital,  taykim@sinai.org
Abstract: How do you evaluate a department that does not "fit in" with the rest of the organization? What if the department is so unique, there is no other one like it? Meet an internal evaluator at a rehabilitation hospital and hear about her experience evaluating the "Extended Services" department, which is non-medical yet still provides services to patients. Learn some strategies for measuring quality control and locating performance indicators when the outcomes are psychosocial in nature. Discover the hows, whys, and benefits of presenting such a department to outside accreditation agencies. Discuss evaluation methods for situations in which no control group is available, as the hospital cannot withhold services to those who need them. The evaluation of individual programs (e.g., peer-mentoring, tutoring, violence prevention) will be discussed, as well as measurement of the overall impact of the department on its participants, staff, and the organization as a whole.

Session Title: Contextual Variables in Elementary Schools Influencing Organizational Learning and Predicting Evaluative Inquiry
Expert Lecture Session 649 to be held in  Schaefer Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Rebecca Gajda,  University of Massachusetts, Amherst,  rebecca.gajda@educ.umass.edu
Presenter(s):
Jeffrey Sheldon,  Claremont Graduate University,  jeffrey.sheldon@cgu.edu
Discussant(s):
Chris Koliba,  University of Vermont,  ckoliba@uvm.edu
Rebecca Gajda,  University of Massachusetts, Amherst,  rebecca.gajda@educ.umass.edu
Abstract: In June 2006 I conducted a two-part study that explored the internal context of a small sample (n = 9) of elementary schools to determine which, if any, organizational learning characteristics (e.g., culture, leadership, communications, structures and systems, and teamwork) were present and whether these schools could, by definition, be called learning organizations. Where organizational learning was indicated, the second part of the study used the characteristics present as independent variables to predict whether evaluative inquiry, as a means of organizational knowledge production, was likely to occur. Of further interest was determining the single best predictor, or best combination of predictors, of evaluative inquiry. The Readiness for Organizational Learning and Evaluation (ROLE) instrument (Preskill & Torres, 2000) was used to operationalize both organizational learning and evaluative inquiry. This presentation will focus on the study's findings, which confirm the literature on organizational learning and support the connection between evaluative inquiry and organizational learning.

Session Title: Magnet School Evaluation Issues
Multipaper Session 650 to be held in Fairmont Suite on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Donna Lander,  Jackson State University,  donna.a.lander@jsums.edu
Evaluating Selection Criteria for an Urban Magnet School
Presenter(s):
Jill Lohmeier,  University of Massachusetts, Lowell,  jill_lohmeier@uml.edu
Jennifer Raad,  University of Kansas,  jraad@ku.edu
Abstract: Selection and outcome data from two years of students (Total N = 525) accepted to an Urban Magnet school were evaluated in this study. Regression analyses examined the predictive value of the following screening variables on graduation and Magnet school GPA: Suspension and attendance data, Standardized 7th grade Reading and Math concepts test scores, 6th and 7th grade GPA in core subjects, Matrix Analogies test scores, and demographic variables (Gender, Ethnicity and SES). Although the school district attempted to utilize several selection variables in order to admit students who were the most likely to succeed, most of the selection variables did not show predictive value. The results of the regression analyses will be discussed, as well as the implications for reporting conclusions like these, which may be at odds with the beliefs of school administrators.
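A compact sketch of the kind of regression described, using statsmodels with invented column names standing in for the screening variables listed above; it is not the authors' actual model specification or data.

import pandas as pd
import statsmodels.api as sm

# Hypothetical applicant file; column names are placeholders for the
# screening variables named in the abstract.
df = pd.read_csv("magnet_applicants.csv")
screens = ["reading7", "math7", "gpa6", "gpa7", "matrix_analogies",
           "suspensions", "attendance", "female", "minority", "low_ses"]
X = sm.add_constant(df[screens])

# Logistic regression: which screening variables predict graduation (0/1)?
grad_model = sm.Logit(df["graduated"], X).fit()
print(grad_model.summary())

# Ordinary least squares: which screening variables predict magnet-school GPA?
gpa_model = sm.OLS(df["magnet_gpa"], X).fit()
print(gpa_model.summary())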
Evaluating Educational Reform: Lessons Learned From the Implementation of Middle School Magnet Programs
Presenter(s):
Suzanne Raber,  Montgomery County Public Schools,  suzanne_m_raber@mcpsmd.org
Abstract: In 2005-2006, the Montgomery County Public Schools, a large urban-suburban district just outside Washington DC, opened three unique whole-school magnets that provide students in Grades 6-8 countywide the opportunity to engage in highly rigorous instructional programs focusing on information technology, the performing and creative arts, or aerospace technologies. This Middle School Magnet Consortium incorporates many research-based educational reform concepts: choice, high-interest magnet themes, a rigorous accelerated core curriculum, and job-embedded professional development. This paper presents the challenges of evaluating such a comprehensive reform effort, given its overarching goals of improving student achievement and reducing socioeconomic isolation. In particular, the paper addresses the challenges of evaluating professional development, given its critical role in supporting teachers to deliver rigorous instruction to all students. While the paper focuses on methodological issues, some findings from the first two years of program implementation are included to illustrate how these evaluation challenges have been met.

Roundtable: Barriers to Implementation of Program Design: An Examination of Organizational Capacity, Collaborative Relationships and Program Implementation Design Issues
Roundtable Presentation 651 to be held in Federal Hill Suite on Friday, November 9, 3:35 PM to 4:20 PM
Presenter(s):
Kakoli Banerjee,  United States Department of Health and Human Services,  kakoli.banerjee@hhs.co.scl.ca.us
Abstract: Evidence-based substance abuse treatment programming is becoming the norm, and the question of organizational capacity to implement evidence-based programs is a matter of considerable interest. Evidence-based programs are created in highly controlled situations in which the program developer can select participants, randomize participants, and calibrate the "dosage." When an evidence-based program is transplanted to the "field," the new program is often grafted onto an existing organization. That organization has its own culture and priorities, and incorporating a different program requires fundamental change in existing policies and procedures. The paper investigates three areas of program implementation: organizational planning, organizational readiness for change, and inter-agency collaboration. I draw on examples from three programs located in a large California county. One program provides case management services to chronically homeless persons with substance abuse and/or mental illness; the other two are substance abuse treatment programs, one targeted at adolescents and the other at court-involved persons with co-occurring disorders.

Session Title: A Directory of Evaluation Methods for Managers of Public Research and Technology Programs
Demonstration Session 652 to be held in Royale Board Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Presenter(s):
Rosalie Ruegg,  TIA Consulting Inc,  ruegg@ec.rr.com
Abstract: This session presents a quick reference guide to methods for evaluating public R&D programs prepared for managers of those programs--particularly managers whose prior exposure to evaluation has been limited largely to peer review. Often managers of R&D programs may lack information in the form needed to describe succinctly and effectively the benefits of their programs even though they know the technical aspects of their programs in detail. This brief compendium provides a convenient and quick reference for managers to become aware of and to access suitable evaluation methods for their needs. Specifically, it provides a directory of 15 evaluation methods that have proven useful to R&D program managers in Federal agencies. It defines each method, explains its use, lists its limitations, illustrates its use in an example, and provides references.

Session Title: Evaluation to Improve Coordinated Social Marketing Campaigns: Lessons From Tobacco Control
Expert Lecture Session 653 to be held in  Royale Conference Foyer on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Carolyn Celebucki,  University of Rhode Island,  cceleb@etal.uri.edu
Presenter(s):
James Hersey,  RTI International,  hersey@rti.org
Discussant(s):
Carolyn Celebucki,  University of Rhode Island,  cceleb@etal.uri.edu
Abstract: This session presents a discussion of evaluation lessons from a systematic review of more than 100 evaluations of coordinated social marketing campaigns to prevent tobacco use or to encourage smoking cessation, conducted in the United States, Europe, Australia, and the developing world. The review assesses how well different evaluation approaches help to identify the magnitude of effects and the way in which campaigns worked, for three major types of message campaigns: health effects campaigns, de-glamorization campaigns, and anti-tobacco industry campaigns. The talk reviews the strengths and limitations of different evaluation approaches for learning how campaigns worked and for providing the feedback needed to improve campaign design and performance. Approaches range from assessing exposure to understanding the chain of beliefs and attitudes that influence intentions, susceptibility to tobacco use, uptake, and cessation. The session discusses lessons for evaluations of other types of coordinated social marketing campaigns.

Session Title: Using Graduate Student Assessment to Evaluate Success of Graduate Programs
Think Tank Session 654 to be held in Hanover Suite B on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Rhoda Risner,  United States Army Command and General Staff College,  rhoda.risner@us.army.mil
Discussant(s):
Maria Clark,  United States Army Command and General Staff College,  maria.clark1@conus.army.mil
Abstract: The United States Army Command and General Staff College (USACGSC) developed a process for measuring the learning outcomes of adult graduate students for accreditation purposes. This measurement is part of our graduate degree program evaluation. In this think tank session, a facilitator will share the USACGSC process. After the initial presentation, participants will self-select into one of two small group breakout sessions. One group will share how their graduate schools measure student learning outcomes and use evaluation data. The other group will brainstorm ideas for improving or implementing a student learning outcomes measurement system. Finally, both groups will share a synopsis of their brainstorming.

Session Title: Introducing Appreciative Inquiry to Evaluation
Demonstration Session 655 to be held in Baltimore Theater on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Presidential Strand
Presenter(s):
Tessie Catsambas,  EnCompass LLC,  tcatsambas@encompassworld.com
Abstract: The life of evaluators and evaluation clients can be deeply enriched through the application of Appreciative Inquiry (AI) to evaluation. This highly interactive session will include a short exercise using AI to clarify a program's desired outcomes and to develop evaluation questions and measures. The session will use an appreciative process and then build on the data that emerge from the practice. It will also provide examples of AI applications in different sectors and contexts, and variations and options in AI application (domestic and international organizations, government and nonprofits, at the community, organizational, national, and international levels). The session will show how applying Appreciative Inquiry to evaluation can help participants learn by clarifying the goals and purpose of evaluation, engaging stakeholders in exciting new ways, broadening participation, deepening the cultural competence of evaluation, bringing a whole-systems view to evaluation, and, ultimately, building evaluation and organizational capacity.

Session Title: Learning Through Applied Research in Social Service Contexts
Multipaper Session 656 to be held in International Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Darryl Jinkerson,  Abilene Christian University,  darryl.jinkerson@coba.acu.edu
Discussant(s):
Darryl Jinkerson,  Abilene Christian University,  darryl.jinkerson@coba.acu.edu
The Impact of the Automated Information Systems (AIS) for Child Support Enforcement on Child Support Policy Outcomes
Presenter(s):
Jeongsoo Kim,  University of California, Berkeley,  jk37@berkeley.edu
Abstract: This study evaluates the effect of the Automated Information Systems (AIS) for child support enforcement on child support collection outcomes in the U.S. Using Current Population Survey (CPS) data from 2000 to 2005, I employ Heckman's two-step method to deal with selection bias. The result in the first step shows a positive association between AIS and the probability of single mothers receiving child support from delinquent fathers. In the second step, AIS is statistically significant, indicating that among single mothers who received support, those living in a state that adopted AIS received $192 more per year, on average, than single mothers living in a state without AIS, holding other factors constant.
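Because statsmodels has no single-call Heckman routine, a hand-rolled version of the two-step estimator under the setup described (step 1: probit for receiving any support; step 2: OLS on the amount with an inverse Mills ratio correction) might look like the sketch below; the variable names are placeholders, not the CPS extract used in the study.

import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Placeholder CPS-style extract of single mothers; column names are invented.
df = pd.read_csv("cps_single_mothers.csv")

# Step 1: probit for whether any child support was received.
Z = sm.add_constant(df[["state_ais", "age", "education", "n_children", "urban"]])
probit = sm.Probit(df["received_support"], Z).fit()
xb = probit.fittedvalues                        # linear index from the probit
df["inv_mills"] = norm.pdf(xb) / norm.cdf(xb)   # inverse Mills ratio

# Step 2: OLS on the amount received, recipients only, with the inverse Mills
# ratio added to correct for selection into receipt.
recip = df[df["received_support"] == 1]
X = sm.add_constant(recip[["state_ais", "age", "education", "n_children", "inv_mills"]])
ols = sm.OLS(recip["support_amount"], X).fit()
print(ols.params["state_ais"])  # selection-corrected AIS effect on amount received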
Projecting Staffing Needs for Program Evaluation and Budget Planning in Public Social Services
Presenter(s):
Joy Stewart,  University of North Carolina, Chapel Hill,  jstewart@unc.edu
Dean Duncan,  University of North Carolina, Chapel Hill,  dfduncan@email.unc.edu
Jilan Li,  University of North Carolina, Chapel Hill,  jilanli@email.unc.edu
Abstract: The authors projected future staffing needs for a large public social services agency based on historical caseload data and comparison to local, state, and national caseload size standards. Utilizing forecasting methods, the researchers projected the number of staff needed over six years for all major social services programs including TANF, Medicaid, Food Stamps and child welfare. County and social services managers used the results in budget planning and program evaluation.
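A stripped-down sketch of the forecasting logic described (project caseloads from the historical trend, then divide by a caseload-per-worker standard) is given below; the caseload history and staffing standard are invented, and the authors' actual forecasting models may differ.

import numpy as np

# Invented monthly caseload history for one program area.
history = np.array([4100, 4180, 4300, 4390, 4460, 4600,
                    4720, 4810, 4950, 5060, 5180, 5300])
months = np.arange(len(history))

# Fit a linear trend and project caseloads 72 months (six years) ahead.
slope, intercept = np.polyfit(months, history, deg=1)
future = np.arange(len(history), len(history) + 72)
projected = intercept + slope * future

# Convert projected caseloads to staff needs using a caseload-size standard.
cases_per_worker = 250   # assumed standard, not an actual state or national figure
staff_needed = np.ceil(projected / cases_per_worker)
print("Staff needed at the end of year six:", int(staff_needed[-1]))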

Session Title: Extending the Reach: Making the Most of Limited Evaluation Resources
Demonstration Session 657 to be held in Chesapeake Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Tom Summerfelt,  University of Chicago,  tom.summerfelt@uchicago.edu
Cidhinnia M Torres Campos,  Crafton Hills College,  cidhinnia@yahoo.com
Rebekah King,  Spectrum Health Healthier Communities,  rebekah.king@spectrum-health.org
Abstract: This demonstration session highlights an innovative approach to the evaluation of community health programs. It builds on empowerment evaluation principles as well as developmental and participatory approaches to engage line staff, middle managers, and directors in the evaluation process, from conceptualization of theory to implementation. In designing this engaged evaluation strategy, we created a model that would be responsive to community needs and resources; would develop the evaluation research capacity of community-based organizations; would produce outcomes that are practical, understandable, and useful to the community; and would facilitate organizational learning. We successfully implemented this evaluation approach while working with a hospital community benefits department (program budget of $6M) and the more than 45 programs it funds. The approach allowed for outcome assessment at both the individual program level and the department level (combining similar programs by targeted outcomes). Factors critical to the successful implementation of this approach will be discussed.

Session Title: Effectiveness and Impact of Federal Safety Risk Reduction Programs: Evaluation Experience and Lessons Learned From Three Government Agencies' Efforts to Improve Industry Safety
Panel Session 658 to be held in Versailles Room on Friday, November 9, 3:35 PM to 4:20 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Michael Coplen,  Federal Railroad Administration,  michael.coplen@dot.gov
Jonathan Morell,  New Vectors LLC,  jamorell@jamorell.com
Discussant(s):
Jo Strang,  Federal Railroad Administration,  jo.strang@dot.gov
Abstract: The panel will discuss federal government efforts to improve safety industry-wide by implementing risk-based safety programs – programs that identify leading safety indicators and precursor events likely to lead to accidents as a means of managing safety. Initiatives at the Federal Railroad Administration, the Federal Aviation Administration, and Transport Canada will be included. Presentations will cover evaluation methodologies, results to date, and the organizational changes in industry and government that are required to successfully implement, run, and sustain these programs.
Compliance and Oversight of Risk-based Safety Systems in the Aviation Industry
Wes Timmons,  Federal Aviation Administration,  wes.timmons@faa.gov
Wes Timmons is the former Director of the Safety Management Oversight Division of the FAA Office of Air Traffic Safety Oversight. He is personally familiar with the FAA's safety oversight of their Air Traffic Organization, and will speak broadly about safety management oversight programs elsewhere in the aviation system.
Risk Assessment and Lessons Learned From Transport Canada's Railway Safety Act
Luc Bourdon,  Transport Canada,  bourdol@tc.gc.ca
Luc Bourdon is the Director General, Rail Safety, for Transport Canada. He will discuss lessons learned from the implementation of safety management systems. In 2001, Transport Canada introduced new regulations whereby all railroads in Canada had to implement safety management systems. After six years, was it a success or a failure? Did it improve the safety culture within the industry? What have we learned so far?
