
Session Title: Teaching About Evaluation: Methods With an Admixture of Epistemology and Ontology
Expert Lecture Session 701 to be held in  International Ballroom A on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Presidential Strand
Presenter(s):
Sandra Mathison,  University of British Columbia,  sandra.mathison@ubc.ca
Abstract: Teaching evaluation, in university programs or workshops, focuses largely on how to do evaluation: the methods used to define, conduct, and report evaluation take center stage. The emphasis on methods reflects evaluation as a professional practice that supports the development of craft knowledge for those who would call themselves evaluators. Implicitly, though, in teaching how to do evaluation one hears refrains of foundational conceptions: what counts as appropriate and useful evidence, and in what form, as well as the desirable end goal of evaluation and how evaluative knowledge can and should be used to serve that end goal. This approach to teaching evaluation results in differentiated and specialized cadres of evaluators, who share a tenuous sense of solidarity in their work. The implications of teaching about evaluation by focusing on valuing, with a secondary focus on methods, will be considered.

Session Title: Evaluation in the Context of High Stakes Assessments
Multipaper Session 702 to be held in International Ballroom B on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
M David Miller,  University of Florida,  dmiller@coe.ufl.edu
Abstract: Funding for many educational programs is often tied to an evaluation of the effectiveness of those programs. With the increased emphasis on accountability and high stakes assessments required by the No Child Left Behind legislation and similar state initiatives, policy makers often define program effectiveness in terms of growth on the required high stakes assessments. At the same time, there is a growing emphasis on experimental design for measuring program effectiveness, which has resulted in a push for large-scale experimental designs with random assignment of units and well-established control groups. This presentation will focus on the use of high stakes accountability data to examine the effectiveness of education programs, particularly the issues involved in interpreting results of experimental studies that use high stakes accountability testing.
Design Alternatives to Measure Effectiveness of Programs With High Stakes Assessments
M David Miller,  University of Florida,  dmiller@coe.ufl.edu
This paper will consider issues related to data collection when adequate control groups are not possible because high stakes assessments are the required outcome measure and all schools are working to increase their scores. Designs considered include measurement over time with growth models, regression discontinuity designs, and experimental designs with alternative treatment options. Data are reported for the Florida Reading Initiative, which has been tracking 53 schools for the last five years within a state testing program that spans more than a decade. Other examples will be discussed.
Interpreting High Stakes Test Data: Consequential Evidence and Multiple Stakeholders
Jenny Bergeron,  University of Florida,  jennybr@ufl.edu
This paper focuses on the consequential evidence for valid interpretation of high stakes test scores. Within the context of evaluation, it is important to examine the interpretation of testing data, particularly in high stakes testing environments. This paper reports the effects of high stakes assessments in Florida, combining results from three studies. In the first study, principals were interviewed to determine the effects of the high stakes tests on test interpretation. In the second and third studies, teachers and students were given surveys to determine the effects on instruction and on student psychological variables (e.g., feelings of control over their ability to do well on tests).

Session Title: What Have We Learned About Evaluation Principles and Practice in International Non-governmental Organizations?
Panel Session 703 to be held in International Ballroom C on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Michael Scriven,  Western Michigan University,  scriven@aol.com
Discussant(s):
Jim Rugh,  CARE International,  jimrugh@mindspring.com
Abstract: This session will discuss the findings of a major study conducted as part of a PhD dissertation on evaluation principles and practice in International Non-Governmental Organizations (INGOs). The study had three main objectives: (i) assess the evaluation practice and principles adopted by international development agencies, with a special focus on INGOs, and how they relate to the evaluation practice and expectations of other development institutions; (ii) assess the evaluation standards being developed by the American Council for Voluntary International Action (InterAction) and explore the extent to which they have been adopted by (or reflect the practice of) its membership; and (iii) propose specific improvements to the InterAction standards, eventually arriving at specific methodological frameworks tailored to evaluating initiatives in a few areas of international development. The study included an in-depth analysis of the current literature, consultation with experts in the field, and surveys and in-depth interviews of a sample of INGOs that are members of InterAction.
What is out There: Findings From an Empirical Study on Evaluation Principles and Practice in International Non-Governmental Organizations
Thomaz Chianca,  Western Michigan University,  thomaz.chianca@wmich.edu
Mr. Chianca is the primary researcher for this study on evaluation principles and practice in International Non-Governmental Organizations (INGOs). He will present the main findings of his research, especially those related to the electronic survey of the 165 member organizations of the American Council for Voluntary International Action (InterAction) and the case studies highlighting lessons learned from a small group of INGOs identified through the survey as doing significant monitoring and evaluation work in their agencies.
So What: Contextualizing the Relevance of the Study Findings for the International Non-government Organizations' Community
Paul Clements,  Western Michigan University,  paul.clements@wmich.edu
Dr. Clements is the director of the Master of Development Administration Program at Western Michigan University and has published and worked extensively in the field of international development evaluation. He has been a senior evaluation consultant for the United Nations Development Programme, Isle, Inc., and CHF International. In his presentation, Dr. Clements will explore the relevance of this study to the larger community of international development organizations.

Session Title: Stakeholder Identification and Assessment in Nonprofit Organizations and Public Agencies
Demonstration Session 704 to be held in Liberty Ballroom Section A on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Barbara Wygant,  Western Michigan University,  barbara.wygant@wmich.edu
Abstract: Broadly defined, stakeholders are the people and groups that affect or are affected by an organization. A key stakeholder assessment for an organization may be conducted with relatively minimal resources, especially in comparison to the costs and problems that may arise if key stakeholders are not identified or are ignored. This demonstration is based on an extensive literature review of stakeholder groups related to nonprofit organizations and public agencies. The various definitions and key concepts related to this area will be discussed. Various techniques of stakeholder identification and analysis will be presented. The facilitator will demonstrate the steps in conducting a complete stakeholder assessment through the use of a public administration graduate-level classroom assignment. Stakeholder diagrams, issue networks, political environment, and power rankings will also be discussed. Evaluation tools for stakeholder assessment will be provided and challenges and future trends in evaluation-related stakeholder management and measurement will be discussed.

Session Title: Identifying Critical Processes and Outcomes Across Evaluation Approaches: Empowerment, Practical Participatory, Transformative, and Utilization-focused
Expert Lecture Session 705 to be held in  Liberty Ballroom Section B on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Tanner LeBaron Wallace,  University of California, Los Angeles,  twallace@ucla.edu
Presenter(s):
Marvin Alkin,  University of California, Los Angeles,  alkin@gseis.ucla.edu
Discussant(s):
J Bradley Cousins,  University of Ottawa,  bcousins@uottawa.ca
David Fetterman,  Stanford University,  profdavidf@yahoo.com
Donna Mertens,  Gallaudet University,  donna.mertens@gallaudet.edu
Michael Quinn Patton,  Utilization-Focused Evaluation,  mqpatton@prodigy.net
Abstract: Inspired by the recent American Journal of Evaluation article by Robin Miller and Rebecca Campbell (2006), this session proposes a set of identifiable processes and outcomes for four particular evaluation approaches: Empowerment, Practical Participatory, Transformative, and Utilization-Focused. The evaluation theorists responsible for each of these approaches will serve as discussants to critique our proposed set of evaluation principles. This session seeks to answer two questions for each approach: What process criteria would identify the specific evaluation approach in practice? And what observed outcomes are necessary in order to judge that the evaluation was "successful" with regard to the particular evaluation approach? Providing answers to these questions through both the presentation and the discussion among the theorists will provide comparative insights into common and distinct elements among the approaches. Our ultimate aim is to advance the discipline of evaluation by increasing conceptual clarity.

Session Title: Thinking About Systems Thinking
Multipaper Session 706 to be held in Mencken Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Derek A Cabrera,  Cornell University,  dac66@cornell.edu
Getting Beyond Industrial Age Thinking in Evaluation: A Critical Look at System Archetypes
Presenter(s):
Natasha Jankowski,  Western Michigan University,  natasha.a.jankowski@wmich.edu
Abstract: System archetypes, when employed in evaluations with clients who are not familiar with system concepts, easily fall prey to the industrial age thinking that is still prevalent in many organizations and programs. What we can learn, and what we can do to make system archetypes more effective in light of industrial age thinking, will be covered. As evaluators, we need to be aware of the industrial age thinking that surrounds the programs we evaluate before we try to implement system concepts and archetypes. This paper presents a critical review of system archetypes as perceived through industrial age thinking, with recommendations on how to advance beyond these constraints. This will allow evaluators to be better prepared to help their clients integrate system concepts into their evaluations and organizations. The potential for clients to limit the scope and impact of system archetypes applied in an evaluation that uses system concepts will be explored.
Unpacking the Logic Model: Systems Thinking in Practice
Presenter(s):
A Cassandra Golding,  University of Rhode Island,  c_h_ride@hotmail.com
Abstract: Program dynamics highlight an intricate organization of stakeholders, situated within diverse contexts and embodying several cultural influences. This gives rise to multiple sets of assumptions and provokes a multiplicity of dialogues. Consequently, the appreciation of a given program and the evaluation process is necessarily complex, interdependent, intertwined, and non-linear. Systems thinking, applied to evaluation work, addresses the intersection of multilayered program components. Systems dynamics explicitly aims to stimulate proactive action and highlight the context in which programmatic systems are embedded. This conceptualization supports “rigorous rethinking, reframing, and unpacking complex realities and assumptions.” Systems thinking challenges conventional conceptualizations of the purpose, utilization, and handling of evaluation data. This theoretical paper asks how logic models are transformed as a program evolves, how causal links and change are represented within the logic model, and what the limitations of such models are. Is the logic model informed by the program, informing the program, or both?

Session Title: Engaging Communities in Disaster and Emergency Management Planning, Education, and Evaluation
Multipaper Session 707 to be held in Edgar Allen Poe Room  on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Liesel Ritchie,  Western Michigan University,  liesel.ritchie@wmich.edu
Museum Exhibits and Educational Programming for Natural Disaster Preparation: Evaluation of a Site-based System
Presenter(s):
Wendy Dickinson,  University of South Florida,  statgirl@aol.com
Bruce Hall,  University of South Florida,  bwhall@tampabay.rr.com
Dave Conley,  Museum of Science & Industry,  dconley@mosi.org
Abstract: Earthquakes, hurricanes, wildfires, floods, lightning, tornadoes, and other natural phenomena occur regularly as part of the natural environment of our planet. There is a clear need to increase public awareness of these natural forces and their impact on human existence. To address this need, the Museum of Science and Industry developed Disasterville, composed of immersive environments, interactive exhibits, and fortification displays. The major systemic goal is to educate visitors about the science of catastrophic natural phenomena and the steps that can be taken, both personally and collectively, to reduce the risks of devastating consequences. Evaluation results based on data triangulation from interviews, unobtrusive observations, and visitor reactions strongly suggest that Disasterville exhibits have positive effects on visitors: predisposing them to seek more information about natural hazards and strategies to protect against them, and to take specific actions based on these protective strategies. Educating the public about effective strategies for protection, mitigation, and recovery based on the latest scientific knowledge is critical to reducing human suffering, loss of life, and destruction of property from these deadly natural phenomena, both today and in the future.
An Evaluation of Tsunami Awareness and Preparedness in Six United States Coastal Communities — Yes, United States Communities
Presenter(s):
Liesel Ritchie,  Western Michigan University,  liesel.ritchie@wmich.edu
Duane Gill,  Mississippi State University,  duane.gill@ssrc.msstate.edu
Stephen Meinhold,  University of North Carolina, Wilmington,  meinholds@uncw.edu
Jennifer Horan,  University of North Carolina,  horanj@uncw.edu
Abstract: After the Indian Ocean tsunami disaster in 2004, a group of U.S. universities received a National Science Foundation grant to evaluate the effectiveness of tsunami warnings in the United States. Communities in Alaska, California, Hawaii, North Carolina, Oregon, and Washington are being studied, with the intent that findings will be used to improve the effectiveness of tsunami readiness efforts and warning messages in these and other communities. This presentation will first provide an overview of the study, then focus on the North Carolina site of Wrightsville Beach, where tsunami awareness and education activities were implemented by the National Weather Service following baseline data collection. The interventions included the posting of tsunami evacuation route signage, an information campaign consisting of mailings (brochures and DVDs), and a series of focus groups that included an educational presentation. We will conclude by presenting preliminary findings from the post-intervention data collection.

Session Title: Story Bank: Learning through Story-telling
Demonstration Session 708 to be held in Carroll Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Cassie Bryant,  Cassandra Drennon & Associates,  cassie@drennonassoc.net
Diane Monaghan,  Cassandra Drennon & Associates,  diane@drennonassoc.net
Abstract: The evaluators used an intranet website to gather data for an external evaluation of a statewide adult education program. Teachers at five pilot sites submitted weekly stories about their project-related experiences in the classroom. Using overhead projection, presenters will navigate attendees through the features of the website, showing how teachers used the site and how evaluators organized and analyzed resulting data. Handouts will include basic steps for creating a story bank, issues to consider, and examples of stories and logic models that emerged from the process. The concept, based on the change model of evaluation, became an action research project for teachers. Strengths included richer data collection and a deliberate mechanism for teachers to reflect on and advance their strategies. Weaknesses were teacher resistance to participating in the discussion board and lack of support from the state program coordinator.

Session Title: Approaches to Evaluation in Social Work Settings
Multipaper Session 709 to be held in Pratt Room, Section A on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Social Work TIG
Chair(s):
Sarita Davis,  Clark Atlanta University,  sdavis@cau.edu
Looking for Strengths: The Irony of Internal Auditing of Social Work Services as a Strengths-based Evaluation Method
Presenter(s):
William Cabin,  Youth Consultation Service,  williamcabin@yahoo.com
Abstract: Strengths-based theory and evaluation theory de-emphasize problem-centered approaches to social work practice (Saleebey, 2006). Auditing typically emphasizes problem identification and correction (Harvard Business Review, 2006). The two approaches appear contradictory. The literature contains limited, if any, mention of auditing as a strengths-based approach or as an accepted human services evaluation methodology. The paper presents an actual internal social work auditing program, the results of which were used as a strengths-building practice approach with social workers. The program was used at a multi-site, non-profit child welfare organization. The paper describes the origin, goals, design, implementation, and results of the two-year-old program, emphasizing the use of results to build on social worker strengths. Samples of the audit tool, exit report, and trend reports are included.
Applications of Complexity to Social Program Evaluation
Presenter(s):
Michael Wolf-Branigin,  George Mason University,  mwolfbra@gmu.edu
Abstract: This presentation develops the application of complexity theory to social program evaluation. It links aspects of complexity theory to social work values as outlined in the NASW Code of Ethics. It presents a model for framing social work phenomena within a complexity framework by describing each complexity component followed by simple prompts for evaluators to use. The presentation discusses three diverse applications of this flexible technique to social program evaluation: community inclusion of persons with developmental disabilities, the role of spirituality in substance abuse treatment, and social work education. Framing social program evaluations within a complexity framework has widening appeal given the increasing availability of evaluation tools and computational power.
Understanding the Nature of Work: New York State Child Welfare Workload Study
Presenter(s):
Paul Frankel,  American Humane Association,  paulf@americanhumane.org
Elizabeth Oppenheim,  Walter R McDonald & Associates Inc,  loppenheim@wrma.com
Abstract: This study is the first child welfare workload study that addresses the activity of both voluntary and public agencies. This is an important advance in understanding the total effort required to assess, plan, provide, and document the broad array of child welfare services. Eleven district offices in New York State, including the Administration for Children's Services, and 42 voluntary agencies participated in the study. Detailed time data from more than 2,200 caseworkers were analyzed. The findings of the time data collection and the other components of this study lead us to recommend that New York State reduce its caseloads for Child Protective Services Investigations, Foster Care Case Planning Services, and Preventive Case Planning Services. Recommendations are offered to improve performance on many different child welfare indicators (e.g., face to face contact with children), and systemic improvements in training and supervision, management, and outcome measurement are discussed.

Session Title: Retention in a Longitudinal Outcomes Study: Exploring Two Sides of the Same Coin, Who Asks and Who Answers
Multipaper Session 710 to be held in Pratt Room, Section B on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Stacy Johnson,  Macro International Inc,  stacy.f.johnson@orcmacro.com
Discussant(s):
Christine Walrath,  Macro International Inc,  christine.m.walrath-greene@orcmacro.com
Abstract: This session examines two factors that impact retention in longitudinal research targeting marginalized or distressed populations: the internal and the external variables around the interviewer and the interviewee. The two papers draw on analysis of data from a large sample of youth and caregivers from multiple communities who have participated in the National Evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program from 2002 to 2007. Youth with serious emotional disturbance and their caregivers provided quantitative data at 6-month intervals for up to 3 years. Local evaluation teams collecting the data provided qualitative and quantitative information on their staffing models, incentives, training, support, and life experiences of data collectors. This session will provide an analysis of conditions versus retention rates across a wide range of program and client characteristics. Lessons learned for informing future evaluation practice will be shared.
Retention in a Longitudinal Outcomes Study: Impact of Staffing Structure, Agency Policies and Staff Characteristics on Participants
Stacy Johnson,  Macro International Inc,  stacy.f.johnson@orcmacro.com
Connie Maples,  Macro International Inc,  connie.j.maples@orcmacro.com
The most critical aspect of longitudinal research is the ability to maintain long-term contact with participants and to track their outcomes over extended periods of time (van Wheel et al., 2006). Attrition in longitudinal studies impacts their statistical power, introduces bias, and threatens internal and external validity (Crom et al., 2006). This paper focuses on data collection staffing models and their impact on participant retention in a 3-year longitudinal outcomes study of the Comprehensive Community Mental Health Services for Children and Their Families Program. Qualitative and quantitative analytical methods are used to explore how staffing variables such as the demographic characteristics and the role of data collectors in the community of study impact participant retention. Policies, procedures, and staff structures that support data collection activities are also analyzed. Finally, we will share lessons learned related to overcoming obstacles to retaining participants longitudinally for informing future evaluation practice.
Retention in a Longitudinal Outcomes Study: An Exploration of the Effects of Respondent Characteristics, Roles and Consistency
Tisha Tucker,  Macro International Inc,  alyce.l.tucker@orcmacro.com
Laura Whalen,  Macro International Inc,  laura.g.whalen@orcmacro.com
Longitudinal studies are often challenged by participant reluctance to take part in the study, family changes and crises, competing priorities, and situational stressors (Ryan & Hayman, 1996). Hunt and White (1998) have identified that along with study design, the study population of interest may impact retention. Though there is extensive research on the respondent characteristics that affect retention, the field is lacking a consensus around which characteristics have the greatest impact. This paper explores how characteristics and roles of research respondents who have participated in the national evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program impact retention. By exploring respondent characteristics by gender and race/ethnicity, respondent role by caregiver types, and respondent consistency by change in respondents, we identify which variables are most strongly correlated with high retention. Findings identify needs for strategic development to maximize retention rates among certain respondent types and structures.

Roundtable: Evaluation and the Institutional Review Board (IRB): A Tale of Two Cities
Roundtable Presentation 711 to be held in Douglas Boardroom on Saturday, November 10, 9:35 AM to 10:20 AM
Presenter(s):
Oliver Massey,  University of South Florida,  massey@fmhi.usf.edu
Abstract: Evaluation straddles the gap between research and service, concerning itself with both the methodological issues of traditional research studies and the practical issues of program management and improvement. These roles lead evaluators to collect and analyze a broad variety of information to better inform program managers about the effectiveness and functioning of their agency and programs. At times these activities clearly involve research; at times they are clearly consultative. Unfortunately, a huge grey area exists regarding what constitutes research and what protections are adequate for individuals about whom information is collected. This roundtable is proposed as an opportunity for evaluators, whether university-based or not, to discuss the business and activities of evaluation, the concerns for adequate and appropriate protection of human subjects, and the interface with Institutional Review Boards (IRBs), which are charged with ensuring the rights of research participants at university-linked or federally funded sites.

Session Title: The Theory-Based Models as a Guide to Stakeholder Collaboration, Ownership, and Engagement
Multipaper Session 712 to be held in Hopkins Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Dustin Duncan,  Harvard University,  dduncan@hsph.harvard.edu
Using a Program Theory Model to Clarify the Collaboration, Guide the Program, and Direct an Outcomes-based Evaluation
Presenter(s):
Kathryn Race,  Race and Associates Ltd,  race_associates@msn.com
Abstract: Based on an exemplar case study approach involving a 3-year evaluation, this paper highlights how a program theory model was used by stakeholders and the evaluation team in a collaboration of ten major museums and local parks in a large metropolitan city to conduct an outcomes-based evaluation of a multi-phased, hybrid after-school school and family outreach program. Highlighting the efforts and findings that have culminated in this evaluation, the paper presents how the program theory model served to help clarify the responsibilities of the collaboration, guide the development of new curriculum created during this period, and drive an outcomes-based evaluation that measured program fidelity as well as priority program outcomes. In this example, evaluation served as a continuous management tool, with data gathering and assessment incorporated to the extent possible as an integral part of the services provided by the program.
The Evaluation of Complex Theory Based, Professional Development Programs With “Show Me the Numbers” Expectations
Presenter(s):
Maryann Durland,  Durland Consulting,  mdurland@durlandconsulting.com
Abstract: STARS is a Chicago Public Schools professional development program for school teams consisting of the principal and 4-5 team members. STARS stands for School Teams Achieving Results for Students. STARS, a theory-driven program, targets results in three areas: organizational structure, impact on instructional practices, and increased teacher leadership. The Year 1 (2005-06) evaluation focused on understanding, exploring, and defining implementation. The evaluation findings indicated that there were three levels, or models, of implementation. Further exploratory analysis indicated that there are statistically significant differences among the three models on measures of student achievement. The Year 2 (2006-07) evaluation developed specific metrics for testing the model framework as a discriminant for determining level of implementation. This paper focuses on two issues: the first is the complex, longitudinal nature of evaluation when evaluating theory-based programs, and the second is meeting the expectations of program stakeholders for “numbers, data, and links to results”.

Session Title: Fighting Poverty: What Works? Running Randomized Evaluations of Poverty Programs in Developing Countries
Expert Lecture Session 713 to be held in  Peale Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Presenter(s):
Rachel Glennerster,  Massachusetts Institute of Technology,  rglenner@mit.edu
Abstract: Should limited education budgets in the developing world be spent on textbooks, teachers, or smaller classes? Should scarce health resources be spent on more doctors or basic sanitation? Policymakers lack the evidence they need to tackle these dilemmas. Researchers at the Abdul Latif Jameel Poverty Action Lab at MIT (J-PAL) are seeking to improve the effectiveness of poverty programs by implementing randomized evaluations around the world that provide rigorous evidence on what works in reducing poverty. Dr. Rachel Glennerster, Executive Director of J-PAL, will talk about how to overcome the challenges of running rigorous randomized evaluations in developing countries. She will discuss ways to introduce elements of randomization into programs in ways that fit naturally with the work of those running poverty programs on the ground. She will also discuss the techniques used by J-PAL researchers to rigorously measure issues such as women's empowerment, social capital, and trust.

Session Title: Starting Out Right: How to Begin Evaluating Community Organizing, Advocacy, and Policy Change Efforts Using a Prospective Approach
Demonstration Session 714 to be held in Adams Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Justin Louie,  Blueprint Research & Design Inc,  justin@blueprintrd.com
Catherine Crystal Foster,  Blueprint Research & Design Inc,  catherine@blueprintrd.com
Abstract: How can we overcome the challenges of evaluating advocacy: the long time frames for change, the complex political environments, the constantly shifting strategies? How can we ensure that our evaluations remain relevant to community organizers, advocates, and their funders? Over the last three years, Blueprint Research & Design, Inc. has developed and refined a prospective approach to evaluating community organizing, advocacy, and policy change efforts that addresses these challenges head on. This session will delve into the lessons we've learned over our last year of work with advocates, organizers, and funders in many fields (education, the environment, health care, media reform) and at many levels (coalitions, funder collaboratives, foundation-nonprofit partnerships, grantee clusters). We will compare across fields and levels to pull out lessons about what works, for whom, and in what contexts, and we will discuss our process, note obstacles, and describe how we've worked to overcome those obstacles.

Roundtable: Overcoming Mountains and Valleys: Examining the Dynamics of Evaluation With Underserved Populations
Roundtable Presentation 715 to be held in Jefferson Room on Saturday, November 10, 9:35 AM to 10:20 AM
Presenter(s):
Sylvette La Touche,  University of Maryland, College Park,  latouche@umd.edu
Amy Billing,  University of Maryland, College Park,  billing@umd.edu
Nancy Atkinson,  University of Maryland, College Park,  atkinson@umd.edu
Jing Tian,  University of Maryland,  tianjing@umd.edu
Robert Gold,  University of Maryland,  rsgold@umd.edu
Abstract: The U.S. Department of Health and Human Services, in its policy directive Healthy People 2010, identified home Internet access as a national priority, especially among traditionally excluded populations, namely minority, low-income, and rural groups. To address this need, the University of Maryland, College Park and Maryland Cooperative Extension launched “Eat Smart, Be Fit, Maryland!”. This research effort sought to assess the capacity of technology to promote positive health behaviors among low-literacy audiences, using a web portal as its primary component (http://www.eatsmart.umd.edu). Extensive efforts have been made to assess the project's effectiveness, including the use of both process and impact evaluation tools. A unique evaluation strategy was employed, combining traditional data collection methods with online evaluation techniques. The results will give other web-based efforts an opportunity to identify effective evaluation strategies and will help ensure that health disparities resulting from literacy barriers to e-health materials are addressed.

Session Title: Coalitions and Participatory Approaches in Health Partnership Evaluations
Multipaper Session 716 to be held in Washington Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Kathryn Bowen,  BECS Inc,  drbowen@hotmail.com
"More Juice for the Squeeze": Developing Evaluation Indicators and Reference Materials for State Asthma Control Partnerships
Presenter(s):
Leslie Fierro,  Centers for Disease Control and Prevention,  let6@cdc.gov
Carlyn Orians,  Battelle Centers for Public Health Research and Evaluation,  orians@battelle.org
Shyanika Wijesinha Rose,  Battelle Centers for Public Health Research and Evaluation,  rosesw@battelle.org
Linda Winges,  Battelle Centers for Public Health Research and Evaluation,  winges@battelle.org
Abstract: In recent years, many questions have arisen about how best to monitor and evaluate the activities and outcomes of coalitions and partnerships. As with other public health efforts, a primary component of the 35 state asthma control programs currently funded through the Centers for Disease Control and Prevention's Air Pollution and Respiratory Health Branch is the formation of statewide asthma partnerships. Partnerships across the funded states vary in structure and membership composition, but are designed to arrive at similar outcomes. This presentation will provide an overview of the process and results from a multi-state workgroup that began in June 2006 to develop cross-site evaluation indicators and resource documents specific to state asthma control program partnerships. These indicators and documents are suitable for use with CDC's Framework for Program Evaluation. A key concern was developing indicators, gathered across programs, that are feasible, representative, and useful.
Improving the Evaluation of Federally Funded Interventions Requiring the Use of Government Performance Results Act (GPRA) Instruments: A Participatory Approach
Presenter(s):
Justeen Hyde,  Institute for Community Health,  jhyde@challiance.org
Eileen Dryden,  Institute for Community Health,  edryden@challiance.org
Ayala Livny,  Cambridge Cares About AIDS,  alivny@ccaa.org
Karen Hacker,  Institute for Community Health, 
Monique Tula,  Cambridge Cares About AIDS,  mtula@ccaa.org
Abstract: This presentation highlights the value of utilizing a participatory evaluation approach when working with community agencies receiving federal funding for prevention and intervention services. Drawing from our experience as evaluators of a SAMHSA-funded substance abuse, HIV, and Hepatitis prevention program targeting homeless young adults, we will describe the importance of and strategies for creating a participatory evaluation partnership with program implementers. By participatory evaluation we mean the active involvement of program implementers in defining the evaluation, developing instruments, collecting data, discussing findings, and disseminating results. There are a number of challenges faced when using this approach with federally funded programs that require the use of standardized measurement tools and data collection procedures. Strategies for addressing these challenges and striking a balance between federal requirements and local needs will be described. These strategies help to increase support for evaluation requirements and the usefulness of evaluation data for program implementers.

Session Title: Tying it Together: Developing a Web-based Data Collection System for a Multi-site Tobacco Initiative
Demonstration Session 717 to be held in D'Alesandro Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Stephanie Herbers,  Saint Louis University,  herberss@slu.edu
Nancy Mueller,  Saint Louis University,  mueller@slu.edu
Abstract: In 2004, a Missouri health foundation committed significant funding to establish a nine-year, comprehensive, multi-site initiative to reduce tobacco use in Missouri. Projects funded through the initiative share common goals, but vary in design, structure, and implementation. This can pose challenges when evaluating progress, including capturing data on the overarching goals; detecting broad, cross-site effects; and ensuring valid results. As the initiative evaluators, we will demonstrate the process we took to identify a minimum data set to be collected from each of the funded projects. We will also describe the development of a web-based data collection system that provides a centralized location for submission and access to the minimum data set by grantees, the evaluator, and the foundation. In addition, we will demonstrate the system and discuss lessons learned in creating a flexible and user-friendly data collection and management tool that can be used by multiple stakeholders.

Session Title: Contextuality in Needs Assessment: Attention to Divergent Needs
Multipaper Session 718 to be held in Calhoun Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Needs Assessment TIG
Chair(s):
Jeffry L White,  Ashland University,  jwhite7@ashland.edu
Discussant(s):
Deborah H Kwon,  The Ohio State University,  kwon.59@osu.edu
Barriers to Continuous Needs Assessment: Client Fatigue, Governmental Changes in Program Emphases
Presenter(s):
Zoe Barley,  Mid-continent Research for Education and Learning,  zbarley@mcrel.org
Abstract: This presentation reports on a needs assessment program that seeks to meet differing requirements of funders while obtaining a broad range of perceived needs data from educators (whose needs have been assessed too often already and who don't see the return for their time under current funding constraints). Balance is sought in the nature of the data gathered and the persons from whom it is gathered: between 1) direct, precise responses and more nuanced, richer discussions; 2) ease of quick response and lengthier yet richer data collection efforts; 3) the needs of current clients and potential clients; and 4) the perspectives of key leaders and other stakeholders. Four primary data collection methods, as well as data compilation and reduction into meaningful information, will be discussed. A conceptual chart of the process will be provided and recommendations offered for developing a comprehensive program.
Using a Multi-phase Assessment Process to Influence Program Selection and Evaluation Development
Presenter(s):
Caren Bacon,  University of Missouri, Columbia,  baconc@missouri.edu
Shannon Stokes,  University of Missouri, Columbia,  stokess@missouri.edu
Abstract: The Youth Community Coalition (YC2), in conjunction with the Institute of Public Policy (IPP), conducted data, resource, and community readiness assessments regarding risky drinking behaviors among 12-25 year olds to develop a strategic prevention framework. Using Getting to Outcomes, data were collected from multiple sources on risk and protective variables associated with risky drinking behaviors and on the resources available to address them. This information was used to select the intervening variables and to identify and address resource gaps. Using the Community Readiness Handbook, the community readiness assessment identified the level of readiness within Columbia to address risky drinking behaviors and determined the level of intervention necessary. Upon completion of the assessments, YC2 developed a strategic plan to address the risky drinking behaviors of 12-25 year olds in Columbia, Missouri, based on a solid understanding of the community climate and needs.

Session Title: Online Evaluation Systems: One-stop Shops for Administrators, Managers, and Evaluators?
Demonstration Session 719 to be held in McKeldon Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Susanna Kung,  Academy for Educational Development,  skung@aed.org
Paul Bucci,  Academy for Educational Development,  pbucci@aed.org
Abstract: With the No Child Left Behind (NCLB) Act and the public's increasing emphasis on accountability, school just got tougher, that is, for those in the business of education. Consequently, numerous online evaluation systems and tools have been developed in recent years that deliver data integration and data analysis services in real time to: simplify data collection, analysis, and reporting across multiple sites; automate completion of annual performance reports (APRs); increase accountability; and facilitate data-driven decision making. This session provides a comprehensive demonstration of one such tool, the GEAR UP Online Evaluation System (GOES), which has been implemented in a number of states to track demographic, program participation, academic performance, and survey data at the individual student, parent, and teacher levels from multiple sites longitudinally.

Session Title: Assessing Strategic Alignment of Learning in Organizations Where Profits are Not the Bottom Line
Panel Session 720 to be held in Preston Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Marlaine Lockheed,  Independent Consultant,  mlockheed@verizon.net
Abstract: This panel will discuss the need for assessing the strategic alignment of knowledge and learning with broader organizational goals, the challenges of such assessments, and the methodologies for conducting them. Several World Bank evaluations will be discussed to illustrate strategic learning assessments in large decentralized organizations whose results cannot be measured by a bottom line.
The Challenges of Assessing the Alignment of Knowledge and Learning with Strategic Priorities
Dawn Roberts,  Independent Consultant,  dawn.roberts@starpower.net
Dawn Roberts has conducted an independent review of the World Bank Group's staff learning framework and evaluated learning programs provided by the Bank's professional and technical networks. She has an in-depth understanding of the processes used by large decentralized organizations to align learning with business objectives or strategic priorities and the inherent challenges of trying to measure such alignment. Drawing on specific examples from her studies, Ms. Roberts will frame the major issues related to evaluating whether knowledge and learning initiatives support the strategic direction of an organization. Ideally, such evaluations can rely on the organization having a planning process to identify and build consensus around knowledge and learning priorities vis-à-vis business needs, a data management system to track learning activities and participation, and protocols for monitoring whether activities and products are delivered as planned. The presentation will discuss the implications for evaluations when these conditions are imperfectly met.
Assessing the Triangular Relation between Business Needs, Learning Opportunities, and Learning Consumption
Violaine Le Rouzic,  World Bank,  vlerouzic@worldbank.org
Marlaine Lockheed,  Independent Consultant,  mlockheed@verizon.net
Maurya West Meiers,  World Bank,  mwestmeiers@worldbank.org
Violaine Le Rouzic will discuss the method by which the World Bank Institute Evaluation Group assessed the strategic alignment of the knowledge and learning opportunities offered to World Bank staff with the organization's business priorities. The presentation will explain the triangular alignment assessment between 1) business priorities/performance gaps, 2) the learning plans and opportunities provided to respond to the needs, and 3) the strategic use of these learning opportunities. Ms. Le Rouzic, a senior evaluation officer at the World Bank, has extensive experience in assessing learning in capacity-building organizations that cannot measure their results based on their bottom line. She led the study presented in this session. She has also developed several evaluation toolkits to measure perceived and actual learning. Recently, she co-authored "Advancing a Reporting and Results Framework for the World Bank's External Training."

Session Title: Building Local Evaluation Capacity in K-12 Settings
Multipaper Session 721 to be held in Schaefer Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Katherine Ryan,  University of Illinois at Urbana-Champaign,  k-ryan6@uiuc.edu
Discussant(s):
Rita O'Sullivan,  University of North Carolina, Chapel Hill,  ritao@email.unc.edu
The Need for Evaluation Capacity Building in After-school Programs: Results From the Michigan Evaluation of 21st Century Community Learning Centers
Presenter(s):
Laurie Van Egeren,  Michigan State University,  vanegere@msu.edu
Beth Prince,  Michigan State University,  princeem@msu.edu
Megan Platte,  Michigan State University,  plattmet@msu.edu
Celeste Sturdevant Reed,  Michigan State University,  csreed@msu.edu
Laura Bates,  Michigan State University,  bateslau@msu.edu
Abstract: The Michigan 21st Century Community Learning Centers (21st CCLC) after-school program requires sites both to work with a local evaluator and to participate in a state evaluation. However, sites' ability to collaborate with their local evaluators effectively varies, with implications for their potential to improve program quality. This paper uses the results of the Michigan 21st CCLC state evaluation of 36 grantees and 187 sites to examine: (a) characteristics of sites (e.g., type of organization, site size) and evaluators (e.g., internal or external) associated with differences in administrator attitudes toward evaluation and perceptions of how to use evaluation results; (b) relations between administrators' and staff attitudes about evaluation; and (c) administrators' perceptions of the utility of a standardized but individualized annual reporting process conducted by the state evaluators and designed to build capacity for program improvement, particularly in sites that receive less benefit from working with a local evaluator.
Making Some Headway: An Internal Evaluation Branch's Efforts to Build Evaluation Capacity in an Urban School District
Presenter(s):
Eric Barela,  Los Angeles Unified School District,  eric.barela@lausd.net
Abstract: This paper is a chronicle of an internal evaluation branch's efforts to build internal evaluation capacity in a Southern California urban school district. Prior to 2007, internal evaluators within this school district did little to facilitate evaluation use because stakeholders were not held accountable for implementing findings. Due to personnel, policy, and operational changes at both the branch and district levels, decision makers began to pay more attention to internal evaluation findings and evaluators began to work with stakeholders to ensure that findings were used in appropriate ways. The branch- and district-level processes that led to the change were documented by a participant-observer using an embedded case study framework. The lessons learned from this ongoing process can be helpful both for decision makers seeking to make use of evaluation findings and for internal evaluators trying to build evaluation capacity.

Session Title: Methodological Challenges and Solutions for Business and Industry Evaluators
Multipaper Session 722 to be held in Calvert Ballroom Salon B on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Business and Industry TIG
Chair(s):
Judith Steed,  Center for Creative Leadership,  steedj@leaders.ccl.org
Technology Drives Restructuring of Measurement Teams in Learning Organizations: Doing More With Less in the Professional Services Industry
Presenter(s):
John Mattox,  KPMG,  jmattox@kpmg.com
Darryl Jinkerson,  Abilene Christian University,  darryl.jinkerson@coba.acu.edu
Carl Hanssen,  Hanssen Consulting LLC,  carlh@hanssenconsulting.com
Abstract: Technology is an enabling tool that creates efficiencies on a grand scale. This presentation examines how technology has changed the organizational structure of measurement teams in learning organizations in the professional services industry (a.k.a. the Big 4 accounting firms). The technology in the learning analytics space is becoming robust enough to accommodate the needs of extremely large and complex organizations like the Big 4 professional services firms. Consequently, the organizational structure of the measurement function changes. Technology helps organizations do more with less. In the late 1990s, Arthur Andersen employed 35 evaluation professionals to handle standard and custom evaluations for internal and external clients. Today, the evaluation groups can be as small as one person. During the presentation, tools and organization structure will be described. The presenters will also hold a 10-15 minute open forum to allow attendees to share how their organizations have changed.
The Challenge of Responders/Non-responders in Evaluative Data Collection
Presenter(s):
Judith Steed,  Center for Creative Leadership,  steedj@leaders.ccl.org
Emily Hoole,  Center for Creative Leadership,  hoolee@leaders.ccl.org
Tracy Patterson,  Center for Creative Leadership,  pattersont@leaders.ccl.org
Bill Gentry,  Center for Creative Leadership,  gentryb@leaders.ccl.org
Abstract: The challenge in evaluation is not just analyzing the available data but also understanding what data we might have missed. In this paper, we share our efforts to engage and understand the non-responders in our evaluative assessments of our Business Executives Leadership Development Programs. Typically, our programs have three phases: 1) pre-work, 2) a face-to-face classroom portion, and 3) a follow-on work-based practice period. It is in this third phase that we believe the most learning and impact from the program design occurs. It is also in this third phase that we administer our evaluative/developmental assessment. Our challenges are to keep our participants voluntarily connected and to get them to engage with and complete the evaluative/developmental assessment so that we may gather quality data with generalizable, useful, and actionable findings. This paper presents what we have found in our pursuit of these challenges.

Session Title: Get Those Data off the Shelf and Into Action: Encouraging Utilization Through Innovative Reporting Strategies
Skill-Building Workshop 723 to be held in Calvert Ballroom Salon C on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Evaluation Use TIG
Presenter(s):
Mindy Hightower King,  Indiana University,  minking@indiana.edu
Courtney Brown,  Indiana University,  coubrown@indiana.edu
Marcey Moss,  Indiana University,  marmoss@indiana.edu
Abstract: This session will provide practical strategies to increase utilization of evaluation results, promote program improvement, and help develop strategic vision among a variety of stakeholders. Innovative formats tailored to the diverse information needs of various stakeholder groups will be shared with participants, providing techniques for determining which format best meets stakeholder needs. Participants will be provided with examples used in external evaluation work with small programs and organizations as well as statewide coalitions and initiatives. In addition, participants will be provided with an opportunity to consider and develop alternative reporting strategies for their current projects.

Session Title: Do Serious Design Flaws Compromise the Objectivity and Credibility of the Office of Management and Budget's Program Assessment Rating Tool (PART) Evaluation Process?
Expert Lecture Session 724 to be held in  Calvert Ballroom Salon E on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Government Evaluation TIG
Presenter(s):
Eric Bothwell,  Independent Consultant,  ebothwell@verizon.net
Abstract: The Office of Management and Budget's Program Assessment Rating Tool (PART) evaluation process has been used to assess federal programs for the past six budget cycles. Since its inception it has continued to receive both praise and criticism, but it has not been formally evaluated in the context of evaluation standards and has remained relatively stable in its design over the past several years. This session will address whether the PART evaluation process meets its own standard of being free of design flaws, as expressed in its Question 1.4, which reads: Is the program design free of major flaws that would limit the program's effectiveness or efficiency?

Session Title: Linking Smaller Learning Communities to Student Achievement and Related Outcomes Measures
Think Tank Session 725 to be held in Fairmont Suite on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Shana Pribesh,  Old Dominion University,  spribesh@odu.edu
Discussant(s):
Linda Bol,  Old Dominion University,  lbol@odu.edu
John Nunnery,  Old Dominion University,  jnunnery@odu.edu
Alexander Koutsares,  Old Dominion University,  akoutsares@odu.edu
Abstract: Creating smaller schools/learning communities (SLCs) has been advocated as a specific reform for improving high school student engagement and graduation rates (NRC, 2002). The linkages of smaller learning communities to student achievement have been found to be promising (Felner, Ginter, and Primavera, 1982; NRC, 2002). However, the research connecting the SLC structure with student performance is tenuous, mostly due to methodological issues. We propose a think tank to discuss strategies for evaluating the effect of smaller learning communities on student achievement. This think tank will use case studies to propose innovative, rigorous designs to yield more valid evaluation findings. In addition, we will identify other constructs (e.g., school climate, student self-concept and motivation) theoretically linked to achievement that could be employed as additional outcome measures. The think tank session will be useful for practitioners faced with evaluating smaller schools within schools in public school districts.

Session Title: Higher Education Assessment and Evaluation in a Context of Use and Policy Development
Multipaper Session 726 to be held in Federal Hill Suite on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
William Rickards,  Alverno College,  william.rickards@alverno.edu
Discussant(s):
Molly Engle,  Oregon State University,  molly.engle@oregonstate.edu
Evaluation Use in the Unique Context of Higher Education
Presenter(s):
Georgetta Myhlhousen-Leak,  University of Iowa,  gleakiowa@msn.com
Abstract: This research investigated the factors affecting use and the types of use present in outcomes assessment in higher education. A review of the literature suggests a significant lack of use in assessment. The absence of a definable structure or structures for understanding use and its potential for desirable change has made the process of incorporating outcomes assessment into the processes of higher education a difficult, unclear, and sometimes disjointed experience for many administrators and faculty. Use, as identified and described in the evaluation utilization literature, was applied to investigate how program administrators, faculty administrators, and faculty members who act as evaluators and/or intended users conceptualize use and how those conceptualizations are reflected in planning, implementing, and successfully completing the outcomes assessment process. This research provides evidence and insights into the nature of evaluative use in higher education and adds to the collective knowledge of use and differing contexts of use.
Strengthening Evaluation in Higher Education: Quality Assurance and the New Zealand Tertiary Education Reforms
Presenter(s):
Syd King,  New Zealand Qualifications Authority,  syd.king@nzqa.govt.nz
Abstract: New Zealand has embarked on a programme of reforms to strengthen tertiary (post-compulsory) education provision and outcomes. A significant factor in the success of the reforms is a new system for quality assurance, which is currently being developed. Of central importance has been the decision to shift towards an evaluation methodology focused on the contribution of key processes to desired outcomes. What is quality, and how can it be 'measured'? How can a quality assurance system move from an approach based on compliance with a regulatory framework to one that treats 'quality' as a dynamic concept? How can the system 'focus on outcomes' while considering factors such as learner intake characteristics and needs assessment? This presentation will discuss some of the more challenging issues raised by the reforms and the responses developed to date.

Session Title: Evaluation as an Agent of Program Change: An Example From Austria
Panel Session 727 to be held in Royale Board Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Klaus Zinoecker,  Vienna Science and Technology Fund,  klaus.zinoecker@wwtf.at
Abstract: This session demonstrates the use of evaluation as an agent of change in a major Austrian research program. The evaluation study from which it draws bears not only on program management (that is, how the program in question could be managed more effectively) but also on public policy (that is, whether the program should exist and whether it would benefit from restructuring). The program in question is the Austrian Genome Research Program, GEN-AU (GENome Research in AUstria), Austria's first top-down research grant program. The two presentations will treat the background of the evaluation study, its aims, methods, major findings, implications for program management and public policy, and observations about changes subsequently made in response to the findings.
The Evaluation of Genome Research Austria (GEN-AU): Overview of the Study's Aims, Structure, Methods, Results, Implications, and Impacts
Klaus Zinoecker,  Vienna Science and Technology Fund,  klaus.zinoecker@wwtf.at
Alfred Radauer,  Austrian Institute for SME Research,  a.radauer@kmuforschung.ac.at
Brigitte Tempelmaier,  Austrian Economic Service,  brigitte.tempelmaier@univie.ac.at
Iris Fischl,  Austrian Institute for SME Research,  i.fischl@kmuforschung.ac.at
Roald Steiner,  Austrian Institute for SME Research,  r.steiner@kmuforschung.ac.at
Rosalie Ruegg,  TIA Consulting Inc,  ruegg@ec.rr.com
This presentation provides background for understanding the program and its evaluation and lists the steps in the design process of GEN-AU. It discusses the study time frame and its influence on the selection of the methodological approach, identifies the major players involved with GEN-AU and the types of projects funded, presents the logic chart developed for GEN-AU, and describes the associated systematic identification of the program's outputs, outcomes, and impacts. With this background, findings and implications are discussed, as well as the subsequent reactions of policy makers and program managers. The presentation concludes with an account of changes that appear attributable to the study, and lessons learned.
Developing a Plan for Future Monitoring and Impact Analysis of Genome Research Austria (GEN-AU)
Rosalie Ruegg,  TIA Consulting Inc,  ruegg@ec.rr.com
Absent from the program was a plan for future program evaluation. The study therefore recommended that GEN-AU take appropriate steps to plan and implement evaluation of the program's outputs, outcomes, and impacts. To that end, the logic model developed with GEN-AU's program managers was used to construct an evaluation framework. The program's mission and goals were examined to derive key questions regarding program success. Three sets of potential indicators for monitoring the program's progress were constructed: activity-based, output-based, and outcome-based performance indicators. General guidance was provided for evaluating longer-term program impacts, specific approaches were suggested for addressing the key questions about impact, and parallel recommendations were made for supporting data collection.

Session Title: Leaving No Stone Unturned: Examining the Evaluation of a Statewide Program at the Local Level
Think Tank Session 728 to be held in Royale Conference Foyer on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Laura Feldman,  University of Wyoming,  lfeldman@uwyo.edu
Discussant(s):
Laura Feldman,  University of Wyoming,  lfeldman@uwyo.edu
Tiffany Comer Cook,  University of Wyoming,  tcomer@uwyo.edu
Shannon Williams,  University of Wyoming,  swilli42@uwyo.edu
Abstract: How does one define community readiness for change? How does one assess a program manager's passion, drive, commitment, and wisdom? How does one evaluate unexpected outcomes and incorporate local accomplishments into a statewide comparison? These questions relate to the University of Wyoming Survey and Analysis Center's (WYSAC) evaluation of Wyoming's Tobacco Prevention and Control Program. WYSAC evaluated 21 local programs to assess how well their outcomes aligned with state goals. The evaluation assessed each community's readiness to accept tobacco-related policies using cultural, environmental, risk, and protective factors. The evaluation also documented community activities, including program manager capability, as well as community-specific outcomes. Attendees may choose to participate in one of three groups, each addressing one of the evaluation components: community readiness, program manager characteristics, and community outcomes.

Session Title: Consumer and Family Member Involvement in Evaluating Federally-Funded Initiatives
Multipaper Session 729 to be held in Hanover Suite B on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Cindy Crusto,  Yale University,  cindy.crusto@yale.edu
Abstract: This session will highlight how consumers and family members of children served by three federally-funded system of care initiatives have been integrated into the evaluation of each. The first paper will describe family member participation in all aspects of evaluation decision making, data collection and management, and continuous quality improvement processes. The second paper will compare a consumer-led needs assessment conducted in an urban community with an evaluator-led needs assessment that occurred in the same community. The benefits of engaging consumers in the evaluation process, along with some of the struggles encountered, will be discussed.
Facilitating Family Member Involvement in the Evaluation of a Children's Mental Health Initiative
Cindy Crusto,  Yale University,  cindy.crusto@yale.edu
This paper will present the evaluation plan of a federally funded system of care initiative for children 11 years and younger with severe social, emotional, and behavioral health challenges and their families. A guiding principle of the federal funder and the statewide initiative focuses on family-driven practices and includes the participation and perspectives of family members at all levels of the initiative's development, implementation, and evaluation. The paper will focus on how family members of children with behavioral health challenges have been integrated into the evaluation process, including participation in evaluation decision making, collection and management of data, and involvement in the initiative's continuous quality improvement process. Strategies and lessons learned for increasing meaningful participation of family members and negotiating their roles as evaluation team members will be presented.
Comparison of a Consumer-Led and an Evaluator-Led Needs Assessment
Joy Kaufman,  Yale University,  joy.kaufman@yale.edu
This paper will compare a consumer-led needs assessment conducted in an urban community with an evaluator-led needs assessment that occurred in the same community. In the first assessment, six parents of children receiving services in the community were trained in all aspects of focus group assessment, including protocol development, facilitation, data coding and analysis, and data feedback. The second assessment was completed by a university-based evaluator. The presentation will highlight aspects of the parent training and review the methodology and results from both needs assessments. The presenter will also discuss the benefits and struggles encountered during each assessment.

Session Title: Increasing the Value of Items on a Measure: A Practitioner's Guide to Item Response Theory Analysis
Demonstration Session 730 to be held in Baltimore Theater on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Heather Chapman,  EndVision Research and Evaluation,  hjchapman@cc.usu.edu
Catherine Callow-Heusser,  EndVision Research and Evaluation,  cheusser@endvision.net
Abstract: High-stakes testing, increased accountability, and a focus on evidence-based designs and decision making all indicate that evaluators need to pay more attention to the quality of assessment instruments. Traditional statistical methods used in instrument development yield results with many limitations. Item response theory (IRT) offers a robust statistical technique that can be used in conjunction with, or as a replacement for, older and more traditional statistical methods when creating new assessment instruments. IRT has several benefits that often make it a more suitable choice for instrument development. Many evaluators are unaware of how to use IRT techniques or of the benefit of these techniques to evaluation goals. This demonstration session aims to introduce evaluators to IRT through an explanation of the statistical assumptions of IRT, a demonstration of how to use IRT statistical packages effectively, and an explanation of how to interpret and apply the results.
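For readers unfamiliar with IRT, a minimal sketch follows. It is illustrative only and not drawn from the presenters' materials: it simulates dichotomous responses to a single item under a two-parameter logistic (2PL) model and recovers the item's discrimination and difficulty by maximum likelihood, treating examinee abilities as known to keep the example short (real IRT packages estimate abilities and item parameters jointly). The use of numpy and scipy, the sample size, and the true parameter values are all assumptions made for illustration.

# Illustrative 2PL IRT sketch (assumed libraries: numpy, scipy; simulated data)
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Simulate 1,000 examinees answering one item with true discrimination 1.2 and difficulty 0.5
theta = rng.normal(size=1000)                  # latent abilities (treated as known here)
a_true, b_true = 1.2, 0.5
responses = rng.binomial(1, expit(a_true * (theta - b_true)))  # observed 0/1 answers

def neg_log_likelihood(params):
    a, b = params
    p = expit(a * (theta - b))                 # 2PL probability of a correct response
    p = np.clip(p, 1e-12, 1 - 1e-12)           # guard against log(0)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[1.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
print(f"estimated discrimination = {a_hat:.2f}, estimated difficulty = {b_hat:.2f}")

Interpreting fitted parameters of this kind (for example, flagging items with low discrimination or extreme difficulty) is the sort of task the session's demonstration of IRT software would cover in greater depth.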

Session Title: Summative Confidence: How Accurate are Your Evaluative Conclusions?
Expert Lecture Session 731 to be held in  International Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Brooks Applegate,  Western Michigan University,  brooks.applegate@wmich.edu
Presenter(s):
Cristian Gugiu,  Western Michigan University,  crisgugiu@yahoo.com
Abstract: One of the cornerstones of methodology is that "a weak design yields unreliable conclusions." While this principle is certainly true, the constraints of conducting evaluations in real-world settings often necessitate the implementation of less-than-ideal designs. To date, no quantitative or qualitative method exists for estimating the impact of sampling error, measurement error, and design on the precision of an evaluative conclusion. Consequently, evaluators formulate recommendations, and decision makers implement program and policy changes, without full knowledge of the robustness of an evaluative conclusion. In light of the billions of dollars spent annually on evaluations and the countless millions of lives that are affected, the impact of decision error can be disastrous. This paper will introduce an analytical method for estimating the degree of confidence that can be placed on an evaluative conclusion and discuss the factors that affect the precision of a summative conclusion.
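The abstract does not describe the proposed method itself, but the general idea of attaching a confidence statement to a summative conclusion can be illustrated with a generic, hypothetical sketch: a bootstrap of a weighted composite score, showing how sampling error alone creates uncertainty about the conclusion. This is not the presenter's analytical method; the data, weights, and rating scale are invented for illustration, and numpy is an assumed dependency.

# Generic bootstrap illustration (hypothetical data and weights; not the session's method)
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ratings from 40 program participants on two evaluation criteria (1-5 scale)
ratings = rng.integers(1, 6, size=(40, 2))
weights = np.array([0.6, 0.4])                 # hypothetical importance weights

def composite(sample):
    # Weighted mean of criterion means: the summative score being estimated
    return float(sample.mean(axis=0) @ weights)

# Resample participants with replacement to see how much the composite score varies
n = len(ratings)
boot = [composite(ratings[rng.integers(0, n, n)]) for _ in range(2000)]
low, high = np.percentile(boot, [2.5, 97.5])
print(f"composite = {composite(ratings):.2f}, 95% bootstrap interval = ({low:.2f}, {high:.2f})")

A wide interval here would signal that the summative conclusion rests on shaky ground even before measurement error and design limitations are taken into account, which is the broader concern the session addresses.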

Session Title: A Discussion of AEA's Evaluation Policy Initiative
Panel Session 733 to be held in Versailles Room on Saturday, November 10, 9:35 AM to 10:20 AM
Sponsored by the AEA Conference Committee
Abstract: To Be Announced
William Trochim,  Cornell University,  wmt1@cornell.edu
Hallie Preskill,  Claremont Graduate University,  hallie.preskill@cgu.edu
George Grob,  Center for Public Program Evaluation,  georgeandsuegrob@cs.com
