
Session Title: Program Theory and Theory-Driven Evaluation TIG Business Meeting and Panel: Improving Evaluation Quality by Improving Program Quality: A Theory-based/Theory-driven Perspective
Business Meeting with Panel Session 742 to be held in Lone Star A on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Presidential Strand and the Program Theory and Theory-driven Evaluation TIG
TIG Leader(s):
John Gargani, Gargani + Company, john@gcoinc.com
Katrina Bledsoe, Walter R McDonald and Associates Inc, katrina.bledsoe@gmail.com
Chair(s):
Katrina Bledsoe, Walter R McDonald and Associates Inc, katrina.bledsoe@gmail.com
Discussant(s):
Michael Scriven, Claremont Graduate University, mjscriv1@gmail.com
David Fetterman, Fetterman & Associates, fettermanassociates@gmail.com
Charles Gasper, Missouri Foundation for Health, cgasper@mffh.org
Abstract: The principle that quality evaluation promotes better programs is well accepted. However, the other half of that equation—that quality programs promote better evaluation—is rarely considered. This panel will examine this missing half and suggest how evaluators can foster a virtuous circle of program quality promoting evaluation quality that in turn promotes program quality. By approaching this dynamic relationship from a distinctly theory-based/theory-driven perspective, the panel will address how the real-world problems of program design, execution, and funding provide concrete opportunities for evaluators to use program theory to improve programs while improving their practice.
The Relationship Between Program Design and Evaluation Design
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
How we evaluate depends on what is being evaluated. There is arguably an underlying logic of evaluation, but evaluating a washing machine is different from evaluating a science curriculum or a large-scale social program, and the approach an evaluator chooses can and should take advantage of those differences. In spite of the push for evidence-based programs, however, programs remain stubbornly difficult to describe before an evaluation begins. Programs are largely intangible, often improvised, and rarely designed in systematic ways. In this presentation, I will discuss how evaluators can use program theory to connect evidence to program design, and how they can leverage program designs to improve the quality of their evaluations.
The Expanding Profession: Program Evaluators as Program Designers
John Gargani, Gargani + Company, john@gcoinc.com
Accountants do more than count. Plumbers do more than work with lead. And program evaluators do more than judge the merit or worth of a program. Increasingly, we are being called upon to use our expertise to help design programs and to judge the quality of programs based on their designs. In this presentation, I will discuss how evaluators might undertake the work of designing programs, the role that program theory plays in program design, and how the quality of programs and evaluations can benefit from a better design process.

Session Title: Expanding Evaluation’s Utility and Quality Through System-Oriented Data Synthesis
Demonstration Session 743 to be held in Lone Star B on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Beverly A Parsons, InSites, bparsons@insites.org
Pat Jessup, InSites, pjessup@insites.org
Abstract: Many evaluations using multiple data collection instruments report analyses separately by instrument or subscale. Such reports give the reader interesting facts but little or no synthesis across instruments to help the evaluation user understand how to take actions that have a high likelihood of increasing the value or merit of the evaluand. A system dynamics framework can be a powerful tool for synthesizing the analyses and making meaning of the findings. This session uses three evaluation examples to demonstrate how evaluators used an understanding of two types of system dynamics—organized and self-organizing dynamics—to synthesize data across multiple sources and identify levers for transformative change that support sustainability and scale-up. The examples involve the introduction of online learning in a multi-year professional development initiative for teachers, a whole-school change initiative to improve student achievement in low-performing schools, and a partnership supporting the prevention of child maltreatment.

Session Title: Racial and Ethnic Approaches to Community Health Across the United States (REACH US) Programs: Creating and Evaluating Community-based Coalitions
Panel Session 744 to be held in Lone Star C on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Ada Wilkinson-Lee, University of Arizona, adaw@email.arizona.edu
Discussant(s):
Mari Wilhelm, University of Arizona, wilhelmm@ag.arizona.edu
Abstract: Racial and Ethnic Approaches to Community Health across the U.S. (REACH US) programs are grounded in the socio-ecological framework to address racial health disparities. The task of REACH US evaluators is to develop evaluation plans based on target communities' needs and input while also evaluating and capturing the outcomes of the socio-ecological model. This session identifies strategies, initial outcomes, and lessons learned in the effort to measure community-based coalitions and describes the process of forming a collaborative subgroup of grantees to identify core indicators of coalitions. Each presenter will offer a unique perspective on their successes and challenges with their target community and specific health disparity. These evaluators will also discuss the process of forming a subgroup of grantees that collaborates across various REACH US programs to identify core indicators of community-based coalitions that can be standardized and psychometrically tested.
Pima County Cervical Cancer Prevention Partnership (PCCCPP): Evaluation of a Community-based Coalition Addressing Cervical Cancer Among Mexican American Women
Ada Wilkinson-Lee, University of Arizona, adaw@email.arizona.edu
Martha Moore-Monroy, University of Arizona, mmonroy@medadmin.arizona.edu
Francisco Garcia, University of Arizona, fcisco@email.arizona.edu
Mari Wilhelm, University of Arizona, wilhelmm@ag.arizona.edu
Despite dramatic reductions in U.S. cervical cancer mortality over the last 50 years, Mexican American women continue to experience significant disparities related to cervical cancer. The Pima County Cervical Cancer Prevention Partnership (PCCCPP) is a community-based partnership that received funding in 2007 from the Centers for Disease Control and Prevention’s Racial and Ethnic Approaches to Community Health across the U.S. (REACH US) program. The development and implementation of community-based evaluation is essential in documenting the success of this REACH US program. One evaluation goal was to capture the dynamics of the coalition through a mixed-methods approach utilizing surveys and interviews. Another goal was to collaborate with other REACH US grantees to create a set of standardized coalition core indicators that could be utilized across several REACH US programs. The ultimate goal is to conduct psychometric testing and provide the field with a measure of community-based coalitions.
Evaluation of a Community-Academic Partnership Using a Mixed-methods Approach: B Free CEED: National Center of Excellence in the Elimination of Hepatitis B Disparities
Nancy VanDevanter, New York University, nvd2@nyu.edu
Shao-Chee Sim, Charles B Wang Community Health Center, ssim@cbwchc.org
Simona Kwon, New York University, simona.kwon@nyumc.org
Partnerships between academic institutions and community organizations to reduce health disparities in affected communities have been widely recommended. Intrinsic to their success is the ability of collaborators to function effectively in addressing mutually established goals. B Free CEED, a five-year project funded under the Centers for Disease Control and Prevention’s REACH US program, is a partnership of the NYU School of Medicine and local and national coalition members. The mission of B Free CEED is to develop and disseminate multi-level, evidence-based practices to address hepatitis B disparities affecting Asian and Pacific Islander communities. An annual partnership evaluation was developed and shared with REACH US grantees, consisting of qualitative key informant interviews with community and academic members to explore the context of coalition function and a quantitative survey addressing general satisfaction, impact, trust, decision-making, and adherence to CBPR principles. A subgroup of grantees is working to identify core indicators to produce a standardized tool to be piloted across several REACH US programs.

Session Title: Evaluation in Foundations: The State of the Art
Panel Session 745 to be held in Lone Star D on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Richard McGahey, Ford Foundation, r.mcgahey@fordfound.org
Discussant(s):
Lester Baxter, Pew Charitable Trusts, l.baxter@pewtrusts.org
Mayur Patel, John S and James L Knight Foundation, patel@knightfoundation.org
Abstract: This session will present information from a benchmark survey and analysis of evaluation practices at leading major U.S. foundations, and a presentation on performance measurement approaches, tools, and practices in foundations and nonprofits. The presenters are two of the leading experts in the field. Commentators will be three directors of evaluation from three major national foundations. Having two presentations will allow for maximum dialogue among the panelists and with the audience.
Evaluation Practice at Major United States Foundations: Report on a Benchmark Survey and Analysis
Patricia Patrizi, The Evaluation Roundtable, patti@patriziassociates.com
This presentation will discuss the results of a landmark survey of 31 major U.S. foundations regarding their structure, financing, staffing, and use of evaluations. The survey was supplemented by follow-up interviews and is unique in providing benchmark data on these important issues.
Performance Measurement at Foundations and Nonprofits: Tools, Techniques, and Effective Practice
Elizabeth Boris, Urban Institute, eboris@urban.org
This presentation will discuss how foundations and nonprofits are using performance measurement tools, as well as some innovative approaches to making those tools more widely available to the sector. It will also offer a critical analysis of how the sector is responding to the challenges and opportunities posed by the increasing demand for accountability and performance measurement.

Session Title: Ten Steps to Making Evaluations Matter: Designing Evaluations to Exert Influence
Expert Lecture Session 746 to be held in  Lone Star E on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Melvin Mark, Pennsylvania State University, m5m@psu.edu
Presenter(s):
Sanjeev Sridharan, University of Toronto, sridharans@smh.toronto.on.ca
Abstract: Much of the discussion on evaluation design has focused on issues of causality. There has been far less focus on how design can be informed by considerations of pathways of influence. This session proposes ten steps to make evaluations matter. State-of-the-art quantitative approaches will be integrated with qualitative approaches. These ten steps are informed by program theory, learning frameworks and pathways of influence, evaluation design and learning, and spread and sustainability.

Session Title: Partner Roles in a Multi-site Evaluation: The Viewpoints and Experiences of the Cross-site Evaluator and the State Program Coordinator
Panel Session 747 to be held in Lone Star F on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Kristin Everett, Western Michigan University, kristin.everett@wmich.edu
Abstract: A two-person panel will share viewpoints and experiences about their roles and responsibilities as a cross-site external evaluator and a program coordinator in a multi-site effort to evaluate a program to improve teacher quality. Session attendees will learn the pros and cons of the evaluation design as experienced in the multi-site evaluation. Additionally, the external evaluation team will describe how it aimed to build evaluation capacity at the local project level and provide evaluation technical assistance to local project directors. The panel will address practical dimensions applicable to other multi-site projects: planning, internal/external communication, evaluation capacity-building, data collection, data analysis and interpretation, technical assistance, reporting procedures, data use, and report preparation. Longitudinal aspects of this six-year cross-site evaluation will also be explored. Sample procedures, instruments, and other materials will be shared. This model can be applied across large or small sets of projects and geographic areas.
Evaluating a Statewide, Multi-site Program: The External Evaluator’s Role
Kristin Everett, Western Michigan University, kristin.everett@wmich.edu
In a multi-site evaluation, the cross-site evaluator plays an important role, working closely with both program coordinators and individual grantees. Attendees at this session will hear about the different evaluative activities used in a cross-site evaluation, the pros and cons of each, and “lessons learned” in the evaluation. In this presentation, a member of the external evaluation team will review the evaluation model used for this cross-site initiative and the role of evaluators in it. Topics will include the external evaluator’s role in RFP development and proposal review/selection of grantees, technical assistance to program coordinators and individual projects, evaluation capacity-building, data collection, “mining” data from individual project reports, analysis of longitudinal trends, site visits (including observations and interviews), and data reporting.
Evaluating a Statewide, Multi-site Program: The Program Coordinator’s Role
Donna Hamilton, Michigan Department of Education, hamiltond3@michigan.gov
The program coordinator will provide her perspective on the roles and responsibilities of the various players in a cross-site evaluation. She will address making evaluation information timely, relevant, and useful; using cross-site evaluations in state-level decision making; and using evaluation results to modify programs. The coordinator has many roles and responsibilities in facilitating the statewide initiative and the evaluation. By learning about the roles of the coordinator, attendees will be better equipped to work with coordinators. The coordinator and evaluator organize activities to minimize duplication of effort and increase efficiency. The coordinator solicits assistance from evaluators in RFP development; RFP announcement/technical assistance sessions and proposal review; periodic grantee review sessions; coordinated site visits; preparation of evaluation findings for targeted audiences; and collection of selected evaluation data. These discussion topics will give attendees ideas about additional types of evaluative activities that may be helpful when working with a multi-site program.

Roundtable: Translating Findings Into Client Action
Roundtable Presentation 748 to be held in MISSION A on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Independent Consulting TIG
Presenter(s):
Judith Russell, Independent Consultant, jkallickrussell@yahoo.com
Abstract: Do you ever feel that your clients don’t know what to do with the findings from your evaluation? Do you feel that your findings don’t fit within the larger organizational context? Do you want to learn more ways in which you can help your client to better translate your findings and recommendations into effective actions? In this roundtable participants will share tools and methods which encourage clear linkages between an evaluation and concrete actions for improved client performance. It will include program and systems evaluation approaches with examples of research design, recommendation frameworks, and client activities which have produced concrete, clear actions by the client based on evaluation findings. Bring your experiences and ideas to the table for a dynamic discussion with a variety of approaches.

Session Title: Evaluation of Interventions and Assessments of Individuals With Disabilities
Multipaper Session 749 to be held in MISSION B on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Special Needs Populations TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Julia Shaftel,  University of Kansas, jshaftel@ku.edu
One State’s Experience Implementing Links for Academic Learning
Presenter(s):
Julia Shaftel, University of Kansas, jshaftel@ku.edu
Abstract: Links for Academic Learning (Flowers, Wakeman, Browder, & Karvonen, 2007) is a comprehensive process developed by the National Alternate Assessment Center for the purpose of assessing the alignment of a state’s alternate assessment based on alternate achievement standards (AA-AAS) with the state’s general assessment. This process includes the evaluation of a state’s extended standards; alternate assessment test items, performance requirements, and scoring; professional development; and instructional practices. This presentation highlights implementation and interpretation issues encountered in one state’s use of Links for Academic Learning. Specifically, methodological changes and enhancements were made to the review process that led to richer data collection and interpretation opportunities. These outcomes will be presented along with discussion of definitions and procedures used in this application of Links for Academic Learning. This presentation will enrich current concepts in the evaluation of instruction and assessment of students with significant disabilities.
Using a Logic Model to Evaluate the Implementation and Effectiveness of a Complex Intervention
Presenter(s):
Celine Mercier, University of Montreal, cmercier.crld@ssss.gouv.qc.ca
Diane Morin, University of Quebec at Montreal, morin.diane@uqam.ca
Virgine Cobigo, Queen's University at Kingston, virgine.cobigo@gmail.com
Astrid Brouselle, University of Sherbrooke, astrid.brouselle@usherbrooke.ca
Abstract: A logic model was developed to evaluate the implementation and the effectiveness of a residential program for persons with intellectual disabilities or autism spectrum disorders and challenging behaviours. This transitional facility was designed to assess and stabilize difficult clients with the aim of maintaining them in a long-term, community-based facility rather than transferring them to a more restrictive living environment. The program, based on assumptions derived from available evidence in the scientific literature, was complex, with hypothesized active components at three levels: architectural and material, organizational, and clinical. The program was expected to generate positive outcomes for staff, clients, and the long-term residential facility. This presentation focuses on how the logic model was used to: 1) integrate the implementation and effectiveness components of the evaluation; 2) guide the dynamics of the evaluation process; and 3) support interactions between the stakeholders and the evaluation team.

Session Title: Evaluations of Community Nonprofits and New Organizations or Developing Programs: Lessons From the Field
Skill-Building Workshop 750 to be held in BOWIE A on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Independent Consulting TIG and the Non-profit and Foundations Evaluation TIG
Presenter(s):
Gary Miron, Western Michigan University, gary.miron@wmich.edu
Kathryn Wilson, Western Michigan University, kathryn.a.wilson@wmich.edu
Michael Kiella, Western Michigan University, mike.kiella@charter.net
Abstract: This skill-building session reviews lessons learned from more than two dozen program evaluations of community nonprofit organizations and relatively new programs within public sector organizations. Some of the specific topics that will be covered include the following: (1) conducting evaluation when clients and stakeholders have limited understanding of the purpose and process of evaluation; (2) articulating program logic to guide the evaluation; (3) dealing with misleading information from the client; (4) understanding and addressing pressure from organizations to commission evaluations in order to help sell their program and attract grants; (5) creative means to address or compensate for a lack of content expertise; (6) measuring impact; (7) designing and conducting evaluation when programs are still evolving or are in the midst of change; and (8) strategies for conducting evaluation on a shoestring budget.

Session Title: Slow Down, You Move Too Fast: Calibrating Evaluator Engagement to the Pace of Campaigners and Advocates When Developing Theories of Change
Panel Session 751 to be held in BOWIE C on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Mary Sue Smiaroski, Oxfam International, marysue.smiaroski@oxfaminternational.org
Abstract: Many development practitioners, especially those engaged in campaigning and advocacy, are activists with little time for theoretical discussions or in-depth evaluative processes. Yet articulating a theory of change (TOC) with adequate specificity and committing to testing it can assist practitioners in multiple ways: by helping frame real-time strategic reflection in the face of complexity and uncertainty; by making cause-and-effect assumptions explicit so they can be tested, allowing for course correction and better allocation of resources; and by better meeting accountability obligations and garnering external support. Gabrielle Watson and Laura Roper will draw from their extensive experience working with advocates and campaigners at Oxfam and other organizations and share how they have approached the challenge of engaging with high-octane activists to develop stronger evaluative practice. This will be a highly interactive session, and we look forward to learning from participants’ experiences and insights.
Strategic Intuition and Theory of Change: Connecting the Dots for Stakeholders
Laura Roper, Brandeis University, l.roper@rcn.com
It has been the experience of this evaluator that advocates, even in highly strategic and effective campaigning efforts, are often flummoxed when asked to explain their theory of change and how they intend to test it. Because campaigns are labor intensive, costly, involve a lot of “invisible work” (e.g., managing relationships, building consensus, cultivating media and allies), and often do not result in clear policy victories, advocacy teams can find themselves in a vulnerable position in a time of resource constraints in institutions with competing priorities. Yet M&E is almost always a low priority, even more so when teams are under pressure to deliver. Evaluators have to find light, efficient, accessible processes that help advocacy teams unpack and articulate their reality so that they are active participants in setting the terms on which their work is evaluated. This presentation draws from several experiences with Oxfam and other organizations on how this might be done.
Developing, Testing, and Building Data Systems Around Theory of Change: A Year in the Life of an Embedded Monitoring, Evaluation and Learning Staffer
Gabrielle Watson, Oxfam America, gwatson@oxfamamerica.org
Oxfam America has been working to develop a systematic approach to policy advocacy monitoring, evaluation and learning since 2007. Two campaigns were assessed through end-of-campaign staff debriefs and external summative reviews in 2007 and 2008. But Oxfam recognized that policy advocacy efforts are characterized by high degrees of unpredictability and complexity, that significant shifts in external context are the norm, and that advocates need constant feedback and up-to-the-minute intelligence about shifts in the policy-making environment. Further, managers and external stakeholders need to understand context and the complexity of policy change processes in order to appreciate the significance of intermediate results and claims of contribution or attribution. But how to do this? Recognizing that this is a classic adaptive challenge, Oxfam embedded a staff person inside its largest campaign, the climate change campaign, to develop MEL systems and tools “from the inside out.” This presentation shares lessons from this experience.

Roundtable: Increasing Access Through Openness? Evaluating Open Educational Resources (OER) in Himalayan Community Technology Centres
Roundtable Presentation 752 to be held in GOLIAD on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Presenter(s):
Tiffany Ivins, Brigham Young University, tiffanyivins@gmail.com
David D Williams, Brigham Young University, david_williams@byu.edu
Randy Davies, Brigham Young University, randy.davies@byu.edu
Shrutee Shrestha, Brigham Young University, shruti722@gmail.com
Abstract: Eliminating various forms of poverty is directly linked to improving opportunities for education in the developing world. Despite this, one-fifth of the world’s population is still denied access to quality educational opportunity (UNESCO, 2006). Effectively disseminating information in developing countries requires a continuous focus on removing obstacles that stand in the way of the right to education (Tomasevski, 2006). This is best achieved through a holistic approach focused on sustainable and context-sensitive literacy programming conducted by locals for locals, with particular regard to tailored content collection and dissemination (Chambers, 2000). Open Educational Resources (OER) offer an opportunity to reach more learners with localized content that is freely available yet often inaccessible to those who need it most. This OER evaluation investigates Open Content for Development (OC4D), a nascent educational portal that bridges this knowledge gap through customized ICT tools aimed at reaching more rural poor learners through a cost-effective, sustainable approach.

Roundtable: Use, Ethics, and Not Giving Clients What They Ask For
Roundtable Presentation 753 to be held in SAN JACINTO on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Evaluation Use TIG
Presenter(s):
Rachael Lawrence, University of Massachusetts, Amherst, rachaellawrence@ymail.com
Sharon Rallis, University of Massachusetts, sharallis@gmail.com
Abstract: In 2009, we were contracted to evaluate degrees of collaboration between two agencies functioning as state contractors to serve adolescents institutionalized in mental health facilities. One agency provides residential treatment for adolescents; the other provides schooling. The final evaluation product was to be a “metric [that the agencies could then use] for measuring the effectiveness of collaboration.” From observations and interviews, we saw that no shared definitions or practices of collaboration existed across sites. Since we believe the agencies could not measure what they could not define, we chose not to deliver the requested metric. What we reported instead provided clarifications and descriptions that stakeholders found more insightful and thus ultimately more useful than a set of manufactured and irrelevant metrics. This roundtable discusses and analyzes design, execution, decisions, and resulting use through a framework that draws on the AEA Guiding Principles for Evaluators and various theories of evaluation use.

Session Title: The Impact of Exogenous Factors in Classroom Evaluation
Multipaper Session 754 to be held in TRAVIS A on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Susan Rogers,  State University of New York at Albany, susan.rogers.edu@gmail.com
Discussant(s):
Rhoda Risner,  United States Army, rhoda.risner@us.army.mil
10% for Attendance & Participation: Evaluating Assessment Practices in Undergraduate Courses
Presenter(s):
Susan Rogers, State University of New York at Albany, susan.rogers.edu@gmail.com
Kristina Mycek, State University of New York at Albany, km1042@albany.edu
Abstract: Instructors who are otherwise rigorous may succumb to the inclusion of a “participation” grade in the course syllabus. Is this practice valid? Is it widespread? What are the implications? While there is an assumption in the literature that many undergraduate instructors do assign a grade for participation, there is a scarcity of research exploring this phenomenon. Evaluations in higher education often focus on overall program goals; however, assessment in the classroom is a key indicator of the design and delivery of education in the university. The current paper investigates the heretofore unconfirmed ubiquity of grading participation in undergraduate classes. Instructor practices are assessed through an online survey instrument (n=352) and explored through principal components analysis. Findings suggest that particular pedagogical attitudes may be related to how, and why, participation grades are assigned. Implications for evaluation in higher education are explored.
Tables and Chairs: The Effects of Classroom Design on Students and Instructors in Higher Educational Settings
Presenter(s):
Martin Wikoff, Krueger International, martin.wikoff@ki.com
Susan Rogers, State University of New York at Albany, susan.rogers.edu@gmail.com
Abstract: Optimizing formal learning environments has been the province of researchers and educators for many years. The design of so-called learning spaces has also caught the attention of architects and interior design professionals. The purpose of this research is to evaluate the effect of classroom design on learning outcomes and the attitudes of students and faculty. Three classrooms were structured in different formats: 1) highly flexible, 2) moderately flexible, and 3) traditional. A mixed-methods approach was used to collect data from a variety of sources. Analyses were conducted to determine whether these structures impacted student learning (as measured through course test scores) and student and instructor attitudes, measured using Burgess and Kaya’s (2007) Classroom Attitude Scale. Multiple observations were conducted to document the actual use of the furniture in the classrooms. Findings and implications for institutions of higher education, as well as for evaluations conducted in university settings, are discussed.

Session Title: Data Dashboard Design for Quality Monitoring and Decision Making
Demonstration Session 755 to be held in TRAVIS B on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Veronica Smith, data2insight, veronicasmith@data2insight.com
Tarek Azzam, Claremont Graduate University, tarek.azzam@cgu.edu
Abstract: While the business intelligence industry has excelled in generating robust technologies that are able to handle huge repositories of data, little progress has been made in turning that data into knowledge. Data dashboards represent the most recent attempt at turning data into actionable decisions. However, they often do not live up to their potential. This demonstration will provide: 1) a historical overview of the evolution of the dashboard; 2) strengths and limitations of the dashboard as a communication, monitoring, and self-evaluation tool; 3) key dashboard and metric design principles and practices; and 4) real-world examples of dashboards using different software packages. Participants will leave the demonstration with a clear understanding of appropriate dashboard applications, how this technology tool can be used to tap into the tremendous power of visual perception to communicate, and vetted resources for putting dashboards to work for their stakeholders.

Session Title: Strengthening the Learning Culture Within Organizations and Projects
Think Tank Session 756 to be held in TRAVIS C on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
David Scheie, Touchstone Center for Collaborative Inquiry, dscheie@touchstone-center.com
Nan Kari, Touchstone Center for Collaborative Inquiry, nkari@comcast.net
Discussant(s):
Jessica Shao, Independent Consultant, (415) 889-7084
Scott Hebert, Sustained Impact, shebert@sustainedimpact.com
Ross Velure Roholt, University of Minnesota, rossvr@umn.edu
Abstract: In this Think Tank, participants can explore the challenges and promising strategies that help to shift organizational and project cultures to include more room for critical reflection and dialogue, thus strengthening ongoing learning and innovation. Drawing on experiences of evaluation projects in a variety of settings, facilitators will sketch some of the reasons for working to enlarge learning cultures and name pressures that tend to constrain free inquiry. Then small groups will engage in dialogue using the following prompts: • What are the conditions in which learning cultures thrive, and how can those conditions be established? • What specific practices have you seen effective in opening up habits of honest inquiry and reflection? • How can a vibrant learning culture be disruptive? What are potential risks if evaluation embraces a learning culture approach? • What benefits have you seen result from a robust organizational learning culture?

Session Title: Building Capacity in Community-Level Organizations
Multipaper Session 757 to be held in TRAVIS D on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Debbie Zorn,  University of Cincinnati, debbie.zorn@uc.edu
Discussant(s):
Rebecca Gajda Woodland,  University of Massachusetts, Amherst, rebecca.gajda@educ.umass.edu
Capacity Building: Grass Roots Agencies – Grass Roots Funding
Presenter(s):
John Kelley, Villanova University, john.kelley@villanova.edu
Abstract: Rarely do small, community-based foundations devote significant resources to developing an evaluation capacity building training model and offering it, cost-free, to grass roots agencies. Yet this is exactly what the Phoenixville Community Health Foundation did in 2007 when it commissioned an experienced evaluator to design and deliver a training sequence to help non-profits construct solid, “doable” evaluation plans. This presentation explores why the foundation used its limited resources this way. The training curriculum and homework assignments are described, emphasizing several innovative techniques. Four monthly half-day sessions culminate with agencies authoring customized evaluation plans using a 13-step model and then turning to plan implementation. Six months later, a follow-up session is held wherein agencies discuss progress. Evaluation data were collected in each of the four years (2007 through 2010) on the training, the quality of the plans, and the factors that facilitated or restrained putting plans into action. These results will also be reported as “lessons learned.”
Monitoring and Evaluation (M&E) for Me! Building the Monitoring and Evaluation Capacity of Community-based HIV/AIDS Programs in Thailand
Presenter(s):
Anne Coghlan, Pact Inc, acoghlan@pactworld.org
Tatcha Apichaisiri, Pact Inc, tatcha@pactworld.org
Supol Singhapoon, Pact Inc, supol@pactworld.org
David Dobrowski, Pact Inc, ddobrowolski@pactworld.org
Abstract: Within the international HIV/AIDS arena, much of the focus of community-based organizations’ (CBOs) monitoring and evaluation (M&E) efforts has been on satisfying the reporting requirements of donor agencies. However, for CBOs to maximize their potential and sustain their efforts, they need to design and conduct M&E to meet their own needs. Pact, Inc., a U.S. international development organization, is conducting the in-depth M&E Capacity Building Initiative, “M&E for Me!”, with a variety of HIV/AIDS CBOs throughout Thailand. Pact’s approach is to conduct a series of participatory skills-building workshops, along with corresponding site visits, to coach CBO staff in developing logic models, building M&E plans, designing and implementing data collection methods, analyzing data, and using results. This paper will describe the processes used in, the results from, and the lessons learned from the capacity building initiative, as reflected upon and measured through a series of mixed-method assessments.

Session Title: Evaluation in Medical Education
Multipaper Session 758 to be held in INDEPENDENCE on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Chris Lovato,  University of British Columbia, chris.lovato@ubc.ca
Discussant(s):
Linda Lynch,  United States Army, sugarboots2000@yahoo.com
Using Evaluation to Inform Curriculum Renewal: A Case Example From Medical Education
Presenter(s):
Chris Lovato, University of British Columbia, chris.lovato@ubc.ca
Linda Peterson, University of British Columbia, linda.peterson@ubc.ca
Helen Hsu, University of British Columbia, helen.hsu@ubc.ca
Abstract: Evaluation is a key aspect of curriculum renewal. To inform the curriculum renewal process for undergraduate medical education at the University of British Columbia, we conducted an evaluation to identify program strengths and weaknesses. The study included a stakeholder survey, accreditation reports, internal evaluations, and specialized surveys, as well as studies using external data from student exams and national surveys. Results from each data source were interpreted separately. A protocol involving ratings and consensus by a group of evaluators was used to produce a final set of strengths and weaknesses. We will highlight the processes used to provide decision-makers with a final list of strengths and weaknesses based on the synthesized evidence, and discuss the approach and tools used, along with the lessons learned in working with the decision-making group. Finally, we will discuss how this process might be applied in a wide range of educational settings.
Tracking Long-Term Outcomes in Medical Undergraduate Education: Evaluating Readiness for Residency Training
Presenter(s):
Helen Hsu, University of British Columbia, helen.hsu@ubc.ca
Terri Buller-Taylor, Independent Consultant, btconsul@telus.net
Holly Buhler, University of British Columbia, holly.buhler@ubc.ca
Chris Lovato, University of British Columbia, chris.lovato@ubc.ca
Abstract: As a part of an initiative to evaluate the long-term outcomes of its fully-distributed undergraduate medical education program, The University of British Columbia (UBC) is collecting information about its medical graduates as they progress through residency and into practice. This information will contribute to evaluating the long-term impact of the program on British Columbia’s health workforce. As a part of this initiative, the Evaluation Studies Unit, Faculty of Medicine, has undertaken a study to evaluate the readiness of undergraduates entering residency training. This paper will describe the development of this study which included an environmental scan, development of a survey tailored to the UBC context, a pilot survey of residency supervisors, and triangulation of survey data with individual performance assessments. Challenges and lessons learned in developing and implementing the evaluation will be discussed, including insights related to enhancing utility, feasibility, credibility, and propriety in this type of initiative.

Session Title: Guidelines for Independent Consultants/Evaluators Working With Universities: Complying With Federal Funding Source Requirements, Budgets, Contracts, and Other Unique Issues
Demonstration Session 759 to be held in PRESIDIO A on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Independent Consulting TIG and the Government Evaluation TIG
Presenter(s):
Mary Anne Sydlik, Western Michigan University, mary.sydlik@wmich.edu
Abstract: The proposed demonstration will discuss questions an independent consultant may face when asked to evaluate federally funded, university-based programs. First, how do you determine evaluation expectations for the National Science Foundation, the National Institutes of Health, and the U.S. Department of Education? For example, what are the agency-specific evaluation requirements, and what information will need to be contributed to the PI’s annual and final reports? Second, how do agency-specific budget requirements affect what can be charged for the evaluation? Third, which documents does the submitting institution need for each kind of agency grant submission, and what is the timeframe? Information requirements for NSF FastLane submissions are quite different from those for grants.gov submissions. Fourth, guidelines and expectations for preparing and submitting a university-based evaluation project subcontract will be shared. Finally, ideas about how to build lasting relationships with university customers for future collaborative program evaluations will be discussed.

Session Title: Evaluation Capacity Building Through State Affiliates
Panel Session 760 to be held in PRESIDIO B on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Robert Shumer, University of Minnesota, rshumer@umn.edu
Abstract: From GPRA to NCLB, and everything in between, evaluation is demanded by more and more organizations and agencies. In order to meet that demand we need to find new ways of teaching, recruiting, and providing professional development for the evaluation field. In this session we discuss the efforts of the Minnesota Evaluation Association, a state affiliate of AEA, to conduct outreach and build capacity for evaluation in all sectors of society. Panelists will discuss how the organization developed a strategic plan and then partnered with the Minnesota Campus Compact to hold regional meetings about evaluation practice. They will then discuss a state-wide study of all colleges and non-profits (in partnership with the Minnesota Council of Nonprofits) to discover all those who teach and conduct evaluations. We conclude by discussing the effects of these efforts on the evaluation field.
Developing a Statewide Evaluation System
Robert Shumer, University of Minnesota, rshumer@umn.edu
Developing a statewide evaluation system is one role played by AEA state affiliates. In this presentation we discuss how the Minnesota Evaluation Association developed a strategic plan that called for coordinating efforts with other state organizations around evaluation, especially teaching and conducting evaluations in many different settings. We describe how the MN EA collaborated with the Minnesota Campus Compact to develop a series of six regional meetings dealing with evaluating civic engagement. Those meetings resulted in a plan to study all the faculty members who teach evaluation in colleges and universities, and then produced an evaluation planning document. In addition, the two organizations partnered with the Minnesota Council of Nonprofits to conduct a study of evaluation teachers and evaluation service providers to produce a system of names and activities to be used to build evaluation capacity in the state. Robert Shumer is past president of the MN EA and initiated the project. He has 20 years of experience evaluating civic engagement issues.
Creating An Evaluation System for Civic Engagement
Julie Plaut, Minnesota Campus Compact, julie@mncampuscompact.org
Conducting civic engagement programs is one of the primary goals of Minnesota Campus Compact. As a result of recent inquiries into important topics for Compact members, evaluation was determined to be one of the highest priority areas. In this session we discuss how the Minnesota Campus Compact joined forces with the MN EA to conduct a series of regional meetings focusing on the evaluation of civic engagement programs. Those meetings produced a general resource guide on evaluation issues and resources. They also included a study of evaluation faculty and individuals conducting evaluation in non-profits. The session concludes by discussing the results of both these efforts and how evaluation initiatives are becoming a part of the Minnesota Campus Compact plan. Julie Plaut is the Executive Director of the Minnesota Campus Compact. She has many years of experience creating evaluation systems for civic engagement programs.

Session Title: Evaluation Capacity in School Mental Health: Lessons From School Counseling
Multipaper Session 761 to be held in PRESIDIO C on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Melissa Maras,  University of Missouri, marasme@missouri.edu
Discussant(s):
Paul Flaspohler,  Miami University, flaspopd@muohio.edu
Getting to Outcomes in School Counseling: Examining State-wide Practices to Support Evaluative Practice
Presenter(s):
Melissa Maras, University of Missouri, marasme@missouri.edu
Stephanie Coleman, University of Missouri, slccm7@mail.missouri.edu
Norm Gysbers, University of Missouri, gysbersn@missouri.edu
Bragg Stanley, Missouri Department of Elementary and Secondary Education, bragg.stanley@dese.mo.gov
Keith Herman, University of Missouri, hermanke@missouri.edu
Abstract: This presentation will describe a state-wide, systems-level effort to better align existing practice and evaluation resources within a framework that will support and augment increased evaluation capacity among school counselors in Missouri. Specifically, this presentation will demonstrate how one state is organizing existing curriculum, evaluation surveys, needs and resource assessments, and other existing tools into a Getting to Outcomes (Chinman, Imm & Wandersman, 2004) framework such that resources are coordinated with their specific use (e.g., needs & resource assessment fits with the planning step of the framework). Summaries of research identifying strengths and challenges at each step will be presented as well as plans to increase evaluation capacity within school counseling.

Roundtable: Using Site Visits to Improve Programs
Roundtable Presentation 762 to be held in BONHAM A on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Emily Hagstrom, Ciurczak & Co Inc, emily@ciurczak.net
Caroline Taggert, Ciurczak & Co Inc, caroline@ciurczak.net
Abstract: Formal site visit reports are an increasingly required component of evaluations. In addition to providing evidence of fidelity of implementation, site visits can be a source of useful feedback and recommendations to program staff. Providing site visit feedback to staff helps programs continuously improve their services and operations. Unannounced site visits and site visits conducted for the benefit of the client, rather than just to check on fidelity of implementation, have unique characteristics. For example, external evaluators can provide feedback on actual program processes prior to official visits from funding agencies. This roundtable presentation will discuss benefits of conducting site visits and providing feedback to staff, as well as examples of programmatic changes that have resulted from site visits. The presentation will also provide tools and tips for conducting effective site visits. Participants will discuss ethical challenges and concerns that arise from negative observations, and how to address these issues.

Session Title: Control Groups and Cost Analysis: Innovative Approaches to College Access Program Evaluation
Multipaper Session 763 to be held in BONHAM B on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the College Access Programs TIG
Chair(s):
Kurt Burkum,  ACT, kurt.burkum@act.org
The Impact of Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) on Student Academic Preparedness
Presenter(s):
Kirsten L Rewey, Action Consulting and Evaluation Team (ACET) Inc, kirsten@acetinc.com
Joseph Curiel, Action Consulting and Evaluation Team (ACET) Inc, joseph@acetinc.com
Michael C Rodriguez, University of Minnesota, mcrdz@tc.umn.edu
Stella SiWan Zimmerman, Action Consulting and Evaluation Team (ACET) Inc, stella@acetinc.com
Abstract: A Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) project is currently being implemented in two large, urban school districts in Minnesota. The goal of the federal GEAR UP program is to increase college attendance in traditionally under-represented groups by providing a wide range of college planning and preparation services, academic tutoring, and information on career and postsecondary options to students and their parents. The goal of the evaluation is to determine the impact of the GEAR UP program on student academic preparedness. Program impact will be determined by following a group of GEAR UP 10th grade students and a matched comparison group of students drawn from non-GEAR UP schools in the two districts of implementation and comparing performance between groups on a statewide, high-stakes test (Minnesota’s No Child Left Behind assessment). The presentation will summarize initial impact findings, review evaluation successes and challenges, and describe lessons learned.
Evaluation Quality From an Unexpected Combination: How Evaluation Management, a Participative Process, and Program Theory Enhanced the Development of a Cost Analysis Framework
Presenter(s):
Kathryn Hill, Minnesota Office of Higher Education, kathy.hill@state.mn.us
Abstract: This paper presentation will describe the development of a cost analysis framework for a college access program evaluation, from the perspective of an internal evaluation manager of a large, multi-site evaluation. The primary evaluation activity to be discussed is the interactive process of involving program staff in the articulation of program theory through guided interviews facilitated by external evaluators. The guided interview tool will be shared, along with key findings. The preliminary results of this collaborative effort between internal evaluation manager, external evaluation contractors, and program staff include the estimated average cost per participant of the program and the estimated program costs by activity (i.e., program component). The developed framework will be linked in the future with a separate but interdependent research study investigating the relationship between program participation, student perceptions, and student academic outcomes.

Session Title: Promoting Truth and Justice: Evaluation’s Role in Teacher Education Programs for Candidates From Underrepresented Populations
Multipaper Session 764 to be held in BONHAM C on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Morris Lai,  University of Hawaii, Manoa, lai@hawaii.edu
Kukuluao (Building Enlightenment): Heeding the Guidance of Teachers in a Native Community
Presenter(s):
Alice Kawakami, University of Hawaii, Manoa, alicek@hawaii.edu
Abstract: Kukuluao, a teacher support program, was funded in June 2007 to identify existing needs and to design assistance for community members along the continuum of the teacher career path, encouraging them to become trained teachers and to persist in the teaching profession. The needs assessment was conducted by staff from the community, resulting in a high return rate on surveys. Results were validated by a community advisory board, and initiatives were designed and implemented. Since the original data collection, additional formative assessment strategies have been developed to gather information in ways that allow the program to be adapted to heed the changing economic, educational, and policy context and to honor the community through education and professional development for teachers. The presentation will describe ways in which the development of capacity and the inclusion of the native communities' voices are fundamental to building a stable core of teachers for this community.

Session Title: Evaluating Twenty First Century Skills
Think Tank Session 765 to be held in BONHAM D on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Sophia Mansori, Education Development Center, smansori@edc.org
Discussant(s):
Alyssa Na'im, Education Development Center, anaim@edc.org
Abstract: Twenty-first century skills—critical thinking, problem-solving, and creativity, as well as “soft skills” like communication and collaboration—have received increasing attention in recent years as more educational programs and initiatives have included them in curricula and goals. The lack of academic research regarding how 21st century skills are developed or assessed leaves evaluators to create ways to understand these skills. Presenters will review existing frameworks for documenting and assessing such skills and share methods used in the evaluation of Adobe Youth Voices, an international youth media program that promotes 21st century skills. Participants are invited to share their experiences in evaluating 21st century skills, specifically: 1) How do we measure and assess 21st century skills; and 2) How do we understand these skills in the larger context of education and learning? Through facilitated small group discussions, participants will explore successful approaches, strategies, and practices for their own evaluation work.

Session Title: Systems of Evaluation for Diverse National Portfolios of Research: Lessons From Russia and Finland
Multipaper Session 766 to be held in BONHAM E on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Yelena Thomas,  Ministry of Research Science and Technology, yelena.thomas@morst.govt.nz
Tekes Impact Goals, Logic Models and Evaluation of Socio-economic Effects
Presenter(s):
Jari Hyvarinen, Tekes - Finnish Funding Agency for Technology and Innovation, jari.hyvarinen@tekes.fi
Abstract: My paper presents new evaluation tools to analyse the socio-economic effects of Tekes activities in parallel with the Tekes impact goals. The impact goals are economic renewal, the environment, and well-being; the main challenge is the fragmentation of Tekes activities across these impact goals, which makes them laborious to evaluate. My aim is to find more suitable road maps for how Tekes funding and activities can be grouped into more manageable items. Using logic models as a method to group the steps of the Tekes impact model, I identify several improvements by combining the Tekes impact goals and the steps of the impact model (inputs and resources, activities, results, and impacts on economy and society) in order to improve evaluation quality and evaluation tools.
Verifiable Evaluation System for Research Programme
Presenter(s):
Igor Zatsman, Russian Academy of Sciences, iz_ipi@a170.ipi.ac.ru
Abstract: In February 2008 the First Russian Academic Programme was adopted by the Government of the Russian Federation as a tool of public intervention in the area of science. The Programme consists of six parts, including the Programme of the Russian Academy of Sciences (RAS). The objective of the RAS Programme is to improve the effectiveness and cost-efficiency of spending the RAS budget through the application of measurable and reliable assessment. The RAS has decided to develop a verifiable evaluation system for assessing the RAS Programme as a whole and its individual projects. The goal of the evaluation system is the acquisition of project information, on the basis of which verifiable indicators are subsequently calculated. The main aim of this paper is to present the RAS evaluation system with its embedded means of verification. This research is funded by RFH Grant No. 09-02-00006a.

Session Title: Demonstrating Results for Federally Funded Programs
Panel Session 767 to be held in Texas A on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Michelle Kobayashi, National Research Center Inc, michelle@n-r-c.com
Abstract: When federal grants fund programs implemented by diverse grantees with diverse approaches, evaluation holds unique challenges. Here we demonstrate how to find the uniform threads among diverse grantees and present a case study in combining multiple evaluation techniques to assess performance. This evaluation work started with the development of common output tracking forms, along with evaluation toolkits and trainings for grantees. The tracking forms collected from diverse grantees are synthesized each year and provide an annual picture of what the group is doing. Interest in broadening the evaluation from tracking outputs (“what we are doing”) to considering outcomes (“what is this changing”) led to some innovative thinking about how to evaluate complex systems with a performance measurement taxonomy that looks at outputs and outcomes within a framework of quality.
Whole Measures for Food Security: Qualitative Tools To Measure Work
Jeanette Abi-Nader, Community Food Security Coalition, jeanette@foodsecurity.org
Jeanette Abi-Nader will discuss the development of a common output tracking form, used by food projects to report common outputs to the US Congress. A second tool, Whole Measures, will also be described. This tool was devised as a qualitative discussion tool to help grantees with self-reflection and evaluation.
Evaluation Taxonomy: Incorporating Diverse Grantees and Stakeholders
Michelle Kobayashi, National Research Center Inc, michelle@n-r-c.com
Michelle Kobayashi will discuss the measurement of outcome data through two tools: an evaluation toolkit and an “evaluation taxonomy” for reporting performance measures (outputs and outcomes). This taxonomy is the culmination of the various tools, providing a concise, systematic way to collect and report meaningful data to a variety of stakeholders.

Session Title: Where Do Values Enter Into Evaluations?
Multipaper Session 768 to be held in Texas B on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Qualitative Methods TIG
Chair(s):
Leslie Goodyear,  National Science Foundation, lgoodyea@nsf.gov
Discussant(s):
Leslie Goodyear,  National Science Foundation, lgoodyea@nsf.gov
Slippery Evaluation: Context, Quality, Interpretation
Presenter(s):
Serafina Pastore, University of Bari, serafinapastore@vodafone.it
Abstract: Quality and evaluation are now key words in the debate on formative processes. The increased interest in qualitative methodologies in the evaluation field has led to a growing appreciation of individuality, contextuality, and situationality. Context is fundamental: it indicates the situation in which the interests of individuals and the opportunities the setting provides are mediated and defined. It serves as an anchorage, essential to understanding the evaluand in a deeper and more intimate way. Context is never preordained but is always constructed within situational frameworks. In the hermeneutic perspective, evaluation assumes an interpretative function and opens further perspectives. Evaluation involves bringing together disparate connections in order to understand and realize an effective process that might enhance learning actions and support a genuine co-construction of meaning.
What Matters and What Does It Look Like?: Values Meet Practices in the Complex Context of an Educative, Values-Engaged Evaluation of a Professional Development School.
Presenter(s):
Melissa Freeman, University of Georgia, freeman9@uga.edu
Jori Hall, University of Georgia, jorihall@uga.edu
Tracie Costantino, University of Georgia, tcost@uga.edu
Soria E Colomer, University of Georgia, scolomer@uga.edu
Isabelle Crowder, University of Georgia, crowderi@uga.edu
Abstract: The purpose of this paper is to examine the intersection of our evaluation process with four ways in which values emerged in the context of a professional development school (PDS) implementation: 1) values explicitly articulated, 2) values manifested in everyday practices, 3) values implicitly shaping power dynamics, and 4) values expressed through the evaluation process. Although ‘quality of instruction’ is a core value guiding the PDS, what that means and what it is intended to look like in practice creates tensions. For example, although PDS university partners endorse student-led enrichment activities, a district-led standardized problem-solving approach that includes a required product may work against their intended purpose. To understand these competing values about instruction, we use the educative, values-engaged approach. Specifically, our paper will describe how this approach brings to the fore the need to engage values in the evaluation design and through the conduct of the evaluation itself.

Session Title: How to Get People to Read Your Research And Take Action on It
Panel Session 769 to be held in Texas C on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the
Chair(s):
Susan Parker, Clear Thinking Communications, susan@clearthinkingcommunications.com
Abstract: Evaluation and research reports can provide critical information to the field. But they may instead languish on a bookshelf, read by just a few people, for one simple reason: they are not written or presented in a clear and accessible way. Evaluators have already done the hard work of discovering useful findings. By simply shifting how they think about communicating their work, they can reach a much broader audience of policymakers and others who can use their valuable research. In this session, two communications and research experts will provide practical tips for evaluators to greatly expand the reach of their work. The panelists will provide tools and approaches that evaluators can use so that their reports have more impact on the field. Tools include storytelling, a linguist’s approach to writing clearly, and making the most of social media.
Find the Gold in Your Evaluation
Susan Parker, Clear Thinking Communications, susan@clearthinkingcommunications.com
Sometimes evaluators are too close to their work to find the “gold” that will make important audiences like policy makers and practitioners pay attention and take action. Susan Parker, owner of Clear Thinking Communications, a communications firm specializing in helping nonprofits make a bigger impact, has helped evaluators and foundations revise hundreds of reports so that they will reach a broader audience. Susan will lead the audience through three simple tools they can immediately use to make their reports more accessible and have a bigger impact. She will use case studies from her work to illustrate.
Online Strategies for Sharing Your Evaluations More Broadly
Gabriela Fitz, Issue Lab, gabi@issuelab.org
Gabriela Fitz is co-director of IssueLab, a Chicago-based organization that archives and promotes research produced by the nonprofit sector. Gabi's expertise lies in aggregating and disseminating social policy research and evaluations in ways that matter to diverse audiences. She also has an expertise in determining the key audiences for evaluation—a critical question—and finding those audiences through the use of social media. Gabriela will talk about ways that IssueLab has repackaged collections of evaluations and research reports to make them more broadly accessible and she will provide examples of how evaluators can use social media tools to broaden the reach of their work.

Session Title: Evaluating Contributions to Knowledge Translation for New Technologies or Medical Treatments
Multipaper Session 770 to be held in Texas D on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
John Reed,  Innovologie LLC, jreed@innovologie.com
Translating New Knowledge From Technology Based Research Projects: An Intervention Evaluation Study
Presenter(s):
Vathsala Stone, State University of New York at Buffalo, vstone@buffalo.edu
Abstract: Obtaining societal benefits from publicly funded research has been a growing concern in recent years. This concern has led to Knowledge Translation (KT), an emerging field centered on promoting research impact through effective delivery of new knowledge, and it underscores the urgency of evaluating research impact. This paper addresses the issues of generating and evaluating research impact and discusses evaluation quality in relation to both processes. As a position paper, it focuses on translating new knowledge generated by applied research, specifically by technology-based research projects. It will present a KT intervention study currently under implementation and evaluation at a federally funded center on knowledge translation for technology transfer at the University at Buffalo. It will present the proposed intervention strategy and its rationale; describe the intervention implementation and evaluation procedures; and highlight quality features of the intervention evaluation.
Evolving a High-Quality Evaluation System for the National Institutes of Health’s (NIH) HIV/AIDS Clinical Trials Research Networks
Presenter(s):
Scott Rosas, Concept Systems Inc, srosas@conceptsystems.com
Jonathan Kagan, National Institutes of Health, jkagan@niaid.nih.gov
Jeffrey Schouten, Fred Hutchinson Cancer Research Center, jschoute@fhcrc.org
William M Trochim, Cornell University, wmt1@cornell.edu
Abstract: A collaboratively authored evaluation framework identifying success factors across four domains was used to guide the evaluation of NIH’s HIV/AIDS clinical trials networks. In each domain, broad evaluation questions were addressed through pilot studies conducted to generate information about a particular element of the research enterprise. Viewed as a scientific process with iterative cycles of experimentation and revision designed to incrementally improve the quality of the overall evaluation system, these studies were expected to yield information that can be used in the near term to improve network functions and to update and advance the evaluation agenda as the state of knowledge evolves. This paper presents preliminary results of the evaluation studies in the four domains. Opportunities and challenges in conducting similar evaluation studies within large-scale research initiatives are highlighted. Implications for the next cycle of studies, integrative analyses of data to address cross-domain evaluation questions, and linkage to the original stakeholder-constructed framework are also discussed.

Session Title: Effectively Communicating Evaluation Results: Creative, Innovative, and Technological Ways to Share Evaluation Findings
Demonstration Session 771 to be held in Texas E on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Evaluation Use TIG and the Integrating Technology Into Evaluation TIG
Presenter(s):
Tiffany Comer Cook, University of Wyoming, tcomer@uwyo.edu
Abstract: Effectively communicating evaluation results is fundamental in ensuring use of evaluation findings. Evaluations can only benefit a program if evaluators communicate their findings clearly and concisely. This demonstration will focus on several web and software applications that have served as communication tools between evaluators and clients. Specifically, the presenters will systematically show attendees highlights of two interactive websites; fact sheets presenting the results of complicated statistical analyses in an easy-to-understand format; and two case management systems that allow users to input, store, and report data. The presenters will explain these specific examples to demonstrate how evaluation findings can be communicated effectively and creatively.

Session Title: Quality Evaluation: Drivers and Objectives of the Renewed Canadian Federal Policy on Evaluation
Expert Lecture Session 772 to be held in Texas F on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Government Evaluation TIG
Presenter(s):
Anne Routhier, Treasury Board of Canada, anne.routhier@tbs-sct.gc.ca
Abstract: On April 1st 2009, a renewed Canadian federal Policy on Evaluation (as approved by the Treasury Board of Canada), along with an accompanying Directive and Standard, came into effect. The objective of this policy – which applies to the majority of departments and agencies across the Government of Canada – is to create a comprehensive and reliable base of evaluation evidence that is used to support policy and program improvement, expenditure management, Cabinet decision making, and public reporting. In this presentation, the Senior Director of the Treasury Board of Canada Secretariat’s Centre of Excellence for Evaluation will provide participants with an overview of the drivers and objectives of the renewed policy with an emphasis on how the policy suite has been designed specifically to address issues observed over the course of several audits and evaluations of the evaluation function.

Session Title: Supporting Evaluation Capacity Building Within the Cooperative Extension System to Impact the Lives of Children, Youth, and Families at Risk
Multipaper Session 773 to be held in CROCKETT A on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Daniel McDonald,  University of Arizona, mcdonald@ag.arizona.edu
Discussant(s):
Roger Rennekamp,  Oregon State University, roger.rennekamp@oregonstate.edu
Implementing an Evaluation Using Common Measures Across Multiple States With Individual Programs
Presenter(s):
Daniel McDonald, University of Arizona, mcdonald@ag.arizona.edu
Pamela B Payne, University of Arizona, pgargle@email.arizona.edu
Abstract: Over the past year several Cooperative Extension programs across six states participated in a pilot study to determine the extent to which individual state programs can use identical instruments to measure outcomes as part of their overall evaluation efforts. This paper will discuss the process used to identify the common outcomes (i.e., parenting and youth citizenship), appropriate survey instruments, and data collection methods. Challenges encountered will be discussed in relation to differences in specific content covered, variation in targeted age-ranges, and differences in duration of program delivery or dosage of the interventions. Additionally, site evaluators were asked to assess the “goodness of fit” of the instruments and the methods used as they applied to their specific programs. The primary focus of this paper will pertain to the lessons learned from the pilot study.

Session Title: Leading the Horse to Water, Part II: Winning the Front-end Needs Assessment Tug-of-War in a Knowledge Management Program Initiative
Think Tank Session 774 to be held in CROCKETT B on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Business and Industry TIG
Presenter(s):
Thomas Ward, United States Army, thomas.wardii@us.army.mil
Abstract: Quality evaluation of a program initiative is not only enhanced by an initial needs assessment; the needs assessment is fundamental for documenting program success, not to mention ensuring the program is pointed in the right direction in the first place. “Knowledge Management” suffers from being an overused buzzword. What do the organization and its leadership really hope to accomplish? A “knowledge needs assessment” provides answers to those questions and suggests priorities for effort. How do you convince the organization and its leadership to invest time and effort in such an assessment? This think tank begins with a narrative from the perspective of a “KM champion” (who is not a CKO) within a military educational institution and identifies lessons learned in a “3 up / 3 down” format. The think tank will then break into small groups for participants to discuss their own experiences and share their ideas.

Session Title: Longitudinal Social Network Analysis: An Understanding of This Dynamic Network Approach
Expert Lecture Session 775 to be held in CROCKETT C on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the
Chair(s):
Kimberly Fredericks, The Sage Colleges, fredek1@sage.edu
Presenter(s):
Kimberly Fredericks, The Sage Colleges, fredek1@sage.edu
Abstract: In the field of social network analysis, the study of longitudinal networks has been at the forefront as researchers try to capture the dynamic nature of networks. To study such dynamic networks, actor-oriented stochastic models based on continuous-time Markov chains and exponential random graph models have been used. Dr. Kimberly Fredericks is an expert in social network analysis and its use in evaluation and has published on the topic in journals such as New Directions for Evaluation. In this presentation she will introduce both models as methods for describing and explaining the development of interpersonal and inter-organizational networks. The models are applied to longitudinal data to uncover which micro mechanisms (i.e., individual choices) lead to which macro outcomes (i.e., network structures), and how and why these structures change over time. The theories and components of each model and its analysis will be explored, as well as its application to program evaluation.
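For orientation, the exponential random graph model mentioned above is commonly written in the following general form, where x is an observed network, s(x) is a vector of network statistics (for example, counts of edges or triangles), θ is the corresponding parameter vector, and κ(θ) is a normalizing constant; this notation is standard in the literature rather than drawn from the presentation itself:
\[
P_{\theta}(X = x) \;=\; \frac{\exp\{\theta^{\top} s(x)\}}{\kappa(\theta)},
\qquad
\kappa(\theta) \;=\; \sum_{x' \in \mathcal{X}} \exp\{\theta^{\top} s(x')\}.
\]
In the actor-oriented (continuous-time Markov chain) approach, by contrast, actors are typically modeled as changing one tie at a time, choosing changes according to an objective function of a similar linear form in network statistics, which is what allows micro-level choices to be linked to macro-level network structures.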

Session Title: Evaluation Challenges in Designing and Implementing a Program Evaluation: The Experience of the Centers for Disease Control and Prevention’s (CDC) Colorectal Cancer Control Program and Prevention IS Care (PIC)
Multipaper Session 776 to be held in CROCKETT D on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Amy DeGroff,  Centers for Disease Control and Prevention, adegroff@cdc.gov
Planning an Impact Evaluation: The Experience of the CDC's Colorectal Cancer Control Program
Presenter(s):
Amy DeGroff, Centers for Disease Control and Prevention, adegroff@cdc.gov
Michelle Revels, ICF Macro International, michelle.l.revels@macrointernational.com
Danielle Beauchesne, ICF Macro International, danielle.a.beauchesn@macrointernational.com
Djenaba Joseph, Centers for Disease Control and Prevention, dajoseph@cdc.gov
Anna Krivelyova, ICF Macro International, anna.krivelyova@macrointernational.com
Janet Royalty, Centers for Disease Control and Prevention, jroyalty@cdc.gov
Florence Tangka, Centers for Disease Control and Prevention, ftangka@cdc.gov
Faye Wong, Centers for Disease Control and Prevention, fwong@cdc.gov
Susan Zaro, ICF Macro International, susan.m.zaro@macrointernational.com
Abstract: This paper describes the evaluation planning process and resulting evaluation design for CDC’s Colorectal Cancer Control Program (CRCCP), a public health program that aims to increase population-level screening rates and, subsequently, to reduce colorectal cancer incidence and mortality. Planning efforts involved the development of a conceptual program framework, articulation of a program logic model, assessment of existing data sources, and extensive discussions among evaluators, survey researchers, epidemiologists, clinicians, and qualitative researchers. A theory-based evaluation has been designed to assess program implementation, outcomes, and impact. The design involves quasi-experimental approaches, including the use of matched comparison sites. The complexity of the CRCCP (e.g., multiple intervention strategies that vary by grantee) challenged evaluators to construct a rigorous design that will allow assessment of program effectiveness. Evaluation methods will include periodic cross-sectional population and provider surveys, surveys of grantees, a longitudinal qualitative case study, and analysis of BRFSS, NPCR, and SEER data.

Session Title: Exploring the Role of Software in Qualitative Analysis
Multipaper Session 777 to be held in SEGUIN B on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Qualitative Methods TIG
Chair(s):
Janet Usinger,  University of Nevada, Reno, usingerj@unr.edu
Discussant(s):
Janet Usinger,  University of Nevada, Reno, usingerj@unr.edu
Promoting Emergent Qualitative Inquiry Within Structured Evaluation Practices Through NVivo
Presenter(s):
Dan Kaczynski, Central Michigan University, dan.kaczynski@cmich.edu
Michelle Salmona, Central Michigan University, michelle.salmona@cmich.edu
Abstract: One of the questions posed by the AEA 2010 Presidential Invitation is: “How do we balance dimensions of evaluation quality when they seem in opposition to one another?” A critical aspect of this issue is the inherent tension between theory and practice when designing and conducting qualitative evaluations. This paper explores how qualitative data analysis software (QDAS) can serve evaluators as a technological tool for bridging theory and practice, thus promoting high-quality qualitative evaluation practices. Sponsors, stakeholders, and evaluation team members commonly have a voice and share in shaping the intent and meanings sought through qualitative evaluation. This voice frames method choices, which in turn influence evaluation design considerations. A means of keeping qualitative theoretical considerations in the forefront must nevertheless be maintained. Toward this end, QDAS may be used to promote transparency of qualitative methodology and, ultimately, the quality of an evaluation study.
Individual Versus NVivo
Presenter(s):
Jenny May, University of South Carolina, jennygusmay@yahoo.com
Robert Petrulis, University of South Carolina, petrulis@mailbox.sc.edu
Abstract: Many universities provide access to NVivo software to assist in the analysis of qualitative data. The experience of working with qualitative data led this researcher to ask in what ways the use of qualitative analysis software might influence analysis results. Does NVivo change researchers’ perceptions and analysis of the data, and if so, how? This study systematically compares research summaries of qualitative data written by several qualitative researcher-participants with varying degrees of experience. Researchers were assigned to one of two methodology groups: NVivo or manual analysis. After analyzing one transcript using their assigned method, researchers analyzed a second transcript using the alternative method. All researcher-participants received training in the NVivo program and were directed to analyze transcripts for the same purpose. Summaries were then compared to determine whether the content reported or excluded differed between the two methods of analysis. Interviews were conducted to obtain insight into researchers’ perceptions of the data analysis procedures. This study replicates a study presented at AEA in 2009 with a larger number of participants in order to obtain a more complete understanding of the phenomenon.

Session Title: Simulation Model of Evaluation Biases Under Post Conflict Zones
Demonstration Session 778 to be held in REPUBLIC A on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Mamadou Sidibe, International Relief & Development, mamadou.sidibe@gmail.com
Abstract: This demonstration explores how optimization tools and techniques can be used to help set normative performance targets in post-conflict settings, where uncertainty and high risk are part of program implementation strategies. It stems from lessons learned in implementing the Community Stabilization Program (CSP) in Iraq by International Relief and Development (IRD) from May 2006 to September 2009. The CSP targeted more than 15 Iraqi governorates; it was designed to stabilize the country and set a path conducive to economic growth with justice and equity among Iraqis. The author uses a Mean-Variance framework expressed as a Certainty Equivalent model to derive normative performance targets. These normative measures are compared with the actual program targets to determine evaluation biases in post-conflict zones. Data from the Baghdad Province are used in the empirical estimates.
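As background, a certainty equivalent under a mean-variance framework is often written in the following form, where R denotes the uncertain performance outcome, E[R] its expected value, Var(R) its variance, and λ a risk-aversion coefficient; these symbols are standard notation rather than taken from the abstract, and the author's exact specification may differ:
\[
\mathrm{CE}(R) \;=\; \mathbb{E}[R] \;-\; \frac{\lambda}{2}\,\mathrm{Var}(R).
\]
Under this reading, a normative performance target discounts the expected achievement of an indicator by a penalty proportional to its variance, so that targets set in high-risk, post-conflict environments are more conservative than expected values alone would suggest.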

Session Title: Healthy Aging and Health Screenings: Lessons Learned Through Participatory Evaluations
Multipaper Session 779 to be held in REPUBLIC B on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Susan M Wolfe,  Susan Wolfe and Associates LLC, susan.wolfe@susanwolfeandassociates.net
Discussant(s):
Michael Harnar,  Claremont Graduate University, michaelharnar@gmail.com
Improving the Practical Implementation of Effectiveness Trials in a Primary Care Setting
Presenter(s):
Katarzyna Alderman, Battelle Memorial Institute, aldermank@battelle.org
Margaret J Gunter, LCF Research, maggie@lcfresearch.org
Judith Lee Smith, Centers for Disease Control and Prevention, jleesmith@cdc.gov
Gary Chovnick, Battelle Memorial Institute, chovnickg@battelle.org
Linda Winges, Battelle Memorial Institute, winges@battelle.org
Susan Pearce, Battelle Memorial Institute, pearce@battelle.org
April Salisbury, LCF Research, april.salisbury@lcfresearch.org
Deirdre Shires, Henry Ford Health System, dshires1@hfhs.org
Danuta Kasprzyk, Battelle Memorial Institute, kasprzyk@battelle.org
Daniel Montano, Battelle Memorial Institute, montano@battelle.org
Jennifer Elston Lafata, Virginia Commonwealth University, jelstonlafat@vcu.edu
Abstract: There has been a recent movement in prevention research toward combining the strengths of efficacy and effectiveness trials to increase translation and implementation (Bierman, 2006; Weissberg and Greenberg, 1998). Effectiveness, or pragmatic, trials may increase a study’s “real world” effects but may incur a loss of fidelity to its core design (Bierman, 2006). Conversely, shielding the evaluation from real-world influences may limit its adoption and implementation. Effectiveness trials usually involve multiple groups; therefore, evaluating these trials requires the involvement of multiple perspectives. This collaborative element, while beneficial, has certain challenges. In a project funded by the Centers for Disease Control and Prevention (CDC), scientists from CDC, Battelle, and two major health care organizations designed and implemented an evaluation of an intervention to increase colon cancer screening in primary care clinics. Presenters will describe how to improve the practical implementation of effectiveness trials in a primary care setting.
Participatory Evaluation in a Program to Promote Well-Aging Among Persons With Intellectual and Developmental Disability
Presenter(s):
Karen Widmer, Claremont Graduate University, karenwidmer@earthlink.net
Abstract: Participatory evaluation is both socially just and mission-critical for programs aimed at promoting health among adults with Intellectual and Developmental Disabilities (IDD). More than most other populations, adults with IDD require a social network (formal or informal) to carry out behaviors that effectively preserve and promote health and well-being. This paper will take an in-depth look at the logic model and participatory tool components designed to evaluate a center of excellence for healthy aging among adults living in group homes. The logic model contained two distinct constructs: health behavior change and future planning. Data collection is being conducted at a third-grade conversation skill level and includes residents, peers, staff, and family members. Without participatory tools adapted for atypical communication styles, persons from this vulnerable population may be denied self-determination. Findings generalize to inclusion of participants of all abilities in evaluations of a wide array of health interventions.

Session Title: Contextual Influences on the Evaluator, the Evaluation, and the Evaluation Design
Multipaper Session 780 to be held in REPUBLIC C on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Theories of Evaluation TIG
Chair(s):
James Griffith,  Claremont Graduate University, james.griffith@cgu.edu
Experimental Evaluation: Dealing With Random Assignment of Individuals Within Units
Presenter(s):
Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org
Sheila A Arens, Mid-continent Research for Education and Learning, sarens@mcrel.org
Abstract: The presenters will discuss experimental evaluations with random assignment at the individual level. While this design has significant benefits for statistical power, it may have unexpected consequences. The presenters will explore this kind of evaluation and its implications in order to help prepare attendees for similar projects. Specifically, the presentation will address: 1) To what extent should the evaluator be involved in ensuring that the client’s communications with participants adhere to ethical guidelines? 2) Does the evaluator have a role in ensuring that the client follows through on the program as originally planned, and does the evaluator therefore take on a curriculum development role or merely document what happened? 3) To what extent should evaluators press to ensure the integrity of random assignment, and how can evaluators diplomatically monitor and discuss crossovers? 4) Are there situations in which this design should be abandoned in favor of one with a different level of randomization?
Evaluation Influence Within Population Health Partnerships: A Conceptual Framework
Presenter(s):
Sarah Appleton-Dyer, University of Auckland, sk.appleton@auckland.ac.nz
Janet Clinton, University of Auckland, j.clinton@auckland.ac.nz
Peter Carswell, University of Auckland, p.carswell@auckland.ac.nz
Abstract: Evaluation has long had an interest in its own utility. Equally, those commissioning evaluation want to benefit from its potential. The importance of evaluation use has prompted a large amount of theoretical and empirical study, yet it is still not well understood. More recently, the notion of ‘evaluation influence’ has emerged. This term encompasses traditional conceptions of use as well as changes at the individual, interpersonal, and collective levels (Mark & Henry, 2004). This paper presents a theory of evaluation influence within population health partnerships. Specifically, the literature is used to develop hypotheses about the relationships among evaluation attributes, partnership functioning, contextual factors, and evaluation influence. The model highlights the importance of understanding contextual influences on evaluation, especially when evaluation is implemented within complex organisational systems. The model will be useful to both evaluators and public sector workers seeking to implement and benefit from evaluation within a partnership context.
