
Session Title: Impact Evaluation and Development: Debates and the new International Architecture for Impact Evaluation
Expert Lecture Session 548 to be held in Capitol Ballroom Section 1 on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Jim Rugh,  Independent Consultant,  jimrugh@mindspring.com
Presenter(s):
Howard White,  International Initiative for Impact Evaluation,  hwhite@3ieimpact.org
Abstract: Impact evaluation has been at the centre of growing controversy in development circles in recent years. Critics - notably the Center for Global Development - have argued that there have been virtually no rigorous impact evaluations of development interventions. This point of view has been contested by development agencies, who point to an existing body of studies, and by evaluators, who question the primacy afforded to particular approaches. This presentation will review these debates and the institutional responses which have resulted in a rapidly developing new international architecture for impact evaluation: specifically the Network of Networks on Impact Evaluation (NONIE), the International Initiative for Impact Evaluation (3ie), and the World Bank's in-house initiatives - the Development Impact Evaluation initiative (DIME), the Spanish Impact Evaluation Fund (SIEF) and the Africa Impact Evaluation Initiative.

Session Title: College Access Programs: Evaluation Issues and Solutions From Three Access Programs
Multipaper Session 549 to be held in Capitol Ballroom Section 2 on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the College Access Programs TIG
Education Beyond High School: Factors Associated with Postsecondary Education Access and Persistence Among Vermont Gear Up and Talent Search Participants
Presenter(s):
Laura Massell,  Vermont Student Assistance Corporation,  massell@vsac.org
Abstract: The Vermont Student Assistance Corporation (VSAC) administers Vermont’s two statewide college access grants, GEAR UP and Talent Search. Working with low-income youth in grades six through 12, these programs are designed to strengthen academic skills, raise educational aspirations, and support students in the college and financial aid application process. This study examines the postsecondary enrollment patterns of GEAR UP and Talent Search participants in High School Classes of 2001 and 2003 (n=1200) using a combination of National Student Clearinghouse, Vermont Grant records, and Telephone Interview data. Using logistic regression, the study examines the extent to which postsecondary education enrollment, persistence and completion can be predicted from students’ 6th-12th grade participation in either college access program (duration and intensity), students’ educational aspirations and postsecondary planning, high school coursework and grades, Pell and Vermont grant award history and expected family contribution levels, and other student demographic factors.
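For readers who want to see the analytic core of such a study, the following is a minimal sketch of a logistic regression predicting postsecondary enrollment from program participation and background variables. The variable names (years_in_program, aspiration_4yr, gpa, efc) and the simulated data are hypothetical illustrations, not the VSAC study's actual measures or results.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated stand-in for the study's data set (n = 1200 participants).
    rng = np.random.default_rng(0)
    n = 1200
    df = pd.DataFrame({
        "years_in_program": rng.integers(1, 8, n),   # duration of GEAR UP / Talent Search participation
        "aspiration_4yr": rng.integers(0, 2, n),     # aspires to a four-year degree (1 = yes)
        "gpa": rng.normal(2.8, 0.6, n),              # high school grade point average
        "efc": rng.lognormal(8, 1, n),               # expected family contribution (dollars)
    })
    # Hypothetical binary outcome: enrolled in postsecondary education.
    logit_p = (-3 + 0.3 * df["years_in_program"] + 1.1 * df["aspiration_4yr"]
               + 0.8 * df["gpa"] - 0.00002 * df["efc"])
    df["enrolled"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    # Fit the logistic regression and report odds ratios for each predictor.
    X = sm.add_constant(df[["years_in_program", "aspiration_4yr", "gpa", "efc"]])
    model = sm.Logit(df["enrolled"], X).fit(disp=False)
    print(model.summary())
    print(np.exp(model.params))  # odds ratios

In the actual study, persistence and completion would be modeled the same way, swapping in the relevant outcome variable.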
Evaluating the Kalamazoo Promise Scholarship Program as a Catalyst for Systemic Change
Presenter(s):
Gary Miron,  Western Michigan University,  gary.miron@wmich.edu
Stephanie Evergreen,  Western Michigan University,  stephanie.evergreen@wmich.edu
Abstract: This paper provides an overview of the evaluation of the Kalamazoo Promise universal scholarship program. The scholarship has garnered considerable national attention, including that of the U.S. Department of Education, which funded the evaluation due to its interest in whether the Promise can work as an effective school reform method. The paper presents the theoretical framework for the evaluation (we use a theory-driven approach). The design and methods for data collection also are explained. A number of obstacles and challenges arose in the process of designing and conducting this evaluation. The paper will examine these challenges and describe the strategies and measures used to address them. Key findings are highlighted related to the response to the Promise by students and families, teachers, administrators, and the broader community. In the paper’s conclusion, both methodological and operational issues are discussed regarding the evaluation.

Session Title: Integrating Evaluation into Program Design
Multipaper Session 550 to be held in Capitol Ballroom Section 3 on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Matt Keene,  United States Environmental Protection Agency,  keene.matt@epa.gov
Abstract: The U.S. Environmental Protection Agency works to improve the policy and practice of evaluating environmental programs by integrating evaluation into the design of new programs. Here, two members of the U.S. EPA's Evaluation Support Division discuss their work to integrate systematic evaluation into two new programs--the Paint Product Stewardship Initiative (PPSI), a demonstration project that will inform the creation of a national leftover paint management system, and Community Action for a Renewed Environment (CARE), a community-based cooperative agreement grant program. The two presenters will discuss the very different approaches chosen by each program, assess the challenges and benefits of working in a collaborative environment to design an evaluation, and describe how the evaluations have been an instrumental force in the design and management of each program.
How an Innovative United States Environmental Protection Agency Grants Program is Using Evaluation Tools to Manage the Program Effectively and Build Capacity Among Staff and Across the Agency
Michelle Mandolia,  United States Environmental Protection Agency,  mandolia.michelle@epa.gov
As the U.S. Environmental Protection Agency works toward a more robust use of the entire suite of performance management tools (the logic model, performance measurement, and program evaluation), EPA's new community-based cooperative agreement grant program CARE (Community Action for a Renewed Environment) has been modeling this use. In this paper, CARE's evaluation and tracking team leader, Michelle Mandolia, will share how the program, with limited staff, some critical funding, and the early support of an 'evaluation champion', has been using logic modeling, measurement, internal analysis, and external evaluation to shape and support the program in its early stages and establish an evaluation frame of mind from the outset.
Integrating Evaluation into Program Design
Matt Keene,  United States Environmental Protection Agency,  keene.matt@epa.gov
The U.S. Environmental Protection Agency works to improve the policy and practice of evaluating environmental programs by integrating evaluation into the design of new programs. Here, the U.S. EPA's Evaluation Support Division discusses its cooperation with the Paint Product Stewardship Initiative (PPSI) to integrate systematic evaluation into the design of a demonstration project that will inform the creation of a national leftover paint management system. We assess the challenges and benefits of working in a collaborative environment to design an evaluation that will rigorously test the effectiveness and impact of management systems and education strategies. We will also review the significance of the project's evaluation policies related to use and dissemination of the evaluation to key stakeholders that will use results and learning to make decisions about the most effective approaches for paint management.

Session Title: Evaluating Health Communication and Marketing Campaigns: Efficacy and Effectiveness Methods
Expert Lecture Session 551 to be held in Capitol Ballroom Section 4 on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Presenter(s):
W Douglas Evans,  The George Washington University,  sphwde@gwumc.edu
Discussant(s):
James Hersey,  Research Triangle Institute,  hersey@rti.org
Abstract: Communication and marketing are fast growing areas of public health, but rigorous evaluation research is rare. In particular, there have been relatively few controlled efficacy studies in these fields. In this paper, we review recent health communication and marketing efficacy research, present two case studies that illustrate some of the considerations in making efficacy design choices, and advocate for greater emphasis on rigorous health communication and marketing efficacy research and the development of a research agenda. By examining the literature and two case studies from tobacco control and reproductive health, we identify advantages and limitations to efficacy studies. We present outcome data and examine how it can address specific efficacy and effectiveness evaluation questions. We identify considerations for when to adopt efficacy and effectiveness methods, alone or in combination. Finally, we outline a research agenda to investigate validity, message mode effects, marketing and message strategies, and behavioral outcomes.

Session Title: Institutional Review Board Options for Evaluation: Benefits and Risks
Expert Lecture Session 552 to be held in Capitol Ballroom Section 5 on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the AEA Conference Committee
Presenter(s):
D Paul Moberg,  University of Wisconsin Madison,  dpmoberg@wisc.edu
Nichelle Cobb,  University of Wisconsin,  nlc@medicine.wisc.edu
Abstract: It is a given that in any evaluation, mechanisms are put in place to protect the confidentiality, privacy and other rights of participants. However, standards and requirements for oversight of evaluation and quality improvement studies by Institutional Review Boards (IRBs) are ambiguous, not always well understood by evaluators, and are highly variable across institutional settings. In this paper, we seek to systematically describe the various options available to evaluators and the IRBs they work with for meeting review requirements. We also summarize the risks and the benefits (to investigators, IRBs and institutions) of these options. Each of the following options provided for in the “Common Rule” (federal regulations for human subjects research) will be discussed: 1. Determination of evaluation studies as not constituting human subjects research, either because the data used do not meet the definition of being from “human subjects” or the work does not meet the definition of “research”. 2. Determination of the evaluation as exempt from IRB review and oversight under one of several applicable conditions (e.g., as survey research); or 3. The evaluation constitutes minimal risk human subjects research requiring IRB review. The paper will provide examples of evaluations meeting each of these categories, with recommendations of applicability, benefits and risks of applying them. This information can be used to inform the decisions made by both evaluators and their IRBs to demystify this process.

Session Title: Addressing the Needs of Underserved Urban Communities Through Contextually Culturally Responsive Evaluations
Panel Session 553 to be held in Capitol Ballroom Section 6 on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Victor Perez,  University of Illinois Urbana-Champaign,  vperez@uiuc.edu
Discussant(s):
Victor Perez,  University of Illinois Urbana-Champaign,  vperez@uiuc.edu
Abstract: Contextually Culturally Responsive Evaluations (CCRE) provide evaluators with a methodological theory to promote a social justice agenda striving for educational equity for students who have historically been underserved in urban communities. This holistic approach to evaluation contrasts with traditional notions of cultural subjectivity which have historically served to marginalize students from diverse backgrounds. Importantly, contextually culturally responsive evaluations focus on the belief that evaluations and reforms striving for educational equity need to be meaningfully linked to the students' and their communities' unique cultures. This clear infusion of culture and context into evaluation provides an insightful lens for promoting a social justice agenda which stresses the importance of incorporating a student's culture and community context into a study.
Border Theory and Its Implications for Contextually Culturally Responsive Evaluations
Melba Schneider Castro,  University of Illinois Urbana-Champaign,  melbac@ucr.edu
Theories driving evaluation methodology for underrepresented communities need further analysis. Importantly, border analysis opens a new dimension of critical inquiry over methodological and epistemological practices in evaluation, such as research design choices, data collection, interpretation, and reporting. Thus, the border as a conceptual region allows us to understand, analyze, and critique inequity resulting from differences, such as linguistic differences, or gender, race, class, and religion. The paper utilizes border theory to examine the interplay of values, culture, and power in which evaluation can be used as a tool to promote educational equity and social justice for underrepresented and marginalized students.
A Case Study of the Bethel Imani Freedom School (BIFS) Program from a Contextually Culturally Responsive Evaluation Approach
Maria Jimenez,  University of Illinois Urbana-Champaign,  mjimene2@uiuc.edu
The importance of culture and context cannot be ignored in evaluations that seek to address the needs of marginalized groups. Contextually culturally responsive evaluations (CCRE) provide a template that evaluators can use to better understand the ways in which culture and context influence program design, implementation, and impact. Thus, in order to make accurate judgments of program quality, evaluators need to be attuned to the culture and context of a program or policy. History, language, power, values, traditions, and norms are important contextual factors which influence evaluations of programs. The paper utilizes CCRE to examine the components of the Bethel Imani Freedom School Program on Chicago's south side. The Bethel Imani Freedom School (BIFS) Program serves children ages 6-18 for six weeks and utilizes literacy, conflict resolution, and social action to promote cultural and social awareness. Further, the paper outlines the activities of BIFS and depicts how CCRE can be used in an evaluation that takes into account both culture and context as means to promote social justice and equality for all students.

Session Title: LGBTQ Evaluation in Education Settings: Schools and Museums
Multipaper Session 554 to be held in Capitol Ballroom Section 7 on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Chair(s):
Virginia Dicken,  Southern Illinois University at Carbondale,  vdicken@siu.edu
Exploring the Museum’s Closet Doors: A Pilot Study of Gay and Lesbian Visitors
Presenter(s):
Joe E Heimlich,  The Ohio State University,  heimlich.1@osu.edu
Judy Koke,  Art Gallery of Ontario,  judy_koke@ago.net
Abstract: Museums continually seek to expand their visitor and membership base; a much cited belief is that visitation leads to membership. One population that seems to defy that pattern is the GLBTQ community, where attendance or visiting does not seem to lead to membership, subscription, or donation. This pilot study of GLBTQ visitation was undertaken to 1) test instrumentation for creating a long-term study of gay and lesbian museum visitors; and 2) begin to develop an understanding of specific issues, interests and barriers related to museum visitation from these specific audience segments. The population was purposefully selected to represent the “ideal” demographic of highly educated, above-average income. Findings indicate that visitation occurs at a rate much greater than in the wider population and that, among these visitors/audiences, membership/subscribership is very low. Heteronormativism emerges from the data as the dominant barrier.
Making Schools Safe for LGBTQ and All Youth: Lessons from a Safe Schools Coalition
Presenter(s):
Lisa Korwin,  Korwin Consulting,  lisa@korwinconsulting.com
Robin Horner,  Korwin Consulting,  rh4consulting@yahoo.com
Abstract: Starting in the summer of 2003, a group of nonprofit and public sector organizations in Northern California launched a coalition dedicated to fostering safe school environments for LGBTQ and all youth. The coalition was formed in response to a strategic planning process which revealed significant systemic prejudice and violence against LGBTQ youth. From the start, coalition members and their funder recognized and prioritized the role of evaluation in this endeavor. Over the next four years, the evaluator, Korwin Consulting, worked closely with coalition members to identify desired outputs and outcomes. In partnership, the coalition and evaluator designed and implemented a mixed-methods evaluation. We will present two of the three short evaluation reports that resulted from this process. In our presentation, we will highlight strategies and lessons learned from evaluating a coalition working on changing community norms.

Roundtable: A View from the Trenches: Evaluators’ Perspectives on Evaluation Training and Tools
Roundtable Presentation 555 to be held in the Limestone Boardroom on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
SaraJoy Pond,  Brigham Young University,  sarajoypond@gmail.com
David Williams,  Brigham Young University,  dwilliams@byu.edu
Abstract: Dozens of tools have been developed for the purpose of preparing evaluators for practice. We have described models, outlined policies, designed instruments, written textbooks, and created job aids. We have tried everything from role playing to board games. But which of these tools is most effective? What makes it effective? What do evaluators want from an instructional tool? From a professional job aid? This roundtable session will provide participants an opportunity to explore the challenges of evaluation, the gaps in evaluation training, some of the tools currently available, and the essential characteristics of future tools to help evaluators link theory, policy, and practice.

Roundtable: Utilizing Participant and Stakeholder Information to Improve Special Education Programs
Roundtable Presentation 556 to be held in the Sandstone Boardroom on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Special Needs Populations TIG
Presenter(s):
R Lee Smith,  Indiana University South Bend,  rllsmith@iusb.edu
Tanice Knopp,  Independent Consultant,  awetyk@yahoo.com
William Delp,  Special Education District of Lake County,  wdelp@sedol.k12.il.us
Carol DuClos,  Special Education District of Lake County,  cduclos@sedol.k12.il.us
Ken Marsh,  Sarasota County Schools,  ken_marsh@sarasota.k12.fl.us
Abstract: This roundtable discussion will focus on program review intended to provide information for program change and improvement rather than objectives-oriented or authoritative appraisals of program function, quality, and staff performance. In conjunction with a unified special education school district, five program reviews were conducted utilizing a participant-stakeholder, client-centered design to suggest continuous improvement goals and to assist the district in its commitment to excellence. The presentation will include multiple perspectives. Presenters include two co-facilitators who contract for this type of program review, the superintendent and former associate superintendent from the local special education district, and a professional who has served as a member of a program review team. This roundtable will assist attendees in understanding and sharing information about client-centered methodologies and their potential in organizations with a view toward continuous quality improvement.

Roundtable: Applied Early Childhood Research and Evaluation: Informing Public Policy With Real World Data
Roundtable Presentation 557 to be held in the Marble Boardroom on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Marijata Daniel-Echols,  High/Scope Educational Research Foundation,  mdaniel-echols@highscope.org
Abstract: During this session, examples from ongoing state-funded evaluations of preschool programs in Michigan and South Carolina and completed research on Head Start will be used to demonstrate how issues like design (quasi-experimental, random assignment, regression discontinuity), defining and measuring concepts, establishing efficacy and effectiveness, and political context have impacted the type of data that can be collected for evaluation, how that data has been communicated to program and policy stakeholders and the varying levels of success in impacting policy using evaluation data. The goal of this session is to bring together evaluators who have conducted evaluations of early childhood education programs to share their own experiences and collectively identify strategies that have successfully addressed particular challenges.

Session Title: New and Emergent Directions in Evaluation Policy and Practice Through the Lens of Utilization-Focused Evaluation
Expert Lecture Session 558 to be held in Centennial Section A on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Evaluation Use TIG
Chair(s):
Michael Quinn Patton,  Utilization-Focused Evaluation,  mqpatton@prodigy.net
Presenter(s):
Michael Quinn Patton,  Utilization-Focused Evaluation,  mqpatton@prodigy.net
Abstract: The field of evaluation is dynamic and ever-developing. Given the profession's attention to use from its beginning, new developments in and approaches to evaluation policy and practice can benefit from examination through the lens of use. In the course of writing the 4th edition of Utilization-Focused Evaluation, just published in the summer of 2008, I identified major changes in evaluation policy, theory, methods, and practice over the last decade. (Over one-third of the 4th edition of the book is new and updated material.) This session will present the top ten new directions in evaluation that I identified and addressed during the revision with particular focus on Evaluation Policy and Evaluation Practice, and implications for enhancing evaluation use and influence.

Session Title: Finding Patterns in Evaluation Data: Searching for Clues That May Help Better Understand Programs and Policies
Panel Session 559 to be held in Centennial Section B on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Patrick McKnight,  George Mason University,  pmcknigh@gmu.edu
Abstract: There are many patterns in quantitative data. Unfortunately, quantitative analyses of program and policy evaluation data tend to be mechanistic and structured to find only one pattern in our data. The pattern we seek is one that fits a linear relationship with normally distributed residuals. In short, we seek to confirm the general linear model at some level. There are other patterns in our data that may be indicative of program effectiveness, and without a focused effort to search for these patterns we will not discover them. The purpose of this talk is to introduce the concept of pattern recognition as a data analytic routine to stimulate interest in this budding area of quantitative methodology. Two presentations cover both the theoretical underpinnings of pattern recognition as well as specific examples from real program evaluations in education.
Discovering Patterns in Longitudinal Data
Patrick McKnight,  George Mason University,  pmcknigh@gmu.edu
Discovering patterns in longitudinal data may help us better understand who changes, to what extent, and under which circumstances. Some individuals may show no change while others change in odd ways. Only through a deliberate effort to find these different patterns may we come to this realization. What's more, finding the patterns allows us to then seek predictors for the different patterns. These patterns may also lead to insights into the nature of how programs or policies may be effective. The purpose of this talk is to demonstrate pattern discovery methods. Data from education and mental health provide a set of examples where longitudinal data may be better characterized by pattern recognition methods. These methods are contrasted with more traditional longitudinal analyses that are typical in contemporary social science and program evaluation.
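As a concrete illustration of pattern discovery in longitudinal data, the sketch below clusters simulated five-wave trajectories into distinct change patterns instead of fitting a single average trend; the data, the group structure, and the use of k-means are assumptions for illustration, not the presenter's method or results.

    import numpy as np
    from sklearn.cluster import KMeans

    # Simulated longitudinal outcome: 300 participants measured at 5 time points,
    # drawn from three underlying change patterns.
    rng = np.random.default_rng(1)
    t = np.arange(5)
    improvers = 50 + 3 * t + rng.normal(0, 2, (150, 5))   # steady gains
    no_change = 50 + 0 * t + rng.normal(0, 2, (100, 5))   # flat trajectories
    decliners = 55 - 2 * t + rng.normal(0, 2, (50, 5))    # losses
    trajectories = np.vstack([improvers, no_change, decliners])

    # Cluster the raw trajectories to recover distinct patterns of change.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(trajectories)
    for k in range(3):
        profile = trajectories[km.labels_ == k].mean(axis=0)
        print(f"pattern {k}: n = {np.sum(km.labels_ == k)}, mean profile = {np.round(profile, 1)}")

    # The recovered pattern labels can then become an outcome in their own right,
    # e.g., predicting pattern membership with multinomial logistic regression.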
Assessing Patterns of Readiness for Program Engagement
Katherine McKnight,  Pearson Achievement Solutions,  kathy.mcknight@pearson.com
Composite variables are combinations of variables that are individually meaningful and are thought to be indicators of the same construct. For example, socioeconomic status is often measured as a combination of income, household size, occupation, educational level and so on. Problems arise in determining how to combine these variables to produce a useful measure of the given construct. We typically sum the scores for each variable to create an index, which assumes that each variable contributes equally. In this paper, we discuss the use of pattern assessment applied to an index measuring readiness for effective engagement in teacher workgroups. Assessing patterns of scores for the different variables of the index--e.g., administrative support, identified 'point person,' etc.-- allows us to assess the contribution of each variable to 'readiness' and weight it accordingly. Applying pattern assessment for multidimensional composites helps to better understand the phenomenon and to create more thoughtful measurement.
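The contrast between an equal-weight sum score and a pattern-informed composite can be sketched briefly. The four 'readiness' indicators, the ratings, and the use of first-principal-component loadings as weights are illustrative assumptions, not the index or weighting scheme described in the paper.

    import numpy as np

    # Hypothetical readiness indicators for 200 teacher workgroups, rated 0-4
    # (e.g., administrative support, identified point person, time, data access).
    rng = np.random.default_rng(2)
    indicators = rng.integers(0, 5, size=(200, 4)).astype(float)

    # Equal-weight index: the usual sum score assumes every indicator counts the same.
    sum_index = indicators.sum(axis=1)

    # One simple alternative: weight standardized indicators by their loadings on the
    # first principal component, so indicators that covary most strongly dominate.
    z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
    weights = np.abs(eigvecs[:, -1])      # loadings on the largest component
    weights = weights / weights.sum()
    weighted_index = z @ weights

    print("equal-weight index (first 5 workgroups):", sum_index[:5])
    print("pattern-weighted index (first 5 workgroups):", np.round(weighted_index[:5], 2))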

Session Title: Evaluation Policy: Integrating Evaluation Offices into the Surrounding Agency Culture
Expert Lecture Session 560 to be held in Centennial Section C on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Presidential Strand
Chair(s):
Melvin Mark,  Pennsylvania State University,  m5m@psu.edu
Presenter(s):
Eleanor Chelimsky,  Independent Consultant,  oandecleveland@aol.com
Abstract: Most discussions of evaluation policy focus on the substance and process of doing evaluations. This presentation focuses instead on another important aspect of evaluation policy: The organizational and structural considerations that facilitate doing needed studies, keeping them independent and credible, insuring their usefulness, and getting them disseminated. I examine three kinds of problems: first, problems typically encountered in achieving acceptance of evaluation and evaluators within agencies (e.g., clashes of professional cultures); second, challenges of organizing for use (especially, but not only, difficulties related to incompatible interests of different users); third, the independence and credibility of the evaluation product and the protections needed to maintain the evaluators' findings and the office's reputation. I argue that evaluation's failures can be traced directly to our naiveté about power relationships in government and to the difficulty of protecting evaluative independence in the face of political pressures. Good evaluation policy needs to avoid these failures.

Session Title: The Impact of Policy Change on Evaluation Practice
Multipaper Session 561 to be held in Centennial Section F on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Christina Christie,  Claremont Graduate University,  tina.christie@cgu.edu
What Do Evaluation Policies Change About Evaluators' Practices?
Presenter(s):
Claire Tourmen,  National Institute of Higher Agronomic Education in Dijon,  claire.tourmen@educagri.fr
Abstract: Evaluation has reached different levels of institutionalization throughout the world and across public organizations. What influence does this have on evaluation practitioners' day-to-day work? I will use data gathered for my PhD thesis on evaluation practices, undertaken in France. I will study the possible advantages, but also the perverse effects, of a regular evaluation policy on evaluators' practices. I will also study the opposite case, when people work with(in) organizations that do evaluation only if need be. Then I will discuss the success factors identified in this study and their links to evaluation policies.
“You Want Me to do What?": The Process of Evaluator Role Renegotiation
Presenter(s):
Eric Barela,  Los Angeles Unified School District,  eric.barela@lausd.net
Samuel Gilstrap,  Los Angeles Unified School District,  samuel.gilstrap@lausd.net
Abstract: This paper will explore how evaluators adapt to redefining and renegotiating their roles within the changing demands of an organization. Evaluators define their roles by their training, experience, and values and by the organizational context in which the evaluation occurs. When the context changes due to new organizational policies and the new demands that are created, evaluators must reconcile these new demands with their beliefs and must renegotiate this new context. A case example of evaluator role renegotiation is presented within the context of an urban school district whose evaluators are being asked to build the district's capacity in ways that are not always aligned with evaluation practice. The intent of this paper is to highlight the supports and barriers in the renegotiation of evaluator roles as a way of assisting other evaluators who may find themselves struggling to do so in a variety of contexts.

Session Title: Can You Hear Me Now? Use of Audience Response Systems to Evaluate Educational Programming
Demonstration Session 562 to be held in Centennial Section G on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Karen Ballard,  University of Arkansas,  kkballard@uaex.edu
Abstract: Extension educators are challenged by the evolving accountability culture that pushes for credible evidence of impact from public education programs and initiatives. The logistics and time required to develop, deliver and evaluate programming are often daunting to community-based educators, sometimes working as a one-person team. A personal response system (PRS) is an interactive communications technology that provides real-time engagement of participants. PRS is a promising tool for reducing the burden of meaningful evaluation practice while also enhancing the value of the evaluation product. This demonstration will be experientially based, with individual student transmitters provided for group participation.

Session Title: Economics Focused Meta-Analysis of Early Childhood Services and of Class Size Reduction Initiatives
Multipaper Session 563 to be held in Centennial Section H on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Brian Yates,  American University,  brian.yates@mac.com
Meta-Analysis of Economic Studies of Early Childhood Services
Presenter(s):
Sarah Heinemeier,  Compass Consulting Group LLC,  sarahhei@mindspring.com
Abstract: This paper will present condensed findings from a meta-analysis of economic studies of services focused on very young children and their families. Many states now have initiatives to support young children and their families. These initiatives range from direct subsidies in support of child care costs to family support and outreach programs to health services. In addition, many states also fund or support some form of prekindergarten programming for children. This presentation will present the findings of a meta-analysis of economic studies, with data drawn from existing studies of these varied programs and services. The methodology used to conduct the analysis also will be presented as will the implications of this type of analysis for evaluation studies. Finally, this presentation will address how to apply such an analysis to an evaluation design or question.
The Cost-Effectiveness of Class Size Reduction
Presenter(s):
Stuart Yeh,  University of Minnesota,  yehxx008@umn.edu
Abstract: The cost-effectiveness of class size reduction (CSR) was compared with the cost-effectiveness of rapid assessment, a promising alternative for raising student achievement. Drawing upon existing meta-analyses of the effects of student-teacher ratio, evaluations of CSR in Tennessee, California, and Wisconsin, and RAND cost estimates, CSR was found to be 124 times less cost-effective than the implementation of systems that rapidly assess student progress in math and reading two to five times per week. Analysis of the results from California and Wisconsin suggests that the relative effectiveness of rapid assessment may be substantially underestimated. Further research regarding class size reduction is unlikely to be fruitful, and attention should be turned to rapid assessment and other more promising alternatives.
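The underlying comparison is a simple cost-effectiveness ratio: dollars spent per unit of achievement gain, computed for each intervention and then compared. The sketch below shows only the arithmetic; the cost and effect-size numbers are placeholders and are not the estimates used in the paper, so the resulting ratio will not reproduce the reported 124:1 figure.

    # Cost-effectiveness compared as cost per standard deviation of achievement gain.
    # All numbers below are placeholders for illustration only.

    def cost_per_sd_gain(cost_per_student, effect_size_sd):
        """Dollars required to produce one standard deviation of achievement gain."""
        return cost_per_student / effect_size_sd

    csr = cost_per_sd_gain(cost_per_student=8000, effect_size_sd=0.20)    # class size reduction
    rapid = cost_per_sd_gain(cost_per_student=50, effect_size_sd=0.30)    # rapid assessment

    print(f"CSR: ${csr:,.0f} per SD gain")
    print(f"Rapid assessment: ${rapid:,.0f} per SD gain")
    print(f"CSR is {csr / rapid:.0f} times less cost-effective in this toy example")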

Session Title: Evaluation Performance Support: DoView Visual Logic Models and Self-Sustaining Wikis
Multipaper Session 564 to be held in Mineral Hall Section A on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Vanessa Dennen,  Florida State University,  vdennen@mailer.fsu.edu
Evaluation Performance Support
Presenter(s):
Tom McKlin,  Georgia Tech,  tom.mcklin@gatech.edu
Shawn Edmondson,  Spectrum Education Group,  sedmondson@spectrumedu.com
Abstract: This paper presentation is relevant to the second portion of the conference theme, “Evaluation Practice,” by providing an avenue for every current evaluator to participate in a project to improve the practice of all evaluators. This presentation will demonstrate some of the content available at EvaluationWiki, which may be used to support seasoned evaluators and to enable new evaluators to perform like those with years of experience. This paper also invites participants to contribute their expertise to this growing knowledge management system and describes the benefits of doing so beyond supporting new and seasoned evaluators. Like its parent, www.wikipedia.org, EvaluationWiki is intended to be influenced, updated, and corrected in real time by members of the evaluation community.
Using DoView Visual Logic Models As a Front-End For Evidence-Based Practice Web Databases
Presenter(s):
Paul W Duignan,  Parker Duignan Consulting,  paul@parkerduignan.com
Abstract: Pursuing evidence-based practice is resulting in web-based evidence databases in many fields (e.g. Cochrane collaboration for health, Campbell collaboration for social programs, natural resource evidence databases, and international development databases). Concurrently, program logic models are becoming widely used as a way of visually describing programs. Visual logic models identify different parts of a program mechanism (program theory) for which a user may want evidence. Working with Australian evaluation consultants Clear Horizon and Land and Water Australia, logic models drawn in DoView logic modeling software are being used as the visual front-end for an evidence-based practice web database - the Australian Government funded Natural Resource Management (NRM) Toolbar. The user views an HTML logic model on the web generated by DoView and drills down under hyperlinks to evidence summaries supporting key identified relationships within the model. This approach could be applied to any area of evaluation and evidence-based practice (http://www.clearhorizon.com.au/nrm).

Roundtable: Cluster Randomized Designs: Technical, Practical, and Policy Implications
Roundtable Presentation 565 to be held in Mineral Hall Section B on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Werner Wittmann,  University of Mannheim,  wittmann@tnt.psychologie.uni-mannheim.de
Leonard Bickman,  Vanderbilt University,  leonard.bickman@vanderbilt.edu
Manuel C Voelkle,  University of Mannheim,  voelkle@rumms.uni-mannheim.de
Abstract: Cluster Randomized Trials (CRTs) are being increasingly used in program evaluation. In contrast to standard experiments, it is not individuals but entire groups of people (clusters) that are randomly assigned to the intervention versus the control group. This roundtable will discuss the design and analysis of cluster randomized trials and the policy and practical implications of using such designs. The roundtable will focus on particular technical details such as effect sizes as well as the potential negative unintended side effects of the requirement to use more rigorous research designs.
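One technical detail central to such designs is the design effect: because outcomes within a cluster are correlated, the effective sample size shrinks relative to individual randomization. The sketch below shows this standard calculation; the cluster size, intraclass correlations, and the benchmark per-arm sample size are illustrative assumptions, not figures from the roundtable.

    # Design effect for a cluster randomized trial: DEFF = 1 + (m - 1) * ICC,
    # where m is the average cluster size and ICC is the intraclass correlation.

    def design_effect(cluster_size, icc):
        return 1 + (cluster_size - 1) * icc

    def clusters_needed(n_individual, cluster_size, icc):
        """Clusters per arm needed to match an individually randomized sample of n_individual."""
        return n_individual * design_effect(cluster_size, icc) / cluster_size

    n_flat = 64  # approximate per-arm n for an individually randomized trial (d = 0.5, power = 0.8)
    for icc in (0.01, 0.05, 0.10):
        k = clusters_needed(n_flat, cluster_size=25, icc=icc)
        print(f"ICC = {icc:.2f}: DEFF = {design_effect(25, icc):.2f}, "
              f"about {k:.1f} clusters of 25 per arm")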

Session Title: Assessing Foundation Communications: A New Tool for Practitioners
Demonstration Session 566 to be held in Mineral Hall Section C on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Advocacy and Policy Change TIG and the Non-profit and Foundations Evaluation TIG
Presenter(s):
Edith Asibey,  Asibey Consulting,  easibey@gmail.com
Justin Van Fleet,  University of Maryland,  justinvanfleet@post.harvard.edu
Abstract: In response to a need expressed by foundation communication practitioners, the Communications Network has developed a tool to evaluate communication campaigns and programs. Informed by an academic and non-academic literature review, interviews with numerous evaluation, philanthropy and communication experts, and a survey of 81 communication practitioners, the tool is designed to make the evaluation process approachable, doable and useful for foundations--and, indirectly, their nonprofit grantees--and to ensure continuous learning and sharing. The Communications Network has taken realistic organizational opportunities and constraints into consideration when placing this tool for effective evaluation in the hands of non-experts. Its creators, Edith Asibey and Justin van Fleet, will provide an overview of the tool and its components, and discuss how early adopters are using the tool to assess their organizational and programmatic communications.

Session Title: Use of Agent-Based Modeling in Program Evaluation
Expert Lecture Session 567 to be held in Mineral Hall Section D on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Mark Spranca,  Abt Associates Inc,  mark_spranca@abtassociates.com
Presenter(s):
Rajen Subramanian,  Abt Associates Inc,  rajen_subramanian@abtassociates.com
Discussant(s):
Mallary Tytel,  Healthy Workplaces LLC,  mtytel@healthyworkplaces.com
Abstract: Standard approaches to outcome evaluation are susceptible to the problems of (1) unexpected consequences as a result of nonlinearity in the impacts of programs and (2) changing impacts because of adaptation of the program participants to program activities. For example, programs adopting new methods of teaching math might in the long term impact students' abilities to creatively solve problems by emphasizing limited approaches to problem solving. As a result, programs exhibiting short-term benefits could lead to negative impacts in the medium to long term or cease to have any impact because of the adaptation of program respondents. This paper proposes the use of agent-based computational models (ABMs) in outcome evaluation to account for unexpected consequences and the adaptation of responses by program participants. Specifically, ABMs can expand evaluators' abilities to predict the broader scope of program impacts. ABMs can also assist policy designers in developing better program interventions.
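A toy sketch can make the adaptation argument concrete: agents receive a one-time boost to a single problem-solving strategy, then reallocate practice toward whatever currently works best, so the early gain erodes as their repertoire narrows. Everything in the sketch (the agents, the update rule, the outcome measure) is a hypothetical illustration, not the ABM proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    n_agents, n_periods = 500, 20
    repertoire = rng.uniform(0.4, 0.6, size=(n_agents, 3))  # skill in 3 problem-solving strategies

    for t in range(n_periods):
        if t == 2:
            repertoire[:, 0] += 0.2  # the program boosts strategy 0 at period 2
        # Adaptation: each agent shifts practice toward its current best strategy,
        # letting the other strategies decay slightly.
        best = repertoire.argmax(axis=1)
        delta = np.full_like(repertoire, -0.01)
        delta[np.arange(n_agents), best] = 0.01
        repertoire = np.clip(repertoire + delta, 0, 1)
        # Outcome: performance on novel problems rewards a broad repertoire.
        performance = repertoire.mean(axis=1) - 0.3 * repertoire.std(axis=1)
        if t in (1, 3, 10, 19):
            print(f"period {t:2d}: mean performance = {performance.mean():.3f}")

Run over 20 periods, the simulated cohort shows a short-term gain right after the boost and a gradual decline afterward - the kind of nonlinear, adaptation-driven trajectory the paper argues standard outcome evaluations can miss.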

Session Title: Evaluation's Place on the Organizational Talent Management Stage: Starring or Supportive Role?
Expert Lecture Session 568 to be held in Mineral Hall Section E on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Evaluation Managers and Supervisors TIG
Chair(s):
Vanessa Moss-Summers,  Xerox,  vanessa.moss-summers@xerox.com
Presenter(s):
Vanessa Moss-Summers,  Xerox,  vanessa.moss-summers@xerox.com
Abstract: Corporations today are seeking and implementing best approaches to entice, develop, and retain talent within the organization. Today's 'war on talent' requires that an organization effectively utilize its most important resource to both enhance the employee experience and boost company performance. Where does evaluation fit on the talent management stage? Does it play a starring or a supportive role? The session will present the author's experience in shaping evaluation policy and practice in corporate initiatives. It will specifically address three of the questions our President asked us to consider in our proposals: - What policies need to be developed or used in guiding evaluation in talent management? - What systemic evaluation policies and practices could help the organization meet its goals? - What role can we as an evaluation community play in creating and supporting organizational effectiveness goals?

Session Title: "Real-Time" Practice and Policy Implications
Multipaper Session 569 to be held in Mineral Hall Section F on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Social Work TIG
Chair(s):
Heidi Milch,  Community Connections of New York,  hmilch@gateway-longview.org
Changing Program Policy and Practice Using Data from Real-Time Methods: A Case Example
Presenter(s):
Brian Pagkos,  Community Connections of New York,  pagkos@hotmail.com
Heidi Milch,  Community Connections of New York,  hmilch@gateway-longview.org
Mansoor Kazi,  University at Buffalo - State University of New York,  mkazi@buffalo.edu
Dawn M Skowronski,  Mid-Erie Counseling and Treatment Services, 
Abstract: A recent request for proposals in Erie County, NY led to the formation of a quality management organization (QMO) responsible for monitoring all vendor and care coordination agencies delivering wraparound services in the county and for analyzing the functioning of the system as a whole. The QMO uses a blended paradigm approach (utilitarian and realist) to accomplish this task, resulting in products that are responsive to agency needs. Working collaboratively with primary users, data are used to develop both agency- and system-level policy as well as direct practice. Presenters will describe an agency case example where this evaluation intervention was put into place. Although findings were positive overall, the evaluation team worked closely with program supervisors and staff to troubleshoot areas needing improvement. This resulted in policy development surrounding formalized group supervision practices, worker training, outcome measure inter-rater reliability tests, and standardized methods to complete client assessments.
Evaluation Practice of Investigating What Works and in What Circumstances
Presenter(s):
Mansoor Kazi,  University at Buffalo - State University of New York,  mkazi@buffalo.edu
Brian Pagkos,  University at Buffalo - State University of New York,  pagkos@gmail.com
Abstract: This paper presents examples of real-time evaluation practice to investigate the patterns among client contexts, the intervention, and outcomes. The binary logistic regression method (Kazi, 2003) is used where multiple factors are influencing the outcome, potentially with a prediction of the odds of achieving a given outcome in particular circumstances. Hierarchical Linear Modeling (HLM) is used with hierarchically structured data, such as measures within persons or repeated measures designs (Raudenbush & Bryk, 2002). The regression discontinuity design can be used as a comparison group design in which the participants are assigned to program or comparison groups solely on the basis of a cutoff score on a pre-program measure (Shadish, Cook & Campbell, 2001). All three of these approaches to data analysis can be used at regular intervals whenever an outcome measure is repeated, and the findings can inform practice prospectively in each three-month period, in real time.
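As one example of these real-time analyses, the sketch below estimates a regression discontinuity effect on simulated data, where assignment to the program depends solely on a cutoff score on a pre-program measure. The cutoff, sample size, and effect size are invented for illustration and do not come from the authors' evaluations.

    import numpy as np
    import statsmodels.api as sm

    # Simulated regression discontinuity: participants scoring below the cutoff
    # on a pre-program measure receive the program.
    rng = np.random.default_rng(4)
    n, cutoff = 600, 50.0
    pre = rng.uniform(20, 80, n)
    treated = (pre < cutoff).astype(float)              # assignment solely by the cutoff
    outcome = 10 + 0.5 * pre + 6.0 * treated + rng.normal(0, 4, n)

    # Estimate the treatment effect as the jump in outcome at the cutoff,
    # allowing the slope to differ on each side.
    centered = pre - cutoff
    X = sm.add_constant(np.column_stack([treated, centered, treated * centered]))
    fit = sm.OLS(outcome, X).fit()
    print(f"estimated effect at the cutoff: {fit.params[1]:.2f} (true simulated value: 6.0)")

Repeating such an analysis each time the outcome measure is re-administered is what allows findings to inform practice prospectively, in the three-month cycles the abstract describes.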

Session Title: Evaluation as a Core Component of Evidence Based Practice Implementation
Demonstration Session 570 to be held in Mineral Hall Section G on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Barbara Wieder,  Case Western Reserve University,  barbara.wieder@case.edu
Kelly Burgess,  Case Western Reserve University,  krb26@case.edu
Abstract: The presentation will focus on the use of fidelity measures to evaluate Evidence Based Practice programs in organizations serving persons with severe and persistent mental illness, many with co-occurring substance use disorders. Areas addressed will include: - A brief introduction to Evidence Based Practices - A conceptual model of the core components of EBP implementation, including evaluation - An introduction to fidelity measures, the rationale for their use in evaluation, and how they are developed - A description of the fidelity scale for Integrated Dual Disorders Treatment (IDDT), an EBP widely disseminated in the United States, and the General Organizational Index (GOI), an accompanying evaluation measure - An overview of fidelity review methods, e.g., conducting a site visit, tips for getting accurate information, and reporting to stakeholders on the evaluation results, using examples from experiences with IDDT fidelity - Issues around fidelity evaluation, e.g., consultation or audit?

Roundtable: My First Year as a Professional Evaluator: What They Taught Me in School and What I Learned on the Job
Roundtable Presentation 571 to be held in the Slate Room on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Samuel Held,  Oak Ridge Institute for Science and Education,  sam.held@orau.org
Abstract: The year began with funding from one major client and ended with a minor client, a development partner, and a new major project. I was asked to do some simple survey-based evaluations. At the end of the year, I became a member of the evaluation team of a state-wide teacher education initiative. Unconventionally, I was asked by my major client not to perform one program evaluation, but to evaluate all of their programs involving multiple populations - undergraduates, pre-service teachers, in-service teachers, and university faculty. Additionally, I was asked to manage a workforce study resulting in a system dynamics model, conduct two peer reviews, and conduct one program review. This paper will discuss the unconventional skills and tasks I was asked to perform to provide a complete evaluation service to all of my clients, especially those skills not taught in my graduate program in evaluation.

Session Title: Evaluating Healthcare Facilities for Emergency Preparedness and Response
Multipaper Session 572 to be held in the Agate Room Section B on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Debora Goetz Goldberg,  Virginia Commonwealth University,  goetzdc@vcu.edu
Abstract: This multi-paper session presents information on the evaluation of emergency preparedness and response of health care organizations at both the individual facility and the national/regional level. Currently, most health care organizations do not participate in emergency preparedness evaluations. Accreditation agencies in health care are beginning to address a range of emergency management practices; however, these agencies do not have a method to evaluate the overall level of preparedness at the organizational level or at the national level. This multi-paper presentation focuses on the methodology for evaluation and the critical aspects for review of health care organizations at both levels. These presentations will draw upon lessons learned from national evaluations and case studies of individual health care organizations. Due to the sensitive nature of the topic, no organizational names or specific findings will be discussed.
Large Scale Evaluations of Emergency Management for Health Care Facilities
Debora Goetz Goldberg,  Virginia Commonwealth University,  goetzdc@vcu.edu
Sue Skidmore,  DQE Inc,  sskidmore@dqeready.com
All-hazards preparedness is a critical task for all levels and types of health care facilities. It includes responding to and minimizing the impact of natural disasters and of disasters caused by unexpected chemical, biological, radiological, nuclear and explosive events. This presentation reviews large-scale evaluation approaches for reviewing the level of preparedness of healthcare facilities. Methodological aspects that will be discussed include: evaluation design, data collection through structured interviews and electronic/written questionnaires, and dissemination of results to improve preparedness efforts. Critical aspects that will be reviewed for an evaluation of emergency preparedness and response include: surveillance; communication and notification; staffing and support; education and training; patient capacity; isolation and decontamination; supplies, pharmaceuticals, and laboratory support; and administration and planning. Lessons learned are drawn from several national evaluation studies of public and private healthcare facilities. Due to the sensitive nature of the information, evaluation findings will not be presented.
Emergency Management Evaluation of Individual Health Care Organizations
Sue Skidmore,  DQE Inc,  sskidmore@dqeready.com
Debora Goetz Goldberg,  Virginia Commonwealth University,  goetzdc@vcu.edu
This presentation reviews a multi-pronged evaluation approach for emergency management at the individual health care organization level. The evaluation methodology that will be discussed involves documentation reviews of an organization's disaster planning, structured interviews with key staff, and on-site facility tours. The methodological approach focuses on reviewing the following critical areas: leadership philosophy, hazard and vulnerability analysis (HVA), incident command system, training and sustainability, preparedness exercises and drills, organizational participation with local/regional preparedness activities, emergency management plan, decontamination program, security issues, command center components, communication issues, and surge capacity. A case study will be presented as an example of the evaluation approach that incorporates evaluation of requirements under recent emergency preparedness policies for health care organizations. The presentation will give participants information to assess the level of emergency preparedness for various types of health care organizations.

Session Title: Evaluating Technology Supported Instruction
Multipaper Session 573 to be held in the Agate Room Section C on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Trena Anastasia,  Colorado State University,  trena.anastasia@colostate.edu
How To (and How Not to) Teach Program Evaluation Online
Presenter(s):
Jeanette Harder,  University of Nebraska Omaha,  jharder@unomaha.edu
Jill Bomberger,  University of Nebraska Omaha,  jbomberger@unomaha.edu
Abstract: We must prepare the next generation of program evaluators. Many of our students are now demanding online delivery of course material; however, teaching online is not for everyone, nor is it intuitive. Come to this workshop and consider if it’s right for you. This workshop will provide you with the “nuts and bolts” of teaching an online graduate-level course in program evaluation. Taught from an empowerment, strengths-based perspective, and using service-learning, an online course in program evaluation can engage students in program evaluation and help them see the relevance to their field of practice. Structured, incremental assignments move students quickly along the learning curve, and before they know it, they have completed their first program evaluation. The presenter will share lessons she has learned from teaching program evaluation online to graduate social work students in urban and rural settings. A student perspective will also be offered.
An Evaluation of Learning Management System (LMS) Usage Patterns and Best Practice: What Have We Learned?
Presenter(s):
Kimberly McCollum,  Brigham Young University,  kamccollum@gmail.com
Larry Seawright,  Brigham Young University,  larrys@byu.edu
Abstract: Most universities and colleges use Learning Management Systems to help with administration, communication, and instruction. Recently, we have been tracking trends and perceptions of LMS usage on campus with the intent to evaluate system health and monitor stability. As system health has stabilized, our evaluation has shifted to the usefulness of the system for communication and instruction. We are in the process of gathering evaluative information to understand how well the LMS is being used for communication and instruction. Data sources include system data, surveys, interviews, and focus groups. To analyze the data, we are comparing actual usage to reported usage. We are also using student- and instructor-defined criteria to evaluate the effective use of the LMS in courses on campus. We expect to learn the best evaluation approaches for understanding effective uses of LMSs for administration, communication, and instruction, and whether these uses are correlated.

Session Title: Two Evaluation Components of a College-School District Urban Teacher Preparation Program
Multipaper Session 574 to be held in the Granite Room Section A on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Sandra Foster,  Mid-continent Research for Education and Learning,  sfoster@mcrel.org
Discussant(s):
Jean Williams,  Mid-Continent Research for Education and Learning,  jmwilliams@mcrel.org
Abstract: This presentation will discuss two components of a larger evaluation of a Teacher Quality Enhancement Grant involving a college-school district partnership preparing teachers for urban schools. The first component is a benchmarking study focused on urban teacher preparation programs effective in preparing teachers to teach in hard-to-staff urban school settings. This study identified common best practices across a set of programs, and described these practices so that the partnering systems (our key stakeholders) as well as other universities and districts could take advantage of this knowledge. Findings from this benchmarking study will be presented along with details on how TQE partners used recommendations from this study to make program changes. Second, a systems analysis examined the components of the TQE program to determine areas of alignment. The presentation will describe how results were used to inform stakeholders of specific areas that fostered or impaired the achievement of the program goals.
Best Practices for College-School District Partnerships in Recruiting, Preparing, and Retaining Highly Effective Teachers in Hard to Staff Urban Schools
Ruby C Harris,  Mid-Continent Research for Education and Learning,  charris@mcrel.org
Trudy L Clemons,  Mid-Continent Research for Education and Learning,  tclemons@mcrel.org
Sandra Foster,  Mid-Continent Research for Education and Learning,  sfoster@mcrel.org
Jean Williams,  Mid-Continent Research for Education and Learning,  jmwilliams@mcrel.org
This presentation will focus on the best practices of existing college-school district partnerships in recruiting, retaining, and preparing urban teachers at the secondary level. Key search terms included 'urban' combined with 'teacher recruitment', 'teacher preparation', 'teacher retention', and 'secondary education'. Screening criteria for the review included rigor of the design, relevance of the questions answered, and results of the study. Included in this presentation is a discussion of how to best connect school based field experiences with the college based classroom curriculum to train highly qualified urban teachers. Working definitions and common principles will be identified to help guide higher education institutions, school districts, and education policymakers in addressing the best practices for recruiting, preparing, and retaining effective teachers in hard-to-staff urban schools. Additionally, the presentation will share how results were used by stakeholders to make program changes and engage program faculty in further research.
Using a Systems Analysis Approach to Evaluate a College-School District Partnership for an Urban Teacher Preparation Partnership
Trudy L Clemons,  Mid-Continent Research for Education and Learning,  tclemons@mcrel.org
Ruby C Harris,  Mid-Continent Research for Education and Learning,  charris@mcrel.org
Sandra Foster,  Mid-Continent Research for Education and Learning,  sfoster@mcrel.org
Jean Williams,  Mid-Continent Research for Education and Learning,  jmwilliams@mcrel.org
This presentation will focus on the use of a systems analysis approach to evaluate the Metropolitan State College of Denver and Denver Public Schools Teacher Quality Enhancement (TQE) grant to prepare teachers to teach in urban middle and high schools. In this evaluation, a systems analysis approach was chosen in order to help stakeholders determine which goals to pursue and decide on a means to reach those goals. The TQE program was seen as a system of production, with the goal of developing a collaborative secondary education teacher preparation program with the capacity to graduate highly qualified urban educators. Systems theory diagrams and charts will be presented, showing how results were shared with stakeholders to illustrate gaps between current and desired states. Additionally, the presentation will offer recommendations for urban teacher preparation programs that have implications for higher education institutions, school districts, and education policy.

Session Title: Developing a Strategic Evaluation Process to Assess the Impact of a Local Health Officials' Orientation Training Program
Demonstration Session 575 to be held in the Granite Room Section B on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Health Evaluation TIG
Presenter(s):
Sue Ann Sarpy,  Sarpy and Associates,  ssarpy@tulane.edu
Abstract: Recent research indicates that approximately one-third of Local Health Officials (LHOs) have been in their position at Local Health Departments (LHDs) for two years or less. In addition, the "aging" of the public health workforce over the next several years will create unprecedented openings in public health leadership positions. In response, the Survive and Thrive program was created to prepare new LHOs to succeed within the multi-faceted environment of local health practice. Correspondingly, a rigorous process was developed to evaluate program effectiveness and provide evidence of the impact of the Survive and Thrive program in building LHD capacity, developing effective leadership, and strengthening the infrastructure of local governmental public health. This demonstration will discuss the strategic development of the program evaluation, including its standardized measures and protocols. Further, implications of this evaluation process will be discussed with respect to evaluating the impact of leadership and executive development programs nationwide.

Session Title: What's Common? Impact of Research on Renewable Energy Technology and on Poverty
Multipaper Session 576 to be held in the Granite Room Section C on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
George Teather,  Performance Management Network Inc,  george.teather@pmn.net
Evaluation of a Portfolio of Technologies: Wind Energy
Presenter(s):
Rosalie Ruegg,  TIA Consulting Inc,  ruegg@ec.rr.com
Abstract: This presentation will report on an assessment of a portfolio of U.S. Department of Energy (DOE) wind energy technologies, including those that have moved into distributed-scale and utility-scale applications. The presentation will focus on the approach and findings of Phase 1 of a planned two-phase evaluation of DOE wind energy technologies. Phase 1 uses interviews, document review, network analysis, and citation analysis to trace from the portfolio to applications, and also assesses the feasibility of following the Phase 1 tracing study with a Phase 2 benefit-cost analysis of the same portfolio. The study emphasizes the issues and challenges that the portfolio approach posed for the evaluation, and the approaches taken to address them.
Rethinking Impact Evaluation: Lessons from International Agricultural Research and Development
Presenter(s):
Jamie Watts,  Bioversity International,  j.watts@cgiar.org
Nina Lilja,  Consultative Group on International Agricultural Research,  n.lilja@cgiar.org
Patti Kristjanson,  International Livestock Research Institute,  p.kirstjanson@cgiar.org
Douglas Horton,  Independent Consultant,  d.horton@mac.com
Abstract: International agricultural research must increasingly account for its relevance to and impact on reducing poverty in developing countries, making impact evaluation a topic of much interest. As understanding grows of the numerous, dynamic, and inter-related factors thought to cause rural poverty in developing countries, and of the contribution of agricultural research to poverty reduction, impact evaluation is also being rethought. This paper presents the results of an international workshop that addressed three themes: cases where agricultural research contributed to poverty reduction, methodologies to evaluate the impact of agricultural research on poverty reduction, and institutionalizing new approaches to research and impact evaluation. From those experiences, conclusions were drawn for future impact evaluation of agricultural research aimed at poverty reduction and for research planning.

Session Title: Evaluating Community-Based Early Child Development Programs: Two Participatory Approaches
Multipaper Session 577 to be held in the Quartz Room Section A on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Marla Steinberg,  Michael Smith Foundation for Health Research,  msteinberg@msfhr.org
Abstract: This multi-paper session will present two approaches to evaluating a national multi-site early child development program. The Community Action Program for Children (CAPC), developed in 1992, is designed to reach the most vulnerable families with children aged 0-6 years and expectant mothers. CAPC uses a population health approach and is based on the principles of community development; these principles were incorporated into the two evaluation designs. The first paper will describe the evaluation framework that evolved from the British Columbia regional CAPC evaluation. The second paper will address the evaluation approach developed in Ontario, which attempted to find a balance between collecting standardized data consistently across the province and allowing funded projects flexibility to examine the outcomes important to them. Both papers will highlight the need for ongoing training of funded organizations and an adequately resourced evaluation infrastructure that can also support knowledge transfer and exchange.
The British Columbia Regional Community Action Program for Children Outcome Evaluation: A User-Friendly Health Promotion Evaluation Framework
Marla Steinberg,  Michael Smith Foundation for Health Research,  msteinberg@msfhr.org
This paper will describe the evaluation methodology developed in British Columbia to evaluate a nationally sponsored but regionally implemented early child development (ECD) program, the Community Action Program for Children (CAPC). The framework for evaluating the diverse multi-site ECD programs includes common logic models, core data collection tools, data entry and analysis spreadsheets, and a report template. The framework incorporates the World Health Organization principles for the evaluation of community-based health promotion programs and evaluation capacity building. The results of the evaluation enable funded organizations to improve programming and enable the funder to demonstrate accountability. Developing and implementing the evaluation framework required a stable, well-resourced evaluation support function with dedicated staff and resources.
The Ontario Regional Community Action Program for Children Outcome Evaluation: Finding a Balance between Consistency and Flexibility in Evaluation
Nicole Kenton,  Public Health Agency of Canada,  nicole_kenton@phac-aspc.gc.ca
This paper will examine the process behind the development of a large-scale, government-initiated evaluation of the Community Action Program for Children (CAPC) in Ontario. The evaluation framework aimed to find a balance between flexibility at the project level and consistent outcome measurement at the government level. The framework for evaluating these projects includes a participatory process to develop individual logic models; an Evaluation Toolkit designed to help projects conduct program evaluations and apply similar standards in the quality of measures used to assess outcomes; and a set of core measures to ensure comparability across projects and allow for a region-wide picture. The framework incorporated a capacity-building perspective and a collaborative process that involved funded projects in decision-making. The challenges of undertaking this evaluation and lessons learned for future evaluations will be shared.

Roundtable: How Do We Make Choices In Evaluation With Diverse Populations When Costs and Time Are Limited?
Roundtable Presentation 578 to be held in the Quartz Room Section B on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Wendy DuBow,  National Research Center Inc,  wendy@n-r-c.com
Gregory Diggs,  University of Colorado Denver,  gregory.diggs@cudenver.edu
Abstract: Here’s a typical situation in the world of evaluation: You are given an evaluation contract. The timeline is tight, funding is limited, and the client wants to hear from a range of “hard-to-reach” populations – including elderly, disabled, and low-income persons as well as non-English speakers, whose first language ranges from Spanish to Amharic. You are faced with a series of choices: What must you do to feel like your evaluation is on solid ground professionally? What must you do to ensure your evaluation is ethically sound? And what can you not do to be sure you stay within the allotted budget? This roundtable will take on the issues and questions that come up in the practical application of cultural sensitivity and evaluation principles. Choices have to be made. Perhaps through discussion, we can gain greater clarity on how to make them, what we gain and what we lose.

Session Title: Evaluating Communications Strategies for Public Health Emergencies: Experiences of the Centers for Disease Control and Prevention
Panel Session 579 to be held in Room 102 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Keri Lubell,  Centers for Disease Control and Prevention,  klubell@cdc.gov
Abstract: During public health emergencies such as disease outbreaks, natural disasters, or bioterrorism attacks, decision-makers (individuals, families, communities, policy makers, clinicians) need timely, accurate, credible, and actionable health information to minimize morbidity and mortality. The Emergency Communication System (ECS) at the U.S. Centers for Disease Control and Prevention (CDC) developed from communication challenges and lessons learned during the anthrax attacks in 2001. ECS integrates emergency-related communication activity across CDC; ensures the execution of coherent risk communication strategies to reach the public, affected communities, and partners with health protection messages; translates CDC's science for diverse audiences; disseminates tailored, consistent messages through multiple channels to achieve maximum outreach; and serves as CDC's liaison to federal/state/local partners in coordinating emergency-related health messages nationwide. This panel presents evaluation information on ECS communication efforts with, first, external partners and the public during emergencies generally, and second, internal agency staff working to prevent worldwide pandemic influenza.
Evaluating Agency Communications During a Public Health Emergency
Keri Lubell,  Centers for Disease Control and Prevention,  klubell@cdc.gov
Scott Hale,  Centers for Disease Control and Prevention,  shale@cdc.gov
Wendy Holmes,  Centers for Disease Control and Prevention,  wholmes@cdc.gov
Marsha Vanderford,  Centers for Disease Control and Prevention,  mev7@cdc.gov
Communicating effectively with the public during a large-scale health emergency or infectious disease outbreak is critical for ensuring safety and preventing death, injury or illness. But, to date, there have been few systematic efforts to evaluate the outcomes or impacts of emergency communications efforts (e.g., messages, dissemination strategies, or tactics). To address this need, in 2008, the Emergency Communications System (ECS) at the U.S. Centers for Disease Control and Prevention began developing a plan to evaluate how the agency communicates during the acute phases of a public health crisis. The process included creating a logic model, convening emergency communications experts to provide feedback and help construct an evaluation plan, evaluability assessment activities by ECS staff, and data collection and analysis by a collaborative team of internal and external evaluators. This presentation provides background on communication needs during emergencies and preliminary findings on the results of the evaluability assessment and evaluation.
Evaluating Media Monitoring as an Internal Agency Communications Tool
Scott Hale,  Centers for Disease Control and Prevention,  shale@cdc.gov
Reyna Jones,  Centers for Disease Control and Prevention, 
Miriam Cho,  Centers for Disease Control and Prevention,  gyo4@cdc.gov
Cornelia Redding,  Centers for Disease Control and Prevention,  gzx8@cdc.gov
Keri Lubell,  Centers for Disease Control and Prevention,  klubell@cdc.gov
ECS also monitors domestic and international television, print, and internet media on a potentially devastating health issue: avian and pandemic influenza. Stories are systematically collected from media in the largest markets, coded into strategic categories, and tracked over time. The compiled information is emailed daily to over 150 CDC staff, including epidemiologists, health communications experts, and decision-makers. To evaluate the utility of the report as an internal communications tool, we first collected data from a purposive sample of key-informant CDC staff to establish their expectations and information needs. We then surveyed staff receiving the report to assess how often they actively engaged with the information and how they incorporated its contents into their work. This presentation provides data on CDC's efforts to fill a significant gap in internal awareness of critical public concerns that affect the communications environment for pandemic influenza.

Session Title: The Built-In Evaluation Framework: Integration and Implementation of Built-In Evaluative Systems
Expert Lecture Session 580 to be held in  Room 104 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Stanley Taylor,  California State University San Bernardino,  stan2023@yahoo.com
Abstract: The Built-In Evaluation Framework (BIEF) is a form of organizational development that facilitates monitoring and reporting on program effectiveness and prepares programs for evaluation. Unlike models that attempt to reconfigure the organization to conform to their framework, the BIEF integrates into the organization's current Management Information System without causing undue strain on existing structures or systems. The model serves two primary functions: (1) it offers managers an efficient, cost-effective method of monitoring and reporting the progress of new and ongoing programs, and (2) it prepares the organization for evaluations by external evaluators.

Session Title: Organizational Models for Interpreting Project Operations: Loosely and Richly Joined Systems
Expert Lecture Session 581 to be held in  Room 106 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Michael Lieber,  University of Illinois Chicago,  mdlieber@uic.edu
Abstract: Ross Ashby described complex systems with multiple component hierarchies as having two possible models of organization: (1) hierarchical subsystems are connected at the top but share no connections at component levels, or (2) the hierarchies are connected at the top and at the component levels. The first model is a loosely joined system, typical of insect nervous systems, while the second is a richly joined system, typical of mammalian nervous systems. Social systems can be loosely or richly joined. I examine one loosely joined system, a health initiative in Chicago that is organizationally similar to a bureaucracy in its 'silo' structure, to draw three lessons: (a) silo structures may be adaptive responses to an organization's environment, (b) ambiguous messages rapidly amplify into communicative crises, and (c) silo structures enriched at lower levels may eventually alter the higher-order regulatory structures in the system.

Session Title: Evaluation under Violent and Post-Violent Conditions: Describing and Clarifying the Work and Suggesting Strategies, Tactics, and Tools
Panel Session 582 to be held in Room 108 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Michael Baizerman,  University of Minnesota,  mbaizerm@umn.edu
Abstract: International, humanitarian, and other aid agencies require evaluation for accountability and program improvement. Increasingly, evaluation has to be undertaken in communities under conditions of violent division. There is practice wisdom about how to conceptualize and implement this work, but far less readily available public, professional literature than there is about social research under such conditions. This panel will offer a public, professional space for describing, clarifying, and understanding this work and for suggesting practical strategies, tactics, and tools. Research topics on evaluation under these conditions will also be covered. A bibliography of work on these topics will be distributed.
Conducting Evaluation in Contested Spaces: Describing and Understanding Evaluation Under These Conditions
Ross VeLure Roholt,  University of Minnesota,  rossvr@umn.edu
Ross VeLure Roholt worked and lived in Belfast, Northern Ireland for two years (2004-2006). During this time, he designed and worked on several evaluation studies of youth programs, youth services, museum exhibitions, and quality assurance. His evaluation experience under violent and post-violent conditions will be described and joined with other evaluation studies conducted under similar conditions, gathered from practitioners and researchers for a special issue of Evaluation and Program Planning edited by Ross VeLure Roholt and Michael Baizerman. The focus will be on describing and clarifying evaluation work under these conditions.
Interrogating Evaluation Practice from the Perspective of Contested Spaces
Michael Baizerman,  University of Minnesota,  mbaizerm@umn.edu
Michael Baizerman has over 35 years of evaluation experience around the globe. Over the last seven years he has worked with governmental and non-governmental organizations in Northern Ireland, South Africa, Israel, Palestine, and the Balkan region to document and describe youth work in contested spaces and to develop effective evaluation strategies to document, describe, and determine outcomes of the work. Responding to and expanding on the description provided earlier, evaluation practice as typically described in the North and West will be interrogated. It will be shown that the very nature of such spaces makes typical best practices difficult, if not impossible, to apply. We use this finding to suggest practical strategies, tactics, and tools for designing and conducting evaluation in violent and post-violent contexts.

Session Title: Real World Strategies for Adapting and Evaluating Science-Based Adolescent Pregnancy and STD/HIV Prevention Programs
Think Tank Session 583 to be held in Room 110 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Government Evaluation TIG
Presenter(s):
Kelly Lewis,  Centers for Disease Control and Prevention,  fpi5@cdc.gov
Kathryn Braun,  University of Hawaii,  kbraun@hawaii.edu
Abstract: After fifteen years of continuous decline, the U.S. teen birth rate rose 3% between 2005 and 2006. Annually, about nine million youth ages 15-24 acquire an STD. Significant disparities persist in adolescent reproductive health. Research demonstrates that science-based programs (SBPs) effectively reduce teen pregnancy and STD risks, yet a growing challenge is the use of specific SBPs with diverse populations of youth. Thus, Adaptation Guidelines were developed through CDC's Promoting Science-Based Approaches (PSBA) program to assist youth-serving professionals in adapting and implementing SBPs. Participants will examine approaches to evaluating adapted SBPs, with discussions focusing on fidelity monitoring, measuring impact on outcomes, and the use of small sample sizes. Facilitators will describe how the Adaptation Guidelines fit into a systematic program planning, implementation, and evaluation process called PSBA-Getting To Outcomes. The Hawaii Youth Services Network will summarize its adaptation of Making Proud Choices, highlighting evaluation strategies and lessons learned.

Session Title: Enhancing the Understanding of the Technology Development and Innovation Process in Firms: Creation of a Data Enclave for Business Dataset
Panel Session 584 to be held in Room 112 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Research, Technology, and Development Evaluation TIG and the Government Evaluation TIG
Chair(s):
Brian Zuckerman,  Science and Technology Policy Institute,  bzuckerm@ida.org
Abstract: The Data Enclave allows researchers to access confidential business data securely through a remote access protocol. The enclave combines elements from the computing and social sciences to develop secure remote data access protocols. It also provides researchers with an environment that facilitates collaboration and documentation. This environment, or collaboratory, features wikis and blogs as well as direct interaction with data producers. This arrangement not only promotes high-quality research but also fosters interaction between producers and researchers that creates a healthy survey lifecycle, providing feedback to improve survey questions and analysis. This panel session will describe the motivation for building the Data Enclave, the components of the enclave, a demonstration, and preliminary research results to date.
Fostering the Data Enclave
Stephanie Shipp,  Science and Technology Policy Institute,  sshipp@ida.org
Stephanie Shipp was the Director of the Economic Assessment Office, Advanced Technology Program, when she funded the creation of the Data Enclave. She will talk about the motivation for the Data Enclave, briefly describe why she wanted to provide researchers access to ATP data, and outline the program requirements for researchers to access the data through the enclave. She has a background in managing the analysis of surveys at the Bureau of Labor Statistics and Census Bureau and overseeing the preparation of public-use files for the release of survey data. Those files were often subject to variable deletion or income topcoding to preserve confidentiality, which often hampered analysis because researchers did not have access to the full dataset.
Developing the Data Enclave
Julia Lane,  National Science Foundation,  jlane@nsf.gov
Julia Lane was Senior Vice President at NORC/University of Chicago and the Principal Investigator who led the development of the Data Enclave. She provided the creative input behind many of its innovative features, such as the collaboratory and secure remote access. She will describe the development and structure of the Data Enclave and highlight findings from researchers currently using it, in terms of how the Data Enclave and collaboratory have enhanced their research. Julia's career has focused on new ways to more fully utilize data; for example, she was the architect of the Longitudinal Employer-Household Dynamics project at the Census Bureau, which linked demographic and business datasets. She is a well-known labor economist and an international expert on confidentiality issues and data access.

Session Title: Independent Evaluation of the National Weed and Seed Strategy
Expert Lecture Session 585 to be held in  Room 103 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Crime and Justice TIG
Chair(s):
Roger Przybylski,  RKC Group,  rogerkp@comcast.net
Presenter(s):
James Trudeau,  RTI International,  trudeau@rti.org
Jon Blitstein,  RTI International,  jblitstein@rti.org
David Chrest,  RTI International,  davidc@rti.org
Abstract: RTI International is conducting an independent evaluation of the National Weed and Seed Strategy to assess W&S's key components of law enforcement, community policing, prevention/intervention/treatment, and neighborhood restoration; its core principles of collaboration, coordination, community participation, and leveraged resources; and the critical role of U.S. Attorneys. This presentation describes the comprehensive evaluation design, conceptual framework, identification of within-community comparison areas, and analysis plans (e.g., multi-level growth models, spatial analysis). The evaluation will integrate process and outcome information to explore linkages between local W&S implementation and outcomes. For all sites, we will formulate a broad overview across the national W&S Initiative using GPRA data from 250+ grantees, Census data, and web-based stakeholder surveys that include social network analysis. In 13 Sentinel Sites, we will derive additional information from surveys of target and comparison community residents, site visits, document review, and data on local business activity. Preliminary results will be included where possible.

Session Title: Options for Fast-Turnaround, Low-Cost Publishing In the Field of Evaluation
Expert Lecture Session 586 to be held in  Room 105 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the AEA Conference Committee
Presenter(s):
Michael Scriven,  Claremont Graduate University,  scriven@hotmail.com

Session Title: The Use of Theory of Change to Evaluate Communities of Practice in a Public Health Informatics Setting
Multipaper Session 587 to be held in Room 107 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Andrea Hegedus,  Northrop Grumman Corporation,  ahegedus@cdc.gov
Abstract: Recently, public health practice has begun a shift in focus from 'healthy people' to 'healthy communities.' Such a shift requires a concomitant change in evaluation practice that simultaneously addresses multiple sites, multiple lines of inquiry (promising practices, social, environmental), several levels of analysis (individual, organizational, programmatic), and mixed methods, all within a collaborative process that is co-constructed by diverse partners and continually evolving. This two-paper session will present communities of practice (CoPs) as one means to intervene in such a complex system. The first paper will describe CoPs and how they can be evaluated using a theory of change approach, while the second will discuss the unique challenges faced when evaluating public health informatics initiatives with CoPs. By the end of the session, attendees will gain an understanding of the need to use theory, such as theory of change, as a framework for evaluating complex social processes.
Using Theory of Change to Evaluate Communities of Practice
Andrea Hegedus,  Northrop Grumman Corporation,  ahegedus@cdc.gov
One reason why complex, multi-faceted evaluation designs can fail is that they are underspecified at the beginning of the initiative: insufficient thought is given to how activities and context affect both intermediate and long-term outcomes. Communities of practice (CoPs) have been identified as an intervention in complex social initiatives. CoPs are defined as groups of people who interact to solve common problems within social learning systems. Theory of change is one tool evaluators can use not only to specify how activities and context within CoPs can be tied to outcomes, but also to increase the rigor with which outcomes are attributed to the intervention. A theory of change prospectively articulates the structures or activities that need to change over time in order for the intervention to be effective. This paper will explain the benefits of using theory of change within CoPs to improve outcomes.
Evaluation of Initiatives in Public Health Informatics
Awal Khan,  Centers for Disease Control and Prevention,  aek5@cdc.gov
Public health informatics (PHI) is a relatively new field defined as the systematic application of computer science and technology to public health practice. Applications of PHI include the transfer of public health data among public health departments, hospitals, agencies of the Federal government, and other relevant partners; surveillance of data; and the application of technology to these efforts. PHI plays an important role in improving the quality and efficiency of public health data through the development and adoption of interoperable systems. Increasingly, participatory approaches are being used to facilitate activities and address barriers, and the core elements of communities of practice (CoPs) encourage participatory action research. This paper will describe communities of practice as an intervention in this setting, the model used to articulate the intervention, and how CoPs are expected to improve the practice of PHI.

Session Title: No Weak Link: The Role of a Collaborative Governance and Infrastructure in the Evaluation of Services for Families With Multiple Needs in Durham, North Carolina
Panel Session 588 to be held in Room 109 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Monica Jolles,  Durham County System of Care,  monica.jolles@dpsnc.net
Abstract: Fiscal constraints have compelled governmental agencies to pursue community initiatives by sharing resources and jointly assessing progress. The Durham System of Care (DSOC) is a county initiative implementing best-practice wraparound principles across the public education, child welfare, mental health, and juvenile justice systems. What are the implications for partners' structures and service guidelines within and across systems? What is the role of the collaborative structure, which includes families, serving agencies, and local policy makers, in sustaining this initiative and its evaluation process? Led by the DSOC evaluator and a Quality Management Specialist at a partner agency, this session will focus on the cross-agency learning and adaptive processes that have allowed this community to sustain the initiative for over four years. It will illustrate how systems have allocated resources, aligned policies, and revisited their governance process in order to meet evaluation needs. The session will conclude with lessons learned, next steps, and feedback from participants.
No Weak Link: The Role of a Collaborative Governance and Infrastructure in the Evaluation of Services for Families with Multiple Needs in Durham, North Carolina
Monica Jolles,  Durham County System of Care,  monica.jolles@dpsnc.net
This panel session illustrates a process that began when a Durham, North Carolina community recognized its own needs and worked with county agencies and policy makers to create a wraparound initiative, a System of Care, to serve children at the highest risk of out-of-home placement, multiple-agency involvement, and school failure. Led by an evaluator, this session will describe how partner agencies and local policy makers have collaborated to build capacity and sustain the evaluation process. For example, the SOC Collaborative has developed an Outcomes workgroup composed of agency representatives and community stakeholders. This group supports the evaluation process and serves as a link to the SOC infrastructure, which includes a cross-agency council, a collaborative, and a leadership roundtable. The session will end with lessons learned from a collaborative perspective and next steps for continuing to strengthen evaluation capacity across partner agencies and the System of Care infrastructure.
No Weak Link: The Role of a Collaborative Governance and Infrastructure in the Evaluation of Services for Families With Multiple Needs in Durham, North Carolina
Lisa C Perri,  The Durham Center,  lmperri@co.durham.nc.us
Across the US, the mental health system has led the development and implementation of System of Care principles in the delivery of services to children with multiple needs. More specifically, the Durham County mental health agency has developed a System of Care (SOC) Unit as part of the agency's infrastructure. Cross-agency collaboration that encourages family involvement is a dream come true when aiming to improve the lives of children at the highest risk of out-of-home placement, multiple-agency involvement, and school failure. The ability to evaluate measurable outcomes for families served within a cross-agency structure, along with how partner agencies have adapted to this process, is critical. Led by the agency's quality assurance specialist, a member of the SOC Outcomes workgroup, this session describes the agency's role in the cross-agency evaluation effort. Also discussed will be agency-specific lessons learned and steps taken to strengthen the agency's participation in the evaluation process.

Session Title: Comparing High Stakes Standardized Testing to Observational Measurement of Actually Doing Science: Exploring an Alternative
Demonstration Session 589 to be held in Room 111 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Charles Plummer,  Marion County School District and Simulation Systems Laboratory,  cmplum@rochester.rr.com
Abstract: Holding education accountable for the extent to which desired ends are achieved, rather than for adherence to prescribed means, seems useful. Evaluating adherence to prescribed means and ends seems inappropriate unless the 'one true way' for both means and ends is known. The issue might be more aptly stated: to what ends, and by what means and measures? Constructivism has focused attention on the value of students constructing their own reality through discovery, exploration, and experimentation. National education standards set objectives for students to learn how to actually do science. This increases the probability that more students will acquire proficiency in problem solving and decision making, learn how to pose their own questions and hypotheses based on theory, and then construct experiments and measurements to seduce reality into revealing itself by systematically applying the scientific process. We compare the implications of (1) measuring knowledge with high-stakes standardized tests that emphasize memorization and recall of facts, and (2) observational assessment of behaviors demonstrated when students are actually doing science. A 'Child Exploratory Behavior Observation Scale' (Plummer, 2007), developed using the National Educational Science Standards, is presented as an alternative method to evaluate 'highly-valued-but-difficult-to-measure' doing-science behaviors.

Session Title: Integrating Technology Tools Builds Evaluation Capacity: Three Practical Examples
Demonstration Session 590 to be held in Room 113 in the Convention Center on Friday, Nov 7, 10:55 AM to 11:40 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Elise Arruda,  Brown University,  elise_arruda@brown.edu
Stephanie Feger,  Brown University,  stepanie_feger@brown.edu
Abstract: This demonstration extends Ritter and Sue's (2007) New Directions for Evaluation issue on online surveys in evaluation by addressing (1) the use of multiple instruments integrated through technology, and (2) the use of technology in evaluation practice beyond data collection. The Education Alliance at Brown University is using online tools to enhance evaluation work processes and extend collaboration among project teams and participants. The demonstration walks attendees through three evaluation instruments: a professional development log, an interview/observation protocol, and a pre-/post-test, all administered via an Open Source platform. The demonstration includes discussion of successes and challenges in developing online tools for evaluation. For example, the log instrument requires several entries over time, the observation protocol involves narrative data, and the student survey expects the same participant to complete both the pre- and post-test. Each of these settings will be addressed through a viable and accessible technology resource.
