Evaluation 2009

Session Title: Approaches to Evaluating Advocacy and Policy Change: An International Comparison
Multipaper Session 374 to be held in Panzacola Section F1 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Jacqueline Williams Kaye, Atlantic Philanthropies, j.williamskaye@atlanticphilanthropies.org
Evaluating Advocacy: A Model of Influencing
Presenter(s):
Annabel Jackson, Annabel Jackson Associates, ajataja@aol.com
Abstract: The paper describes the evaluation models and methods developed to capture the learning from the Prison Reform Trust's campaign 'Criminal Damage', which aims to reduce the number of children and young people who are imprisoned in the UK. The campaign is funded by the Diana, Princess of Wales Memorial Fund, a foundation that specialises in funding advocacy and campaigning. The evaluator has been appointed to help conceptualise and measure the learning from advocacy in a form that can be useful to the very different campaigns and circumstances supported by the foundation, and has developed systems to track the actions and achievements of the campaign over the four remaining years for which it is funded. These systems include a model of influencing styles, a meeting database, and a stakeholder map. Together these methods capture the tacit knowledge held by the campaign director and help measure and communicate the achievements of the campaign.
Cultivating Demand Within United States Agency for International Development (USAID) for Impact Evaluations of Democracy and Governance Programs
Presenter(s):
Mark Billera, United States Agency for International Development, mbillera@usaid.gov
Abstract: There are countless testimonies by those who credit USAID-funded programs with improving conditions in their lives and in their countries. Gathering quantifiable evidence as proof of the effectiveness of USAID assistance has been more elusive. This can be especially challenging when it comes to measuring the impact of USAID assistance addressing democracy and governance (DG). Upon the recommendation of the National Research Council, USAID is undertaking a pilot program of impact evaluations designed to demonstrate whether such evaluations can help determine the effects of democracy and governance projects on targeted policy-relevant outcomes. A portion of these impact evaluations will use randomized designs. Two main challenges now confront USAID. First is to figure out the best way to undertake impact evaluations of DG programs, both technically and bureaucratically. Second is to convince USAID and implementing partner staff that such evaluations will be worth the time, effort, and possible additional costs.
Characterizing and Assessing Policy Change Using an Annual Online Survey Instrument
Presenter(s):
Annette Gardner, University of California San Francisco, annette.gardner@ucsf.edu
Claire Brindis, University of California San Francisco, claire.brindis@ucsf.edu
Lori Nascimento, The California Endowment, lnascimento@calendow.org
Sara Geierstanger, University of California San Francisco, sara.geierstanger@ucsf.edu
Abstract: In 2008 and 2009, the University of California, San Francisco (UCSF) administered an online policy tracking survey to 18 grantees funded under The Endowment's Clinic Consortia Policy and Advocacy Program. The survey was used to quickly assess achievement of 3 policy issues targeted by grantees in the prior year. The web-based survey takes 20 minutes to complete and includes questions on advocacy activities, decision makers targeted by grantees, policymaker and stakeholder support and opposition, advocacy partnerships, and benefits of the 3 policies. We describe the findings for the two years, comparing policies and years, as well as the benefits of these policies for primary care clinics and their target populations. Additionally, we describe our strategy for linking these findings to grantee advocacy planning. UCSF developed an accessible report that was discussed with grantees one month after collecting the data.

Session Title: Innovative Evaluations in Child Welfare
Multipaper Session 375 to be held in Panzacola Section F2 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Tania Rempert, Bureau of Evaluation and Research, trempert@illinois.edu
Discussant(s):
Vajeera Dorabawila, Bureau of Evaluation and Research, vajeera.dorabawila@ocfs.state.ny.us
Evaluation of a Community Program to Improve Dating Relationships Among Teenagers in a Residential Group Home
Presenter(s):
Kristin Duppong Hurley, University of Nebraska at Lincoln, kdupponghurley2@unl.edu
Laura Buddenberg, Boys Town, buddenbergl@boystown.org
Kathy McGee, Boys Town, mcgeek@boystown.org
Abstract: This presentation will review the evaluation approach and preliminary results for a community-based program to reduce and delay sexual activity among teenagers and help them establish boundaries in relationships. The setting for the study is a residential group home for youth with emotional and behavioral needs. This paper will discuss the rationale for selecting the evaluation approach, including a review of other designs that were considered.
Evaluating Post-adoption Services for Families When Conditions Change
Presenter(s):
Margaret McKenna, ConTEXT, mmckenna3@earthlink.net
Abstract: This paper identifies several implications for program evaluators, who must develop more flexible and adaptable evaluation approaches as declining economic conditions reduce the evaluation budgets of human service programs. The example given is of modifications to the program evaluation of a five-year demonstration project that provided education and support for adoptive families. The cumulative effects of several contextual factors, including the economy, changes in the direct service providers, and programmatic factors, affected program attendance and participation patterns, which led to program changes in location and service delivery model. The evaluation had included process and outcome components for the five-year evaluation period, but the emphasis shifted in year five to a process evaluation appropriate to project start-up rather than to outcome measurement. Suggestions are identified for applying the lessons learned to other human service program evaluations.
Framework for Evaluating Change in Complex Systems Across Context: An Example From Child Welfare
Presenter(s):
Margaret Richardson, Western Michigan University, margaret.m.richardson@wmich.edu
Jim Henry, Western Michigan University, james.henry@wmich.edu
Abstract: The Children's Trauma Assessment Center in Kalamazoo, MI, is implementing and evaluating a project designed to foster the integration of trauma-informed policy and practice across agencies in county child welfare systems in Michigan. A mixed methods evaluation design is used to capture change in multiple forms/phases of the system in four system-level domains, and within the context of each county. A tri-level evaluation strategy examines system-, individual practitioner-, and child-level changes. An interrupted time series design is considered to track changes longitudinally at the system level. Fidelity to trauma-informed evidence-based practices is monitored at the practitioner level and examined for association with levels of traumatic stress symptoms at the child level. Evaluation design development and implementation to capture multi-level, multi-system change across contexts will be shared, as will preliminary results and evaluation tools.
Treatment Implementation and Mental Health Outcomes for Youth in Residential Care
Presenter(s):
Kristin Duppong Hurley, University of Nebraska at Lincoln, kdupponghurley2@unl.edu
Ron Thompson, Boys Town, thompsonr@boystown.org
Abstract: There is considerable need to establish methods for reliably and effectively assessing the quality of treatment implementation. This presentation will describe the methods used to assess the implementation of a complex intervention serving youth with significant emotional and behavioral health needs. Preliminary results will also be provided for this ongoing study.

Session Title: Health Evaluation TIG Business Meeting and Presentation: An Evaluation of the Impact of School-based Health Centers on Children's Health Outcomes
Business Meeting Session 376 to be held in Panzacola Section F3 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
TIG Leader(s):
Jeannette Oshitoye, Nemours Health and Prevention Services, joshitoy@nemours.org
Robert LaChausse, California State University San Bernardino, rlachaus@csusb.edu
Tricia Hodge, Communitas Inc, tricia.hodge@yahoo.com
Jenica Huddleston, University of California Berkeley, jenhud@berkeley.edu
Presenter(s):
Lauren Lichty, Michigan State University, lichtyla@msu.edu
Miles McNall, Michigan State University, mcnall@msu.edu
Brian Mavis, Michigan State University, mavis@msu.edu
Abstract: Located within schools or on school grounds, school-based health centers (SBHCs) provide a comprehensive range of primary care, preventive, and early intervention services to children. SBHCs are staffed with multidisciplinary teams of health providers, including nurse practitioners, physician assistants, and social workers. SBHCs have increased access to and utilization of primary care services among low-income urban and rural youth, serving as a health care safety net for medically underserved children. The Michigan Evaluation of School-based Health (MESH) Project evaluates the impact of SBHCs on the health, school attendance, and healthcare costs of children attending schools with SBHCs. In this presentation, we will discuss the methodological improvements of the current study over prior evaluations of SBHCs and the quantitative findings from our two-level hierarchical linear models of the effect of SBHCs on student health and attendance. Results from the first two waves indicate positive effects on the well-being of SBHC users.
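A two-level model of the kind described can be sketched in a few lines; this is not the MESH Project's analysis, and the data, effect sizes, and variable names below are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical student records nested in schools (all names and values are ours)
rng = np.random.default_rng(4)
schools = np.repeat(np.arange(20), 50)           # 20 schools x 50 students each
sbhc = np.repeat(rng.integers(0, 2, 20), 50)     # level-2 predictor: school has an SBHC
school_effect = np.repeat(rng.normal(0, 0.5, 20), 50)
health = 3.0 + 0.3 * sbhc + school_effect + rng.normal(0, 1, len(schools))
df = pd.DataFrame({"school": schools, "sbhc": sbhc, "health": health})

# two-level hierarchical linear model: students (level 1) within schools (level 2);
# the random intercept per school absorbs between-school variation
model = smf.mixedlm("health ~ sbhc", data=df, groups=df["school"]).fit()
print(model.summary())
```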

Session Title: The Intersection of Behavioral Health and the Legal System
Multipaper Session 377 to be held in Panzacola Section F4 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Rodney Wambeam, University of Wyoming, rodney@uwyo.edu
Evaluation of Mental Health Courts: Lessons Learned From a Multi-site Longitudinal Study
Presenter(s):
Asil Ozdogru, Policy Research Associates, aozdogru@prainc.com
Henry Steadman, Policy Research Associates, hsteadman@prainc.com
Abstract: Mental health courts (MHC) are specialty courts within the legal system designed to facilitate the processing and diversion of people with mental health problems involved in the criminal justice system. The MacArthur MHC evaluation study looked at four courts across the United States for three years, examining the public safety and mental health outcomes of defendants who were processed through MHCs and of a similar comparison group who went through the regular court system. Preliminary analyses and our experiences show that adopting a multi-site longitudinal evaluation strategy involving a wide range of stakeholders in an attempt to create change in multiple systems has methodological and analytical advantages as well as administrative and contextual challenges in the design and implementation of an evaluation study.
The Effect of Out-of-Home Treatment on Criminal Activity: A Framework for Comparing Treatment Modalities
Presenter(s):
John Robst, University of South Florida, jrobst@fmhi.usf.edu
Mary Armstrong, University of South Florida, armstron@fmhi.usf.edu
Norin Dollard, University of South Florida, dollard@fmhi.usf.edu
Abstract: Objective: The interaction between juvenile justice and out-of-home treatment is complex, with differences in youth, providers, crime severity, and justice system placements. This presentation provides a framework for evaluating whether out-of-home treatment for children with mental health needs reduces criminal activity. Two particular questions focus on the importance of prior criminal activity, and whether placement with delinquent peers in residential settings affects future criminal activity. Methods: Medicaid data from FY2002/03-2004/05 are merged with juvenile justice data. Changes in criminal behavior (arrests/severity of crime) are assessed using a difference-in-difference approach in conjunction with propensity score weighting. Results: Youth treated in group homes had more post-treatment arrests and were more likely to commit felonies. The presence of delinquent peers also increased post-treatment criminal behavior. Conclusion: Youth treated in therapeutic foster care had greater reductions in criminal activity. The importance of peer effects suggests that group homes face challenges in reducing criminal behavior.
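The estimation strategy named in the Methods section (difference-in-differences combined with propensity score weighting) can be sketched roughly as follows; this is an illustrative Python sketch on simulated data, not the authors' Medicaid analysis, and every variable name and value here is hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical youth-level data (all names ours): group_home = 1 vs.
# therapeutic foster care = 0; arrests observed pre- and post-treatment
rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "group_home": rng.integers(0, 2, n),
    "age": rng.integers(12, 18, n),
    "prior_arrests": rng.poisson(1.0, n),
})
df["pre"] = df["prior_arrests"] + rng.poisson(0.5, n)
df["post"] = df["pre"] - 0.3 + 0.4 * df["group_home"] + rng.normal(0, 1, n)

# step 1: propensity scores for treatment modality, then inverse-probability weights
ps = smf.logit("group_home ~ age + prior_arrests", data=df).fit(disp=0)
p = ps.predict(df)
df["w"] = np.where(df["group_home"] == 1, 1 / p, 1 / (1 - p))

# step 2: difference-in-differences on the weighted long-format data
long = df.melt(id_vars=["group_home", "w"], value_vars=["pre", "post"],
               var_name="period", value_name="arrests")
long["post_t"] = (long["period"] == "post").astype(int)
did = smf.wls("arrests ~ post_t * group_home", data=long, weights=long["w"]).fit()
print(did.params["post_t:group_home"])  # DiD estimate of the modality effect
```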
Evaluation of an Integrated Service Model and Family Drug Court to Improve Child Well-Being and Permanency Outcomes for Children Affected By Methamphetamine or Other Substance Abuse
Presenter(s):
Helen Holmquist-Johnson, Colorado State University, hjohnson@cahs.colostate.edu
Sonja Rizzolo, University of Northern Colorado, sonja.rizzolo@unco.edu
Abstract: Parental use of methamphetamine is associated with a multitude of problems that have a negative impact on children, such as poor parental capacity. Under these circumstances, children are at great risk of neglect and are frequently removed from their homes and placed in the child welfare system. The Regional Meth Partnership is one of 53 regional partnership grantees funded by the Children's Bureau. The aim of the project is to improve safety, well-being, and permanency outcomes for children of families impacted by methamphetamine and other drugs through the provision of integrated services within a Family Treatment Drug Court (FTDC) model in two counties in Colorado. Indicators across four outcome domains (child/youth, adult, family/relationship, and regional partnership/service capacity) are being collected. Evaluation findings, implications for practice, and recommendations will be presented.
Evaluation of the South Dakota Public Safety Driving Under the Influence (DUI) 1st Program: Tracking DUI Recidivism
Presenter(s):
Roland Loudenburg, Mountain Plains Evaluation LLC, rolandl@mtplains.com
Abstract: This session will describe the evaluation methods and results of a five-year study of the effect of a standardized DUI first-offender curriculum on DUI recidivism rates in South Dakota. Prior to implementation of the standardized curriculum, there was no standardized, systematic approach for dealing effectively with first-time DUI offenders to curtail recidivism and mitigate the impacts of DUI on public safety. The overall goal of the program was thus to facilitate a statewide demonstration project that implements higher standards of education, treatment, and care for the first-time DUI offender and measures program effectiveness using sound evaluation and statistical methods. Reaction to the evaluation methods and findings by stakeholders such as judges, prosecuting attorneys, treatment providers, public safety officials, and policy makers will be discussed.

Session Title: Program Evaluation Theory: Implications for Context, Design and Practice
Multipaper Session 378 to be held in Panzacola Section G1 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Allison Nichols, West Virginia University Extension Service, ahnichols@mail.wvu.edu
Developing a Context for Program Evaluation: From Logic Models to Program Theory
Presenter(s):
Mary Elizabeth Arnold, Oregon State University, mary.arnold@oregonstate.edu
Brooke J Dolenc, Oregon State University, dolencb@onid.orst.edu
Abstract: The purpose of program logic models is to articulate the connections between the resources that are invested in a program, what is done and who is reached with those resources, and what outcomes happen as a result. However, the connections between the elements of a logic model are often assumptive in nature. The assumptions about the links that describe how a program should work are crucial because the program's effectiveness depends on their accuracy. In this paper, the authors will discuss the importance of developing logic models toward a more sophisticated and contextual understanding of a program's theory. Using the 4-H horse youth program as an example, the authors will share the process they used to develop a more contextual understanding of the program's theory and show how this enhanced knowledge of the theory shaped a large-scale evaluation of the 4-H horse program.
Using Rigorous Program Evaluation Theory to Enhance Extension Program Planning
Presenter(s):
Alexa Lamm, University of Florida, alamm@ufl.edu
Glenn Israel, University of Florida, gdisrael@ufl.edu
Abstract: This presentation explores the application of rigorous program evaluation theory and its effect on formative Extension program planning through a case study. Rigor is a characteristic of evaluation quality and can be emphasized through the use of high quality evaluation tools. These tools include advanced impact models, process models, and logic models. Skilled evaluators are aware of the importance of rigor, but this is not always true of Extension professionals when creating program plans. These educators tend to jump into specific program planning unsure of how to include evaluation in the planning process (Arnold, 2002). This presentation discusses questions about how evaluation methods can strengthen program planning in the formative stage: What role do evaluators play while working with Extension program planning teams? How can programs be strengthened by including evaluation in the planning phase? The presentation provides recommendations regarding the use of rigorous evaluation methods while planning Extension programs.
The Influence of Program Theory on Evaluation Relevance, Quality and Impact in Extension's Florida Yards and Neighborhoods Program
Presenter(s):
Robert Strong, University of Florida, strong@ufl.edu
Amy Harder, University of Florida, amharder@ufl.edu
Abstract: The relevance component of this evaluation describes the severity of the problems, issues, and concerns confronting the target audience of an adult Extension educational program in Florida, the educational solutions recommended for addressing them, and the personal and societal benefits expected from implementing those solutions. The quality component addresses specific aspects of the program, describing the program qualities that are important to stakeholders and measuring client and stakeholder satisfaction with those attributes. Lastly, the impact component measures the difference the program makes in the lives of participating individuals; this is important because behavior change does not occur without knowledge being gained in the first place. The change in learning (knowledge, attitudes, skills, and aspirations) will be clarified, and the presentation will explain the types of behavior change needed to attain the desired results.

Session Title: The Cutting Edge: Novel Applications of Program Theory
Multipaper Session 379 to be held in Panzacola Section G2 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Katrina Bledsoe, Walter R McDonald and Associates Inc, kbledsoe@wrma.com
Social Mechanisms, Theory-building and Evaluation Research
Presenter(s):
Brad Astbury, University of Melbourne, brad.astbury@unimelb.edu.au
Abstract: The relatively recent movement towards a focus on 'mechanisms' as a key analytical concept in the social sciences has started to spill over to the field of program and policy evaluation. This has occurred mainly through the introduction of the term 'mechanism' into the vocabulary of various forms of theory-driven evaluation (e.g. Pawson & Tilley, 1997; Weiss, 1997). However, there appears to be some ambiguity about the proper meaning and uses of mechanism-based thinking in both the social science and evaluation literature. In this paper I attempt to clarify what is meant by 'mechanisms' in the context of program evaluation. I also demonstrate, through illustrative case examples, how the process of investigating underlying generative mechanisms can aid theory-building in evaluation.
Evaluating a Carbon Monoxide Ordinance: Using Program Theory to Enhance External Validity
Presenter(s):
Huey Chen, Centers for Disease Control and Prevention, hbc2@cdc.gov
Fuyuen Yip, Centers for Disease Control and Prevention, fay1@cdc.gov
Shahed Iqbal, Centers for Disease Control and Prevention, geo6@cdc.gov
Jacquelyn Clower, Centers for Disease Control and Prevention, flj@cdc.gov
Paul Garbe, Centers for Disease Control and Prevention, plg2@cdc.gov
Abstract: Mecklenburg County, North Carolina, passed a residential carbon monoxide (CO) alarm ordinance in 2002. An evaluation is being conducted to assess the effectiveness of the law in reducing CO poisoning. External validity is an important issue in this evaluation because the findings will inform the development of prevention messages and policy recommendations nationally, with important public health impact. The paper will demonstrate how program theory is useful for addressing both internal and external validity issues. Program theory is used to identify the following contextual factors related to external validity: the enforcement of the ordinance, the capacity of the agency enforcing the ordinance, the support from collaborating organizations, the residents' awareness of the ordinance, and the prevalence of CO detectors in the county. How to integrate internal and external validity in an evaluation will also be systematically discussed.
Linking Network Structure With Project Performance
Presenter(s):
Boru Douthwaite, Challenge Program on Water and Food, bdouthwaite@gmail.com
Sophie Alvarez, International Center for Tropical Agriculture, b.sophie.alvarez@gmail.com
Abigail Barr, University of Oxford, abigail.barr@economics.ox.ac.uk
Katherine Tehelen, International Center for Tropical Agriculture, k.tehelen@cgiar.org
Abstract: Innovation can be understood as an emergent property of people-based systems in which agents interact. These interactions can be represented by networks, showing agents as nodes and interactions as the links between them. Research-for-development projects attempt to foster and support innovation through research. It follows that project network structure, in terms of the types of organizations that work together to implement projects and their patterns of interaction, should correlate with measures of project success. This paper tests this premise using the network maps drawn by staff of 29 projects of the Challenge Program on Water and Food (CPWF). Performance is measured through a rating carried out by the program's research and development leadership. We conduct two types of network analysis. The first correlates individual project network structure with project performance. The second posits that an organization's performance will be affected by its location in the overall CPWF network in which it is embedded.
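The first type of analysis (correlating each project's network structure with its performance rating) might look roughly like this; the graphs, metrics, and ratings below are random stand-ins rather than CPWF data, and the degree-centralization measure is a crude proxy:

```python
import networkx as nx
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)

# hypothetical stand-ins for the 29 project network maps and leadership ratings
projects = [nx.gnp_random_graph(int(n), 0.3, seed=int(s))
            for n, s in zip(rng.integers(8, 20, 29), rng.integers(0, 10_000, 29))]
ratings = rng.uniform(1, 5, 29)

# summarize each project's structure with simple whole-network measures
density = [nx.density(g) for g in projects]
centralization = [max(dict(g.degree()).values()) / max(len(g) - 1, 1)
                  for g in projects]  # crude proxy: maximum normalized degree

# correlate structure with performance across projects
for name, metric in [("density", density), ("centralization", centralization)]:
    r, p = pearsonr(metric, ratings)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```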

Session Title: Using Focus Groups in Evaluation: Practical Considerations for Evaluators
Think Tank Session 380 to be held in Panzacola Section H1 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Eric Barela, Partners in School Innovation, ebarela@partnersinschools.org
Abstract: The purpose of this think tank is to discuss the practicality and utility of conducting focus groups in evaluation work. Focus group methodology is utilized by many evaluators in a wide range of contexts. Because of this, evaluators possess a great deal of hard-won knowledge about what needs to be considered by an evaluator who is thinking about conducting focus groups to respond to client needs. The goal of this session is to draw upon this theoretical, contextual, and practical knowledge of both new and seasoned evaluators to generate a comprehensive list of practical considerations for evaluators who are thinking about incorporating focus groups into an evaluation design. To achieve this, participants will engage in three discussions. They will discuss strategies related to convening focus groups, conducting focus groups, and reporting on focus groups.

Session Title: Introduction to Logistic Regression
Demonstration Session 381 to be held in Panzacola Section H2 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Dale Berger, Claremont Graduate University, dale.berger@cgu.edu
Abstract: Evaluators often wish to use multiple predictors to predict or model a dichotomous outcome (e.g., success/failure, persist/dropout, admit/deny, self selection into a treatment vs. control as in propensity analysis). Ordinary regression does not provide an appropriate model for this type of analysis, but logistic regression is a readily available alternative that is accessible in SPSS and other statistical packages. Logistic regression is not difficult to use and understand, although new terminology and unfamiliar statistics can be challenging for first-time users. In this demonstration we will examine the logic and application of logistic regression for dichotomous dependent variables, show why ordinary regression is not appropriate, and demonstrate applications with dichotomous predictors, continuous predictors, and categorical predictors. Participants will be given a packet with SPSS syntax and annotated output for a range of applications. Familiarity with multiple regression analysis will be helpful, but not required.
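For orientation, the core model the demonstration covers can be previewed in a few lines of Python (the session's handouts use SPSS syntax; the data and variable names below are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical data: predict persistence (1) vs. dropout (0) from one
# continuous and one dichotomous predictor
rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "gpa": rng.normal(3.0, 0.5, n),        # continuous predictor
    "mentored": rng.integers(0, 2, n),     # dichotomous predictor
})
true_logit = -6 + 2 * df["gpa"] + 0.8 * df["mentored"]
df["persist"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(int)

# ordinary regression is inappropriate for a 0/1 outcome; logistic regression
# models the log odds of the outcome instead
X = sm.add_constant(df[["gpa", "mentored"]])
fit = sm.Logit(df["persist"], X).fit(disp=0)
print(fit.summary())
print("odds ratios:", np.exp(fit.params).round(2))
```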

Session Title: Multiple-Perspectives on Measuring Arts Infusion Efforts
Multipaper Session 382 to be held in Panzacola Section H3 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Min Zhu, University of South Carolina, helen970114@gmail.com
Abstract: Educational programs that infuse or integrate arts in the curriculum were developed based on the belief that quality education in the arts significantly adds to the learning potential of all students. Although arts infused programs have been shown to benefit students' social, emotional, and physical development, current achievement-based accountability initiatives promote an emphasis on measurable evidence of academic achievement and cognitive benefits. Findings regarding the effect of arts infused programs on students' academic achievement, however, have been inconclusive. One variable that may explain disparate achievement levels among arts schools is how the arts infused programs are implemented (Yap et al., 2007). The varied approaches to implementation may be attributed to a lack of consensus about what constitutes the nature and scope of arts infusion. The purpose of this multi-paper session is to present two perspectives on measuring arts infusion implementation efforts.
Arts Infusion Continuum (AIC): A Best Practice Perspective
Min Zhu, University of South Carolina, helen970114@gmail.com
XiaoFang Zhang, University of South Carolina, jae2008@gmail.com
The inconclusive findings on the effect of arts infused programs on student achievement have prompted evaluators to investigate the arts programming and infusion implementation strategies of arts schools with disparate achievement levels. They found that one of the main differences between the high achieving and low achieving arts schools was the level of arts infusion effort. Attributing the differing arts infusion levels to varied definitions of arts infusion, the Arts in Basic Curriculum (ABC) Project convened a task force to develop instruments that would clarify the definitions for and identify levels of arts infusion efforts. This presentation will focus on the arts infusion research and the development of the Arts Infusion Continuum (AIC), which aimed to provide schools with a best practice framework for thinking about their schools' arts infusion efforts. Finally, the presentation will conclude with a discussion of the development of the AIC measuring tool to evaluate arts infusion effort.
Understanding Arts Infusion Efforts Using the AIC-Measuring Tool
Grant Morgan, University of South Carolina, praxisgm@aol.com
The AIC-measuring tool comprises two parallel surveys with 100 Yes/No statements that describe arts infusion efforts based on the AIC. The two surveys are parallel in that words in several statements were changed to target either the arts or other content areas. From the 47 schools, 868 other-content-area teachers and 207 arts teachers responded. A total of 63 items resulted after recoding several dichotomous items into polytomous items. Two dimensions were determined using Mokken scale analysis. Each dimension was scaled using the Rasch modeling program Winsteps to determine item-level information and scale scores. This presentation will provide details regarding the statistical procedures involved in computing teacher-level and school-level scale scores for each arts infusion dimension. Furthermore, the interpretation of the diagnostic information regarding schools' levels of arts infusion efforts based on the scale scores will also be presented.
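As a rough illustration of the Rasch step only (the study itself used dedicated software, and the Mokken scaling step is omitted here), a minimal joint maximum likelihood sketch on simulated dichotomous responses might look like this:

```python
import numpy as np
from scipy.optimize import minimize

# simulated persons x items response matrix; a real analysis would use Winsteps
# or a full IRT package rather than this bare-bones estimator
rng = np.random.default_rng(6)
theta_true = rng.normal(0, 1, 200)     # person abilities
b_true = rng.normal(0, 1, 10)          # item difficulties
p = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.uniform(size=p.shape) < p).astype(int)

def neg_loglik(params, X):
    n, k = X.shape
    theta, b = params[:n], params[n:]
    logits = theta[:, None] - b[None, :]
    # Bernoulli log-likelihood under the Rasch model
    return -(X * logits - np.logaddexp(0, logits)).sum()

n, k = X.shape
res = minimize(neg_loglik, np.zeros(n + k), args=(X,), method="L-BFGS-B")
b_hat = res.x[n:] - res.x[n:].mean()   # anchor: difficulties centered at zero
print("estimated item difficulties:", np.round(b_hat, 2))
```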
Understanding How Teachers Approach Arts Infusion
Tara Pearsall, University of South Carolina, pearsalt@mailbox.sc.edu
Ashlee Lewis, University of South Carolina, ashwee301@hotmail.com
With the intent of further understanding teachers' approaches to arts infusion, an open-ended question was included with the AIC-measuring tool requesting that teachers describe their arts infusion experiences. Approximately 600 teachers from the 48 schools who responded to the survey provided a description of an arts infusion experience. An extensive content analysis was conducted to identify themes among teachers' descriptions. The major themes identified within teachers' statements included (a) purposes and benefits of providing arts infusion, (b) approaches to arts infusion instruction, (c) knowledge and understanding of the nature and scope of arts infusion, and (d) challenges and obstacles to arts infusion. During the session, the presenters will discuss the themes identified and include several examples of arts infusion experiences as described by teachers.
A Curricular Integration Framework: Synthesizing Implementation Strategies
Ching Ching Yap, Savannah College of Art and Design, ccyap@mailbox.sc.edu
A review of arts integration literature and the content analysis of teachers' arts infusion experiences revealed that most teachers implement arts integration by using sample units or lesson plans created by arts integration specialists. Some who attempted to create their own arts integration lessons were confronted with challenges, which they attributed to a lack of understanding of the concepts of arts integration implementation. In this presentation, the author will share a conceptualized framework for developing integrated lessons. This framework is designed based on content analysis of sample arts integration units or lesson plans. Because the author believes that integration should be co-equal and emphasize both arts and non-arts content areas, the framework is described as a curricular integration framework. Finally, the author will demonstrate how the framework may provide evaluators with indicators to measure arts integration efforts based on implementation strategies used.

Session Title: Non-profit and Foundations Evaluation TIG Business Meeting
Business Meeting Session 383 to be held in Panzacola Section H4 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
TIG Leader(s):
Lester Baxter, Pew Charitable Trusts, lbaxter@pewtrusts.org
Lorna Escoffery, Escoffery Consulting Collaborative Inc, lorna@escofferyconsulting.com
Teresa Behrens, The Foundation Review, behrenst@foundationreview.org
Joanne Carman, University of North Carolina at Charlotte, jgcarman@uncc.edu

Session Title: Evaluation of the Centers for Disease Control and Prevention's (CDC's) HIV Prevention Program: Rationale, Reporting, and Realities
Panel Session 384 to be held in Sebastian Section I1 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG and the Health Evaluation TIG
Chair(s):
Dale Stratford, Centers for Disease Control and Prevention, bbs8@cdc.gov
Abstract: CDC funds 59 state and local health departments and approximately 150 community-based organizations to conduct a wide variety of HIV prevention programs in the US. How do you evaluate all of that and satisfy the needs of congressional funders, the White House, national interest groups, CDC planners, local program directors and evaluators, and the HIV prevention program client? This session will describe the approaches and challenges of national-level evaluation of a wide array of HIV prevention programs and activities in order to meet the accountability, program monitoring, and program improvement needs of a number and variety of stakeholders. The three presentations will focus on the rationale, reporting, and realities of conducting national HIV prevention program evaluation.
Development and Implementation of HIV Prevention Program Reporting Requirements, Tools, and Technical Assistance
Marla Vaughan, Centers for Disease Control and Prevention, mhv1@cdc.gov
Kimberly Thomas, Centers for Disease Control and Prevention, kit9@cdc.gov
Antonya Rakestraw, Centers for Disease Control and Prevention, aip5@cdc.gov
Joanna Wooster, Centers for Disease Control and Prevention, zft6@cdc.gov
This presentation will discuss the collection of meaningful data at the national level through close collaboration with stakeholders and the provision of significant technical assistance to grantees. The approach of collaborating with HIV prevention partners at every stage in the development and implementation of standardized data variables and program performance indicators on a national scale is critical to the success of monitoring and evaluation efforts. Providing ongoing support to grantees as they implement the data reporting requirements is also an important part of the process. This support includes training on monitoring and evaluation and on the use of applications for data reporting; readily available technical assistance; the development and dissemination of guidance documents and tools for monitoring and evaluation with a focus on local data use; and direct and extensive guidance to grantees on data reporting requirements. This approach reflects a commitment to promote greater data utilization and to demonstrate progress toward reaching our goals.
Data Management, Analyses, and Reporting of Large-scale National HIV Prevention Program Data
John Beltrami, Centers for Disease Control and Prevention, hzb3@cdc.gov
Hussain Usman, Centers for Disease Control and Prevention, gnw8@cdc.gov
Nancy Habarta, Centers for Disease Control and Prevention, eqq1@cdc.gov
CDC has never before had a national system of standardized HIV prevention program data. Such data are critical for CDC to monitor and evaluate HIV prevention activities at both the local and national levels. This presentation will 1) provide a brief overview of data systems and data types, 2) describe activities related to what happens to the data once submitted by the health departments, 3) describe quality assurance (QA) reports, descriptive reports, and a QA feedback process to the health departments, 4) describe a data report and data release request system, and 5) discuss challenges. The focus of the information provided will be on the status and progress to date of this work, which began in 2008. Additionally, context will be provided that relates to a federal agency leading this national work, which requires extensive collaboration and integration with multidisciplinary program and IT staff.
HIV Prevention Program Evaluation Studies: Purpose, Process, and Predicaments
Gary Uhl, Centers for Disease Control and Prevention, gau4@cdc.gov
Kathleen Raleigh, Centers for Disease Control and Prevention, kmr9@cdc.gov
Elizabeth Kalayil, Centers for Disease Control and Prevention, ehk2@cdc.gov
Andrea Moore, Centers for Disease Control and Prevention, dii7@cdc.gov
CDC conducts focused evaluation projects to describe and examine the delivery of local and national HIV prevention programs and also outcomes such as client-level and community-level risk and behavior change. We also conduct projects to develop and disseminate guidance documents on improving the quality of local and national evaluation data. All of these projects use both qualitative and quantitative methods and a range of evaluation designs. This presentation will highlight issues relevant to evaluators such as initiating and maintaining stakeholder involvement, conducting evaluation projects in the field (as opposed to university or research settings), collecting sensitive information such as type and frequency of sexual activity from high-risk minority communities (e.g., men who have sex with men, African Americans, Hispanics), and reporting results in a variety of formats to multiple audiences.

Session Title: Hot Topics in Quantitative Methods
Multipaper Session 385 to be held in Sebastian Section I2 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Fred L Newman, Florida International University, newmanf@fiu.edu
Improving Statistical Conclusion Validity in Mediation Analysis using Bootstrap Procedures
Presenter(s):
MH Clark, Southern Illinois University at Carbondale, mhclark@siu.edu
Steven Middleton, Southern Illinois University at Carbondale, scmidd@siu.edu
Abstract: The present study describes a bootstrap method for testing mediational relationships that has quickly gained popularity over the past few years. The bootstrap method is a more robust test that, under certain conditions, can provide more valid results than previous methods. Because the bootstrap method uses several subsamples from the original data to measure the direct and indirect relationships between variables, sampled distributions are less skewed and small sample sizes are less likely to affect the statistical power of the test. To demonstrate its effectiveness, the bootstrap method will be compared to Baron and Kenny's (1986) traditional method for assessing mediation using a data set with a small sample size (n = 64) and non-normally distributed variables.
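A minimal percentile-bootstrap test of the indirect effect, in the spirit of the method described, can be sketched as follows; this runs on simulated data, not the study's data set:

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # a path: M regressed on X
    Xmat = np.column_stack([np.ones_like(x), m, x])   # b path: Y on M, controlling X
    b = np.linalg.lstsq(Xmat, y, rcond=None)[0][1]
    return a * b

# hypothetical small, non-normal sample mirroring the abstract's scenario (n = 64)
n = 64
x = rng.normal(size=n)
m = 0.5 * x + rng.exponential(size=n)
y = 0.4 * m + 0.2 * x + rng.exponential(size=n)

# percentile bootstrap of a*b: resample cases with replacement
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"a*b = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```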
Comparing Analysis of Covariance (ANCOVA), Repeated-Measures Analysis of Variance (ANOVA), and Multilevel Longitudinal Design in Causal Modeling of Non-Random Clusters
Presenter(s):
Lihshing Wang, University of Cincinnati, leigh.wang@uc.edu
Abstract: Quasi-experimental studies involving non-random clusters of subjects are often the norm in evaluation research for drawing causal inferences. When the design involves two or more groups where pretesting is possible but randomization is not, three common statistical procedures exist for modeling the treatment effects: Analysis of Covariance (ANCOVA), which compares the posttest means using the pretest as the covariate; Repeated-Measures Analysis of Variance (RMANOVA), which estimates the interaction effect between the within-subject Time factor and between-subject Treatment factor; and Multilevel Longitudinal Design (MLD), which estimates the treatment effect after adjusting for the intraclass correlation in hierarchically clustered data with Time nested within Subject and Subject nested within Treatment. The present study reviews the theoretical framework and pragmatic utility of these three approaches, arguing for the preference of RMANOVA over ANCOVA and MLD over RMANOVA. Simulation results with known true effects are presented to support this claim.
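The three approaches can be lined up side by side as in the sketch below; this is illustrative code on simulated pre/post data (all names are ours), and the OLS interaction model is a simplified stand-in for a full repeated-measures ANOVA:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical pre/post scores for two non-randomized groups
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "subject": np.arange(n),
    "treat": np.repeat([0, 1], n // 2),
    "pre": rng.normal(50, 10, n),
})
df["post"] = df["pre"] + 2 + 3 * df["treat"] + rng.normal(0, 5, n)

# 1) ANCOVA: compare posttest means with the pretest as covariate
ancova = smf.ols("post ~ pre + treat", data=df).fit()

# long format for the two time-based approaches
long = df.melt(id_vars=["subject", "treat"], value_vars=["pre", "post"],
               var_name="time", value_name="score")
long["time"] = (long["time"] == "post").astype(int)

# 2) Time x Treatment interaction (simplified RM-ANOVA analogue)
rm = smf.ols("score ~ time * treat", data=long).fit()

# 3) multilevel longitudinal model: a random intercept per subject handles
#    the within-subject clustering the plain OLS version ignores
mld = smf.mixedlm("score ~ time * treat", data=long, groups=long["subject"]).fit()

print(ancova.params["treat"], rm.params["time:treat"], mld.params["time:treat"])
```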
Spatial Regression Discontinuity: Estimating Effects of Geographically Implemented Programs and Policies
Presenter(s):
Christopher Moore, University of Minnesota, moor0554@umn.edu
Abstract: Estimating causal effects is an important aim in the field of program evaluation, but many programs and policies are implemented in geographically defined jurisdictions, such as school districts or states, and not by randomly assigning participants to a treatment or control group. How might evaluators estimate causal effects in the case of treatment assignment based on geographic borders? Regression discontinuity is a quasi-experimental design and statistical modeling approach that can yield causal estimates that are comparable to those derived from randomized controlled trials. Spatial regression discontinuity is a special case that recognizes geographic borders as sharp cutoff points where local effects can be estimated. This paper details how evaluators can implement spatial regression discontinuity designs that allow causal conclusions. A hierarchical spatial regression discontinuity analysis will be demonstrated in the context of a well-known study of minimum wage effects by Card and Krueger (1994).
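A bare-bones sharp spatial RD, using signed distance to the jurisdiction border as the running variable, might look like this; the data, bandwidth, and variable names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical units with a signed distance (km) to the border;
# dist > 0 means the unit lies inside the treated jurisdiction
rng = np.random.default_rng(2)
n = 1000
dist = rng.uniform(-30, 30, n)
treated = (dist > 0).astype(int)
y = 1.5 * treated + 0.05 * dist + rng.normal(0, 1, n)
df = pd.DataFrame({"dist": dist, "treated": treated, "y": y})

# local linear fit within a bandwidth, slopes allowed to differ by side;
# the bandwidth is a tuning choice (data-driven selectors exist)
h = 10
local = df[df["dist"].abs() < h]
rd = smf.ols("y ~ treated + dist + treated:dist", data=local).fit(cov_type="HC1")
print("local effect at the border:", round(rd.params["treated"], 3))
```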

Session Title: Ensuring Consistently High Quality Local Evaluations: Lessons From an Evaluation of a Multi-site Education Program
Think Tank Session 386 to be held in Sebastian Section I3 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Ardice Hartry, MPR Associates, ahartry@mprinc.com
Discussant(s):
Patty O'Driscoll, Public Works Inc, patty@publicworks.org
Beverly Farr, MPR Associates, bfarr@mprinc.com
Abstract: What changes can be made to the grant selection process to ensure consistently high-quality local evaluations? In this session, we will tap the expertise of evaluators to answer this question. Participants will be divided into three groups, each of which will be provided with a summary of two evaluation plans from a state-funded multi-site program. Each group will review the plans and use a protocol to determine strengths and weaknesses. When we reconvene as a large group, each small group will report out its findings, which will guide the search for commonalities. Then the group will discuss how changes to the Request for Proposal or other processes may increase quality and forestall problems in evaluation design and execution. Throughout this session, evaluators will have the opportunity to reflect on their own practice in ways that will help them to improve the quality of their work in local evaluations.

Session Title: Contextualizing Your Inner Evaluator: Embracing Other World Views
Skill-Building Workshop 387 to be held in Sebastian Section I4 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Alice Kawakami, University of Hawaii at Manoa, alicek@hawaii.edu
Morris Lai, University of Hawaii, lai@hawaii.edu
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Hazel L Symonette, University of Wisconsin Madison, symonette@bascom.wisc.edu
Deana Wagner, Johns Hopkins University, dwagner@jhsph.edu
Abstract: This session will assist evaluators who are not residents or cultural citizens of the context where they may be conducting evaluations. We will provide opportunities for dialogue with individuals from three diverse communities and focus on evaluation undertaken with approaches that honor and respect world views of community members. In addition to a general overview of strategies addressing hierarchies of power and privilege, we will provide opportunities for participants to engage in small group activities that will explore world views related to decolonizing evaluation, transformative evaluation, and methods for cultivating self as responsive instrument. Each small group will describe an evaluation opportunity in a "non-mainstream" community and explore strategies appropriate to that context. The session will end with the presenters' and participants' debrief of the process and sharing of challenges and suggestions to guide evaluators' future endeavors.

Session Title: Systems in Evaluation TIG Business Meeting and Think Tank: Systems, Systems Thinking, and Systemness: What's It All About, Anyway?
Business Meeting Session 388 to be held in Sebastian Section J on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Systems in Evaluation TIG
TIG Leader(s):
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@stanfordalumni.org
Margaret Hargreaves, Mathematica Policy Research Inc, mhargreaves@mathematica-mpr.com
Presenter(s):
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@stanfordalumni.org
Margaret Hargreaves, Mathematica Policy Research Inc, mhargreaves@mathematica-mpr.com
Abstract: As the Systems in Evaluation TIG continues to grow and bring in new members, we often find ourselves struggling with the need to develop a common understanding about what is really meant when we talk about concepts such as "systems theory" or "system thinking". And what about all those other "system" terms we encounter - such as "systemness", "systems of care", and "systems of service delivery"? What are the distinctions between these terms and how does understanding this impact us as evaluators? What are their strengths and limitations? What do these different perspectives bring to understanding issues pertaining to systems thinking for evaluation? To issues of context? What does this mean in the overall context of systems in evaluation? The ultimate objective of this think tank is to challenge members' thinking about systems and encourage them to "stretch" their evaluative paradigms concerning systems, systems thinking, and systemness.

Session Title: The Power of Context and Its Role in Shaping Evaluations
Panel Session 389 to be held in Sebastian Section K on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Presidential Strand
Chair(s):
Debra Rog, Westat, debrarog@westat.com
Discussant(s):
Debra Rog, Westat, debrarog@westat.com
Abstract: Just as evaluators have increasingly recognized the need to understand the "black box" of intervention, so have we become more aware of the need to understand the context in which we operate. In some instances, this understanding involves merely acknowledging the context and how it may have had a role in affecting program implementation or outcomes; in other evaluation situations, context is embraced within the study itself. This panel will provide three case examples in which context played a strong role in shaping one or more aspects of the evaluation approach, including the methods, analysis, and dissemination process. The discussant will present a synthesis of what the three papers may offer in helping us understand the ways in which different features of context - especially the nature of the phenomenon itself being studied, and the dynamics in the political context - influence evaluation practice.
Context Becomes Foreground: Baseline Study of a Program to Improve College-Readiness
Joy Frechtling, Westat, joyfrechtling@westat.com
Joseph Hawkins, Westat, josephhawkins@westat.com
Debra Rog, Westat, debrarog@westat.com
A topic of increasing interest to foundations, the current administration, and federal policy makers is finding ways to increase the percentage of students from traditionally underrepresented populations who pursue, and succeed in, post-secondary education. This paper will discuss a baseline study of such a program that illustrates how context, in this case the school or educational environment as well as program maturity, significantly impacts both the implementation of a college readiness program and the strategies employed for its evaluation. Focusing on 11 high schools (6 traditional public schools and 5 charter schools) in a city with a history of educational challenges, the study uses interviews, focus groups, and document review to provide a rich description of early implementation. This description identifies important contextual factors that facilitate or present obstacles to a program's implementation and, likely, ultimate success. Implications for program design, as well as evaluation design, are discussed.
Ignore Context at Your Own Peril: Evaluation of a Community-based Employment Program for Young Adults With Criminal Justice Involvement
Scott Crosse, Westat, scottcrosse@westat.com
Janet Friedman, Westat, janetfriedman@westat.com
The New York City Justice Corps is an intensive employment-centered program for young adults with criminal justice involvement that is being implemented by two community-based organizations in New York. Context has shaped the design and implementation of the evaluation of this program in several important ways. For example, contextual factors have influenced the: development of working relationships between service provider and evaluation staff (e.g., overcoming wariness of evaluator staff), collection of data from individual participants in the evaluation (e.g., understanding concerns about terms in instruments), and interpretation of findings on program processes and outcomes (e.g., accounting for the effect of changes in local economic conditions on employment-related outcomes). The paper discusses some of the contextual factors in play and challenges posed by them, and efforts by the evaluators to respond to these challenges.
Attending to Context: Strategies for Producing Actionable Evidence in Dynamic Policy Issues
Kathryn Henderson, Westat, kathrynhenderson2@westat.com
Debra Rog, Westat, debrarog@westat.com
Linda Weinreb, University of Massachusetts Medical School, weinrebl@ummhc.org
Social issues are often dynamic and sensitive to changes in the broader economic and political context. Conducting evaluations and related efforts in areas such as homelessness can be challenging, particularly in providing information that can have relevance to the changing needs of decision makers. This paper describes the experience of designing, implementing, and reporting a study in Massachusetts focused on the factors that influence families' shelter stays and subsequent exit locations. With a strong emphasis on informing both local and state policy, the study incorporated stakeholder feedback throughout the study as a way to keep abreast of policy changes and offer data in the most targeted manner possible. This paper will describe the changes in the policy system at the beginning of the study, the subsequent changes sparked by the economic downturn, and the strategies used by the study team to continue to provide information that is relevant and useful.

Session Title: Evaluating United States Foreign Assistance
Panel Session 390 to be held in Sebastian Section L1 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Cynthia Clapp-Wincek, Independent Consultant, ccwincek@aol.com
Discussant(s):
Gerald Britan, United States Agency for International Development, gbritan@usaid.gov
Abstract: Over the last decade much attention has been focused on changes to U.S. foreign assistance and, as of late, different recommendations for how it should be delivered under the Obama Administration. In some form, all proposals identified the need to be "smart" and have strengthened monitoring and evaluation functions. USAID commissioned MSI to study the intersection of the changes in US foreign aid and the trends in development evaluation theory and practice, and to recommend how evaluation of U.S. foreign assistance programs could be strengthened. Richard Blue, Cynthia Clapp-Wincek, and Holly Benner undertook an independent study capturing the views and experiences of external evaluators of U.S. foreign assistance efforts. The authors of these studies will discuss what policies, practices, and organizational structure would best assure learning and knowledge sharing to maximize the effectiveness and impact of US Government foreign assistance programs. What are the pros and cons of greater or lesser independence for USG foreign assistance evaluation offices? What are the best approaches to assuring quality, minimum standards, and sufficient rigor across the agencies, as well as broader learning relevant to program and policy decision making? The panel will begin with a presentation by MSI authors Keith Brown, Molly Hageboeck, and Jill Tirnauer discussing the recommendations from their two studies and the trends in US foreign assistance and development evaluation theory and practice that led them to those recommendations. Although the MSI studies were focused on recommendations for the USAID evaluation system, this panel is an opportunity to discuss how those trends might inform USG evaluation more broadly. Richard Blue, Cynthia Clapp-Wincek, and Holly Benner will discuss recommendations based on the external view of monitoring and evaluation of USG foreign assistance as seen by practitioners such as themselves. Gerald Britan, Acting Chief of USAID's central evaluation unit, will discuss the implications of these studies for evaluation at USAID and beyond.
Changes in US Government Foreign Assistance and Their Implications for Evaluation
Keith Brown, Management Systems International, kbrown@msi-inc.com
Jill Tirnauer, Management Systems International, jtirnauer@msi-inc.com
The U.S. Agency for International Development (USAID) was at the forefront of evaluation thought among the bilateral and international donors in the 1970s and 1980s. Although USAID missions worldwide undertook project-level evaluation, USAID's Center for Development Information and Evaluation was responsible for establishing evaluation policy, undertaking cross-sectoral evaluations and studies of special interest, and managing contracting vehicles. During the later years of the Bush Administration, USAID's evaluation system languished, and the central office was eventually dismantled in 2007. USAID has recently recreated an evaluation function, and this study was commissioned to inform the development of that function.
Trends in Development Evaluation Theory, Policy and Practices to Inform United States Government Foreign Assistance Evaluation
Molly Hageboeck, Management Systems International, mhageboeck@msi-inc.com
The theory and practice of development evaluation has evolved over the past several years, with a growing emphasis on impact evaluation, methodological rigor and experimentation, as well as the creation of several new international evaluation forums and institutions. How do these trends inform the reinvigoration of USAID's evaluation system and what might that mean for US Government foreign assistance more broadly?
The External View of United States Government Foreign Assistance Evaluation
Cynthia Clapp-Wincek, Independent Consultant, ccwincek@aol.com
Holly Benner, Independent Consultant, hwbenner@yahoo.com
Richard Blue, Cynthia Clapp-Wincek, and Holly Benner presented an independent, unfunded view of US Government monitoring and evaluation of foreign assistance in "Beyond Foreign Assistance: Monitoring and Evaluation for Foreign Assistance Results". Much of this study comprised the responses to a survey of independent US Government evaluators on a number of issues associated with the present status of monitoring and evaluation efforts as practiced in the principal U.S. agencies that provide foreign assistance: USAID, MCC, and the Department of State. The perspective presented is the view from a set of 'external evaluators', individuals who conduct evaluations of U.S. foreign assistance programs, either as part of non-governmental organizations (NGOs) or consulting firms, or as individual consultants.
Proposal for a Center for Monitoring and Evaluation of USG Foreign Assistance
Richard Blue, Independent Consultant, richardblue@earthlink.net
Richard Blue will present a proposal for an independent Center for Monitoring and Evaluation (CME) to provide leadership and comparative monitoring, evaluation, and reporting of the results of foreign assistance programs across the US Government. The center would strengthen monitoring and evaluation capacity across all foreign assistance agencies, supporting each agency's M&E functions with greater consistency in policies and standards.

Session Title: Contextual and Methodological Challenges and Opportunities for Evaluating Transitional Justice
Panel Session 391 to be held in Sebastian Section L2 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Colleen Duggan, International Development Research Center, cduggan@idrc.ca
Abstract: Over the past decade those working for international development and human rights have witnessed an increase in efforts to develop and operationalize mechanisms intended to nurture transitional justice: the field of policy and practice that seeks to move a society characterized by repressive rule, systematic armed violence and institutionalized human rights abuse towards one in which perpetrators are held accountable and the collective memory of historic events is harnessed in a "reconciliation" process whose intent is to decrease the chances of the recurrence of past atrocities. Unfortunately, the enthusiasm of the international community for designing, promoting and financing transitional justice mechanisms has not been matched by efforts to evaluate their impact and effects on the lives of people living in transitional societies. This panel will examine some of the challenges and opportunities for evaluating different types of transitional justice programming.
Evaluating Historic Memory and Racism in Guatemala
Colleen Duggan, International Development Research Center, cduggan@idrc.ca
The results of the 1999 Guatemalan Truth Commission report challenged Guatemala to deal with a racist past and re-educate for the future. Most Guatemalans recognize that constructing a society in which ethnic diversity is celebrated is a task for many generations to come. Between 2004 and 2006 a Guatemalan research institution implemented an ambitious nation-wide anti-racism campaign that included an interactive museum exposition. The International Development Research Centre supported the development of a monitoring and evaluation framework for the expo, using Outcome Mapping as the central method for assessing citizens' attitudes and behaviours around issues of historic memory and racism. A subsequent participatory summative evaluation of that effort has drawn out key lessons for program design and improvement; these are proving crucial for the re-launch of the expo this year. Drawing from the evaluation findings and discussions emerging from international development evaluation theory and practice, this paper will address some of the challenges and opportunities of evaluating the potential of museums as a tool for social transformation.
Transitional Justice and Evaluation Methodologies: Finding the Best Fit for the Context and Content
Cheyanne Scharbatke-Church, Tufts University, cheyanne.church@tufts.edu
The belief of the international community in transitional justice is starting to be eroded by questions about impact, both positive and negative. Comprehensive reviews articulate the lack of evidence behind claims of success; what is now required are practical efforts to identify the evaluation approaches best suited to filling this evidence gap. This paper uses the three branches of Alkin's evaluation theory tree - use, methods and valuing - to organize a review of evaluation approaches for their appropriateness to transitional justice. Approaches will be considered against two primary issues: context and content. The context in which transitional justice programming takes place is rife with complexity: political agendas, corruption, limited local capacity, cultural differences and north-south power dynamics, to name just a few. The approaches will also be considered from the point of view of the unique elements that arise from the content of transitional justice work itself.
Utopian Dreams or Practical Possibilities? The Challenges of Evaluating the Impact of Conflict-Orientated Museums
Brandon Hamber, University of Ulster, b.hamber@ulster.ac.uk
Ereshnee Naidu, University of New York, ereshn@yahoo.co.uk
The transitional justice field has seen increased recognition of the role of memorialisation in post-conflict peacebuilding. A variety of truth commissions have identified conflict-orientated memorial museums as vehicles for assisting the healing of survivors of conflict, rebuilding relationships, and rewriting national narratives. However, most accounts of the role of conflict-orientated memorial museums in dealing with the past are largely justificatory, if not utopian. Furthermore, truth commissions have provided little direction on exactly how such initiatives could work. Assessing the impact of conflict-orientated memorial museums is also challenging, as it generally requires resource-intensive longitudinal studies. Drawing on empirical evidence from the authors' work evaluating international conflict-orientated memorial museums, the paper will highlight the challenges of evaluating such work, propose possible indicators for impact assessment in lieu of longitudinal studies, and outline a framework for how evaluation of such mechanisms could be enhanced.
Towards a Framework for Monitoring and Evaluation of Transitional Justice Mechanisms
Kenneth Bush, St Paul's University, kbush@ustpaul.ca
The sheer number and variety of transitional justice (TJ) mechanisms are a tribute to the deep collective desire to address the legacies of the mass atrocities that characterize contemporary dirty wars, mass violence, and genocide. Yet the creativity in shaping different kinds of TJ mechanisms has not been complemented by the development of appropriate approaches and tools for monitoring and evaluating their impacts. The challenges to doing so are daunting, given the array of transitional justice mechanisms and the differences in the contexts within which they have been initiated. Building on previous work on Peace and Conflict Impact Assessment (PCIA), this paper will critically assess and sketch out the parameters of a framework for monitoring and evaluating transitional justice mechanisms. It will address questions such as: where to look for impacts; what types of impacts to look for; how to measure impacts; and the political, logistical and ethical considerations of evaluating TJ mechanisms.

Session Title: Evaluating Technical Assistance to Build Organizational Capacity: The Case of the Comprehensive Assistance Centers
Panel Session 392 to be held in Sebastian Section L3 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Sharon Horn, United States Department of Education, sharon.horn@ed.gov
Discussant(s):
Sharon Horn, United States Department of Education, sharon.horn@ed.gov
Patricia Bourexis, The Study Group Inc, studygroup@aol.com
Abstract: The United States Department of Education established Comprehensive Assistance Centers to provide technical assistance to "build the capacity of State Education Agencies to implement No Child Left Behind." Sixteen regional comprehensive centers and five content centers work together to accomplish that goal. Center evaluators are challenged by the twin tasks of measuring the effectiveness of the technical assistance and measuring increases in organizational capacity. WestEd is evaluating two regional centers and one content center and has developed an evaluation approach that addresses these challenges, although differently for the regional and content centers.
From Parts to Whole: Putting Humpty Dumpty Together
Naida Tushnet, WestEd, ntushne@wested.org
The evaluations of two comprehensive assistance centers and one content center have evolved. As the centers began work, they sought credibility as technical assistance providers. Consequently, the first year of the evaluation focused on the quality, relevance, and usefulness of products and services. Because such indicators did not fully capture center work, the evaluators and center staff developed logic models to clarify short-, intermediate-, and long-term outcomes for activities. As the projects moved into their final years, both the technical assisters and evaluators focused on the "footprints" that would be left after the projects ended. Because the footprints stemmed from the objectives, we also asked whether the objectives added up to increased organizational capacity. We moved from asking, "Were the centers doing the work right?" to "Were the centers doing the right work?" This approach evolved from breaking the work of the centers into pieces to putting the egg back together.
How Will We Know if Capacity was Built?
Marycruz Diaz, WestEd, mdiaz@wested.org
Isabel Montemayer, WestEd, imontem@wested.org
The comprehensive centers work within particular state contexts, and their technical assistance must be relevant to state needs. As a result, center work is organized around goals and objectives. The evaluation of a center that served a single state, California, therefore focused on the extent to which the objectives were achieved and used the footprints as indicators of increased state capacity to help districts and schools in specific ways. In addition, we aligned the footprints with the functions of a state education agency (Redding, S., & Walberg, H. J., Eds., Strengthening the Statewide System of Support, Center on Innovation and Improvement) to determine whether the centers increased organizational capacity to carry out those functions. This paper describes the process of generating the footprints and how we aligned them to state functions. It also describes how we measure both objective-specific capacity and organizational capacity.
Evaluating Capacity Building Across Multiple States
Juan Carlos Bojorquez, WestEd, jbojorq@wested.org
The Southwest Comprehensive Center (SWCC) serves five states, each of which operates in a different context. All the states in the southwest region must address the demands of No Child Left Behind during a time of economic instability. However, the specific challenges within each state differ. For example, some of the states have a long tradition of local control, while others are more centrally governed. The evaluation problem is how to sum up the contribution of the SWCC to the region as a whole, in addition to each state. The paper will discuss how mapping footprints and functions, while helping to address this problem, does not fully solve it. Consequently, the evaluation included a survey (originally developed by Redding and Walberg) that focused on the functions, asking high-level and front-line state education staff members to assess the capacity of their state in a retrospective pre- and post-test approach.
Capacity Isn't Built in a Day
Treseen McCormick, WestEd, tmccorm@wested.org
Sharon Herpin, WestEd, sherpin@wested.org
The US Department of Education designed the content centers to provide information and assistance to the regional comprehensive assistance centers (RCCs) in order to build their capacity to help states. Although some of the content centers engage only with RCCs, the Assessment and Accountability Content Center (AACC) collaborates with state education agencies as well as the 16 RCCs. In addition, the AACC has four strands of work: 1) special populations; 2) data use; 3) in-depth support to states; and 4) in-depth support to RCCs. For the evaluation, a key issue is capturing capacity-building data across multiple audiences and strands of work. The paper will discuss our approach to solving this problem.

Session Title: Strategies for Promoting Deliberation in Evaluation
Skill-Building Workshop 393 to be held in Sebastian Section L4 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Theories of Evaluation TIG
Presenter(s):
Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
Abstract: Evaluation should be transparent and reasoned, and deliberation is the context in which evidence is reasoned about and conclusions are drawn. Sometimes, within a given evaluation context, the evidence collected is relatively uncontested and the values of the stakeholders are highly coherent. Sometimes stakeholders morally disagree about these same things. In either case, stakeholders should deliberate with one another, seeking moral agreement when they can and maintaining mutual respect when they cannot. Deliberation is the means to encourage continuous discourse about fundamental values, and it supports the legitimacy of collective decisions. In a more abstract sense, deliberation forms attitudes and ways of being that support engagement, social trust and political efficacy, at both the individual and social levels. Strategies for encouraging, enabling, and participating in deliberation within evaluation contexts will be the focus of this skill-building workshop.

Session Title: When Community Passions and Personal Callings Meet Empiricism: Exploring the Interpersonal Side of Program Evaluation Policy Shifts
Demonstration Session 394 to be held in Suwannee 11 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Independent Consulting TIG
Presenter(s):
Michael Lyde, Lyde and Associates, mlyde@lyde-enterprises.com
Abstract: A community-based agency has a rich history of effecting positive change in the lives of its clients. One critical element missing from this history is a catalog of formal evaluation reports that provide a counterpoint to the many testimonials and other qualitative evidence of the agency's effectiveness. A new program evaluation team is contracted and takes numerous steps to remedy the agency's evaluation limitations, and they live happily ever after, right? Perhaps, but the journey to this outcome (i.e., relationship building, empowering agency staff, etc.) is the focus of this demonstration session. Inherent in any paradigm shift is a clash of philosophies and resistance to change. This demonstration will provide a forum for the presentation, exchange, and refinement of strategies that professional evaluators can utilize to overcome these challenges.

Session Title: Building and Evaluating a System Based Approach to Evaluation Capacity Building
Panel Session 395 to be held in Suwannee 12 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
William Trochim, Cornell University, wmt1@cornell.edu
Discussant(s):
William Trochim, Cornell University, wmt1@cornell.edu
Abstract: Systems approaches to evaluation capacity building are essential for developing effective evaluation systems. This session describes a multi-year NSF-supported project designed to develop a comprehensive approach to evaluation planning, implementation and utilization that is based on systems approaches and methods. We present the idea of a systems "evaluation partnership" (EP), the social and organizational network necessary to sustain such an effort, which emphasizes building consensus, using written agreements, and delineating appropriate roles and structures to support evaluation capacity building. At the heart of the EP are the systems evaluation "protocol," a specific, well-designed sequence of steps that any organization can follow to accomplish a high-quality evaluation, and the integrated "cyberinfrastructure" that provides a dynamic web-based system for accomplishing the work and encouraging networking. This session describes the EP, the approaches used to evaluate it, and the results to date, and sketches plans for future development.
Evaluation Partnerships
Monica Hargraves, Cornell University, mjh51@cornell.edu
Thomas Archibald, Cornell University, tga4@cornell.edu
The Evaluation Partnership model provides a framework within which evaluation facilitators collaborate with program staff to share their respective expertise. The results are high-quality evaluation plans that are well-adapted to the local organizational context and program-specific needs and characteristics. Communication and transparency are important in facilitating creative thinking about programs, development of new perspectives, and organizational learning. The most common obstacles to internal evaluation include lack of motivation, time, confidence, and expertise. The Evaluation Partnership approach is designed to mitigate these by allowing for specialization of roles: program staff draw on their program expertise, and evaluation facilitators provide evaluation expertise and tools. Organizational evaluation capacity grows as the work proceeds. An additional key contribution is to link the evaluation work to other organizational needs, which may include proposal development, strategic planning, and overall reporting mandates. This deliberate contextualization is important in making evaluation sustainable within organizations.
Evaluation Planning Using the Systems Evaluation Protocol
Jane Earl, Cornell University, jce6@cornell.edu
Thomas Archibald, Cornell University, tga4@cornell.edu
The Systems Evaluation Protocol (SEP) uses a systems perspective as a framework for developing evaluation capacity, enhancing evaluation quality and ultimately helping educators improve programs. Systems evaluation is an approach to conducting program evaluation that considers the complex factors inherent within a system, including integration across organizational levels and structures (nested systems) that are dynamic and involve multiple stakeholders (perspectives). Systems evaluation provides both a conceptual framework for thinking about evaluation systems and a set of specific methods and tools that enhance our ability to accomplish high-quality evaluation. The SEP divides evaluation into three phases: Planning, Implementation and Utilization. This presentation focuses on the Evaluation Planning phase and describes a series of steps that build high-quality, comprehensive evaluation plans. The individual steps are the essential elements; the order in which a team follows them is flexible. Examples of the process will be given.
The Cyber Infrastructure
Claire Hebbard, Cornell University, cer17@cornell.edu
Monica Hargraves, Cornell University, mjh51@cornell.edu
A distinct yet integrated aspect of this evaluation research project has been the development and testing of a cyberinfrastructure titled the "Netway". The Netway provides a dynamic web-based system for accomplishing the work of evaluation and encouraging networking. Specifically, the Netway has features that support logic model development, pathway model development, measure identification, and evaluation planning. Moreover, it incorporates dynamic search functions that allow users to immediately see outcomes in other programs that match outcomes they are identifying in their own programs, thereby facilitating mutual learning. User-directed search functions facilitate program- and evaluation-focused networking efforts that add to this environment of mutual learning and innovation. Evaluation measures can be entered into the Netway and linked to specific program outcomes, further enhancing the quality of evaluation planning and supporting evaluation research. This presentation will describe the Netway, how it is used by program staff and evaluators, and future Netway development plans.
Evaluation of Evaluation Capacity Building
Margaret Johnson, Cornell University, maj35@cornell.edu
Claire Hebbard, Cornell University, cer17@cornell.edu
This presentation will describe the emerging methodology for evaluating the Evaluation Partnership (EP), a multi-year systems-based approach to evaluation capacity building in organizations. The project is currently in its fourth year of development. Using self-report surveys to assess evaluation capacity, rubrics for the quality of participant work products such as logic models and evaluation plans, and data on the use of networking tools including the Netway cyberinfrastructure, the evaluation of the EP examines its impact on participants' understanding of their own programs, on their knowledge of basic evaluation concepts, on the quality of the logic models and evaluation plans they develop, and on their level of engagement in the evaluation network created by the Partnership. The design is quasi-experimental: matched pre-test and post-test with multiple measures and two treatment groups.

Session Title: Evaluating Educators: Quality Indicators and Methods
Multipaper Session 396 to be held in Suwannee 13 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Susan Connors,  University of Colorado Denver, susan.connors@ucdenver.edu
Evaluating a Dosage Model of Professional Development: Professional Development in Context
Presenter(s):
Allison Meisch, Westat, allisonmeisch@westat.com
Jennifer Hamilton, Westat, jenniferhamilton@westat.com
Matthew Carr, Westat, matthewcarr@westat.com
Cathy Lease, Westat, cathylease@westat.com
Nancy Thornton, Westat, nancythornton@westat.com
Abstract: Professional development (PD) is widely used for providing continuing training to teachers. However, when evaluating the effectiveness of PD in changing behaviors, dosage must be considered within the context of individual participants. This study examines a contextual dosage model of PD. Data from Striving Readers, a project designed to improve literacy outcomes for middle school students in Newark, New Jersey, will be used. Striving Readers teachers were offered PD aimed at improving the quality of literacy instruction and also completed surveys. The amount of training will be examined to describe any dosage effects while considering individual teacher characteristics that may also contribute to changes in behavior. Understanding the relationship between amount of PD and individual characteristics has broad implications for future evaluations of other PD training programs.
Evaluating the Preparation of a New Breed of Principals in Tennessee: Making the Best of Drawing Premature Conclusions
Presenter(s):
Jonathan Schmidt-Davis, Southern Regional Education Board, jon.schmidt-davis@sreb.org
Abstract: Strong instructional leadership is increasingly recognized as key to raising student achievement. In spite of the need for high-quality school leadership preparation, many traditional programs lack selectivity, rigor and authenticity. This paper reports on the methods and findings of a comprehensive qualitative evaluation of a leadership redesign effort in Tennessee intended to prepare instructional leaders capable of improving school performance. From 2006 to 2008, the Southern Regional Education Board supported the research-based redesign of instructional leadership preparation at two universities, producing 24 graduates in 2008. Simultaneously, Tennessee adopted a series of policy changes consistent with the redesign effort. The timeline in which program officers and policy-makers needed an evaluation of the program precluded a goals-focused evaluation in favor of an implementation-focused evaluation. Surveys, interviews and site visits were utilized to determine faithful implementation of the program's theory of change and to offer an assessment of whether the ultimate intended outcomes are likely to be achieved.
The Role of Evaluation in Program Sustainability
Presenter(s):
Claire Morgan, WestEd, cmorgan@wested.org
Abstract: Promising evaluation findings can be key to obtaining ongoing funding and support for education programs. But too often, sustainability efforts are not seriously undertaken until it may be too late to ensure program continuity. This paper session considers the potential for evaluation to play a more explicit and active role in program sustainability, both by supporting stakeholders in sustainability strategy development from the outset and by treating the various components of sustainability as evaluation outcomes to be measured over time and reported on regularly. This paper session draws upon the work of the Harvard Family Research Project and the experience of the presenter in evaluating alternative teacher licensure programs, and provides examples for evaluators of the kinds of questions that may be raised with program stakeholders and the kinds of data to be collected, as well as a description of the risks associated with undertaking this more involved and collaborative evaluation approach.

Session Title: Program Evaluation in Urban School Contexts: Five Cases
Multipaper Session 397 to be held in Suwannee 14 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Michelle Bakerson,  Indiana University South Bend, mmbakerson@yahoo.com
Balanced Evaluation in Urban Secondary Magnet Schools
Presenter(s):
Tom Watkins, Saint Paul Public Schools, tom.watkins@spps.org
Lesa Covington Clarkson, Saint Paul Public Schools, lesa.covingtonclarkson@spps.org
Sheila Arens, Mid-continent Research for Education and Learning, sarens@mcrel.org
Geoffrey Borman, University of Wisconsin, gborman@education.wisc.edu
Abstract: As urban schools face steeper increases in achievement targets and more drastic accountability consequences due to the federal No Child Left Behind law, program evaluators in education can help school staff identify and work toward reasonable yet challenging achievement expectations (Linn, 2005). In two Midwestern secondary schools, an evaluation of a federal Magnet grant that includes an experimental design and a quasi-experimental design has potential to provide a more accurate and useful achievement picture for stakeholders. Internal and external evaluators are collaborating to conduct these rigorous designs in a manner consistent enough to enable meaningful analysis, yet flexible enough to add value in the local context. This includes clarification of 'non-negotiables' while maintaining a commitment to meet data and assessment support requests from staff, customizing professional development, and enabling an 'open-ended' logic model. In one school, these measures may have provided some consistency and stability during the NCLB restructuring process.
Evaluating a Model Program Designed to Establish a College-Bound Culture Within the Context of a High-Need Urban District
Presenter(s):
Jacqueline Stillisano, Texas A&M University, jstillisano@tamu.edu
Hersh Waxman, Texas A&M University, hwaxman@tamu.edu
Judy Hostrup, Texas A&M University, jhostrup@usa.net
Brooke Kandel-Cisco, Texas A&M University, brookekandel@yahoo.com
Abstract: This study showcases the evaluation of the Gates GO Centers, a model program designed to assist students with college preparation activities and to create a college-going culture in eight high schools in a large urban district in Texas. A quasi-experimental, mixed-method design was developed that compares treatment high schools to comparison schools, matched by variables such as total enrollment, racial/ethnic distribution, percent economically disadvantaged, Texas Assessment of Knowledge and Skills (TAKS) scores, number of graduates, student-teacher ratio, student mobility rate, and percent of at-risk students. The evaluation team has developed and adapted multiple instruments--including surveys, interview protocols, and writing prompts--to collect data on key aspects of the evaluation. Preliminary findings based on school observations and interviews with Center Coordinators, school administrators, and teachers reveal a high degree of student use of the Centers and very positive attitudes regarding the importance of Gates GO Centers in encouraging college enrollment.
On the Road to 21st Century Schools: Challenges and Lessons Learned in the Evaluations of Two Title II, Part D, Enhancing Education Through Technology (EETT) Programs
Presenter(s):
Kathryn Pfeiffer, Research Works Inc, kpfeiffer@researchworks.org
Josh De La Rosa, Research Works Inc, jdelarosa@researchworks.org
Abstract: One of the latest trends in education is a push for the creation of '21st Century Schools' and the related emphasis on '21st Century Skills' that will presumably enhance student achievement and prepare students to enter and succeed in a competitive, global workforce. Many of the goals of these movements are reflected in some Title II, Part D, Enhancing Education Through Technology (EETT) programs. Drawing on experiences with the evaluations of two EETT programs operating in a large northeastern city, the authors discuss some of the challenges that such programs aimed at systemic school change pose and the implications of those challenges for the evaluation process. Additionally, the paper discusses methodological issues that evaluations of such programs face, including how to define and measure a '21st Century School' and '21st Century Skills,' as well as issues of program evaluability.
Challenges of Evaluating Student Achievement and Teacher Preparation in a Large and Changing School System
Presenter(s):
Edith Stevens, ICF Macro, edith.s.stevens@macrointernational.com
Ilana Horwitz, ICF Macro, ilana.m.horwitz@macrointernational.com
Abstract: Since 2007, Macro International has evaluated Math for America (MfA), an alternative certification program that recruits and trains secondary level mathematics teachers. As part of the evaluation, Macro conducted a quasi-experimental study to examine the comparative effectiveness of Math for America teachers and those with other forms of teacher preparation. Macro's analysis of data from over 6,500 students in New York City (NYC) indicated that students taught by MfA teachers had higher achievement on state assessments and more positive attitudes towards studying math than students taught by non-MfA teachers. In addition to the usual difficulties involved in conducting research in urban school settings, Macro was forced to cope with significant changes to the NYC Department of Education's testing program and school structure that took place as the study was underway. In this session, we will discuss how we adapted data collection efforts and study design to accommodate these contextual changes.

Session Title: Evaluating After-School Programs: Three Statewide Studies
Multipaper Session 398 to be held in Suwannee 15 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Dorinda Gallant,  The Ohio State University, gallant.32@osu.edu
Discussant(s):
Stacey Merola,  ICF International, smerola@icfi.com
A Status Report on Statewide Evaluations of the 21st Century Community Learning Centers Program
Presenter(s):
Huihua He, Washington State University, huihua_he@wsu.edu
Mike Trevisan, Washington State University, trevisan@wsu.edu
Abstract: Federal funding for the 21st Century Community Learning Centers program has grown and remained steady over the last few years. A key feature of the program is that states are expected to evaluate their programs. The purpose of this paper is to review statewide evaluation reports to assess the types of evaluation methodologies and frameworks used and to gauge the extent to which these strategies meet the information needs of policymakers. A search of all state education department websites found 11 statewide evaluation reports. Seven evaluations were descriptive in nature while four employed quasi-experimental designs. Surveys, interviews, document reviews and site visits were the most common data collection methods. Both formative and summative findings were reported. Although most evaluation designs were not well suited to support impact statements, reports from most states claimed positive impacts on student academic achievement and youth development. Recommendations for improving statewide evaluations with an eye toward defensible evidence are offered.
The Impact of Attendance in After School Programs on Achievement: Three Years of Findings From Washington State
Presenter(s):
Huihua He, Washington State University, huihua_he@wsu.edu
Michael Trevisan, Washington State University, trevisan@wsu.edu
Abstract: Evidence increasingly shows that participation in out-of-school time programs can have a positive effect on youth development. However, how much participation is necessary to improve outcomes remains unanswered. The purpose of this study is to examine the relationship between length of participation and achievement as measured by state test scores from 2005 to 2008. Approximately 13,000 attendees of 150 21st Century Community Learning Centers in Washington State participated in the study. Results indicate small but statistically significant positive effects of attendance intensity on both reading and mathematics achievement scores. One implication of these findings is that there may not be a single participation measure that works for all programs. It is recommended that work continue toward obtaining meaningful attendance data. This will likely include the use of different measures of attendance to help programs better understand the relationships between attendance patterns and participants' outcomes.
Evaluating the Supplemental Educational Services (SES) Program in Georgia: A Comprehensive Approach to Assessing Program Quality, Effectiveness and Impact
Presenter(s):
Scott Pollack, University of Georgia, scottp@uga.edu
Sheneka Williams, University of Georgia, smwill@uga.edu
Abstract: Supplemental Educational Services (SES) is part of the federal No Child Left Behind legislation that provides additional academic instruction outside regular school hours for eligible children in Title I schools that have failed to make adequate yearly progress for at least two consecutive years. States are required to evaluate SES provider organizations to determine if they are producing positive results and following program rules. Georgia is one of few states that have taken a comprehensive approach to evaluating SES. Georgia's evaluation analyzes student achievement data (both individually and aggregated by provider), monitors provider compliance with regulations, and assesses stakeholder perceptions of the process. While increased student achievement is the main goal of SES, the program requires cooperation among state and local education personnel, providers, parents and students. This presentation will describe the SES evaluation in Georgia and the decision-making process for retaining or removing organizations from the approved provider list.

Session Title: Contextual Influences on Evaluation Practice
Multipaper Session 399 to be held in Suwannee 16 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Tarek Azzam,  Claremont Graduate University, tarek.azzam@cgu.edu
Evaluation in California Foundations
Presenter(s):
Susana Bonis, Claremont Graduate University, susana.bonis@cgu.edu
Abstract: This study adds to the growing literature on evaluation in foundations. Representatives of twenty of the top foundations in California (in terms of giving) were interviewed using a semi-structured format to determine how program evaluation is practiced in the foundation, what the foundation expects from its grantees with regard to program evaluation, and what actions by the foundation's leader and board of directors influence program evaluation in the foundation. Recent trends in program evaluation in foundations, and their implications, will be discussed.
The Relationships Between Involvement and Use in the Context of Multi-Site Evaluation
Presenter(s):
Frances Lawrenz, University of Minnesota, lawrenz@umn.edu
Jean A King, University of Minnesota, kingx004@umn.edu
Stacie Toal, University of Minnesota, toal0002@umn.edu
Denise Roseland, University of Minnesota, rose0613@umn.edu
Gina Johnson, University of Minnesota, john3673@umn.edu
Kelli Johnson, University of Minnesota, johns706@umn.edu
Abstract: This research project examined involvement in, and use of, evaluation processes and outcomes in four multi-site National Science Foundation (NSF) programs. Although NSF was the primary intended user, this research looked at involvement and use by 'secondary users', i.e., the individual projects comprising the NSF program and people in the evaluation and science, technology, engineering, and mathematics (STEM) education fields. The research used cross case analysis to examine data from surveys of participating projects, interviews with project PIs and evaluators, citation analysis, a survey of the STEM education and evaluation fields, and discussions with evaluators of NSF programs. Several themes emerged that affect the relationship between involvement and use: evaluator credibility, interface with NSF, life cycles, project control, tensions, and community and networking. The interaction of these themes with the relationship between involvement and use is complex and necessitated examination of unintended users.
Understanding How Evaluators Deal With Multiple Stakeholders
Presenter(s):
Michelle Baron, The Evaluation Baron LLC, michelle@evaluationbaron.com
Abstract: This paper explains the implications of a qualitative study on the broader question of what it means for an evaluator to deal with conflicting values among stakeholders, describes what practicing evaluators do when faced with conflicting stakeholder values, examines how current evaluation approaches might clarify what is going on in evaluator practices, and begins working toward a descriptive approach to evaluator-stakeholder interaction. While there is a plethora of literature that links theory and practice (Christie, 2003; Fitzpatrick, 2004; Schwandt, 2005; Shaw & Faulkner, 2006), few advocate for descriptive approaches (Alkin, 1991; Alkin, 2003; Alkin & Ellett, 1985) in terms of documenting how evaluators actually practice evaluation, and thus far prescriptive approaches continue to dominate the evaluation spotlight. This paper provides a foundation for descriptive theory development and expanded evaluator training to provide evaluators at all levels and disciplines timely, accurate, and concrete examples of evaluator roles and decision-making processes.

Session Title: Assessment in Higher Education TIG Business Meeting and Presentations: National and International Contexts for Evaluative Practice in Higher Education
Business Meeting and Multipaper Session 400 to be held in Suwannee 17 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
TIG Leader(s):
William Rickards, Alverno College, william.rickards@alverno.edu
Development of an Evaluation System of Research Performance by Applying the Outcome Mapping Approach: A Case Study of the Faculty of Liberal Arts and Science at Nakhon Phanom University, Thailand
Presenter(s):
Sun Thongyot, Chulalongkorn University, s_thongyot@yahoo.com
Nuttaporn Lawthong, Chulalongkorn University, lawthong_n@hotmail.com
Sirichai Kanjanawasee, Chulalongkorn University, skanjanawasee@hotmail.com
Abstract: This study aimed to develop an evaluation system of research performance by applying the outcome mapping approach. The research consisted of three steps: 1) developing an evaluation system of research performance based on the outcome mapping approach, 2) evaluating that evaluation system, and 3) studying expert opinion on the evaluation system. A case study and a research and development approach were conducted at the Faculty of Liberal Arts and Science, Nakhon Phanom University, Thailand. The results were expected to help the faculty plan research management effectively, build evaluation capacity, and change research behavior, ultimately increasing the number and quality of its research.
Development of a Model of Organizational Effectiveness Evaluation for Faculties of Education: An Application of Multilevel Causal Analysis
Presenter(s):
Pattrawadee Makmee, Chulalongkorn University, pattrawadee@gmail.com
Siridej Sujiva, Chulalongkorn University, ssiridej@chula.ac.th
Sirichai Kanjanawasee, Chulalongkorn University, skanjanawasee@hotmail.com
Abstract: The purposes of this research are 1) to develop an organizational effectiveness evaluation model for faculties of education in Thai higher education, 2) to study causal factors and correlations at the department and faculty levels in the effectiveness of faculties of education in Thailand, and 3) to test the invariance of a multilevel causal analysis model of faculty of education effectiveness between public universities and public autonomous universities. The model consists of 4 latent variables and 14 observed variables. The sample consists of 10 public and public autonomous universities selected using a multistage random sampling technique. Data from 2,410 subjects in 3 groups were collected using 3 questionnaires. The Mplus program is used for quantitative data analysis, and content analysis is used for qualitative data analysis. The results are expected to provide a model of faculty of education effectiveness that displays multilevel causal correlations and gives valid results.

Session Title: Evaluation of a Collaborative to Foster Research Translation Between Campuses and Communities: The Atlanta Clinical and Translational Science Institute's Community Engagement and Research Program
Panel Session 401 to be held in Suwannee 18 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Iris Smith, Emory University, ismith@sph.emory.edu
Abstract: The Atlanta Clinical and Translational Science Institute (ACTSI) is a federally funded collaboration among Emory University, Morehouse School of Medicine, Georgia Institute of Technology and community organizations to enhance research productivity and speed the transfer of clinical innovations to community practice. The ultimate goals are improved public health and a reduction in health disparities. Challenges to collaboration include real and perceived differentials in prestige, influence and resources among the institutions, organizational complexity, and limited resources. The strengths of the partnership include the diversity and complementarity of partner expertise. The ACTSI evaluation function is highly participatory and represents cross-institutional collaboration. Challenges to the evaluation include operationalizing and measuring multi-level, multi-faceted collaborative activities while creating a utility-focused evaluation that reflects the information needs of a diverse group of academic and community partners. This multi-paper panel will discuss the socio-political/historical context of the ACTSI, the evaluation design and preliminary evaluation findings.
Atlanta Clinical and Translational Science Institute (ACTSI) Evaluation Framework and Organization: Benefits of Organizational Placement and Collaborative Evaluation Planning
Iris Smith, Emory University, ismith@sph.emory.edu
Andrew West, Emory University, awest2@emory.edu
Leo Andres, Emory University, landres@sph.emory.edu
The Atlanta Clinical & Translational Science Institute Tracking and Evaluation Program is one of 11 key functions that comprise the Institute. Organizationally, it is equivalent to the other key functions, which include: Clinical Interaction Network; Research, Education, Training and Career Development; Ethics, Regulatory Knowledge and Support; Community Engagement and Research; Biostatistics, Epidemiology and Research Design; Translational Technology and Resources; Pilot and Collaborative Translational and Clinical Studies; Biomedical Informatics; Clinical Translational Research Program for Pediatrics; and Governance. The evaluation framework is participatory and utilization-focused, with an emphasis on stakeholder involvement. Evaluation activities are coordinated by an "evaluation workgroup" that includes evaluators from each of the academic institutions, ACTSI Governance and Biomedical Informatics. The organizational placement and collaborative process of building the evaluation function has facilitated evaluation "buy-in" and rapid communication of evaluation activities and findings.
Measuring Partnership Functioning Within the Atlanta Clinical and Translational Science Institute (ACTSI)
Cam Escoffery, Emory University, cescoff@sph.emory.edu
Brenda Hayes, Morehouse School of Medicine, bhayes@msm.edu
A Leadership Council composed of the program directors oversees the administration and operations of the ACTSI. The purpose of this presentation is to describe the development and implementation of a partnership evaluation of the ACTSI. The evaluation will help assess the engagement and satisfaction of the members of the Leadership Council and the key partners along the following dimensions: member characteristics and perceptions, planning and implementation, leadership, partner involvement in the collaboration, communication, and progress and outcomes. The utilization-focused data will provide information on opportunities, challenges and barriers and suggest strategies to strengthen the collaborative relationships.
Animated Social Network Analysis of Patterns of Collaboration in the Atlanta Clinical and Translational Science Institute (ACTSI)
Iris Smith, Emory University, ismith@sph.emory.edu
Circe Tsui, Emory University, ctsui2@emory.edu
Eva K Lee, Georgia Institute of Technology, evakylee@isye.gatech.edu
Cam Escoffery, Emory University, cescoff@sph.emory.edu
Tabia Henry Akintobi, Morehouse School of Medicine, takintobi@msm.edu
A baseline social network analysis was conducted to identify trends in research collaboration from March 2007 (prior to the ACTSI grant award) through October 2008 (13 months post award) using the Social Network Image Animator (SoNIA), which produces an animated sociogram allowing for visualization of changes in collaborative patterns over four time points. The results of the analysis showed an increase both in the number of multi-institutional grants developed and in their proportion relative to the total number of grants generated by the partnering institutions. In addition, the animated sociograms positioned Emory University School of Medicine research teams at the center of the diagrams, with the highest concentration of grants. Analysis of the sociograms suggested that over the 18-month period, there was an increase in the number of Morehouse School of Medicine and Georgia Institute of Technology researchers being drawn into the Emory research clusters.
Evaluation of a Collaborative to Foster Research Translation Between Campuses and Communities: The Atlanta Clinical and Translational Science Institute's Community Engagement and Research Program
Tabia Henry Akintobi, Morehouse School of Medicine, takintobi@msm.edu
Brenda Hayes, Morehouse School of Medicine, bhayes@msm.edu
This presentation will detail the evaluation of the institutional partnership, processes, and outcomes of The Atlanta Clinical and Translational Science Institute (ACTSI) Community Engagement and Research Program (CERP). The CERP partners Morehouse School of Medicine and Emory University in strategies to 1) sustain the generation of research that is effectively translated from the laboratory bench to communities, 2) partner communities and academicians in all aspects of research, and 3) train investigators in community-based participatory research principles and practice. Activities central to these aims include the CERP Community Mini-grant Program, research-community workshops, and the Community Engagement in Health Disparities in Clinical and Translational Research Course. All strategies and activities are advised and reviewed by a steering board composed of a community stakeholder majority. Evaluation of the CERP is central to the ACTSI's goals of fostering ethical community engagement and translating science into outcomes for Atlanta, outcomes central to achieving the aims of the CERP.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluating Systems Change in Medical Education
Roundtable Presentation 402 to be held in Suwannee 19 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG and the Systems in Evaluation TIG
Presenter(s):
Jennifer Terpstra, University of British Columbia, jlterp@interchange.ubc.ca
Chris Lovato, University of British Columbia, chris.lovato@ubc.ca
Treena Chomik, Chomik Consulting and Research Ltd, treena@chomikconsulting.com
Abstract: The purpose of this roundtable is to discuss methods for evaluating systems change. The discussion will be based on a presentation describing a national medical education initiative, "The Future of Medical Education in Canada," sponsored by the Association of Faculties of Medicine of Canada (AFMC). The first phase of the initiative has involved formative research to identify system-wide strategies for creating transformative change in medical education that addresses the future healthcare needs of Canadians. This presentation will focus on results of a systems evaluation literature review and proposed next steps to evaluate systems change. The case example and literature review results will provide the basis for a theoretically grounded and practical discussion using key systems concepts. Participants will discuss evaluation methods for the initiative. Results of this roundtable will be provided to AFMC leadership overseeing the initiative and planning for the evaluation.
Roundtable Rotation II: Evaluating Faculty Performance in Teacher Preparation Institutions in the Southeast of Mexico
Roundtable Presentation 402 to be held in Suwannee 19 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG and the Systems in Evaluation TIG
Presenter(s):
Edith Cisneros-Cohernour, Universidad Autonoma de Yucatan, cchacon@uady.mx
Ariana Leo Ramirez, Universidad Autonoma de Yucatan, chinari_17@hotmail.com
Abstract: The purpose of this study was to examine the current state of faculty evaluation processes in the colleges devoted to preparing future educators in three states in the Southeast of Mexico. The study centers on who conducts the evaluation, what kinds of procedures are used to assess faculty performance, how results are used for decision making, and the expected and unexpected consequences of current evaluation policies on the quality of faculty work and performance. Moreover, the study focuses on the meaning of good teaching and how well faculty assessment procedures take into consideration the context of teaching and learning and the cultural characteristics of students, particularly those of Mayan ancestry. Data collection included a survey, focus group interviews and document review at each of the Normal Schools in Southern Mexico. In addition, the researchers conducted observations of teacher work during site visits to each of the Teacher Preparation Schools of the states of Yucatan, Campeche and Quintana Roo, Mexico.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Multicultural Program Evaluation: Understanding the Dimensions of Theory and Practice
Roundtable Presentation 403 to be held in Suwannee 20 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Jill Anne Chouinard, University of Ottawa, jchou042@uottawa.ca
Abstract: Evaluations that are responsive to contextual and cultural specificity are increasing, as growing disparities and increasingly multi-ethnic contexts globally are creating a heightened awareness and need for this type of evaluation. This presentation is part of an emergent and inter-connected three-part study exploring how relationships among evaluators and community stakeholders in multi-cultural settings shape evaluation processes and consequences. The specific focus of this presentation is on the second part of this larger study, a thematic analysis of telephone interviews conducted with evaluation scholars and practitioners who have made substantial written contributions in the field of multi-cultural evaluation. These interviews subsequently helped shape the development of a conceptual framework for thinking about and guiding research on multi-cultural approaches to evaluation. The conceptual framework will also be presented for discussion.
Roundtable Rotation II: Tools for Evaluating Culturally Competent Practices in Youth Serving Contexts
Roundtable Presentation 403 to be held in Suwannee 20 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Leslie Grier, California State University Fullerton, lgrier@fullerton.edu
Abstract: The purpose of this roundtable is to examine methodologies for facilitating culturally sensitive and inclusive practices in youth serving contexts. To this end, tools for assessing the extent to which culturally sensitive and inclusive practices are implemented in youth serving contexts will be shared. The tools consist of a combination of paper-and-pencil assessments and interactive exercises, and they incorporate research findings on various practices (e.g., questioning formats, use of various goal orientations, and qualities of youth-staff interactions that reflect diverse expectations). The tools operationalize the extent to which diverse staff practices and occurrences are utilized and vary in use across diverse groups of children and youth. They will be presented along with reflections regarding their efficacy in promoting culturally sensitive and inclusive practices.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: The Blurry Line Between Internal Evaluation and Compliance: Why Context Matters
Roundtable Presentation 404 to be held in Suwannee 21 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG and the AEA Conference Committee
Presenter(s):
Chatrian Kanger, Louisiana Public Health Institute, ckanger@lphi.org
Abstract: In today's shrinking economy, it is not uncommon for organizations to have smaller staffs and/or smaller budgets, particularly for conducting program evaluations. As a result, many individuals or organizations may find themselves playing dual roles as 'Administrators' and 'Evaluators'. The context for all interactions between a Program Administrator who also acts as the 'evaluator' and a client/grantee organization must therefore be considered whenever data are exchanged, both to ensure the accuracy of the data collected and to avoid damaging the collaboration. This roundtable session will explore the following questions: What structures can be put into place within an organization playing dual roles to mitigate 'trust' issues, so that data collected for evaluation purposes are not used against a client/grantee? Is there ever a true separation between internal evaluation and compliance? What methods can be used to distinguish between evaluation for quality improvement and evaluation for compliance?
Roundtable Rotation II: An Ethical Fine Line for Internal Evaluators?
Roundtable Presentation 404 to be held in Suwannee 21 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG and the AEA Conference Committee
Presenter(s):
Stacey Farber, Cincinnati Children's Hospital Medical Center, slfarber@fuse.net
Wendy DuBow, University of Colorado at Boulder, wendy.dubow@colorado.edu
Kathleen Tinworth, Denver Museum of Nature and Science, kathleen.tinworth@dmns.org
Abstract: Evaluators who analyze programs or organizations from within face unique ethical issues specific to their particular context. No matter how principled and disciplined an evaluator may be, the role of being both insider and evaluator fuels ethical complexities. Pressure from co-workers whose programs you are evaluating, demands from a boss or department to ensure positive findings for the sake of continued funding, and alliances formed through tenure and longevity within an organization are just a few examples of what can occur. Join a supportive network of fellow internal evaluators as you listen to their experiences and share your own in a forum built to constructively tackle these issues together.

Session Title: The Evaluation of Distributed Learning and Computer-Enabled Environments to Support Instruction
Multipaper Session 405 to be held in Wekiwa 3 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Karen Larwin,  University of Akron, drklarwin@yahoo.com
40 Years of Research: A Meta-analysis Examining the Effectiveness of Computer Assisted Instruction in Post-Secondary Statistics Education
Presenter(s):
Karen Larwin, University of Akron, drklarwin@yahoo.com
David Larwin, Kent State University Salem, dlarwin@kent.edu
Abstract: This meta-analysis examines how effective computer-assisted instruction (CAI) is in improving student achievement in post-secondary statistics classes. The study incorporates all available research from 1969 through 2009. An overall effect size of d = .556 was calculated from 75 studies yielding 215 different effect size measures and a combined sample size of n = 40,125. These results suggest that a typical student moved from the 50th percentile to the 73rd percentile in statistics classes when CAI was used. The results of this study also identify a number of course, technology use, and student characteristics that were significantly related to the effectiveness of CAI.
Exploring Team Collaboration in Same-Time Web-conferencing Problem Solving Within a Distributed Engineering Educational Environment
Presenter(s):
YiYan Wu, Syracuse University, ywu02@syr.edu
Tiffany A Koszalka, Syracuse University, takoszal@syr.edu
Abstract: A distributed collaborative engineering design (CED) course was designed to engage engineering students in learning about and solving engineering design problems. The CED incorporated an Advanced Interactive Discovery Environment (AIDE) that provided students with different tools to support collaborative engineering design tasks. Prior course evaluation reports highlighted certain instructional design issues in 1) effective use of technology resources and 2) effectiveness of team collaboration. To understand the causes of these issues, this qualitative study examines and describes in detail students' problem-based collaborative learning and how their collaboration with peers is influenced by their use of tools during team web-conferencing meetings. Thirty-two recorded team web-conferencing videos are analyzed as the major evaluation method.
Evaluation of Asynchronous Discussion Boards in Online Courses: Can Discussion Boards Support Learning and Improve Online Instruction?
Presenter(s):
Tania Jarosewich, Censeo Group, tania@censeogroup.com
Lori Vargo, University of Akron, lvargo@uakron.edu
LeAnn Krosnick, University of Akron, leann1@uakron.edu
Kristen Vance, Cleveland State University, ksuzzanne@yahoo.com
James Salzman, Ohio University, salzman@ohio.edu
Lisa Lenhart, University of Akron, lenhar1@uakron.edu
Kathleen Roskos, John Carroll University, pdroskos@suite224.net
Abstract: This paper presents the results of an evaluation of online teacher professional development courses. The courses include online content, three face-to-face sessions, and an asynchronous online discussion board. Participants in the courses are required to engage in the online discussion by responding to instructor questions, responding to other participants' comments, and posting their own questions. The evaluation collected information to understand the extent to which the online discussion boards supported and extended participant learning and allowed the participants to engage more deeply with the course materials. The presentation will summarize previous research on online discussion boards, briefly describe the content and delivery system of the online courses, present research findings, and discuss the implications of analyzing online discussions in a formative way to help support students during an online learning experience and in a summative way to help improve systems of online instruction.

Session Title: Evaluations in International Organizations: International Labor Organization (ILO), International Union for Conservation of Nature (IUCN), the Centers for Disease Control (CDC)
Multipaper Session 406 to be held in Wekiwa 4 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Denis Jobin,  National Crime Prevention Centre, denis_jobin@yahoo.ca
Discussant(s):
Denis Jobin,  National Crime Prevention Centre, denis_jobin@yahoo.ca
Evaluating the International Labour Organization's (ILO's) Five-Year Country Programme in Indonesia
Presenter(s):
Michael Hendricks, Independent Consultant, mikehendri@aol.com
Abstract: The International Labour Organization (ILO), an agency of the United Nations, strives to advance opportunities for women and men to obtain decent and productive work in conditions of freedom, equity, security, and human dignity. The ILO's objectives and programming for a given country are encapsulated in a five-year Decent Work Country Programme (DWCP). This presentation will describe a recent evaluation of the DWCP's relevance, partnerships, strategies, implementation, results, and monitoring and evaluation systems in Indonesia. In addition to reporting the basic findings of the evaluation, the presentation will discuss the planning, staffing, logistics, and operation of conducting such an evaluation. We will also discuss the challenges inherent in evaluating a large portfolio in a complex country, especially given that the ILO's mandate requires the delicate political feat of collaborating equally with governments, employers' organizations, and trade unions.
Lessons From Developing a Participatory Monitoring and Evaluation (M&E) System Based on M&E Questions and the Theory of Change: Experience of the International Union for Conservation of Nature (IUCN)/DGIS Livelihoods and Landscapes Strategy
Presenter(s):
Ricardo Furman Wolf, International Union for Conservation of Nature, ricardo.furman@iucn.org
Abstract: The Livelihoods and Landscapes Strategy (LLS) is a 23-country International Union for Conservation of Nature (IUCN) initiative funded by the Netherlands government. It is oriented toward generating lessons from local initiatives to influence national and local policies, with the aims of achieving real and meaningful change in the lives of the rural poor, enhancing long-term and equitable conservation of biodiversity, and ensuring the sustainable supply of forest-related goods and services. Its approach is to build partnerships with local stakeholders and communities to add value to ongoing activities, and it is outcome-oriented. To answer these M&E challenges, a locally based system has recently been developed that combines a theory-of-change approach, the use of M&E questions rather than indicators, and a learning/action-research approach. We will present lessons from various countries in Africa, Asia, and Latin America.
The Centers for Disease Control and Prevention (CDC) and Global Public Health Capacity Development: Outcomes of a Planning Process to Maximize Programmatic Reach and Impact
Presenter(s):
Karen Kun, Centers for Disease Control and Prevention, icn3@cdc.gov
Denise Traicoff, Centers for Disease Control and Prevention, dnt1@cdc.gov
Anisa Kassim, Centers for Disease Control and Prevention, aqk4@cdc.gov
Sara Clements, Centers for Disease Control and Prevention, grl7@cdc.gov
Emily McCormick, Centers for Disease Control and Prevention, emccormick@cdc.gov
Elizabeth Howze, Centers for Disease Control and Prevention, ehhowze@cdc.gov
Abstract: This paper presentation will describe a planning and evaluation process utilized at the US Department of Health and Human Services, CDC. The process was designed to assist in the reorganization of the Sustainable Management Development Program for greater effectiveness. The Sustainable Management Development Program is a CDC program devoted to promoting organizational excellence in global public health by strengthening leadership and management capacity. A case study in planning and evaluation will be presented that details the program's efforts to maximize its future programmatic reach and impact in global health capacity development. Data collection processes will be discussed, including an electronic survey of past participants, key informant interviews, a public health management competency project, and recommendations from a panel of CDC global health experts about the role of management in program effectiveness and sustainability.

Session Title: Youth Participatory Evaluation: Moving From Positivism to Positive Youth Development
Panel Session 407 to be held in Wekiwa 5 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Kim Sabo Flores, Kim Sabo Consulting, kimsabo@aol.com
Abstract: Can Youth Participatory Evaluation move the dialogue in evaluation from positivism to positive youth development? This panel will feature presenters who have been actively strengthening the field of youth participatory evaluation over the last decade. The presenting panelists will draw on their research and their practices to discuss how they have integrated both human development and positive youth development theories into their work, and in doing so changed the nature of traditionally positivistic evaluation. In particular, the panel will discuss how the work of Lev Vygotsky and other constructivists has allowed them to step outside the dualistic paradigm of transformative versus positivistic evaluation.
Youth Participation: A Positive Move
Robert Shumer, University of Minnesota, drrdsminn@msn.com
Studies of youth programs have for too long excluded young people from the evaluation process; it seems that evaluators are not interested in youth development as one of the outcomes of youth studies. In this presentation we discuss the role of youth in the evaluation process, one that requires their becoming active, critical citizens. From Piaget to Bruner and many others, we know that youth grow through an interactive process with their environment. Growth and development are a function of how adults engage them in the developmental process. While some choose to see youth as empty vessels to be filled by the wisdom and knowledge of adults, others view them as growing human beings whose development is predicated on their ability to interact as capable individuals who have something to contribute to their own development. Participation in studies of their world is not an option; it is positively required.
Moving From Positivistic to Positive Youth Development Through Participatory Evaluation
Michael A Harnar, Claremont Graduate University, michael.harnar@gmail.com
To draw a line from positivistic evaluation to positive youth development in evaluation, one would necessarily pass through the constructivist paradigm of Participatory Evaluation (PE). While good evaluation in youth-serving programs should help programs engage in more fruitful positive youth development no matter the evaluation perspective, using the process of evaluation to affect positive youth development is unique to PE's intent. Further, Transformative Participatory Evaluation specifically intends to transform its participants. Individual transformation is also a fundamental construct of Vygotsky's social construction of the mind where social interaction is a necessary catalyst for individual development. Youth Participatory Evaluation intentionally engages youth in the evaluation so that learning and development are enhanced by their interactions. This moves evaluation away from a positivistic perspective where evaluators serve as aloof, separate researchers judging a program's merit and on to more fertile ground where the process itself enhances clients' lives in positive ways.
Youth Participatory Evaluation: Tool and Result
Kim Sabo Flores, Kim Sabo Consulting, kimsabo@aol.com
This presentation will explore the potential of youth participatory evaluation as a positive youth development practice that simultaneously transforms youth, adults, organizations, and evaluation. The presenter will discuss her research and practice, rooting her analysis of YPE in the work of Lev Vygotsky, specifically examining how he and others have utilized the notion of Zones of Proximal Development as a key method and practice that stimulates the development of both individuals and totalities. The goal of the presentation is to ignite a discussion about the relationship between YPE and positive human and youth development.

Session Title: Moving Toward Quantitative Evidence-based Science Policy: Science of Science Policy Developmental Efforts In Theory, Evaluation Methods, and Data Infrastructure
Panel Session 408 to be held in Wekiwa 6 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Kei Koizumi, United States Office of Science & Technology Policy, kkoizumi@ostp.eop.gov
Discussant(s):
Kei Koizumi, United States Office of Science & Technology Policy, kkoizumi@ostp.eop.gov
Abstract: The US Office of Science and Technology Policy (OSTP) released a report in August 2008, prepared by an inter-agency task group (ITG), presenting a roadmap for science policy making. The ITG concluded that while expert judgment remains the predominant decision support tool for policy, there is a compelling and immediate need for rigorous data integration and quantitative decision support. This has never been more important than in the current context of the administration's agenda for economic recovery, and it is important to build a robust evaluation framework for the emerging Science of Science Policy (SoSP). The panel will offer briefings on key elements of the SoSP roadmap implementation, including an overview of recent activities, the National Science Foundation (NSF) program in Science of Science and Innovation, and a proposed federal data infrastructure. Thirty minutes of open discussion will allow the audience to consider further development of methods and theory.
Science of Science Policy: Overview and Strategic Directions
Bill Valdez, United States Department of Energy, bill.valdez@science.doe.gov
Bill Valdez has been the co-chair of the Interagency Task Group that produced the OSTP roadmap. He is an acknowledged leader in the science policy community.
Stimulating Research on Science of Science and Innovation
Julia Lane, National Science Foundation, jlane@nsf.gov
Julia Lane is an economist who co-chairs the Interagency Task Group. She directs the only federal program in science of science.
A Data Infrastructure to Enable Research about Science
Israel Lederhendler, National Institutes of Health, lederhei@od.nih.gov
Izja Lederhendler directs the Division of Information Services at the National Institutes of Health and is currently on detail to the NIH Office of Science Policy Analysis and to the Division of Program Coordination and Strategic Initiatives. He serves on the OSTP Interagency Task Group.
Science of Science Policy: Methodological Development, Logic Model, and Need for Involvement of American Evaluation Association Community
Cheryl Oros, Independent Consultant, cheryl.oros@comcast.net
Cheryl Oros recently retired from federal service, where she worked as a planning and evaluation specialist in a number of agencies, including USDA, NIH, and VA. She was a central participant in the OSTP Interagency Task Group and currently advises the group through the Science & Technology Policy Institute. She has been actively involved in AEA.

Session Title: Evaluation Planning and Implementation in Dynamic Systems
Multipaper Session 409 to be held in Wekiwa 7 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Social Work TIG
Chair(s):
Donna Parrish,  Clark Atlanta University, sistachristian_p11824@yahoo.com
A Developmental Evaluation of Agency and Parent Perspectives on Parent Involvement, Understanding and Utilizing the Context
Presenter(s):
Jacqueline Counts, University of Kansas, jcounts@ku.edu
Rebecca Gillam, University of Kansas, rgillam@ku.edu
Karin Chang-Rios, University of Kansas, kcr@ku.edu
Abstract: Parent involvement is a critical component of early childhood programs, yet many agencies struggle to authentically involve parents; therefore, Kansas early childhood stakeholders undertook an initiative to better understand parent involvement. Evaluators used a developmental evaluation approach to understand how parents define involvement and access supports, and how agencies utilize parent input to make programmatic changes. Developmental evaluation was chosen over traditional methods to capture system dynamics and complex relationships and to generate context-specific strategies. Results will be presented from a survey of over 90 agencies and from focus groups with over 100 parents on understanding of, barriers to, and opportunities for involvement. Our plan to utilize the results with parents and agencies to develop a statewide initiative will be presented. Finally, the advantages of using a developmental evaluation to understand the context and identify patterns that support or impede the parent initiative will be discussed.
Looking at the Big Picture While Reading the Fine Print: Transforming Evaluation Data Into Agency and Larger System Improvement
Presenter(s):
Brian Pagkos, Community Connections of New York, bpagkos@comconnectionsny.org
Heidi Milch, Community Connections of New York, hmilch@comconnectionsny.org
Christa Foschio-Bebak, Community Connections of New York, cfoschio-bebak@comconnectionsny.org
Abstract: It is not enough to have evaluation results and data readily available; the use of those data is what transforms practice and, ultimately, the outcomes for those served. Every quarter, Community Connections of New York (CCNY) evaluates the effectiveness of all six care coordination agencies providing wraparound for Erie County, NY. Evaluation data, stakeholder input, and current literature inform the development of program QI plans. When completed, these plans are reviewed for unique (agency-specific) and mutual goals across multiple agencies. CCNY responds with a two-pronged approach: continuing to provide ongoing support for each agency while advocating for system development that affects all care coordination agencies. The presentation will discuss this process in three care coordination agencies, describing how evaluation and improvement efforts can be used to value individual agency needs as well as address gaps within the overarching system of wraparound in Erie County.
Utilizing a Randomized Control Trial Study in Child Welfare: The Jeffco Community Connection Project
Presenter(s):
Julie Morales, University of Denver, julie.morales@du.edu
Robin Leake, University of Denver, robin.leake@du.edu
Sheridan Green, University of Denver, sheridan.green@du.edu
Cathryn Potter, University of Denver, cathryn.potter@du.edu
Natalie Williams, Jefferson County Department of Human Services, nwilliams@jeffco.us
Abstract: Working in partnership with Jefferson County Department of Human Services, the Butler Institute for Families planned and implemented a randomized control trial study of the effectiveness of a collaborative child welfare and public welfare services intervention: the Jeffco Community Connection (JCC) Project. The current paper describes the year-long planning phase and the initial implementation phase of the JCC project. The paper addresses three major components of the project that are relevant to the field of evaluation research in child welfare. The authors discuss: 1) utilizing a participatory approach to project planning and development; 2) establishing a strong collaborative relationship among the evaluator, the project director, and staff as essential to an effective intervention design; and 3) the importance of aligning the dual priorities of ensuring the functionality of the project plan for human services line staff and supervisors and implementing the intervention with a high degree of integrity and fidelity.

Session Title: Improving Training in Business and Industry Through Evaluation
Multipaper Session 410 to be held in Wekiwa 8 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Business and Industry TIG
Chair(s):
Ray Haynes,  Indiana University, rkhaynes@indiana.edu
Extracting Value From Post-course Evaluations Using Advanced Statistical Techniques
Presenter(s):
Michele Graham, KPMG LLP, magraham@kpmg.com
John Mattox, KPMG LLP, jmattox@kpmg.com
Heather Maitre, KPMG LLP, hmaitre@kpmg.com
Peter Sanacore, KPMG LLP, psanacore@kpmg.com
Abstract: Is your learning organization swimming in a sea of unused training evaluation data? In his book 'Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart,' Ayres (2007) describes how massive data sets, powerful analytics tools, and a keen statistical mind can turn data into action-oriented information for businesses. This thinking applies to training evaluation data. For instance, many organizations collect large amounts of Level 1 (Kirkpatrick, 1998) data but do not take full advantage of the benefits these results can yield. By using advanced statistical techniques to analyze this data store, you can improve training, enhance job performance, and gain competitive advantage. This paper presents the four key steps a learning organization should take when gathering and analyzing data to bring value to the business.
It's a Beloved Part of Our Culture, But We Aren't Sure It Works: An Evaluation of a New Employee Orientation Program
Presenter(s):
Meghan Lowery, Southern Illinois University at Carbondale, mrlowery@siu.edu
Joel Nadler, Southern Illinois University at Carbondale, jnadler@siu.edu
Abstract: A mid-sized national organization with a new employee orientation training program sought a combined needs assessment and process evaluation. The training program is a week-long, intensive experience built around hands-on activities. This practice is very costly, and organizational decision-makers sought to determine not only whether the current training was effective, but also whether it could be shortened, adapted to incorporate online portions as e-training, or even eliminated. A pre-test post-test design measured improvement in general knowledge and understanding of learning objectives. Results indicated significant improvement in knowledge and understanding based on learning objectives. The evaluation process and the importance of considering organizational culture when assessing the need for change will be discussed.
Towards an Evaluative Model for Determining the Value of Faculty Diversity and Inclusion in Higher Education
Presenter(s):
Ray Haynes, Indiana University, rkhaynes@indiana.edu
Eric Abdullateef, Directed Study Services, eric.abdullateef@mac.com
Abstract: This paper presentation offers a dynamic model for determining the value of diverse faculty in higher education. It rests on the assumption that the administrations of predominantly white higher education institutions continuously grapple with the dilemma of achieving racial and ethnic diversity among their faculties. The model proffered is evaluative because it can be used to evaluate existing higher education diversity programs. The model is transformative because it creates a paradigm shift from a socio-economic view of diversity to an expanded view that incorporates cultural and ecological dimensions that are rarely considered or appropriately valued when faculty of color are hired to diversify higher education institutions.

Session Title: Innovations in Environmental Evaluation: Evaluating Natural Disasters and Effectiveness in Environmental Management
Multipaper Session 411 to be held in Wekiwa 9 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Katherine Dawes,  United States Environmental Protection Agency, dawes.katherine@epamail.epa.gov
An Effectiveness Revolution in Environmental Management
Presenter(s):
Matt Keene, United States Environmental Protection Agency, keene.matt@epa.gov
Andrew Pullin, University of Wales, a.s.pullin@bangor.ac.uk
Abstract: Since the initial exposure of environmental crises in the latter half of the 20th century, the environmental community has exploded into a huge and diverse number of organizations and disciplines that invest enormous resources in a comprehensive portfolio of approaches to 'solving' environmental problems. A powerful and growing demand for evidence that demonstrates the effectiveness of the environmental community has led to an increasing number and sophistication of initiatives focused on collecting evidence and making determinations of effectiveness. However, cultural, political, and financial obstacles persist, and the effectiveness of the environmental community remains unclear. We assess the evolution and current state of the environmental community and our knowledge of the effectiveness of its interventions, and examine how far we are from an effectiveness revolution in environmental management.
Evaluation of the Prevention of Accidental Chemical Releases During Natural Disasters
Presenter(s):
William Michaud, SRA International Inc, bill_michaud@sra.com
Abstract: Evaluation of programs aimed at preventing low-probability, high-consequence events is both challenging and necessary - challenging because the events of interest are scarce, and necessary because the cost of failure is high. This paper will describe an approach developed to help evaluate the impact of the U.S. Environmental Protection Agency's (EPA's) Risk Management Program (RMP) on preventing accidental chemical releases during natural disasters. The approach builds on previous research conducted using data collected during the first decade of the RMP program, as well as the concepts conveyed in the Organization for Economic Cooperation and Development's (OECD's) guidance on safety performance indicators. The paper will describe the analysis of the effect of exposure to natural disasters on accident prevalence and severity, the use of logic modeling to identify alternatives for establishing causation, and recommendations for an approach to provide useful feedback for the management of the RMP program.

Session Title: University-based Evaluation Training Across National Contexts: Trends and Findings
Panel Session 412 to be held in Wekiwa 10 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
Discussant(s):
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
Abstract: The pre-service preparation of evaluators through university-based training programs (UBTPs) has been the subject of sporadic inquiry by the professional evaluation associations, occasionally culminating in the publication of a UBTP directory. Although the profession of evaluation has developed greatly, the last comprehensive directory was published in 1994 (Altschuld, Engle, & Kim, 1994), leaving evaluation practitioners and policy-makers alike unsure of the current state of UBTPs. With the proliferation of professional evaluation organizations (Donaldson, 2006), however, comes a renewed interest in the pre-service preparation of professional evaluators and in the role of the university in training them. This panel of evaluation researchers and UBTP leaders has been convened to discuss recent trends and empirical research on UBTPs across contexts and countries, with representatives from the United States, Europe, and Australasia.
European University Continuing Education Programs Devoted to Evaluation
Wolfgang Beywl, University of Bern, wolfgang.beywl@zuw.unibe.ch
A survey completed in 2008 updates a picture first drawn a few years earlier: the baseline survey was undertaken in 2004/2005 and published in Evaluation (Beywl & Harich, 2007). The 2008 study by Beywl and Harich shows modest growth in the number of programs (14 in all), although some programs are disappearing here and there. For ten of the programs it was possible to obtain detailed data on aspects such as entry requirements, areas of evaluation covered, types of student assessment, and numbers of graduates. Figures on fees and on contact hours delivered are now also available for most of the programs. The density and stability of the programs is clearly a reflection of the strengths and weaknesses of national evaluation cultures. To enhance the quality of the programs and to pool resources, some of them have formed a network of university programs in evaluation education that is beginning practical cooperation.
University-based Training Programs Across Australasia
Rosalind Hurworth, University of Melbourne, r.hurworth@unimelb.edu.au
Rosalind Hurworth will report on what is happening in evaluation training across Australasia, paying particular attention to trends within the Centre for Program Evaluation at the University of Melbourne. The Centre offers a unique course, and this face-to-face and online program is growing both in the types of courses on offer and in the types of student it attracts. She will also gaze into the crystal ball to consider what lies ahead in the region.
The Growth of University-based Evaluation Training in the United States Findings and Opportunities
John LaVelle, Claremont Graduate University, john.lavelle@cgu.edu
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
Four evaluation training directories have been published since 1976, the most recent in 1994. Evaluation training directories were intended to provide snapshots of then-current training programs, but the processes that created each directory suffered from various methodological shortcomings (LaVelle, 2008). In spite of these limitations, when combined, the data suggest a drastic decline in the number of evaluation UBTPs since 1976, a trend that has only recently begun to reverse itself (LaVelle, 2008; LaVelle & Donaldson, in progress). Using a combination of Internet research methodologies, curriculum analysis, qualitative interviewing, and innovative mapping technology, new research by LaVelle and Donaldson offers an in-depth picture of UBTPs in the US, with an emphasis on the different educational contexts in which evaluators are being trained. These findings reveal dramatic growth in recent years, presenting aspiring and practicing evaluators with a wide range of training opportunities, which will be presented and discussed with the audience.
