
Session Title: Eyes Wide Open: Learning to Spot Ethical Quandaries in Evaluation Practice
Multipaper Session 600 to be held in International Ballroom A on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Ken Meter,  Crossroads Resource Center,  kmeter@crcworks.org
Abstract: Evaluators are likely to be confronted with challenging ethical dilemmas throughout their careers. No matter how assiduously they follow the AEA's Guidelines for Ethical Practice or other professional standards and codes of ethics, evaluators will find that some contexts generate intractable ethical issues. Evaluators will be able to resolve some ethical dilemmas satisfactorily, even to find in them invaluable learning opportunities. Often, however, ethical issues are costly regardless of whether or how evaluators address them. They can cost evaluators time, financial and staff resources, professional standing, trust, relationships, and peace of mind. For independent evaluators and small firms these costs can be especially onerous. In this session, using cases from their own practices and the literature on ethics in evaluation, independent evaluators examine reasons that ethical issues surface in various evaluation contexts and suggest strategies for identifying areas of potential conflict and steps to avoid or mitigate them.
Avoiding Ethical Entanglements: Learning About Self, Situation, and Stakeholders
Amy La Goy,  Evaluation and Research Consulting,  amylagoy@earthlink.net
Evaluators will face ethical dilemmas throughout their careers. Due to the contextual nature of ethical issues, there is no set of rules for determining an appropriate course of action when they arise. The American Evaluation Association document, 'Guiding Principles for Evaluators', suggests ways of thinking about and pursuing ethical practice, but evaluators are left to interpret the guidelines in light of their own values and the circumstances of the evaluation. Ethical dilemmas often surface when stakeholders and evaluators hold different expectations for and beliefs about the evaluation - about its aims, processes, stakeholders, outcomes, and audiences. Conflicts in expectations can engender some of the most difficult dilemmas to resolve, but they may be avoided or tempered if evaluators have relevant knowledge of themselves and their clients before undertaking a project. In this paper, we present a heuristic evaluators can use to identify and prepare for potentially problematic contexts and clients.
Crossroads Reached in Evaluation Practice: Learning to Identify Ethical Signposts
Norma Martinez-Rubin,  Evaluation Focused Consulting,  norma@evaluationfocused.com
Independent evaluators anticipate having the liberty to be selective about the clients they engage, the projects for which they are hired, and the duration of professional relationships with their clients. Quandaries examined retrospectively provide the evaluator opportunities to identify ethical pitfalls and prepare to manage future consulting engagements. Being unaware of such pitfalls can mar an evaluation practitioner's professional integrity and credibility, two valued assets in maintaining lasting relationships within the evaluation field and across client projects. In this paper presentation, case examples from the presenter's past consulting engagements will illustrate quandaries potentially encountered in the early development of an evaluation practice. Sections from the American Evaluation Association's Cultural Reading of Program Evaluation Standards will be discussed as professional signposts against which the solo practitioner can compare his or her approach to navigating ethical dilemmas faced in evaluation practice.

Session Title: The Proposed Program Evaluation Standards, Third Edition, Second Revision: A National Hearing
Panel Session 601 to be held in International Ballroom B on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the AEA Conference Committee
Chair(s):
Elmima Johnson,  National Science Foundation,  ejohnson@nsf.gov
Abstract: This session will present the current draft of the Program Evaluation Standards prepared by a taskforce of the Joint Committee on Standards for Educational Evaluation (JCSEE). The discussion will outline the process and procedures guiding the revision as well as proposed changes in content, format and organization. The purpose of this session, which is classified as a National Hearing, is to solicit and respond to feedback from those members of AEA with an interest in the development and use of the Program Evaluation Standards. All comments will be recorded and responded to by the Joint Committee. Continued input by AEA members will be encouraged and the procedures for review will be explained.
American Evaluation Association and the Program Evaluation Standards: Where do we Stand?
Elmima Johnson,  National Science Foundation,  ejohnson@nsf.gov
Elmima Johnson serves as the AEA representative to the Joint Committee on Standards for Educational Evaluation. She will discuss AEA's role on the Joint Committee, the role of the Standards within the profession and practice of evaluation, and the importance of AEA member input in the development and revision of this guide to practice.
The Joint Committee and the Program Evaluation Standards: Standards Development and Use
Arlen Gullickson,  Western Michigan University,  arlen.gullickson@wmich.edu
Arlen Gullickson is the Chair of the Joint Committee on Standards for Educational Evaluation (JCSEE) and will review the history of the organization, its purpose and membership and national and international use of the standards.
Revisions in Standards Format, Content and Organization and the New Metaevaluation Standards
Donald Yarbrough,  University of Iowa,  d-yarbrough@uiowa.edu
Don Yarbrough chairs the taskforce that is revising the Program Evaluation Standards. This presentation will frame the session with a discussion of proposed revisions in Standards format, content, and organization. He will also present a new Metaevaluation Standards area.
Proposed Revisions to the Propriety and Utility Standards
Rodney Hopson,  Duquesne University,  hopson@duq.edu
Lyn Shulha,  Queen's University,  shulhal@educ.queensu.ca
The presenters are both members of the taskforce that is revising the Program Evaluation Standards. The presentation will discuss proposed revisions to the 3rd edition of the Joint Committee Standards for Educational Evaluation as they relate to the Propriety and Utility Standards. The discussion will cover the standards under these two areas in their entirety, including the following aspects: overview, rationale, implementation, and potential hazards.
Proposed Revisions to the Feasibility and Accuracy Standards of the Third Edition of the Joint Committee Standards for Educational Evaluation
Flora Caruthers,  National Legislative Program Evaluation Society,  caruthers.flora@oppaga.fl.gov
Donald Yarbrough,  University of Iowa,  d-yarbrough@uiowa.edu
The presenters both serve on the taskforce that is revising the current Program Evaluation Standards. The presentation will discuss proposed revisions to the 3rd edition of the Joint Committee Standards for Educational Evaluation as they relate to the Feasibility and Accuracy Standards. The discussion will cover the standards under these two areas in their entirety, including the following aspects: overview, rationale, implementation, and potential hazards.

Session Title: How to Publish an Article in the American Journal of Evaluation: Guidance for First-time Authors
Demonstration Session 602 to be held in International Ballroom C on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the AEA Conference Committee
Presenter(s):
Robin Miller,  Michigan State University,  mill1493@msu.edu
Michael Hendricks,  Independent Consultant,  mikehendri@aol.com
Katherine Ryan,  University of Illinois at Urbana-Champaign,  k-ryan6@uiuc.edu
Abstract: We propose to deliver a workshop in which participants who have little experience publishing in a peer-refereed journal are provided a basic introduction to the process of publishing in AJE. In the session, we will detail the procedural aspects of submitting a manuscript. Most of our time, however, will be spent teaching participants, using examples, the key steps in writing a professional article and responding to editors' and reviewers' comments. The journal's editorial leadership and members of its Editorial Advisory Board will lead small groups addressing the specific issues and questions that participants may have about the journal article writing process.

Session Title: Studying Process Use on a Large Scale
Multipaper Session 603 to be held in International Ballroom D on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Susan Tucker,  Evaluation and Development Association,  sutucker@sutucker.cnc.net
Evaluation in Post War Countries: Tools and Skills Required
Presenter(s):
Mushtaq Rahim,  ARD Inc,  mrahim@ardinc.com.af
Abstract: Since 2000, Afghanistan has entered a new era, one that has brought interventions by many donor agencies and NGOs. Attention to evaluating results has also peaked during this era. However, identifying results has been a major challenge, since one of the main sources of data collection is direct interviews with beneficiaries. Because of low literacy rates, beneficiaries are often unable to comprehend the purpose of these interviews and therefore do not provide reliable data. On the other hand, funds are awarded to NGOs in the country without consideration of past experience. The aim is only to spend the money and obtain outputs, without focusing on the long term. There has scarcely been a single ex-post evaluation of any program or project. Hence, evaluation is rarely used to inform future project design.
Not by the Books: Models, Impacts and Quality in Ninety Evaluations
Presenter(s):
Verner Denvall,  Lund university,  verner.denvall@soch.lu.se
Abstract: A Swedish metropolitan policy has spent approximately $500,000,000 for the purpose of reducing social, ethnic, and discriminatory segregation and increasing sustainable growth in 7 municipalities and 24 city neighborhoods. This program has attracted about 90 evaluators from universities and companies, and as a result almost one hundred evaluations were produced between 1999 and 2006. In a research project, those evaluations were analyzed, the evaluators were interviewed, and 400 administrators and project leaders responded to a survey. The paper will focus on the evaluation models in use and the impact and quality of those evaluations. Why does quality not seem to correspond to impact? And how can we understand that the evaluators seem to have adopted a narrative model of their own, nowhere near the models presented in the textbooks?
Learning From Evaluations in National Governments of Developing Countries: The Case for Sub-Saharan African Countries
Presenter(s):
Rosern Rwampororo,  Ministry of Economic Planning and Development,  rwampororor@mepdgov.org
Rhino Mchenga,  Ministry of Economic Planning and Development,  rhinomchenga@yahoo.co.uk
Abstract: Drawing on experiences from Uganda and Malawi, the paper is premised on the assumption that learning is a process that enhances individual and collective capacity within national governments to create the desired results that governments would like to create now and in the future. The paper analyses and uses the level of monitoring and evaluation (M&E) capacity in each country to explain any difference in learning from evaluations attributable to the attention paid to the monitoring function at the expense of the evaluative function. The paper goes further to debunk the assumption that learning is a process that enhances collective capacity to create the results desired in decision-making and action, when in reality it may be individualistic. The paper demonstrates not only the weak culture of using monitoring information and evaluation findings but also the political influences that affect decisions at various levels.
On the Value-added of the Evaluation Process: Investigating Process Use in a Government Context
Presenter(s):
Courtney Amo,  Social Sciences and Humanities Research Council of Canada,  courtney.amo@sshrc.ca
J Bradley Cousins,  University of Ottawa,  bcousins@uottawa.ca
Abstract: This paper reports on the preliminary results of a study that examined the link between process use and use of evaluation findings in the Canadian federal government context. The study involved a pan-Canadian survey of evaluation practitioners in the federal government and an in-depth case study of a crown corporation of the Government of Canada. It examined such factors as the context in which an evaluation is conducted, the individuals involved in the evaluation, and the form of systematic inquiry used in the evaluation, and how these factors and characteristics fostered evaluation utilization and, in particular, process use. The results of this study help further our understanding of process use as a worthwhile consequence of evaluation and as a means of deriving further benefits from the evaluation process.

Session Title: Graduate Student and New Evaluators TIG Business Meeting and Presentation: Learning for High Quality Evaluation Practice: Training Options, Experiences, and Lessons Learned
Business Meeting with Panel Session 604 to be held in International Ballroom E on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Graduate Student and New Evaluator TIG
TIG Leader(s):
Chris Coryn,  Western Michigan University,  christian.coryn@wmich.edu
Stephen Hulme,  Brigham Young University,  byusnowboarder@yahoo.com
Daniela C Schroeter,  Western Michigan University,  daniela.schroeter@wmich.edu
Annette Griffith,  University of Nebraska, Lincoln,  annettekgriffith@hotmail.com
Chair(s):
Bianca Montrosse,  University of North Carolina, Chapel Hill,  montrosse@mail.fpg.unc.edu
Abstract: Gaining relevant experience is paramount for those seeking to pursue entry into the evaluation workforce (e.g., Altschuld & Engle, 1994; Berkel, 2004; Dewey, Montrosse, Schroeter, Sullins, & Mattox, 2006; Engle & Altschuld, 2004; Stevahn, King, Ghere, & Minnema, 2005, 2006; Stufflebeam, & Wingate, 2005). However, those seeking to pursue various experience opportunities are often left wandering down a path for which there is no map. That is, they are aware that experience matters, but not knowing how to locate these opportunities or how to cultivate realistic expectations often serves as a barrier. Too often, students and new evaluators are unable to identify these opportunities or are left unsatisfied with their experiences. The purpose of this panel is two-fold. First, it will provide a general overview of how to locate various opportunities. Second, presenters with relevant expertise will discuss their experiences and lessons learned and offer advice for those seeking to pursue similar opportunities.
Locating Training Opportunities: Strategies That Work
John LaVelle,  Claremont Graduate University,  john.lavelle@cgu.edu
Evaluation is a skill set that is in great demand in both the for-profit and not-for-profit sectors, and the importance of learning opportunities and work experiences in evaluation is undisputed. Unfortunately, students and new practitioners alike experience difficulty locating appropriate opportunities, which can leave them disappointed and anxious. The presenter will draw on his experience as Jobs Coordinator at Claremont Graduate University to share general job search strategies, such as 1) using online resources, 2) networking, networking, networking, and 3) recognizing indicators that an evaluation opportunity might be present even if the description does not say 'evaluation'.
Being an Urban Education Research Fellow for the Los Angeles Unified School District
Eric Barela,  Los Angeles Unified School District,  eric.barela@lausd.net
The Urban Education Research Fellowship (UERF), offered by the Los Angeles Unified School District (LAUSD), gives graduate students in a variety of social science research disciplines the opportunity to conduct evaluation in a unique environment. In addition to providing tuition reimbursement and half-time employment, the UERF provides a crash course in school district evaluation. Dahler-Larsen (2002) suggests that it is difficult to separate evaluation practice from the organization in which it occurs. This is very true of school district evaluation. There are unique political pressures in a school district that provide challenges and opportunities for evaluators. The UERF gives graduate students seeking to become school district evaluators the opportunity to reconcile their theoretical training with the real-world constraints placed on school district evaluation while also providing them with guidance and mentoring from a community of over 30 practicing evaluators.
Navigating the Non-Profit World: The HeartShare Human Services of New York Experience
Ariana Brooks,  HeartShare Human Services,  ariana.brooks@heartshare.org
Since opening its doors in 1914, HeartShare Human Services of New York has strived to improve the lives of those most in need. Currently, this non-profit agency provides a complete spectrum of services (e.g., youth programs, HIV/AIDS services, Medicaid service coordination) to more than 16,000 individuals living in the New York City area. This presentation will focus on making the transition from graduate school to this large not-for-profit. More specifically, issues explored will include required skills, the interview and relocation processes, and managing graduate school and job responsibilities concurrently.
A Policy-Based Predoctoral Fellowship Program: The Good, Bad, and In-Between
Bianca Montrosse,  University of North Carolina, Chapel Hill,  montrosse@mail.fpg.unc.edu
Pre- and postdoctoral fellowships are one avenue that those seeking to expand their repertoire of skills and gain relevant evaluation experience can pursue. The current paper explores one such opportunity: a year-long predoctoral fellowship at the University of North Carolina at Chapel Hill. After describing the fellowship program, a number of questions are addressed. For example, what was the process of locating the opportunity? What have been the positive and challenging experiences within the context of the fellowship? The paper concludes with recommendations for those interested in pursuing similar opportunities.
Does Size Matter?
Daniela C Schroeter,  Western Michigan University,  daniela.schroeter@wmich.edu
Graduate students commonly juggle coursework with gaining useful practical experience that contributes to their professional development in their careers of interest. Graduate assistantships, internships, and personal projects are viable options, especially if they provide evaluation-specific experience. Saying “yes” to emerging opportunities can open doors to a variety of evaluative experiences in very different areas, including international development evaluations, educational program and policy evaluations, national multi-site evaluations, and community-based evaluations. However, the size of the project influences the nature of the experience, as well as the depth and breadth of the competencies gained.

Session Title: The Use of Theoretical Models and Perspectives to Inform Evaluations
Multipaper Session 605 to be held in Liberty Ballroom Section A on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Martha Holleman,  The Safe and Sound Campaign,  mholleman@safeandsound.org
Structures and Impacts on Program Evaluation: Applying a Peace Builders Model
Presenter(s):
Didi Fahey,  The Ohio State University,  fahey.13@osu.edu
Abstract: To avoid the development of culturally biased program evaluation, it may be necessary to employ a strategy developed by peace psychologists. Originally designed to examine how social and political structures simultaneously hold direct and indirect impacts upon certain individuals within a society, this peace-building perspective can be applied to program development and evaluation. This paper looks at how programs and program choices may benefit some while visiting violence upon others. Taking the example of university outreach programs in local school districts, the peace-builders approach demonstrates how program evaluation can be designed to at least acknowledge, if not accommodate, the various social and organizational structures affecting program implementation at all three levels of program service and development. Individuals who engage in programming need to remain sensitive to how indirect structures and impacts can affect the overall program, and how that program is perceived and valued.
Developing and Testing a Developmental Model to Promote the Civic Engagement of Youth
Presenter(s):
Joyce Serido,  University of Arizona,  jserido@email.arizona.edu
Lynne Borden,  University of Arizona,  bordenl@ag.arizona.edu
Abstract: Despite the need for an actively involved citizenry to ensure that communities are both stable and healthy, current research suggests that there is decreasing civic engagement among today's youth (Ginwright & James, 2002; Mahoney, Larson, Eccles, & Lord, 2005; Sherrod, Flanagan, & Youniss, 2002). Some studies have found that youth who participate in both school-based and community-based programs during high school remain more civically engaged than their contemporaries throughout adulthood (Verba, Scholzman, & Brady, 1995; Youniss, McClellan, & Yates, 1997). However, these studies are not based on a developmental model outlining the processes through which program participation promotes civic responsibility. In the first part of this presentation we describe a three-phased process model of development derived from a qualitative study of an existing community program. We then present the results of empirically testing the model using data from an online evaluation study of rural youth from 29 states.
The Resiliency Model for Organizations: Using Organizational Theory to Inform Evaluation Practices
Presenter(s):
Taj Carson,  Carson Research Consulting Inc,  taj@carsonresearch.com
Laurie Reuben,  Cheshire Consulting Group,  laurie@cheshiregroup.net
Abstract: In this session, the presenters will explore the concept of organizational resiliency. They will describe their framework for looking at factors that identify opportunities to build organizational resiliency. The importance of a constant flow of the right information, thoughtful strategies, the development of key qualities and the indicators of resiliency will be discussed, as well as the elements of a resilient organization and the evidence to support the inclusion of those elements in the framework. Evidence for this organizational model comes from secondary analysis of research conducted with organizations and from the field of organization development. This framework can provide evaluators with a way to think about the organizations in which they work and to identify strengths and weaknesses of the organization.
Logic Model Ownership: Implications for Logic Model Utilization and Program Effectiveness
Presenter(s):
Dustin Duncan,  Harvard University,  dduncan@hsph.harvard.edu
Abstract: Much has been written about the myriad benefits of utilizing logic models, including facilitating program development/evaluation and increasing program effectiveness. Less is known about logic model ownership (who physically possesses the logic model, what factors may make one feel that one should possess it, and who takes credit for it) and how ownership relates to logic model utilization and program effectiveness. However, logic model ownership may be implicated in these outcomes. The author hypothesizes that logic model ownership is positively associated with logic model utilization. Further, it may be that when staff and stakeholders do not “own” the logic model, the program may have limited effectiveness. These hypotheses are discussed in the context of the current evaluation literature.
The Importance of Developing Faith Based Program Theory
Presenter(s):
Ayana Perkins,  Georgia State University,  ayanaperkins@msn.com
Abstract: The power of faith-based institutions is their ability to reach congregants and non-congregants in their surrounding communities. Public health administrators are well aware of the importance of faith-based institutions in human service work. This support for faith-based human service work is further reflected in increased funding for these organizations. Funders often require evaluation to ensure accountability in meeting the needs of the target audience and to assess the process and outcomes of the work. However, the lack of rigorous evaluation may leave many community stakeholders suspicious and distrustful of federally funded faith-based initiatives. Relying on minimal evaluation methods may serve the needs of the funder but could ultimately lead to a loss of confidence in faith-based human service work. In this presentation, the author uses a qualitative systematic review to identify key elements missing in faith-based evaluations and to explicate the importance of developing evaluation theory for faith-based programs.

Session Title: Evaluation Capacity Development: A Systems Perspective in an International Context
Multipaper Session 606 to be held in Liberty Ballroom Section B on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG , the International and Cross-cultural Evaluation TIG, and the Multiethnic Issues in Evaluation TIG
Chair(s):
Patricia Rogers,  Royal Melbourne Institute of Technology,  patricia.rogers@rmit.edu.au
Discussant(s):
Bob Williams,  Independent Consultant,  bobwill@actrix.co.nz
Evaluation Standards Development as Organizational Capability Building
Presenter(s):
Melissa Weenink,  New Zealand Ministry of Education,  melissa.weenink@minedu.govt.nz
Kate McKegg,  The Knowledge Institute Ltd,  kate.mckegg@xtra.co.nz
Abstract: New Zealand has adopted a new public sector management framework, Managing for Outcomes. One of the framework's aims is to improve the state sector's ability to decide what evaluative activity to undertake, undertake it, and use the findings. In response, the New Zealand Ministry of Education developed an Evaluation Strategy. The Strategy has two aims: to improve the quality of evaluation the Ministry does, and to improve organizational capacity to use evaluation. Implementation of the Strategy covers creating the incentives and conditions that stimulate demand for evaluation and evaluative activity; developing the appropriate structures, processes, and resources to support evaluative activity; and building the supply of evaluative expertise and capability to scope, design, manage, and use evaluation. One of the Strategy's key implementation projects is developing organization-wide evaluation practice standards. Developing these standards addresses both improving quality and building organizational capacity. This paper describes the process of developing the standards and reflects on how well this has worked initially as an organizational capacity building tool and on the conditions that have influenced its success.
Knowledge Network for Evaluation Capacity Development in Developing Countries
Presenter(s):
Naonobu Minato,  Foundation for Advanced Studies on International Development,  minato@fasid.or.jp
Abstract: Effective use of evaluation results is one of the most important elements of economic and social development in developing countries. In order to establish an effective evaluation system and meet the necessary conditions for effective feedback of evaluation results, the development of institutional and human capacity is essential. Not only evaluation experts but also users of evaluation results need to deepen their knowledge of and experience with evaluation. For capacity development of evaluation personnel, a knowledge network may be an effective modality. A knowledge network enables information-sharing among evaluation experts, academia, practitioners, government officers, civil society, donors, and NGOs. It also provides opportunities for learning from each other and for bridging gaps between evaluation and policy making. A knowledge network among the evaluation associations of developing countries will contribute greatly to evaluation capacity development by sharing ideas, exchanging experiences, and promoting peer review.

Session Title: Methods in Evaluation
Multipaper Session 607 to be held in Mencken Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Elizabeth Sale,  Missouri Institute of Mental Health,  liz.sale@mimh.edu
Comparing the Use of Standardized and Site-specific Instrumentation in National and Statewide Multi-site Evaluations
Presenter(s):
Elizabeth Sale,  Missouri Institute of Mental Health,  liz.sale@mimh.edu
Mary Nistler,  Learning Point Associates,  mary.nistler@learningpt.org
Carol Evans,  Missouri Institute of Mental Health,  carol.evans@mimh.edu
Abstract: Choosing instrumentation in the evaluation of multi-site programs can be challenging. While cross-site evaluators may opt to use a standardized instrument across all sites, local programs may not be amenable to adopting cross-site instruments for a variety of reasons. First, outcomes measured by cross-site evaluators may simply not be of interest to local programs. Second, cross-site instrumentation may not be culturally appropriate or age-specific for a given site. Third, because cross-site evaluations using standardized instruments adopt a “one size fits all” mentality, they may fail to capture changes in individuals that could be captured using site-specific instruments. We compare instrumentation decisions in three multi-site studies (an early childhood program, a mentoring program, and a suicide prevention program) using standardized and program-specific instrumentation, and their impact on both cross-site and local program evaluation. Implications for instrument selection in future multi-site evaluations are discussed.
Analysis of Nested Cross-sectional Group-Randomized Trials With Pretest and Posttest Measurements: A Comparison of Two Approaches
Presenter(s):
Sherri Pals,  Centers for Disease Control and Prevention,  sfv3@cdc.gov
Sheana Bull,  University of Colorado, Denver,  sheana.bull@uchsc.edu
Abstract: Evaluation of community-level HIV/STD interventions is often accomplished using a group-randomized trial, or GRT. A common GRT design is the pretest-posttest nested cross-sectional design. Two analytic strategies for this design are the mixed-model repeated measures analysis of variance (RMANOVA) and the mixed-model analysis of covariance (ANCOVA), with pretest group means as a covariate. We used data from the POWER (Prevention Options for Women Equal Rights) study to demonstrate power analysis and compare models for two variables: any unprotected sex in the last 90 days and condom use at last sex. For any unprotected sex, the RMANOVA approach was more powerful, but the ANCOVA approach was more powerful for the analysis of condom use at last sex. The difference in power between these models depends on the over-time correlation at the group level. Investigators designing GRTs should do an a priori comparison of models to plan the most powerful analytic approach.
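For readers unfamiliar with the two analytic strategies compared in the abstract above, the sketch below illustrates them on simulated data using Python's statsmodels mixed-effects models. It is a minimal illustration under stated assumptions, not the POWER study's actual analysis; the data, variable names, and group sizes are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a nested cross-sectional group-randomized trial (hypothetical data):
# 20 groups randomized to condition, with distinct respondents at pretest and posttest.
rng = np.random.default_rng(0)
rows = []
for g in range(20):
    cond = g % 2                        # 0 = control, 1 = intervention
    group_effect = rng.normal(0, 0.3)   # random group-level deviation
    for time in (0, 1):                 # 0 = pretest, 1 = posttest
        for _ in range(30):             # 30 distinct respondents per wave
            y = 0.5 + 0.4 * cond * time + group_effect + rng.normal(0, 1)
            rows.append({"group": g, "cond": cond, "time": time, "y": y})
df = pd.DataFrame(rows)

# Approach 1: mixed-model "RMANOVA" -- condition-by-time interaction,
# with a random intercept for group (the group is the repeated unit over time).
rmanova = smf.mixedlm("y ~ cond * time", df, groups=df["group"]).fit()
print(rmanova.summary())

# Approach 2: mixed-model ANCOVA -- posttest observations only, with the
# group-level pretest mean entered as a covariate.
pre_means = (df[df["time"] == 0]
             .groupby("group")["y"].mean()
             .rename("pre_mean")
             .reset_index())
post = df[df["time"] == 1].merge(pre_means, on="group")
ancova = smf.mixedlm("y ~ cond + pre_mean", post, groups=post["group"]).fit()
print(ancova.summary())

As the abstract notes, which approach is more powerful depends on the over-time correlation at the group level, so a comparison of this kind is best run a priori when planning the trial.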
Closing the Gap on Access and Integration: An Evaluation of Primary and Behavioral Health Care Integration in Twenty-four States
Presenter(s):
Elena Vinogradova,  REDA International Inc,  evinogradova@redainternational.com
Elham Eid Alldredge,  REDA International Inc,  alldredge@redainternational.com
Abstract: Four “Closing the Gap on Access and Integration: Primary and Behavioral Health Care” Summits were conducted in 2004 by the Health Resources and Services Administration in collaboration with SAMHSA. During these facilitated meetings, state teams developed state-specific strategic action plans that aimed to integrate mental health, substance abuse, and primary care services. During the following two years, a comprehensive evaluation of the summits' impact was conducted by REDA International, Inc. The evaluation utilized multiple sources of data and used a groundbreaking comparative multiple case study methodology. The data analysis revealed that the extent of the summits' impact on the states' efforts to integrate primary and behavioral health care was largely determined by a few critical factors that need to be better understood in future federal efforts to promote a state-level change.
System-level Evaluation: Strategies for Understanding Which Part of the Elephant Are We Touching?
Presenter(s):
Mary Armstrong,  University of South Florida,  armstron@fmhi.usf.edu
Karen Blase,  University of South Florida,  kblase@fmhi.usf.edu
Frances Wallace,  University of South Florida,  fwallace@fmhi.usf.edu
Abstract: Patton (2002) compares evaluations of complex adaptive systems to nine blind people, all of whom touch a different part of an elephant and thereby have different understandings of the elephant. He points out that, from a systems perspective, to truly understand the elephant one must see it in its natural ecosystem. This paper will use a state-level evaluation of New Jersey's children's behavioral health system conducted in 2006 to illustrate the challenges and solutions confronting system-level evaluators. Solutions utilized by the study team include a participatory action framework; a multi-method approach to data collection that included key stakeholder interviews, document reviews, web-based surveys, focus groups, analysis of administrative datasets to understand penetration rates, geographic equity, and service utilization, and interviews with caregivers and their case managers; and an interactive hermeneutic approach to data analysis and interpretation. The paper will conclude with a set of challenges for future system-level evaluators.

Session Title: Evaluating School District Emergency Management Plans Using Government Performance and Review Act (GPRA) Performance Measures and Indicators
Panel Session 608 to be held in Edgar Allen Poe Room  on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Kathy Zantal-Wiener,  Caliber an ICF International Company,  kzantal-wiener@icfcaliber.com
Abstract: In response to the rise in crises and emergencies affecting school environments, such as natural disasters, school shootings, deaths/suicides, fires, and chemical spills, the U.S. Department of Education (ED) established the Emergency Response and Crisis Management (ERCM) Grant Initiative to support schools and school districts in developing emergency management plans. As part of the grant, schools and school districts must evaluate the formation, implementation, and sustainability of their emergency management plans, using the Government Performance and Results Act (GPRA) performance measures and indicators. The purpose of this panel is to provide an overview of ED's ERCM Grant Initiative, and discuss the implementation of GPRA measures and indicators as an evaluation mechanism for ERCM grantees.
United States Department of Education's Initiative to Improve School Emergency Management Plans
Thomas J Horwood,  Caliber an ICF International Company,  thorwood@icfcaliber.com
This presentation will focus on orienting participants to the U.S. Department of Education's Emergency Response and Crisis Management (ERCM) Initiative. The presentation will include an overview of the initiative and will present the four phases of emergency management: prevention-mitigation, preparedness, response, and recovery. The presenter also will provide a synopsis of the grant program that supports the initiative, including funding ranges, eligible grant recipients, grantee requirements, and demographic data about the projects funded by the initiative. Lastly, an overview of the Government Performance and Results Act (GPRA) will be presented and the specific measures ERCM grantees must use to evaluate grant outcomes will be discussed.
Government Performance and Review Act Performance Measures and Indicators for Evaluating School Emergency Response and Crisis Management Plans
Kathy Zantal-Wiener,  Caliber an ICF International Company,  kzantal-wiener@icfcaliber.com
This presentation will include a discussion of the challenges associated with evaluating the activities involved in designing and implementing a U.S. Department of Education Emergency Response and Crisis Management grant, including no required set-aside funds for evaluation, a short project period, a lack of experienced evaluators, and the use of Government Performance and Results Act (GPRA) performance measures and indicators to evaluate the grant activities. The session will focus on which GPRA measures and indicators are appropriate, data collection timelines, data collection activities and instruments, and evaluator qualifications. To conclude, the presenter will discuss how to use the evaluation data to communicate relevant evaluation findings to the various stakeholders (e.g., school district personnel, first responders, school board members, and the community). Case scenarios will provide opportunities to explore the difficulties and realities as one district embraces evaluation to provide safe schools.
Using Government Performance and Review Act Performance Measures and Indicators to Improve the Seattle (Washington) Public Schools Emergency Response and Crisis Management Grant Project
Thomas J Horwood,  Caliber an ICF International Company,  thorwood@icfcaliber.com
Maintaining a safe and healthy learning environment is one of Seattle Public Schools' most important functions. However, few school district personnel have expertise in emergency management, and many do not understand how to evaluate the effectiveness of current plans. In this era of reduced funds, increased accountability, and aggressive vendors, more effective evaluation measures of school-based emergency management plans are needed. Seattle Public Schools has collected data on emergency management planning for a variety of reasons, from responding to lawsuits to providing data to the school board, superintendent, and parents to help them understand the condition of school safety and make effective policy and funding decisions. Case scenarios will provide opportunities to explore the difficulties and realities as one district embraces evaluation to provide safe schools.

Session Title: Getting To Outcomes at the Federal, State, County, and Local Levels: Session I
Panel Session 609 to be held in Carroll Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Abraham Wandersman,  University of South Carolina,  wandersman@sc.edu
Catherine Lesesne,  Centers for Disease Control and Prevention,  ckl9@cdc.gov
Abstract: Getting To Outcomes is an approach to help practitioners plan, implement, and evaluate their programs to achieve results. The roots of GTO are traditional evaluation, empowerment evaluation, continuous quality improvement, and results-based accountability. GTO uses 10 accountability questions; addressing the 10 questions involves a comprehensive approach to results-based accountability that includes evaluation and much more. It includes: needs and resource assessment, identifying goals, target populations, desired outcomes (objectives), science and best practices, logic models, fit of programs with existing programs, planning, implementation with fidelity, process evaluation, outcome evaluation, continuous quality improvement, and sustainability. GTO workbooks have been developed in several domains (substance abuse prevention, preventing underage drinking, positive youth development), and others are under development in several more (preventing teen pregnancy, preventing violence, emergency preparedness). The papers in this panel will show how GTO is being used at the federal, state, county, and local levels.
Improving Teen Pregnancy Prevention Practice Using Getting to Outcomes: A National Capacity-building Project
Catherine Lesesne,  Centers for Disease Control and Prevention,  ckl9@cdc.gov
Kelly Lewis,  James Madison University,  lewiskristi@gmail.com
Claire Moore,  Centers for Disease Control and Prevention,  cxo7@cdc.gov
Diane Green,  Centers for Disease Control and Prevention,  dcg1@cdc.gov
In the teen pregnancy prevention field there are many efficacious programs, but adoption, implementation, and evaluation of these has been limited nationally. In response to this issue, CDC is funding a capacity-building program called "Promoting Science-based Approaches" (PSBA) aimed at improving adolescent reproductive health by encouraging the use of science-based prevention approaches. PSBA recently adopted the Getting to Outcomes (GTO) framework and began customization of a new GTO for the teen pregnancy prevention field called PSBA-GTO. PSBA-GTO will serve both as a guide for state grantees providing support and technical assistance to local partners and as a process to build the capacity of local partners to plan for, select, implement, and evaluate science-based prevention programs. This multi-stakeholder, capacity-building approach offers a national-level perspective on using the GTO framework to improve prevention practice. The authors will present the project model and discuss successes and challenges to date.
Embedding Getting To Outcomes in State and County Government Operations
Lawrence Pasti,  New York State Office of Children and Family Services,  larry.pasti@ocfs.state.ny.us
In New York State, county government shares responsibility with state agencies for the planning, funding, implementation, and monitoring of services. Both are committed to accounting for the results of their use of resources and are interested in the use of evidence-based programs. Getting To Outcomes provides a logical set of questions for achieving results, both for specific programs and for county-level planning. Since public sector agencies already have requirements to perform those functions, GTO enhances them rather than imposing new ones.
Getting To Outcomes with State and Local Social Services and Benefits Offices in New York State
Marilyn Ray,  Finger Lakes Law and Social Policy Center Inc,  mlr17@cornell.edu
This paper describes a year-long contract to train state, regional, and local social services and benefits workers in New York State in the Getting to Outcomes (GTO) logic model for program planning, implementation, and evaluation. A key lesson we all learned from this project is the adaptability of the GTO model to a vast range of projects, including developing training programs in a range of contexts; working with local service providers on positive youth development projects; designing blended learning institutes; assisting a county coalition in redesigning a failing program; and working with local service providers to develop results-oriented contracts. We also relearned the critical importance of follow-up support and technical assistance if new approaches are to take hold and be incorporated into daily work tasks.
Getting to Outcomes for Emergency Preparedness: A Pilot Adaptation for Local Practitioners
Melanie Livet,  University of South Carolina,  melanielivet@yahoo.com
Karen Pendleton,  Centers for Disease Control and Prevention,  ktpendl@gwm.sc.edu
Duncan Meyers,  University of South Carolina,  meyersd@gwm.sc.edu
Joselyn Burdine,  Centers for Disease Control and Prevention,  burdinjr@gwm.sc.edu
Despite the rapid increase in federal funding since 9/11, there is often a lack of accountability for the monies awarded for emergency preparedness. In addition, federal agencies have developed emergency preparedness approaches that are primarily a blend of military and business planning models. Because preparedness and response are ultimately local issues, it is important that national guidance be translated into systematic community-based guidelines. Getting To Outcomes (GTO) was selected and adapted to address both the lack of accountability and the need for a community-based planning system that complements the national framework. Our Emergency Preparedness GTO (EP-GTO) was pilot tested as part of a team-based preparedness training for public health agencies and their response partners. We will discuss: (1) the resulting EP-GTO; (2) its use as part of the training; and (3) preliminary results on the effectiveness of EP-GTO.

Session Title: When Funders, Evaluators and Service Providers Work Together a Good Idea Gets Better
Panel Session 610 to be held in Pratt Room, Section A on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Anita Baker,  Anita Baker Consulting,  abaker8722@aol.com
Discussant(s):
Beth Bruner,  Bruner Foundation,  bbruner@brunerfoundation.org
Abstract: In 2003, Lifespan, a non-profit organization dedicated to serving older adults, joined with the Al Sigl Center, an alliance of independent agencies that serve people with disabilities and their families, to create support for aging adults with disabilities who have specialized future care needs. The result was Future Care Planning Services (FCPS). Evaluation was an initial component and continues to be an important aspect of the work. Through this session, the various partners will present details about how FCPS was conceived, delivered, and evaluated, and how evaluation has informed ongoing implementation. The session will include a presentation of some evaluation findings, but more importantly it will include a multi-perspective discussion about how working in a collaboration that includes service professionals, funders, and evaluators has transformed all of their approaches to work. The discussant from the Bruner Foundation will reflect on how evaluative thinking has governed and enhanced FCPS.
Developing and Modifying Future Care Planning Services: Key Lessons About Working With Provider Partners, Evaluators and Funders
Ann Marie Cook,  Lifespan of Greater Rochester,  amcook@lifespan-roch.org
Daniel Meyers,  Al Sigl Center,  d_meyers@alsiglcenter.org
Lifespan, a Rochester non-profit organization dedicated to serving older adults (and a former participant in an evaluation learning project), joined with the Al Sigl Center, an alliance of independent agencies that serve people with disabilities and their families, to create Future Care Planning Services (FCPS). FCPS also involves multiple funders and has included rigorous program evaluation since inception. This part of the session will focus on why FCPS was created, including clarification of the context and need for FCPS; why the developers decided to deliver the program collaboratively; and why FCPS was designed to include multiple funders and an evaluator all working together. After other presenters have provided specific details about service delivery, funder involvement, and evaluation design and findings, the FCPS developers will talk about key lessons they've learned from FCPS, what's next, and how working with evaluators and funders has changed the way their organizations work.
Supporting FCPS and Partnerships between Evaluators, Service Providers and Funders
Ann Costello,  Golisano Foundation,  acostello@golisanofoundation.org
FCPS provides an opportunity for the Golisano Foundation to work closely with a program it supports. This has included serving on the steering committee, which helps monitor program outcomes and craft new service delivery approaches when evaluation findings suggest they are necessary. During this part of the panel session, the roles of funders in FCPS will be described. Additionally, this part of the panel will include a discussion about how funders can be supportive when evaluation findings show there are problems that need to be addressed. Finally, after other panel members have presented specific details about project development, service delivery, and evaluation design and findings, key lessons about working together and establishing meaningful roles for funders beyond check-writing will be discussed.
Initiating, Implementing, Institutionalizing Future Care Planning Services
Doris Green,  Future Care Planning Services,  dgreen@futurecareplanning.org
Jody Rowe,  The ARC of Monroe County,  j_rowe@arcmonroe.org
FCPS continues to be an engaging and challenging project to manage. Doris Green, FCPS Director, will share with the audience the specifics about how this project is delivered, how it is received, and what has been accomplished. After each panelist has made his/her initial presentation, key lessons about what's been learned through the evaluation about information management/data collection, program outcomes and challenges will be discussed. Key strategies for working productively with evaluators will also be shared.
Developing Implementing and Enhancing Use of Future Care Planning Services Evaluation
Anita Baker,  Anita Baker Consulting,  abaker8722@aol.com
Anita Baker has served as the evaluator for FCPS since it was in the design stages. This part of the panel session will focus on how the FCPS evaluation was initially designed, how and why it has been modified as program modifications happened and new information needs arose, and what has been learned through the evaluation. During this part of the panel, the speaker, an evaluator, will present some specific findings about FCPS, such as who is served, how that changes, how long it takes to develop future care plans, and whether plans have been implemented. The presenter will also discuss what she and her colleagues have learned about evaluation through FCPS.

Session Title: Engaging Communities in Sustainable Systemic Change: A Five Year Analysis of the W K Kellogg's Leadership for Community Change Series
Panel Session 611 to be held in Pratt Room, Section B on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Matthew Militello,  University of Massachusetts, Amherst,  mattm@educ.umass.edu
Discussant(s):
Teresa Behrens,  W K Kellogg Foundation,  tbehrens@wkkf.org
Abstract: This session will explore the evaluation of the W.K. Kellogg Foundation initiative, Kellogg Leadership for Community Change (KLCC). The KLCC series aims to focus on, articulate, and celebrate leadership needs and styles at the community level to develop collective leadership for systemic change. The national evaluation team and local site evaluators were charged with responding to three overarching issues: (1) to learn how collective leadership is developed and sustained for improvements in teaching and learning, (2) to learn about community readiness and capacity to create change, and (3) to learn how organizational structures can enhance community building and leadership for change. The first series of KLCC focused on six communities across the U.S. between 2002 and 2004. Recently, a longitudinal evaluation of the sites was conducted. This presentation will report on the findings from KLCC from 2002-2007 with evaluation data that includes interviews, surveys, Photovoice, and Q-methodology.
Evaluating Collective Leadership for Community Change
Maenette Benham,  Michigan State University,  mbenham@msu.edu
Maenette Benham is the principal investigator for series one and two of the Kellogg Leadership for Community Change initiative. She also leads the longitudinal evaluation team. Benham will provide an overview of the evaluation design.
Q-Methodology for Collective Leadership
Matthew Militello,  University of Massachusetts, Amherst,  mattm@educ.umass.edu
Matt Militello has been a member of the national evaluation team since 2002. He has also been contracted by Kellogg to be a member of the longitudinal evaluation team. Militello will describe the use of Q-methodology in the six sites as a longitudinal measure of collective leadership.
Surveying for Collective Leadership
Anna Ortiz,  California State University, Long Beach,  aortiz6@csulb.edu
Anna Ortiz is leading the survey development in the KLCC series. She will report on the use of site-based and on-line surveys in the longitudinal evaluation efforts.
The Power of the Local Evaluation Team
Crystal Elissetche,  Kalamazoo College,  kurisuteru04@yahoo.com
Crystal Elissetche began her work as a KLCC Fellow at the South Texas site. As a fellow, she also worked as a member of the local evaluation team. In 2005, Crystal was contracted as a member of the longitudinal national evaluation team. Her presentation will focus on the implementation, support, and sustainability of local evaluation efforts.
Using the Photovoice Process as a Data Collection Tool
John Oliver,  Michigan State University,  oliver10@msu.edu
John will be discussing the use of the Photovoice process as a data collection tool for this evaluation.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluation of an HIV Awareness and Sexual Decision-making Peer Education Program Among University Students: Lessons Learned
Roundtable Presentation 612 to be held in Douglas Boardroom on Friday, November 9, 1:55 PM to 3:25 PM
Presenter(s):
Natalie De La Cruz,  University of Alabama, Birmingham,  ng36@uab.edu
Nish McCree-Hale,  University of Alabama, Birmingham,  mccree-hale@mindspring.com
Ann Elizabeth Montgomery,  University of Alabama, Birmingham,  annelizabethmontgomery@gmail.com
Faith Fletcher,  University of Alabama,  fletch95@gmail.com
Abstract: Sexual Health Awareness Through Peer Education (SHAPE) is an outreach project of the Center for AIDS Research at the University of Alabama at Birmingham (UAB). SHAPE educators are students who present workshops to improve sexual decision-making and HIV awareness among their peers. Peer education is an important strategy to prevent HIV and STIs. The study team (1) conducted focus groups with peer educators to define program objectives and develop survey instruments to conduct pretests and posttests of workshop participants' knowledge and behaviors; and (2) developed an evaluation plan comprising a one-group pretest-posttest design using a nonequivalent dependent variable to measure changes in participants' sexual decision-making skills and HIV-related knowledge, attitudes, and behaviors. Data will be collected through Fall 2007. This presentation will highlight the process and activities aimed at developing an evaluation for an established community-based program.
Roundtable Rotation II: At the Starting Gate: Planning the Evaluation of an Initiative to Enhance Student Engagement at a State University
Roundtable Presentation 612 to be held in Douglas Boardroom on Friday, November 9, 1:55 PM to 3:25 PM
Presenter(s):
Marc Braverman,  Oregon State University,  marc.braverman@oregonstate.edu
Lizbeth Ann Gray,  Oregon State University,  grayli@oregonstate.edu
Anne Hatley,  Oregon State University,  anne.hatley@oregonstate.edu
Brandi Hall,  Oregon State University,  hallbra@onid.orst.edu
Abstract: This roundtable will address issues related to evaluation planning for a formal program aimed at increasing undergraduate student engagement at Oregon State University. For the 2007-08 academic year, OSU's College of Health and Human Sciences is launching a multi-faceted initiative that incorporates the promotion of student learning communities, development of new courses, linkages to existing courses, and informal campus activities. Each of the College's four departments has planned its own program, so the evaluation must be responsive to between-program differences, overriding themes, and college-wide commonalities. Further complexity for the evaluation is introduced by the numerous primary audiences, including central campus administration, the college Dean's office, individual departments, faculty, and students. Most of these parties have little direct experience with evaluation, and initial activities have focused largely on consensus-building, goal specification, and negotiation. This roundtable will highlight challenges common to similar higher education programs, to promote insights for the evaluation process.

Session Title: Culturally Responsive Evaluation Training for Students of Color: From Classroom to Fieldwork and Back
Panel Session 613 to be held in Hopkins Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Veronica Thomas,  Howard University,  vthomas@howard.edu
Abstract: How can evaluation faculty provide meaningful coursework and practical experiences for students of color? How sufficient is traditional coursework in preparing students of color for complex roles and responsibilities in real-world planning and implementation of an evaluation, especially in settings serving diverse populations? What differences emerge between students of color's expectations based upon classroom training and their experiences based upon fieldwork in a diverse setting? These questions will guide the panelists as they explore student learning through traditional evaluation coursework coupled with a directed evaluation field experience. The panelists, who include the faculty mentor along with the three-person student-led evaluation team (all African American females), will discuss how, although the range of issues discussed in the classroom provided a foundation for preparing students for fieldwork, the subsequent 15-week practical experience (coupled with weekly in-class meetings) provided students with a keener understanding of the intricacies of planning and implementing evaluations in a diverse setting.
Planning and Implementing Relevant Evaluation Training for Students of Color: Successes and Hard Lessons Learned
Veronica Thomas,  Howard University,  vthomas@howard.edu
There are ongoing efforts in the evaluation field to attract a more diverse pool of students. This is coupled with efforts to improve the quality of graduate teaching and aid in the dissemination of effective teaching strategies to a broad community of evaluators. This presenter will discuss her 10-year experience as a teacher of evaluation to students of color at a historically Black university and her shift from a teacher-centered to a learner-centered approach. A set of critical questions will be addressed: (1) What kinds of evaluation learnings and experiences would be most impactful for students of color? (2) How can an instructor create courses and practical experiences that will facilitate such experiences? (3) What lessons learned from faculty and students' traditional evaluation coursework and their field experiences can be utilized to build a more robust graduate training program that increases the value and relevance of the profession for students of color?
Planning for Fieldwork: How Coursework Prepared (and Didn't Prepare) the Student-led Team for the Field
Shelia Mitchell,  Howard University,  she714@aol.com
Janine Jackson,  Howard University,  teach15980@aol.com
The first step in any evaluation study is to plan and design the evaluation. Further, designing an evaluation that takes place in settings that serve diverse populations calls for even more careful attention. This presentation will highlight the student-led evaluation team's reflections on how courses in evaluation theory and methods prepared, and failed to prepare, them for some of the intricate issues they faced in co-constructing the evaluation with stakeholders and designing the methodology for their project. Differences between course-based and field-experience-based perceptions will be elaborated. The student-led team's strategies for determining what to study, whom to study, and what type of evidence was required to meet the needs of the evaluand and key stakeholders, as well as for identifying resource needs, will also be discussed.
The Practical Experience: Successes, Challenges, and Things in Between
Janine Jackson,  Howard University,  teach15980@aol.com
Shelia Mitchell,  Howard University,  she714@aol.com
This presentation will focus upon the successes, challenges, and expected and unexpected pitfalls faced by the student-led team in actually conducting the evaluation, writing the report, and disseminating the results. Team members' initial apprehensions about working in the field will be highlighted, as well as the strategies they utilized to overcome these apprehensions. The benefits of closely working with a faculty mentor, having ongoing in-class sessions during the field experience, and working within a small group will be discussed.
Where Do We Go From Here? Life After Graduate Coursework and Field Experiences
Shelia Mitchell,  Howard University,  she714@aol.com
Janine Jackson,  Howard University,  teach15980@aol.com
In this paper, the student-led evaluation team will explore how the evaluation in-class and field experiences have shaped their professional development. Further, the presenters will discuss their current view of the place of evaluation in an increasingly diverse society and their role in this evolving process. The student-led evaluation team will also reflect upon how they will continue to enhance their own growth and development in the evaluation field through non-academic credit professional development, active involvement in professional associations (such as the American Evaluation Association), and collaborating with practicing evaluators in diverse settings.

Session Title: Evaluation of Educational Outcomes: Experience of Jordan
Multipaper Session 614 to be held in Peale Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Husein Abdul-Hamid,  University of Maryland University College,  habdul-hamid@umuc.edu
Discussant(s):
Harry Patrinos,  World Bank,  hpatrinos@worldbank.org
Abstract: Jordan has made huge investments in education reforms. The latest education reform (2003-2007) was designed to transform the education system to focus on the skills necessary for a knowledge economy. Curriculum and assessment tools have been enhanced to focus on learning outcomes, and new textbooks and supplemental materials have been developed. Technology has been added to all schools in the Kingdom. The proposed session will focus on how three evaluation studies are helping to shape education policies and reform initiatives.
Performance of Jordan in International Assessment
Khattab Abdu-Libdeh,  Jordan National Center For Human Resources Development,  klebdeh@nchrd.gov.jo
Jordan has been participating in the Trends in International Mathematics and Science Study (TIMSS) since 1999. In this paper we will present the performance of Jordan in both 1999 and 2003. Changes in performance over time as well as determinants of learning will be covered. Analysis was conducted using Hierarchical Linear Models (HLM). Jordanian eighth-grade students perform relatively well in science but still lag behind other countries in mathematics. While socioeconomic and family characteristics related to education continue to have the biggest influence on student achievement, between-school differences in achievement are associated with school authority (public versus private), school location (urban versus rural), and school climate (including teacher morale). Gender is also a significant factor in achievement, to the advantage of girls. School resources and teacher qualifications were also investigated and tend to have a positive influence on achievement.
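For readers less familiar with multilevel modeling, the short Python sketch below shows the general shape of a two-level analysis of the kind described above, with students nested within schools. It is purely illustrative and hedged: the data file, variable names, and model specification are hypothetical, not the authors' actual analysis.

    # Hypothetical two-level (students-within-schools) model of achievement,
    # in the spirit of the HLM analysis described above.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("timss_students.csv")  # hypothetical columns: math_score, ses, urban, girl, school_id

    # A random intercept for each school captures between-school variance;
    # fixed effects capture student- and school-level predictors.
    model = smf.mixedlm("math_score ~ ses + urban + girl", data=df, groups=df["school_id"])
    result = model.fit()
    print(result.summary())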
An Evaluation of the Discovery Schools' Experiment
Khaled El-Qudah,  Jordan National Center For Human Resources Development,  kqudah@nchrd.gov.jo
A pilot of 100 K-12 schools, named discovery schools, was organized to assess the value added by the use of information and communication technologies in instruction. The purpose of this evaluation study was to provide a relatively complete portrait of the status of learning and teaching currently existing in discovery schools, with attention to identifying positive and negative features of that status. It is hoped that the findings of this study will lead the Ministry of Education and other stakeholders to carefully review the process of educational change at these schools. A multi-case study method was used to answer questions related to what is used, frequency of use, how it is used, proficiency of use, and the value added to the overall learning environment. Perceptions were also captured regarding the use of information and communication technologies in teaching and learning.
Assessment of Knowledge Economy Skills in Jordan
Husein Abdul-Hamid,  University of Maryland University College,  habdul-hamid@umuc.edu
As part of the evaluation framework for the Education Reform for Knowledge Economy (ERfKE) project, a national assessment of knowledge economy skills was developed to measure change over time. The instrument was constructed for students in three grades: fifth, seventh, and eleventh, representing the three education cycles. The test measures students' ability to apply concepts to deal with and solve real-life situations in three content domains: reading, mathematics, and science. The test was administered at 400 schools representative of all schools in the Kingdom, and about 11,000 students participated. Survey data were also collected from students, teachers, and school principals. In this paper we will discuss the achievement levels and the determinants of learning based on Generalized Least Squares (GLS) analysis. The study offers policy recommendations related to school resources, environment, and classroom activities, controlling for socioeconomic status, school location, and school authority.
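As an illustration of the kind of Generalized Least Squares analysis mentioned above, the sketch below fits a GLS regression with a simple assumed error-variance structure. It is a hedged, generic example with hypothetical file and column names, not the study's actual model.

    # Hypothetical GLS fit allowing error variance to differ by school authority.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("erfke_assessment.csv")  # hypothetical columns: score, ses, urban, public, activities
    X = sm.add_constant(df[["ses", "urban", "public", "activities"]])
    y = df["score"]

    # First pass: OLS residuals are used to estimate group-specific error variances.
    ols_fit = sm.OLS(y, X).fit()
    group_var = df.groupby("public")["score"].transform(lambda s: ols_fit.resid[s.index].var())

    # GLS with a diagonal error covariance (one variance per school-authority group).
    gls_fit = sm.GLS(y, X, sigma=group_var.to_numpy()).fit()
    print(gls_fit.summary())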

Session Title: Advocacy Evaluation: Practical Research Findings
Demonstration Session 615 to be held in Adams Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Lily Zandniapour,  Innovation Network Inc,  lzandniapour@innonet.org
Johanna Gladfelter,  Innovation Network Inc,  jgladfelter@innonet.org
Jackie Williams Kaye,  The Atlantic Philanthropies,  j.williamskaye@atlanticphilanthropies.org
Thomas Kelly,  Annie E Casey Foundation,  tkelly@aecf.org
Abstract: Nonprofit advocacy evaluation is a field of growing interest demanding practical, usable information. As part of its Advocacy Evaluation Project, Innovation Network has conducted original research and will present the following in this demonstration: (1) advocacy strategies and capacities that have been proven effective; and (2) common short-term and intermediate outcomes that can be used as milestones toward success. Short-term and intermediate-term milestones are important in assessing progress, since a 100% policy change victory is rare and heavily influenced by external factors beyond the control of the advocacy organization, and many advocacy campaigns have a longer duration than a typical grant. This demonstration will share findings from our publication about the sector's current use of and needs related to advocacy evaluation tools and approaches. This understanding is crucial to developing advocacy evaluation resources, building the capacity of nonprofits engaging in advocacy to measure and evaluate their success, and moving the field of advocacy evaluation forward.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluation in Higher Learning Curriculum Development
Roundtable Presentation 616 to be held in Jefferson Room on Friday, November 9, 1:55 PM to 3:25 PM
Presenter(s):
Maria Clark,  United States Army Command and General Staff College,  maria.clark1@conus.army.mil
Rhoda Risner,  United States Army Command and General Staff College,  rhoda.risner@us.army.mil
Abstract: Evaluation processes during curriculum development help educators prepare course plans directed at adult learners' needs. The evaluation process allows the educator to get feedback from adult learners at every step of curriculum development. This session will provide a method for utilizing evaluation throughout curriculum development to best meet the needs of adult learners.
Roundtable Rotation II: Enhancing a Masters in Evaluation Curriculum by Learning From Consumers of Evaluation
Roundtable Presentation 616 to be held in Jefferson Room on Friday, November 9, 1:55 PM to 3:25 PM
Presenter(s):
Sharon Ross,  Founder's Trust,  sross@founderstrust.org
Gibbs Kanyongo,  Duquesne University,  kanyongog@duq.edu
Abstract: Duquesne University's Department of Foundations and Leadership undertook a project to review the curriculum for the M.S.Ed. in Program Evaluation in order to improve the services offered to evaluation students. The underlying idea behind this study was to learn from key stakeholders what their evaluation needs are, what evaluator qualifications are valued, and what other universities are doing to meet these demands. One of the most interesting aspects of this study was the ability to add a new voice to the process by surveying local nonprofit organizations that are consumers of evaluation and may one day hire these student evaluators. This paper highlights the value and challenges of adding these new voices to a curriculum review.

Session Title: Lessons From the Field in Building Evaluative Capacity of Restoration Activities: A Field Trip of the Herring Run Watershed Association Project
Demonstration Session 617 to be held in Washington Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Environmental Program Evaluation TIG
Presenter(s):
Matthew Birnbaum,  National Fish and Wildlife Foundation,  matthew.birnbaum@nfwf.org
Amanda Bassow,  National Fish and Wildlife Foundation,  amanda.bassow@nfwf.org
Brian Kittler,  National Fish and Wildlife Foundation,  brian.kittler@nfwf.org
Abstract: This is a continuation of a field trip that will be held offsite. It begins at 11:15 and continues until 1:45. You must sign up in advance; to do so, please contact Katherine Dawes at dawes.katherine@epa.gov. The health of the Chesapeake Bay watershed has been declining due to an over-influx of nutrients from consumption and production patterns. In response, Congress established ambitious goals for reducing nutrient levels and restoring this important watershed. Despite meeting many of the established mid-term target performance indicators, the watershed's health continues to deteriorate, according to a recent GAO report. The Baltimore area is a major part of the Chesapeake Bay watershed and the focus for many of the most ambitious conservation projects. Participants will visit the highly urbanized Herring Run watershed in northeast Baltimore. The Herring Run Watershed Association is proactively engaging local residents in protecting and restoring their watershed through stream restoration and innovative stormwater management. The demonstration provides an exemplar case study of the challenges facing those responsible for innovations in watershed conservation and evaluation at the site level.

Session Title: Variance Explained and Explaining Variance: An Overview of Variance in General, in the General Linear Model, and in Statistical Programs
Panel Session 618 to be held in D'Alesandro Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Julius Najab,  George Mason University,  jnajab@gmu.edu
Abstract: Variance is a crucial concept in common evaluation analytic procedures. Unfortunately, the concept is elusive to many researchers and evaluators. Most statistical texts and courses do not emphasize how different statistical models handle variance. Our aim is to examine and explain variance for evaluators unfamiliar with the concept. These three presentations will describe and explain the relevance and importance of variance. The first presentation covers variance in various distributions. The second presentation applies the basics of variance to the General Linear Model, with specific regression, Analysis of Covariance, and Repeated Measures Analysis of Variance examples. The final presentation will examine how different statistical programs utilize variance in different analyses. The discussions of variance are oriented toward a comprehensive description, without advanced technical jargon.
Variance in Distributions
Julius Najab,  George Mason University,  jnajab@gmu.edu
Variance in the data is rarely the focus of traditional statistical courses and texts, yet assumptions about the variability in data distributions underlie every statistical analysis. Researchers frequently assume a normal distribution (the bell curve) in the data. The normal distribution implies a specific pattern of data variability, and it places restrictions on the subsequent analyses and inferences researchers should make. I intend to describe the concept of variance and provide examples from various data distributions. By the end of the presentation, non-quantitative evaluators should be able to understand variance conceptually.
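As a small, generic illustration of the point above (not material from the presentation), the Python sketch below draws samples from a few distributions with the same center and compares their variances.

    # Same mean, different spread: variance is what distinguishes these samples.
    import numpy as np

    rng = np.random.default_rng(0)
    samples = {
        "normal(0, 1)": rng.normal(0.0, 1.0, 10000),
        "uniform(-2, 2)": rng.uniform(-2.0, 2.0, 10000),
        "laplace(0, 1)": rng.laplace(0.0, 1.0, 10000),
    }
    for name, x in samples.items():
        print(f"{name}: mean = {x.mean():.2f}, variance = {x.var(ddof=1):.2f}")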
Variance Within the General Linear Model
Susan Han,  George Mason University,  shan8@gmu.edu
Data analytic procedures differ in the way that they allow researchers to partition variance due to an effect or to error. The General Linear Model (GLM) underlies various univariate and multivariate data analytic procedures; it is an overarching model encompassing multiple regression, Analysis of Covariance, Repeated Measures Analysis of Variance, and many others. This presentation describes how the GLM utilizes the variance in the data. These common procedures in evaluation each deal with variance uniquely, and those differences are important to the interpretation of results.
To Choose or Not to Choose: Examining the Generalized Linear Model (GLM) Default Options in R, Statistical Package for the Social Sciences (SPSS), and Statistical Analysis System (SAS)
Caroline Wiley,  University of Arizona,  crhummel@u.arizona.edu
Understanding how different statistical packages treat variance is pertinent to ensuring that the obtained results reflect what the evaluator ultimately draws inferences about; the inferences drawn ought to match the specifications of the model. Developing a deeper understanding of the options available will help evaluators specify more accurate models. Beyond the multiple ways a given analysis can handle variance, different statistical packages may, and often do, treat the same analysis differently, both statistically and conceptually. It is therefore important to understand both the default and the available options in the package of choice, and how the choices one makes or does not make affect the results obtained.
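To make the point concrete, the hedged Python sketch below runs the same two-factor model under two sums-of-squares conventions; the data file and factor names are hypothetical, and the comments describe commonly cited package defaults rather than the presenter's findings.

    # The same model, two sums-of-squares conventions.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.read_csv("program_outcomes.csv")  # hypothetical columns: outcome, group, site

    # Sum-to-zero contrasts so that Type III (partial) tests are meaningful.
    model = smf.ols("outcome ~ C(group, Sum) * C(site, Sum)", data=df).fit()

    # With unbalanced data, sequential (Type I) and partial (Type III) sums of
    # squares can yield different F tests; packages differ in their defaults
    # (SPSS and SAS commonly report Type III, R's aov() reports sequential).
    print(anova_lm(model, typ=1))
    print(anova_lm(model, typ=3))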

Session Title: Treating Data According to Purpose: Frequentist Versus Bayesian Analyses
Skill-Building Workshop 619 to be held in Calhoun Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
J Michael Menke,  University of Arizona,  menke@u.arizona.edu
Abstract: In situations where clinical trials or long-term repeated sampling cannot be conducted or assumed, classical research design and analysis are impossible. Evaluation research and program evaluation designs may help improve inference, but analytic methods are also essential to good inference and better-informed decisions. By comparing classical and Bayesian analytic techniques on data that are normally distributed versus data that deviate from normality, we may see how significance testing, parameter estimation, and other inferences may inform decisions.
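As a small, generic illustration of the contrast the workshop addresses (an assumption-laden teaching sketch, not the presenter's material), the Python example below analyzes the same simulated two-group data with a frequentist t-test and with a simple conjugate Bayesian posterior for the mean difference.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    treated = rng.normal(0.4, 1.0, 60)   # hypothetical outcome data
    control = rng.normal(0.0, 1.0, 60)

    # Frequentist: two-sample t-test and p-value.
    t, p = stats.ttest_ind(treated, control)
    print(f"t = {t:.2f}, p = {p:.3f}")

    # Bayesian: with a flat prior and the variances treated as known, the posterior
    # for the mean difference is approximately Normal(observed difference, se^2).
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
    posterior = stats.norm(diff, se)
    print("95% credible interval:", posterior.interval(0.95))
    print("P(difference > 0 | data):", 1 - posterior.cdf(0.0))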

Session Title: Partnering With and Learning From Indigenous Peoples
Panel Session 620 to be held in McKeldon Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Chair(s):
Donna Mertens,  Gallaudet University,  donna.mertens@gallaudet.edu
Discussant(s):
Donna Mertens,  Gallaudet University,  donna.mertens@gallaudet.edu
Abstract: Members of indigenous communities in many parts of the world have a legacy of being pushed to the margins and denied access to the privileges of colonizing powers. The mainstream resistance to the recognition and legitimacy of indigenous peoples parallels in significant ways the resistance towards members of other groups whose gender, disability, race/ethnicity, or other dimensions of diversity have been used to award them less privilege in our society. As indigenous peoples have made their presence known in the mainstream evaluation world, they have raised issues related to theory and practice in evaluation that provide important learning opportunities for evaluators. The focus of this panel is on the insights gained from partnering with and learning from members of indigenous communities in the field of evaluation.
Culling: Tenets of Success - From Hawaiian Promising Practices in Education - Assets Based Inquiry, a Community Based Process
Kanani Aton,  Hawaiian Education Services,  k-aton@hawaii.rr.com
Fiona Cram,  Katoa Ltd,  finoac@katoa.net.nz
Morris Lai,  University of Hawaii,  lai@hawaii.edu
Alice Kawakami,  University of Hawaii,  alicek@hawaii.edu
Native Hawaiian students make up 26% of the overall population in Hawaii's public schools. Specific strategies for improving the quality of their learning are being collaboratively designed by the Hawaiian Education Community and the State Department of Education in an initiative called Na Lau Lama. In 2006, this statewide effort focused on identifying characteristics of Hawaiian best educational practices from the community using Assets Based Inquiry. This approach was adapted from Appreciative Inquiry, with the intent of building strategy around what works rather than trying to fix what doesn't. The outcomes include identification of 'Tenets of Success' describing promising Hawaiian education practices with regard to assessment, culture-based education, professional development, and strengthening families and community. These outcomes led to current planning to pilot promising practices in the broader DOE system. Challenges include the DOE system's lukewarm response to the usefulness of the 'Tenets' thus far.
Listening and Learning: A Canadian Perspective on Evaluation in Aboriginal Education Circles
Linda Lee,  Proactive Information Services Inc,  linda@proactive.mb.ca
The Canadian province of Manitoba has one of Canada's largest Aboriginal populations (First Nations peoples, Métis and Inuit). The challenge for educational institutions, from schools to the Aboriginal Education Directorate (ministry of education), is to balance the pressure to conduct evaluations that provide data credible to the 'majority' population (who hold the institutional and systemic power) with the need for Aboriginal communities to understand and improve education in culturally appropriate and meaningful ways. This presentation will address not only the challenges inherent in this endeavor, but also will explore the approaches that Aboriginal communities have used to address this tension. When evaluators working in other contexts listen to the learnings to be gleaned from the experiences of Manitoba's Aboriginal communities, they have the opportunity to enhance their own evaluation practice, particularly as it applies to the empowerment of other marginalized communities.
Transformative Evaluation in Deafness: Learning From Indigenous Peoples
Raychelle Harris,  Gallaudet University,  raychelle.harris@gallaudet.edu
Heidi Holmes,  Gallaudet University,  heidi.holmes@gallaudet.edu
People who are deaf are quite heterogeneous in terms of level of hearing loss, gender, race/ethnicity, and other dimensions of diversity such as indigenous people status. There is a group of people who have a cultural identification with deafness who use the capital letter D to denote their status as Deaf people. This culturally Deaf group recognizes the power differentials associated with being able to hear. They also recognize the power associated with mode of communication (use of sign language or auditory/verbal language). Harris and Holmes will discuss issues of power, language, and ethics in evaluation contexts in the Deaf community, building on what they have learned from the indigenous communities scholarship.
De-colonizing and Cleaning Our Cultural Lenses: Preliminary Steps
Pauline Brooks,  Brooks Cross Cultural/International Evaluation, Research and Racism Consulting,  pbrooks_3@hotmail.com
For half a millennium, Western nations' relationships with indigenous peoples have been largely relationships of conquest and domination. Rarely considering indigenous populations as equals, many Western nations evolved cultures (later including scientific cultures) that incorporated various negative stereotypes, misinformation and even lies concerning those whom they subordinated. Subjugated indigenous populations were very often People of Color, and White racism was a major (though not the only) subjugating force that, up to the present, has contributed to deep-seated culturally accepted mainstream biases concerning indigenous populations and their relationships with various dominating Western White cultures. Given this history, removing and minimizing racial and other biases are necessary steps for enabling mainstream Western researchers/evaluators to work effectively with, and to the benefit of, indigenous people and communities.
Building Evaluation Capacity Through Partnerships With Community-based Organizations Serving Minorities With Disabilities
Yolanda Suarez-Balcazar,  University of Illinois, Chicago,  ysuarez@uic.edu
Tina Taylor-Ritzler,  University of Illinois, Chicago,  tritzler@uic.edu
Edurne Garcia,  University of Illinois, Chicago,  edurne21@yahoo.com
Community-based organizations (CBOs) in this country provide a variety of services to individuals with diverse social problems and common predicaments. Due to a sharp decrease in funding sources, coupled with growing skepticism in the general public regarding the efficiency of social programs, these agencies are experiencing pressure from stakeholders to engage in program evaluation. Presenters will discuss the building of partnerships with CBOs serving minorities with disabilities in order to create capacity for evaluation and create learning communities. Lessons learned from scholars working in indigenous communities facilitated the identification of challenges and solutions related to mainstreaming evaluation activities within CBOs' daily work and sustaining that capacity over time. Presenters will emphasize the use of empowerment and participatory strategies for building capacity and the evolving role of the researcher, from coach to facilitator to teacher.

Session Title: Educating Educators to Support Lesbian, Gay, Bisexual, and Transgender Students: Documenting Needs, Exploring Strategies
Multipaper Session 621 to be held in Preston Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Chair(s):
Barbara Radecki,  University of Nevada, Las Vegas,  globarrvers@cox.net
Preparation of and Provision by School Staff of Health and Mental Health Services to Gay, Lesbian, Bisexual and Questioning Students
Presenter(s):
Richard Sawyer,  Academy for Educational Development,  rsawyer@aed.org
Abstract: A national-level study examined training and educational needs of high school psychologists, counselors, social workers, and nurses for providing health and mental health services to gay, lesbian, bisexual and questioning (GLBQ) students. Within a cross-sectional design, representative samples (n = 941) from national-level professional membership organizations completed a mail survey. Participants indicated that the extent to which they had received postsecondary education and on-the-job preparation to address health and mental health needs of GLBQ students was relatively low; and also indicated they should be providing more services to GLBQ students than was occurring. Analyses and presentation will focus on differences identified among health and mental health provider groups. Results can be used to: 1) provide targeted training and resources; 2) inform the work of national organizations; and 3) increase national-level awareness and support regarding the health and mental health needs of GLBQ students.
Visibly Safe: Setting Standards of Performance for an Evaluation of a University Lesbian, Gay, Bisexual, and Transgender (LGBT) Safe Zone Program
Presenter(s):
Virginia Dicken,  Southern Illinois University, Carbondale,  vdicken@siu.edu
Abstract: Safe Zone programs are designed to improve campus climate for lesbian, gay, bisexual, and transgender (LGBT) people by providing training on LGBT concerns and issuing placards to those trained. The program at Southern Illinois University was evaluated in 2005. After a review of relevant literature, four standards of performance were identified: 1. Regular training on LGBT issues, 2. Change in knowledge, attitudes, and behaviors, 3. Visible placards, and 4. Assurance that those who display the placards are truly “safe.” Results showed that placards are highly concentrated in a few areas, and it was often unclear who had received training. Those that completed training did not have significantly higher scores on a measure of knowledge and attitudes when compared to untrained individuals. Those who displayed a placard, however, showed higher scores. Recommendations included: 1) Targeted training in underrepresented areas, 2) Redesigned placards to include trained person's name, 3) Follow-up trainings.
Educating the Educator: A Theory-based Evaluation of a Training Program on Supporting Lesbian, Gay, Bisexual, and Transgender Students and Addressing Homophobia in K-12 Schools
Presenter(s):
Emily Greytak,  Gay, Lesbian and Straight Education Network,  egreytak@glsen.org
Joseph Kosciw,  Gay, Lesbian and Straight Education Network,  jkosciw@glsen.org
Abstract: Many training programs seek to build educators' capacity to address bias in schools. However, relatively little information exists about the effectiveness of these programs, particularly programs about bias related to sexual orientation and gender identity/expression. The Gay, Lesbian, and Straight Education Network's educator training workshops aim to increase educators' ability to support lesbian, gay, bisexual, and transgender students and address homophobia in K-12 schools. This paper describes a collaborative process between education and research staff to develop a theory-based evaluation assessing the workshops' effectiveness. The paper will also detail the challenges around implementation of such an evaluation and share the findings about the trainings' impact. The paper will explore the value of designing an evaluation grounded in program theory for both program and research purposes, and will detail ways in which the findings can be used both to further understand educators' attitudes, beliefs, and practices and to improve training workshops.

Session Title: Advancing Organizational Learning Through the Study and Development of Diversity
Multipaper Session 622 to be held in Schaefer Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Multiethnic Issues in Evaluation TIG
Chair(s):
Molly Engle,  Oregon State University,  molly.engle@oregonstate.edu
Discussant(s):
Katrina Bledsoe,  Planning, Research and Evaluation Services Associates Inc,  katrina.bledsoe@gmail.com
Helping the Helpers: The Excellence Through Diversity Institute as an Assessment-Savvy Leadership Development Initiative
Presenter(s):
Hazel L Symonette,  University of Wisconsin, Madison,  hsymonette@odos.wisc.edu
Abstract: The University of Wisconsin Excellence Through Diversity Institute (EDI) is an intensive train-the-trainers/facilitators workforce learning community organized around appreciatively-framed and culturally-grounded evaluation processes. It focuses on generative evaluative thinking and reflective practice for faculty, classified staff, academic staff and administrators. EDI helps participants discover and bring forward their *Best Self* in full voice to do their best learning, their best engaging and their best work so that they can better help others do the same while facilitating the university's development of such transformational processes. As a social-justice grounded leadership development resource for many campus and community initiatives, EDI helps faculty, staff and administrators to expand their diversity-grounded developmental evaluation capacities and their border-crossing bridge-building capacities. EDI remains a still evolving project-in-process as it strives for excellence through cultivating authentically inclusive and vibrantly responsive teaching, learning and working environments that are conducive to success for all.
The Quality Assurance Team (QAT): Developing Mechanisms for Multiple Voices to be Heard in Transdisciplinary Multi-site Community Research
Presenter(s):
Leah Neubauer,  DePaul University,  lneubaue@depaul.edu
Gary Harper,  DePaul University,  gharper@depaul.edu
Audrey Bangi,  University of California, San Francisco,  audrey.bangi@ucsf.edu
Jonathan Ellen,  Johns Hopkins University, 
Abstract: Previous researchers have noted that multi-site transdisciplinary research endeavors present particular sustainability challenges as they attempt to link multiple research centers and various organizations across geographic and cultural settings. As part of a federally funded multi-site, multi-stage HIV/AIDS community research project, the Quality Assurance Team (QAT) was created to structure and facilitate the inclusion of multiple voices from multiple disciplines. To facilitate this inclusion, the QAT viewed the transdisciplinary research team as an organization, applied organizational development and program evaluation concepts and theories, and created a process-related internal evaluation feedback system which identified organizational deficits and strengths and helped to correct obstacles that could inhibit effective team and project functioning. To achieve effectiveness, the responsive system addressed unequal power structures, pinpointed systems of power imbalance and oppression, and promoted supportive and empowering relationships among members.
The Role of Evaluation in Advancing Organizational Change: A Case Study in Diversity
Presenter(s):
Gwen M Willems,  University of Minnesota,  wille002@umn.edu
Mary Marczak,  University of Minnesota,  marcz001@umn.edu
Abstract: Organizations are asked to continually evolve in a range of areas. This paper examines the challenges in crafting evaluation methods and questions that effectively examine the gap between where an organization currently is on a topic and where it wants to be. One of those areas, diversity, is increasingly recognized as critical to strengthening organizations and their culture. Cultural and demographic shifts, along with increased opportunities to tap the contributions of communities of color, are transforming organizational staffing and the needs of constituents. The presenters will share the conceptualization and methods of a case study evaluating the institutional commitment to diversity of 40 service-learning organizations. They will discuss drawbacks and draw generalizable lessons about how to obtain useful data on the discrepancies between current and ideal status in an organization.

Session Title: How Should We Measure Child Outcomes in Early Childhood Evaluations and Accountability Efforts?
Think Tank Session 623 to be held in Fairmont Suite on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Donna Spiker,  SRI International,  donna.spiker@sri.com
Kathy Hebbeler,  SRI International,  kathleen.hebbeler@sri.com
Discussant(s):
Shari Golan,  SRI International,  shari.golan@sri.com
Lauren Barton,  SRI International,  lauren.barton@sri.com
Michelle Woodbridge,  SRI International,  michelle.woodbridge@sri.com
Abstract: This Think Tank will examine the relative strengths and weaknesses of traditional direct standardized assessments and newer authentic assessments to evaluate improvements in development and well-being in large-scale early childhood evaluations and state accountability efforts. The session will begin with an overview of the characteristics of the two approaches and their relative strengths and weaknesses with regard to issues such as validity, reliability, cultural competence, cost, feasibility, and promotion of learning among families, providers, researchers, and policymakers. Breakout groups will discuss participants' experiences with each approach and explore what kind of assessment is best suited for various types of program evaluation. The groups will then develop a set of recommendations or considerations for appropriate uses of standardized versus authentic assessments in early childhood evaluations and accountability systems to share with the full group.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Building Evaluation Capacity in a State Maternal and Child Health Agency
Roundtable Presentation 624 to be held in Federal Hill Suite on Friday, November 9, 1:55 PM to 3:25 PM
Presenter(s):
Nurit Fischler,  Oregon Public Health Division,  nfischle@dhs.state.or.us
Collette Young,  Oregon Public Health Division,  collette.m.young@state.or.us
Abstract: Over the past year, the Oregon Public Health Division's Office of Family Health (OFH) has launched a process to build its evaluation capacity and integrate evaluation into the planning, design, implementation, and funding of its Maternal and Child Health programs. A variety of programs including WIC, family planning, immunizations, and perinatal and child health are included in this effort. An interdisciplinary Rapid Improvement Process team surveyed staff and management in order to assess current practices and evaluation capacity, and then proposed a series of capacity-building recommendations, which managers used to develop action steps. The recommendations are currently being implemented through the combined efforts of management, a cross-office work team, and a contracted evaluation specialist. The process brought to light the diversity in needs and skills related to evaluation among both staff and management. Successes and challenges of this effort raise interesting questions about how to facilitate organizational learning, build a culture of evaluation, and develop systems, supports, and guidelines for evaluation in a state organization with diverse programs and funders.
Roundtable Rotation II: Facilitating Collaborative Evaluation Projects for Building and Sustaining Evaluation Capacity: Reflections and Lessons Learned
Roundtable Presentation 624 to be held in Federal Hill Suite on Friday, November 9, 1:55 PM to 3:25 PM
Presenter(s):
Ellen Taylor-Powell,  University of Wisconsin,  ellen.taylor-powell@ces.uwex.edu
Matthew Calvert,  University of Wisconsin,  matthew.calvert@ces.uwex.edu
Abstract: Collaborative evaluation efforts are suggested as a way to build and sustain evaluation learning (Arnold, 2006; Huffman, et al., 2006). Working collaboratively is a common scenario, and one component, of the University of Wisconsin-Extension evaluation capacity building work. We will present two recent statewide collaborative evaluation projects from the 4-H Youth Development Program area – the Arts and Communication program evaluation and a study of the youth development educator's role in promoting youth/adult partnerships. What have we learned, and what are we learning, from these cases in terms of building learning communities and sustaining evaluation capacity? What is peculiar to each case that facilitates or hinders individual and organizational learning? As a roundtable, we will present the projects and answer these two questions through a critical review of our experience. Then, we will pose targeted questions to the participants to solicit their insights and lessons learned. This roundtable will begin to define indicators to use in evaluating our capacity-building work.

Session Title: Evaluation in Non-traditional and Informal Learning Contexts
Multipaper Session 625 to be held in Royale Board Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Tom McKlin,  Georgia Institute of Technology,  tom.mcklin@gatech.edu
Anane Olatunji,  George Washington University,  dr_o@gwu.edu
Students Gobble Blood Oranges for Harvest of the Month
Presenter(s):
Andy Fourney,  Network for a Healthy California,  andy.fourney@cdph.ca.gov
Andrew Bellow,  Network for a Healthy California,  andrew.bellow@cdph.ca.gov
Sharon Sugerman,  Network for a Healthy California,  sharon.sugerman@cdph.ca.gov
Helen Magnuson,  Network for a Healthy California,  helen.magnuson@cdph.ca.gov
Kathy Streng,  Network for a Healthy California,  kathy.streng@cdph.ca.gov
Abstract: Students attending schools with funding from the California Nutrition Network (Network) taste fruits, like blood oranges, as part of a nutrition education Toolkit to increase fruit and vegetable consumption. One “harvest” item is featured each month. Harvest of the Month (HOTM) is a theory-based nutrition education Toolkit designed to increase fruit and vegetable consumption in low-resource schools. It includes elements that connect the classroom, cafeteria, and community in a synergistic way to augment consumption. A 2005-06 evaluation of 1,322 primarily 4th and 5th grade students showed significant changes in knowledge, preferences, self-efficacy, and consumption. The evaluation methodology was deemed practical and feasible. It led to accurate and useful results that were used to refine nutrition education strategies and justify funding. The sound methods led to findings indicating that the change in outcomes can be partially attributed to HOTM.
Learning From School Evaluation: Leadership at a Large High-School in a Changing Community
Presenter(s):
Laurie Moore,  Mid-continent Research for Education and Learning,  lmoore@mcrel.org
Abstract: Evaluators from Mid-continent Research for Education and Learning (McREL), in collaboration with a public school district in the central United States, conducted an evaluation of high-school leadership. The study provided valuable lessons learned in strengthening the leadership evaluation services offered to clients. Sharing experiences and offering guidelines for conducting similar evaluations may help evaluators make practical decisions when designing services, particularly as these relate to methods and fiduciary constraints. Our evaluation was intended as a catalyst for high school reform. As a result, numerous stakeholders were concerned about the study's impact on students, parents, and faculty, as well as on school and district administrators. Presenters will discuss how these concerns affected the design and implementation of the evaluation. Attendees will receive a handout of useful guidelines to consider in their own work. Although the evaluation was conducted in a public school, these guidelines are relevant to small-scale evaluations of leadership in a broad range of settings.
Evaluating a Museum-Community Science Collaboration
Presenter(s):
Colleen Manning,  Goodman Research Group Inc,  manning@grginc.com
Abstract: The proposed paper uses an evaluation of a long-term NSF-funded museum-community science collaboration as a lens through which to consider broader issues of evaluation methods and practices. Methodological and practical issues addressed include responding to NSF's priorities, the role of evaluation in supporting a replicable national model for community partnerships and training peer presenters, involving program participants in data collection, and using data for both evaluative and administrative purposes. In addition, the paper discusses using families as units of analysis in evaluation.
Children Learning Through Fun: Evaluation of a University-sponsored Children's Festival
Presenter(s):
Heather M Scott,  University of South Florida,  hscott@coedu.usf.edu
Melinda Hess,  University of South Florida,  mhess@tempest.coedu.usf.edu
James Coraggio,  University of South Florida,  coraggio@coedu.usf.edu
Teresa Chavez,  University of South Florida,  chavez@coedu.usf.edu
Tina Hohlfeld,  University of South Florida,  thohlfeld@coedu.usf.edu
Abstract: In this age of accountability, informal learning environments are likely to be overlooked as educational venues. Furthermore, there is limited information about how to appropriately evaluate these initiatives. This study addresses that gap by evaluating a children's festival sponsored by a college of education in a large university with an extensive teacher preparation program, and provides a framework for similar evaluations. The festival was geared toward children from pre-kindergarten (3 years old) up to high school (15 years old). The evaluation addressed both the process (planning the event) and the product (the event itself) of the festival. Findings indicate strong satisfaction among those planning and sponsoring events as well as attendees. Although the evaluation process was effective and provided valuable information to organizers, improvements to the process were identified that could enhance future evaluations.

Session Title: Evaluating Arizona's School-based Tobacco Prevention Program: Lessons Learned in Outcome Evaluation
Panel Session 626 to be held in Royale Conference Foyer on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Frederic Malter,  University of Arizona,  fmalter@email.arizona.edu
Abstract: Various tobacco prevention curricula are administered in Arizona schools in grades four through eight through the Tobacco Education and Prevention Program (TEPP). Yet the intensive school-based prevention program (in effect since 1996 and funded through tobacco tax revenue) has not been subjected to a comprehensive quantitative evaluation. The panelists will outline the rationale and implementation of the curricula and proceed with demonstrating a quasi-experimental approach to obtaining effectiveness estimates. Our approach will draw from different methodological toolboxes (e.g., statistically derived control groups, integration of single-indicator measures into multivariate measures) to produce meaningful findings. The presenters will show ways of translating scientific findings into actions tailored to programmatic needs and goals. Acknowledging that the quantitative evaluation of a multi-site, multi-curriculum prevention program remains a challenging endeavor, the panel will provide their audience with straightforward examples that can be applied to their evaluation practice.
The Challenges of Evaluating a Tobacco Prevention Program: Curricula, Coverage and Why a Program Could Make a Difference
Arian Sunshine Coffman,  University of Arizona,  scoffman@email.arizona.edu
Arizona's Tobacco Education and Prevention Program (TEPP) provides intensive school-based tobacco prevention programming to public schools serving grades four through eight, using a variety of curricula. A description of past and current tobacco prevention programming will be provided. A focus will be placed on differences in curriculum content and why the structure of a curriculum could be expected to affect future intentions to use tobacco. Preliminary results regarding post-class satisfaction and estimates of treatment fidelity will be given. Geographical and administrative coverage were examined, and results indicate vast differences with respect to geographic location and school district. Symmetry of need and provision was assessed, with findings indicating that there may be an imbalance between the schools with the highest prevention needs and those receiving the most attention. Additionally, because prevention programs pose unique evaluation challenges, experience in the evaluation of TEPP's prevention programming will be shared.
Synthesizing Data Bases for a Quasi-experimental Study of Program Outcomes and Program Effectiveness
Frederic Malter,  University of Arizona,  fmalter@email.arizona.edu
This presentation will outline how a school-based tobacco use prevention program can be evaluated by combining data from multiple sources. A large statewide youth surveillance system was merged with administrative post-class survey data to obtain statistical comparison groups, i.e., students who never received any prevention curriculum and who can be compared to students who received various levels and kinds of curricula. Quantitative comparisons allowed a first, careful estimation of a dose-response relationship within each curriculum. Outcome measures included actual self-reported use of tobacco products, self-reported behavioral intentions, attitudes, and risk perceptions. Measures of program effectiveness were estimated for increasingly higher-level units of analysis, beginning at the school level and culminating at the state level. Caveats of the methodology, such as the danger of an ecological fallacy and issues with statistically derived controls, will be discussed and future directions highlighted.
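One common way to construct statistically derived comparison groups of the kind described above is propensity-score matching. The hedged Python sketch below illustrates that general idea with hypothetical column names; it should not be read as the presenters' actual procedure.

    # Propensity-score matching as one route to statistically derived controls.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    df = pd.read_csv("merged_surveillance_and_postclass.csv")  # hypothetical merged file
    covariates = ["grade", "gender", "free_lunch", "risk_score"]

    # Estimate each student's probability of having received the curriculum.
    ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["received_curriculum"])
    df["pscore"] = ps.predict_proba(df[covariates])[:, 1]

    treated = df[df["received_curriculum"] == 1]
    pool = df[df["received_curriculum"] == 0]

    # Match each treated student to the untreated student with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(pool[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    controls = pool.iloc[idx.ravel()]

    print("Tobacco use, treated vs. derived controls:",
          treated["uses_tobacco"].mean(), controls["uses_tobacco"].mean())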
What Have We Learned From the Evaluation of the School-based Prevention Program in Arizona? Results From a Quasi-experimental Approach
Mei-kuang Chen,  University of Arizona,  kuang@u.arizona.edu
Our quasi-experimental approach yielded preliminary findings that allow for comparing curricula to each other and to our statistically derived controls. By incorporating measures from a statewide risk-and-protective-factors surveillance system, we were able to identify current and future needs that should affect strategic program directions. Results from merged databases allow us to present details regarding the match between administrative assessments of school environments (e.g., percent of kids receiving free lunches) and how individuals within the same environment perceive their proximal and distal environments. Implications for tobacco prevention programming will be shown. The presentation will share experiences in how evaluating a tobacco prevention program can provide helpful information for improving program development, program implementation, research design, outcome instruments, data analysis, and synthesis of evaluation results. Possibilities for applying what we've learned from the evaluation of this specific prevention program to evaluating preventive interventions in general will be outlined.

Session Title: Evaluating Graduate Education in Health and Medicine
Multipaper Session 627 to be held in Hanover Suite B on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Beverly Parsons,  InSites,  bevandpar@aol.com
Discussant(s):
Beverly Parsons,  InSites,  bevandpar@aol.com
What Physician Competence is Assessed Well by Patient Surveys of Medical Residents?
Presenter(s):
Sue Hamann,  Coastal Area Health Education Center,  sue.hamann@coastalahec.org
Jason Eudy,  Coastal Area Health Education Center,  jason.eudy@coastalahec.org
Abstract: The Accreditation Council for Graduate Medical Education (ACGME) requires graduate medical (residency) training programs to evaluate the achievement of specific educational goals. The ACGME identified six areas of competence in which medical residents are to be proficient by the end of their graduate training: patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice. For three years, we have collected patient data about resident performance presumed to be relevant to the competencies of interpersonal and communication skills and professionalism. Multiple assessment methods were employed, including individually administered inpatient and outpatient surveys, mailed surveys, and standardized patient surveys. Moreover, extensive interviews of nurses and graduate medical education faculty addressed these same competencies. Reliability, validity, and utility of these assessments will be described.
Evaluation and Learning: Experiential Learning in Medical School Training
Presenter(s):
Summers Kalishman,  University of New Mexico,  skalish@salud.unm.edu
Jan Mines,  University of New Mexico,  jmines@salud.unm.edu
Lisa Serna,  University of New Mexico,  lserna@salud.unm.edu
Renee Quintana,  University of New Mexico,  requintana@salud.unm.edu
Roger Jerabek,  University of New Mexico,  rjerabek@salud.unm.edu
Phil Szydlowski,  University of New Mexico,  pszydlowski@salud.unm.edu
Abstract: This paper triangulates evidence from multiple methods to validate experiential learning settings in medical education. Our medical school curriculum is based on six competencies: 1) Medical Knowledge, 2) Patient Care, 3) Practice-based Learning, 4) Professionalism and Ethics, 5) Interpersonal and Communication Skills, and 6) Systems- and Community-based Practice. Experiential, case-based, or hands-on learning, as well as one-on-one teaching with guided practice, constitute the venues that best address the six competencies of the Accreditation Council for Graduate Medical Education (ACGME). These venues and competencies frame our medical school curriculum and provide learners with opportunities for contextual practice and feedback. In these settings, students report that they are able to 1) integrate conceptual and practical spheres of knowledge, 2) observe different physician preceptors and receive guidance, and 3) engage in active learning.
Learning From Résumé-Analysis: A Tool to Analyze Career Pathways and Evaluate Training Programs of National Institutes of Health (NIH) Funded Alumni
Presenter(s):
Susan Tucker,  Evaluation and Development Association,  sutucker@sutucker.cnc.net
Raymond Ivatt,  Evaluation and Development Association,  ray.ivatt@wat-if.com
Simeon Slovacek,  California State University,  sslovac@exchange.calstatela.edu
Jackie Stillisano,  Texas A&M University,  jstillisano@coe.tamu.edu
Abstract: Our interest in knowledge value and the potential of résumés or CVs as an evaluation resource stems from a general interest in assessing the impacts of federally financed training projects whose goal is to increase the number of minority doctoral researchers in science, technology, engineering, and mathematics (STEM). Our study addresses the impact of this investment, namely, the research productivity of these students after they have completed what are sometimes very lengthy and circuitous paths to the doctorate. The purpose of this paper is to assess the utility of Bozeman and Dietz's (2000) CV-based methodology within the context of three of NIH's most successful Minority Opportunities in Research (MORE) projects for preparing biomedical researchers. To test the utility of using CVs to study the career pathways of MORE alumni, three data collection approaches were used: searching NIH and NSF databases, Internet searches, and sampling 100 doctoral alumni CVs from the three MORE projects.

Session Title: Indigenous Knowledge Creation and Evaluation Practice
Multipaper Session 628 to be held in Baltimore Theater on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Presidential Strand
Chair(s):
Carrie Billy,  American Indian Higher Education Consortium,  cbilly@aihec.org
Joan LaFrance,  Mekinak Consulting,  joanlafrance1@msn.com
Discussant(s):
Karen Kirkhart,  Syracuse University,  kirkhart@syr.edu
Abstract: Evaluation is a form of knowledge creation. Formal training in evaluation practice has been defined by Western ways of looking at knowledge creation, whether it flows from positivism or constructionist theory. To make evaluation practice truly responsive to the culture and values of Indigenous communities, we must do more than explore how to adapt methodologies based on Western frameworks. We must explore how Indigenous communities look at knowledge creation and, from this framing, discuss approaches to practice. The American Indian Higher Education Consortium (AIHEC) has undertaken this task. Through extensive consultations with an advisory committee of cultural experts, evaluators, and science educators, a series of focus groups, and pilot meetings in tribal communities, AIHEC has developed an Indigenous framing and training for evaluation. This panel will discuss the process of developing the framework, the elements of Indigenous knowledge creation that influence evaluation practice, and the relevance of the project to tribal communities.
Building the Indigenous Framework
Iris Prettypaint,  University of Montana,  iris.prettypaint@mso.umt.edu
AIHEC recognized that building an Indigenous framing for evaluation would require extensive consultation with tribal people in the United States as well as discussions with Indigenous peoples in other countries. This paper describes the consultation process that included regular meetings of the advisory committee comprised of cultural experts, three focus groups to discuss the framing, meetings in seven different communities to test the basic foundations of the framework, and pilot training institutes to further refine the curriculum developed to train tribal college personnel and Indian educators. The author is one of the cultural experts who served on the advisory committee and participated in at least one of each of the different consultation meetings. She will discuss how she was able to offer input from her own tribal cultural framing and learn from those with whom she interacted throughout the process.
Cultural Grounding
Richard Nichols,  Colyer Nichols Inc Consulting,  colyrnickl@cybermesa.com
This paper will describe the ways in which the project team explored foundations in Indigenous knowledge creation and defined common values that form the framework. Elder scholars and Indian academics were consulted to learn how they would describe the function of assessing worth, or evaluation, within the context of their culture and language. The consultations were augmented by a literature review describing Indian ways of knowing or knowledge creation. This research resulted in five propositional statements regarding how Indigenous views of knowledge creation should influence evaluation. Through the consultation process, the project team identified four common cultural values that also influence evaluation. These values and Indigenous ways of knowing form the foundations for the framework. The consultation process also assisted in developing a basket weaving metaphor to describe the relationship of evaluation and program implementation.
Implications for Evaluation
Joan LaFrance,  Mekinak Consulting,  joanlafrance1@msn.com
This paper describes how evaluation practice is influenced by the cultural grounding. It explores the meaning of evidence within an Indigenous framework. Assessment and data gathering methods should reflect common values of community, personal integrity, sense of place, and tribal sovereignty. The discussion includes ways in which all of these elements are considered in the Indigenous framing. It describes how evaluators working in Indian country have to continually 'reframe' their practice by pulling from cultural ways of knowing and doing inquiry as they apply evaluation methodologies drawn from Western modalities.
The View From the Field
Dawn Frank,  Oglala Lakota College,  dfrank@gwtc.net
The final paper in the panel discusses the relevance of the Indigenous framing and related training program from the perspective of an Indian educator and evaluator who works within her own tribal community. The paper considers how the overall Indigenous framing has meaning and adaptability at a tribal level. It describes the value of the training in building capacity among Indian educators and the proposed Indigenous Evaluation Resource Center that will be maintained by AIHEC to continue the dialogue about and development of Indigenous evaluation.

Session Title: Alternative Approaches to Assessing Outcomes in Health Services Research
Multipaper Session 629 to be held in International Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Souraya Sidani,  Ryerson University,  s.sidani@utoronto.ca
Abstract: Demonstrating the effectiveness of health care requires an accurate assessment of the extent to which patients achieve the desired outcomes. Trends in health services outcomes research have expanded what are considered desired outcomes to include patients' perspectives on various dimensions of health, including symptoms, functional status, and general health. Determining the effects of health care on these outcomes involves examining changes in the outcomes that are observed following a care episode. Prospective assessment, which involves measurement before and after a care episode, is the traditional method for examining changes in outcomes. Prospective assessment may be difficult to implement within the context of shortened care episodes. In this panel, the utility of alternative approaches for outcome assessment, retrospective pretest and transitional or perceived change scales, is explored and illustrated with examples from a methodological study.
Approaches to Outcomes Assessment: Advantages and Limitations
Joyal Miranda,  University of Toronto,  joyal.miranda@utoronto.ca
The first presentation focuses on the advantages and limitations of the three approaches to measurement, that is, prospective assessment, retrospective pretest, and transition or perceived change scales. The methodological and clinical issues with each approach are reviewed. The design of the study in which the three approaches were tested will be presented, setting the stage for the next two presentations.
Feasibility and Reliability of Retrospective and Transition Measures
Souraya Sidani,  Ryerson University,  s.sidani@utoronto.ca
The second paper presents results of feasibility and reliability testing of the transition and retrospective pretest measures. Feasibility will be discussed in terms of the percentage of missing data and the qualitative comments provided by participants about issues with responding to the items. Test-retest reliability results will also be discussed.
Detecting Changes in Outcomes: Performance of Three Approaches to Assessment
David Streiner,  University of Toronto,  dstreiner@klaru-baycrest.on.ca
The third paper presents results of a comparison of the three approaches to outcome assessment (prospective, retrospective, and transition) in terms of their ability to detect changes in patient groups with different acute conditions and trajectories of change.

Session Title: Good, Better, Best: Evaluation Approaches to Determine Best Practices
Panel Session 630 to be held in Chesapeake Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
James Hersey,  RTI International,  hersey@rti.org
Discussant(s):
Maureen Wilce,  Centers for Disease Control and Prevention,  mwilce@cdc.gov
Abstract: Best practice is a strategy to identify and promote excellence in practice that has been increasingly applied to public and private health programs. However, there is no standard template or approach to determining best practices. Therefore, various evaluation methods and approaches have been used to identify best practices. As a result, evaluators are at the forefront of the debate about how to determine the best or most promising practices that work in public health programs. This session introduces best practice terminology, highlights recent literature regarding best practices, and presents three examples where program evaluation approaches have been used to determine best practices. Presenters will describe their best practice frameworks, evaluation methods, and evidence standards, as well as challenges and lessons learned.
Best Practices: Cutting Through the Buzzwords and Jargon
Michael Schooley,  Centers for Disease Control and Prevention,  mschooley@cdc.gov
Best practice has become a buzzword in many sectors to guide decision-making about resource investment and support of effective public health programs and practices. While there is no standard definition of best practice, the current debate provides a starting point for a working definition that considers diverse perspectives. For instance, best practices are generally context-dependent, systematically reviewed, adoptable approaches to improve practice and desired outcomes. The varied approaches also offer some common principles and potential guidelines for evaluators facing the best practice challenge in public health and other arenas. This presentation will review recent literature regarding best practices and highlight the dilemmas and challenges in the field. A working definition and common principles will be identified to help guide evaluators in addressing the best practice dilemma in their setting.
Best Practices Evaluation: Lessons Learned in the Well-Integrated Screening and Evaluation for Women Across the Nation Program (WISEWOMAN)
Rosanne Farris,  Centers for Disease Control and Prevention,  rfarris@cdc.gov
Programmatic best practices refer to processes for implementing an intervention using the most appropriate strategies for a given population and setting. We identified best practices by systematically gathering practice-based evidence from the Well-Integrated Screening and Evaluation for Women Across the Nation program, operated by the Centers for Disease Control and Prevention. This public health program screens midlife, un- or underinsured women for cardiovascular disease risk factors and provides a lifestyle intervention. We will present the case study approach to identifying best practices, including research questions, logic model, and data collection and analysis methods. Methods include quantitative program process and outcome data as well as qualitative data from interviews, observations, and focus groups. Key best practice results will be highlighted, as well as the toolkit for disseminating the practices. The best practice evaluation approach and findings may be useful to a broad range of practitioners interested in strategies to evaluate program practices.
Identifying Promising Practices in Heart Disease and Stroke Prevention
Pam Williams-Piehota,  RTI International,  ppiehota@rti.org
The CDC Division for Heart Disease and Stroke Prevention (DHDSP) funds state health departments to implement interventions focusing on policy and environmental supports that lead to system changes for heart disease and stroke prevention. However, best practices have yet to be identified for system-level interventions that states can implement to prevent heart disease and stroke events. In 2006, DHDSP funded a comprehensive evaluation of eight selected state programs as a step toward identifying promising, or best, practices in the field. This presentation will describe the process used to evaluate these programs in order to identify promising practices, including the identification of evaluation questions, development of an evaluation plan for each program, and data collection and analysis processes. The framework used to guide the evaluation and involvement of key stakeholders in the evaluation process will be discussed. Preliminary results will be presented along with a discussion of challenges and lessons learned.
A New Look at Outcomes for Targeted Testing and Treatment for Latent Tuberculosis Infection Programs
Amy Roussel,  RTI International,  roussel@rti.org
In 2000, the CDC Division of Tuberculosis Elimination (DTBE) funded 17 state and city programs to conduct 5-year targeted testing and treatment of Latent TB Infection (TTTLTBI) projects. The 17 grantees fielded 84 different interventions with varying target populations and activities. In 2003, CDC commissioned RTI International to conduct a stakeholder-driven best practices evaluation of the TTTLTBI projects. Stakeholders identified non-traditional outcomes as something they wanted to see emerge from this evaluation. This paper uses case studies to demonstrate the applicability of a broader perspective on effectiveness of public health programs by applying the Competing Values Framework (CVF) to 6 best practice TTTLTBI programs. The CVF integrates four models from the literature on organizational theory. Each model embodies a different set of values, beliefs, and assumptions about organizational effectiveness. Analysis applying this framework yields multiple dimensions of effectiveness for TTTLTBI programs and suggests a fresh perspective for best practices evaluation.

Session Title: Using Strategic Planning and Strategic Evaluation as Learning Processes
Panel Session 631 to be held in Versailles Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Government Evaluation TIG
Chair(s):
David J Bernstein,  Westat,  davidbernstein@westat.com
Discussant(s):
Kathryn Newcomer,  George Washington University,  newcomer@gwu.edu
Abstract: Strategic planning and strategic evaluation are components of a comprehensive and thoughtful accountability cycle, and have a symbiotic relationship. Strategic evaluation activities can inform managers, staff, and stakeholders about program performance, and provide input for developing a strategic plan. Strategic plans can be used to establish a strategic evaluation and performance monitoring agenda. This leads to thoughtful and more rigorous monitoring of program progress, achievement of goals and objectives, and identification of unintended consequences. The panel will begin with a brief discussion of the Government Performance and Results Act and Program Assessment Rating Tool requirements for strategic planning. Panelists will then explore the relationship between strategic planning, strategic evaluation, and program performance, and how these activities contribute to organizational learning. Panelists will focus on how strategic planning and strategic evaluation can contribute to sound practice, accountability, and a more in-depth understanding of program performance.
Alternative Approaches to Developing Strategic Performance Plans
David J Bernstein,  Westat,  davidbernstein@westat.com
Alternative approaches to strategic plans focused on performance include traditional 'top-down' strategic planning, with high-level managers developing a strategic plan to guide organization activities. Stakeholder-oriented strategic plans involve a 'bottom-up' approach, with customers and other stakeholders having input to the strategic planning process. Performance-driven strategic planning involves comparing existing levels of performance to a desired level, with 'gap analysis' to identify activities to reach desired goals. Benchmark-driven strategic planning is a variant of this, with desired levels of performance determined by comparison to 'best-in-class' or other performance standards. Ideally all of these approaches, but especially the latter two, use evaluations and performance measures to inform the strategic planning process. Most strategic planning processes are not easily categorized, and involve a hybrid of these approaches to meet decision makers' needs. This presentation will discuss the differences between these approaches, including illustrations from the presenter's 24 years of evaluation practice.
Strategic Thinking as Applied at the Portfolio and Program Level by the Cooperative State Research Education Extension Service
Djimé Adoum,  United States Department of Agriculture,  dadoum@csrees.usda.gov
The Cooperative State Research, Education, and Extension Service (CSREES) of the United States Department of Agriculture (USDA) integrates strategic planning, strategic thinking, budget formulation, operations management, oversight, and evaluation. Although, as an agency within the USDA, CSREES generally limits formal strategic planning to aligning the goals and objectives of the CSREES Strategic Plan with those of the USDA Strategic Plan, the range of programs and funding lines within CSREES requires strategic thinking and coordination at multiple levels. CSREES adopted the Portfolio Review Expert Process, including grouping of projects and programs into portfolios aligned with CSREES strategic objectives and periodic self-assessment and review by external panels of experts. This presentation addresses how CSREES Planning and Accountability and National Program Leaders (NPLs) use strategic thinking at the program and portfolio levels, and communicate and coordinate these processes within CSREES.
Strategically Planning Evaluations to Maximize Learning About Program Performance
Stephanie Shipman,  United States Government Accountability Office,  shipmans@gao.gov
Valerie J Caracelli,  United States Government Accountability Office,  caracelliv@gao.gov
In a federal environment with tight evaluation resources and escalating demands to report program results, federal agencies need to learn how to strategically plan evaluations for maximum impact. While no program should evade scrutiny, the need for evaluative information is more pressing for some programs than for others. Which of the following is most important in deciding which program to evaluate: program size, centrality to agency mission, or political sensitivity? This paper describes the various ways that several federal agencies decide when and what to evaluate and how their context affects their decision making. Cases were selected to capture diversity in approaches. We present factors mentioned as constraining evaluation capacity, as well as the creative approaches agencies devise to manage their evaluation accountability function. The paper concludes by stressing the importance of a strategic approach to planning evaluations that would further a program's mission, use evaluation resources more effectively, and better inform policy making.
Communicating Lessons Learned From Strategic Planning and Evaluation to Policymakers
Rakesh Mohan,  Idaho State Legislature,  rmohan@ope.idaho.gov
For evaluation to serve as the feedback loop in the public policy process, we must effectively communicate to policymakers what we learn from strategic planning and evaluation. However, this communication is not an easy task. Getting policymakers' attention is challenging. Who is responsible for communicating this information to the policymakers -- evaluators, program officials, or both? Who are these policymakers and are there others who influence the policy process? How and when should we communicate with them? How do we establish an ongoing working relationship with policymakers and other key stakeholders? These questions will be discussed using examples from Idaho state government.
