
Session Title: Learning Practical Knowledge Through the Study of Cases
Panel Session 336 to be held in International Ballroom A on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Presidential Strand
Chair(s):
Jody Fitzpatrick,  University of Colorado, Denver,  jody.fitzpatrick@cudenver.edu
Discussant(s):
Tysza Gandha,  University of Illinois at Urbana-Champaign,  tgandha2@uiuc.edu
Holli Burgon,  University of Illinois at Urbana-Champaign,  inquireevaluate@gmail.com
Jody Fitzpatrick,  University of Colorado, Denver,  jody.fitzpatrick@cudenver.edu
Abstract: The panel will discuss practical knowledge and its role in learning and enhancing the practice of evaluation. Cases on ethical dilemmas faced by evaluators will be used to illustrate how students and evaluators can gain practical knowledge of how evaluators handle ethical issues. Case studies have long been a tool for learning. Panelists and discussants will debate the role of cases in increasing practical knowledge and the manner in which they may do so. One discussant will contrast her case studies on practice with those on ethics. Two student discussants will comment on the value of the cases to them in illuminating evaluation practice.
The Role of Practical Knowledge in Learning
Thomas Schwandt,  University of Illinois at Urbana-Champaign,  tschwand@uiuc.edu
Tom Schwandt will discuss the nature of practical knowledge and its relevance to evaluation practice. He will explain how practical knowledge is always a kind of ethical-political knowledge, because it is concerned with deciding what is likely to be both effective and appropriate in a given situation. He will discuss the characteristics by which we recognize practical knowledge and the ways in which we acquire it, including the conundrum that practical knowledge can be learned but cannot be taught.
Gaining Practical Knowledge from Dialogues on Ethical Cases
Michael Morris,  University of New Haven,  mmorris@newhaven.edu
Michael Morris will discuss how students and others can obtain practical knowledge through cases, using the Ethical Challenges section he has developed in the American Journal of Evaluation. He will explore the significance of the multiple perspectives that can be applied to specific ethical dilemmas in evaluation. One or more cases from the Ethical Challenges will be used to demonstrate how two commentators viewing the same dilemma can reach contrasting conclusions about how the case should be handled by the evaluator. What do these disagreements teach us about practical knowledge regarding decision-making in the realm of evaluation ethics? How much definitive guidance can professional principles and standards realistically provide to evaluators grappling with complex ethical issues?

Session Title: Practicing What We Preach: Exploring the Transformative Potential of Evaluation Processes
Multipaper Session 337 to be held in International Ballroom B on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Tanya Brown,  Duquesne University,  jaderunner98@gmail.com
Rodney Hopson,  Duquesne University,  hopson@duq.edu
Discussant(s):
Karen Kirkhart,  Syracuse University,  kirkhart@syr.edu
Stafford Hood,  Arizona State University,  stafford.hood@asu.edu
Abstract: How do our practices change once we acknowledge that learning within evaluation is dynamic and multi-directional? This question becomes even more pressing when we align it with dominant concerns of social justice and social change in evaluation practice, and with the multiple learning capacities within the field. This panel, made up of students from the AEA/DU Graduate Education Diversity Internship, provides accounts of dynamic learning processes that take place over the course of an evaluation. Each paper discusses how the evaluator navigated the evaluation process, with special focus on one of the following: (1) attending to the interpersonal processes between evaluator and stakeholders; (2) utilizing theories of practice that uniquely address the concerns of the evaluation context; and (3) considering how the learning processes of a particular program inform and map onto the evaluation process and the evaluator's development. Discussants draw on the presenters' learning experiences to proffer further lessons on evaluation process and its transformative potential.
Planting Collaborative Growth: Coalition Building as a Key Element to the Evaluation Process
Nia Davis,  University of New Orleans,  nkdavis@hlkn.tamu.edu
Since 1991, the United States Department of Justice has implemented Operation Weed and Seed (OWS) in sites across the country with the aim of simultaneously reducing crime and bolstering community development. OWS New Orleans has adopted a coalition structure composed of representatives of organizations and community groups with a vested interest in the residential area designated as the community of focus. Unique to the coalition structure is the integration of the research and evaluation contacts, who collaborate with community representatives on a regular basis. Critical, then, was an attunement to interpersonal relationships among coalition members, trust building, and the development of the community through the OWS initiative. This presentation highlights the evaluation activities employed to build cohesion and commitment to community development among coalition participants. The presenter also parallels this process with her own development as an evaluator.
An Analysis of Organizational Capacity and Research Inquiries: Incorporating Cultural Competence in Evaluation Research Agendas
Milton Ortega,  Portland State University,  mao@pdx.edu
The literature on cultural competency in evaluation research has grown considerably over the last decade. However, relatively little has been done to implement coherent evaluation practices in accordance with cultural competency. This lack of attention to organizational capacity may be further echoed in the failure of some program evaluations to place cultural competency centrally in research agendas. The pursuit of cultural competence in evaluation research is further constrained by a lack of organizational and methodological approaches. This paper examines the organizational learning capacities of a research evaluation organization in its attempts to incorporate cultural competency into its own evaluation projects. The purpose of this analysis is to provide organizations with recommendations for preparing for research inquiries that promote cultural competency, with the hope that better understandings may be attained.
Illuminating Community Meanings: Utilization of a Narrative Framework to Document Community Change
Josephine Sirineo,  University of Michigan,  jsirineo@umich.edu
The success of an evaluation is largely dependent on gathering information that accurately depicts a context under investigation. How people make sense of their environment (Weick, 1995) and how they choose to communicate their experiences to others are important concepts for evaluators to recognize throughout the evaluation process. Tobin (2005) notes how storytelling can be an integral component in program evaluation when it is used as a primary or secondary data gathering technique. This paper will present a framework that documents a process of applying the Most Significant Change methodology to a national, multi-site, cluster evaluation. The MSC technique is a systematic process for recording, collecting and analyzing stories around specific themes (Davies and Dart, 2005). Preliminary findings show that storytelling can demonstrate the dynamic nature of learning occurring at different levels (individual, group, and organization) and with varying intensities.
Evaluation of Non-Traditional Approaches for Preventing High School Dropout
Roderick Harris,  University of Pittsburgh,  rlh1914@yahoo.com
Experiential education is the process of actively engaging students in an authentic experience that will have benefits and consequences. Students make discoveries and experiment with knowledge themselves instead of hearing or reading about the experiences of others. Students also reflect on their experiences, thus developing new skills, new attitudes, and new theories or ways of thinking (Kraft & Sakofs, 1988). This presentation discusses the candid field experience of a novice evaluator who used an experiential approach to learn practical program evaluation within the context of a high school dropout prevention organization, Communities in Schools (CIS). Since 1985, CIS has aimed to help young people successfully transition out of high school and build on their potential. Building on the tenets of experiential education and the aims of CIS, the presenter will outline participatory methods used for evaluating an in-school program and two different alternative learning academies.

Session Title: Incorporating Technological Innovations in Data Collection
Multipaper Session 338 to be held in International Ballroom C on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Qualitative Methods TIG
Chair(s):
Sandra Mathison,  University of British Columbia,  sandra.mathison@ubc.ca
Discussant(s):
Sandra Mathison,  University of British Columbia,  sandra.mathison@ubc.ca
Using On-line Diaries as an Evaluative Tool to Improve Program Development and Implementation
Presenter(s):
Nicole Gerardi,  University of California, Los Angeles,  gerardi_nicole@yahoo.com
Abstract: Conducting an evaluation of a program in its early implementation stage can be difficult when there are many aspects to the program and multiple stakeholders. Before measuring program impact, it is often useful to focus on understanding program components and interactions. Using a multi-site, holistic juvenile rehabilitation program from the Los Angeles area as a case study, this paper explores how on-line diaries can be used as a tool for formative evaluation of project implementation and process. Both the benefits and challenges of using on-line diaries with a newly established collaborative program will be addressed. The paper concludes with a discussion of various ways to analyze on-line diaries as well as the various uses of the diary findings.
Photolanguage Use With Novice Teachers Participating in a School University Partnership to Provide Optimal Resources for Teachers (SUPPORT) Network
Presenter(s):
Ann Bessell,  University of Miami,  agbessell@miami.edu
Adriana Medina,  University of Miami,  amedina@miami.edu
Paola Pilonieta,  University of Miami,  absolut_paola@yahoo.com
Valentina Kloosterman,  University of Miami,  vkloosterman@yahoo.com
Abstract: This session focuses on an evaluation of the SUPPORT Network for novice teachers, an induction program providing mentoring and ongoing support for university teacher education graduates. As part of the mixed-method evaluation, we included a randomized trial of 48 participants who were assigned to either a traditional focus group or one that used a process called Photolanguage. Photolanguage is an innovative process that utilizes black and white photographs to stimulate individuals' imagination, memory, and emotions. It also provides an opportunity to articulate thoughts by speaking through photographs using rich descriptions and imagery. In our study, both groups responded to the same probes, and all responses were audio-taped, transcribed, and thematically coded. Photolanguage group participants' responses contained nearly four times more words per response than those in the traditional focus group. In addition, their expansive descriptions and use of adjectives and adverbs led to unanticipated themes that did not emerge in the traditional focus group.
Fitting PhotoVoice Into an Evaluator's Repertoire of Qualitative Tools: Possibilities and Caveats
Presenter(s):
Amy La Goy,  Evaluation and Research Consulting,  amylagoy@earthlink.net
Edward Mamary,  San Jose State University,  mama100w@yahoo.com
Abstract: Championed as a means of illuminating the concerns and perceptions of marginalized groups and of bringing these to policy makers' consciousness, PhotoVoice has gained a following among evaluators and researchers of community-based health and education programs. PhotoVoice fits easily, both philosophically and methodologically, into action research and into empowerment and participatory evaluation approaches. However, it is not yet clear whether or how PhotoVoice can be used as a qualitative method in the repertoire of evaluators using other approaches. In this paper, using data from a project that used both PhotoVoice processes and intensive interviews to learn about community members' perceptions of health outreach strategies, the authors will explore the relationship between PhotoVoice images, the discussions they inspired, and data gathered through interviews. They will then explain if, how, and with which caveats PhotoVoice can be adapted for use as a qualitative tool in program evaluation.

Session Title: Models of Evaluation Use and Influence in Social and Educational Services
Multipaper Session 339 to be held in International Ballroom D on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Dennis Affholter,  Affholter and Associates,  thedpa@yahoo.com
Supporting the Conditions for Organizational Development: A Case Study Examining the Role of the Evaluator
Presenter(s):
Cheryl-Anne Poth,  Queen's University,  pothc@educ.queensu.ca
Abstract: Using the Queen's University Inter-Professional Patient-centered Education Direction (QUIPPED) as an evaluation case study, this paper describes an approach to the study of evaluation use when it is informed by the field of complexity science. Our current understandings describe organizations as operating in a constant state of flux. These shifting organizational contexts demand that close contact be maintained with the evaluator. Evaluation use studies have yet to examine the role of the evaluator in meeting the conditions supportive of organizational learning in these contexts. Popular approaches to evaluation planning remain focused on reducing the complexity of evaluation through the identification of pathways of use. Complexity science and organizational theory offer a powerful alternative for conceptualizing the role of evaluation and the role of the evaluator. This paper reports the analysis of a developmental evaluation process examining the role of the evaluator participating in an organization's development.
Evaluation Influence and Evaluation Utilization: Comparison of Theories and Application to a Case
Presenter(s):
Mijung Yoon,  University of Illinois at Urbana-Champaign,  myoon1@uiuc.edu
Abstract: This paper reviews theories on the influence of evaluation and examines their relevance to evaluation practice by applying theoretical concepts to an evaluation case. First, it briefly describes the background of theories on evaluation influence as a response to Patton's utilization-focused evaluation. Next, it compares, on several dimensions, the theories on evaluation influence by Kirkhart (2000) and Henry and Mark (2003), and the conceptual framework by Cousins (2003) on utilization of evaluation. Finally, it discusses and critiques these conceptualizations by applying them to an evaluation of a community youth service organization, showing the strengths and weaknesses of the conceptualizations and the contexts in which they are most relevant.
Implications of a Case Study for Mark and Henry's Schematic Model of Evaluation Influence
Presenter(s):
Shu-Huei Cheng,  National Hsinchu University of Education,  chen0777@umn.edu
Jean King,  University of Minnesota,  kingx004@umn.edu
Abstract: This case study, based on Mark and Henry's (2004) framework, used interviews and document review to explore the influence of the evaluation of a literacy improvement program that had been implemented in two elementary schools in a Midwestern state of the USA. Consistent with Mark and Henry's model, this study found that various types and processes of evaluation effects, facilitated by general influence, were interrelated with one another. This study nevertheless proposed a modified framework to better describe the change process through which evaluation affected literacy instruction in these schools. In contrast to Mark and Henry's model, the study did not endorse step-by-step pathways leading to behavioral change. Instead, the influence processes were complex and nonlinear, raising questions about the extent to which research can accurately identify the specific pathways or mechanisms the model suggests. The paper will discuss implications for both theory and practice.

Session Title: Evaluation Training: Developing Professionals
Multipaper Session 340 to be held in International Ballroom E on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Chris Coryn,  Western Michigan University,  christian.coryn@wmich.edu
Teaching Evaluation Skills in Trinidad and Tobago: Obstacles and Solutions
Presenter(s):
Lindsay Nichols,  Loyola University, Chicago,  lnicho2@luc.edu
Lisa Sandberg,  Loyola University, Chicago,  lsandbe@luc.edu
Aisha Leverett,  Loyola University, Chicago,  jlevere@luc.edu
Linda Heath,  Loyola University, Chicago,  lheath@luc.edu
Abstract: Evaluation research is often exported from the U.S. around the globe, but students rarely have the opportunity to conduct international evaluations during their training. This gap is unfortunate, as working on foreign soil teaches the importance of cultural awareness and sensitivity and requires the evaluator to work harder to understand the systems and assumptions of the host country. With funding from the Graduate School of Loyola University Chicago, we were able to conduct an evaluation of the Student Advisory Services at the University of the West Indies, St. Augustine Campus, as part of a graduate course in Evaluation Research. Working collaboratively with faculty and staff at UWI and depending heavily on “on-site experts,” the authors completed the evaluation within one academic term. The background, logistics, and lessons learned are discussed.
Growing New Buds on the Evaluation Tree: Undergraduate Students' Interest in Program Evaluation
Presenter(s):
John LaVelle,  Claremont Graduate University,  john.lavelle@cgu.edu
Abstract: Currently, few people seek a career in program evaluation even though the demand for evaluators exceeds the supply. In order for the profession to grow there needs to be a consistent flow of trained evaluators who enter the field annually. Undergraduate students are a potential pool of future evaluators, but little is known about their interest in pursuing a career in program evaluation. The purpose of this study was to collect preliminary data on undergraduate students' interest in the field of program evaluation. The researcher collected data from 89 undergraduate students from a Midwestern university. Participants were asked to describe program evaluation (PE), read a description of PE, indicate their familiarity with and interest in PE as a career, and respond to semantic differentials to assess their global attitude towards PE. Their responses may guide our quest to grow the evaluation profession.
Program Evaluation to Guide Training for State-wide Federally Funded College Access Initiative: The Experience of First-time Evaluators
Presenter(s):
Karyl Askew,  University of North Carolina, Chapel Hill,  karyls@email.unc.edu
Bridget Weller,  University of North Carolina, Chapel Hill,  bweller@email.unc.edu
Tangie Gray Fleming,  University of North Carolina, Chapel Hill,  tangie_gray@unc.edu
Abstract: Graduate student courses can be used both to advance the field of evaluation and to provide a quality service to organizations. As part of an introductory graduate-level evaluation training course offered at the University of North Carolina at Chapel Hill, a three-member team was commissioned to conduct an evaluation to inform the training of district-level administrators of a multi-million dollar, statewide, federally funded college access initiative. The sample included 20 school district-level and three state-level directors serving over 6,000 aspiring first-generation college attendees. Presenters will highlight benefits and challenges of learning the fundamentals of evaluation as part of a graduate course. The purpose of this presentation is to share reflections on how course assignments, classroom discussions, instructor mentoring, and a real-world pilot study facilitated 1) the delivery of an evaluation product that had both utility and influence for the client and 2) the professional development of first-time evaluators.
Integrating Client Education With the Evaluation Process
Presenter(s):
Christopher L Vowels,  Kansas State University,  cvowels@ksu.edu
Jason Brunner,  Kansas State University,  jbrunner@ksu.edu
Abstract: Evaluation offices are commonly approached by clients with varying levels of understanding of the evaluation process and related research methodologies. To promote client understanding, we propose the Evaluate-And-Educate (E2) method. This method allows the client to be trained during the evaluation process, recognizing that evaluation is often constrained by limited resources and time. By integrating the client's level of understanding into the evaluation plan, opportunities become available for instruction on correctly utilizing graphical information, accurately and effectively using statistical test results, and better understanding the effects of different research design implementations. Thus, the client not only receives the requested evaluation services but is also educated as a functional part of the evaluation process. In this respect, the education is innately linked to evaluation and not seen as an additional request or burden. Likewise, by increasing client understanding, future evaluation opportunities of a more rigorous quality become more likely.

Session Title: Evaluation Capacity Building Unplugged
Think Tank Session 341 to be held in Liberty Ballroom Section A on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Hallie Preskill,  Claremont Graduate University,  hallie.preskill@cgu.edu
Shanelle Boyle,  Claremont Graduate University,  shanelle.boyle@gmail.com
Abstract: In spite of the many evaluation capacity building (ECB) efforts that are underway around the world, there is little empirical research that guides evaluators in their design and implementation of such activities. For example, few have written about the linkages between ECB and adult and workplace learning theory, or the theories and practices of organizational learning. In addition, few have offered a typology of strategies and their appropriate uses, or provided evaluation data on the effectiveness of various capacity building strategies. In this session, participants will: 1) be engaged in developing a logic model of evaluation capacity building, and 2) be asked to review and critique a draft of a new evaluation capacity building conceptual framework. Our hope is that participants will leave the session with new insights about ECB and practical ideas for maximizing the ways in which they work to develop others' evaluation capacity.

Session Title: Professional Status for Evaluators: Canadian and American Views
Panel Session 342 to be held in Liberty Ballroom Section B on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the AEA Conference Committee
Chair(s):
Gerald Halpern,  Fair Findings Inc,  gerald@fairfindings.com
Abstract: The American Evaluation Association does not award professional designations in evaluation; the Canadian Evaluation Society is on the road to seriously considering the development and installation of such a system. This session describes why the Canadian Evaluation Society is moving in this direction and how it would expect to achieve professional status for evaluators in the near future. Professional designations in Canada would have implications for practice in the United States and elsewhere. The process being followed in Canada may have utility for professional associations of evaluators in other countries. The Canadian experience will be examined and critiqued by two American evaluators experienced with the issue. Discussion from non-panel participants will be encouraged, and significant time will be reserved for this purpose.
Warming up to the Prospect of Professional Designations: Reflections on the Canadian Process
J Bradley Cousins,  University of Ottawa,  bcousins@uottawa.ca
Jim Cullen,  Thomas More Institute,  jimcullen99@msn.com
The presentation will be an examination of the forces acting upon the Canadian Evaluation Society that led to its decision to seriously consider the adoption of professional designations for Canadian Evaluators. J. Bradley Cousins is an ex-officio member of CES National Council by virtue of his role as Editor of the Canadian Journal of Program Evaluation. Jim Cullen was Chair of CES Membership Services Committee throughout the process of consultation on professional designations for evaluators in Canada. In this presentation, they reflect on the impetus for initiating a request for proposals for an action plan and subsequent negotiations. They also share insights on the process of developing a National Council Response to the resultant action plan and the development and launch of a tripartite consultation process that involved CES members, partners of the Society (including employers), and CES Chapters. Consideration will be given to lessons learned from this process.
Arriving at an Action Plan
Gerald Halpern,  Fair Findings Inc,  gerald@fairfindings.com
Gerald Halpern served as organizer and first drafter of reports and plans for the 11-person Consortium that developed the Action Plan for professional designations. His doctorate is in industrial psychology with an emphasis on experimental design. He has 43 years of experience in all aspects of program evaluation. The methodology and the resultant Action Plan are described. The Plan presents professional designations for evaluators. There were three components to the methodology: (a) a literature review conducted by Irene Huse (a doctoral candidate at the University of Victoria) under the supervision of Professor James C. McDavid; (b) interviews with relevant other professional organizations (designed and supervised by Gerald Halpern and Bud Long); and (c) successive iterations of discussion among Consortium members to develop feasible means for achieving the objectives. The outcome was a recommendation for a multi-level and staged series of professional designations: Member; Credentialed Evaluator; Certified Professional Evaluator.
Critical Examination of the Canadian Plan
James W Altschuld,  The Ohio State University,  altschuld.1@osu.edu
Dr. Altschuld brings to the panel his expertise developed on this issue over the years. His experience includes (a) writing a number of articles on evaluator certification and (b) serving as chair of the American Evaluation Association's taskforce that examined the issue in the late 1990s. From the perspective of an American-based evaluator, Dr. Altschuld will present his views regarding the path followed and the progress achieved in Canada. These will include an analysis of deficiencies and a highlighting of aspects of the approach taken that are useful to the American situation. He will also present a historical perspective on the nature of the certification/credentialing debate in the US and why, despite the level of interest, certification or credentialing for evaluators is not yet in place in America.

Session Title: Exploring the Implications of the Administration on Aging's Performance Outcomes Measures Project for Evaluators
Panel Session 343 to be held in Mencken Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Patricia Yee,  Vital Research, LLC,  patyee@vitalresearch.com
Discussant(s):
Melanie Hwalek,  Social Program Evaluators and Consultants Inc,  mhwalek@specassociates.org
Abstract: This panel will investigate the federal government's framework for measuring outcomes of social services for the aging and what it means for local evaluators. The first presenter will provide an historical overview and summary of the current research on the Administration on Aging's (AoA) core set of performance measures for state and community programs on aging operating under the Older Americans Act. Then, two presenters will discuss their own evaluations of programs in aging: (1) a utilization-focused evaluation in senior affordable service-enriched housing and (2) a parenting grandparent caregivers program. In addition to describing their evaluations, the two presenters will examine the extent to which their outcomes relate to the Performance Outcome Measures Project (POMP) of AoA. The discussant will facilitate audience feedback about ways that local evaluators could link up with the performance measures AoA is using to build systems of accountability for social programs in aging.
Administration on Aging's (AoA) Performance Outcomes Measures Project (POMP): History and Use as a Resource for Evaluators
Saadia Greenberg,  United States Department of Health and Human Services,  saadia.greenberg@aoa.gov
Cynthia Agens Bauer,  Administration on Aging,  cynthia.bauer@aoa.gov
This presentation will describe and outline the resources available from the AoA Performance Outcomes Measures Project (POMP). POMP supports state and area agencies on aging in developing performance indicators and conducting assessments of their services. Over the past seven years, these projects have developed assessment instruments for their own states. The projects work cooperatively and share extensively both among themselves and with AoA's POMP support contractor, Westat, Inc. Projects have built up a considerable body of tested and validated assessment instruments. Since 2003, AoA has conducted national surveys of its program participants. Three national surveys have been conducted, a fourth is in the design phase, and a fifth is planned. The surveys included detailed assessments of services received by recipients of case management, congregate and home-delivered meals, transportation, homemaker services, information and assistance, and senior center participation, as well as of caregivers. In addition, survey instruments were designed to document client characteristics.
Assessing the Utility and Validity of the Senior Center Performance Measure in Senior Affordable Housing Developments
Joelle Greene,  National Community Renaissance,  jgreene@nationalcore.org
Service-enriched affordable housing developments frequently include community centers that provide services parallel to those offered by government-funded, community-based Senior Centers. These services typically include case management, resource referral, and socialization activities aimed at increasing quality of life and aging-in-place for low-income seniors. The utility and validity of the Senior Center Performance Measure from the Performance Outcome Measurement Project (POMP) (both center and participant components) will be discussed using data drawn from a portfolio of 12 senior affordable housing developments (serving over 1,200 residents) located in southern California. Relationships between center usage and participant emotional, social and physical functioning will be discussed and compared to findings reported by Aday (2003) indicating strong positive relationships between senior center participation and healthy aging. Implications for the use of POMP tools in these evaluative settings will be discussed.
Adapting the Caregiver Support and Assessment Survey Instrument to Assess Kin-caregiver Needs
Allison Nichols,  West Virginia University,  ahnichols@mail.wvu.edu
Most researchers of caregiving focus on those who provide care to the frail elderly. There is another group of older adult caregivers: those who are raising their grandchildren. These caregivers have similar, yet different, needs. The Performance Outcome Measurement Project (POMP) has developed a Caregiver Support and Assessment Survey Instrument that collects data on services received, ratings of services, demographics, care provided, and burdens and rewards. Most of the questions in the POMP survey are relevant to kin-caregivers, but certain changes would have to be made. These changes might include the addition of services from the child/youth arena and legal/guardianship services, to name a few. Additionally, the survey would need to address the relationship of the caregiver to the young care recipient as well as to the biological parent or adult child. This presentation will make suggestions for changes to the survey to meet the needs of kin-caregivers.

Session Title: Evaluation in Education
Multipaper Session 344 to be held in Edgar Allan Poe Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Rene Lavinghouze,  Centers for Disease Control and Prevention,  shl3@cdc.gov
A Case Study of Involvement and Influence: Multi-year Cross-site Core Evaluation of the National Science Foundation's Collaboratives for Excellence in Teacher Preparation Program
Presenter(s):
Kelli Johnson,  University of Minnesota,  johns706@umn.edu
Frances Lawrenz,  University of Minnesota,  lawrenz@umn.edu
Lija Greenseid,  University of Minnesota,  gree0573@umn.edu
Abstract: This paper presents a case study of the program-level evaluation of the National Science Foundation's Collaboratives for Excellence in Teacher Preparation (CETP) Program. The CETP program was designed to improve mathematics and science teacher preparation through the improvement of undergraduate science and mathematics courses and education courses. Using data collected from extensive document review, key informant interviews and a survey, this case study describes the involvement in the overall program evaluation by various stakeholders across 25 CETP project sites nationwide. It highlights issues related to voluntary multi-year, cross-site participation by multiple projects whose willingness to collaborate on instrument development, to adhere to standard protocols, and to share data with the core evaluation team varied widely. This case study informs an exploration of relationships between evaluation activities, especially involvement, and evaluation influence and the pathways that result in greater evaluation use and influence.
Multiplicity in Action: Creating and Implementing a Multi-program, Multi-site Evaluation Plan for a Predominantly Minority/Urban School District
Presenter(s):
Mehmet Dali Öztürk,  Arizona State University,  ozturk@asu.edu
Abstract: Multiplicity can often mean complexity in designing sound and reliable evaluation plans for educational partnerships. However, multi-program, multi-site evaluations that are carefully designed, planned, and conducted can be very effective in understanding the effects of educational innovations, interventions, and reforms in diverse settings, thus contributing to systemic change efforts. This paper examines a multi-component evaluation plan, which focuses on identifying which program factors, if any, are related to improved outcomes for students, schools, and families. Specifically, the paper shares experiences in creating and implementing a multi-program, multi-site evaluation plan with an ongoing university-school partnership initiative that is designed to achieve the common goal of making a difference in the educational improvement of students in a culturally, linguistically, and economically diverse region of the United States.
Lessons Learned From Rating the Progress and Extent of Reform
Presenter(s):
Patricia K Freitag,  COSMOS Corporation,  patfreitag@comcast.net
Darnella Davis,  COSMOS Corporation,  ddavis@cosmoscorp.com
Abstract: Innovative evaluation frameworks are needed to understand the progress and extent of complex education reforms and their relationship to research-based components of systemic reform. The lessons learned from measuring and rating reform components and progress are at the heart of this paper. Assertions regarding the relative progress and impact of funded projects will be made within the context of well-developed case studies of comprehensive school reform. Recommendations for adapting reform progress measures for broader use in evaluation are derived from pilot test findings. Ratings reveal shifting priorities and constraints that may be correlated with system alignment, project maturity, as well as incremental changes in student performance. The paper discusses how clarifying the underlying theory of action is helpful for revising measures of progress, and may improve rating and ranking reliability between multiple raters of reform.
Value-added Assessment: Teacher Training Designed to Improve Student Achievement
Presenter(s):
Laurie Ruberg,  Wheeling Jesuit University,  lruberg@cet.edu
Judy Martin,  Wheeling Jesuit University,  jmartin@cet.edu
Karen Chen,  Wheeling Jesuit University,  kchen@cet.edu
Abstract: Recent studies question the belief that family and socio-economic background strongly influence student learning while teachers and schools have only a limited effect. Current research shows that students can learn a great deal from, and are greatly influenced by, an effective teacher. This report examines the outcomes of a three-year, multi-site, multi-level professional development program for teachers situated at schools serving low socio-economic and ethnically diverse populations. Program interventions are designed to provide professional development strategies aimed ultimately at increasing student achievement in science. The evaluation combines qualitative and quantitative research methods. Conducted in the third year of program implementation, this analysis builds upon prior evaluations addressing organizational and service utilization plans and focuses on program impact. The professional development guidelines used to design the interventions are applied to the data analysis to assess whether the program had the desired effect on teachers and students.
Using Threshold Analysis to Develop a Typology of Programs: Lessons Learned from the National Evaluation of Communities In Schools (CIS)
Presenter(s):
Allan Porowski,  Caliber an ICF International Company,  aporowski@icfi.com
Stella Munya,  Caliber an ICF International Company,  smunya@icfi.com
Felix Fernandez,  Caliber an ICF International Company,  ffernandez@icfi.com
Susan Siegel,  Communities in Schools,  siegels@cisnet.org
Abstract: One of the principal challenges of cross-site evaluations is making sense of the variability across programs. In this presentation, we describe the case of a particularly challenging cross-site evaluation of Communities in Schools, a program with more than 2,500 sites across the country, each delivering highly tailored services. To address this challenge and to bring a coherent framework to a highly diverse program, the authors developed a typology of programs using threshold analysis, a scoring method that brings together quantitative and qualitative data to address both performance measurement and adherence to the ideal program model. Our threshold analysis resulted in a typology that captured the essence of each site's program without sacrificing the flexibility of the program model. We will describe our methodology as well as implications for interpreting the results of the typology analysis.

Session Title: Foundation Policy Change Efforts: Internal and External Evaluation Strategies
Multipaper Session 345 to be held in Carroll Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Claire Brindis,  University of California, San Francisco,  claire.brindis@ucsf.edu
Developing a Framework for Evaluating Policy and Advocacy Activity at the Foundation Level
Presenter(s):
Charles Gasper,  Missouri Foundation for Health,  cgasper@mffh.org
Leslie Reed,  Missouri Foundation for Health,  lreed@mffh.org
Abstract: Grant-making foundations are increasingly involved in assessing various levels of governmental policy as one strategy to fulfill their missions. Recently, interest has emerged among foundations in understanding what activities are effective in changing policy. Critical to this evaluation is awareness of the operational model and how the various policy-related activities support the overall goals of the foundation. Theory-based evaluation is used as a framework to link the activities and programs of the policy arm of the Missouri Foundation for Health to organizational goals and ultimately to the mission. In turn, selected activities, grants, and programs are further evaluated using the program logic model. Lastly, measures of process and outcome are developed based upon these frameworks at the master and lower levels. This presentation shares the process of developing these frameworks, discusses the successes and difficulties, and provides sample results of the experience.
The Role of Policy Advocacy in Assuring Comprehensive Family Life Education in California
Presenter(s):
Claire Brindis,  University of California, San Francisco,  claire.brindis@ucsf.edu
Sara Geierstanger,  University of California, San Francisco,  sara.geierstanger@ucsf.edu
Adrienne Faxio,  University of California, San Francisco,  adrienne.faxio@ucsf.edu
Abstract: As part of its 10-year, $60 million Teenage Pregnancy Prevention Initiative in California, The California Wellness Foundation funded organizations to conduct policy advocacy to strengthen the types of policies developed at the local and statewide levels. This paper describes evaluation data on a subset of these advocacy grantees that focused on California's Family Life Education policies. The grantees accomplished noteworthy goals, including the passage of the California Comprehensive Sexual Health and HIV/AIDS Prevention Education Act (AB 71), the prevention of California's pursuit of federal “abstinence-only-until-marriage” funding, and the passage of local school district CFLE policies. Grantee progress is presented through a five-stage policy change framework: Institutional Capacity and Leadership Building; Policy Issue Recognition; Policy Prioritization; Policy Adoption; and Policy Maintenance. Implications and lessons learned are also shared for advocates, policymakers, and funders of other initiatives aimed at improving the health of adolescents.

Session Title: Conducting Large Scale Evaluations of Federal Cancer Control Programs
Panel Session 346 to be held in Pratt Room, Section A on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Lenora Johnson,  National Institutes of Health,  johnslen@mail.nih.gov
Abstract: With decreasing funding, there is a greater need for federal programs to demonstrate their effectiveness. To aid in conducting federal evaluations, the government has instituted programs like HHS's 1% Evaluation Set-Aside program, which offsets the cost of evaluation and provides opportunities for capacity-building. However, challenges remain. GAO documents several barriers in its case study report on the assessment of information dissemination, including: variations in local-level program implementation; assessing the impact of multi-media programs; observation of delayed outcomes; reliance on self-report; and accounting for all factors contributing to behavior change. Despite these barriers, agencies are expected to implement rigorous evaluations that deliver reliable, timely, and useful results. In this session, three examples of ongoing, large-scale evaluations of federal cancer control programs will be presented. Authors will share methodologies and findings, including challenges and strategies to address them. A discussion around how best to meet the challenges associated with conducting these evaluations will follow.
Evaluation of the National Network of Tobacco Cessation Quitlines (NNTCQ) Initiative
Candace Deaton Maynard,  National Institutes of Health,  maynarc@mail.nih.gov
The goal of NNTCQ is to ensure access to cessation services. NNTCQ provided funding to enhance or establish quitlines; made services available until states had a quitline; and established a single access number that routes callers to state services. Currently, all 50 states and DC have quitlines, and 1-800-QUIT-NOW has received over 600,000 calls since November 2004. A program logic model and a three-phased evaluation plan were developed. The design, methods, and outcomes from the first-phase process evaluation will be shared. This evaluation was innovative in that it employed primary data collection via key informant interviews and secondary data analysis from an annual survey to assess questions including "How did quitlines use the resources provided?" and "To what extent did support facilitate or hinder quitlines?" Initial findings will be used to 1) strengthen partnerships within cessation research and practice, 2) build and enhance capacity to provide services, and 3) increase the usage and sustainability of quitlines.
Evaluation of the National Cancer Institute's National Body & Soul Dissemination
Felicia Solomon,  National Institutes of Health,  solomonf@mail.nih.gov
Body & Soul is a faith-based program to increase fruit and vegetable consumption among African Americans. Body & Soul is based on over 10 years of research in churches and, when implemented as intended, is successful in increasing fruit and vegetable consumption. Currently, NCI is leading efforts to disseminate the program to churches nationally and to evaluate these efforts. This presentation will report preliminary evaluation findings. The evaluation is being conducted in three phases: (1) describe the model, (2) assess implementation of the model, and (3) assess the impact of the dissemination. A logic model has been developed to support the evaluation of the dissemination. Inputs, outputs, and expected outcomes of the dissemination will be discussed. We will also discuss the challenges of evaluating the transfer of research to practice.
The Impact of a Smoking Cessation Media Campaign in the Military
Herbert Baum,  National Institutes of Health,  herbert.m.baum@orcmacro.com
TRICARE, the US military health system, is concerned about the rate of tobacco use among young members of the military. Macro International was tasked with developing, implementing, and evaluating a media campaign to promote tobacco cessation. Data on tobacco use were gathered via in-person interviews from 200 individuals at each of four military bases, representing the various branches of the service (Army, Navy, Air Force, and Marines). A 30-day pilot media campaign was launched in late February 2007, incorporating radio, print media (post newspapers), and poster advertisements. When the pilot ends, in-person interviews will be conducted with military personnel at these same bases. Comparing results from the two surveys provides a measure of changes in behavior as well as actions taken. Awareness of the media campaign will also be estimated. This paper reports on the issues involved in conducting an evaluation of this type in a military environment.

Session Title: Evaluating the Effectiveness of Community Prevention Coalitions: An Interim Report on the Evaluation of the Drug-free Communities Support Program
Panel Session 347 to be held in Pratt Room, Section B on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
David Chavis,  Association for the Study and Development of Community,  dchavis@capablecommunity.com
Discussant(s):
Kenneth Shapiro,  Office of National Drug Control Policy,  kshapiro@ondcp.eop.gov
Abstract: While the use of coalitions to prevent disease and promote health has been popular for many years, the evaluations of such initiatives are extremely challenging. The proposed panel will discuss how the program evaluation of the Drug-Free Communities Support Program addresses some of the unique challenges of evaluating community prevention coalitions through the use of a typology that captures how coalitions develop or mature over time and an innovative statistical technique to compare communities with and without Drug-Free Community Coalitions. In addition to using innovative methodology, this panel will present preliminary findings about the capacities and characteristics of Coalitions that are making positive public health changes in their community. These preliminary findings significantly improve our field's understanding of how Coalitions can facilitate community change to reduce substance abuse. Finally, the panel will conclude with a discussion of how this evaluation has addressed the challenges associated with using self-reported data.
How Do You Evaluate Community Prevention Coalitions?: Design of the Evaluation of Drug-free Communities Support Program
Jeanine Christian,  Battelle Centers for Public Health Research and Evaluation,  christianj@battelle.org
The Office of National Drug Control Policy funds the Drug-Free Communities Support Program and its evaluation. The Program is designed to build community capacity to prevent substance abuse among our nation's youth through Community Anti-Drug Coalitions. This presentation provides an overview of the design of the program evaluation. The specific objectives of the evaluation are to: (1) Assess whether the program has made an impact on reducing the substance abuse outcomes at the community, state, and national level; (2) Assess whether Drug-Free Community Coalitions have increased the capacity and effectiveness of coalitions; and (3) Identify specific factors that contribute to Coalitions' ability to prevent substance abuse. This presentation will also introduce key components of the evaluation which will be discussed in greater detail in other panel presentations.
Framework for a Typology of Community Substance Abuse Prevention Coalitions
Joie Acosta,  Association for the Study and Development of Community,  jacosta@capablecommunity.com
Central, and unique, to the evaluation of the Drug-Free Communities Support Program is the recognition that Coalitions develop or mature over time. To capture this development, the evaluation team reviewed the scientific literature and consulted with experts and Coalition leaders, to develop a typology to classify Coalitions into one of four 'stages of development' (i.e., Establishing, Functioning, Maturing, and Sustaining). The purpose of the typology is to aid in the evaluation of the program in three ways: first, by identifying how Coalitions evolve in their abilities to reduce substance abuse; second, by identifying how they institutionalize these capacities to help communities come together to prevent substance abuse and related problems; and third, by identifying how effectively they focus activities on environmental change and capacity building. This presentation will discuss the development of the typology and its application to the evaluation of the program.
A Nationwide Comparison of Communities With and Without Drug-Free Community Coalitions
Ben Pierce,  Battelle Centers for Public Health Research and Evaluation,  pierceb@battelle.org
The Drug-Free Communities Support Program is based on the assumption that communities with Drug-Free Community Coalitions will more effectively address the substance abuse problems in their community. Therefore, to assess the effectiveness of the program it is critical to compare communities with and without Drug-Free Community Coalitions. However, this comparison cannot be done using traditional methods because they assume that all Coalitions are implementing the same strategies in the same way under the same circumstances. Drug-Free Community Coalitions are in 49 states, target different sized and overlapping populations, and focus on a broad range of outcomes. In response, the evaluation team developed an innovative statistical technique, comparing state and national trends with local trends and patterns across sites, in order to address the lack of an appropriate control or comparison group. This presentation will focus on the development and application of this innovative statistical technique.
Examining Effectiveness: What are the Characteristics of Successful Coalitions in the Drug-Free Communities Support Program?
David Chavis,  Association for the Study and Development of Community,  dchavis@capablecommunity.com
The success of the Drug-Free Communities Support Program will be determined by its ability to reduce substance abuse in communities. Many Drug-Free Community Coalitions have been successful in achieving positive outcomes in their target communities, but little is known about the capacities and other characteristics associated with these Coalitions. A preliminary analysis was conducted to learn about the capacities and other characteristics associated with successful Coalitions, defined as those with significantly faster rates of reduction in 30-day past use of alcohol, marijuana, and tobacco compared to the average coalition. Characteristics of Drug-Free Community Coalitions that had significantly faster rates of reduction in youth use of one or more of the three targeted substances (i.e., alcohol, tobacco, marijuana) will be discussed. These findings support the developmental framework of the Drug-Free Communities Program evaluation, suggesting that Coalitions may become more successful as they develop their capacities.
Navigating the Challenges of Evaluating Community Prevention Coalitions
Jennifer Mason,  Battelle Centers for Public Health Research and Evaluation,  malsonj@battelle.org
The national evaluation of the Drug-Free Communities Support Program offers a unique set of barriers and challenges to even the most seasoned evaluation veteran. The last presentation in this panel will provide an in-depth exploration of the challenges associated with the evaluation of this community-based program. The role of diversity in implementation, reliance on self-report data, navigating the evaluation through the eyes of multiple stakeholders, and considerations associated with noncompliance and data collection will be discussed. Time will also be spent describing how the evaluation team countered each of these challenges with evolving and innovative strategies. Despite the challenges, the evaluation has the potential to demonstrate not only if but also how Drug-Free Community Coalitions reduce substance abuse, and it will make an important contribution to the science of prevention and evaluation.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: A Time Sequencing Evaluation Technique for Exercise Evaluation
Roundtable Presentation 348 to be held in Douglas Boardroom on Thursday, November 8, 11:15 AM to 12:45 PM
Presenter(s):
Lisle Hites,  Tulane University,  lhites@uab.edu
Abstract: Evaluation of drills and exercises typically consists of developing exercise objectives into checklists of measurable and observable items or events. However, such evaluation techniques are ill suited to gathering and quantifying less predictable facets of exercise outcomes and effectiveness. By focusing exclusively on pre-identified exercise objectives, many aspects of response effectiveness data may be overlooked. This presentation will address an exercise evaluation technique that utilized time sequencing to assess the effectiveness of emergency response in a series of multidisciplinary simulated avian influenza outbreaks. Through use of this technique, assessment of this rich data set resulted in the identification of many different and unexpected insights into emergency response effectiveness.
Roundtable Rotation II: Linking Monitoring, Evaluation and Internal Audit in International Emergency Response to Increase Effectiveness
Roundtable Presentation 348 to be held in Douglas Boardroom on Thursday, November 8, 11:15 AM to 12:45 PM
Presenter(s):
Jason Ackerman,  Catholic Relief Services,  jackerma@crs.org
Carlisle Levine,  Catholic Relief Services,  clevine@crs.org
Stuart Belle,  World Vision International,  stuart_belle@wvi.org
Alex Causton,  Catholic Relief Services,  acauston@crspk.org
Abstract: The international NGO community's ability to leverage organizational learning generated by monitoring and evaluating emergency responses, from the Rwanda genocide to the Pakistan earthquake, is mixed. The roundtable presenters suggest that collaboration between NGO internal audit and M&E practitioners will increase the likelihood that M&E recommendations lead to long-term, positive emergency response outcomes. Between the M&E and internal audit functions, a broad array of skill sets, knowledge, and capabilities is available, all of which can be more effectively deployed before, during, and after an emergency response in order to increase intervention effectiveness. The roundtable discussion will use the international NGO response to the Pakistan earthquake to highlight M&E successes and challenges associated with improving emergency response outcomes. Those highlights include: internal audit compliance authority reinforces M&E recommendations; collaborate, don't duplicate; and, to be mutually successful, M&E and internal audit need a common vocabulary and understanding in their assessment approaches.

Session Title: Exchange Outcome Assessment Linkage System (E-GOALS): A United States Department of State Web-Based Approach to Assessing the Performance of International Educational Programs
Panel Session 349 to be held in Hopkins Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Cheryl Cook,  United States Department of State,  cookcl@state.gov
Abstract: The Bureau of Educational and Cultural Affairs' (ECA) Office of Policy and Evaluation within the U.S. Department of State is tasked with assessing the performance of its exchange programs. It has developed a logic model that organizes these diverse programmatic activities and objectives into nine measurable outcomes. These outcomes are then measured using performance indicators through customized surveys delivered by an online system called E-GOALS. The system has the capacity to deliver web-based surveys in multiple languages. The panel will share its learning experiences in combining performance measurement and system delivery. Specifically, the panel will: (1) provide a summary of the E-GOALS survey system and its development; (2) outline the nine bureau-level performance outcomes; (3) discuss the construction of the nine indicators that are based on the outcomes; (4) demonstrate examples of our pre, post, and follow-up templates; and (5) highlight critical features, e.g., the multi-language database.
Part 1: A Brief Overview of the Exchange Outcome Assessment Linkage System (E-GOALS) System and its Development
Cheryl Cook,  United States Department of State,  cookcl@state.gov
Part 1: The panel will begin with a brief overview of the E-GOALS system and its development. The presenter for part 1 will be Ms. Cheryl Cook, the E-GOALS Director. She is knowledgeable regarding program management and system development and is the individual most familiar with the E-GOALS system.
Part 2: An Outline of the Nine Bureau Level Performance Outcomes - Part 3: The Construction of the Nine Indicators That Are Based on the Nine Bureau Outcomes (Question Bank)
Steven Gaither,  United States Department of State,  gaithersa@state.gov
Part 2: An outline of the nine ECA Bureau performance outcomes will be presented, along with the theoretical basis for the logic model. Part 3: The construction of the nine indicators will be discussed, demonstrating how they were directly derived from the nine bureau performance outcomes. Dr. Steve Gaither, a senior evaluation officer with the ECA Evaluation Division of the U.S. Department of State, will present parts 2 and 3. A former professor of statistics, research methods, and systems theory, he fills the role of the ECA Division's statistician and uses the E-GOALS system as a primary data source.
Part 4: Examples of Our Pre, Post and Follow-up Templates - Part 5: Highlight Critical Features, e.g., Multi-language Database
Michelle Hale,  United States Department of State,  halemj2@state.gov
Part 4: The presenter will show examples of our pre, post, and follow-up templates and discuss the financial implications of using electronic versus paper versions of the surveys. Part 5: The presenter will discuss the characteristics of the multi-language database and briefly outline the process of preparing a translation for a survey. The presenter for parts 4 and 5 will be Ms. Michelle Hale, the survey creation and design coordinator in the U.S. Department of State, ECA Evaluation Division. She is experienced in preparing, administering, and conducting analysis for domestic and foreign-language survey projects.

Session Title: Building a Framework for Public Diplomacy Evaluations: Lessons Learned and Best Practices in Public Diplomacy Evaluation
Panel Session 350 to be held in Peale Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Melinda Crowley,  United States Department of State,  crowleyml@state.gov
Discussant(s):
Norma Fleischman,  United States Department of State,  fleischmanns@state.gov
Abstract: This panel discusses three pilot evaluation studies launched in FY'06 by the newly created Public Diplomacy Evaluation Office (PDEO), U.S. Department of State. PDEO combines the evaluation staffs of the Bureau of Educational and Cultural Affairs (ECA), the Bureau of International Information Programs (IIP), and the Office of Policy, Planning and Resources in the Office of the Under Secretary for Public Diplomacy and Public Affairs (R/PPR). PDEO promotes opportunities for organizational learning and provides a nimble structure for transforming recommendations into actionable program improvements. The panel focuses on three projects implemented through PDEO. The first presentation involves the American Corners program, a partnership between U.S. Embassies and foreign host institutions, usually public or university libraries. The second presentation involves the Strategic Media Outreach Performance Assessment (SMOPA), one of several projects that measure and assess the effectiveness of U.S. Embassy public diplomacy. The third presentation focuses on the Mission Activity Tracker (MAT), a global tool for tracking public diplomacy outreach at U.S. Embassies.
Building the Architecture for Evaluating the American Corners Program Globally
Melinda Crowley,  United States Department of State,  crowleyml@state.gov
This work describes a seven-month pilot evaluation of the American Corners Program, consisting of survey questionnaires, focus groups, case studies, observations, and in-person individual interviews. Data collection occurred in Malaysia, Indonesia, South Korea, and Thailand. American Corners is a partnership between U.S. Embassies and foreign host institutions, usually public or university libraries. American Corners serve as conduits into American culture, society, and values via book collections, printed and multimedia materials, the Internet, USG-sponsored speakers, and other programming offered to the general public. This pilot evaluation documents initial American Corners Program outcomes and impacts, best practices, and recommendations for program improvement. A retrospective approach assesses the effectiveness of American Corners on each of four major public diplomacy indicators: audience reach, incorporation of U.S.-sponsored information and materials, changes in understanding and perceptions of the United States, and participant satisfaction. A formative component identifies effective practices in developing and managing American Corner sites.
Building a Foundation for Assessing Media Outreach at United States Embassies
James Alexander,  United States Department of State,  alexanderjt@state.gov
This presentation addresses the process of establishing a foundation for evaluating the effect of U.S. embassy media outreach activities. The Strategic Media Outreach Performance Assessment (SMOPA) is one of several projects designed to measure and assess the effectiveness of public diplomacy (PD). This project addresses three performance measures that focus on identifying measurable improvement in the tone and accuracy of coverage of U.S. policies in foreign media outlets. SMOPA is not a completed project. Yet the process of building a base for evaluation where none has existed before is instructive. In the case of SMOPA, field trips to several embassies, consultations with local media experts, and a review of relevant literature have shown the obstacles to assessing the influence of PD programming on host country media presentations. This discovery has necessitated the re-evaluation of the performance measures and a linked, concerted effort to develop tools for assessing a difficult subject matter.
Developing a Global Tool for Tracking Public Diplomacy Outreach at United States Embassies
Catalina Lemaitre,  United States Department of State,  lemaitrecx@state.gov
Pressure from OMB, the American public, and other stakeholders to demonstrate the effectiveness and impact of Public Diplomacy has resulted in increased attention to the systematic measurement of Public Diplomacy activities. The Mission Activity Tracker (MAT) is one of several projects supporting this effort. The MAT system supports the collection of output data for Public Diplomacy (PD) performance measures and baseline information for public diplomacy program evaluation. MAT is a web-based, globally accessible system, soon to be available at embassies across the world, to monitor local PD activities, summarize information relevant to PD initiatives (e.g., themes, objectives, audience reached, media placements), and generate reports. MAT will be piloted through April and launched globally in May 2006. This presentation addresses the process for developing a system to track, monitor, analyze, and report on Public Diplomacy outreach efforts worldwide. Particular attention will be paid to challenges and lessons learned in developing and piloting the system.

Session Title: Macro-level and Micro-level Methodologies for Evaluating Education System Functioning in Afghanistan
Multipaper Session 351 to be held in Adams Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Edward Kissam,  JBS International Inc,  ekissam@jbsinternational.com
Discussant(s):
Roger Rasnake,  JBS International Inc,  rrasnake@jbsinternational.com
Jo Ann Intili,  JBS International Inc,  jintili@jbsinternational.com
Abstract: This session examines the evaluation research toolkit necessary to effectively track initiatives for strengthening education systems in developing countries, using Afghanistan as a case study. The presentations draw on panelists' analyses of two macro-level datasets (the 2005 Afghanistan National School Survey and the 2003 NRVA), on their experience conducting micro-level community case studies in remote rural areas of the country, and on their ongoing ethnographic research in a demonstration cluster school initiative. The presentations will show that the micro-level research provides crucial supplementation to the national assessment using UNESCO's EFA framework in order to guide effective education system reform. Recommendations will be presented regarding the types of capacity-building needed to assure reliable research, data collection, and analysis.
Challenges in Interpreting National Survey Data on Education: Moving From Summary Tabulation to Practical Action
Craig Naumann,  JBS International Inc,  cnaumann@jbsinternational.com
Shannon Williams,  JBS International Inc,  swilliams@jbsinternational.com
Edward Kissam,  JBS International Inc,  ekissam@jbsinternational.com
The presenters collaborated in detailed analyses of data from Afghanistan's Ministry of Education's 2005 National School Survey. These differed from previous analyses in that efforts were made to clean a dataset generated with limited resources and under difficult data collection conditions, and to cross-tabulate key variables rather than simply generating national-level indicators of system status. The presenters will describe key findings from these analyses and their implications for assessing Afghanistan's progress in rebuilding an education system devastated by years of conflict. The discussion will include strategies to monitor and respond to student dropout, and teacher training initiatives to respond to dramatic variations from province to province in teacher qualifications, school size, and range of instruction provided. We will also present the team's recommendations for improved school survey design and practical issues to be addressed in strengthening the applied research capacity to reliably monitor national progress in education system reconstruction.
From Ritual Flowchart to Complexities of Real-world Action: Understanding the Local Community Context of School Functioning as an Element of Formative Evaluation
Mohammad Javad Ahmadi,  Creative Associates International Inc,  mohammadj@af.caii.com
Bianca Murray,  JBS International Inc,  bmurray@jbsinternational.com
Afghanistan's centralized command and control education system faces challenges in its efforts to effectively impact local instruction and student outcomes in a rural country with little infrastructure. Decentralization has been an important strand in both international donors' and Ministry of Education strategic planning. However, implementation of this macro-level strategy is problematic due to a lack of information on variations in local conditions. This results in reliance on 'cookie-cutter' models for school administrator and teacher training and for overall systemic change. The presenters describe their formative evaluation research in support of implementation of the first phase of a national initiative to strengthen local schools and quality of instruction via parallel training for school management teams and teachers. Evidence is presented that attention to variations in local conditions, to local problem-solving strategies, to local perspectives on educational objectives, and to the different sorts of local resources available contributes significantly to the design of promising decentralization initiatives.
Capacity-Building Challenges, Requirements, and Strategies for Strengthening National Education Systems' Evaluation Research Capacity
Trish Hernandez,  JBS International Inc,  thernandez@jbsinternational.com
Shannon Williams,  JBS International Inc,  swilliams@jbsinternational.com
Craig Naumann,  JBS International Inc,  cnaumann@jbsinternational.com
The authors describe and discuss the practical challenges inherent in efforts to develop the technical capacity of Afghanistan's Ministry of Education to conduct the applied research needed to effectively monitor progress in education system reconstruction and evaluate ongoing strategic initiatives. The discussion includes attention to the specific challenges and types of solutions needed to address problems at each stage in the evaluation research process: developing a baseline profile of system status; systematically identifying research priorities; formulating efficient and viable research strategies; creating workable sampling approaches in the absence of adequate sampling frames; generating and piloting study instruments; collecting, managing, cleaning, and analyzing study data; and reporting findings to decision-makers.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Authentic Demand and Sustainable Community Change: Testing a Theory and Making the Case
Roundtable Presentation 352 to be held in Jefferson Room on Thursday, November 8, 11:15 AM to 12:45 PM
Presenter(s):
Audrey Jordan,  Annie E Casey Foundation,  ajordan@aecf.org
Mary Achatz,  Westat,  achatzm1@westat.com
Thomas Kelly,  Annie E Casey Foundation,  tkelly@aecf.org
Abstract: Making Connections is a shared effort by the Annie E. Casey Foundation, residents, organizational and systems partners, employers, and others to achieve measurable and sustainable improvements in the life chances of children and families in tough neighborhoods in 10 mid-size cities. The effort began in late 1999 and early 2000 with a broad theory that articulated the field's best thinking about what it would take to do this. Each community then adapted the elements of this theory to develop and sequence strategies in ways that built on local history and context and addressed local needs and priorities. This approach is yielding an increasingly robust and testable theory of community mobilization for action and results. This roundtable will focus on what we are calling authentic demand, an emergent area of learning about how resident leadership, civic engagement, community organizing, and social network strategies, individually and in combination, contribute to the development of new partnerships with local government, funders, service providers, schools, and businesses that work to improve outcomes for families and children.
Roundtable Rotation II: Maximizing Learning From Evaluation Findings for Diverse Stakeholders in a Community Capacity-building Initiative
Roundtable Presentation 352 to be held in Jefferson Room on Thursday, November 8, 11:15 AM to 12:45 PM
Presenter(s):
Liz Maker,  Alameda County Public Health Department,  liz.maker@acgov.org
Mia Luluquisen,  Alameda County Public Health Department,  mia.luluquisen@acgov.org
Tammy Lee,  Alameda County Public Health Department,  tammy.lee@acgov.org
Kim Gilhuly,  University of California,  inertiate@yahoo.com
Abstract: Evaluators working in community capacity-building (CCB) initiatives face the challenge of meeting multiple interests of stakeholders involved in implementing these complex projects. CCB interventions strive to improve a community's health and wellbeing by strengthening residents' leadership skills and relationships with policy makers. CCB interventions also require multi-level strategies aimed at changing individual behaviors, group relationships, social environments and power structures. When conducting evaluations in CCB initiatives, evaluators must balance the competing interests of a wide range of stakeholders, including community residents, organizers, funders and decision-makers. For example, residents and organizers may be particularly interested in “telling people's stories” about neighborhood change. Decision-makers may want to focus on measurable changes in health and social outcomes. This Roundtable will allow for sharing experiences and lessons learned in Oakland, California, on balancing the interests of stakeholders; followed by a dialogue about maximizing learning from evaluation findings with diverse stakeholders in CCB evaluations.

Session Title: Evaluating a State Comprehensive Cancer Control Program: Planning, Implementation and Initial Results
Panel Session 353 to be held in Washington Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Lisa Stephens,  National Cancer Institute,  stephens.lisa@mayo.edu
Abstract: The increase in collaboratives to address large-scale chronic health issues warrants improved evaluation methods. Effective collaborations have been shown to include key components regarding leadership, capacity building, synergy, partnership, and member satisfaction. Because measurable outcomes are often long-term, formative evaluations that measure collaborative functioning can inform stakeholders on areas requiring improvement in order to reach long-term goals. This presentation will discuss the Minnesota Cancer Alliance's (MCA) internal evaluation of its first two years of partnership. The presentation will include background on comprehensive cancer control in Minnesota, the use of an internal, volunteer committee structure for developing and guiding evaluation activities, the utility of theoretical underpinnings, formative evaluation results, and the sharing of recommendations with stakeholders.
Introduction to Comprehensive Cancer Control in Minnesota: Planning, Implementation and Evaluation
Lisa Stephens,  National Cancer Institute,  stephens.lisa@mayo.edu
In 2002 the Minnesota Department of Health was funded by the Centers for Disease Control and Prevention to establish the state's first comprehensive cancer control plan. The rationale of comprehensive cancer control is to build partnerships, reduce unnecessary duplication, improve coordination of resources, and develop innovative strategies to reduce the burden of cancer. Cancer Plan Minnesota, the state's cancer-related strategic plan, contains twenty-four objectives and specific strategies for meeting them that encompass the spectrum of cancer, from prevention and early detection to treatment, survivorship, and end-of-life care. The Minnesota Cancer Alliance (MCA) provides governance for implementation of the plan. This session introduces Minnesota's Comprehensive Cancer Control program, its plan, and the MCA structure, including the role of the evaluation committee and how a five-year evaluation plan was designed to provide direction in measuring progress and identifying areas for program refinement.
Developing Tools and Methods to Operationalize the Evaluation Plan
Priscilla Flynn,  Mayo Clinic,  flynn.priscilla@mayo.edu
Minnesota Cancer Alliance (MCA) stakeholders identified an interest in tracking the number and types of members, workgroup activity and progress, partnership synergies, and both direct and indirect project costs. Membership registration, MCA annual meeting attendance, and work group meeting notes were reviewed to partially provide these data. In addition, a member survey was designed including elements indicated in the literature as integral to forming effective collaboratives. These include a clear understanding of member roles, leadership efficacy, member diversity, communication effectiveness, perceived synergy among members, and stakeholder engagement. Semi-structured interviews were conducted to gain a deeper understanding of the benefits and barriers experienced by members. This session will discuss the theory and process of evaluating collaborative partnerships, including assessing, educating, and focusing the committee on evaluation methods to measure outputs; accessing or developing data collection tools; analyzing data; reporting to stakeholders; and framing unexpected outcomes.
Reporting to Stakeholders and Lessons Learned
Julia Johnsen,  University of Minnesota,  john2314@umn.edu
Mixed methods were used to evaluate the processes through which the Minnesota Cancer Alliance (MCA) fosters the development of synergistic relationships across its membership and the perceived efficacy of these efforts in producing examples of increased collaboration among MCA members. An online member survey, semi-structured interviews, direct observations of MCA meetings, and review of meeting minutes were used to collect data from July to September 2006. Findings revealed that Alliance members felt strongly that the membership was the coalition's greatest asset and perceived that the Alliance added value to their professional lives. However, members who were interviewed found it difficult to identify examples of increased collaboration. The literature offered possible explanations for why these apparently incongruent findings could have been expected. This session will explore how the MCA evaluation team used Community Coalition Action theory to frame these findings and will discuss the strategies used to present the findings to stakeholders.

Session Title: The Contribution of Evaluation to Building the Capacity of Indigenous, Not for Profit Organizations in New Zealand: Implementation of the Child, Youth and Family Provider Development Fund
Multipaper Session 354 to be held in D'Alesandro Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Discussant(s):
Kate McKegg,  The Knowledge Institute Ltd,  kate.mckegg@xtra.co.nz
Abstract: In 2000, the New Zealand government allocated funding to the Department of Child, Youth and Family Services to work with iwi (tribal) and Maori organizations to strengthen their capacity to (1) deliver government programmes and services, and (2) develop their own programmes and services to meet local needs. These papers describe the evidence-based development processes and shared 'learnings' that have informed and built a community of practice between the funder and the evaluators to support the ongoing implementation of this capacity building fund. The papers focus on evaluation utility and how evaluation and evaluation methods have contributed to the policy process, fund administration, and training development and delivery, and they discuss the critical attributes and factors that have supported this relationship from 2001 to the present day.
Taking the Time and Building the Relationship: The Approach Taken to the Design and Implementation of the Iwi and Maori Provider Workforce and Development Fund Evaluation
Nan Wehipeihana,  Research Evaluation Consultancy Ltd,  nanw@clear.net.nz
The paper explores the importance of strong, trusting, and respectful relationships within evaluation: how working with the evaluation sponsor over a period of six months to gain an in-depth understanding of the aims, intent, and operation of the fund, prior to the development of an evaluation design, built trust and confidence in the evaluators; how, on the strength of the relationship, funding was made available for the collective development of the evaluation approach and design, involving all eight members of the evaluation team in a series of workshops and planning meetings; how the evaluators reciprocated by providing feedback and data, in advance of final reports, to support decision-making processes related to the ongoing management and implementation of the fund; and how the relationship (and the quality of the evaluation outputs) has supported the ongoing involvement of the evaluators from 2001 to 2007, and an invitation to continue that involvement until 2009.
Utilizing Evaluation in the Ongoing Implementation of the Iwi Maori Provider Development Fund
Sonya Cameron,  Department of Child, Youth and Family Services,  sonya.cameron006@cyf.govt.nz
This paper discusses how evaluation has contributed to the implementation and strategic direction of the IMPDF: how the literature review and evaluation framework have been utilized since the evaluation, and how the evaluation findings contributed to changes in the funding application process, to the nature of support provided to provider organizations, and to the range of development activities that could be funded. One of the most significant contributions has been the evaluators' development of an organizational capacity self-assessment tool. The application of that tool has greatly enhanced the ability of providers to identify their own needs and plan their own development; its value lies both in its use as an assessment tool and as a capacity building activity in its own right. The paper concludes by discussing the potential contribution of the evaluation to building the capacity of the wider social services and voluntary sector in New Zealand.
The Contribution of Evaluation to Building the Capacity of Iwi and Maori Social Service, Not-For-Profit Provider Organizations
Miri Rawiri,  Department of Child, Youth and Family Services,  miri.rawiri004@cyf.govt.nz
Provider self-determination and sustainable development were key messages that arose out of the Iwi and Maori Provider Workforce Development Fund (IMPDF) evaluation. This paper explores how the development of an organizational capacity self-assessment tool (and process) has supported providers to be self-determining, and how the capacity building activities have contributed to organizational sustainability. It describes how providers have engaged with the process, the benefits and capacity gains they report to date (after only two years of use), and how they have used the self-assessment tool, information, and processes, in ways unimagined by the evaluators, to support the ongoing operation of their organizations.
Building an Evidence Base to Support the Sustainability of Iwi and Maori Social Service Provider Organizations and the Development of Cultural Practice Models
Fiona Cram,  Katoa Ltd,  fionac@katoa.net.nz
The paper explores the contribution of the various evaluation outputs, including evaluation reports, two literature reviews, the organizational capacity self-assessment tool and processes, and programme logic training, to building the organizational capacity of iwi and Maori social service providers. It then explores how these capacity building activities have assisted providers to better articulate the cultural basis from which they work: their values, rationale, and ways of working. As a result, iwi and Maori providers are better able to document and describe why they do what they do and how it contributes to outcomes, which helps to bridge some funders' knowledge gap around cultural practice models.

Session Title: Evaluation Reports: Reframing the Concept for the Real World
Panel Session 355 to be held in Calhoun Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Zoe Clayson,  Abundantia Consulting,  zoeclay@abundantia.net
Discussant(s):
Gale Berkowitz,  Packard Foundation,  gberkowitz@packard.org
Abstract: Foundations use evaluation reporting for a variety of purposes, e.g., resource allocation, internal reflection, external communication, and responding to Board of Directors requests. This panel will explore, from the foundation and evaluator perspectives, the expectations for reporting and the approaches found most useful for decision makers. Each of the panelists will explore this topic from their particular area of expertise. Patricia Patrizi will focus on how foundations have traditionally used reports for internal reflection, the status of the field now, and thoughts regarding future directions. John Nash will discuss the role of diagrammatic reporting in helping foundations move along the continuum from desired change to strategy. Zoe Clayson brings further depth to the conversation by presenting a web-based approach moving from strategy to management, non-profit grantee implementation, and communications. Finally, Gale Berkowitz will discuss the three presentations from the perspective of the needs of today's foundation decision makers.
Reflection and Learning From Evaluation Reports: Perspectives Across Foundations
Patricia Patrizi,  Patrizi Associates,  patti@patriziassociates.com
This presentation will draw upon the author's broad range of experience working with non-profits and philanthropies in the areas of evaluation, strategic planning, and organizational learning. As chair of the Evaluation Roundtable, she is well situated to discuss how foundation decision makers have used evaluation reports in the past for organizational reflection, present several approaches that are currently in use, and pose questions related to the future of reporting within the foundation context.
A Diagrammatic Approach to Fostering Common Talk on Impact
John Nash,  Open Eye Group,  john@openeyegroup.com
Indicators of foundation effectiveness are many, from implementation metrics on expenditures and number of grantees, to output and outcome measures on numbers of people within target groups served or policies changed. Increasingly, the discussion on effectiveness is centering on the topic of impact. Understanding that foundations are only effective to the extent that the organizations they fund are successful, demonstrating impact at the program level can be challenging work. Foundations that position their programs to address root causes can better integrate nonprofit organizations as partners in the definition of program goals, targets and metrics. This increases the probability that their grant making will lead to sought-after results. John Nash will discuss how a diagrammatic form of report can serve as a key starting point in framing foundation programs for increased impact.
Innovation Using the Web and Visuals to Strengthen Common Talk
Zoe Clayson,  Abundantia Consulting,  zoeclay@abundantia.net
This presentation will focus on real-time 'reporting' using the web to facilitate decision making, learning, and communications. Grants management, program planning, and communication tools are bundled together, strengthening a common vision for implementing strategy. The use of visual approaches to assessment, including photography and videography coupled with more standardized measurements, will be presented. The incorporation of non-profit voices into the common talk will be emphasized.

Session Title: Reflections and Recommendations Concerning Culturally Competent Evaluation
Think Tank Session 356 to be held in McKeldon Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Presenter(s):
Arthur Hernandez,  University of Texas, San Antonio,  art.hernandez@utsa.edu
Julie Desjarlais,  Turtle Mountain Community College,  jdesjarlais@tm.edu
Heyda Martinez,  SUAGM,  heyd_martinez@yahoo.com
Ana Marie Pazos-Rego,  University of Miami,  apazosrego@aol.com
Iris Prettypaint,  University of Montana,  iris.prettypaint@mso.umt.edu
Delia J Valles-Rosales,  New Mexico State University,  dvalles@nmsu.edu
Elizabeth Yellowbird,  University of North Dakota,  elizabeth.demaray@und.nodak.edu
JoAnn W L Yuen,  University of Hawaii, Manoa,  joyuen@hawaii.edu
Abstract: The session will provide an opportunity for session leaders and facilitators to present on progress resulting from participation in the National Science Foundation-QEM (Quality Education for Minorities Network) Broadening Participation Initiative for Minority-Serving Institution faculty, and for attendees to learn from and contribute to these ongoing efforts. Each panelist will briefly discuss the resulting development and operationalization of concepts and activities related to Culturally Competent Evaluation in teaching, research, and service, focusing on diverse communities, evaluators in training, and particular institutional applications and implications (key question). Groups will be formed focusing on specific academic and professional applications, and attendees will discuss, elaborate, and make suggestions related to their own interests and expertise. The panel will collect and organize the proceedings, which will be emailed to all involved, in an effort to facilitate self-examination and the further development of all participants' and attendees' evaluation skills and implementation strategies.

Session Title: Reflective Inquiry Into Learning Through Evaluation Practice
Panel Session 357 to be held in Preston Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Daniel Folkman,  University of Wisconsin, Milwaukee,  folkman@uwm.edu
Abstract: The theme for this year's AEA conference is evaluation and learning. The proposed panel presentation will provide three examples of evaluation practice that employs a participatory action research (PAR) approach to program development and assessment. The panel members are developing a framework to assess the immediate and long-term learnings that evolve from their PAR strategies and will share preliminary findings from longitudinal case studies that are being compiled as part of a larger study. The panel session will encourage discussion and contributions from the audience aimed at eliciting concrete examples of how evaluation practitioners recognize and/or assess the learning that occurs among themselves and program stakeholders as it flows from their evaluation practice.
Learning While Creating Pathways to College
Daniel Folkman,  University of Wisconsin, Milwaukee,  folkman@uwm.edu
This presentation describes the work being done with a Milwaukee 21st Century Community Learning Center (CLC) located in a high school. Conversations were held with a small group of students who had college aspirations but needed significant guidance and encouragement along the way. This triggered several meetings with representatives from the local technical college and state university to identify the multiple pathways that exist for students to access their campuses. This panel presentation describes how a group of high school teachers, CLC staff, and representatives from the university and vocational school coordinated their services and what they learned along the way. The presentation will demonstrate how the reflective inquiry framework is being employed as part of the program planning, implementation, and evaluation process. Preliminary findings include the opportunities and challenges that were encountered in transferring this knowledge into institutional practices that better serve the college-bound high school student.
Learning Within a Parent Education Agency
Devarati Syam,  University of Wisconsin, Milwaukee,  devasyam@uwm.edu
This panel presentation will describe the work being conducted with a Milwaukee-based parent education agency. The original intent was to evaluate how parents receive parenting information from the agency's printed materials and how it helps them with their parenting needs. The approach to evaluation through participatory action research has brought agency staff members together in developing tools for their program evaluation and has also led to individual and organizational learning. The presentation will focus on how the role of the evaluator has shifted and changed in scope as we have focused more on the learning component in the evaluative process. A framework will be developed and shared as part of the presentation to show how this learning can be captured and what the long-term impacts of our role as evaluators are.
Learning within Hmong Family Strengthening Programs
Kalyani Rai,  University of Wisconsin, Milwaukee,  kalyanir@uwm.edu
This panel presentation reports the evaluation findings of a family strengthening program that was offered through four Hmong community-based agencies located in separate communities throughout Wisconsin. The evaluation approach used a participatory action research strategy that emphasized an inclusive 'whole community' approach to organizational capacity building, leadership development, and family empowerment. This three-year evaluation project, completed five years ago, provides a window into the long-term learning that has contributed to the social, political, and psychological empowerment of the participants. However, some corresponding disempowering impacts are also identified. This presentation will end with a discussion of the evaluation-of-learning approach taken in this study. Special attention will be given to the complex and often contradictory nature of learning and the impact it has on individuals, families, and community agencies within a Hmong cultural context.

Session Title: Sharing, Defining Ethics, and Reflections on Training
Multipaper Session 358 to be held in Schaefer Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the AEA Conference Committee
Chair(s):
Cheri Levenson,  Cherna Consulting,  c.levenson@cox.net
To Share or Not to Share: A Discussion of the Possibility of a Data Sharing System for American Evaluation Association Members
Presenter(s):
Dreolin Fleischer,  Claremont Graduate University,  dreolin@gmail.com
Abstract: The American Evaluation Association (AEA) provides members with multiple means of communicating and sharing knowledge. In the spirit of promoting these objectives, this paper will be a springboard for a discussion about the possibility of establishing a system for sharing data. What would be the strengths and limitations of making datasets, which otherwise might lie dormant after their primary use is accomplished, available to other AEA members?
Reflecting on Learning, Evaluation and Self-evaluation: The Training Dimension of Evaluation
Presenter(s):
Serafina Pastore,  University of Bari,  serafinapastore@vodafone.it
Abstract: Reflexivity is today the central dynamic of intentional learning. Indeed, it is one of the most relevant theoretical and methodological devices of contemporary educational training, which focuses on the professional's subjectivity and ability to convey meaning to certain events and experiences. As a consequence, what emerge are introspective training practices, memory recurrences, self-analysis of experience protocols, and hermeneutic exercises. Blending with reflection, evaluation becomes an internal process of monitoring carried out by the learning subject throughout his or her training pathway. In this sense, reflection may stimulate a new awareness that increases the learner's wish to change, evolve, re-plan, develop, and expand. During reflective evaluation, the subject explores and observes himself or herself and evaluates the quality and quantity of the changes in which he or she is involved.

Session Title: State and Local Public Health Emergency Preparedness: Evaluation at the Centers for Disease Control and Prevention Expands Focus on Capacities to Include Outcomes
Panel Session 359 to be held in Calvert Ballroom Salon B on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Craig Thomas,  Centers for Disease Control and Prevention,  cht2@cdc.gov
Discussant(s):
Edward Liebow,  Battelle Centers for Public Health Research and Evaluation,  liebowe@battelle.org
Abstract: State and local preparedness for public health emergencies is supported by the Centers for Disease Control and Prevention's (CDC) Division of State and Local Readiness. The need for enhanced preparedness was substantially underscored by the 9/11 attacks and an anthrax release through the US postal system the next month. Panelists from the CDC and Battelle will trace the evolution of public health preparedness evaluation since 2001 and discuss emerging research topics. Panelists will review the history of preparedness measurement and evaluation; identify central evaluation questions of interest concerning accountability, preparedness, and program effectiveness; discuss evidence issues; and explore with session attendees how evaluation findings can be fed back into performance improvement by state and local health agencies responsible for preparedness and response. Disclaimer: The findings and conclusions in this panel are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention.
Historical Overview of the Evolution of the Evaluation Focus for the Public Health Emergency Preparedness Program
Patricia Bolton,  Battelle Centers for Public Health Research and Evaluation,  bolton@battelle.org
As early as 1999 the Centers for Disease Control and Prevention (CDC) was engaged in designing and overseeing public health system preparation for bioterrorism events. Following the terrorist events in late 2001, Congress appropriated supplemental funds to enhance the public health preparedness program. The CDC guidance to program grantees in 2002 was designed around six preparedness focus areas based on the content domain of public health. When the Department of Homeland Security (DHS) was activated in 2003 its mission included the development of a framework for Federal, state, and local emergency preparedness for all hazards and an evaluation methodology based on the use of emergency response exercises. The CDC program refocused its guidance to health departments to emphasize the measurement of outcomes as well as progress in establishing emergency response capabilities. This presentation describes the evolution of the CDC's evaluation methods to incorporate more systematic performance measurement and documentation.
Fund Federally, Respond Locally: Evaluating Public Health Emergency Preparedness in Diverse Contexts
Davis Patterson,  Battelle Centers for Public Health Research and Evaluation,  pattersond@battelle.org
CDC's Public Health Emergency Preparedness Cooperative Agreement funds U.S. states, territories, and four cities to develop and improve their capabilities for preventing and responding to public health emergencies. As a federal program, it requires a national monitoring and evaluation strategy, yet preparedness and response are first and foremost local processes. Furthermore, the state and local organization of public health varies greatly across the country, including centralized, decentralized, mixed, and shared management structures. This presentation will review efforts to date to evaluate preparedness (e.g., the Public Health Preparedness and Response Capacity Inventory, assessments by various national health organizations) focusing on several dimensions: process v. outcomes, benchmarks v. quality improvement, and routine v. emergency operations. The implications of diverse public health systems for data comparability across jurisdictions and across time and for CDC's role in providing evaluation technical assistance will also be examined.
Crawl, Walk, Run: An Incremental Approach for Demonstrating Accountability in Centers for Disease Control and Prevention's Public Health Emergency Preparedness Cooperative Agreement
Sue Lin Yee,  Centers for Disease Control and Prevention,  sby9@cdc.gov
In recent years, many federal programs have embraced performance measurement as the 'quick and dirty' method for demonstrating fiscal and programmatic accountability. In the rush to measure impact, a thorough examination of the program, mechanisms for ensuring feedback, and strategies for turning evaluation results into programmatic change have inadvertently been de-emphasized. From the grantee perspective, whether performance measurement is useful for programmatic improvement is in question. Since 1999, CDC's Public Health Emergency Preparedness (PHEP) cooperative agreement has funded 62 state, territorial, and local grantees to build capacity and capability in responding to public health emergencies. This presentation takes a candid look at CDC's efforts to define 'progress' in this emerging field and examines whether performance measurement alone can adequately measure impact. The presenter will offer alternative strategies for demonstrating accountability and progress in a manner that promotes an environment of learning and simultaneously expands the evidence base of public health preparedness.

Session Title: Evaluation Contracts: Considerations, Clauses, and Concerns
Demonstration Session 360 to be held in Calvert Ballroom Salon C on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Independent Consulting TIG
Presenter(s):
Kristin Huff,  Independent Consultant,  khuff@iyi.org
Abstract: This workshop will address evaluation contract issues from the perspectives of the consultant, the purchaser, and management. Through sample contracts, participants will learn about important contractual considerations such as deliverables, timelines, confidentiality clauses, rights to use/ownership, budget, client and evaluator responsibilities, protocol, and more. In addition, the workshop will include some discussion about how to negotiate contracts, as well as contract addendums. Participants will receive materials as examples of the items discussed. Samples will include independent consultant contracts, contracts developed by purchasers, and management contracts for those hiring multiple evaluators. Participants are encouraged to bring topics for discussion during the question and answer session at the end.

Session Title: Locating Evidence of Research-based Extension Education Programs
Think Tank Session 361 to be held in Calvert Ballroom Salon E on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Heather Boyd,  Virginia Tech,  hboyd@vt.edu
Discussant(s):
Bart Hewitt,  United States Department of Agriculture,  bhewitt@csrees.usda.gov
Dawn Gundermann,  University of Wisconsin,  dmgundermann@wisc.edu
Abstract: Providing research-based educational programs is a major focus, mission and application of the nationwide extension system. Yet, providing evidence of a research base and its contribution to the program has been elusive and difficult to achieve for extension workers and evaluators. Some of this difficulty stems from how different extension contexts define and use research to inform programming. To demonstrate alignment with national extension goals, it is critical that extension evaluators and educators identify ways in which research informs and shapes outreach educational products. Participants in this Think Tank will explore definitions of research, ways in which research is used and incorporated into programming, indicators that research is being used in extension educational programs and the role of evaluators in defining the research and education context in extension programs. Depending on the wishes of the participants, the group may also explore how 1862, 1890 and 1994 institutions approach this issue.

Session Title: Evaluating Teacher Professional Development
Multipaper Session 362 to be held in Fairmont Suite on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Rabia Hos,  University of Rochester,  rabiahos@yahoo.com
Review of Evidence on the Effects of Teacher Professional Development on Student Achievement: Findings and Suggestions for Future Evaluation Designs
Presenter(s):
Kwang Suk Yoon,  American Institutes for Research,  kyoon@air.org
Teresa Duncan,  American Institutes for Research,  tduncan@air.org
Silvia Lee,  American Institutes for Research,  wlee@air.org
Kathy Shapley,  Edvance Research Inc,  kshapley@edvanceresearch.com
Abstract: Our research team conducted a systematic and comprehensive review of the research-based evidence on the effects of professional development (PD) on growth in student achievement in reading/ELA, mathematics, and science. We posed the question: What is the impact of providing teacher professional development on student achievement? In addition to addressing the research question, our goal is to share with researchers examples of evaluation models that align with the rigorous standards of the What Works Clearinghouse (WWC). From over 1,300 manuscripts, nine studies emerged as meeting WWC evidence standards. Although the number of studies that met evidence standards was small, the consistency of effect sizes (approximately 0.50) across three content areas suggests that providing training to teachers does have a moderate effect on their students' achievement. In the presentation, we will provide details of the design of PD effectiveness studies meeting the standards and suggestions for future evaluation studies.
Evaluating the Link Between Teacher Professional Development and Student Achievement: A Longitudinal, Mixed-method Approach
Presenter(s):
Barbara Heath,  East Main Educational Consulting LLC,  bpheath@bizec.rr.com
Bonnie Walters,  University of Colorado, Denver,  bonnie.walters@cudenver.edu
Aruna Lakshmanan,  University of North Carolina, Wilmington,  alakshmanan@emeconline.com
Aaron Perlmutter,  East Main Educational Consulting LLC,  aperimutter@bizec.rr.com
Abstract: This paper discusses the first-stage evaluation activities related to professional development efforts in an NSF-funded Math and Science Partnership (MSP) located in Denver, CO. Longitudinal data have been used to determine whether the professional development efforts implemented in this program make a difference in participating teachers' classroom teaching practices. A stratified random sample of teachers was selected to participate in a two-year data collection cycle that included post-intervention surveys, classroom observations, and interviews. Specific questions asking teachers what they are more prepared to do and plan to do after taking the courses are paired with observational data collected using the Reformed Teaching Observation Protocol and interview data from the Levels of Use protocol. These results will be used to determine whether teachers' content knowledge and pedagogical strategies are changing, a necessary step prior to determining whether the teachers' professional development activities may be contributing to student learning.
The Missing Link: Teacher development, Evaluation and Brain Research
Presenter(s):
Barbara Thomson,  The Ohio State University,  barbara@learningstar.org
Tamara J Barbosa,  PhD's Consulting,  dr.barbosa@phdsconsulting.com
Abstract: The goal of this paper is to discuss and explore the intersection between teacher development theory, evaluation theory, and brain research. Current teacher development places emphasis on content, curriculum, and standards. The missing link for teacher professional development is the integration of new advances in brain science and research. How do we integrate this new scientific knowledge into the professional development of teachers? How do we create an effective framework so teachers have this important integration of research data? And how do we evaluate it?
Balancing Change and Complexity: Evaluation of a Statewide Professional Development Program for Literacy Instruction
Presenter(s):
Janice Noga,  Pathfinder Evaluation and Consulting,  jan.noga@stanfordalumni.org
Abstract: The purpose of this paper is to discuss the development of an evaluation process for Ohio's State Institutes for Reading Instruction (SIRI) to capture key process elements related to development and implementation as well as overall outcome variables related to increased professional knowledge and transfer of training to classroom practice. This paper will describe the context variables informing program theory and design of the evaluation methodology as well as implementation of data collection and reporting activities. It will also address how the design and actual implementation of the evaluation methodology adapted in response to program changes and challenges encountered during the course of the project.

Session Title: Evaluating Schools and Processes Within Schools
Multipaper Session 363 to be held in Federal Hill Suite on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Paul Lorton Jr,  University of San Francisco,  lorton@usfca.edu
Evaluating a Problem-solving Model: Including Training and Organizational Factors That Influence the Fidelity of Implementation
Presenter(s):
Elizabeth Cooper-Martin,  Montgomery County Public Schools,  elizabeth_cooper-martin@mcpsmd.org
Heather Wilson,  Montgomery County Public Schools,  heather_m_wilson@mcpsmd.org
Abstract: The Collaborative Action Process (CAP) is a problem-solving model in use by more than 50 schools within Montgomery County Public Schools. CAP employs multidisciplinary teams and focuses on designing interventions to address student needs. A large-scale, implementation evaluation of CAP is underway and includes measures of training and organizational factors that may cause variations in the quality of implementation. The paper describes the CAP model, sampling methodology, data collection, and instrument development. The results focus on whether variations in the fidelity of implementation are explained by the following factors: support from district-level staff, administrative support within the school, types and extent of staff professional development, staff level of knowledge and understanding of CAP, staff perceptions about the feasibility and the benefits of participation in CAP, and team composition. The paper concludes with a discussion of the benefits and challenges of measuring fidelity of implementation of problem-solving models in school-based settings.
What Helps, What Hinders: The Interplay of Conditions Associated with High-performing and Under-performing Diverse, Title I Schools
Presenter(s):
River Dunavin,  Albuquerque Public Schools,  dunavin_r@aps.edu
Abstract: The focus of this presentation is to report findings and lessons learned from a multi-method evaluation of high-performing and under-performing Title I schools in a large urban public school district. Triangulation of multivariate methods including cluster analysis, regression, and Adequate Yearly Progress designations from No Child Left Behind legislation were used in site selection. Archival, survey, direct observation, and interview data were collected and analyzed. Constructs investigated include student achievement priority, implementation of standards-based instruction, school climate of academic optimism, leadership and teacher quality, use of data and short-cycle assessments to drive instruction, instructional coaching fidelity, professional development alignment, family support, student developmental assets, and instructional resource adequacy and availability. The evaluation expands understanding of the interplay of conditions associated with levels of student academic performance in schools having high percentages of diversity and a majority of students from low-income families.
Evaluating Organizational Learning in Education: Modifying and Validating an Instrument With Empirical Evidence From Health Settings
Presenter(s):
Catherine Callow-Heusser,  EndVision Research and Evaluation,  cheusser@endvision.net
Wendy Sanborn,  EndVision Research and Evaluation,  wsanborn@endvison.net
Heather Chapman,  EndVision Research and Evaluation,  hjchapman@cc.usu.edu
Abstract: No Child Left Behind (NCLB) and other educational initiatives implemented in traditionally under-performing schools have increased levels of accountability and evaluation. Outcomes from one initiative, Reading First, are being published and demonstrate positive impacts. Yet many question the sustainability of these programs, and few validated instruments to measure organizational change and sustainability exist in the education literature. Given this dearth of validated instruments, we located the Organizational Change Survey (OCR), an instrument used extensively in the health field with published findings and validation data. We modified the OCR to better fit educational settings by changing health-specific language, eliminating irrelevant scales, and aligning items with the goals of educational programs designed to improve outcomes for typically underserved students and teachers. This presentation will report analyses undertaken to empirically validate the instrument across multiple time points, using data collected from 13 Bureau of Indian Education Reading First schools.
Learning From School Districts: Practicing Effective Decision-making Through the Use of Multiple Achievement Criteria
Presenter(s):
Paul Gale,  San Bernardino County Superintendent of Schools,  ps_gale@yahoo.com
Abstract: School districts generally have latitude in establishing minimum performance criteria for promoting students. Depending on the program and the district's interpretation of what makes a student ready for promotion, the decision-making methods and the data requirements to support them vary greatly. To illustrate the point, K-12 California districts are required to apply a set of locally-adopted criteria to determine whether students have achieved a level of English proficiency at which they are no longer required to receive supplemental language support. However, sites within districts can be very reluctant to use data interactively, especially when the data require teacher or administrator judgment in determining students' overall achievement of the criteria. The presentation will illustrate how three districts have guided their school sites to practice effective decision-making for promoting their English language learners.

Session Title: Research Evaluation of the European Union's Upcoming Framework Programme
Multipaper Session 364 to be held in Royale Board Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Peter Fisch,  European Commission,  peter.fisch@ec.europa.eu
Networks of Innovation in Information Technology: Technology Development and Deployment in Europe
Presenter(s):
Nicholas Vonortas,  George Washington University,  vonortas@gwu.edu
Franco Malerba,  Luigi Bocconi University,  franco.malerba@unibocconi.it
Nicoletta Corrocher,  Luigi Bocconi University,  nicoletta.corrocher@unibocconi.it
Lorenzo Cassi,  Luigi Bocconi University,  lorenzo.cassi@unibocconi.it
Abstract: This evaluation study assesses the effectiveness of collaborative networks and of knowledge transfers between research, innovation, and deployment activities related to Information Technology at the EU and regional levels. In particular, it highlights the linkages and influences between the research networks built through Framework Programme funding in IT, on the one hand, and the technology deployment networks built through EU and regional programmes, on the other. The coalescence of these networks within nine selected regions of the European Community has received the bulk of our attention.
A New System for Research Evaluation Under the European Union's Seventh Framework Programme
Presenter(s):
Neville Reeve,  European Commission,  neville.reeve@ec.europa.eu
Abstract: EU research evaluation operates through a series of rules, principles, and actors; in effect, this may be described as the Community research evaluation system. There are European Commission-level evaluation requirements and frameworks and, within these, the more specific requirements for EU research evaluation. There are key users, such as the Council of Ministers, the European Parliament, the Member States, the Programme Committees of Member States' representatives, and the operational research managers within the European Commission. There are the evaluation specialists and the specialists in various fields of science and technology who provide the independent expertise. And there are the standards, good-practice guidelines, and norms, which are both determined by, and shape the practice of, research evaluation. This paper will look in particular at how the research evaluation system has evolved for the 7th Framework Programme.
The European Union's Seventh Framework Programme and the Role of Evaluation
Presenter(s):
Peter Fisch,  European Commission,  peter.fisch@ec.europa.eu
Abstract: There is a long history to the evaluation of European Union (EU) research activities. Evaluation has played, and continues to play, a central role in legitimating policy and funding decisions, in supporting wider accountability for the funding process, and in improving the efficiency of research implementation. This paper will provide an overall introduction to the EU's research activities and describe the evaluation challenges for the 7th Framework Programme (FP7). This multi-annual funding scheme has been progressively developed since the mid-1980s and comprises a complex set of horizontal and vertical research and research-related themes; it has sometimes been described as the most comprehensive research funding scheme in the world. The paper will also explore the wider policy context of research evaluation in the EU, including the Lisbon strategy and the European Research Area.

Session Title: Evaluating the Cultural Competence of Substance Abuse and Mental Health Services: Policy, Technology, and Practice
Panel Session 365 to be held in Royale Conference Foyer on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
James Herrell,  United States Department of Health and Human Services,  jim.herrell@samhsa.hhs.gov
Abstract: Although cultural competence is widely considered essential to the delivery of effective substance abuse and mental health services, empirical support for this belief is modest, limited in part by inconsistent definitions of the concept and the lack of tested approaches to measuring cultural competence. This panel will describe advances in operationalizing and evaluating organizational and individual cultural competence, and linking it to client outcomes. Panelists will present key findings from the literature, discuss the emphasis of one federal agency, the Substance Abuse and Mental Health Services Administration (SAMHSA), on culturally competent service delivery, describe advances in technologies for defining and evaluating cultural competence cross-sectionally and longitudinally, and discuss the evaluation of a culturally adapted evidence-based practice employed by a SAMHSA grantee. Panelists will engage the audience in a discussion of using the evaluation of cultural competence to improve services (and maybe even to increase chances of receiving grants).
Perspectives on the Evaluation of Cultural Competence in Substance Abuse and Mental Health Services
James Herrell,  United States Department of Health and Human Services,  jim.herrell@samhsa.hhs.gov
Although it is axiomatic that cultural competence is essential to the provision of effective mental health and substance abuse treatment services, defining and measuring cultural competence, and evaluating its contribution to treatment outcomes, have not kept pace with philosophizing about the concept. Competent evaluations of cultural competence and of culturally informed treatment approaches can guide policy, improve services, and - not trivially - contribute to the development of grant applications. The three components of this presentation are: 1) A brief review of recent literature on the association between factors often assumed to contribute to cultural competence and client outcomes; 2) a description of selected approaches to assessing cultural competence at the organizational and individual levels; and 3) a discussion of the conceptualization of cultural competence by the federal Substance Abuse and Mental Health Services Administration (SAMHSA), as reflected in its funding announcements, and the evaluation implications of that conceptualization.
An Operational Framework for Bridging Cultural Competency Evaluation Policy and Practice
Ramón (Ray) Valle,  San Diego State University,  rvalle@mail.sdsu.edu
The Cultural Competency Organizational Assessment Form (CCOAF) provides a framework for evaluating cultural competency [CC] from both an organizational policy and a practice standpoint. It can serve as a CC self-assessment tool for the organization. It can likewise be used to assess CC knowledge and skills at the level of practitioner performance outcomes. Additionally, the CCOAF can be employed in whole or in part, depending on the evaluator's objectives. The instrument emerges from direct observations of both organizations and practitioners engaged in cross-cultural service delivery, where it has been tested. The framework readily accommodates both quantitative and qualitative data collection and analysis. One of the instrument's particular strengths is that it permits separating cultural factors (e.g., preferred language or cultural norms and customs) from socioeconomic circumstances (e.g., literacy or poverty). The CCOAF also provides a bridge between CC evaluation policy and direct practice.
Developing and Evaluating Culturally Adaptive Cognitive Behavioral Therapy
Gregory Archer,  Archer, Searfoss and Associates Inc,  gregarcher@msn.com
In 2006, Valle del Sol (VdS) in Phoenix, Arizona, received a second SAMHSA Targeted Capacity Expansion grant to increase prevention services and to develop culturally sensitive psychotherapy techniques for Latino seniors. The evaluator of the VdS Tiempo de Oro (TdO) program will discuss the evaluation techniques of the prevention services and initial process evaluation efforts to adapt standard Cognitive Behavioral Therapy into Culturally Adaptive Cognitive Behavioral Therapy (CACBT). The evaluator will discuss the goals, evaluation process, and general outcomes during the adaptation of CACBT. In addition, some of the more interesting principles and practical cultural applications will be presented. The ongoing TdO Prevention and CACBT project, and its evaluation, have had to contend with multiple challenges, including serving monolingual Spanish speakers, low literacy rates, participants being 60 years or older, high rates of physical illness, limited mobility, and community mistrust related to discrimination and immigration.

Session Title: Empowerment Evaluations: Insights, Reflections, and Implications
Multipaper Session 366 to be held in Hanover Suite B on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Brian Marriott,  Calgary Health Region,  brian.marriott@calgaryhealthregion.ca
Evaluation as a Learning Process for Teachers and School Organizations: Moving From a Judgmental to an Empowerment Model
Presenter(s):
YunHuo Cui,  East China Normal University,  cuiyunhuo@vip.163.com
XueMei Xia,  East China Normal University,  xiaxuemei1120@gmail.com
Abstract: Evaluation conducted by evaluators remains largely judgmental toward schools and teachers. How to make evaluation a learning process for teachers and school organizations is still a challenge. In this session, we will describe the steps, the mechanisms, and the tools that helped us move from a judgmental to an empowerment model. We will discuss evidence of change in teachers' and principals' mental models of evaluation. The value of this evaluation is significant because it not only describes the implementation strategy in the Chinese context, but also points out the advantages and weaknesses of evaluation as a learning process that helps individuals and school organizations engage in single-loop and double-loop learning.
Reflections on Empowerment Evaluations in South Africa 2004-7
Presenter(s):
Ray Basson,  University of the Witwatersrand,  raymond.basson@wits.ac.za
Abstract: This paper reflects on four empowerment evaluations, each completed in South Africa for degrees registered at the University of the Witwatersrand, Johannesburg. All used the approach to increase the sustainability of curriculum reform [in implementing inclusion, art curriculum design, values education, hospice care], were informed by the principle of social justice, and asked in different ways how the approach needs adapting for use in this context. Each evaluation found, first, that drawing evaluees--particularly those typically marginalized--into the process seemed to increase sustainability at the micro level in the short term, though evaluation effects seemed uneven, influencing individuals differently, and prompted positive evaluee responses to the process. Second, each found that drawing those on the margins into the evaluation to be trained and to 'drive' the process both excited evaluees and helped them unlearn their prior experience of evaluation, learning instead that their--the 'emic'--view counts. And third, the use of this approach outside its country of origin suggests, among other things, that evaluators need to be more directive in the evaluation than perhaps originally intended, and need to think more closely about the principles informing evaluation in order to realize the intentions of macro- and micro-level reform. More surprising, in a country that discounts the importance of evaluation research, is the uptake of this approach in the face of the demand for 'etic' techniques, data, and adjudications presently in vogue in implementing national curriculum reform.
Lesson Study: Professional Development for Empowering Teachers and Improving Classroom Practices
Presenter(s):
Robin Smith,  Florida State University,  smith@bio.fsu.edu
Abstract: In the United States, lesson study is an emerging form of ongoing, teacher-led professional development in K-12 schools. Developed in Japan over fifty years ago, lesson study involves a group of teachers collaboratively planning and observing a lesson for evidence of student learning. Although teachers' observations focus on the suitability of the lesson's instructional strategies for facilitating student learning, improving overall practice is the goal of the process. Lesson study provided a group of elementary teachers with a process for evaluating their practice in a collegial setting to assess their lessons for evidence that desired outcomes were achieved. The paper will present preliminary findings and interpretations of lesson study as implemented in an elementary school. The study attempted to qualitatively evaluate whether Japanese lesson study enabled the participating teachers to determine the course of their own professional growth through an approach that is similar to conducting an empowerment evaluation.
Creating a Sense of Community Through Empowerment Evaluation of an Academic Program
Presenter(s):
Asil Ozdogru,  University at Albany,  ao7726@albany.edu
Abstract: In the evaluation of an academic program, students and faculty, as major groups of stakeholders, can perform various phases of evaluation. This study demonstrates a case example of a graduate program utilizing its constituents in the planning and implementation steps of its evaluation. The program accomplished a valid and responsive evaluation as a result of the collaborative project between students and faculty. Triangulation of different perspectives and experiences provided a rich array of information in the identification of major program components, development of essential outcome measures, and interpretation of evaluation results. This collaborative approach also resulted in the strengthening of sense of community among program members by enhancing community knowledge and ownership. Findings and experiences from this participatory evaluation process will be shared to exemplify the lessons learned and best practices for academic program evaluation from an empowerment perspective.

Session Title: Quantitative Methods: Theory and Design TIG Business Meeting and Presentation - Theory Soup for the Quantitative Soul
Business Meeting Session 367 to be held in Baltimore Theater on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
TIG Leader(s):
Patrick McKnight,  George Mason University,  pem@alumni.nd.edu
George Julnes,  Utah State University,  gjulnes@cc.usu.edu
Fred Newman,  Florida International University,  newmanf@fiu.edu
Karen Given Larwin,  Gannon University,  kgiven@kent.edu
Dale Berger,  Claremont Graduate University,  dale.berger@cgu.edu
Presenter(s):
Melvin Mark,  Pennsylvania State University,  m5m@psu.edu
Discussant(s):
William Trochim,  Cornell University,  wmt1@cornell.edu
Abstract: Evaluation theory, in general, has a different -- but complementary -- focus than do most evaluators who write about quantitative methods. Methodologists, for example, discuss new and optimal ways of estimating the counterfactual, while evaluation theorists discuss whether, when and why evaluators should try to estimate the counterfactual. Methodologists debate alternative models for data analysis, while evaluation theorists instead debate the alternative ends toward which an evaluation's data and findings might be put. The two lines of thinking might profit from more intersection. Several examples are sketched in support of this assertion.

Session Title: Quality Counts: Becoming Bilingual in Quality Improvement and Evaluation in Human Services and Health Care Settings
Panel Session 368 to be held in International Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
James Sass,  LA's BEST After School Enrichment Program,  jim.sass@lausd.net
Abstract: The emphasis on accreditation in human services and health care settings has spurred the institutionalization of Quality Improvement departments. Some evaluators might view these as internal evaluation departments. In this session, presenters offer illustrations of the parallel development of the evaluation and quality improvement traditions, their integration, what they can learn from one another, and models of implementation that have shown signs of learning and improvement at both the individual and organizational levels.
Integrating Quality Improvement and Internal Program Evaluation to Enhance Program Learning and to Facilitate Conditions for Program Success
Lois Thiessen Love,  Uhlich Children's Advantage Network,  lois@thiessenlove.com
"People really dread a site visit by you" is a frequent comment I hear when I mention that I work in Quality Improvement. Unfortunately, a common experience of quality improvement is a volume of paperwork and a focus on how one's program and efforts are not measuring up. Although measuring compliance with standards is one of the tools of quality improvement, the goal of a quality improvement function within an organization is to facilitate ongoing improvement of processes and of the level of achievement of desired outcomes in human services. The activities and tools of quality improvement can also be sources of good data for program evaluation. This presentation will examine models of quality improvement programs in human services and illustrate how these models can be used to build evaluation capacity and create learning opportunities for program improvement and evaluation utilization.
Introducing Evaluation Tools to a National Child Abuse Prevention Organization--Program Quality, Participant Outcomes, Model Fidelity
Margaret Polinsky,  Parents Anonymous Inc,  ppolinsky@parentsanonymous.org
Parents Anonymous® Inc. is an international network of accredited organizations that implement Parents Anonymous® Mutual Support Groups for adults and children, with the goal of addressing risk and protective factors related to the prevention of child abuse and neglect. About one-third of Parents Anonymous® Inc. accredited organizations conduct some type of satisfaction survey with their group participants, fewer collect outcome evaluation data, and as yet, none collect model fidelity data. In an effort to improve the quality of Parents Anonymous® programs, Parents Anonymous® Inc. is working toward providing the accredited organizations with valid, standardized tools to measure participant outcomes and model fidelity. This panel presentation will discuss the journey toward instrument development and validation, and adoption of the tools by the Parents Anonymous® Inc. network.
Learning from Patients: Identifying and Transforming the Culture(s) of a Community Hospital in New Orleans
Paul Longo,  Touro Infirmary,  longop@touro.com
When patients are randomly sampled and surveyed after their hospital stay, the resulting "patient satisfaction" data - quantitative and qualitative - are available alongside "clinical" and "outcome" data. This presentation describes the recent rollout of a patient surveying tool coupled with a hospital-wide cultural transformation initiative. It examines the relative status of these two types of service-excellence and clinical data and the advent of a few new mechanisms and practices that are helping the organization become accustomed to learning from both kinds of findings. Finally, it explores the efficacy of promoting functional organizational values as the single set of criteria for evaluating the emergence of the intended culture of quality and service excellence.

Session Title: Mainstreaming and Supporting Needs Assessment in a Large Organization
Panel Session 369 to be held in Chesapeake Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Needs Assessment TIG
Chair(s):
Maurya West Meiers,  World Bank,  mwestmeiers@worldbank.org
Abstract: The panelists will discuss their experiences putting into place a system and tools to enable teams in a large organization to build needs assessment into their training and technical assistance projects. The session will focus on how to structure a process of learning about needs assessments, how to provide the resources necessary to implement needs assessment broadly within an organization, and how to communicate the benefits of needs assessment to a variety of stakeholders. Tools, methods, and practical examples will be highlighted.
Creating a Multi-dimensional Learning Series to Build Awareness, Skills, and Enthusiasm About Needs Assessment
Maurya West Meiers,  World Bank,  mwestmeiers@worldbank.org
Maurya West Meiers will discuss how an evaluation team within an international development organization - the World Bank Institute - encouraged providers of training and technical assistance to incorporate needs assessment into their projects by crafting an integrated series of learning experiences on needs assessment. This session will highlight the importance of understanding what needs assessment is, communicating the benefits of needs assessment, and explaining how a systematic approach can make needs assessment easier. Ms. West Meiers manages the monitoring and evaluation training program for the World Bank Institute Evaluation Group and conducts strategic evaluations of learning programs, including a recent major review of the framework for World Bank staff learning. Publications include 'Fiscal Decentralization in an Era of Globalization: An Evaluation of the World Bank Institute's Decentralization Program' and 'An Impact Evaluation of the Sound Bank Management Program and Banking System Development Program in the Former Soviet Union'.
Needs Assessment for Program Strategy and for Learning: Methodology and Practical Tools
Ryan Watkins,  George Washington University,  rwatkins@gwu.edu
Ryan Watkins, Associate Professor in the Educational Technology Leadership Program at George Washington University, will discuss methodologies for needs assessment, with particular attention to considerations for conducting needs assessments that feed into program strategy decisions, and for needs assessments to inform learning activities. Hands-on tools, such as sample questions, instruments, and an online knowledge base, will be emphasized. Professor Watkins teaches and conducts research on needs assessments, instructional design, e-learning, performance technologies, and research methods. His books include Performance by Design: The selection, design, and development of performance technologies (HRD Press, 2007), 75 E-learning Activities: Making online courses more interactive (Wiley/Jossey-Bass, 2005), and Strategic Planning for Success: Accomplishing high impact results (Wiley/Jossey Bass, 2003). He frequently contributes to the Performance Improvement Journal and the Performance Improvement Quarterly.
Dissemination and Consultations to Foment Implementation of Needs Assessment in an Organization
Joy Behrens,  World Bank,  jbehrens@worldbank.org
Joy Behrens will discuss issues to consider in crafting a dissemination strategy to encourage learning and practice of needs assessment in an organization. She will describe the dissemination methods used in the World Bank Institute (WBI) example and the rationales for implementing or combining particular methods. Ms. Behrens manages the process for evaluating participants' reactions to training activities sponsored by WBI, and other recent projects include a strategic review of WBI's country program briefs, collaboration on WBI's Level 1 Evaluation Toolkit, and collaboration on Advancing a Reporting and Results Framework for the World Bank's External Training. She is particularly interested in qualitative analysis and communication of evaluation results to catalyze organizational change. Past work includes evaluations of welfare reform programs and consulting on systems reform efforts within public social service agencies.

Session Title: Multi-year Evaluation of the Arts Education Reform Efforts in South Carolina
Multipaper Session 370 to be held in Versailles Room on Thursday, November 8, 11:15 AM to 12:45 PM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Ching Ching Yap,  University of South Carolina,  ccyap@gwm.sc.edu
Discussant(s):
Ken May,  South Carolina Arts Commission,  mayken@arts.state.sc.us
Abstract: The multi-year Arts Education Research Project seeks to track the progress and evaluate the effects of arts education reform efforts in various schools that received assistance from the Arts in Basic Curriculum (ABC) Project. These schools were committed to developing arts programs based on the ABC blueprint, which reflects the belief that the arts are an indispensable part of a complete education because quality education in the arts significantly adds to the learning potential of all students. The annual objectives of the Arts Education Research Project have varied among (a) documenting arts instruction, (b) determining the effects of increased, modified, or integrated arts instruction, and (c) identifying potential influences that promote, inhibit, or sustain changes in schools that implemented arts reform. This session will include three papers that discuss several key findings of the project and the ongoing effort of developing instruments that measure arts integration.
Summary of Five-Year Evaluation in Arts Education Reform Effort
Ching Ching Yap,  University of South Carolina,  ccyap@gwm.sc.edu
In the first years, the evaluation of arts education reform efforts for the Arts Education Research Project was based on (a) observations of arts classes (music, visual arts) and general education classes (ELA, Science, and Math), (b) surveys of teachers, parents, and students, and (c) interviews of teachers and administrators. This paper highlights the major findings, including the challenges encountered by teachers and schools in implementing arts reform. The evaluators recommended that schools and stakeholders consider (a) leadership and advocacy, (b) realistic and endorsed expectations, (c) mutual respect and appreciation across disciplines, (d) resources, and (e) communication and feedback when implementing and evaluating arts reform efforts. Finally, the evaluators recommended using student arts achievement results and developing arts integration evaluation tools, in addition to observations and interviews, to investigate the effects of arts reform efforts.
Implications of Arts Programming Characteristics on Student Achievement
Leigh D'Amico,  University of South Carolina,  kale_leigh@yahoo.com
Pu Peng,  University of South Carolina,  lemonpu@yahoo.com
The objective of this evaluation project was to compare arts programming and implementation strategies for ABC schools with disparate arts and non-arts achievement levels. Although the majority of the ABC schools demonstrated success in increasing communication among non-arts and arts teachers, enhancing the curriculum using arts-based strategies, and improving student arts and non-arts achievement, a small percentage of schools did not realize their student achievement goals. This presentation will include details regarding the evaluation strategies employed and findings of this evaluation project. In general, the evaluators identified (a) teacher quality, (b) support of arts programming by non-arts teachers and administrators, (c) level of arts integration, and (d) arts-based extracurricular opportunities as arts programming areas that may affect student arts and non-arts achievement. By increasing awareness of the areas that affect arts program implementation, schools can address opportunities and challenges in their arts efforts that allow students to reach their maximum potential.
Developing Arts Integration Evaluation Tools
Christine Fisher,  Winthrop University,  fisherc@winthrop.edu
Varied levels of arts integration effort were observed at ABC schools because each school's individualized five-year arts strategic plan was written to address its unique needs with regard to school environment, budgeting, and student population characteristics. In an effort to clarify definitions for, and identify levels of, arts integration efforts, the ABC Project initiated a task force to develop evaluation instruments for communicating with teachers and administrators. The first instrument, the Essential Elements for Arts Infusion Programming Survey, was designed to inform schools of the missing elements needed for arts infusion, drawing on Opportunity-to-Learn Standards. The Arts Infusion Continuum was developed to inform schools regarding best practice in arts integration efforts. This paper will present the development process of these two instruments and the initial validation study conducted.
