Evaluation 2008

Session Title: Evaluation Education, Policy, Culture, and Practice in Brazil
Multipaper Session 680 to be held in Capitol Ballroom Section 1 on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Thomaz Chianca,  COMEA Communication and Evaluation Ltd,  thomaz.chianca@gmail.com
Facilitating Evaluation Policy Through Information Analysis: An Approach in the Brazilian Higher Education
Presenter(s):
Ana Carolina Letichevsky,  Cesgranrio Foundation,  anacarolina@cesgranrio.org.br
Maria Vitória Carvalho,  Cesgranrio Foundation,  vitoria@cesgranrio.org.br
Abstract: This paper proposes a procedure for investigating the relation between the characteristics of undergraduate students and their academic environments and those students' performance on the National Exam on the Performance of University Students, administered within the scope of the National System of Higher Education Evaluation, which was conceived to establish an evaluation policy across all Brazilian higher education institutions. After the introduction, the paper presents the context of higher education in Brazil and then traces the path of higher education evaluation in the country. The required statistical techniques are briefly described before the suggested procedure is presented. Two case studies demonstrate the viability of the procedure, and the main conclusions show that, even without constructing a cause-and-effect model, it is possible to find a link between certain characteristics of students and their academic environments and their performance.
The Brazilian and Ethiopian experience: Challenges and Opportunities in Teaching Monitoring and Evaluation in HIV/Aids Control Programs
Presenter(s):
Elizabeth Moreira Dos Santos,  FIOCRUZ,  bmoreira@ensp.fiocruz.br
Wuleta Lemma,  Tulane University,  lemmaw@gmail.com
Carla Decotelli,  Tulane University,  carladecotelli@gmail.com
Carl Kendall,  Tulane University,  carl.kendal@gmail.com
Kifle Woldermichael,  Jimma University,  betty.kifle@yahoo.com
Sonia Natal,  FIOCRUZ,  sonia.natal@gmail.com
Abstract: A partnership between CDC Brazil, the National School of Public Health (ENSP/FIOCRUZ), and Tulane University initiated a program to build capacity in M&E of HIV/AIDS control programs. During 2005, Tulane University triangulated a similar initiative in Ethiopia with support from PEPFAR and the Ethiopian government. Both the Ethiopian and Brazilian experiences involve the development of a diploma and Master's program course in M&E. Both curricula were developed around expected uses for evaluation and defined competencies and roles of evaluators. From those expected outcomes, an integrated curriculum was developed utilizing the methodology of Problematization. This approach assumes that adult learning is based on previous experience and knowledge; thus, to improve the teaching-learning process, class experience has to replicate reality. Three contributions were: a) a model for international cooperation and technological transferability; b) an integrated curriculum for teaching professional evaluation; c) a framework for evaluation capacity development based on networking and the mobilization of knowledge.
Evaluation Culture, Policy and Practice: Reflections on the Brazilian Experience
Presenter(s):
Thereza Penna Firme,  Cesgranrio Foundation,  therezapf@uol.com.br
Ana Carolina Letichevsky,  Cesgranrio Foundation,  anacarolina@cesgranrio.org.br
Angela Cristina Dannemann,  House Education and Assistance Association of Zezinho,  angeladann@gmail.com.br
Vathsala I Stone,  University at Buffalo - State University of New York,  vstone@buffalo.edu
Abstract: There is no doubt that the existence of evaluation policies makes the evaluator’s job easier, while providing more transparency to the evaluation process and more security to those involved. However, this can only happen when policies are presented to potential stakeholders in clear language as they are disseminated and used to guide practice. Therefore, building an evaluation culture is necessary to effectively implement an evaluation policy so that it can be fully utilized. This paper briefly introduces what the authors understand by the concepts of “evaluation culture” and “evaluation policy.” It then discusses the importance of evaluation policies for the practice of both evaluation and meta-evaluation and points out the possible consequences of the absence of such policies. Recommendations are derived from these considerations to inspire procedures for building and implementing evaluation policies. Finally, reflections are presented on the Brazilian experience with “culture-policy-practice” interrelations in evaluation.

Session Title: Peer Reviews for Independent Consultants: New Peer Reviewer Orientation
Skill-Building Workshop 681 to be held in Capitol Ballroom Section 2 on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Sally Bond,  The Program Evaluation Group LLC,  usbond@mindspring.com
Presenter(s):
Marilyn Ray,  Finger Lakes Law and Social Policy Center Inc,  mlr17@cornell.edu
Abstract: At AEA 2003, the Independent Consulting TIG embarked on the professional development of members through a Peer Review process to provide collegial feedback on evaluation reports. The IC TIG appointed Co-Coordinators to develop and recommend guidelines, a framework, and a rubric for conducting Peer Reviews within the membership of the Independent Consulting TIG. At AEA 2004, the process, framework, and rubric the Co-Coordinators had developed were presented and revised during a think tank. Volunteer Peer Reviewers were recruited and oriented to the Peer Review process and rubric. This update and orientation process was repeated in 2005, 2006, and 2007. In 2008, we propose once again to present a skill-building workshop during which we will provide an update on the Peer Review project, offer a forum for volunteer reviewers to share their experiences, and orient new reviewers.

Session Title: Building Evaluation Capacity for Environmental Programs
Multipaper Session 682 to be held in Capitol Ballroom Section 3 on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Lisa Flowers,  Boone and Crockett Club,  flowers@boone-crockett.org
Using the Internet to Teach Environmental Education Program Evaluation
Presenter(s):
Janice Easton,  University of Florida,  jeaston@ufl.edu
Lyn Fleming,  Research Evaluation and Development Services,  fleming@cox.net
Abstract: Applied Environmental Education Program Evaluation (AEEPE) is an online course offered through the Environmental Education and Training Partnership at the University of Wisconsin. Over the last four years, approximately 250 non-formal environmental educators from 46 states and 13 countries have enrolled in this 12-week course. Surveys of former course participants have shown significant increases in knowledge of program evaluation as well as improvements made to over 100 programs serving more than 125,000 people across the nation. This presentation will highlight the development of AEEPE and the challenges associated with teaching environmental education program evaluation in an online format. These challenges include striking a balance between evaluation theory and practice, accommodating the schedules of working professionals, and creating a social learning experience for each semester’s cohort of students.
Uncharted Waters: Developing Guidelines for Program Evaluation at the Environmental Protection Agency
Presenter(s):
Britta Johnson,  United States Environmental Protection Agency,  johnson.britta@epa.gov
Abstract: EPA is continually working to increase its performance and program evaluation capacity for all Agency programs. One step in this effort, taken in 2008, is the development of evaluation guidelines for individual EPA partnership programs, a set of programs that address environmental protection outside of the traditional regulatory framework. In the nascent world of environmental program evaluation, these guidelines represent the first such document produced for a federal environmental program and likely for any environmental program. The guidelines are intended to introduce partnership program staff, who have little or no evaluation experience, to the practice of program evaluation; to assist them with program evaluation planning, priority setting, and methodology; and to walk them through the steps necessary to produce rigorous evaluations of their programs. This paper highlights the process of guidelines development undertaken at EPA and the challenges faced in attempting to provide direction relevant to programs of great diversity.
My Environmental Education Resource Assistant (MEERA): A Web-Based Resource for Improving Evaluations of Environmental Education Programs
Presenter(s):
Michaela Zint,  University of Michigan,  zintmich@umich.edu
Abstract: “My Environmental Education Resource Assistant” or “MEERA” (www.meera.snre.umich.edu) is an experiment funded by the Environmental Protection Agency and the Forest Service to support the evaluation efforts of environmental educators. MEERA provides step-by-step guidance for conducting evaluations of EE programs as well as other relevant information and resources, such as details on over 20 EE program evaluations. MEERA is different from other clearinghouse-type web sites that offer evaluation resources. For example, MEERA indicates whether resources are most appropriate for educators with “beginner,” “intermediate,” or “advanced” evaluation experience. MEERA was designed and developed based on needs assessments and formative evaluations involving interviews, focus groups, and reviews. Currently an outcome evaluation is being conducted to assess the extent to which MEERA is able to help educators conduct quality evaluations of their EE programs. This presentation will describe MEERA’s features and focus on the methods and results of MEERA’s evaluations.
Developing Evaluation Guidelines for the United States Environmental Protection Agency's "Partnership Programs": The Challenge, the Process, and the Progress
Presenter(s):
Terell Lasane,  Environmental Protection Agency,  lasane.terell@epa.gov
Abstract: In 2008, the Evaluation Support Division at the United States Environmental Protection Agency (EPA) initiated a ground-breaking project for environmental program evaluation: the development of evaluation guidelines for individual EPA partnership programs. “Partnership programs” are generally defined as programs designed to deliver measurable environmental results by motivating a wide variety of participants (companies, organizations, communities, and individuals) to adopt good environmental practices. These new guidelines are intended to introduce partnership program staff, who have little or no evaluation experience, to the practice of program evaluation; to assist them with program evaluation planning, priority setting, and methodology; and to walk them through the steps necessary to produce rigorous evaluations of their programs. This paper highlights the guideline development process undertaken at EPA and the challenges faced in attempting to provide guidelines relevant to environmental programs that are diverse in terms of size, maturity, program type, and the tools they employ.

Session Title: Basic Considerations in Theory-Based Evaluations
Multipaper Session 683 to be held in Capitol Ballroom Section 4 on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
John Gargani,  Gargani and Company Inc,  john@gcoinc.com
Assessing Program Strategies and Priority Outcomes: Evaluating Both Sides of Program Models
Presenter(s):
Kathryn Race,  Race and Associates Ltd,  race_associates@msn.com
Abstract: Through exemplar case studies pulled from evaluation practice, the paper discusses ways in which program strategies are assessed descriptively, such as through rubric development and other assessment tools, when sample sizes prohibit the use of quantitative methods such as structural equation modeling. Examples are taken from education evaluations in formal and informal settings, including a three-year evaluation of a multi-phased, hybrid after-school and family outreach program, a math and science partnership for certification of middle-school teachers in physical science, and science literacy programs for public school teachers. The assessment of program strategies provides a check and balance that helps guide the use of strategies that align with empirical evidence. Through this process, program fidelity becomes an integral part of formative as well as outcomes evaluation relative to strength of intervention and program “dosage” assessment. Implications for applying this approach in other evaluation venues are discussed as well.
Practical Issues in Program Evaluation
Presenter(s):
Doris Rubio,  University of Pittsburgh,  rubiodm@upmc.edu
Sunday Clark,  University of Pittsburgh,  clarks2@upmc.edu
Abstract: As grant and contract applications increasingly require well-developed evaluation plans, program evaluation is becoming a desired skill set. The literature is limited in providing specific examples of how to develop and implement a program evaluation plan. We present different models for evaluating a large, multi-component, institutional program. Across all models, we found that the use of a comprehensive model is critical. The model serves to create a plan that is both formative and summative so that the program can use the information for internal improvements and external reporting. Another important component of a successful evaluation plan is the regular exchange of information with stakeholders to obtain ‘buy-in,’ which facilitates the evaluation. For our program evaluation, we found the use of the logic model framework to be particularly helpful in developing a successful evaluation plan. If designed properly, evaluation can significantly enhance the effectiveness of a program.
The Utility of Logic Models in the Evaluation of Complex System Reforms: A Continuation of the Debate
Presenter(s):
Mary Armstrong,  University of South Florida,  armstron@fmhi.usf.edu
Amy Vargo,  University of South Florida,  avargo@fmhi.usf.edu
Abstract: In his presentation at the 2007 American Evaluation Association conference, Michael Quinn Patton challenged the effectiveness of logic models as an evaluation tool for systems where emerging conditions call for rapid response and innovations. Other researchers such as Leonard Bickman continue to identify the need for approaches such as logic models that articulate the theory that underlies a program or intervention. This paper will contribute to this ongoing dialogue through illustrations of the use of logic models in two related evaluations that examine a privatized child welfare system. The paper will track and illustrate how logic modeling techniques may be effective at different points in time and for different audiences during system development.

Session Title: How to Publish an Article in the American Journal of Evaluation: Guidance for First-Time Authors
Demonstration Session 684 to be held in Capitol Ballroom Section 5 on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Presenter(s):
Robin Miller,  Michigan State University,  mill1493@msu.edu
Katherine Ryan,  University of Illinois Urbana-Champaign,  k-ryan6@uiuc.edu
Michael Hendricks,  Independent Consultant,  mikehendri@aol.com
Sanjeev Sridharan,  University of Edinburgh,  sanjeev.sridharan@ed.ac.uk
Abstract: Last year, we delivered a workshop at AEA in which participants with little experience publishing in a peer-refereed journal were given a basic introduction to the process of publishing in AJE. Over 100 people attended the session, and we were asked to offer it again. We propose to offer the session again in Denver. In the session, we will detail the procedural aspects of submitting a manuscript. Most of our time, however, will be spent teaching participants, using examples from successful submissions, the key steps in writing a professional article and responding to editors' and reviewers' comments. The journal's editorial leadership and members of its Editorial Advisory Board will address specific issues and questions that participants may have about the journal article writing process.

Session Title: Learning Focused Evaluation: Three Perspectives on Leadership Development
Multipaper Session 685 to be held in Capitol Ballroom Section 6 on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Business and Industry TIG
Chair(s):
Judith Steed,  Center for Creative Leadership,  steedj@ccl.org
Abstract: The field of leadership development is complex, and a one-size-fits-all approach does not produce excellent evaluation. The complexity of processes and the multiplicity of outcome targets to be measured at different points in time, with feedback loops in all directions (to the participants, their teams, their organizations, and to the program itself), add up to huge challenges. The papers in this multipaper panel address different facets of learning-focused evaluation, ranging from program learning to coachee learning to team learning, in the context of various leadership development programs in the Center for Creative Leadership suite.
Program Learning: Double Loop Evaluation for Leadership Development Programs
Judith Steed,  Center for Creative Leadership,  steedj@ccl.org
Jessica Baltes,  Center for Creative Leadership,  baltesj@ccl.org
Gina Hernez Broome,  Center for Creative Leadership,  broomeg@ccl.org
Single-loop evaluation is not enough for the field of leadership development. The evaluation of leadership development often focuses on participant learning; however, this field calls for double-loop learning to best evaluate the program and to formatively influence the strength and agility of a responsive and sustaining program design. The presenters share their double-loop evaluation process and the challenges inherent in evaluating leadership development for executive training. They share their view of measuring change and programmatic impact on business executives as these relate to program theory and design strength. This double-loop evaluation pushes beyond the measurement of programmatic value to iteratively support strong program design as well as executive development. Technological, logistical, and participant-engagement challenges will also be shared for discussion and possible problem solving.
Coachee Learning: Constantly Changing Targets/ Constant Process, the Challenge of Evaluating Leadership Development Coaching
Gina Hernez Broome,  Center for Creative Leadership,  broomeg@ccl.org
Jessica Baltes,  Center for Creative Leadership,  baltesj@ccl.org
Judith Steed,  Center for Creative Leadership,  steedj@ccl.org
Evaluating consistent targets from learning processes is hard enough, but the challenge increases when the learning targets are unique to each participant. The authors will share their efforts to explore and define best evaluative practices for evaluating the short- and long-term impact of executive coaching as a follow-on feature of leadership development training. The constant single- and double-loop learning processes that flow through multiple coaching sessions in the corporate environment present important challenges to evaluators trying to measure coaching success as the executives' learning targets emerge and evolve.
Team Learning: Team Programs with Team-Level Objectives
Jessica Baltes,  Center for Creative Leadership,  baltesj@ccl.org
Judith Steed,  Center for Creative Leadership,  steedj@ccl.org
Gina Hernez Broome,  Center for Creative Leadership,  broomeg@ccl.org
Leadership development is often targeted at individual learning goals and impact. However, executives are now increasingly charged with supporting teams as well as independently contributing managers, and our leadership development programs have to adjust to this need. What should be done when the intention of the design is to impact the team rather than just the executives attending the program? The authors will share how they linked the program theory to team-level impact after the training delivery. The authors will present their process and the challenges faced with this shift in programmatic impact for the team leadership development program, which is populated by representatives from multiple teams.

Session Title: Course-Evaluation Designs I: Improvement Oriented Practices
Multipaper Session 686 to be held in Capitol Ballroom Section 7 on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Silvia Swigert,  University of California Irvine,  sswigert@uci.edu
Discussant(s):
Molly Engle,  Oregon State University,  molly.engle@oregonstate.edu
Adopting a New End-Of-Course Evaluation: Enablers, Constraints, Challenges
Presenter(s):
Marcie Bober-Michel,  San Diego State University,  bober@mail.sdsu.edu
Abstract: For nearly three decades, San Diego State University/College of Education has used an internally-designed end-of-course evaluation. Although validity and reliability data were not routinely calculated, anecdotal evidence indicated the form lacked rigor. Students complained about its lack of relevance/substance as well as inconsistent implementation practices among faculty. This presentation covers the author’s four-year effort to help the College move to a more statistically sound method for a) measuring student perceptions of course quality and b) determining how specific changes in policy, administrative oversight, and staffing might improve survey implementation, analytic reporting, and dissemination/use of results.
Using the Student Assessment of their Learning Gains for Course Assessment: Is it Feasible?
Presenter(s):
Tim Weston,  University of Colorado Boulder,  westont@colorado.edu
Abstract: The Student Assessment of their Learning Gains (SALG) is a flexible online assessment tool currently used by over 1,000 undergraduate instructors and over 60,000 students. The survey template allows students to identify course components (e.g., activities, materials, information) that “help them learn” and to make self-assessments of their own understandings and skills. The presenter will 1) provide an overview of the rationale behind the SALG and its use as a formative course design (versus teaching) instrument, and 2) present validity research on how the SALG has been used for both formative and summative assessment at the instructor and departmental levels. Findings include a content analysis of 394 customized instruments, survey results from 138 instructors describing their use of the SALG for course improvement, and an analysis of student responses to open-ended questions from 64 courses. Discussion will focus on the role of assessing implementation factors in course activities.
Shaping Outcomes: Evaluating Instructor-Mediated Online Course Offerings in Outcomes-Based Planning and Evaluation
Presenter(s):
Howard Mzumara,  Indiana University-Purdue University Indianapolis,  hmzumara@iupui.edu
Abstract: Outcomes-Based Planning and Evaluation (OBPE), which includes the development of a logic model, has emerged as one of the preferred approaches for evaluating the effectiveness and impact of an institution’s programs and services. This session will provide participants with a report based on a summative evaluation of an IMLS-funded project that involved the development and delivery of an instructor-mediated online course on outcomes-based planning and evaluation (also known as “Shaping Outcomes,” www.shapingoutcomes.org/course) for university students and personnel in the museum and library fields. The presentation will include an interactive discussion on the potential usefulness of outcomes measurement and mixed-method evaluation approaches as powerful tools for planning, evaluating, and improving educational programs and services in higher education settings.
Course Evaluation for Continuous Improvement: Qualitative Differences Between Online and Paper Administration of Student Course Evaluations
Presenter(s):
Nancy Rogers,  University of Cincinnati,  nancy.rogers@uc.edu
Janice Noga,  Pathfinder Evaluation and Consulting,  jan.noga@stanfordalumni.org
Abstract: Among institutions of higher education, student evaluations are used as a primary source of information about the quality of teaching and course delivery. However, concerns about the usefulness and value of these course evaluations for continuous improvement are plentiful. Current practice emphasizes the use of scaled questions to produce mean ratings for items. Unfortunately, ratings alone can be inadequate for informing continuous improvement efforts. With the introduction of online evaluation formats, it is possible that technologically proficient students will contribute more usable feedback for improving course content and teaching effectiveness in the space provided for student comments. By comparing the quality and depth of information provided in traditional paper versus online course evaluation formats, the presenters will describe an ongoing course evaluation pilot program designed to provide usable, continuous-improvement feedback for faculty.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Broadening Participation and Building Capacity Among Future Scientists: An Evaluation Internship for Undergraduates
Roundtable Presentation 687 to be held in the Limestone Boardroom on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
Kristin Kusmierek,  University of Michigan,  kkusmierek@stanfordalumni.org
Dylan Flather,  University of Colorado Boulder,  dylan.flather@colorado.edu
Tyler Silverman,  University of Colorado Boulder,  tsilverman1@gmail.com
Mary Anne Carroll,  University of Michigan,  mcarroll@umich.edu
David Karowe,  Western Michigan University,  karowe@wmich.edu
Abstract: As a National Science Foundation Integrative Graduate Education and Research Traineeship (IGERT) project, the Biosphere-Atmosphere Research and Training (BART) program has pursued sustained evaluation since 2000. The resulting cross-disciplinary interactions--between multidisciplinary science faculty and an outside social scientist-evaluator--have yielded an enhanced awareness of disciplinary complexity in evaluation and the benefits of cross-disciplinary understandings. Within this context, BART created an Evaluation Experience for Undergraduates, an internship program targeting undergraduate science students, building affinity with and skills in evaluation. We will discuss the achievements and challenges of this internship program, particularly as they might lend insight into undergraduate participation in evaluation and into the structures and activities that might yield the greatest benefits to undergraduates.
Roundtable Rotation II: Using Problem-based Learning to Anchor Theory to Real-World Problems
Roundtable Presentation 687 to be held in the Limestone Boardroom on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
Meghan Kennedy,  Neumont University,  meghan.kennedy@neumont.edu
Abstract: Students often leave undergraduate and graduate programs with technical skills and knowledge but limited experience solving real-world problems. Working on such problems helps students contextualize more theory-based academic learning and prepares them for the messy and complex problems they will face in the future. Problem-based learning is not about arriving at one correct solution; instead, it allows students to evaluate all aspects of a problem, develop effective inquiry skills, and create logical and comprehensive solutions. How can problem-based learning help individuals learn evaluation skills? How can this approach enhance how evaluation is taught in traditional academic courses and in workshops for non-evaluators? Integrating problem-based learning into the curriculum of any class or program allows students to hone theory by thinking critically about real-world problems.

Roundtable: Addressing Complex Sampling Issues in Evaluation Analyses
Roundtable Presentation 688 to be held in the Sandstone Boardroom on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Cindy Kronauge,  Weld County Department of Public Health and Environment,  ckronauge@co.weld.co.us
Susan Hutchinson,  University of Northern Colorado,  susan.hutchinson@unco.edu
Abstract: Evaluators are often asked to analyze probability surveys, and it is important that survey data be analyzed in the best possible way. Complex sampling designs are often more economical and more accurate than simple random samples; however, many evaluators do not analyze probability survey data according to its underlying sampling design. The purpose of the proposed session is to demonstrate, through the use of several real datasets, step-by-step procedures for calculating sample weights, incorporating weights into several popular statistical software packages (e.g., SPSS, LISREL), and conducting both descriptive and inferential statistical procedures. Examples will illustrate the results that can occur when an evaluator conducts various analyses with and without the appropriate weights. Participants will gain a better understanding of the importance of incorporating a survey's sample design into their analyses and will receive handouts to consult when they encounter a real-life dataset they need to analyze.
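As a rough, hypothetical illustration of the weighted-versus-unweighted comparison the presenters describe (not drawn from their session materials), the Python sketch below estimates a mean from a stratified sample with and without design weights. The strata, outcome values, and column names are invented for the example.

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical stratified sample: stratum A is over-sampled relative to its
# share of the population, so unweighted estimates over-represent it.
population_sizes = {"A": 1_000, "B": 9_000}   # N_h: stratum sizes in the population
sample_sizes = {"A": 200, "B": 300}           # n_h: stratum sizes in the sample

frames = []
for stratum, n_h in sample_sizes.items():
    # Give the two strata different outcome means so the weighting matters.
    outcome = rng.normal(loc=3.0 if stratum == "A" else 5.0, scale=1.0, size=n_h)
    frames.append(pd.DataFrame({
        "stratum": stratum,
        "outcome": outcome,
        # Design weight = inverse probability of selection = N_h / n_h
        "weight": population_sizes[stratum] / n_h,
    }))
sample = pd.concat(frames, ignore_index=True)

unweighted_mean = sample["outcome"].mean()
weighted_mean = np.average(sample["outcome"], weights=sample["weight"])

print(f"Unweighted mean: {unweighted_mean:.2f}")  # pulled toward over-sampled stratum A
print(f"Weighted mean:   {weighted_mean:.2f}")    # design-consistent estimate

Because stratum A is over-sampled, the unweighted mean is pulled toward stratum A's outcome, while the design-weighted mean recovers an estimate consistent with the population composition, which is the kind of contrast the session's with-and-without-weights examples are meant to show.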

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Using Evaluation as a Professional Development Strategy in Arts Education
Roundtable Presentation 689 to be held in the Marble Boardroom on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Evaluating the Arts and Culture TIG
Presenter(s):
Don Glass,  VSA arts,  dlglass@vsarts.org
Gail Burnaford,  Florida Atlantic University,  burnafor@fau.edu
Carol Morgan,  ArtsConnection,  morganc@artsconnection.org
Abstract: The field of evaluation has witnessed a growing interest in more collaborative, participatory, and empowering forms of evaluation. Of particular interest to arts education organizations are approaches that are utilization-focused and provide forms of capacity building for program staff. This roundtable features three arts education organization directors and evaluators who will present their professional development work in terms of its evaluative functions. After the presentations, the roundtable participants will further discuss strategies, tools, and directions that seem relevant to the intersection of these two fields.
Roundtable Rotation II: Evaluating Aesthetic Development, Engagement, Expression, and Representation Outcomes in Inclusive Museum/School Partnerships
Roundtable Presentation 689 to be held in the Marble Boardroom on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Evaluating the Arts and Culture TIG
Presenter(s):
Carole Brown,  Catholic University of America,  brownc@cua.edu
Michael Smith,  Catholic University of America,  smithmv@cua.edu
Erich Keel,  Kreeger Museum,  education@kreegermuseum.org
Susan Hostetler,  Kreeger Museum,  hsusanart@aol.com
Andrea Waldock,  Catholic University of America,  waldock@cua.edu
Christine Mason,  Student Support Center,  cmason@studentsupportcenter.org
Abstract: This presentation focuses on the evaluation model designed for the Hear Art, See Music program. The Kreeger Museum, in partnership with The Catholic University of America (CUA), received a three-year National Leadership Grant Award from the Institute of Museum and Library Services to develop a national model program that will (1) provide universal access to meaningful experiences in small and mid-size museums and (2) develop capacities in knowledge/representation, engagement, and expression among students in grades 5-8 with special educational needs. The grant team is developing, implementing, and nationally disseminating the Hear Art, See Music curriculum model with the goal of making learning in the arts accessible to all students. Creating meaningful, and hence accessible, gallery tours and workshops will involve professional evaluation of the model. This roundtable discussion will pose dilemmas from both the evaluation process and the usefulness of the evaluation information, concerning audience and museum traditions; individual versus group responses to museum educational tours and workshops; differences among school populations who learn in different ways; how to evaluate aesthetic development with regard to music and the visual arts; and questions about training and including university students in special education and music education programs as part of the audience and evaluation process.

Session Title: Closing the Loop From Data to Action: How Evaluation Feeds Program Improvement
Panel Session 690 to be held in Centennial Section A on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Thomas Chapel,  Centers for Disease Control and Prevention,  tchapel@cdc.gov
Abstract: In large organizations, evaluation and strategic planning often exist in hermetically sealed boxes. Planning and evaluation should instead form an iterative cycle of 'What do we do?', 'How are we doing?', 'What should we do differently?', 'What do we do?' This requires attention to extracting and interpreting evaluation results and defining their implications for the next program action. This panel presents the experience of program staff who have contemplated program description and the creation of feedback mechanisms and who have used those mechanisms to turn evaluation results into program change. The presentation provides a more elaborate description and definition of feedback mechanisms, how they work, and why they are important. It also details the evaluation approaches of the presenters' programs, their processes for ensuring feedback, and the ways in which evaluation results have been turned into immediate program change.
Walking the Talk: Evaluation Is as Evaluation Does
Matt Gladden,  Centers for Disease Control and Prevention,  mgladden@cdc.gov
Michael Schooley,  Centers for Disease Control and Prevention,  mschooley@cdc.gov
Rashon Lane,  Centers for Disease Control and Prevention,  rlane@cdc.gov
In addition to supporting evaluation requirements for funded programs, the Centers for Disease Control and Prevention's (CDC) Division for Heart Disease and Stroke Prevention (DHDSP) has turned its attention inward to institute a portfolio of evaluation activities that enhance evidence-based decision making and assess DHDSP as an enterprise. In contrast to a discrete program evaluation, DHDSP evaluators are working to build an organizational culture that values and routinely uses evaluation techniques to enhance the effectiveness of DHDSP initiatives. This requires simultaneously conducting program evaluations, fostering the use of findings, supporting evaluative thinking, and building evaluation capacity. We discuss the lessons learned about building a culture of evaluation in a complex organization, including establishing systems that support evaluation, defining boundaries and priorities for evaluation work, developing and maintaining credibility, and fostering both evaluative thinking and rigorous program evaluation.
Development of a Strategic Planning Process to Complement an Existing Evaluation System
Tessa Crume,  Rocky Mountain Center for Health and Education,  tessac@rmc.org
Jill Elnicki,  Rocky Mountain Center for Health and Education,  jille@rmc.org
Pat Lauer,  Rocky Mountain Center for Health Promotion and Education,  patl@rmc.org
Karen Debrot,  Centers for Disease Control and Prevention,  kdebrot@cdc.gov
Sound program planning is critical to useful program evaluation, but too often these processes are done independently. CDC's Division of Adolescent and School Health (DASH) developed a strategic planning process that serves as a central link for program planning, implementation, and evaluation. This process serves to connect planning and evaluation as part of the program improvement process. This presentation will describe how DASH incorporated current planning tools including logic models, SMART objectives, and evaluation tools, such as process evaluation measures, into a strategic planning process that generates strategies to achieve long-term program goals.
Making Good on the End Game: Putting Evaluation Findings to Work for School-Based Asthma Programs
Marian Huhman,  Centers for Disease Control and Prevention,  mhuhman@cdc.gov
Cynthia Greenberg,  Centers for Disease Control and Prevention,  cgreenberg@cdc.gov
Laura Burkhard,  Centers for Disease Control and Prevention,  lburkhard@cdc.gov
Pam Luna,  Centers for Disease Control and Prevention,  pluna@cdc.gov
The prevalence (10.1%) of asthma among school-aged youth has led many schools to implement asthma management programs for their students. The Division of Adolescent and School Health (DASH) of the Centers for Disease Control and Prevention (CDC) has provided evaluation technical assistance to selected asthma management programs to help them assess fidelity of implementation of the intervention and to determine the short-term impact on students' asthma management. Programs are now using the evaluation findings in various ways, including arguing for the need to expand the program, implementing changes in the intervention, and influencing adjustments in policy and practices. This presentation will describe how the Albuquerque Public School District generated evaluation findings about their asthma programs and how those findings were leveraged for program improvements.

Session Title: Fitting the Design to the Context: Examples of Innovative Evaluation Designs
Panel Session 691 to be held in Centennial Section B on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Debra Rog,  Westat,  debrarog@westat.com
Discussant(s):
Charles Reichardt,  University of Denver,  creichar@du.edu
Abstract: Evaluators are often faced with situations in which they must balance the need for rigor, attention to stakeholder needs, and the complexities of the evaluation context. Adequately balancing all three concerns often takes creative and innovative approaches that are nimble and flexible in light of the dynamics that can occur within the study situation. Evaluations that effectively balance these multiple needs are those that produce results considered valid, credible, and unbiased, and that offer direction for action. This session will offer three examples of evaluations -- varying in scale, substantive area, and political attention -- that fit the design to the context while also attending to stakeholder concerns and concerns for rigor.
Fitting the Design to the Context: Using a Collaborative Design Sensitivity Approach To Produce Actionable Evidence
Debra Rog,  Westat,  debrarog@westat.com
Robert Orwin,  Westat,  robertorwin@westat.com
This presentation will describe two studies underway -- one multi-site evaluation of supportive housing programs for homeless families and one multi-program evaluation of programs aimed at reducing poverty. Both studies are using a combination of modified evaluability assessments and a design sensitivity approach in the design of outcome evaluations. The strategy in both evaluation situations is to understand the nature of the program context(s) in order to design outcome evaluations that are maximally sensitive to the features that can affect the ability to adequately understand the outcomes that result. Both evaluation studies also incorporate a high degree of collaboration with decision-makers in designing the evaluation. This presentation will describe each of the experiences, highlighting the dimensions of the context that created design challenges and the need for creativity.
Fitting Design to Context: The Story of a High Profile National Evaluation
Susan Berkowitz,  Westat,  susanberkowitz@westat.com
This presentation will discuss the evolution of the complex design of a high-profile longitudinal impact evaluation of a national youth anti-drug media campaign. Starting with proposal writing and the formulation of an alternative design, it will identify contextual factors contributing to shifts in the design, including budgetary constraints, Congressional concerns, and often conflicting expectations from the funding agency and the study sponsor. At the same time, we consider the challenges experienced by the evaluation team as they modified the design in response to these exigencies and as changing one component of the design necessarily affected the others. Indeed, the need to adapt to context while maintaining the basic integrity of the design and analysis persisted throughout the life of the evaluation, even after a clear, if highly challenging, design did emerge.
Fitting the Design to the Context: Examples from ITEST
Leslie Goodyear,  Education Development Center Inc,  lgoodyear@edc.org
In the past five years, the National Science Foundation has funded over 100 projects as part of its Innovative Technology Experiences for Students and Teachers program. Each of these projects offers exciting, hands-on Science, Technology, Engineering and Math experiences for students and teachers with the goal of sparking student interest in pursuing STEM careers. This presentation will highlight some of the innovative approaches used to evaluate the ITEST projects that respond to the complexities of context while at the same time strive for methodological rigor. Examples include innovative performance measures using games; rubrics to gauge learning that is situational; and innovative adaptations of nationally validated measures.

Session Title: Influencing Evaluation Policy and Evaluation Practice: A Progress Report From AEA's Evaluation Policy Task Force
Panel Session 692 to be held in Centennial Section C on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Presidential Strand
Chair(s):
William Trochim,  Cornell University,  wmt1@cornell.edu
Abstract: At its Winter 2007 meeting, the Board of Directors of the American Evaluation Association (AEA) discussed its interest in the Association enhancing its ability to identify and influence evaluation policies that have a broad effect on evaluation practice. To that end the Board established an Evaluation Policy Task Force that can advise AEA on how best to proceed in this arena. Since then, the Task Force has identified opportunities for policy influence, developed materials, overseen the hiring of a consultant, and guided the initiative in carrying out its charge. This session will provide an update on their work and seek member input on their actions and outcomes.
Eleanor Chelimsky,  Independent Consultant,  oandecleveland@aol.com
Leslie J Cooksy,  University of Delaware,  ljcooksy@udel.edu
Katherine Dawes,  United States Environmental Protection Agency,  dawes.katherine@epa.gov
Patrick Grasso,  The World Bank,  pgrasso45@comcast.net
George Grob,  Center for Public Program Evaluation,  georgefgrob@cs.com
Susan Kistler,  American Evaluation Association,  susan@eval.org

Session Title: CC3: Lessons Learned From the Evaluation of Three Comprehensive Centers
Panel Session 693 to be held in Mineral Hall Section A on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Kim Cowley,  Edvantia Inc,  kim.cowley@edvantia.org
Discussant(s):
Caitlin Howley,  Edvantia Inc,  caitlin.howley@edvantia.org
Abstract: Three teams of evaluators from Edvantia, a West Virginia-based research and development organization, serve as the evaluation subcontractors for three regional comprehensive centers (CCs): (1) the Appalachia Regional Comprehensive Center (ARCC), (2) the Florida and the Islands Regional Comprehensive Center (FLICC), and (3) the Mid-Atlantic Regional Comprehensive Center (MACC). ARCC, FLICC, and MACC are three of 16 federally-mandated regional centers funded by the U.S. Department of Education to provide technical assistance services to state education agencies. Presenters will first position the three CC evaluations on a proposed continuum of involvement and proximity. Individual panelists will then discuss the unique evaluation practices developed by the evaluation teams to ensure effective evaluation use; the successful strategies the teams employed to communicate with program staff from each of the CCs, given their geographical/physical locations; the coordinated strategies utilized to communicate with evaluation teams across the three CCs; and cross-organizational lessons learned.
Evaluation of the Appalachia Regional Comprehensive Center
Kimberly Good,  Edvantia Inc,  kimberly.good@edvantia.org
Laura Plybon,  Edvantia Inc,  laura.plybon@edvantia.org
Karen Bradley,  Edvantia Inc,  karen.bradley@edvantia.org
Kim Cowley,  Edvantia Inc,  kim.cowley@edvantia.org
Nicole Finch,  Edvantia Inc,  nicole.finch@edvantia.org
Kathy McKean,  M&M Evaluations,  kmckean1@suddenlink.net
The Appalachia Regional Comprehensive Center (ARCC) at Edvantia, based in Charleston, WV, serves state education agencies (SEAs) in Kentucky, North Carolina, Tennessee, Virginia, and West Virginia. Edvantia evaluators conduct ARCC's internal evaluation; M&M Evaluations, located in Oklahoma, serve as the external evaluators. ARCC's evaluators are situated geographically near its state liaisons, who serve as the direct contacts for SEAs; this, coupled with the distinct evaluation responsibilities between internal and external evaluators, positions the ARCC evaluators at a far right position on the continuum of involvement and proximity. The relationship between internal and external evaluation is a traditional approach. ARCC evaluators will discuss the forging of relationships with state liaisons in evaluation implementation. Additionally, evaluators will discuss how internal evaluations of ARCC services are conducted and will present evaluation strengths and limitations. Panelists will also discuss the role of the external evaluators and how they support and improve the internal evaluation.
Evaluation of the Mid-Atlantic Comprehensive Center
Nate Hixson,  Edvantia Inc,  nate.hixson@edvantia.org
Karen Bradley,  Edvantia Inc,  karen.bradley@edvantia.org
Kim Cowley,  Edvantia Inc,  kim.cowley@edvantia.org
Based in Arlington, VA, the Mid-Atlantic Comprehensive Center (MACC) at the George Washington University Center for Equity and Excellence in Education (GWU-CEEE) serves state education agencies in Delaware, District of Columbia, Maryland, New Jersey, and Pennsylvania. An evaluation coordinator from MACC leads the evaluation team, while remaining evaluators are housed at Edvantia, a subcontractor based in Charleston, WV. By maintaining clear distinctions in their respective roles/assignments, Edvantia evaluators provide both internal and external evaluation services to MACC. This distinctive relationship positions MACC evaluators in the center of the continuum of involvement and proximity described in the overall panel abstract, and gives the evaluators a unique perspective, one that affects evaluation practices. Edvantia's MACC evaluators will discuss the layering of evaluation services (national, external, and internal); how different evaluation roles affect interactions with MACC staff and other Edvantia evaluators assigned to different comprehensive center evaluations; and cross-organizational lessons learned.
Evaluation of the Florida and the Islands Comprehensive Center
Juan D'Brot,  Edvantia Inc,  juan.d'brot@edvantia.org
Nate Hixson,  Edvantia Inc,  nate.hixson@edvantia.org
Caitlin Howley,  Edvantia Inc,  caitlin.howley@edvantia.org
Located in Tampa, FL, the Florida and the Islands Comprehensive Center (FLICC) provides technical assistance to state education agencies (SEAs) in the state of Florida and the territories of Puerto Rico and the U.S. Virgin Islands. Edvantia is a research and evaluation firm based in Charleston, WV, that serves as the internal evaluation unit for the FLICC. Due to the geographic distance between the two organizations, interactions between FLICC and Edvantia are primarily electronically mediated, placing FLICC evaluators at the left-most position on the continuum of involvement and physicality. This position presents unique challenges and obstacles to the FLICC evaluation team. FLICC evaluators will focus their panel presentation on the emergence of unique practices developed to ensure effective and equitable evaluation of the comprehensive center. Panelists will also discuss the organizational lessons learned and processes utilized to cultivate and maintain productive relationships with geographically distant comprehensive center staff.

Session Title: Techniques for Facilitating Evaluation Learning Among Non-Evaluators
Demonstration Session 694 to be held in Mineral Hall Section B on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Christine Kniep,  University of Wisconsin Extension,  christine.kniep@ces.uwex.edu
Karen Dickrell,  University of Wisconsin Extension,  karen.dickrell@ces.uwex.edu
Nancy Brooks,  University of Wisconsin Extension,  nancy.brooks@ces.uwex.edu
Ellen Taylor-Powell,  University of Wisconsin Extension,  ellen.taylor-powell@ces.uwex.edu
Abstract: Many evaluators take on the role of educator, facilitator, negotiator, or capacity builder, roles that require different knowledge and skills than conducting an evaluation. Evaluators may know the techniques of good survey design, questionnaire development, and quantitative analysis, but how do we enable non-evaluators to understand, engage in, conduct, and use evaluation? Extension educators have much to bring to this area given their experience as nonformal adult educators working with diverse audiences, their use of participatory techniques, and their interest in creating a safe and comfortable environment for learning and interaction. We will demonstrate several interactive techniques we have adapted (shuffle and sort, metaphor imaging, data dialogues, affinity clusters) that facilitate active learning of tough evaluation concepts or tasks. We will explain clearly how we use each one to build evaluation capacity, its strengths and weaknesses, and its application for other evaluation learning purposes and in different settings.

Session Title: Indigenous Peoples in Evaluation TIG Business Meeting and Think Tank: Supporting New Indigenous Evaluators: How can Experienced Indigenous Evaluators Build Capacity in the Field through Work With New and Aspiring Indigenous Evaluators? What Issues and Challenges Do We Face?
Business Meeting Session 695 to be held in Mineral Hall Section C on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
TIG Leader(s):
Katherine Tibbetts,  Kamehameha Schools,  katibbet@ksbe.edu
Kalyani Rai,  University of Wisconsin Milwaukee,  kalyanir@uwm.edu
Joan LaFrance,  Mekinak Consulting,  joanlafrance1@msn.com
Presenter(s):
Deana Wagner,  Harvard University,  dwagner@hsph.harvard.edu
Abstract: This think tank will explore ideas for building Indigenous evaluation capacity through the support of new and aspiring Indigenous evaluators. The facilitator will introduce the topic by characterizing the current state of Indigenous evaluation literature and describing the need for a focus on the complex issues faced by new and aspiring Indigenous evaluators. Participants will be invited to further explore barriers and the ways in which relationships with experienced evaluators might alleviate some of these challenges. Participants will divide into discussion groups based on experience which will allow them to share challenges they have faced, or for new and aspiring evaluators, challenges they are concerned about facing. Following small group report-outs, participants will return to their groups to focus on strategies for addressing the challenges listed by the whole group. The think tank will conclude with report-outs and whole-group dialog designed to stimulate further thinking and exchange on the issue.

Session Title: Making Lemonade: Taking Advantage of Federal Government Performance and Results Act (GPRA) Reporting Requirements in Evaluation
Multipaper Session 696 to be held in Mineral Hall Section D on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Donna Atkinson,  Westat,  donnaatkinson@westat.com
Discussant(s):
Bill Luckey,  Westat,  billluckey@westat.com
Abstract: The role of performance measurement or accountability data collected under the Government Performance and Results Act (GPRA) has been debated as both a methodological and an evaluation policy issue since the act was introduced. Although the debate continues, the emphasis on accountability at the Federal level continues to expand. The Center for Substance Abuse Treatment (CSAT) requires all grantees funded through its discretionary portfolio to collect standardized data for all clients at admission and six months post-admission. Although accountability data typically are insufficient for evaluation, these standardized data can serve an important role in evaluations, particularly when complemented with other data sources. This session will provide three examples of how evaluators at the local, cross-site, and Federal levels have incorporated CSAT GPRA data into their efforts.
The Center for Substance Abuse Treatment GPRA Data Repository
Deepa Avula,  Center for Substance Abuse Treatment,  deepa.avula@samhsa.hhs.gov
This presentation will provide an overview of CSAT GPRA data collection efforts and how the agency uses the data beyond mandatory reporting. CSAT has implemented a standardized data reporting requirement, with some 400 active grantees that provide substance abuse treatment services. The resulting data repository, which includes standard data on measures of importance to the agency from a national perspective, e.g., abstinence from substance abuse, living conditions, employment and education, criminal activity, and social connectedness, currently includes matched intake-follow-up data on over 70,000 clients served by grantees currently funded by the Center. Grantees are able to access a series of reports on their data as well as download an Excel version that can be imported into statistical packages for further analysis. Other uses that CSAT makes of the data beyond fulfilling reporting requirements, both for responding to external requests from oversight agencies such as SAMHSA and Congress as well as internal management within the Center, will also be discussed.
Using GPRA Data in Local Evaluation
Kristin Stainbrook,  Advocates for Human Potential,  kstainbrook@ahpnet.com
GPRA data, as well as complementary local data, can be used as a tool to improve program services. Advocates for Human Potential, the evaluator for the Sherman Street Program, a CSAT-funded Treatment for the Homeless grantee, has used GPRA data to assist with program management, quality assurance, and sustainability. Through regular presentations to program management, client GPRA information is used to provide an overall picture of program functioning, to identify program strengths and weaknesses, and to flag areas for staff supervision and training. Data are also presented to program staff on a regular basis to provide feedback on client impact and assist in planning program changes. Finally, data are used in presentations to partnering agencies, community meetings, and local government officials in order to garner program support. This presentation will include examples of employing GPRA and local evaluation data to support programs in management, quality assurance, and sustainability.
Substance Abuse and Mental Health Services Administration (SAMHSA) Center for Substance Abuse Treatment (CSAT) Access to Recovery Cross-Site Evaluation: A Focus on Client Data
Laura Dunlap,  RTI International,  ljd@rti.org
Recognizing the enormous societal costs of failure to get needed treatment services, the President and SAMHSA established Access to Recovery (ATR), a 3-year competitive discretionary grant program to States, Territories, and Tribal Organizations. The primary focus of the ATR program has been to promote a client-centered system by improving treatment access through utilizing treatment payment vouchers, expanding independent client choice of providers, expanding access to services, and increasing substance abuse treatment capacity. As part of this effort, CSAT has funded a cross-site evaluation to provide information on the effectiveness and sustainability of the ATR program being implemented in 24 grantees. This presentation will provide a discussion of the cross-site evaluation design pertaining to the use of client-level data collected by these grantees via CSAT's SAIS-GPRA system focusing on the advantages and limitations of GPRA data as well as evaluation and modeling techniques for using these data in a cross-site evaluation.
Winning One for the GPRA? Making Case-Mix Adjusted Performance Monitoring Data Available to Grantees in CSAT-funded Substance Abuse Treatment Programs
Joe Sonnefeld,  Westat,  josefeldsonnefeld@westat.com
Zhiqun Tang,  Westat,  zhiquntang@westat.com
Duke Owen,  Westat,  dukeowen@westat.com
CSAT grantees have had access to web-based reports on the GPRA data they have submitted. The Center was interested in providing grantees with comparative information, so this year it began to distribute reports that allow grantees to compare their six National Outcome Measures from the GPRA tool to those of other programs funded under the same discretionary grant program. The outcomes are case-mix adjusted using logistic regression and displayed graphically. Issues in the development of the methods and the usefulness of the reports to local evaluations will be discussed. The context is the changing climate of evaluation policy, in which an emphasis on performance monitoring may reduce the number of randomized and comparison-group evaluations. Large multi-site naturalistic studies of pre-post data can be argued to be better suited to identifying treatment and setting interactions with severity because they can include patients normally excluded from randomized trials for ethical reasons.
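The case-mix adjustment described above combines a logistic regression on client characteristics with grantee-level comparisons. The sketch below shows that general idea only; the file name, column names, and the observed-versus-expected display are illustrative assumptions, not the Westat/CSAT implementation.

```python
# Minimal sketch of case-mix adjustment for one binary GPRA outcome
# (e.g., abstinence at follow-up). All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gpra_followup.csv")  # hypothetical client-level extract

# Model the outcome from client case-mix characteristics only (no grantee
# terms), so predictions reflect what would be expected given each grantee's
# client mix.
model = smf.logit(
    "abstinent ~ age + female + prior_treatment + baseline_use_days",
    data=df,
).fit()
df["expected"] = model.predict(df)

# Compare each grantee's observed rate to its case-mix-expected rate.
report = df.groupby("grantee_id").agg(
    observed=("abstinent", "mean"),
    expected=("expected", "mean"),
    n=("abstinent", "size"),
)
report["adjusted_ratio"] = report["observed"] / report["expected"]
print(report.sort_values("adjusted_ratio", ascending=False).head())
```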

Session Title: Sustainability and Organizational Capacity Assessment: Tools and Strategies
Multipaper Session 697 to be held in Mineral Hall Section E on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Huilan Yang,  W K Kellogg Foundation,  hy1@wkkf.org
Assessing Nonprofits’ Readiness to Engage in Capacity-Building: Measurement Development and Use in the Kellogg Action Lab
Presenter(s):
Scott Peecksen,  Decision Information Resources Inc,  speecksen@dir-online.com
Jean Latting,  University of Houston,  jlatting@uh.edu
Abstract: Capacity-building consultants distinguish readiness for organizational change from readiness to engage in capacity-building (Connolly, 2007). While psychometrically validated measures of generic readiness for organizational change have been developed (Holt et al., 2007), a rigorously tested measure of readiness to engage in capacity-building support has not. This paper describes our preliminary results from developing such a measure for use in the Kellogg Action Lab (KAL), a nonprofit capacity-building initiative managed by Fieldstone Alliance and funded by the W.K. Kellogg Foundation. We interviewed 15 capacity-building consultants (under three “readiness” scenarios) to tease out their assumptions about what constitutes readiness to engage in capacity-building and to identify indicators of this readiness. The paper will summarize interview results and present an open-ended interview schedule that consultants might use to assess readiness for capacity building. Subsequently, we will adapt the open-ended measure into a closed-ended measure that can be normed and psychometrically tested among future KAL grantees.
Stakeholder Influence: Using a Multiple Constituency Approach to Assess Nonprofit Organizational Effectiveness
Presenter(s):
Tosha Cantrell-Bruce,  University of Illinois Springfield,  tcantrel@uis.edu
Abstract: This research examines the concept of organizational effectiveness in nonprofit organizations and the obstacles to assessing it, including nonprofits' unique characteristics and the difficulty of generalizing effectiveness criteria. The multiple constituency approach to organizational effectiveness provides the theoretical basis used to assess performance in member-benefit nonprofits, and specifically in self-help groups. Several chapters of the National Alliance on Mental Illness agreed to participate in the study, and their members comprise the sample population. Survey research is used to gather data from 550 stakeholders. A set of effectiveness criteria is identified that attempts to accommodate the interests and expectations of different stakeholder groups within the organization. The dimensions of the criteria developed are compared to those identified in previous research. This research informs evaluators of specific effectiveness criteria and dimensions used by stakeholders of a self-help nonprofit, as well as whether dimensions of these criteria exist across nonprofit types.
Integrating Financial Sustainability in the Evaluation of Non Profit Organizations
Presenter(s):
Ayana Perkins,  MayaTech Corporation,  aperkins@mayatech.com
Kimberly Jeffries Leonard,  MayaTech Corporation,  kjleonard@mayatech.com
Kristianna Pettibone,  MayaTech Corporation,  kpettibone@mayatech.com
Abstract: Funding agencies frequently require non-profit organizations (NPOs) to indicate how they will maintain activities beyond a particular funding cycle. The use of evaluation data can be a key strategy for providing evidence of the financial sustainability of NPOs. Evaluation methods can provide evidence of program efficiency and effectiveness to enhance funding opportunities with current and future funding agencies. The authors use data from evaluation and sustainability technical assistance requests made by federally funded NPOs over a five-year period. Examined variables include timing and types of requests, indicators of sustainability goals, and delivery methods. Linear regression will be performed to determine whether these variables can predict current viability. These data will be compared with those of non-federally funded NPOs that have received similar assistance. The findings will be used to explain recommended strategies that extend both organizational and programmatic lifespan.
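Since the abstract names linear regression as the analytic approach, a brief sketch of that kind of model may help readers picture it; the data file, variable names, and coding below are illustrative assumptions, not the authors' dataset.

```python
# Minimal sketch: predicting an NPO viability score from technical-assistance
# request characteristics. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

ta = pd.read_csv("ta_requests.csv")  # hypothetical NPO-level data

model = smf.ols(
    "viability_score ~ years_funded + n_requests"
    " + C(request_type) + C(delivery_method) + has_sustainability_goal",
    data=ta,
).fit()
print(model.summary())
```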
Building the Sustainability of Non-profit Organizations via Internal Evaluation: A Case Study of Housing-Based Social Services
Presenter(s):
Joelle Greene,  National Community Renaissance,  jgreene@nationalcore.org
Abstract: Many funders require that evaluation activities be conducted by external evaluators. This can result in sporadic evaluation efforts (when the funding runs out, so does the evaluation) and jeopardizes program and organizational sustainability. In this paper we advance the position that internal evaluators are uniquely positioned to build the long-term sustainability of organizations by increasing capacity to design, implement, and sustain evaluation systems. This position is supported by data from a case study of the social services department of a national non-profit affordable housing developer. In the three years since evaluation became primarily an internal function, both the systems for decision making and the success rate of grant applications have substantially improved. The implications for evaluation policy are examined; we argue that funders should consider the benefits of allowing more evaluation dollars to be spent internally and that internal evaluations can be audited to ensure credibility and rigor.

Session Title: Sharing the Load: Team Writing in Qualitative Research
Think Tank Session 698 to be held in Mineral Hall Section F on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Mariam Mazboudi,  University of Illinois Chicago,  mmazbo2@uic.edu
Discussant(s):
Matthew Goldwasser,  University of Illinois Chicago,  mlg@uic.edu
Megan Deiger,  University of Illinois Chicago,  mdeiger@uic.edu
Carol Fendt,  University of Illinois Chicago,  crfendt@hotmail.com
Abstract: Qualitative research studies require unique analysis strategies for identifying, coding and categorizing themes found in the data. Because 'clarity and applicability of the findings' depend on the researcher, what happens when data synthesis is a team effort (Bryne, 2001)? When the quality of research findings depends on a group of researchers, myriad issues can arise. This think tank will explore challenges that face evaluators analyzing and reporting qualitative data in teams. Using case study examples of different team approaches to the process, the session will focus on the following major categories: Stages of data reduction, analysis, and writing as a team; how a team negotiates authorship for sections of a report; and power and authority in the writing process.

Session Title: Evaluating Community Capacity Building and Community Engagement in Public Health Research
Panel Session 699 to be held in Mineral Hall Section G on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Sharrice White Cooper,  Centers for Disease Control and Prevention,  swhitecooper@cdc.gov
Abstract: The Centers for Disease Control and Prevention's (CDC) Prevention Research Centers (PRC) Program supports the development, implementation, evaluation, and dissemination of prevention research using community-based participatory research (CBPR). In CBPR, entities such as universities, community-based organizations, and local and state health departments form partnerships to conduct research that addresses the public health needs of a specific community or population of interest. How community partners engage in the research process, and their capacity (i.e., skill, knowledge, and ability) to do so effectively, is critical to meeting partnership goals and moving communities toward sustainability. During this session, three PRCs will share their experiences in evaluating their communities' engagement in research and various efforts to build community capacity. The presenters will discuss evaluation measures and tools, and the importance of measuring community capacity will also be addressed.
Assessing Community Engagement in Research: An Evaluation of the Partnership Between an Academic Research Center and a Rural Community Advisory Board
Sally Honeycutt,  Emory University,  shoneyc@sph.emory.edu
Michelle C Kegler,  Emory University,  mkelger@sph.emory.edu
Denise Ballard,  Southwest Georgia Cancer Coalition,  denise.ballard@swgacancer.org
J K Barnette,  Southwest Georgia Cancer Coalition,  jk.barnette@swgacancer.org
Kathy Bishop,  Darton College,  kathy.bishop@darton.edu
Iris Smith,  Emory University,  ismith@sph.emory.edu
Karen Glanz,  Emory University,  kglanz@sph.emory.edu
The Emory Prevention Research Center (EPRC) conducts community-based participatory research on cancer prevention and provides related training and technical assistance in rural Southwest Georgia. EPRC activities are guided by a Community Advisory Board (CAB). A collaborative evaluation of the partnership was conducted to identify strengths and areas for improvement or change. Evaluation included a review of indicators and a survey of CAB members (n=14) and EPRC leaders and staff (n=8). Evaluation topics included collaboration processes, partnership operations, trust, and member satisfaction. Results indicate high satisfaction and trust within the partnership. Influence/power sharing and meeting logistics were identified as topics for further discussion. The review of evaluation indicators revealed activities that were completed or behind schedule. Participants at a CAB retreat discussed the findings and recommended strategies for improvement and changes in select objectives. This presentation will include the collaborative process of developing the evaluation, tools, findings, and resulting actions.
Evaluating Community Capacity Building Using a Participatory Partnership Approach - Lessons From a Collaboration with Rural Latino Communities
Karen Peters,  University of Illinois Chicago,  kpeters@uic.edu
Benjamin C Mueller,  University of Illinois College of Medicine Rockford,  bmueller@uic.edu
Marcela Garces,  University of Illinois College of Medicine Rockford,  dmgarces@uic.edu
Sergio Cristancho,  University of Illinois College of Medicine Rockford,  scrista@uic.edu
A community-based participatory action research approach, which views the community as change agent, was utilized by 10 rural communities in collaboration with an academic research team to build community capacity to address locally identified health disparity priorities among rural Hispanic immigrants. A sequenced set of five major, iterative phases (partnership formation, assessment, implementation, evaluation, and dissemination) was used to build local community capacity under the auspices of community health advisory committees. The committees conducted local assessments of health priorities among Hispanic residents, implemented local community health projects through a mini-grant program, engaged in a comprehensive 'stakeholder' evaluation process, and instituted strategies to ensure sustainability. An 'empowerment' evaluation approach was used to ascertain the impacts of these efforts. Findings reveal that providing a forum for community dialogue, a minimal investment of financial resources for action, and an active cycle of reflection promote the sustainability of community partnerships.
Assessing Community Capacity for Local Intervention: A Case Study
Moya Alfonso,  University of South Florida,  malfonso@health.usf.edu
Jen Nickelson,  University of North Florida,  j.nickelson@unf.edu
Liz Bumpus,  Sarasota County Health Department,  liz_bumpus@doh.state.fl.us
Dave Hogeboom,  University of South Florida,  hogeboom@health.usf.edu
Julie A Baldwin,  University of South Florida,  jbaldwin@health.usf.edu
Carol A Bryant,  University of South Florida,  cbryant@health.usf.edu
Robert McDermott,  University of South Florida,  rmmcdermo@health.usf.edu
This presentation is part of a panel discussion on evaluating community capacity building and community engagement in public health research. The importance of considering coalition capacity for addressing public health issues will be established. The capacity assessment exercise identified elements for implementing and sustaining the VERB Summer Scorecard program. The results of this assessment were summarized in four capacity tables and included the following elements: 1) community (e.g., connections between major organizations), 2) knowledge and skills (e.g., communication), 3) resources (e.g., staff), and 4) power (e.g., perceived ability to effect change). Capacity tables will be presented as a program planning, implementation, and transfer tool. Gauging the match between existing local capacity and program capacity requirements will be discussed as a data-based approach to moving locally derived coalition programs to other communities.

Roundtable: Intermingling Diverse Perspectives on Evaluation Policy in a Non-profit Organization
Roundtable Presentation 700 to be held in the Slate Room on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Deborah Brown,  Independent Consultant,  dbrown12@mac.com
Judith Jerald,  Save The Children,  jjerald@savechildren.org
Abstract: Evaluation research is embedded in a political milieu. The various interests and disciplinary demands of the stakeholders tend to diverge politically, ethically, and even in terms of values. This discussion will begin with a field/case presentation that focuses on lessons learned from the program implementation, funding, and evaluation of a pre-birth-to-five national early childhood program. The program’s director, its independent evaluator, and a representative from the parent organization who acts as a liaison with foundations will tell the program’s story and discuss the importance and impact of evaluation policy from their three diverse perspectives: program implementation and practice, evaluation, and the political dimension of meeting funder needs. Following their presentation, the panelists will engage the audience in a facilitated discussion about the issues raised and relevant experiences of audience members.

Session Title: New Evaluator "How-to-Session": Conducting Your Evaluation
Multipaper Session 701 to be held in the Agate Room Section B on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Annette Griffith,  University of Nebraska Lincoln,  annettekgriffith@hotmail.com
Developing the Capacity to Think Like an Evaluator
Presenter(s):
Lennise Baptiste,  Kent State University,  lbaptist@kent.edu
Abstract: In this session the presenter will identify resources, organizing tools, and activities that can help new evaluators build their repertoire of skills. The ideas that contribute to defining the purpose of and approaches to evaluation, and that will shape evaluators' personal philosophies, will be discussed. New evaluators should be able to identify their strengths, biases, and preferences in methodology and the types of evaluations they would like to do.
Don’t Believe the Hype: Collecting Data from the Hip-Hop Generation
Presenter(s):
Lewis Ladel,  Western Michigan University,  ladel.lewis@wmich.edu
Abstract: Whether it is for social programming, implementing or revising policies, or ascriptive purposes, evaluating the African American Hip-Hop generation (1960s-present) is essential for a variety of reasons. Societal stereotypes of this generation may lead culturally incompetent evaluators and data collectors to believe that its members are not only homogenous but also uneducated, poverty stricken, or law breakers, which compromises the reliability and validity of the data. That is why effective methods of data collection from this unique population must be considered and put into practice. This paper will show how hip-hop culture has influenced U.S. culture, demonstrate how mainstream perceptions of Hip-Hop affect data collection, and suggest strategies for effective data collection within this generation and beyond.
Conducting an Evaluation of a Behavioral Parent Training Program as a Dissertation Study
Presenter(s):
Annette Griffith,  University of Nebraska Lincoln,  annettekgriffith@hotmail.com
Bridget Barnes,  Boys Town,  barnesb@girlsandboystown.org
Stephanie Ingram,  Boys Town,  ingrams@boystown.org
Ray Burke,  Boys Town,  burker@girlsandboystown.org
Ronald W Thompson,  Boys Town,  thompsonr@girlsandboystown.org
Michael H Epstein,  University of Nebraska Omaha,  mepstein1@unl.edu
Abstract: Often the first opportunity graduate students have to independently conduct an evaluation project comes during a dissertation study. As a result, the process of conducting a dissertation evaluation can raise many questions as new evaluators attempt to plan studies, analyze data, and interpret results. This presentation will provide information on an evaluation project that was conducted as part of a dissertation study on a behavioral parent training program for children at risk of problem behavior. Specifically, the presentation will review the steps taken during the evaluation process, factors important when an evaluation is conducted for a dissertation study, and lessons learned from the experience.

Session Title: Evaluating Special Education Teachers and Programs
Multipaper Session 702 to be held in the Agate Room Section C on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Janice Ann Johnson Grskovic,  Indiana University Northwest,  jgrskovi@iun.edu
A Model for Evaluation of Universal Design for Learning Projects: Addressing the Complexity
Presenter(s):
Bob Hughes,  Seattle University,  rhughes@seattleu.edu
Abstract: Universal Design for Learning (UDL) is an increasingly popular model of developing and delivering instruction in P-12, adult, and post-secondary educational systems. Although UDL began as a model for serving special needs populations, its use has expanded widely. Because UDL synthesizes multiple educational aims and technological tools, it offers an interesting challenge for evaluators. UDL identifies three principles that ensure access to learning: multiple means of representation of any idea, multiple means of expression to show learners’ mastery, and multiple means of engagement to ensure that all learners connect to the content. This lean toward flexibility underscores UDL’s intent to serve special needs populations. While UDL implementations are proliferating, models for evaluating UDL have not concurrently emerged. This paper will use lessons from four different evaluations of UDL projects that occurred over a 10-year period to offer a framework for evaluation for UDL implementations.
A Model for Guiding Evaluations Involving Students with Special Needs
Presenter(s):
Bianca Montrosse,  University of North Carolina at Greensboro,  bmontros@serve.org
Abstract: Despite important implications, little is known about the relationship between teacher certification and academic achievement for students with disabilities. As part of a larger evaluation being conducted in North Carolina (NC), the present study draws upon the social model of disability as a framework to assess the consequences of NC high school students with special needs being taught by teachers who are certified in special education. For the purposes of investigating the effects of teacher certification on special education, research on general education students is used to guide empirical estimations, identify potentially influential moderators, and control for omitted variable bias. The analysis for this paper is restricted to all special education students who took an End-of-Course exam during the 2004-05 academic year. Hierarchical linear modeling of matched student-teacher data is used in all analyses.
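For readers unfamiliar with hierarchical linear modeling, the sketch below shows a two-level random-intercept model of the general form the abstract describes (students nested within teachers, with a teacher-level certification indicator); the column names and covariates are illustrative assumptions, not the North Carolina data.

```python
# Minimal sketch of a student-within-teacher model with a random intercept
# per teacher. All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("matched_student_teacher.csv")  # hypothetical

m = smf.mixedlm(
    "eoc_score ~ sped_certified + prior_score + C(grade_level)",
    data=df,
    groups=df["teacher_id"],  # random intercept for each teacher
).fit()
print(m.summary())
```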
Using Published Standards to Create a Rating Instrument to Assess Skills
Presenter(s):
Deborah Carran,  Johns Hopkins University,  dtcarran@jhu.edu
Stacey Dammann,  York College of Pennsylvania,  sdammann@ycp.edu
Margaret King Sears,  George Mason University,  mkingsea@gmu.edu
Patricia Arter,  Marywood College,  psarter@es.marywood.edu
Abstract: The critical need for certified teachers across the education spectrum has been established, especially in special education. There has also been a concomitant increase in the number and type of teacher training programs. Are programs preparing teachers with the skills determined necessary to meet the needs of children, especially those with special needs? This paper presents information on the development and use of the Skill Survey for Student Teachers Working with Students with Disabilities and the method used to evaluate teacher interns. The skill survey was developed as a rating instrument containing 55 Likert-type statements derived from the Council for Exceptional Children’s (CEC) Skills for Preparing Beginning Special Educators. Student teachers were asked to rate themselves, and school-based supervising teachers and university/college supervisors were asked to rate the student teacher they worked with during the semester, yielding a triad rating score for each student teacher. Multi-site results are presented, and implications for programs are discussed.
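As a rough illustration of how a triad rating score could be assembled from the three raters' responses, consider the sketch below; the file layout, column names, and simple averaging rule are assumptions for illustration, not the authors' scoring procedure.

```python
# Minimal sketch: one row per rater x student teacher, 55 Likert items.
# All names and the averaging rule are hypothetical.
import pandas as pd

ratings = pd.read_csv("skill_survey.csv")  # hypothetical survey export
item_cols = [f"item_{i}" for i in range(1, 56)]

# Mean of the 55 items per rater, then the triad score as the mean of the
# self, school-supervisor, and university-supervisor scores.
ratings["rater_mean"] = ratings[item_cols].mean(axis=1)
triad = ratings.pivot_table(
    index="student_teacher_id", columns="rater_role", values="rater_mean"
)
triad["triad_score"] = triad.mean(axis=1)
print(triad.head())
```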

Session Title: Strategies for Evaluating Program Development and Professional Improvement
Multipaper Session 703 to be held in the Granite Room Section A on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Susan Connors,  University of Colorado Denver,  susan.connors@cudenver.edu
Using Student Grades as an Outcome Measure in Program Evaluation: Issues of Validity
Presenter(s):
Kelci Price,  Chicago Public Schools,  kprice1@cps.k12.il.us
Susan Ryan,  Chicago Public Schools,  sryan1@cps.k12.il.us
Abstract: As the demand for impact evaluation in education continues to grow, it is important for evaluators to have valid measures for assessing program effectiveness. Although student grades are often used to assess the impact of educational interventions, there exists considerable ambiguity about the validity of this measure. This research explores the validity of grades with reference to three main issues: 1) the relationship of grades to student knowledge, 2) the sensitivity of grades to changes in student knowledge, and 3) whether there are systematic factors (student or school) which impact the relationship between grades and student knowledge. Implications of the findings for evaluators’ use of grades as a measure of program impact are discussed.
Rigor Versus Inclusion: Lessons Learned
Presenter(s):
Sheila A Arens,  Mid-Continent Research for Education and Learning,  sarens@mcrel.org
Andrea Beesley,  Mid-Continent Research for Education and Learning,  abeesley@mcrel.org
Laurie Moore,  Mid-continent Research for Education and Learning,  lmoore@mcrel.org
Sandra Foster,  Mid-continent Research for Education and Learning,  sfoster@mcrel.org
Jenna VanBerschot,  Mid-continent Research for Education and Learning,  jvanberschot@mcrel.org
Robyn J Alsop,  Mid-continent Research for Education and Learning,  raslop@mcrel.org
Abstract: The rigor of school counseling research and evaluation has been questioned, leading to the National Panel for Evidence-Based School Counseling’s recommendations for improvement. In this paper, we describe our efforts to design an evaluation that would meet these quality standards. Following this, we provide details on the construction and use of data collection protocols for all participating high schools to examine implementation fidelity, and the process of constructing administrator, counselor, and student data collection instruments. In all phases of this work we have sought to employ an approach that relies on the inclusion of stakeholders—in particular, the program administrators and all site counselors. However, maintaining quality standards while taking an inclusive approach is challenging. We candidly present the obstacles encountered while balancing our evaluation design and an inclusive evaluation approach, as well as our rationale for remaining committed to this approach despite its challenges.
Evaluating a School-University Partnership Program: Evaluation for Program Development and Improvement in the Context of Outcome-based Accountability
Presenter(s):
Tysza Gandha,  University of Illinois Urbana-Champaign,  tgandha2@uiuc.edu
Abstract: School-university partnership (SUP) is viewed as a promising approach to educational reform, and SUP programs continue to proliferate despite documented challenges. The potential and problems of SUP programs suggest that efforts to evaluate them can contribute substantially to educational improvements. This paper describes the first-year evaluation of a new SUP program charged with providing workplace-embedded professional development for K-12 educators. In light of the program’s embryonic stage, the first-year evaluation focused on engaging program staff in developing program theory and design, and addressed pressures to gather outcome data to demonstrate success. A commitment to responsiveness that resulted in an amorphous program, a weak evaluative culture in the organization, and external accountability demands for outcome data all contributed to challenges in implementing the evaluation as planned. This paper will offer an analysis of the educational partnership context and discuss factors to consider in evaluating SUP programs.
Data-Informed Self-Evaluation: A Strategy to Improve Teaching
Presenter(s):
Wenhui Yuan,  Western Michigan University,  whyuan99@gmail.com
Yun Shen,  Western Michigan University,  catherine.y.shen@gmail.com
Abstract: NCLB legislation holds teachers accountable for students’ performance, which brings challenges as well as opportunities for teachers to improve their teaching. Confronting this reality, the authors propose to help teachers improve instruction through a ‘teacher as evaluator’ strategy. By addressing questions such as what data teachers could use for self-evaluation and how to use them, the authors offer suggestions for dealing with the mountains of data produced by the accountability movement. Barriers and the external support necessary for teachers to implement data-informed self-evaluation will also be explored.

Session Title: Developing Evaluation Policy in Various Health Settings
Multipaper Session 704 to be held in the Granite Room Section B on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Leslie Fierro,  Claremont Graduate University,  leslie.fierro@cgu.edu
Translating an Organizational Evaluation Policy Across Different Program Settings: The Institute for Community Health Experience
Presenter(s):
Emily Chiasson,  Institute for Community Health,  echiasson@challiance.org
Elisa Friedman,  Institute for Community Health,  efriedman@challiance.org
Abstract: The Institute for Community Health (ICH) is a community-based evaluation and research institute embedded within a health care system. Given this setting, ICH serves as the evaluator for many clinic and community-based health programs. ICH has its own policies on how it approaches and conducts evaluation, which include using a participatory, collaborative approach. However, the programs that ICH is asked to evaluate may have their own, different evaluation policies. This presentation will look at two case studies and examine how ICH’s policies can play out in very different ways depending on the setting of the program being evaluated and its evaluation policies. We will discuss what we do when there is a conflict between our organizational approach and that of the program we are evaluating; how their policies influence our approach; and how we may need to modify our approach to meet the needs of different settings.
Developing a Program Evaluation Policy: Monterey County Health Department Case Study
Presenter(s):
Patricia Zerounian,  Monterey County Health Department,  zerounianp@co.monterey.ca.us
Beverly Tremain,  Public Health Consulting LLC,  btremain@publichealthconsulting.net
Hugh Stallworth,  Monterey County Health Department,  stallworthh@co.monterey.ca.us
Krista Hanni,  Monterey County Health Department,  hannikd@co.monterey.ca.us
Abstract: There is a need for a stronger evaluation culture to assure that local health programs effectively and efficiently achieve their intended outcomes. Recognizing this need, in 2006 Monterey County Health Department (California) initiated a process to develop a program evaluation policy and implementation guide for use by program managers across eight departmental divisions. The process consisted of conducting interviews with division chiefs to determine needs and concerns, examining literature to identify appropriate evaluation focus and core values, and analyzing two feasible evaluation approaches via a weighted scoring of stakeholder and resource impacts. Drawing on utilization-focused, participatory, and real-world evaluation methods, the resulting policy and 12-step implementation guide were adopted for use in 2007. The guide includes evaluation planning templates and completed examples. It can be used to design new health programs using logic modeling or evaluate existing health programs by establishing baselines and progress toward intended outcomes.
A Framework for the Internal Evaluation of a Centre of Excellence: Making Evaluative Inquiry A Reality
Presenter(s):
Evangeline Danseco,  Center of Excellence for Child and Youth Mental Health,  edanseco@cheo.on.ca
Amy Boudreau,  Center of Excellence for Child and Youth Mental Health,  aboudreau@cheo.on.ca
Kristen Keilty,  Center of Excellence for Child and Youth Mental Health,  kkeilty@cheo.on.ca
Ian Manion,  Center of Excellence for Child and Youth Mental Health,  imanion@cheo.on.ca
Susan Kasprzak,  Center of Excellence for Child and Youth Mental Health,  skasprzak@cheo.on.ca
Abstract: The Centre of Excellence for Child and Youth Mental Health was established in 2004 in Ontario, Canada to promote integration and evidence-based care through knowledge exchange, partnership facilitation and the development of evaluation capacity. In addition to providing funding provincially for research, education/training and evaluation, the Centre also forms and maintains linkages with national and international partners and projects. The conceptual approach for the Centre’s evaluation framework was informed by theoretical and empirical work in evaluation, knowledge transfer and utilization, quality improvement, organizational learning and systems change. Twenty-four indicators were identified, developed and integrated into program management and information systems, to monitor progress in meeting strategic goals and priority action areas. Quantitative surveys were used to assess outcomes with qualitatively oriented measures added where appropriate to provide more in-depth information and to document unforeseen and evolving results. The presentation will discuss how evaluative inquiry is fostered in the Centre’s programs.

Session Title: Perspectives, Power, and Policy Through a Feminist Lens
Multipaper Session 705 to be held in the Granite Room Section C on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Feminist Issues in Evaluation TIG
Chair(s):
Kathryn Bowen,  Centerstone Community Mental Health Centers,  kathryn.bowen@centerstone.org
Systematically Paying Attention to the Minority and Women: Experience from Evaluating a Stabilization Program in Kosovo
Presenter(s):
Tristi Nichols,  Manitou Inc,  tnichols@manitouinc.com
Abstract: This paper presentation describes how to consider issues specifically germane to women and the disenfranchised/disadvantaged in an impact assessment in Kosovo – an emotionally, ethnically, and politically charged context. I cover how to systematically include the gender and ethnicity variable(s) throughout key stages of the evaluation process: evaluation framework formulation, survey design and administration, data entry and analysis, and reporting of results. I also review the challenges faced when undertaking this systematic exercise, including the influences of my own subjectivity.
Quality Review through a Feminist Lens - Participation, Perspectives, Power, and Policy
Presenter(s):
Sharon Brisolara,  Evaluation Solutions,  evaluationsolutions@hughes.net
Saumitra SenGupta,  APS Healthcare Inc,  ssengupta@apshealthcare.com
Abstract: Quality reviews of managed care programs often integrate qualitative data from various stakeholders into evaluation efforts in an effort to create policy recommendations. While mixed-method designs are common, the way in which divergent perspectives are presented or reconciled is often dictated by various rules, regulations and contract requirements. Structural hierarchies and power dynamics can complicate the attempt to represent multiple perspectives gathered through a mixed method design in a way that appropriately informs practice management, quality of care, access to services, cultural competence, consumer empowerment, fiscal policies, and management. This paper explores the intersection of the theory behind this kind of approach to quality review with its practice, focusing on the case of external quality review of managed mental health plans in California. In addition, the paper explores the benefits of applying different theoretical lenses, such as Feminist Evaluation, as approaches to understanding such quality reviews and resultant policy creation.
Evaluability Assessment and Empowerment Evaluation Techniques in Program Level Logic Model Development
Presenter(s):
Patricia K Freitag,  Academy for Educational Development,  patfreitag@comcast.net
Abstract: Evaluability assessment techniques included interviews of principal investigators that revealed underlying and implicit theories of action. Empowerment evaluation techniques provided qualitative input to further the development of a program level logic model for the NSF-Gender Research in Science and Engineering Program.

Session Title: Useful Interactive Strategic Planning Tools for Nonprofit Evaluators
Skill-Building Workshop 706 to be held in the Quartz Room Section A on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Pauline Wilson,  Independent Consultant,  paulenewilson@btinternet.com
Kathryn Braun,  University of Hawaii,  kbraun@hawaii.edu
Abstract: Strategic planners working with non-profits use a number of interactive tools to engage the members of a non-profit in reviewing their internal and external context, selecting key stakeholders to involve in the process, and defining their strategic objectives. Tools include stakeholder mapping, SWOT (strengths, weaknesses, opportunities, and threats) analysis, force field analysis, PESTLE (political, economic, social, technological, legal, and environmental) analysis, and timelines, as well as brainstorming to develop a common understanding of the organization's purpose or mission. Many of these same tools are used by evaluators who conduct participatory evaluations. When used well, they engage participants in an analysis that ensures better-quality strategic plans and/or evaluations. This workshop will use a real-life case to practice two of the tools. This will be followed by a discussion of their use and value for evaluators and the relevance of practical tools to evaluation policy.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Internal and External Evaluation Issues: Where Do You Draw the Line?
Roundtable Presentation 707 to be held in the Quartz Room Section B on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG and the Government Evaluation TIG
Presenter(s):
Samuel Held,  Oak Ridge Institute for Science and Education,  sam.held@orau.org
Pamela Bishop,  Oak Ridge Institute for Science and Education,  pbaird@utk.edu
Abstract: The Department of Energy’s Office of Workforce Development for Teachers and Scientists funds five national experiential learning programs, which provide authentic research opportunities to their participants, with the ultimate goal of increasing the number of highly qualified individuals entering the DOE science, technology, engineering, and math workforce. As contracted evaluators for these workforce development programs, the presenters are in a unique position of being both internal (at the DOE enterprise level) and external (at the local implementation level) evaluators. We are interested in sharing the advantages and disadvantages, and learning about those of other evaluators, in the dual internal/external role.
Roundtable Rotation II: What Are the Best Practices to Use in Evaluating Initiatives to Increase the Diversity of the Scientific and Technical Workforce?
Roundtable Presentation 707 to be held in the Quartz Room Section B on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG and the Government Evaluation TIG
Presenter(s):
Jack Mills,  Independent Consultant,  jackmillsphd@aol.com
Abstract: This roundtable session will be devoted to sharing best practices in evaluating programs designed to increase minority participation in the fields of science, technology, math and engineering. Increasing the diversity of the scientific and technical work force is a national priority. Developing evaluations that yield conclusive results has proved difficult. Is this due to the design of programs themselves, or do we need advances in both evaluation theory and practice? The session leader is the evaluation consultant to The Society for Advancement of Chicanos and Native Americans in Science (SACNAS), which is recognized for its programs for minority students and young scientists. This session will describe the lessons learned in using evaluation results to evolve SACNAS services. Round table participants are requested to share their experiences, insights and best practices in evaluating similar programs. The resulting dialogue will spark new ideas and next steps in advancing our evaluation practice.

Session Title: Evaluation of Multi-Site Health Programs
Multipaper Session 708 to be held in Room 102 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Katrina Bledsoe,  Walter R McDonald and Associates Inc,  kbledsoe@wrma.com
Bridging the Gap Between Policy and Practice: Health Education Program Evaluation in South Carolina
Presenter(s):
Grant Morgan,  University of South Carolina,  morgang@mailbox.sc.edu
Vasanthi Rao,  University of South Carolina,  vasanthiji@yahoo.com
Ching Ching Yap,  University of South Carolina,  ccyap@gwm.sc.edu
Christine Beyer,  South Carolina Department of Education,  cbeyer@ed.sc.gov
Abstract: The South Carolina Department of Education, the University of South Carolina, and Metalogic, Inc. have partnered to conduct a pilot health and safety education program assessment in order to address the current gap between health education policy and practice. The two-fold program assessment involved 1) a program implementation survey taken by health and safety educators, aimed at identifying school- and classroom-level predictors of student health and safety achievement, and 2) an online student assessment. Results of the program assessment will be used in the future to develop a mandatory, statewide health and safety testing program. Therefore, the evaluation was designed to produce a feedback loop in which policy and practice are informed simultaneously. The researchers will present the processes and results of the program assessment as well as implications for future endeavors in health education.
The Stakeholder Difference: Designing an Evaluation for Relevance and Use
Presenter(s):
Tracy Patterson,  Center for Creative Leadership,  pattersont@ccl.org
Julia Jackson Newsom,  University of North Carolina Greensboro,  j_jackso@uncg.edu
Abstract: The Center for Creative Leadership, with the support and collaboration of the Robert Wood Johnson Foundation, is implementing a comprehensive, multi-phase leadership development initiative for nearly 300 emerging leaders in community-based health and health-related nonprofits over the next three years. The initiative is targeted within 9 communities in the U.S. serving vulnerable populations and includes action learning sponsorship and goal development. This paper focuses on the design of the evaluation for this initiative, including 1) how the evaluation design process was integrated into the design of the initiative, 2) how a logic model and dashboard were used to focus the design of the evaluation, and 3) how key stakeholders were involved at multiple stages in the design of the evaluation to clarify best use of limited resources, make decisions about methods and measurement, and ensure the evaluation’s relevance and use.
Food Security: A Systematic Evaluation of the Community Food Action Initiative in British Columbia
Presenter(s):
Kim Van der Woerd,  Facilitate This!,  kvanderwoerd@shaw.ca
Jim Mactier,  Facilitate This!,  ruralrootsbb@shaw.ca
Abstract: The purpose of this paper is to discuss a systematic evaluation of the Community Food Action Initiative (CFAI) in British Columbia, Canada. The CFAI is a health promotion initiative to support food security and improve access to healthy foods through the implementation of community, regional, and provincial plans and activities, using a population health approach. The process evaluation included determining whether the CFAI accomplished its objectives, along with a systems-level analysis of how the program was administered. In total, 155 diverse community-based projects were funded throughout the province. Participants in the evaluation included 19 government-level program deliverers, 67 (of the 155) leaders of the community-based projects, and 178 community participants. Evaluation results demonstrated an overall achievement of program objectives and illuminated the difficulty of evaluating a complex community initiative. The systematic evaluation revealed the need to build capacity and/or consistency for integrating outcome evaluation practice into program management.

Session Title: Examination of the Role of Evaluation in Denver's Ten Year Plan to End Homelessness: A Review of Evaluation on Policy and Practice
Panel Session 709 to be held in Room 104 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Katie Page,  OMNI Institute,  kpage@omni.org
Abstract: When the Mayor of Denver adopted the issue of homelessness, accountability and evaluation emerged as priorities. No one predicted how dramatically evaluation results would influence policies and practices at every level, from government contracting to the use of data at local non-profits. This session will explore three different perspectives on evaluation's impact on policy and practice as they relate to Denver's homelessness initiative. The panel will discuss how Denver addressed these issues and how government, evaluation, and non-profit practices were affected. As evaluators are included in shaping the way that cities address social problems, issues of neutrality, research rigor, ethics, and data collection and analysis techniques must be carefully examined. Participants will hear about the experience of this effort and engage in honest discussion of the challenges faced by cities focused on systemic social problems, ways to address these problems, and how this work affects the field of evaluation.
Government Perspective and Lessons Learned from Incorporating Evaluation in a Major City Wide Initiative
Jamie Van Leeuwen,  Denver Human Services,  jamie.vanleeuwen@denvergov.org
Jamie Van Leeuwen, who leads Denver's Road Home, the Mayor's Ten Year Plan to End Homelessness, will present his experience and lessons learned about building an aggressive evaluation plan into a major city-wide initiative. Dr. Van Leeuwen will discuss the development of a community-led planning process involving members from the public, private, and nonprofit sectors, as well as formerly homeless individuals, who provide advice and consultation. Through measurable goals, objectives, and outcomes, the evaluation was intended to ensure accountability for funding; ultimately it provided this as well as a means to improve government infrastructure, build evaluation capacity, and strengthen relationships with critical stakeholders.
Evaluators Perspective and Lessons Learned From Incorporating Evaluation in a Major City Wide Initiative
Katie Page,  OMNI Institute,  kpage@omni.org
OMNI Institute was selected as the evaluator for Denver's Road Home, the Ten Year Plan to End Homelessness initiative. Katie Page, the evaluation project lead, will present on the implementation of the evaluation plan, focusing on how evaluation activities surfaced the need to change current policy and practices in government, in local service agencies, and within the evaluation design itself in order to provide meaningful evaluation. Importantly, this involved coordination among various stakeholders and partners, as well as a community-wide commitment to standardized, systemic evaluation tools and methodology. Ms. Page will discuss how this work surfaced issues of neutrality, research rigor, ethics, and data collection and analysis techniques, and how Denver responded to these issues.
Local Agency Perspective and Outcomes of a Detoxification Program Targeting Homeless Individuals
Mark Wright,  Denver Cares,  mark.wright@dhha.org
Mark Wright, Director of Denver C.A.R.E.S., will present the evaluation findings of his Denver's Road Home project. Denver C.A.R.E.S. became Colorado's first public addiction treatment and detoxification program in 1975 and has been providing safe detoxification for public inebriates for over 30 years. Over 67% of the 23,000 admissions to Denver C.A.R.E.S. reported being homeless. In support of Denver's Ten-Year Plan to End Homelessness, Denver C.A.R.E.S. began working with various organizations to add 48 residential treatment slots and 50 housing vouchers aimed at reducing the overall detox admissions of homeless clients. At the end of 2007, a total of 109 clients had passed the one-year mark following their enrollment in substance abuse treatment. Detox admissions for these clients totaled 4,439 in the year prior to entering treatment. Their admissions during the year after enrollment totaled 1,181, a decrease of 3,258 admissions, or a 73% reduction.
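The reduction figures reported above follow directly from the admission counts; a quick check using only the numbers in the abstract:

```python
# Arithmetic behind the reported drop in detox admissions for the 109 clients.
before, after = 4439, 1181
decrease = before - after                # 3,258 fewer admissions
pct_reduction = decrease / before * 100  # roughly 73%
print(decrease, round(pct_reduction))
```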

Session Title: Setting Conditions for Meaningful Evaluation: Human Systems Dynamics Theory as a Lens for Program Theory and Theory of Change
Panel Session 710 to be held in Room 106 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Systems in Evaluation TIG and the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Marah Moore,  i2i Institute,  marah@i2i-institute
Abstract: The proposed panel session explores the relevance, utility, and application of program theory and Theory of Change (TOC) in evaluation through the lens of Human Systems Dynamics (HSD) theory. This panel discussion will present a conceptual argument for the important role of program theory and TOC and a description of how the HSD theoretical framework can enhance the use of these approaches in an evaluation context. Exemplars from the experience of the panelists will illustrate the concepts presented. The discussion will touch on issues related to the relationship between evaluation and programming, and how to support a dynamic partnership between the evaluator and the program staff that fosters a shared quest for sustainable change.
Program Theory as 'Walkable Path' Toward Meaningful Evaluation
Elena Polush,  Iowa State University,  elenap@iastate.edu
Elena Polush is a postdoctoral evaluator in the Office of Educational Innovation and Evaluation in the College of Education at Kansas State University. She is an Associate of the Human Systems Dynamics Institute. Her 2007 dissertation focused on the development of a comprehensive evaluation for the US Department of Agriculture's competitive grants program in higher education. She researched program theory and used the analysis of narrative discourses to articulate a theoretical model (i.e., program theory) of that grants program, employing a quantitative content analysis and qualitative oral history interviews within a mixed-methods inquiry. Polush will share her conceptual perspectives about the program theory approach in evaluation, which evolved from a 'narrative - story as mode of knowing' to a 'walkable path toward meaningful evaluation'.
New Visions for Theory of Change: Complexity, Clarity and Understanding for Programs Through Human Systems Dynamics Systems Theory
Jane Maland Cady,  Criando Research and Evaluation Services,  janemc@mchsi.com
Dr. Jane Maland Cady has been working in program evaluation for nearly 20 years, internationally and with multicultural initiatives in the USA. She is an Associate of the Human Systems Dynamics Institute. Her evaluation work ranges from participatory evaluations, to evaluation capacity building, to evaluation of multi-level systems initiatives. She is a Fulbright Senior Scholar who taught educational evaluation in Brazil. Her current systems evaluations have pushed her to consider a program or initiative's Theory of Change (TOC) model, view it through an HSD lens, and then design data collection methods that are consistent with the program TOC. She deepens the understanding of using such methods by examining the deeper influences of evaluation theory, program theory, and evaluation worldviews.
Building Bridges Between Evaluation and Programming: Human Systems Dynamics as a Theoretical Framework for Theory of Change Work
Marah Moore,  i2i Institute,  marah@i2i-institute
Marah Moore is founder and director of i2i Institute, a consulting firm committed to strengthening the quality of community-based work through evaluation, planning, training, and technical assistance. She is an Associate of the Human Systems Dynamics Institute. Her background in community planning has shaped her 15-year evaluation practice and encourages her to embrace the gray area between evaluation and programming. Ongoing exploration of theories of change has been central to her program- and initiative-level work, and HSD has provided a dynamic framework that informs and supports the hands-on style and participatory methods that Ms. Moore utilizes in her practice. For this panel discussion, Ms. Moore will bring insights related to her experiences applying an HSD framework to three statewide initiatives in New Mexico (early childhood, youth development, and food stamp outreach) as well as a US-Russia initiative in Vladivostok focused on abandonment prevention through development of an early intervention infrastructure.

Session Title: Out of Africa: Evaluation Case Studies
Multipaper Session 711 to be held in Room 108 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Mary Crave,  University of Wisconsin-Extension,  crave@conted.uwex.edu
Constructing A Baseline Using Geographic Information Systems and Household Surveys: An Example from Angola
Presenter(s):
Mary Worzala,  Academy for Educational Development,  mworzala@aed.org
Hugo Melikian,  Academy for Educational Development,  gmelikian@aed.org
Abstract: International development programs are plagued by a dearth of accurate information from which to construct an appropriate baseline. With client-determined standardized indicators increasingly used to assess the outputs and outcomes of programs, establishing an accurate baseline and monitoring framework is crucial to demonstrating results. Using the case of an electricity access program in Angola, the authors show how geographic information systems (GIS) combined with household surveys can be used to construct a baseline and to develop a program monitoring and evaluation plan. The GIS work in particular can be used to tell a visual story that vividly illustrates program impact. A secondary theme of the paper is how client-defined indicators drive program activities, whether or not they are appropriate measures of program impacts. Using GIS and household surveys, the authors constructed a broadly defined baseline that included client indicators, an approach that can serve multiple audiences and purposes.
Evaluating One Village at a Time: Using Theory-Driven Evaluation in a Multi-site Intervention in Uganda
Presenter(s):
Susana Bonis,  Claremont Graduate University,  susana.bonis@cgu.edu
Rebecca Eddy,  Claremont Graduate University,  rebecca.eddy@cgu.edu
Abstract: How does one design an evaluation for a project implemented half a world away, in a region with diverse urgent needs, multi-layered politics, and linguistic and cultural differences, and without formal data collection systems or technology for communication? Evaluators of international development programs regularly confront these challenges as they strive to design a rigorous, culturally appropriate, and feasible evaluation meeting stakeholder needs. This paper discusses the application of theory-driven evaluation to develop an evaluation of the Kyabasaijja Village Project in Uganda, an initiative of Village Network Africa. ViNA promotes growth in African villages by “healing one village at a time” through health care, education, agriculture, and animal husbandry. The paper will discuss successes, challenges, and lessons learned in an effort to promote dialogue about ways to improve the design and implementation of evaluation in the challenging context of international development.
Putting Policy into Practice: Lessons Learnt and Challenges Encountered in Conducting a Participatory Evaluation in the Democratic Republic of Congo, Somalia and Uganda
Presenter(s):
Liya Aklilu,  Independent Consultant,  laklilu@yahoo.com
Abstract: This presentation highlights the challenges encountered and lessons learnt from implementing an international non-governmental organization’s monitoring and evaluation framework. The framework’s principles state that evaluation should be about learning, participatory, connect the concerns, interests and problems of stakeholders to the larger context, generate knowledge through collective planning and reflection, involve power sharing in decision making between the stakeholders and evaluator, and respect and incorporate stakeholders’ knowledge. A participatory midterm evaluation, built on these principles, enabled stakeholders in the Democratic Republic of Congo, Somalia and Uganda to contribute to defining the evaluation, developing instruments, collecting and analyzing data, discussing findings and identifying solutions to concerns that emerged from the evaluation. Each country’s challenges and lessons learnt varied and ranged from balancing organizational, donor/funder and stakeholder needs to finding ways to meaningfully engage stakeholders accustomed to being on the periphery to navigating socioeconomic and cultural factors that enhanced or impeded participation.
An Assessment of the African Development Bank’s Country Assistance Evaluation Methodology and Suggestions for Improvement
Presenter(s):
Foday Turay,  African Development Bank,  f.turay@afdb.org
Colin Kirk,  African Development Bank,  c.kirk@afdb.org
Abstract: This article assesses the methodological approaches for country assistance evaluations (CAEs) used by the Operations Evaluation Department (OPEV) of the African Development Bank (AfDB) vis-à-vis the good practice standards for CAEs proposed by the Evaluation Cooperation Group (ECG) of the Multilateral Development Banks (MDBs). Towards this end, it develops and uses an assessment framework to identify the gap between current OPEV CAE practice and the recommended ECG standard, and then provides advice on addressing the resulting methodological challenges.

Session Title: Evaluation Challenges of Studying Inmates in Supermax Confinement
Panel Session 712 to be held in Room 110 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Crime and Justice TIG
Chair(s):
Maureen O'Keefe,  Colorado Department of Corrections,  maureen.okeefe@doc.state.co.us
Discussant(s):
Jeffrey Metzner,  University of Colorado Denver,  jeffrey.metzner@uchsc.edu
Abstract: Each of three panelists will share their perspective on the challenges they faced in conducting an evaluation of the psychological effects of supermax confinement. Although supermax confinement is perhaps the most critical issue facing correctional systems today - particularly as it relates to mentally ill inmates - there is only limited research on the topic, and the literature fails to answer the question of whether psychological harm is caused to individuals who are placed in long-term solitary confinement. In Colorado, we are undertaking a longitudinal study to assess whether change occurs in individuals over time as a consequence of their environment and/or mental illness and, if so, for whom and in what ways. In this panel, we will discuss the political, methodological, and logistical challenges of this study as well as the project development, organization, and adaptations that were necessary for a successful evaluation.
Navigating Institutional Politics
Maureen O'Keefe,  Colorado Department of Corrections,  maureen.okeefe@doc.state.co.us
The political challenges of engaging in such research require that evaluators gain organizational support, seek funding, and develop collaborative partnerships. In this presentation, we will examine how support was gained given the serious legal and cost implications that could follow from the study results, and how the project weathered changes in heads of state, administrators, and prison operations. In order to maximize funding from external sources, the researchers cultivated administrator and line-staff buy-in to collect research data from multiple sources such as staff, inmates, and official records. Collaborative partnerships were developed for carrying out the research activities, as well as for creating an advisory board of prison officials, academicians, and human rights advocates to promote diverse perspectives and to limit potential researcher bias. Conducting prison-based research is dynamic, with ever-changing environments and organizational attitudes; these conditions are endemic to evaluations carried out in real-world settings.
Methodological Challenges and Choices
Kelli Klebe,  University of Colorado Colorado Springs,  kklebe@uccs.edu
Conducting quality research in applied settings has long been a challenge for evaluators. To increase the internal validity of such research conclusions, researchers need to carefully consider the sampling of participants, the identification of proper comparison groups, the selection of appropriate measures from multiple sources to rule out response biases and provide corroborating evidence, and the appropriate design to investigate the phenomenon of interest. A longitudinal design within a context of high security prisons, and with a population who may move across facilities and conditions, presents additional challenges. Complications, including varying lengths of time between measurement periods, predictors that change across time as well as time-invariant predictors, and a varying number of data collection periods, needed to be considered when selecting appropriate statistical analysis techniques. We will describe the choices made to meet these methodological challenges concerning both study design and data analytic strategies.
Logistics of High Security Prisons
Alysha Stucker,  University of Colorado Colorado Springs,  alysha.stucker@doc.state.co.us
The logistics of collecting research data in supermax and other high security facilities present daily challenges. Among these challenges is coordination with correctional staff, who notify the researcher of offender placement in supermax confinement, schedule testing sessions, enable facility access, and complete observational research data. Within supermax confinement, security concerns remain the foremost priority. However, security measures impede offender movement as well as the researcher's ability to interact with inmates to administer psychological tests. Ultimately, the success of the project depends upon the researcher's ability to gain the offenders' trust while simultaneously dealing with their inappropriate behavior and protecting subjects' confidentiality in a highly monitored setting. Data collection in a high security setting requires a balance between the needs of the facility and the needs of the research.

Session Title: Meeting Challenges of Evaluating and Sustaining Research Centers and Institutes
Multipaper Session 713 to be held in Room 112 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Brian Zuckerman,  Science and Technology Policy Institute,  bzuckerm@ida.org
A "Meta-Evaluation" of Collaborative Research Programs: Definitions, Program Designs, and Methods
Presenter(s):
Christina Viola Srivastava,  Science and Technology Policy Institute,  cviola@ida.org
P Craig Boardman,  Science and Technology Policy Institute,  pboardma@ida.org
Brian Zuckerman,  Science and Technology Policy Institute,  bzuckerm@ida.org
Abstract: Many federal-level program solicitations in science and technology focus on promoting research collaboration, especially multidisciplinary and cross-institution collaboration. Examples include a variety of Centers-style programs with different features as well as a growing number of more traditional research and training grants. While considerable research effort has been devoted to understanding research collaboration as a phenomenon, the implications of those findings with respect to program logic, management, and evaluation have not yet been fully explored. This paper will employ a “meta-evaluation” approach to begin addressing these needs. Using a comparative approach, it will draw upon several recent evaluations of research programs at the National Institutes of Health and the National Science Foundation to identify the multiple practical approaches to promoting “collaboration” and the variation in measured outcomes. Implications for evaluation methodology are discussed.
Predictors of Cooperative Research Center Post-Graduation Survival and Success: An Update
Presenter(s):
Lindsey McGowen,  North Carolina State University,  lindseycm@hotmail.com
Denis Gray,  North Carolina State University,  denis_gray@ncsu.edu
Abstract: Industry/University Cooperative Research Centers (I/UCRCs) are supported by funding from NSF but, like other center programs, are expected to achieve self-sufficiency after a fixed term (ten years). However, there is little evidence about the extent to which government-funded programs are able to make this transition. This study attempts to identify the factors that predict center sustainability after graduation from NSF funding. Archival data and qualitative interviews with Center Directors and outside Evaluators are used to explore program sustainability of I/UCRCs post graduation from initial grant support. The study examines environmental, organizational, program, and individual level constructs to predict center status, fidelity to the I/UCRC program model, and sustainability in terms of continued infrastructure, program activities, and outcomes. The results will be used to inform the transition process for Centers currently funded under the I/UCRC program as well as to test the applicability of program sustainability theory developed in other content areas to the case of cross-sector cooperative research programs.
From Evaluation Framework to Results: Innovative Approaches Piloted With the Interim Evaluation of the Regional Centers of Excellence for Biodefense and Emerging Infectious Diseases Research (RCE) Program
Presenter(s):
Kathleen M Quinlan,  Concept Systems Inc,  kquinlan@conceptsystems.com
Abstract: This paper focuses on four key dimensions of the conduct of an interim, descriptive evaluation of the National Institute of Allergy and Infectious Diseases’ Regional Centers of Excellence (RCE) for Biodefense and Emerging Infectious Diseases Research Program. The paper will highlight 1) participation, especially in co-authoring an evaluation framework of success factors and defining the major elements of the interim evaluation; 2) responding to time and resource constraints by identifying simple measures, extensively mining existing data, and engaging the ten funded regional research Centers in completing a highly structured Information Request; 3) being deliberate about the unit of analysis to ensure the evaluation addressed the Program as a whole while providing sufficient Center-level analyses to inform improvement; and 4) blending qualitative and quantitative data. Innovative approaches developed in this project to address common challenges of this type of evaluation inform the emerging field of evaluation of large-scale biomedical research initiatives.

Session Title: Measurement: Issues and Methods
Multipaper Session 714 to be held in Room 103 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Karen Larwin,  University of Akron,  drklarwin@yahoo.com
Discussant(s):
Karen Larwin,  University of Akron,  drklarwin@yahoo.com
Getting Into the Black Box: Using Factor Analysis to Measure Implementation
Presenter(s):
Xiaodong Zhang,  Westat,  xiaodongzhang@westat.com
Atsushi Miyaoka,  Westat,  atsushimiyaoka@westat.com
Abstract: Although regarded as a critical aspect of program evaluation, implementation is also known as a “black box” to program evaluators. This situation is compounded by a lack of methodological agreement on how to accurately measure implementation. The paper begins with a brief overview of different approaches to quantifying implementation fidelity. Drawing on our experience with the evaluation of Reading First-Ohio, we present a factor-analysis approach to measuring program implementation. We also discuss the use of implementation scales and the theoretical and practical implications of the method.
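To make the factor-analysis idea concrete, here is a minimal sketch of how survey items on classroom implementation might be reduced to a small number of fidelity factors. The item structure and data are simulated assumptions; this is not the Reading First-Ohio analysis itself.

```python
# Minimal sketch: factor analysis of simulated implementation-survey items.
# Item names, the two-factor structure, and the data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_teachers = 200
# Two hypothetical latent fidelity dimensions generating eight observed items.
latent = rng.normal(size=(n_teachers, 2))
loadings = np.array([[0.8, 0.1], [0.7, 0.2], [0.9, 0.0], [0.6, 0.3],
                     [0.1, 0.8], [0.2, 0.7], [0.0, 0.9], [0.3, 0.6]])
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n_teachers, 8))
df = pd.DataFrame(items, columns=[f"item_{i+1}" for i in range(8)])

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(df)                          # per-teacher factor scores
print(pd.DataFrame(fa.components_.T, index=df.columns,
                   columns=["factor_1", "factor_2"]).round(2))
# The factor scores could then serve as continuous implementation measures
# in later outcome models.
```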
Necessary Questions for Evaluators to Consider Before, During, and After Conducting Inter-Rater Reliability Analyses
Presenter(s):
Dane Christian,  Washington State University,  danechristian@mail.wsu.edu
Michael Trevisan,  Washington State University,  trevisan@wsu.edu
Denny Davis,  Washington State University,  davis@wsu.edu
Steve Beyerlein,  University of Idaho,  sbeyer@uidaho.edu
Phillip Thompson,  Seattle University,  thimpson@seattleu.edu
Kunle Harrison,  Tuskegee University,  harrison@tuskegee.edu
Robert Gerlick,  Washington State University,  robertgerlick@wsu.edu
Abstract: As part of an evaluation for Capstone Engineering Design Course Assessment Development, inter-rater (IR) reliability analyses were conducted to ensure consistency in engineering faculty ratings. Most methods for calculating IR reliability are not without problems. The selection of an IR reliability index ought to be based on its properties and assumptions, the level of measurement of the variable for which agreement is to be calculated, and the number of raters in the analysis. Whichever index is used, researchers need to explain why their choice of index is appropriate given the characteristics of the data being evaluated. This paper considers conceptual and methodological issues among some common IR indices, and the choice of percent agreement for the Capstone project is explained in light of the selection criteria. Some questions for evaluators to consider before, during, and after conducting IR reliability analyses are offered.
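As a simple illustration of why the choice of index matters, the sketch below contrasts raw percent agreement with Cohen's kappa for two hypothetical raters; the ratings are invented and do not come from the Capstone project.

```python
# Illustrative comparison of two inter-rater indices on invented ratings.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([3, 4, 4, 2, 5, 3, 4, 4, 2, 3])     # hypothetical scores
rater_b = np.array([3, 4, 3, 2, 5, 3, 4, 4, 3, 3])

percent_agreement = np.mean(rater_a == rater_b)        # proportion of exact matches
kappa = cohen_kappa_score(rater_a, rater_b)            # chance-corrected agreement

print(f"Percent agreement: {percent_agreement:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
# Kappa corrects for chance agreement but treats categories as nominal and
# assumes two raters; ordinal scales or more raters may call for weighted
# kappa or an intraclass correlation, which is why selection criteria matter.
```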
The Implicit Association Test (IAT): A Tool for Evaluation?
Presenter(s):
Joel Nadler,  Southern Illinois University at Carbondale,  jnadler@siu.edu
Nicole Cundiff,  Southern Illinois University at Carbondale,  karim@siu.edu
Gargi Bhattacharya,  Southern Illinois University at Carbondale,  gargi@siu.edu
Steven Midleton,  Southern Illinois University at Carbondale,  scmidd@siu.edu
Abstract: The Implicit Association Test (IAT) is a method for measuring implicit bias using two-group comparisons. While general measures of implicit biases (automatic reactions) have been present in the psychological literature for quite some time, the IAT is relatively new (Greenwald, McGhee, & Schwartz, 1998). Since its inception, IAT measures have been used extensively in stereotype and prejudice research. IAT results have been shown to be only weakly related to explicit measures when social desirability concerning the target stimuli is involved. However, when there is no socially expected “right” answer, there is a stronger relationship between the two methodologies. Issues of validity and reliability concerning the IAT have been actively discussed in numerous articles since its inception. The unasked question is whether a measure assessing automatic, implicit, or “unconscious” reactions can be of use in consulting and evaluation. Theoretical application, practical concerns, and possible appropriate uses are discussed.
Policy and Evaluation: Developing a Tool to Synthesize Evidence in Education Partnerships
Presenter(s):
Mehmet Ozturk,  Arizona State University,  ozturk@asu.edu
Abstract: For several years, a substantial emphasis on evidence-based educational practice has been emerging among researchers and policy-makers. However, much less attention has been given to identifying effective ways to produce the needed evidence. Developing effective techniques and tools that can help education researchers and evaluation experts make sense of the available evidence is critical. In this context, effective synthesis of evaluation evidence has become crucial. This paper discusses the development of effective tools for synthesizing evidence within education partnerships. The Program Effectiveness Scale/Rating System, which was developed to help policy-makers better understand the impact of programs, strategies, and innovations, is presented as a practical example. By synthesizing and placing evaluation results into a format that is easy to understand, the Program Effectiveness Scale can help policy-makers make objective judgments about the worth of a program or strategy.

Session Title: Evaluating Systems Change Efforts
Panel Session 715 to be held in Room 105 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Julia Coffman,  Harvard Family Research Project,  jcoffman@evaluationexchange.org
Abstract: Efforts to improve health, education, or human service systems are complex and notoriously hard to evaluate. They involve multiple programs and players and feature outcomes at multiple levels. Additionally, they are long-term efforts that evolve over time in response to the political and economic contexts around them. These complexities and others place systems change efforts directly outside of the more familiar program evaluation comfort zone. Consequently, less consensus exists about how to evaluate them. Advancing the conversation about systems change evaluation is critical, however, as nonprofits and foundations increasingly see systems change as essential for achieving large-scale results for individuals and communities. This session will introduce a new framework that attempts to clarify ideas, approaches, and language about evaluating systems change efforts. It also will feature presentations on the evaluations of two systems change efforts (to improve early childhood development systems) to demonstrate how the framework can be applied.
A Framework for Evaluating Systems Change Efforts
Julia Coffman,  Harvard Family Research Project,  jcoffman@evaluationexchange.org
The presentation will introduce a new framework for evaluating efforts to build or reform health, education, or human service systems. The framework helps evaluators, funders, and individuals implementing systems change efforts break down the complex and often overwhelming task of evaluating systems change efforts into more manageable parts and is designed to support both theory of change development and evaluation planning. It clarifies the kinds of activities or functions that complex systems change efforts perform and the types of outcomes and impacts they aim to accomplish. The framework also presents a menu of methodological options for evaluating systems change efforts. A full paper explaining the framework will be available for session participants.
Evaluating The Build Initiative
Michelle Stover-Wright,  Child and Family Policy Center,  michellesw@cfpciowa.org
This presentation will demonstrate how the framework can be applied to an actual systems change effort, with The Build Initiative and its evaluation serving as the example. The Build Initiative is a foundation-funded multi-state effort to ensure that children from birth through age five are safe, healthy, eager to learn, and ready to succeed in school. Build supports states' efforts to build comprehensive and coordinated early childhood systems of programs, policies, and services that work together to achieve positive outcomes for young children and their families. This presentation will show how the framework can be applied to The Build Initiative in three ways. It shows how the framework can be used to: 1) map Build's focus areas (at an overall initiative level, across Build states), 2) define Build's relevant theory of change elements, and 3) identify Build's existing and potential evaluation options.
Evaluating Kids Matter in Washington State
Kasey Langley,  Organizational Research Services,  klangley@organizationalresearch.com
This presentation will offer a second example of how the framework can be applied to a foundation-funded systems change effort, using Kids Matter in Washington State as the example. Kids Matter is a collaborative and comprehensive plan for building the early childhood system in Washington State to improve outcomes for children. Kids Matter supports the efforts of local and state stakeholders to coordinate, collaborate, and integrate efforts that will lead to children being healthy and ready for school. The presentation will use the framework as a lens for discussing the longitudinal evaluation of efforts to achieve Kids Matter goals.

Session Title: Program Theory in Complex Evaluations
Multipaper Session 716 to be held in Room 107 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Allan Porowski,  ICF International,  aporowski@icfi.com
Using Theory of Change to Guide Multi-site, Multi-level, Mixed Method Evaluations
Presenter(s):
Andrea Hegedus,  Northrop Grumman Corporation,  ahegedus@cdc.gov
Abstract: Evaluators are often asked to design multi-site, multi-level evaluations of complex social systems. This type of assessment requires adapting standard methods of program evaluation to larger scales as well as unifying all efforts to achieve outcomes. One tool for such a complex approach is theory of change evaluation. Theory of change is an approach that links activities, outcomes, and contexts in a way that maximizes the attribution of interventions to outcomes. This presentation will define the theory of change approach, discuss its various components, describe the use of theory to address different levels of the evaluation model and to specify outcomes, and show how to choose appropriate methodologies to answer evaluation questions. As a result, theory of change can become an effective tool to help evaluators design complex, multi-faceted evaluations, ground activities in evidence-based theories, and increase the rigor of the evaluation process.
FY05 and FY06 Real Choice Systems Change Grants: Evaluating Medicaid Systems Transformation Across 18 States by Linking the Initiative-Level Evaluation to Local Grant Evaluation Efforts
Presenter(s):
Yvonne Abel,  Abt Associates Inc,  yvonne_abel@abtassoc.com
Deborah Walker,  Abt Associates Inc,  deborah_walker@abtassoc.com
Meg Gwaltney,  Abt Associates Inc,  meg_gwaltney@abtassoc.com
Meredith Eastman,  Abt Associates Inc,  meridith_eastman@abtassoc.com
Margaret Hargreaves,  Abt Associates Inc,  meg_hargreaves@abtassoc.com
Susan Flanagan,  Abt Associates Inc,  sflanagan@westchesterconsulting.com
Abstract: Between FY2005 and FY2006, the Centers for Medicare and Medicaid Services (CMS) awarded Systems Transformation (ST) Grants to 18 states to support states’ efforts to transform their infrastructure to promote more coordinated and integrated long-term care and support systems that serve individuals of all ages and disabilities. The Grants are structured into 1) a start-up phase that requires grantees to participate in a nine-month Strategic Planning process and to complete evaluation plans that link grant goals and objectives with outcomes and 2) an implementation phase during which grantees document their progress through web-based reports completed at six-month intervals. Abt Associates, as the initiative-level evaluator, designed data collection tools that collect consistent information across these key grant phases. This presentation highlights the influence of strategic planning on grant design and implementation, identifies factors that facilitate and challenge grant implementation, and links outcomes from local grant evaluations to the initiative-level evaluation.
Evaluation of a Multi-Site, Multi-Program Obesity Prevention Initiative: Incorporating Site-Level Measurement Tools and Program Theory into a Systematic Assessment of Initiative Strengths and Challenges
Presenter(s):
Zachary Birchmeier,  University of Missouri,  birchmeierz@missouri.edu
Nathaniel Albers,  University of Missouri,  nathaniel.albers@gmail.com
Caren Bacon,  University of Missouri,  baconc@missouri.edu
Dana N Hughes,  University of Missouri,  hughesdn@missouri.edu
Jill Nicholson-Crotty,  University of Missouri,  nicholsoncrottyj@missouri.edu
David Valentine,  University of Missouri,  valentined@missouri.edu
Charles Gasper,  Missouri Foundation for Health,  cgasper@mffh.org
Abstract: Chen’s (2005) Program Theory provides a framework for evaluating program success (i.e., the Change Model) as a function of program capacity elements (i.e., the Action Model) that are nested within the type and readiness of the target community. Applying the theory to an initiative with diverse program content and outcomes required defining and assessing program success and capacity across sites, as well as the characteristics of each target community. Drawing on interviews and web-based surveys of program staff, observations from site visits, programs’ internal evaluation data, and interviews with community informants, each site was scored on success, capacity, and community elements using modified versions of existing tools. Regressions were used to account for between-site variability. The system helped to identify the elements of capacity possessed by the most successful sites, and to translate those successes into recommendations for other sites nested in similar contexts.
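A minimal sketch of the kind of site-level regression the abstract mentions appears below. The variable names, scores, and single-equation OLS specification are assumptions made for illustration; they are not the initiative's actual measures or model.

```python
# Hypothetical site-level regression: success scores modeled as a function of
# capacity and community-readiness scores. All values are invented.
import pandas as pd
import statsmodels.formula.api as smf

sites = pd.DataFrame({
    "success":   [3.2, 4.1, 2.8, 3.9, 4.5, 2.5, 3.7, 4.0],
    "capacity":  [2.9, 4.0, 2.5, 3.8, 4.6, 2.2, 3.5, 3.9],
    "readiness": [3.0, 3.5, 2.8, 3.6, 4.2, 2.6, 3.3, 3.8],
})

model = smf.ols("success ~ capacity + readiness", data=sites).fit()
print(model.summary())  # coefficients suggest which elements track site success
```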

Session Title: Collaborative Evaluation Communities In Urban Schools: How Hierarchical Educational Policies Shape Its Implementation
Multipaper Session 717 to be held in Room 109 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Frances Lawrenz,  University of Minnesota,  lawrenz@umn.edu
Discussant(s):
Douglas Huffman,  University of Kansas,  huffman@ku.edu
Kelli Thomas,  University of Kansas,  kthomas@ku.edu
Abstract: Federal, district, and school-level educational policies influenced the implementation of Collaborative Evaluation Communities (CECs) in three urban schools in one Midwestern state over a three-year period. Discussants compare their CEC experiences in another Midwestern state. Federal policy promoted accessibility of student performance data and created a climate of accountability. This increased the importance of educational evaluation but shifted school district resources to tested subjects, such as mathematics, at the expense of other educational outcomes. Montessori school policies combined students across grades, which created challenges for developing appropriate instructional interventions and assessments but increased collaboration among teachers. School policies of site-based management and data-based decision making supported the development of individualized instructional inquiries among middle school teachers in a gifted and talented school. Evaluation capacity increased at an urban elementary school that was not making adequate yearly progress (AYP) due to school policies that rotated teachers.
Developing Collaborative Evaluation Communities in Public K-12 Schools
Anica G Bowe,  University of Minnesota,  bowe0152@umn.edu
This multipaper session will discuss how education policies have shaped the progress of the Collaborative Evaluation Communities in Urban Schools project (CEC Project) in the St Paul public schools. The CEC project is designed to enhance the evaluation capacity of K-8 schools and develop the evaluation expertise of graduate students in science, technology, engineering, and mathematics education. The CEC project has created school-based collaborative evaluation communities composed of teachers, instructional coaches, graduate students, and university faculty. The goals of the project are to improve the evaluation capacity of urban schools; develop graduate-level educational leaders with the knowledge and skills to evaluate science and mathematics education programs; and develop the evaluation capacity of K-8 teachers. The papers presented in this session will focus on the hierarchical effects of federal, state, district, and school policies on the overall progress of the CEC model in the St Paul, MN public school system.
The Influence of Federal Educational Policies on Creating Collaborative Evaluation Communities
Randi K Nelson,  University of Minnesota,  nelso326@umn.edu
Frances Lawrenz,  University of Minnesota,  lawrenz@umn.edu
The implementation of the Collaborative Evaluation Communities (CEC) project was affected by federal policies, most notably the No Child Left Behind (NCLB) Act. The federal government required its agencies to be accountable through the Government Performance and Results Act and, more recently, through Program Assessment Rating Tools and the Academic Competitiveness Council report favoring randomized controlled evaluation designs. NCLB required states to test all children in mathematics and reading and to have a system for certifying teachers as highly qualified. This produced a climate of accountability and provided large amounts of student data. It also highlighted the academic disparities of ethnic minority students. The policies were a double-edged sword for CEC. Achievement became the key component of program evaluation, often to the exclusion of other valued outcomes. However, the pressure these policies created made schools and teachers receptive to outside help in demonstrating adequate yearly progress and teacher quality.
Navigating NCLB at the District Level
Lesa M Covington Clarkson,  University of Minnesota,  covin005@umn.edu
The No Child Left Behind Act has fundamentally changed the way school districts do business. Data collection, analysis, disaggregation, and dissemination have moved beyond the district level to the school level and, more specifically, to the classroom level. While data analysis is not new to any district, the way in which data motivates change is different. The implications of the results have prompted both proactive and reactive measures, including the reorganization of schools. Because the sanctions of NCLB are progressive, school officials are continually evaluating programs and their impact on student achievement, particularly for students who are underperforming. Thus, evaluation has become a critical component of school district business. Underperformance prompted CEC to focus its attention primarily on mathematics.
The Influence of School-level Policies on Collaborative Evaluation in a Public Montessori Program
Christopher D Desjardins,  University of Minnesota,  desja004@umn.edu
The Collaborative Evaluation Communities (CEC) project has been involved at the Daughter of the Moon school since summer 2007. The Daughter of the Moon is a Montessori public school located in an urban environment. The classrooms are organized into three levels, and students of different age groups work in the same classrooms. The CEC has had to work through a paradox: how best to develop a mathematics program broad enough to cover students who are traditionally in three separate grades and specific enough to focus on each age group's specific needs. The school administration has placed an emphasis on district scores and state standardized tests. These tests are largely absent from the Montessori curriculum, which places an emphasis on manipulatives. The CEC has worked with Daughter of the Moon to develop tests and to train teachers to interpret data from these tests in order to prepare the students.
The Influence of School-level Policies on Collaborative Evaluation in a Public Gifted and Talented Middle School
Herb Struss,  University of Minnesota,  strus010@umn.edu
The CEC project was involved at the district's gifted and talented (grades 1-8) magnet school for two and a half school years. The school had a culture of site-based management, and its policies supported data-based decision making, group work, and individual teacher excellence. The policy of site-based management supported an evaluation of the instructional activities of the elementary teachers to help align the curriculum across the middle grades. The teachers capitalized on the support for data-based decision making by designing individual projects. One teacher collected data on the achievement gap for non-gifted students enrolled at the school. Another teacher designed a theme-based mathematics unit involving technology, gathering student performance data and utilizing it as part of a successful Master's degree effort. A third teacher implemented a teaching strategy change, collected data to evaluate the results, and submitted the project to meet the school's professional development policy.
The Influence of School-level Policies on Collaborative Evaluation in an Urban Public Elementary School
David J Fischer,  University of Minnesota,  fisch413@umn.edu
Banneker Elementary is an urban school with 93% of students receiving free or reduced-price lunch and a minority population of 97%. Policies at the school level had several effects on the implementation of the CEC project at Banneker. Taken chronologically: because the target grade levels were originally chosen for their low test scores and because the needs of their teachers were greater, the CEC at Banneker concentrated early on curriculum implementation as well as supplementation of the curriculum. The policy of keeping students with the same teacher from year to year created early opportunities to expand the project, as teachers now outside the target grades wanted to continue with the project. Because the school did not make Adequate Yearly Progress (AYP), the school's administration placed a greater emphasis on mathematics, and the CEC project was expanded to all grades in the building.

Session Title: Evaluating Math and Science Instruction
Multipaper Session 718 to be held in Room 111 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
James Dugan,  Colorado State University,  jjdugan@cahs.colostate.edu
Results of Field-Based Comparison of Two Widely Used Science and Math Classroom Observation Instruments
Presenter(s):
Martha Henry,  MA Henry Consulting LLC,  mahenry@mahenryconsulting.com
Keith Murray,  MA Henry Consulting LLC,  keithsmurray@mahenryconsulting.com
Abstract: Observation of classroom lessons is an important if problematic component of educational evaluation. Where programs aim at teacher professional development, student performance enhancement, curricular change, or implementation of reform teaching methods, direct observation is crucial to validating and explaining results. A presentation at the 2007 AEA Conference provided a qualitative analysis of the Inside the Classroom Observation and Analytic Protocol and the Reformed Teaching Observation Protocol. These observation instruments exhibit not only similarities but also marked differences in design, focus, documentation, and observer skill requirements. This presentation offers continuing analysis and comparison of these two observational instruments in K-12 math and science classrooms. The authors offer the results of a statistical analysis of an item comparison across instruments based on field observations of 20 teachers in two Math Science Partnerships. Results will inform educational evaluators about instrument differences in practice and about characteristics to consider when selecting classroom observation instrumentation.
Evaluation of the Greater Birmingham Mathematics Partnership: Measuring Teachers’ and Students’ Exposure to Challenging Courses and Curriculum
Presenter(s):
Rachel Cochran,  University of Alabama Birmingham,  danelle@uab.edu
Jason Fulmore,  University of Alabama Birmingham,  jfulmore@uab.edu
Abstract: The Greater Birmingham Mathematics Partnership (GBMP) is an NSF-funded Math and Science Partnership (MSP) located in Birmingham, Alabama. This paper explores the selection and development of GBMP’s evaluation methods and instruments to measure changes in instructional practice and student learning at the middle school level as a result of teachers’ participation in a series of intensive 9-day professional development workshops. The evaluation is guided by GBMP’s unique definition of Challenging Courses and Curriculum (CCC)—deepening mathematical understanding, developing productive disposition, engaging in inquiry and reflection, and communicating mathematical thinking. The paper focuses on the operationalization of CCC and the mapping of items on observation protocols, portfolio and performance assessment rubrics, behavioral checklists, and surveys to each CCC dimension. Relationships among the levels of exposure to CCC in professional development, subsequent delivery of CCC in middle school classrooms, and effects of exposure to CCC on students will be presented.
The Impact of a National Science Foundation-Funded GK-12 Inquiry-Based Science Program: Three years Out and Beyond
Presenter(s):
Susan Henderson,  Wested,  shender@wested.org
Claire Morgan,  Wested,  comorgan@wested.org
Candice Bocala,  Wested,  cbocala@wested.org
Karen Graham,  University of New Hampshire,  karen.graham@unh.edu
Abstract: This presentation describes the methods, collaborative approach, and key findings of a three-year formative evaluation of a National Science Foundation-funded university-schools partnership dedicated to encouraging scientific inquiry in high school classrooms by pairing university graduate STEM fellows with high school teachers and classrooms. The initiative aimed to increase the ability of graduate students to communicate effectively about science and research beyond the university setting and to provide the opportunity for fellows to bring their cutting-edge research and practice into the high school classroom to stimulate student interest in and engagement with the sciences and promote inquiry-based learning practices among teachers. The presentation examines the application of inquiry-based science in the classroom, highlighting the impact of the program on students, teachers, and graduate fellows--particularly the ongoing impact of the initiative on teachers' professional growth and teaching approach after the conclusion of the program.
Data-Based Technical Assistance: A Formative Evaluation Process for Improving Mathematics Instructional Support in Restructuring Urban Elementary Schools
Presenter(s):
David Beer,  University of Chicago,  dwbeer@uchicago.edu
Abstract: Data-Based Technical Assistance (DBTA) is a formative evaluation process designed to provide ongoing feedback to schools and district leadership concerning the Everyday Mathematics Restructured Schools Project, a multi-year effort to implement a standards-based reform mathematics curriculum and improve instruction and learning at ten elementary schools in the Chicago Public Schools (CPS). The schools are undergoing restructuring for failure to meet adequate yearly progress under the No Child Left Behind Act. The project, in its second year, aims to improve mathematics instruction and learning by providing leadership training and support to in-school mathematics coaches, teachers, principals, and other school leaders. DBTA researchers gather data through protocol-driven classroom observations, teacher logs, teacher interviews, mathematics coach interviews, and principal interviews. Feedback and reporting, including recommendations for improvement, are provided at regular intervals at the school level, in Area meetings, in project principal meetings, and in meetings with the CPS mathematics leadership.

Session Title: Evolution of Evaluation Checklists: From Creation to Validation
Demonstration Session 719 to be held in Room 113 in the Convention Center on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Theories of Evaluation TIG
Presenter(s):
Wes Martz,  Kadant Inc,  wes.martz@gmail.com
Daniela Schroeter,  Western Michigan University,  daniela.schroeter@wmich.edu
Abstract: The number of evaluation checklists to guide, facilitate, and improve evaluation practice continues to grow. These checklists contribute to evaluation theory while augmenting evaluation practice. In this demonstration, attendees will be introduced to two new checklists: one for evaluating organizational effectiveness and the other for evaluating sustainability. Both checklists incorporate criteria of merit, specific steps, and strategies that aid in designing and conducting evaluations of organizations, programs, or initiatives. After introducing the checklists, the facilitators will review the processes used to validate them, including feedback from subject matter experts, evaluation practitioners, and field trials. Strengths and weaknesses of the initial checklists will be emphasized and discussed in terms of improvements made as a result of the validation process. The session will conclude with a discussion of lessons learned from the development and validation of both tools.
