Evaluation 2008

Session Title: Systems Oriented Tools and Methods in Public Agency Evaluations: Three Case Studies
Multipaper Session 871 to be held in Centennial Section A on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Systems in Evaluation TIG and the Human Services Evaluation TIG
Chair(s):
Beverly Parsons,  InSites,  beverlyaparsons@aol.com
Discussant(s):
Teresa R Behrens,  WK Kellogg Foundation,  tbehrens@wkkf.org
Abstract: The application of systems theories to evaluation is attracting growing interest in the evaluation community. Systems tools and methods, in particular those from Human Systems Dynamics (HSD), are seen as offering potential solutions to some of the challenging questions faced in evaluations of complex initiatives. Although awareness of the value of introducing systems thinking into evaluation is growing, many evaluators grapple with how to apply systems methods and tools in practice. This session provides three case studies that demonstrate the application of various HSD methods and tools in health care, human services, and education initiatives, where interactions are complex and numerous, diversity permeates the system, and boundaries are unclear. The presenters will discuss the benefits and new insights gained by integrating HSD approaches into the evaluation, as well as the challenges of applying these tools within their specific evaluation contexts.
Using Human Systems Dynamics (HSD) to Evaluate an Initiative to Enhance Interprofessional Collaboration in Health Care
Esther Suter,  Calgary Health Region,  esther.suter@calgaryhealthregion.ca
Glenda Eoyang,  Human Systems Dynamics Institute,  geoyang@hsdinstitute.org
Lois Yellowthunder,  Hennepin County,  lois.yellowthunder@co.hennepin.mn.us
Liz Taylor,  University of Alberta,  liz.taylor@ualberta.ca
HSD methods and tools were used to evaluate an intervention designed to improve professional collaboration across health disciplines. Two tools, the Container-Difference-Exchange (CDE) model and the landscape diagram (which maps organized, self-organizing, and unorganized dynamics), were used to describe emerging patterns within and across participating sites as well as in the project team. The HSD evaluation complemented evaluation components that focused on the processes and outcomes of collaborative practice. The HSD methods and tools highlighted the fluid nature of professional boundaries, shifts in focus over the duration of the project, changing relationships, ongoing transformations in work processes, and the co-evolutionary relationship upon which collaborative practice depends. Using HSD led to a more complete understanding of the changes occurring as a result of the collaborative practice intervention.
Evaluating Change in a Complex System
Royce Holladay,  Human Systems Dynamics Institute,  rholladay@live.com
A large county agency with a five-year plan for significant change, shifting expectations, and looming changes in state and federal funding wanted an evaluation of its progress in implementing the change plan, accompanied by recommendations for moving the change process forward. This evaluation client wanted to know 1) how 'deeply' change had penetrated the organization, 2) how coherent the perception of the change was across the organization, and 3) which individual changes had been attempted and fully implemented. Tools from human systems dynamics supported in-depth questions and analysis that allowed the evaluators to articulate a clear, accurate 'picture' of progress toward the overall change goals, reinforce work that was being done well, and make solid, actionable recommendations about next steps. This presentation describes the underlying theories and assumptions, shares the tools and the rationale for their use, and outlines recommendations for moving forward.
Use of a Systems-Oriented Tool in the Evaluation of an After-School Science Project
Patricia Jessup,  InSites,  pat@pjessup.com
This presentation describes the use of a systems-oriented evaluation that focuses on boundaries, diversity, relationships, and perspectives. The evaluand is an after-school and summer program for students traditionally underrepresented in the fields of science, technology, engineering, and mathematics (STEM). Project goals relate to youth participants' content knowledge, attitudes, interest in and pursuit of STEM subjects and careers, and workplace skills. To investigate the role of broad system factors in shaping the outcomes and processes of the project, InSites developed an adaptation of the CDE (containers, differences, and exchange) model from the Human Systems Dynamics Institute to reveal emerging and unexpected patterns and relationships, changing perspectives, and shifting boundaries. The evaluation results are expected to help education system leaders effectively position informal education alongside formal education offerings.

Session Title: Evaluating Regression Models: Understanding and Using Regression Diagnostics to Improve your Analyses
Demonstration Session 872 to be held in Centennial Section B on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Charles Collins,  Michigan State University,  colli43@msu.edu
Steven Pierce,  Michigan State University,  pierces1@msu.edu
Abstract: Regression models are frequently used to analyze evaluation data. Examining whether the underlying statistical assumptions are met is a critical task for evaluators employing this technique. Using data from a community change initiative, we will demonstrate how to use regression diagnostics to detect and solve problems with regression models. This session will introduce the audience to a variety of graphical and statistical tools for performing regression diagnostics, discuss how to interpret and use the resulting output to detect problems such as non-linearity and non-normality, and show how to identify influential and outlying data points that may distort analysis results. Diagnostics allow evaluators to understand in what ways the data violate the assumptions of regression models, which can then guide decisions about how to address the violations and improve the analysis, thereby yielding more accurate and valid conclusions. A complete example from a real evaluation study will be presented.
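For readers who want a concrete picture of the kind of diagnostics described above, the sketch below uses Python's statsmodels on simulated data; the dataset, variable names, and cutoffs are illustrative assumptions and are not drawn from the session's community change initiative.

```python
# Illustrative regression diagnostics on simulated evaluation data
# (all variable names and thresholds are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical dataset: an outcome score and two predictors.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "outcome": rng.normal(50, 10, 200),
    "dosage": rng.normal(5, 2, 200),
    "baseline": rng.normal(30, 5, 200),
})

X = sm.add_constant(df[["dosage", "baseline"]])
model = sm.OLS(df["outcome"], X).fit()

# Normality of residuals: Q-Q plot.
sm.qqplot(model.resid, line="45")

# Heteroscedasticity: Breusch-Pagan test.
bp_stat, bp_pvalue, _, _ = het_breuschpagan(model.resid, X)

# Influential and outlying cases: Cook's distance and studentized residuals.
influence = model.get_influence()
cooks_d, _ = influence.cooks_distance
outliers = np.where(np.abs(influence.resid_studentized_external) > 3)[0]

print(model.summary())
print("Breusch-Pagan p-value:", bp_pvalue)
print("Cases with largest Cook's distance:", np.argsort(cooks_d)[-5:])
print("Potential outliers (|studentized residual| > 3):", outliers)
```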

Session Title: Perspectives on Evaluation Policy and Use From State Policy Makers in Education
Panel Session 873 to be held in Centennial Section C on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Presidential Strand
Chair(s):
Jody Fitzpatrick,  University of Colorado Denver,  jody.fitzpatrick@cudenver.edu
Discussant(s):
Jody Fitzpatrick,  University of Colorado Denver,  jody.fitzpatrick@cudenver.edu
Abstract: Given the conference theme of evaluation policy and evaluation practice, attendees need to hear from policy makers and program managers who make decisions regarding evaluation policy. These panelists will discuss how their respective organizations make decisions regarding the conduct of evaluation, its implementation, and use. The panelists represent different types of organizations that influence education and its practice in Colorado. They include panelists from the Colorado Department of Education, from Denver Public Schools, and from two influential foundations in Colorado, the Rose Community Foundation and the Piton Foundation. Each will describe the ways in which they and their organizations have approached evaluation and developed policies, formally or informally, regarding its conduct. They will highlight examples of when evaluation has been successful in influencing decisions in education and, from those successes, discuss the types of evaluation and evaluation policies they hope to see emerging in Colorado.
Evaluation Policy and Use in the Colorado Department of Education
Ken Turner,  Colorado Department of Education,  turner_k@cde.state.co.us
Ken Turner, the Deputy Commissioner of Academics at the Colorado Department of Education and a former Superintendent of School District 20 in Colorado, will discuss how the state makes decisions concerning when to conduct evaluations and how to use their results. He will highlight examples of effective evaluations that have influenced educational practice and policy in Colorado and discuss directions he hopes evaluation will take to increase its use.
Evaluation's Role and Use in Education in Colorado and at the Rose Community Foundation
Phil Gonring,  Rose Community Foundation,  pgonring@rcfdenver.org
As senior program officer at Rose Community Foundation, Phil Gonring directs the Foundation's Education area, which oversees programs and grantmaking efforts that emphasize teacher quality. Since 1999, he has led the effort to transform the teacher-compensation system in Denver Public Schools to include a merit pay component. A pilot of that system was evaluated, and the current broader implementation is undergoing evaluation. A former teacher and administrator, Gonring has consulted in schools across the country, including those in Seattle, San Antonio, New York, and Boston, and has published on a variety of education issues, including Pay-for-Performance Teacher Compensation: An Inside View of Denver's ProComp Plan (Harvard Education Press, 2007). He will discuss how the Rose Community Foundation makes decisions regarding evaluation and the characteristics of evaluations they have found to be effective.
Evaluation Policy and Use in the Denver Public School System
Brad Jupp,  Denver Public Schools,  william_jupp@dpsk12.org
As the Senior Academic Policy Advisor in the Denver Public Schools (DPS) and a member of the Superintendent's cabinet, Brad Jupp will discuss how DPS has viewed evaluation and evaluation policy, how evaluations have changed in recent years, and ways he thinks evaluation should develop to become more useful to school districts. Using examples of evaluations that have succeeded and ones that have failed, he will identify attributes he believes are important to successful evaluations and evaluation policies.
Evaluation Policy and Use in the Colorado Education Community
Van Schoales,  The Piton Foundation,  vschoales@piton.org
As the Program Officer for Urban Education at the Piton Foundation, Van Schoales has been involved in evaluation and research projects for the Denver Public Schools and other educational organizations in Colorado. He will discuss how he sees evaluation policies being formed in Colorado and how evaluation is used. Along with other panelists, he will discuss examples of evaluation policies and individual evaluations that have been effective in informing policy makers and school administrators.

Session Title: Research on Program Evaluation Theory and Practice: Overviews and Critiques of Exemplars
Multipaper Session 874 to be held in Centennial Section D on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Melvin Mark,  Pennsylvania State University,  m5m@psu.edu
Discussant(s):
J Bradley Cousins,  University of Ottawa,  bcousins@uottawa.ca
Abstract: In this session, four presenters and a discussant will address issues in conducting conceptual and empirical research on program evaluation theory and practice. The papers will present (a) an overview and analysis of conceptual and empirical research on evaluation theory, (b) an overview and analysis of empirical studies on evaluation practice, (c) a critique of the limitations of empirical research on evaluation use, which has been studied empirically more than any other topic in the evaluation literature, and (d) an overview and discussion of a small body of research on stakeholder participation in evaluation that has been conducted in conjunction with evaluation studies but largely ignored in the evaluation literature. Collectively and individually, these papers move from broad overviews to increasingly focused analysis, and each paper provides a careful assessment of the current status, important limitations, and needed improvements in research on evaluation theory and practice.
Issues in the Conceptual and Empirical Study of Evaluation Theories
Karen Zannini,  Syracuse University,  klzannin@syr.edu
Nick L Smith,  Syracuse University,  nlsmith@syr.edu
Although much writing has been devoted to advocating and critiquing particular evaluation theories or approaches, less attention has been devoted to the difficult problems of conducting research on evaluation theories. This paper provides an analysis of both conceptual and empirical modes of conducting research on evaluation theory, including a review of the methods and findings of selected prior studies. Attention is devoted to addressing crucial problems in this work, such as (a) what exactly is the nature of theory, models, or approaches in evaluation and (b) whether and how conceptual statements of theory can in fact be empirically tested. The paper concludes with an assessment of the benefits and limitations of both the conceptual and empirical approaches to studying evaluation theory, including recommendations for how to improve future research on evaluation theory.
An Analysis of Empirical Studies of Evaluation Practice
Jie Zhang,  Syracuse University,  jzhang08@syr.edu
Nick L Smith,  Syracuse University,  nlsmith@syr.edu
Studies of evaluation practice provide descriptive information on how evaluations are actually conducted and evaluative information on possible improvements. In assessing the current status of research on evaluation practice, this paper first reviews the range of designs and methods that have been used to study practice including practitioner self-reports, meta-evaluations, independent case studies, surveys of practice, and comparative studies of alternatives. Strengths and weaknesses of alternative designs are assessed, as well as their frequency and scope of use. Second, the paper examines four primary issues that have been addressed through studies of practice, reviewing results to date: What is the technical quality of evaluation practice? What is the utility and impact of evaluation? What is the feasibility and effectiveness of alternative evaluation methods? And what is the relevance and utility of alternative evaluation theories or models? The paper closes with a summary of needed improvements in research on evaluation practice.
Conclusions from Research on Evaluation Use: How Strong Are the Methodological Warrants?
Paul R Brandon,  University of Hawaii Manoa,  brandon@hawaii.edu
J Malkeet Singh,  University of Hawaii Manoa,  malkeet@hawaii.edu
Leviton (2003, p. 526) stated that those who conduct research on the use of evaluation findings often accept 'a [low] standard of evidence that many of us would never dream of applying to the conduct of evaluations.' Many of the studies of evaluation use are reflective accounts of individual practitioners' experiences, and too few use strong designs that yield conclusive findings. Furthermore, no studies that we know of have systematically reviewed the large body of studies of evaluation use for their methodological warrants. We address the methodological warrants for the conclusions of research on evaluation use in our paper. We examine the studies identified in reviews of the literature on evaluation use published since about 1980, classify the studies according to their choice of research methodology, and arrive at conclusions about the strength of the warrants for the findings of the research. Leviton, L. C. (2003). Evaluation use: Advances, challenges and applications. American Journal of Evaluation, 24, 525-535.

Session Title: Developing Multicultural Evaluators
Multipaper Session 875 to be held in Centennial Section E on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Charles Glass,  Texas Southern University,  crglass1@juno.com
Examining Culturally Competent Principles Within a Children’s System of Care
Presenter(s):
Jonathan Gunderson,  Mental Health Center of Denver,  jd_gunderson@yahoo.com
Antonio Olmos,  Mental Health Center of Denver,  antonio.olmos@mhcd.org
Abstract: Culturally competent systems of care enable service providers to best meet minority youth and families’ mental health needs; however, the multiple players involved with systems of care bring differing perspectives on culturally competent principles. These different perspectives create uncertainty about what a culturally relevant program looks like. This investigation explores how culturally competent principles, identified in a literature analysis, are being translated into practice by comparing and contrasting the perspectives of several clinicians. Further evaluation will report on how the clinicians’ views compare with the principles found in the literature analysis, as well as contrast them with the views of other players involved, such as youth, parents, teachers, and administrators. The presentation will make recommendations on how to bridge the gap between different perspectives to create a more comprehensive culturally competent system of care.
Preaching to the Choir: Black Evaluators on Cultural Responsiveness
Presenter(s):
Tamara Bertrand Jones,  Florida State University,  tbertrand@fsu.edu
Abstract: Blacks in evaluation have been an untapped research resource. Their professional and personal experiences add another dimension to the evaluation field. Their educational experiences show that they are credentialed and experienced in a variety of areas, including education and psychology. Their voices on cultural competence/responsiveness in evaluation are among those leading the discussion in the field. Their scholarship creates a base from which to draw what we know about culture in evaluation. This research presents the experiences of senior Black evaluators. Specifically, it focuses on defining culturally responsive evaluation, the tools needed to practice culturally responsive evaluation, the role of race in evaluation, and developing more evaluators of color.
Participatory Impact Pathway Analysis: An Approach for Increasing the Cultural Competency of AEA Evaluation Professionals
Presenter(s):
Alice Young-Singleton,  University of Southern California,  youngsin@usc.edu
Abstract: 'Our underlying values and perspectives about both evaluation and the substantive content areas of the programs and organizations we evaluate influence our work—whether we are aware of this influence or not.' This quote supports research suggesting that one needs to examine and acknowledge the cultural, ideological, philosophical, and practical influences that may affect one’s ability to objectively evaluate programs and organizations; further, it invites inquiry into how, and to what degree, these influences inform one’s values, worldview, and approaches to evaluation. My paper argues that approaches to increasing the cultural competencies of evaluation professionals should begin with an introspective examination of one’s cultural, ideological, and philosophical perspectives, followed by an assessment of behavior and the general organizational climate within AEA to determine how they influence efforts both to increase the number of racially/ethnically diverse individuals entering the field of evaluation and to build the cultural competencies of evaluators. Drawing on empirical and theoretical research studies along with my personal experience as an AEA/DU intern, my paper asserts that evaluations conducted using participatory impact pathway analysis enable evaluators to engage stakeholders in a participatory process that ascertains a program’s theory while increasing cultural competency in evaluation practice.
Building Capacity in Culturally Relevant Evaluation: Lessons Learned from a Portfolio of National Science Foundation Awards
Presenter(s):
Darnella Davis,  COSMOS Corporation,  ddavis@cosmoscorp.com
Abstract: On what basis do evaluators decide when, under what circumstances, and by whom cultural relevance should be considered in developing and implementing an evaluation? In its 20th anniversary edition, New Directions for Evaluation (Number 114) counted the coverage of cultural groups among a handful of enduring issues in evaluation. Yet Madison (2007) points out the difficulty of engaging the voices of all stakeholders when conducting responsive evaluations, especially among underrepresented populations. In 2000, the National Science Foundation (NSF) began to fund efforts supporting broadening participation through capacity building in evaluation with a view to improving theory and practice, while supporting more diverse and culturally competent evaluators. The subsequent portfolio covers training and degree programs, internships, and research and model building. This paper discusses the lessons culled from a 12-month study of these awardees’ experiences and situates their accomplishments and reflections within the field of evaluation theory and practice.
A Paradigm Shift for Evaluation Research: Incorporation of Cultural Dynamics in the Evaluation Process
Presenter(s):
Farah A Ibrahim,  University of Colorado Denver,  farah.ibrahim@cudenver.edu
Barbara J Helms,  Education Development Center Inc,  bhelms@edc.org
Abstract: This paper proposes the establishment of a protocol for research and evaluation, useful both in the U.S. and internationally, that would provide the most meaningful data to help improve society. Using APA’s (2002) “Multicultural Guidelines for Research” as a guide covering all aspects of research, from planning and design to assessment and analysis of findings, we propose a process in which evaluators conduct an a priori analysis of their own current beliefs and values and those of the community under study. It is critical that we define and understand the cultural characteristics of all members of the community and understand the interrelationships of the non-dominant and dominant members. The diversity that the research community faces within the U.S. and globally requires a paradigm shift that focuses on collaboration and careful preparation for that collaboration prior to developing the design and the assessment or evaluation methodology.
One Small Step Toward Increasing the Cultural Competencies of Evaluators
Presenter(s):
Jeanne F Zimmer,  University of Minnesota,  zimme285@umn.edu
Abstract: This paper presents one way of increasing the cultural competency of evaluators from a dominant culture. Introducing the concepts of culture and cultural competence early in evaluation coursework and professional trainings is essential to the development of competent practice as defined by the American Evaluation Association’s Guiding Principles. (Shadish, 1995) For those learning evaluation, conducting a Cultural Identity Exercise (CIE) as part of introductory program evaluation classes or trainings would initiate the process of self-awareness as a core competency. With the CIE, learners are asked to write a word or phrase that describes them in response to multiple categories, including: nationality; ethnicity/race; religion; political orientation; sex; gender/sex role; SES/class; generation; vocation/avocation; or other factors pertinent to the evaluation context. (Collier & Thomas, 1988) Introducing this simple exercise with a guided discussion may open the doors to deeper self-awareness and understanding of others, and result in more culturally fluent evaluation practitioners. (LeBaron, 2003)

Session Title: Psyching out the Situation: Cultural Auditing and Environmental Mapping for Collaborative Needs Assessment
Skill-Building Workshop 876 to be held in Centennial Section F on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Needs Assessment TIG
Presenter(s):
James Altschuld,  The Ohio State University,  altschuld.1@osu.edu
Jeffry White,  Ashland University,  jwhite7@ashland.edu
Deborah Kwon,  The Ohio State University,  kwon.59@osu.edu
Jing Zhu,  The Ohio State University,  zhu.119@osu.edu
Abstract: Many funders require collaboration across groups providing services and programs to populations in need. Collaboration is in vogue, yet the needs assessment (NA) literature offers little guidance on how to start a collaborative assessment. This session begins with a didactic introduction to two approaches for working together on needs: Cultural Auditing and Environmental Mapping. After exposure to the techniques, participants will be divided into small teams to work on an environmental mapping exercise for scenarios (aging, education, community services, etc.) supplied by the facilitators. A guided discussion will then draw out what the teams produced.

Session Title: Evaluation Policy for Extension Education
Multipaper Session 877 to be held in Centennial Section G on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Heather Boyd,  Virginia Polytechnic Institute and State University,  hboyd@vt.edu
A Framework for Integrating Extension-Research Activities
Presenter(s):
Rama Radhakrishna,  Pennsylvania State University,  brr100@psu.edu
Abstract: The concept of integrating extension and research dates back to the enactment of the Morrill (1862) and Smith-Lever (1914) Acts. The rationale for this integration was that new research conducted in labs and other facilities at land-grant universities would be transferred into practice via Cooperative Extension. Over the past few decades, U.S. extension and research systems have attempted to work together. However, these two systems maintain separate cultural and organizational identities with varied, but linked, missions (Bennett, 2000). Increased emphasis is being placed on the need for common understanding, expectations, and project language among research and extension faculty. The overall purpose of this project is to develop a framework for integrating extension and research activities. Preliminary results on factors that facilitate and inhibit joint extension-research activities will be shared. In addition, individual and institutional strategies and processes required to implement effective research-extension activities will be discussed.
Programs in Mid-life Crisis: Evaluability Assessment as a Tool for Critical Decision Making
Presenter(s):
Gwen M Willems,  University of Minnesota Extension,  wille002@umn.edu
Mary Marczak,  University of Minnesota,  marcz001@umn.edu
Abstract: As programs age, they need to be evaluated and critical decisions made regarding their futures. Evaluability assessment, originally designed to identify whether or not programs were ready for evaluation, has evolved over time from its original objectives. It can be used today to clarify program theory and logic, provide valuable data for making improvements, and help strategize about important decisions on whether to revise and improve programs or to sunset them. The presenters will define evaluability assessment and discuss its origin and history, evolution, positive attributes, and current use. They will feature a case study in which they used an evaluability assessment to examine an Extension program widely used for many years. In particular, they will discuss the methodology they used, their process and data collection, program staff participation, reporting, and implications for the popular program they examined.
Designing and Implementing Evaluation Policies to Sustain Evaluation Practice in Extension Programs
Presenter(s):
Monica Hargraves,  Cornell Cooperative Extension,  mjh51@cornell.edu
William Trochim,  Cornell University,  wmt1@cornell.edu
Abstract: Seven County Extension Offices in New York State have been actively involved in an “Evaluation Planning Partnership” at Cornell Cooperative Extension. Evaluation Plans for diverse educational programs were completed in 2007 and are being implemented in 2008. This experience with systematic Evaluation Planning forms the background for this pilot study of Extension Evaluation Policy. Insights from their experiences with evaluation planning and implementation will be gathered from Program Staff and Senior Administrators in these Extension Offices. These data will be analyzed using Concept Mapping technology to yield: a taxonomy of Evaluation Policy components; a menu of specific tools and procedures that support evaluation practice; strategies for implementing Evaluation Policies in Extension and other organizations; and an assessment tool for Evaluation Policies themselves. These are essential ingredients in the effort to design Evaluation Policies that promote sustainable and effective Evaluation Practice in Extension and other organizations.
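Concept mapping analyses of this kind typically combine multidimensional scaling of participants' statement-sorting data with cluster analysis. The sketch below illustrates that general approach under that assumption; the sorting data, pile counts, and cluster number are all hypothetical and are not drawn from the Cornell study.

```python
# Sketch of a generic concept-mapping analysis step (hypothetical sorting data),
# assuming the usual combination of multidimensional scaling and clustering.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

n_statements = 20
n_sorters = 10
rng = np.random.default_rng(0)

# Hypothetical sorting data: each sorter assigns every statement to a pile.
piles = rng.integers(0, 5, size=(n_sorters, n_statements))

# Co-occurrence matrix: how often two statements were sorted into the same pile.
co_occurrence = sum((p[:, None] == p[None, :]).astype(int) for p in piles)
dissimilarity = n_sorters - co_occurrence

# Two-dimensional point map via multidimensional scaling.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Hierarchical clustering of the point map into candidate concept clusters.
clusters = fcluster(linkage(coords, method="ward"), t=4, criterion="maxclust")
print(clusters)
```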

Session Title: Methods and Techniques for Analyzing, Measuring, and Valuing the Impact of Intellectual Property Assets: A Focus on Patents Derived From Federal Research Funding
Panel Session 878 to be held in Centennial Section H on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Connie Chang,  Ocean Tomo Federal Services,  cknc2006@gmail.com
Abstract: Over the past three decades, intellectual property (IP) assets (i.e., patents, trademarks, copyrights, and trade secrets) have become an increasingly important component of industrial competitiveness in the world economy. The U.S. government occupies an extraordinarily powerful position within the IP marketplace through creating, managing, acquiring, regulating, issuing, and protecting IP. The billions of dollars the U.S. federal government spends to fund research and develop new technologies have led to the creation of new knowledge, new skills, new working relationships, and new products and services that have contributed to our nation's economic growth. This Panel offers attendees a look at how evaluators have used different methods and techniques to examine and analyze patents for the purpose of telling the story of a technology's trajectory, revealing patterns in technological relationships and the formation of emerging technology clusters, and ascertaining the commercial impact of research funding.
Setting the Stage: Introduction to the Panel and General Overview
Connie Chang,  Ocean Tomo Federal Services,  cknc2006@gmail.com
This presenter will provide an introduction to the panel and a general overview of the methods and techniques for analyzing patents that government agencies have employed for evaluation purposes. During her tenure at the Advanced Technology Program and later at the Technology Administration, she funded study contracts that explored how patents and patent co-citations can be used as forward-looking indicators to reveal emerging technology clusters, and she worked on policy issues related to the measurement of intangibles. She now works for Ocean Tomo Federal Services, a company that provides government clients with innovative technology transfer services to help manage, commercialize, and monetize their intellectual capital, along with analytical tools and methods to track knowledge diffusion, value the quality of knowledge created, and evaluate the economic impact of government-funded technologies.
Evaluating the Impact of the United States Advanced Technology Program: What Can Patents Tell Us?
Ted Allen,  National Institute of Standards and Technology,  ted.allen@nist.gov
The U.S. Advanced Technology Program (ATP) provided funding to 824 high-risk, high-payoff projects between 1990 and 2007. Companies have been granted nearly 1,500 patents based on work performed in ATP-funded projects, and these patents have been cited by nearly 12,000 subsequent patents. The presenter is responsible for collecting data on ATP patents. He will share what ATP has learned from tracking these patents and analyzing their impact.
Identifying Emerging, High-Impact Technological Clusters: An Overview of a Report Prepared for the Technology Administration, United States Department of Commerce
Tony Breitzman,  1790 Analytics,  abreitzman@1790analytics.com
The presenter will share findings from the Emerging Technological Clusters project sponsored by the U.S. Department of Commerce's Technology Administration. The project aimed to validate and further develop a sophisticated methodological tool based on patents, citations, co-citations, and clustering of patents, as well as visualization of inventor locations, that can identify emerging, high-risk, early-stage, technologically innovative activities. Such a tool could provide a greater understanding of how such clusters form; the types of organizations involved; the geographic location of the inventors; the line of research each organization is pursuing; the core technologies being built upon; the technologies that are currently being pursued; and an early indication of potential commercial applications that may result. The data that are captured could provide policymakers with a stronger analytical capacity from which to formulate policy experiments, options, or recommendations for action.
Soup to Nuts: How NASA Technologies Got Transferred to the Marketplace via a Live Intellectual Property Auction
Darryl Mitchell,  National Aeronautics and Space Administration,  darryl.r.mitchell@nasa.gov
The presenter will share his experience in bringing NASA Goddard technologies to the market using Ocean Tomo's Live Intellectual Property Auction as a complementary vehicle to the traditional commercialization activities of technology transfer offices.

Session Title: Best of the Worst Practices: What Every New Evaluator Should Know and Avoid in Evaluation Practice
Panel Session 879 to be held in Mineral Hall Section A on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Katrina Bledsoe,  Walter R McDonald and Associates Inc,  kbledsoe@wrma.com
Discussant(s):
Katrina Bledsoe,  Walter R McDonald and Associates Inc,  kbledsoe@wrma.com
Abstract: Knowledge in the context of evaluation is not limited to the evaluand. Evaluators gain greater insight into perfecting their approaches and building relationships with each project. This presentation highlights the lessons novice evaluators learned in the course of conducting an evaluation. Panel presenters will discuss the lessons learned from their evaluation experiences in the American Evaluation Association/Duquesne University Graduate Education Diversity Internship program. These past interns made corrections to their practice in the areas of honoring stakeholder contributions, recognizing when to seek outside assistance, understanding environmental and situational context, and navigating the dual role of evaluator and advisor. Each of these corrections will be discussed within the context of the American Evaluation Association's Guiding Principles for Evaluators. Presenters will provide specific case examples of the adjustments made in their evaluation practice and offer recommendations geared toward new evaluators and graduate students.
Tales of a Gemini Evaluator: Navigating the Dual Role of Evaluator and Technical Advisor
Dymaneke Mitchell,  National-Louis University,  dymaneke.mitchell@nl.edu
Dymaneke Mitchell currently serves as Assistant Professor of Secondary Education at National-Louis University, Chicago campus. As a member of the AEA/DU Graduate Diversity Internship Program's second cohort, she conducted a yearlong evaluation project involving the establishment and maintenance of an Alabama Arise student chapter at the University of Alabama in Tuscaloosa. Dr Mitchell completed a Ph.D. in Social and Cultural Foundations of Education at the University of Alabama. Specifically related to evaluation, Dr Mitchell is interested in the influences of ableism on evaluative methodologies.
Too Many Irons in the Fire: Honoring the Multiple Perspectives, Roles, Investments and Contributions of Evaluation Stakeholders
Amber Golden,  Florida A&M University,  ambergt@mac.com
Amber Golden completed doctoral studies in Family Relations at Florida State University in Tallahassee, Florida. Dr Golden, a member of the AEA/DU Graduate Diversity Internship Program's second cohort, produced an evaluation design for Communities in Schools of Gaston County as part of her internship. She currently serves as a Visiting Professor and Undergraduate Coordinator in the Psychology Department of Florida A&M University. Dr Golden's current project involves coordinating a comprehensive evaluation of the undergraduate program in Psychology for accreditation purposes at Florida A&M University.
Saving the Sinking Ship: Recognizing When to Solicit Assistance and Support from Others
Roderick L Harris,  Sedgwick County Health Department,  rlharris@sedgwick.gov
Roderick L. Harris is a DrPH candidate in the Department of Behavioral and Community Health Sciences at the Graduate School of Public Health, University of Pittsburgh. In addition to conducting an evaluation of an alternative drop-out prevention program of Communities In Schools Pittsburgh-Allegheny County, his AEA/DU Graduate Diversity Internship Program evaluation project, Mr Harris has worked on evaluation projects involving healthy aging interventions, governmental entities, and educational opportunity programs. These experiences have afforded him the opportunity to compare and contrast those experiences and to appreciate the breadth of the program evaluation field. Currently, Mr. Harris serves as Director of the Center for Health Equity at the Sedgwick County Health Department in Wichita, KS.
Pressure is Only Good for Tires And Coal: Understanding the Environmental and Situational Context for Evaluation Practice
Nia K Davis,  University of New Orleans,  nkdavis@uno.edu
Nia K. Davis is a doctoral student in the Urban Studies PhD program at the University of New Orleans. As a member of the third cohort of the AEA/DU Graduate Diversity Internship Program, Nia conducted an evaluation of the Central City Community Safety Initiative Collaboration in her home town of New Orleans. Post-internship, she has continued the evaluation work of this initiative and is also involved in another evaluation project on parental involvement in education and anti-racism strategies. Most recently, Nia formed Purposeful Solutions LLC, an independent consulting firm specializing in research, evaluation, and program planning, design, and implementation, to further her development as an evaluator.

Session Title: Structural Equation Modeling: The Essential Concepts
Demonstration Session 880 to be held in Mineral Hall Section B on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Karen Larwin,  University of Akron,  drklarwin@yahoo.com
Presenter(s):
Kristi Lekies,  The Ohio State University,  lekies.1@osu.edu
Abstract: Structural equation modeling is an analytical procedure that can determine the degree to which a hypothesized model fits sample data. Its advantages include the ability to include both latent and observed variables and multiple dependent variables, to work with nested models, to isolate the error and variance associated with variables, and to compare competing theoretical models and identify the one that best fits the collected data. This demonstration will provide an overview of structural equation modeling for those who are new to it or have had limited experience with this type of procedure. An explanation of structural equation modeling, its uses and benefits, and its terminology will be discussed, along with the overall process of model specification, testing, and modification. Software programs and examples of helpful resources will also be covered.
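As a companion to the demonstration, below is a minimal sketch of specifying and fitting a simple structural equation model in Python. It assumes the third-party semopy package and uses simulated data; the latent variables, indicators, and structural path are illustrative and not tied to any presenter's example.

```python
# Minimal SEM sketch (hypothetical model and simulated data), assuming the
# third-party semopy package is installed (pip install semopy).
import numpy as np
import pandas as pd
import semopy

# Simulated item-level data: one latent predictor and one latent outcome,
# each measured by three observed indicators.
rng = np.random.default_rng(0)
n = 300
pred = rng.normal(size=n)
outc = 0.6 * pred + rng.normal(scale=0.8, size=n)
data = pd.DataFrame({
    "x1": pred + rng.normal(scale=0.5, size=n),
    "x2": pred + rng.normal(scale=0.5, size=n),
    "x3": pred + rng.normal(scale=0.5, size=n),
    "y1": outc + rng.normal(scale=0.5, size=n),
    "y2": outc + rng.normal(scale=0.5, size=n),
    "y3": outc + rng.normal(scale=0.5, size=n),
})

# lavaan-style syntax: two measurement models plus one structural path.
model_desc = """
predictor =~ x1 + x2 + x3
outcome   =~ y1 + y2 + y3
outcome ~ predictor
"""

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())           # parameter estimates and standard errors
print(semopy.calc_stats(model))  # fit indices (chi-square, CFI, RMSEA, ...)
```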

Session Title: Rigor and Creativity in Methods for Human Service Evaluations
Multipaper Session 881 to be held in Mineral Hall Section C on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Kurt Moore,  Walter R McDonald and Associates Inc,  kmoore@wrma.com
Discussant(s):
Vajeera Dorabawila,  New York State Office of Children and Family Services,  vajeera.dorabawila@ocfs.state.ny.us
Implementing a Randomized, Controlled Study Design in a Community-Based Program Setting: Learning to Balance Service Provision and Robust Science
Presenter(s):
Melanie Martin-Peele,  University of Connecticut Health Center,  peele@uchc.edu
Daria Keyes,  The Village for Families and Children Inc,  dkeyes@villageforchildren.org
Patricia Schmidt,  The Village for Families and Children Inc,  pschmidt@villageforchildren.org
Cheryl Smith,  University of Connecticut Health Center,  casmith@uchc.edu
Lisa Daley,  The Village for Families and Children Inc,  ldaley@villageforchildren.org
Toral Sanghavi,  The Village for Families and Children Inc,  tsanghavi@villageforchildren.org
Abstract: Despite an abundance of published community-based studies and existing guidelines on community-based research and methods, implementing robust study designs in service-oriented community programs can be challenging. This paper describes the authors’ process of changing a federally funded program and evaluation study from a simple comparison design (of two similar programs in different cities) into a randomized, controlled trial that includes safety exclusions and an opt-in/opt-out choice for participants. Providing a detailed description of the decision-making process, which took more than six months, this paper presents the barriers generated by each competing agenda, including those of the program agency’s administration and staff, the funder, and the evaluator. Solutions to these barriers and the reasoning behind each choice are discussed, including mid-course corrections as the new study design was implemented. Recruitment data from the first year of recruitment and data collection are described and analyzed for necessary problem-solving. Finally, suggestions for future evaluation planning and methods are offered.
A Mixed Methods Approach to Evaluating Individualized Services Planning in the Human Services
Presenter(s):
Michel Lahti,  University of Southern Maine,  mlahti@usm.maine.edu
Abstract: This paper will present the findings of a four-year study of an individualized services planning approach, Family Team Meetings, in child welfare and human services. The paper will explore the extent to which quantitative results can help describe the implementation and (perhaps) outcomes of Family Team Meetings and will present initial findings from the study of interaction in a team meeting setting. An argument will be made for mixed-methods designs and for including observation of interaction in evaluations of individualized service planning approaches. Research approaches such as Conversation Analysis focus on the study of interaction, and this paper will present the authors’ experiences in learning and applying this technique.
Impact Evaluation of a National Citizenship Rights Program In Brazil
Presenter(s):
Miguel Fontes,  Johns Hopkins University,  m.fontes@johnsnow.com.br
Fabrízio Pereira,  Brazilian Industrial's Social Services,  fpereira@sesi.org.br
Lorena Vilarins,  Brazilian Industrial's Social Services,  lorena.vilarins@sesi.org.br
Milton Mattos de Souza,  Brazilian Industrial's Social Services,  milton.souza@sesi.org.br
Rodrigo Laro,  John Snow do Brasil,  r.laro@johnsnow.com.br
Abstract: Objectives: In 2007, the Brazilian Industrial’s Social Services (SESI) implemented a Citizenship Rights Event in 34 municipalities, offering low-income populations access to odontological/medical exams, cultural/sports activities, social security/labor identification cards, and professional workshops. The Event reached 1 million individuals in 2007. The objective is to demonstrate the impact of the program. Methods: An ex-ante/ex-post survey was carried out in November 2007. The sampling error for national representation was 2.4% (n=1,570). A Healthy Citizenship Scale, ranging from -65 to 65 points, was generated by combining results from, and attributing specific weight values to, 15 types of services. Results: Scale reliability reached 0.60 (Cronbach's alpha). Differences between ex-ante and ex-post results were found for all 15 scale items (p-value<0.05). The ex-ante average of -2.0 increased to 9.9 ex-post. Increases were observed in all five regions of the country and by gender, age of respondent, and income. Conclusions: The scale is a reliable evaluation tool.
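For readers unfamiliar with the reliability statistic reported above, here is a minimal sketch of computing Cronbach's alpha for a multi-item scale; the item responses below are simulated and are not the SESI survey data.

```python
# Cronbach's alpha for a multi-item scale (simulated, hypothetical item data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example: 200 hypothetical respondents answering 15 items scored 0-5.
rng = np.random.default_rng(0)
responses = rng.integers(0, 6, size=(200, 15)).astype(float)
print(round(cronbach_alpha(responses), 2))
```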

Session Title: Collaboration and Evaluation Capacity Building
Multipaper Session 882 to be held in Mineral Hall Section D on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Randi K Nelson,  University of Minnesota,  nelso326@umn.edu
Discussant(s):
Susan Boser,  Indiana University of Pennsylvania,  sboser@iup.edu
Finding Ways to Build Internal Evaluation Capacity Through External Evaluation Processes: The Case of the New Zealand Education Review Office
Presenter(s):
Carol Mutch,  Education Review Office,  carol.mutch@ero.govt.nz
Abstract: The Education Review Office (ERO) is the agency mandated to review all government-funded schools in New Zealand. Schools are also required to undertake internal evaluation to inform that process. Recent reports indicate that there is wide variability in the quality of schools’ internal evaluation. In 2007, the Chief Review Officer indicated the agency’s commitment to using its expertise and access to build schools’ internal evaluation capacity. This paper reports on the first three phases of the Building Capacity in Evaluation project. The first phase gathered data to determine the current state of internal evaluation in schools. The second phase ensured that the agency’s personnel were brought up to a consistent level of understanding of the synergies between external and internal evaluation. The third phase trialed some key strategies to build this capacity. The fourth phase (not reported here) will focus on implementing and embedding successful strategies from the trial.
Making it Worthwhile: Evaluating Organizational Benefits of Community Collaboratives
Presenter(s):
Branda Nowell,  North Carolina State University,  branda_nowell@ncsu.edu
Pennie Foster-Fishman,  Michigan State University,  fosterfi@msu.edu
Abstract: Community collaboratives are prominent vehicles for improving the community-level response to a particular issue or problem domain. As such, evaluations of community collaboration have focused the bulk of their attention on community and population level outcomes. However, another important outcome of community collaboratives concerns their impact on the organizations and agencies represented as members of the collaborative. Unfortunately, we have less of an understanding of how participation in community collaboratives affects member organizations and agencies. This paper will present qualitative and quantitative findings on organizational impacts from a mixed methods study of 48 domestic violence community collaboratives. Specifically, it will address questions concerning the ways in which organizations and agencies benefit from their involvement, which types of benefits are most prominent, and who benefits most. Implications for conceptualizing and evaluating the effectiveness of community collaboratives will be discussed.
Organizational Learning and Partnerships for International NGOs: An Evaluation Framework for Building Coalitions and Positive Organizational Change to Promote Sustainable Health Outcomes in Developing Countries
Presenter(s):
Stephanie Chamberlin,  International Planned Parenthood Federation,  schamberlin@ippfwhr.org
Laura Ostenso,  Innovation Network,  lostenso@innonet.org
Abstract: This paper will use institutional legitimacy theory in conjunction with community coalition theory to: I. define gaps in the existing evaluation paradigm for the health activities of international development NGOs (INGOs) funded through external aid; II. analyze the space (political, social, and cultural landscape) and capacity for developing and evaluating organizational interventions that contribute to the establishment of coalitions able to improve measurable, long-term health impacts through integrated health systems; and III. provide recommendations for an analytical, ecological framework, based on Intervention Mapping, which can be utilized in the development and evaluation of ecological interventions that address multiple, integrated health problems through improved health systems. Empirical examples of syndemic health issues will be used to illustrate the integrated program and evaluation recommendations in this paper.

Session Title: Evaluating Program Sustainability: Definitions, Methods, and Evaluation Dilemmas
Panel Session 883 to be held in Mineral Hall Section E on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Mary Ann Scheirer,  Scheirer Consulting,  maryann@scheirerconsulting.com
Discussant(s):
Laura Leviton,  Robert Wood Johnson Foundation,  llevito@rwjf.org
Abstract: An innovative perspective for evaluation policy and practice is to use evaluation across the full range of a program or project's life cycle, from initial needs assessment and planning to evaluating the sustainability of programs after their initial funding has closed. This panel will focus on methods for evaluating a potential end stage of the program life cycle: whether the program's activities, benefits, or other outcomes are sustained beyond its initial funding. We will present evaluations illustrating various aspects of the sustainability of their underlying projects. We will discuss and compare a variety of evaluative methods used to collect data about sustainability, including on-line and other types of surveys, an organizational assessment form, and interviews with project staff. Lessons learned will be discussed about both the methods to evaluate sustainability, and what funders might do to foster greater sustainability of their programs.
Operationalizing Sustainability as a Project Outcome: Results from an On-line Survey
Mary Ann Scheirer,  Scheirer Consulting,  maryann@scheirerconsulting.com
Evaluative research for questions of program sustainability has expanded substantially in recent years, but definitional issues remain. This presentation will provide definitions for four different types of sustainability as potential outcomes of health programs. These definitions will be illustrated with descriptive findings from an on-line survey to 'look back' at the extent and types of sustainability that occurred among 48 community-based projects that had received short-term funding from a foundation-funded health program in New Jersey. Large percentages of respondents reported positively on each of four types of sustainability measures: maintaining program activities, continuing to serve substantial numbers of clients, building and sustaining collaborative structures, and maintaining attention to the ideas underlying the projects by disseminating them to others. Strengths and limitations of this methodology for future evaluation will be discussed.
Implications of Organizational Maturation for Evaluating Sustainability
Russell G Schuh,  University of Pittsburgh,  schuh@pitt.edu
The Staging Organizational Capacity (SOC) is an observational protocol, based on a maturity model, designed for assessing the developmental maturity of nonprofit service organizations. Just as Tanner Staging led to greater precision in identifying maturation in children than the crude measure of chronological age, the SOC is providing a more refined understanding of organizational maturation that has potential implications for measuring and evaluating sustainability. Developed for the Small Agency Building Initiative (SABI) of the Robert Wood Johnson Foundation, the SOC identifies a standard set of features that change systematically as organizations mature. The potential influence of maturational patterns and developmental dynamics on the sustainability of initiatives within organizations, and of the organizations themselves, will be discussed.
Where are They Now? Assessing the Sustainability of Foundation Grants
Karen Horsch,  Independent Consultant,  khorsch@comcast.net
This presentation will focus on methodology and lessons learned from conducting evaluations of the sustainability of grant-funded projects of two different health conversion foundations (Endowment for Health and MetroWest Healthcare Foundation). The presentation will provide:
- A brief overview of the two grantmaking organizations and their grantmaking approaches
- The guiding evaluation questions
- An overview of methodological challenges to assessing sustainability and how they were addressed, including:
  o defining sustainability and operationalizing it for evaluation
  o assessing sustainability of different types of projects (service delivery projects, planning efforts, systemic change projects, and those focused on organizational capacity building)
  o determining the appropriate time to assess sustainability
  o creating a culture in grantmaking organizations that supported learning from the experiences of past grants rather than focusing on how many 'good bets' had been made
- A description of methodological approaches, including web-based surveys as well as phone interviews
Assessing Sustainability of the Lifestyle Education for Activity Program (LEAP): Methodology and Lessons Learned
Ruth Saunders,  University of South Carolina,  rsaunders@sc.edu
The Lifestyle Education for Activity Program was a school-based intervention that changed school environment and instructional practices (9th grade PE) to promote physical activity (PA) in girls. In the main study, 45% of girls in intervention versus 36% in control schools reported vigorous PA; schools with higher (n=6) versus lower levels of implementation (n=5) had more active girls (48% versus 40%). The follow-up study used three data sources (PE observation, PE teacher interviews, and 9th grade PE student focus groups) to assess instructional practice and three (PE observation, PE teacher and former LEAP team interviews) to assess environmental elements. We triangulated these results to classify schools as 'implementing' or 'non-implementing' at follow-up. Schools that were 'high implementers' in the main study and 'implementers' at follow-up were defined as 'maintainers'. We discuss our approach to triangulating quantitative and qualitative data sources to assess follow-up implementation and maintenance, and the lessons learned.

Session Title: Evaluations of Professional Development in Education: When 'Leave Them Smiling' Is Not Enough
Panel Session 884 to be held in Mineral Hall Section F on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Julie Morrison,  University of Cincinnati,  julie.morrison@uc.edu
Abstract: Although the evaluation of professional development is a critical component in the delivery of professional development, it is often poorly conceived. Notable improvements in student learning almost never take place without professional development (Guskey, 2000). We spend $5 to $12 billion annually on professional development (USDOE, 2008). This panel aims to help evaluators design high-impact evaluations of professional development. The context for the current emphasis on the professional development of educators is presented in Presentation I. Presentation II provides an overview of Guskey's model for evaluating professional development and how evaluative criteria and determination of merit can be integrated into the model. Presentation III explicates problems inherent in the current focus on participants' reactions to professional development. Presentation IV presents practical guidelines for improving the assessment of participants' reactions and improving evaluations of professional development. The last presentation highlights how new policies might ensure meaningful evaluation of professional development.
Context for Professional Development in Education: Policy Implications
Imelda Castaneda-Emenaker,  University of Cincinnati,  castania@ucmail.uc.edu
Julie Morrison,  University of Cincinnati,  julie.morrison@uc.edu
This presentation explores the historical and legal context for the emphasis on the professional development (PD) of educators. The federal mandates embedded in No Child Left Behind and the long-standing requirement for professional development under Title I provide the context for creating more 'highly qualified' teachers. Compliance with federal, state, and district mandates has led educators to various models of PD, such as training, observation/assessment, development/improvement processes, study groups, inquiry/action research, individually guided activities, and mentoring/coaching. Teacher PD comes at a considerable cost, and it often competes for funding with other programs that must be implemented to comply with the mandates. Although evaluation of professional development is identified as a critical component in its delivery, it is often poorly conceived and receives very limited resource allocations. Future policy must include guidelines for high-impact evaluations of these often expensive, yet essential, PD activities.
Integrating Evaluative Criteria and Merit Determination into the Evaluation of Professional Development
Catherine Maltbie,  University of Cincinnati,  maltbicv@ucmail.uc.edu
Julie Morrison,  University of Cincinnati,  julie.morrison@uc.edu
This presentation integrates Guskey's (2000) model for evaluating professional development with evaluative criteria and merit determination (Davidson, 2006). Guskey proposed a model for evaluating the impact of professional development that comprises five levels: (1) participants' reactions, (2) participants' learning, (3) organization support and change, (4) participants' use of new knowledge and skills, and (5) student learning outcomes. Generally, evaluators apply the program standards of feasibility, propriety, accuracy, and utility to uphold the value of evaluation activity. This presentation discusses how the evaluation program standards might be of special interest, or concern, at each of the five levels in Guskey's model.
Removing the Rose-Colored Glasses: What are Participants' Reactions to Professional Development Really Saying?
Janet Matulis,  University of Cincinnati,  jmatulis@ucmail.uc.edu
Jerry Jordan,  University of Cincinnati,  jerry.jordan@uc.edu
Participants' evaluations of professional development workshops tend to be positive, in spite of wide variations in the quality of these sessions. This presentation, informed by research in social psychology, highlights the significant dangers of misinterpreting data obtained from surveys of participants' reactions (Guskey's Level 1). It discusses the various factors that confound the assessment of participants' reactions and provides evaluators with recommendations for maximizing the value of post-workshop evaluations. Evaluation policy implications for measuring the impact of professional development will be discussed.
Practical Guidelines for Improving the Evaluation of Professional Development in Education
Julie Morrison,  University of Cincinnati,  julie.morrison@uc.edu
Catherine Maltbie,  University of Cincinnati,  maltbicv@ucmail.uc.edu
Critical domains need to be established as standard practice in the evaluation of professional development workshops for educators. Beyond these standard domains, the evaluation should be carefully tailored to the unique professional development experience. The objective of the professional development experience (i.e., awareness, comprehension, application, analysis, synthesis, evaluation) and the expectations for learning (i.e., acquisition, fluency, generalization, and adaptation) need to be clarified. In light of the professional development objectives, various methods for assessing participants' reactions (and learning) will be presented, along with their advantages and disadvantages. The policy implications of improving the accuracy and utility of the evaluations of professional development for educators will be discussed.
The Path Forward: Future Policy for the Evaluation of Professional Development for Educators
Jerry Jordan,  University of Cincinnati,  jerry.jordan@uc.edu
Imelda Castaneda-Emenaker,  University of Cincinnati,  castania@ucmail.uc.edu
Government (and organizational) policies should require the development of meaningful evaluative criteria for the evaluation of professional development. It is imperative that participants' reactions (Level 1) not be misinterpreted as evidence of participants' learning (Level 2), organization support and change (Level 3), participants' use of new knowledge and skills (Level 4), or student learning outcomes (Level 5). Employing evaluative thinking within a traditional professional development evaluation model such as Guskey's could provide a powerful tool for conducting high-impact, meaningful evaluations. Policies for the evaluation of professional development for educators should consider the breadth and depth of evaluation in terms of the issues for consideration and stakeholder participation, as well as provisions for optimal cost allocations.

Session Title: Empowerment Evaluation Training: One Method Isn't Enough (Multiple Perspectives on Building Capacity - Funder, Grantee, and Evaluator)
Multipaper Session 885 to be held in Mineral Hall Section G on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
David Fetterman,  Stanford University,  davidf@stanford.edu
Abstract: This panel provides multiple perspectives concerning building sustainable evaluation infrastructures in nonprofits. The panel begins with the donor's perspective, highlighting the need to build internal sustainable evaluation capacity. It is followed by grantee and evaluator perspectives on the need to build institutionalized evaluation capacity. Specific activities discussed include workshops, technical assistance, peer exchanges, symposia, web-based tools, and certificate programs. A domestic emphasis on these matters is complemented by an international perspective focusing on evaluation training efforts in rural Spain.
One Method Isn't Enough: Building Sustainable Evaluation Structures in Nonprofits
Charles Gasper,  Missouri Foundation for Health,  cgasper@mffh.org
Recognizing that sustained evaluation promotes healthier organizations and programming, a Missouri health foundation chose a multifactor approach to building evaluation capacity, with an eye toward engendering sustainable evaluation in the nonprofits it funds. The support includes: foundation-funded workshops and symposia focusing on the use of evaluation for organizational strategic planning, empowering decision making, and integrating evaluation into standard organizational activities; technical support for internal evaluation by contracted evaluators (one of which will be presenting their format); peer exchanges and symposia for contracted evaluators on techniques to enhance evaluation and the sustainability of internal evaluation; and development of evaluators to support smaller-scale evaluations through partnerships with local educational programs with an evaluation emphasis. The Foundation's experience in orchestrating these efforts will be shared, along with a frank discussion of the impact of integrating the various supports for evaluation.
Implementing a Multi-Modal Approach to Building Capacity
Abbey Small,  Saint Louis University School of Public Health,  asmall1@slu.edu
Amy Sorg,  Saint Louis University School of Public Health,  asorg@slu.edu
In 2004, a Missouri health foundation committed significant funding to establish a nine-year, multi-site initiative to reduce tobacco use in Missouri. As the external evaluators for this initiative, we have made increasing the evaluation capacity of the initiative grantees one of our primary goals. For the past three years, we have implemented a multi-modal approach to achieve this goal, which we have found more effective for reaching people with diverse backgrounds than a single approach. Our capacity building activities include the development and implementation of an annual three-day training institute, creation of an interactive website and message board, and ongoing individualized technical assistance. For this panel, we will outline the details of our approach and how our methods may be applied by other evaluators with similar goals. In addition, we will discuss the benefits and challenges of each component and the overall integration of these methods.
Empowerment Evaluation and the Arkansas Evaluation Center: Building Capacity from Workshops and Certificate Programs to the Creation of Local Evaluation Groups
David Fetterman,  Stanford University,  davidf@stanford.edu
Linda Delaney,  LFD Consulting LLC,  info@lindafdelaney.com
The Arkansas Evaluation Center resulted from tobacco prevention work in Arkansas. In the process of collecting tobacco prevention data, the grantees and evaluator determined that there was a need to build evaluation capacity across the state in areas beyond tobacco prevention. A bill was submitted, passed the House and Senate, and was signed by the Governor of Arkansas. Empowerment evaluation is a guiding principle and approach at the Center, which is housed in the School of Education at the University of Arkansas at Pine Bluff. The Center is responsible for building evaluation capacity throughout the state in the form of lectures, guest speakers, workshops, certificate programs, and conferences. In addition, the Center and the certificate program link with the local evaluation group to facilitate networking, create a natural transition from evaluation training to employment, and enhance socialization into the field.
iGTO: A Web-Based Technology to Build Capacity
Beverly Tremain,  Public Health Consulting LLC,  btremain@publichealthconsulting.net
Matthew Chinman,  RAND Corporation,  chinman@rand.org
Abraham Wandersman,  University of South Carolina,  wandersman@sc.edu
Pamela Imm,  LRADAC,  pimm@lradac.org
In this presentation, developers will discuss the planning, implementation, and evaluation of the interactive Getting to Outcomes (iGTO) tool designed for community coalitions. The research project was set against the backdrop of Strategic Prevention Framework State Incentive Grants (SPF SIG) for Missouri and Tennessee coalitions. iGTO uses web-based technology to automate much of the work involved in successfully answering the 10 accountability questions. The tool was used by two states and over 48 coalitions in the last two years; eventually, one state chose to adopt the GTO model and iGTO tool and one state did not. The presentation team will describe the function of iGTO and its uses for prevention work in documenting outcomes. A major focus of the presentation will be how the system-, coalition-, and individual-level factors studied during the year intertwined to potentially influence adoption in the two states.
A Case Study of How Tools, Training and Intensive Technical Assistance Improved Adoption of an Innovation
Marilyn Ray,  Finger Lakes Law and Social Policy Center Inc,  mlr17@cornell.edu
Gordon Hannah,  Finger Lakes Law and Social Policy Center Inc,  gordonjhannah@gmail.com
Abraham Wandersman,  University of South Carolina,  wandersman@sc.edu
Research has found that training and manuals are generally insufficient to bring about long-term use of an innovation. Based on such findings, Getting To Outcomes™ (GTO™) created a system model that includes developing tools, providing training, a period of intensive technical assistance (TA), and a process for quality improvement/quality assurance. We tested this model in New York after the state enacted legislation requiring all preventive social services to include outcome-based provisions. This paper will discuss a project that employed the GTO system model to help nine county social service agencies in New York plan, implement, evaluate, and contract for preventive programs that would meet the terms of the new statute. The project utilized an Empowerment Evaluation approach to facilitate collaboration between county departments and their contracting partners to achieve outcomes and implement new evaluation policies.
Empowering People, Empowering Organizations: Some Keys to Make a Difference in Empowerment Evaluations
José María Díaz Puente,  Technical University of Madrid,  jm.diazpuente@upm.es
From the Universidad Politécnica de Madrid (Technical University of Madrid), the Research Group on Sustainable Planning and Management of Rural/Local Development has been conducting several evaluations in rural and urban settings of Spain, Ireland, and Latin American countries such as Mexico, Ecuador, and Peru. The goal of this paper is to share the experience of applying empowerment evaluation in these contexts to train people to conduct their own evaluations and to facilitate their empowerment and the empowerment of the organizations in which they are involved. To this end, some conclusions are presented concerning the importance of understanding human nature and behavior, people's relationships with their organizations, and the implications of taking cultural differences (e.g., Latin vs. Anglo-Saxon culture) into account when applying participatory approaches and tools.

Session Title: Considering Ethical Relationships When Conducting Qualitative Evaluations
Multipaper Session 886 to be held in the Agate Room Section B on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Qualitative Methods TIG
Chair(s):
Eric Barela,  Los Angeles Unified School District,  eric.barela@lausd.net
Discussant(s):
Scott Rosas,  Concept Systems Inc,  srosas@conceptsystems.com
Using Multiple Qualitative Methods to Improve Participation and Validity in an Evaluation Involving a Disenfranchised Population of Individuals with Severe Mental Illness
Presenter(s):
Gary Walby,  Ounce of Prevention Fund of Florida,  gwalby@ounce.org
Abstract: Multiple qualitative methods were used in an evaluation of a comprehensive community mental health program serving individuals with severe mental illness. Focus groups, participant observation, content analysis of documents, and semi-structured interviews targeted service delivery processes. These methods were also used to establish outcome consensus for a follow-up quantitative evaluation. Principles of empowerment evaluation (emphasis on improvement, inclusion, democratic participation, and social justice) guided the evaluation. Changes in service delivery resulted during and after the evaluation process. Participants included management, service providers, support staff, service recipients, and multiple community stakeholders. This paper describes the design and implementation of the evaluation, the embedding of qualitative methods in an empowerment evaluation model, and the analysis of qualitative data. Checking back, thick description, and triangulation (reliability and validity assurance procedures used in qualitative research) were successfully adapted to the evaluation to increase the usefulness of results and to encourage change based on evaluation findings.
Exploring the Social and Communicative Processes of Focus Groups: Implications for Evaluation Policy and Practice
Presenter(s):
Melissa Freeman,  University of Georgia,  freeman9@uga.edu
Judith Preissle,  University of Georgia,  jude@uga.edu
Kathryn Roulston,  University of Georgia,  roulston@uga.edu
Steven Havick,  University of Georgia,  havick74@yahoo.com
Abstract: Focus groups have been customarily used by evaluators to gather information to contribute to the data pool for evaluation judgments and decision making. However, the social and dialogical nature of focus group interaction permits participants to appropriate and make use of the situation for purposes other than what the moderator intended. What participants produce in their dialogue may thus be analyzed for participant as well as moderator intentions. This paper first reviews the research on the social and dynamic nature of focus groups. Then, using data from our own evaluation work, we show how participants use the focus group interaction to, for example, forge alliances, resolve old disputes, or clarify positions. We end by considering how a deeper understanding of these communicative and social processes can be integrated into the evaluation design itself.
Ethics in Multisite Case Study Evaluation
Presenter(s):
Judith Preissle,  University of Georgia,  jude@uga.edu
Amy DeGroff,  Centers for Disease Control and Prevention,  adegroff@cdc.gov
Rebecca Glover-Kudon,  University of Georgia,  rebglover@yahoo.com
Jennifer Boehm,  Centers for Disease Control and Prevention,  jboehm@cdc.gov
Abstract: We examine the ethical challenges and implications involved in conducting a multi-site case study evaluation using three frameworks: the U.S. Common Rule, professional standards such as those endorsed by the American Evaluation Association, and selected moral theories. Based on our experience conducting a three-year evaluation of a colorectal cancer screening program funded and evaluated by the Centers for Disease Control and Prevention, we explore methodological decisions regarding whom to interview, what to explore, the role of the evaluators, and how to use the information generated, all of which reflect a range of values and ethical decision making. We also identify how what we are learning from various facets of the screening programs illustrates different priorities in healthcare delivery in the U.S., priorities that reflect distinct moral positions.

Session Title: Models of Funder Evaluation Policy and Practice
Multipaper Session 887 to be held in the Agate Room Section C on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Srik Gopalakrishnan,  The Ball Foundation,  srik@ballfoundation.org
Grantee-Foundation Evaluation Policy: Waltz or Breakdance?
Presenter(s):
Ann McCracken,  The Health Foundation of Greater Cincinnati,  amccracken@healthfoundation.org
Abstract: Ten years ago The Health Foundation of Greater Cincinnati, a conversion foundation, began a dance with regional health nonprofits. Evaluation, like funding, capacity building, and communications, was braided into the granting process to ensure that grantees were successful. In a recent survey of grantees, 90% of the Foundation's start-up and expansion grants continued after funding ended. One grantee noted, “The Foundation’s emphasis on evaluation pushed us toward excellence and accountability—we became more sophisticated in our work.” This session will explore some of the critical policy junctures that determine whether evaluation is “a part of” or “apart from” grantmaking. These junctures include: the carrot or the stick; evaluation or research; foundation or grantee needs; grantee capacity or objectivity; logic models for show or for use; aligning grantmaking and evaluation; and the critical role of evaluation in sustainability.
Integrating Evaluative Inquiry into the Work of Private Foundations: A Case Study of a Hybrid “Retainer" Model
Presenter(s):
William Bickel,  University of Pittsburgh,  bickel@pitt.edu
Jennifer Iriti,  University of Pittsburgh,  jeniriti@yahoo.com
Catherine Nelson,  Independent Consultant,  catawsumb@yahoo.com
Abstract: Evaluation has an important role in supporting foundation learning and knowledge production. Yet foundation policies vary considerably in how evaluative inquiry is operationalized in their organizational routines: some have substantial evaluation departments, some outsource specific evaluation contracts, and many do no evaluation (Bickel, Millett, & Nelson, 2002). This paper describes a hybrid, multi-year “retainer model” wherein a foundation established an ongoing partnership with a university-based research and evaluation project. The goal of the experimental partnership is to provide responsive evaluative resources to the foundation while benefiting from long-term knowledge of the foundation’s organizational routines and priorities. Working on evaluation, strategic planning, and grantee and foundation capacity building functions, the partnership attempts to combine the advantages of inside relationships and knowledge with external objectivity and resource flexibility. The paper describes the nature of the partnership work and the advantages and challenges of this approach to integrating evaluative thinking into foundation practice.
The Role of Evaluation within Foundations: Which Eggs to Put in the Basket?
Presenter(s):
Erin Maher,  Casey Family Programs,  emaher@casey.org
Susan Weisberg,  University of Washington,  weisberg@u.washington.edu
Abstract: Casey Family Programs is a foundation whose mission is to provide, improve, and ultimately prevent the need for foster care. Casey Family Programs is different from the majority of other large foundations in two ways—it is an operating foundation (i.e., it directly conducts activities and provides services that align with its mission) and it houses an internal research and evaluation department. This paper describes the function and purpose of evaluation within the organization and how priorities for evaluation are set in the face of increasing demands for accountability and the information needs of the child welfare field. We also describe the structure of our evaluation unit including the development and maintenance of a Constituency Research Advisory Team (CRAT) consisting of representatives from our key constituent groups: foster care alumni, birth parents, and foster parents. We will provide examples of several evaluation projects to illustrate our priorities, the role of the CRAT in advising our research projects and findings, and the different functions of different types of evaluation within our organization.

Session Title: Local Schools in the Context of No Child Left Behind: The Challenges of Adequate Yearly Progress
Multipaper Session 888 to be held in the Granite Room Section A on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Anane Olatunji,  Fairfax County Public Schools,  aolatunji@fcps.edu
Multi-stage Evaluation Planning: Evaluating No Child Left Behind's (NCLB's) Free Tutoring Program
Presenter(s):
Judith Inazu,  University of Hawaii,  inazu@hawaii.edu
Daniel Anderson,  Planning and Evaluation Inc,  pandeinc@lava.net
Julie Holmes,  University of Hawaii,  jholmes@hawaii.edu
Nancy Marker,  University of Hawaii,  nmarker@hawaii.edu
Aiko Oda,  Planning and Evaluation Inc,  oda@hawaii.edu
Russell Uyeno,  University of Hawaii,  ruyeno@hawaii.edu
Shuquiang Zhang,  University of Hawaii,  szhang@hawaii.edu
Abstract: A three-year process of simultaneously designing and implementing a federally mandated evaluation of NCLB’s Supplemental Educational Services (SES) program is described. The evaluation compares standardized test scores of students who received tutoring with those of students who did not; parental, school, and district satisfaction with tutoring vendors; and vendor compliance with state and federal regulations. Despite a lack of the information needed to adequately plan an evaluation (e.g., sample size, stakeholder cooperation), NCLB required that the evaluation be implemented. Given this dilemma, the evaluators worked with school officials to develop a multi-stage approach such that the evaluation design and activities could evolve and expand over time. Only minimum evaluation requirements were met in the first year due to sparse data and consent issues. In the last year of the project, a final evaluation plan is expected to be in place for use by the client in subsequent years.
Improving the Validity of Academic Performance Accountability Measures of Schools by Adjusting for Student Population Shifts
Presenter(s):
Simeon Slovacek,  California State University at Los Angeles,  sslovac@calstatela.edu
Jonathan Whittinghill,  California State University at Los Angeles,  jwhittinghill@cslanet.calstatela.edu
Abstract: Annual comparisons of academic accountability measures (such as No Child Left Behind Adequate Yearly Progress targets and California’s Academic Performance Index) pose a challenge for schools and districts, particularly when school-level data are used to measure academic change. Most states do not longitudinally track and aggregate individual student-level (value-added) performance for accountability; rather, school-level data are used. Yet virtually all schools experience significant year-to-year student population shifts as new cohorts enter and the highest grade levels move on or graduate. New charter schools especially grow in size (sometimes doubling), and schools may experience high mobility rates. Assessing valid annual progress (rather than population shifts) requires thoughtful adjustments, because repeated Program Improvement status designation results in school takeover or closure, and reauthorization may be at stake for charter schools. The author, an evaluator and a founding school board member, will present issues, examples, and solution formulas.
The Impact of Title I School Choice Program in a Majority Minority School District
Presenter(s):
Kolawole Sunmonu,  Prince George's County Public Schools,  ksunmonu@aol.com
Abstract: One of the key provisions of the No Child Left Behind Act (NCLB) is that parents of students in underperforming Title I schools be offered the choice of transferring their child to a better-performing school within the school district. As with other ‘choice’ programs, the theoretical underpinning of the Title I School Choice option is that overall student achievement will be enhanced because parents will choose to send their children to better-performing schools, while competition and/or sanction will encourage underperforming schools to raise the quality of education offered. Because research focusing on Title I School Choice is in its infancy, no definitive conclusion has been reached regarding the program’s impact on student achievement. Drawing on student achievement data over a four-year period, this study uses a three-group quasi-experimental design to examine the impact of Title I School Choice on student achievement in a low-performing, majority-minority district. Data analyses will be conducted using a 4 (time) x 3 (group) mixed-model ANOVA.
A Local Program in the Context of Federal Policy and Legislation: An evaluation of Supplemental Educational Services (SES)
Presenter(s):
Ranjana Damle,  Albuquerque Public Schools,  damle@aps.edu
Abstract: The federal No Child Left Behind legislation of 2001 incorporates policy elements aimed at supporting schools in achieving rigorous academic standards and aiding the education of disadvantaged children. Its accountability provisions allow tracking of student and school progress, and schools are required to use scientifically researched reading programs to ensure that students read at grade level. The policy also prioritizes parents' knowledge of and involvement in their children's education: parents receive information about their school's performance and, if their school is failing, are empowered to choose school transfer or supplemental educational services. Under the NCLB mandate, when schools do not make adequate yearly progress for three consecutive years, their most disadvantaged students become eligible for Supplemental Educational Services (SES). A major and expensive enterprise has emerged to monitor SES programs in terms of instructional quality and bookkeeping. This paper evaluates an SES program in the context of federal educational policy and legislation.

Session Title: Developing Evaluation Capacity Among Partners and Grantees: Innovative Tools and Approaches - Examples From the Centers for Disease Control and Prevention (CDC)
Panel Session 889 to be held in the Granite Room Section B on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Thomas Chapel,  Centers for Disease Control and Prevention,  tchapel@cdc.gov
Abstract: Achieving the distal outcomes with which public health agencies have been charged requires coordinated efforts among different sectors and participants. The implication for evaluation is that interventions and their evaluations are complex, multilayered, and difficult to carry out. Evaluation challenges can exceed the skills of our partners, who, fortunately, are eager for technical assistance and tools to aid at all stages of evaluation. This presentation discusses the strategies of different CDC programs for developing tools to provide technical assistance to their partners and grantees. Program representatives will briefly describe their situations and how they selected specific tools. Development and implementation of the tools will be discussed, as will how the tools are perceived by the grantees and how they have changed evaluation practice at the partner or grantee level. Lessons from the CDC experience will also be drawn.
Providing Technical Assistance: One Model of a Collaborative Approach
Karen Debrot,  Centers for Disease Control and Prevention,  kdebrot@cdc.gov
Sonal Doshi,  Centers for Disease Control and Prevention,  sdoshi@cdc.gov
Provision of evaluation technical assistance (TA) to grantees often does not take into account the knowledge and skill level of the grantee. Further, it often focuses on the needs of the funder rather than on the capacity of grantees to conduct meaningful and useful evaluation of their own programs. The Centers for Disease Control and Prevention's (CDC's) Division of Adolescent and School Health (DASH) uses a collaborative approach to provide TA to its grantees, one that addresses their knowledge and skill level while also enhancing their capacity to evaluate their programs beyond the level required by DASH. This presentation will cover DASH's protocol for evaluation TA, including the processes for initiating contact with grantees by a team of DASH staff who assist with programmatic and administrative issues.
How the Division of STD Prevention Uses Program Improvement Plans to Redesign Programs
Betty Apt,  Centers for Disease Control and Prevention,  bapt@cdc.gov
Sonal Doshi,  Centers for Disease Control and Prevention,  sdoshi@cdc.gov
CDC's Division of STD Prevention (DSTDP) requires that all of its 65 funded project areas submit morbidity data, which are intended to be used by CDC and the project areas for program planning and monitoring. However, it became evident that many project areas were not using these data to design and monitor their programs and activities. As a result, their efforts were often ineffective, were not suitable for or directed to the appropriate at-risk populations, and did not use limited resources efficiently. To address this problem, DSTDP has made the use of data for program improvement a Division priority. To help accomplish this goal, DSTDP implemented requirements for evidence-based planning and program improvement plans. In this presentation, we will describe the evidence-based planning and program improvement plan tools that DSTDP developed to assist project areas in using their data to design, monitor, and revise their STD prevention programs.
Interactive Web-Based Tutorials: Development and Implementation
Sonal Doshi,  Centers for Disease Control and Prevention,  sdoshi@cdc.gov
Karen Debrot,  Centers for Disease Control and Prevention,  kdebrot@cdc.gov
Many health and education agencies find completing basic program planning activities challenging; however, program planning is fundamental to performing credible program evaluation. The Centers for Disease Control and Prevention's Division of Adolescent and School Health (DASH) provides technical assistance on basic program planning activities as a primary way to increase the evaluation capacity of state and local health and education agencies. Because DASH serves education and health agencies across the United States that have different levels of capacity to plan programs, DASH offers web-based, interactive program planning tutorials. DASH designed, developed, and implemented three tutorials: 1) Goals and SMART Objectives; 2) Logic Model Basics; and 3) Developing Your Own Logic Model. In this session the presenters will discuss the process and challenges of developing and implementing the tutorials, including deciding on topic areas; developing specific content and exercises; designing visually appealing and navigable tutorials; conducting pilot testing; and lessons learned.

Session Title: M&E Training in Other Countries: Adapting Training to Local Cultures
Panel Session 890 to be held in the Granite Room Section C on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Michael Hendricks,  Independent Consultant,  mikehendri@aol.com
Discussant(s):
Michael Hendricks,  Independent Consultant,  mikehendri@aol.com
Abstract: In the international development community, M&E often confronts the dual challenge of 1) employing systems of project design, monitoring, and evaluation, and 2) involving local people in the M&E system. How can M&E training and local capacity building balance these two objectives? This panel discussion will examine this and related issues, drawing upon M&E training experiences from Tibet, Russia-NIS and Australasia, and South Asia. Inherently, M&E training introduces new concepts and practices - a process that can empower local partners, or straitjacket and alienate them. Culturally sensitive approaches to M&E training can foster local understanding and involvement, and ultimately local ownership and program sustainability. An added benefit, as this panel will highlight, is that with innovative, flexible approaches to M&E training that encourage the open sharing of ideas, the M&E trainer often walks away learning as much as the training participants.
M&E Training in Tibet: Adapting to and Learning from Other Cultures
Laura P Luo,  China Agricultural University,  luopan@cau.edu.cn
This presentation will discuss monitoring and evaluation (M&E) training lessons drawn from M&E training conducted in Tibet. The presentation will focus on the diverse, participatory methods used in the training to help build the M&E capacity of local government officials in Tibet. It will also discuss the importance of learning about local culture and tailoring training methods to it when conducting evaluation training. Moreover, the presenter will share with the audience the profound impact that Tibetan culture has had on her life and how it influences her evaluation practice.
Evaluation Training in Russia-NIS and Australasia: Comparisons and Contrasts
Ross Conner,  University of California Irvine,  rfconner@uci.edu
This presentation will focus on evaluation training sessions conducted recently in two regions of the world: Russia and the Newly Independent States (NIS), and Australasia (both New Zealand and Australia). The content of the trainings was generally similar, focused on community-based evaluation approaches that involve collaboration with people from the communities where the program or initiative occurs. The presenter will describe the content of the trainings and the adjustments in emphasis made for each setting; he will then compare and contrast the reactions of the participants and the reasons for differences between and within regions, which stem primarily from the different circumstances in which evaluators operate.
Fun & Games with M&E: Participatory Strategies for M&E Training
Scott Chaplowe,  American Red Cross,  schaplowe@amcrossasia.org
International development often confronts the challenge of employing systems for project design, monitoring, and evaluation while also involving local participation in the M&E process. This presentation will examine how M&E training and local capacity building can balance these two objectives. Local ownership of the M&E system is critical not only for overall project sustainability but also for reliable reporting, as local partners often gather monitoring data. Logic models (logframes), which have become the industry standard for summarizing a project's design and intended results, aptly illustrate some of these issues. At best, they are tools that help project design, monitoring, and evaluation (DM&E); at worst, they can straitjacket a project, imposing an outside, technocentric method that alienates rather than fosters local participation. Drawing from the experience of tsunami recovery projects in South Asia, we will examine these issues and the use of participatory training methods to reinforce local partner understanding of M&E.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Get the Kinks Out: A Pilot Implementation of an Online Student Assessment Tool
Roundtable Presentation 891 to be held in the Quartz Room Section A on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Sarah Bombly,  University of South Florida,  mirlenbr@mail.usf.edu
Diane Kroeger,  University of South Florida,  kroeger@coedu.usf.edu
Bryce Pride,  University of South Florida,  bpride@mail.usf.edu
Abstract: This presentation will focus on lessons learned during the pilot implementation and formative evaluation of an online student assessment program. Purchased by a central Florida school district, the formative assessment program was intended to facilitate student achievement and enhance staff development. Using a collaborative approach (Rodriguez-Campos, 2005), we evaluated the extent to which this product affected curricular decision making, differentiated instruction, student ownership of learning, and the identification of professional development needs. We also analyzed the validity and reliability of the test items. To inform the report, we used focus groups, interviews, surveys, and document analyses. Findings will assist educational decision-makers in using online data to differentiate instruction and determine professional development needs.
Roundtable Rotation II: The Transformative Power of Participatory Evaluation: Facilitating the Growth of a School District From a Traditional Bureaucracy to a Learning Organization
Roundtable Presentation 891 to be held in the Quartz Room Section A on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Beth-Ann Tek,  Brown University,  beth-ann_tek@brown.edu
Ivana Zuliani,  Brown University,  ivana_zuliani@brown.edu
Abstract: Adhering to a policy of utilization-focused evaluation, evaluators used a participatory approach to evaluate a school district’s capacity to systematically improve its schools. As a result, both district staff and evaluators worked collaboratively to develop a multi-level system of improvement based on the principles of rigorous needs assessment, data analysis, and formative evaluation to guide all improvement actions. The district’s refined improvement policy and subsequent approach includes an annual improvement cycle that requires both district and school staff to engage in theory-driven program planning and evaluation. Evaluators will share their experiences facilitating the growth of stakeholders’ knowledge regarding evaluation and the district’s transformation into a learning organization. Evaluators will also share strategies and tools for engaging school-based stakeholders in evaluative processes including question formation and logic modeling.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Building Partnerships: Lessons Learned From Designing and Implementing a Cross-Project Evaluation With K-12, University, and State-Level Mathematics and Science Educators
Roundtable Presentation 892 to be held in the Quartz Room Section B on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Karen Mutch-Jones,  Technical Education Research Centers,  karen_mutch-jones@terc.edu
Polly Hubbard,  Technical Education Research Centers,  polly_hubbard@terc.edu
Abstract: Managing a cross-project, multi-site evaluation can be a challenge, especially when there is substantial variability across projects and uneasiness among project coordinators about participation. Creating structures that unify projects and acknowledge individual differences is critical. Using specific examples from a state-level Mathematics and Science Partnership evaluation, we will describe activities that helped 25 project staff from K-12 schools and universities, key stakeholders, and evaluators to establish collaborative relationships, develop cross-project goals, surface concerns and anxieties, distinguish between formative and summative evaluation responsibilities, and construct and commit to the evaluation process. Strategies for scaffolding consistent and timely data collection and for helping project staff understand and use findings will also be presented. Following the presentation, we will answer questions and provide further detail about activities, strategies, and outcomes upon request. We will facilitate a discussion so participants can share thoughts, experiences, alternative approaches, and ideas for creating successful cross-project evaluations.
Roundtable Rotation II: Using Theory of Change Models When Evaluating Complex Initiatives
Roundtable Presentation 892 to be held in the Quartz Room Section B on Saturday, Nov 8, 1:20 PM to 2:50 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Martha McGuire,  Cathexis Consulting,  martha@cathexisconsulting.ca
Marla Steinberg,  Michael Smith Foundation for Health Research,  marla_steinberg@phac-aspc.gc.ca
Keiko Kuji Shikatani,  Independent Consultant,  kujikeiko@aol.com
Abstract: Programs are becoming more complicated, and long-term results may take decades to appear, making it difficult to determine whether a program will achieve the desired results. Theory of change models are a tool that can help determine whether there is movement toward the intended results. We will present an example of a theory of change model that was developed for the Canadian Action Program for Children and the Canadian Prenatal Nutrition Program. Through facilitated small group discussion we will explore the following questions: What are some examples where theory of change models have been developed as part of the evaluation of other complex programs? How were the theory of change models used in the evaluation? What added value did use of theory of change models bring to the evaluation? What are some pitfalls to avoid?
