Evaluation 2009


In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Building National Evaluation Capacity in Senegal: Lessons Learned and Current Challenges
Roundtable Presentation 451 to be held in the Boardroom on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Ian Hopwood, Independent Consultant, ihopwood@orange.sn
Abstract: There has been increasing activity to build evaluation capacity in both the public and private sectors in Senegal. The presentation will analyze the results obtained and identify lessons learned, drawing upon recent studies and additional insights from the author's experience as former UNICEF HQ Evaluation Chief, UNICEF's Senegal Representative, and now evaluation advocate and member of the Senegalese Evaluation Network. Capacity building has included strategies to promote demand for evaluation as well as measures to meet that demand, including institutional arrangements and incentives, improving evaluation methods and practice, and training and networking. Among the key capacity challenges are the need to evaluate public policies (e.g., the national poverty strategy and implementation of the Paris Declaration on Aid Effectiveness) and how to ensure that evaluation contributes to government performance and accountability in the context of results-based management. Questions for discussion include the appropriate institutional arrangements, including decentralization; how to ensure independence and quality in the context of partnerships; and the role of the international community.
Roundtable Rotation II: Strengthening Evaluator Competencies and Institutional Evaluation Capacity Through Professional Development Programs and Services
Roundtable Presentation 451 to be held in the Boardroom on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Sandiran Premakanthan, Symbiotic International Consulting Services (SICS), sandiran_premakanthan@phac-aspc.gc.ca
Abstract: The Canadian Evaluation Society has led the way in proposing Competencies for Canadian Evaluation Practice, along with the Guidelines for Ethical Conduct and the Program Evaluation Standards, as part of an effort to develop a Professional Designation for Evaluators. The paper presents a review of the professional development programs and services (academic and private-sector service providers) and the institutional arrangements that cater to the needs of the community of evaluators in strengthening their competencies and professional practice. It examines the adequacy of the current professional development programs and services available in the National Capital Region (Ottawa) in meeting the future demand for evaluator training and credentialing. The author also shares his experience of a model from the American Society for Quality (ASQ), which certifies its members and awards professional designations in Quality Management Systems Auditing, and of what is involved in maintaining the designation (recertification) through continuous education, training, and professional development activities.

Session Title: Body of Evidence or Firsthand Experience? Evaluation of Two Concurrent and Overlapping Advocacy Initiatives
Panel Session 452 to be held in Panzacola Section F1 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Carlisle Levine, CARE, clevine@care.org
Abstract: If alleviating global poverty depends on successful pro-poor policies, then CARE, like other international humanitarian organizations, can promote these policies by presenting evidence based on decades of working in more than 60 countries. With Gates Foundation support, CARE is testing this hypothesis via two initiatives. CARE's LIFT UP grant aims to build organizational capacity to more systematically use country-level evidence to influence U.S. policymakers. CARE's Learning Tours grant provides Members of Congress with firsthand experiences aimed at increasing their support for improving maternal health and child nutrition globally. Working with external evaluators Innovation Network and Continuous Progress Strategic Services/Asibey Consulting, CARE is assessing the effectiveness of these approaches. This panel will address the challenges of defining and assessing meaningful interim outcomes and of determining the degree to which these two specific investments have indeed increased CARE's ability to influence policy change, from the perspectives of the two evaluators and CARE.
Evaluating Influence Within the Context of Systems Change: An Evaluator's Perspective
Ehren Reed, Innovation Network Inc, ereed@innonet.org
Veena Pankaj, Ehren Reed (Innovation Network), and Julia Coffman are part of the evaluation team assessing CARE's LIFT UP Initiative. The ultimate goal of LIFT UP is to influence U.S. policymakers through a number of interventions designed to enhance CARE's ability to 'lift up' information from the field to those individuals within CARE who are in direct contact with policymakers. Panelists will discuss the specific tools and strategies used (i.e., systems maps and theories of change) to understand the nuances of the initiative and how these tools and strategies help to define and measure change in the short, intermediate, and long term. Continuous Progress Strategic Services/Asibey Consulting is addressing similar questions for CARE's closely related Learning Tours initiative. This discussion will highlight points for collaboration in addressing how to define and measure change.
Defining and Evaluating Change Agents/Champions: An Evaluator's Perspective
David Devlin-Foltz, Continuous Progress Strategic Services, david.devlin-foltz@aspeninst.org
Edith Asibey, Asibey Consulting, edith@asibey.com
David Devlin-Foltz (Continuous Progress Strategic Services) and Edith Asibey (Asibey Consulting) are jointly responsible for assessing the policy impact of CARE's Learning Tours initiative. Panelists will discuss how tracking current or potential champions for Maternal, Newborn, and Child Health before, during, and after the Tours helps us answer tough questions: Can we define what it means to be a champion for a given policy change? Can we help CARE translate these learnings into improved Learning Tours that do more to deepen the commitment of existing champions or create new ones? Can we define and measure "champion-ness"? Innovation Network is addressing similar questions for CARE's closely related LIFT UP initiative. This discussion will draw on our collaborative search for answers. Our panel will contribute to advocacy evaluation practice around the notion of policy champions: what defines them, how advocacy groups influence or create them, and what policy actions and messages constitute success.
The Evaluation of Two Concurrent and Overlapping Initiatives: CARE's Perspective
Carlisle Levine, CARE, clevine@care.org
Carlisle Levine (CARE) is leading a process of determining how CARE can most effectively leverage its program experience to increase the effectiveness of its advocacy efforts by testing two new and related approaches. To assess the effectiveness of these overlapping approaches, CARE and its external evaluators are defining measures of change and means to assess them. We also recognize the need to demonstrate a link between those changes and the advocacy approaches CARE is testing in order to determine the value of such investments and to help CARE adjust them as needed to increase their effectiveness. We will discuss the challenges of establishing the contribution of these approaches to CARE's advocacy outcomes, as well as the methods CARE and its external evaluators are using to respond to this challenge.

Session Title: Enhancing Evaluation and Evaluation Practice Using Web 2.0
Panel Session 453 to be held in Panzacola Section F2 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Independent Consulting TIG
Chair(s):
Amy Germuth, EvalWorks LLC, agermuth@mindspring.com
Abstract: As technology advances, more tools are becoming available that have the potential to positively change the way evaluators work. In this panel, evaluators will share their professional insights on using Web 2.0 tools to enhance the evaluations they conduct and their own evaluation practices and skills. Panelists will describe ways in which they are using Web 2.0 tools to conduct virtual world evaluations and improve data collection, evaluation management, and client communication. Lessons learned will be shared, and discussions will focus on the pros and cons of these various methods and tools in the context of evaluation. The impact of Web 2.0 tools and resources on evaluation as a discipline and practice will be explored with the intent of identifying further ways that technology can advance evaluation.
Adobe Connect Professional: Maximizing Use of the Web in Evaluation Practice
Bianca Montrosse, University of North Carolina at Greensboro, bmontros@serve.org
As technology advances, more tools are becoming available that have the potential to positively change the way evaluators work. Adobe Connect Professional is one such tool. This web-based communication software has been used for several years in the business and education sectors for training, marketing and sales, and collaboration. This presentation will provide an overview of Adobe Connect Professional, with special attention paid to how it can be used to enhance evaluation practice and the potential benefits of that use. Specific examples of Adobe Connect Professional use, grounded in real evaluations that have been or are currently being conducted, will be provided.
Exploring the Utility of Virtual World Evaluations: Opportunities for Teaching, Training, and Collaboration
Anne Hewitt, Seton Hall University, hewittan@shu.edu
Interactive digital learning environments, known as virtual worlds, are quickly being integrated into contemporary society's educational, professional and social networking options. Virtual worlds have been tailored to meet the needs of individual populations and are being used as platforms for teaching, training, collaboration and evaluation. Evaluations conducted using these virtual platforms can be very economical and often provide real-time feedback. Requirements for "in-world" assessments are being encouraged by technology stakeholders as well as virtual designers, educators and learners. The primary purpose of this session is to explore the various opportunities for evaluation within a virtual world environment. Suggestions for appropriate design and methodologies will be presented along with specific examples from direct applications of virtual world assessments. Lessons learned and best practices for planning and integrating virtual world evaluation will be shared.
It's the Little Things...Making Web 2.0 Work for You: From Desktop to Pocket
Geri Lynn Peak, Two Gems Consulting Services, geri@twogemsconsulting.com
There's a lot of cool stuff on the World Wide Web, more for entertainment than productivity, sure. But that's changing as people dream up new ways to harness the power of interconnectivity. Web technology has taken an evolutionary leap and is now a place to create, innovate, store, manage, interact with and reflect on every aspect of life. More than that, the web has come off the desk and out of the laptop to kick-start our mobile devices. "What's out there, and how can it help a humble evaluator get the most out of their day," you ask? There are tons of useful applications and services, but for me it's the little things: tools that help handle important, mundane tasks are changing everything from how I communicate with clients to how I collect data. And many are free. Want to know more? I'll show some of the best and tell you where to find the rest!
The Wiki Way: Engaging Stakeholders in Meaningful Discussions and Collaborations
Amy Germuth, EvalWorks LLC, agermuth@mindspring.com
A wiki is a webpage or collection of webpages that users may access to contribute to or modify content. Described by the developer of the first wiki software as "the simplest online database that could possibly work," wikis are regularly used to promote knowledge and information exchange, as in the case of Wikipedia, probably the best-known wiki today. The presenter will describe her use of wikis within the context of evaluation to better engage multiple and diverse stakeholders in the exploration of their program and evaluation findings, and to build their evaluation capacity. The author will identify further uses of wikis in evaluation and discuss why collaborative tools are so underused and how to increase their use and value in evaluation.

Session Title: Illustration of Assessment Methodologies and Evaluation Approaches
Multipaper Session 454 to be held in Panzacola Section F3 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Sue Hamann, National Institutes of Health, sue.hamann@nih.gov
Evaluation of an Infrastructure Development Model to Address Substance Abuse and Mental Health Disparities
Presenter(s):
Tabia Henry Akintobi, Morehouse School of Medicine, takintobi@msm.edu
Gail Mattox, Morehouse School of Medicine, gmattox@msm.edu
Eugene Herrington, Morehouse School of Medicine, eherrington@msm.edu
Donoria Evans, Morehouse School of Medicine, devans@msm.edu
Shironda White, Morehouse School of Medicine, swhite@msm.edu
Abstract: The Historically Black Colleges and Universities National Resource Center for Substance Abuse and Mental Health Infrastructure Development (HBCU-NRC), funded by the Substance Abuse and Mental Health Services Administration, supported substance abuse curriculum development, mental health pilot projects, and student career development among 103 HBCUs (2005-2008). This presentation will detail the Morehouse School of Medicine Prevention Research Center HBCU-NRC evaluation approach. Assessment processes included 1) strategic planning among stakeholders to couple measurement expertise with sensitivity to HBCU contexts, 2) translation of abstract program success concepts into measurable objectives, and 3) development of a mixed-method evaluation design. Results demonstrated increases in perceived career skill application, curriculum development, and research collaboration. Data-driven decision making; tri-directional learning among programmatic staff, evaluators, and HBCUs; technical assistance; and built relationships were central to the evaluation of the HBCU-NRC. These practices also supported and sustained creative, campus-specific approaches to addressing substance abuse and mental health disparities.
Designing a Comprehensive 'Menu' to Assess and Monitor the Development of Children With Vision Loss
Presenter(s):
Lorna Escoffery, Escoffery Consulting Collaborative Inc, lorna@escofferyconsulting.com
Marta Pizarro, Health Foundation of South Florida, mpizar4682@aol.com
Abstract: Early intervention programs have been shown to make a difference for many children including those with vision loss. The Blind Babies Program was developed to address the absence of a fee-free comprehensive intervention program and includes identification as well as training and rehabilitation vision services for these children. The program is structured around a five-component cycle (Identify and Enroll; Assessments; Interventions; Outreach, Support, Training; and Evaluation) that aims to provide quality, appropriate services promoting a child's physical, psychological and social development. Program staff and the external evaluator have developed and/or identified a variety of tools and methods to be used during each component and that consider the importance of the external and internal context for client and program improvement.
Multiple Contexts, Challenges, and Opportunities of a Rapid Needs Assessment of a Large Health System Using the Environmental Scan Approach
Presenter(s):
Boris Volkov, University of North Dakota, bvolkov@medicine.nodak.edu
Abstract: This paper examines challenges and opportunities of using the environmental scan (ES) approach to provide information about a state-wide health system. The lessons come from the Environmental Scan of Health and Health Care in North Dakota project that was designed as a rapid needs assessment process to help inform decision making and enhance knowledge about the health arena in North Dakota. The specific purposes of the ES were to: 1) promote collaboration of diverse stakeholders in the health arena by enhancing knowledge about health and health care issues and key health initiatives underway in North Dakota; and 2) through the use of relevant health and health care metrics, inform the development of effective interventions to address identified challenges. This paper illustrates how ES can be used to assess and describe the current state of a large health system to establish the baselines for positive health transformation.
Finding Promising Practices: An Evaluability Assessment Approach
Presenter(s):
Aisha Tucker-Brown, Northrop Grumman, atuckerbrown@cdc.gov
Susan Ladd, Centers for Disease Control and Prevention, sladd@cdc.gov
Rachel Barron-Simpson, Centers for Disease Control and Prevention, rbarronsimpson@cdc.gov
Nicola Dawkins, ICF Macro, nicola.u.dawkins@macrointernational.com
Deborah R Brome, GEARS Inc, dbrome@gettingeaar.com
Abstract: Evaluability assessments (EA) can be used to guide investments in evaluations that are likely to lead to conclusive, useful results. EA allows evaluators to pre-assess a program's context, data collection system, apparent outcomes, and readiness for evaluation before embarking on a costly, full-scale evaluation. The Centers for Disease Control and Prevention, Division for Heart Disease and Stroke Prevention (DHDSP) conducted EAs to identify promising practices in state health departments, worksites, and hospitals. We first discuss prior methods DHDSP used to build 'practice-based evidence' and describe two projects that build on these past experiences, one focused on hypertension and the other on reaching disparate populations. EAs conducted on interventions for these projects also provided feedback to programs on how to develop and strengthen their evaluation capacity. The EA approach allowed DHDSP to be efficient and focused in selecting interventions for rigorous evaluation. Presenters will highlight the context in which these evaluability assessments were conducted, our EA protocol, and lessons learned.

Session Title: Systems Thinking and Logic Models: Two Sides to the Same Coin?
Think Tank Session 455 to be held in Panzacola Section F4 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Systems in Evaluation TIG and the Program Theory and Theory-driven Evaluation TIG
Presenter(s):
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@stanfordalumni.org
Discussant(s):
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@stanfordalumni.org
Margaret Hargreaves, Mathematica Policy Research Inc, mhargreaves@mathematica-mpr.com
Abstract: "A picture is worth a thousand words" - how often do we hear that? Approaches to modeling a system, program, or intervention abound as do the inevitable debates over which is best. This think tank challenges participants to think of systems and logic models as complementary tools for evaluation. Examples of both types of models, as well as an example of an integrated model, will be provided and used as the basis for group discussion. Questions that will be considered include: What are the strengths and weaknesses of each? How can the two approaches to modeling be integrated to provide a framework for understanding both the program and the system in which it functions? Ultimately, it is hoped that participants will come away with new ideas about how to capture critical elements of process, context, and program theory in their own work through the use of both types of models.

Session Title: Consequences of Evaluation: The Power of Process Use
Multipaper Session 456 to be held in Panzacola Section G1 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Evaluation Use TIG, the Collaborative, Participatory & Empowerment Evaluation TIG, and the Government Evaluation TIG
Chair(s):
Jacqueline Stillisano, Texas A&M University, jstillisano@tamu.edu
Discussant(s):
Helene Jennings, ICF Macro, helene.p.jennings@macrointernational.com
Towards the Measurement of Process Use
Presenter(s):
Lennise Baptiste, Kent State University, lbaptist@kent.edu
Abstract: In his 2008 expanded definition of process use, M.Q. Patton suggested that evaluators could look at areas such as increased evaluation capacity; the infusion of evaluation activities; and the revision, clarification, or conceptualization of goals, logic models, program priorities, and outcomes measurement to understand the type of learning acquired by participants in an evaluation. The researcher presents the findings of a study in which Q-methodology was employed to capture the different types of process use that emerged in three evaluation contexts. The development of the tool will be described. Such a tool can be employed by evaluators who wish to describe what participation in the evaluation illuminated about the stakeholders and the programs beyond the evaluation findings.
Investigating the Relationship Between Process Use and Use of Evaluation Findings in a Government Context
Presenter(s):
Courtney Amo, National Research Council Canada, courtneyamo@hotmail.com
J Bradley Cousins, University of Ottawa, cjpe@uottawa.ca
Abstract: Despite support for evaluation utilization being facilitated by stakeholder engagement, evaluation practice in government contexts may not be conducive to such engagement. This paper presents the results of a mixed-methods study that explores the extent to which process use is manifest within government; the conditions/factors that enhance process use; and the relationship between process use and use of evaluation findings. Through the use of data collected through a larger study on evaluation capacity building, this study supports the notion that process use - or the consequences of involvement in (or proximity to) evaluation processes - is an important predictor of findings use in government. However, not all consequences of involvement in evaluation processes exhibit the same explanatory strength. This study highlights the importance of timely, higher-level engagement as well as the importance of organizational learning capacity and conditions mediating evaluation use in setting the stage for process use to occur.

Session Title: Needs Assessment TIG Business Meeting and Presentation: Pesky Issues in Needs Assessment and a Generic Model for Conducting Assessments
Business Meeting Session 457 to be held in Panzacola Section G2 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Needs Assessment TIG
TIG Leader(s):
Ann Del Vecchio, Alpha Assessment Associates, delvecchio.nm@comcast.net
Hsin-Ling Hung, University of Cincinnati, hunghg@ucmail.uc.edu
Jeffry White, Ashland University, jwsrc1997@aol.com
Yi-Fang Lee, National Chi Nan University, ivanalee@ncnu.edu.tw
Janet Matulis, University of Cincinnati, janet.matulis@uc.edu
Presenter(s):
James Altschuld, The Ohio State University, altschuld.1@osu.edu
Abstract: Many issues in needs assessment are often glossed over in practice as well as in the literature. Examples include: defining standards or 'what should be's'; separating needs from wants; dealing with missing data when double or triple scaling is used; combining data from mixed methods into a coherent picture of needs; how to meaningfully involve multiple constituencies in the assessment; implementing assessments that are neither too broad nor too narrow; getting needs assessment results utilized for the good of organizations; and so forth. A sampling of these problems will be described, followed by a brief discussion of a generic model for assessing needs. Attendees will have opportunities to cite problems they have encountered and possible solution strategies for overcoming them.

Session Title: Independent Consulting: Selected Issues and Strategies
Multipaper Session 458 to be held in Panzacola Section H1 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Independent Consulting TIG
Chair(s):
Pat Yee, Vital Research, patyee@vitalresearch.com
Discussant(s):
Susan Wolfe, Fort Worth Independent School District, susan.wolfe@fwisd.org
Beyond Freelancing: Building a Lasting Evaluation Business
Presenter(s):
Leah Goldstein Moses, The Improve Group, leah@theimprovegroup.com
Abstract: With increased attention to accountability and results, evaluation will likely be a growing industry in the coming years. The presenter will describe her experiences building an evaluation firm, changes over the last decade in the field of evaluation as practiced by consultants, and emerging trends that will affect the business of evaluation in the coming year. The session will answer the following key questions:
- How do you build a pipeline of new projects while honoring existing commitments?
- How do you balance expenses so that you have adequate resources to support your work, without increasing overhead costs that are then passed on to clients?
- What opportunities are there for collaborating with evaluators in other settings, such as academia, government, and internal evaluators at nonprofits and foundations?
- What risks and opportunities increase as your business grows?
- Who are your competitors, and how should you prepare to compete?
Sharing Evaluation Information across Programs: Providing a Larger Picture of Some of the Issues and Challenges Stakeholders Face
Presenter(s):
Kathryn Race, Race & Associates Ltd, race_associates@msn.com
Abstract: Based on an exemplar case study approach involving multi-year evaluations, this paper will highlight the benefits of sharing evaluation information across programs when appropriate and when clients agree to this arrangement. Evaluations of two separate science literacy programs are highlighted, both conducted by informal science institutions: one directed toward pre-service teachers and the other toward experienced elementary public school teachers. One program offers structured summer courses in science content and pedagogy, while the other offers professional development opportunities and courses throughout the academic year and summer. Pre-program assessment of participants, however, showed that both pre-service and experienced teachers struggled with their confidence in teaching science, with addressing inquiry- and reform-based instructional strategies, and with reluctance to abandon more traditional teaching methods. Sharing similar data patterns across programs helped both institutions better understand some of the issues and challenges they face in fostering reform-based instructional strategies in science.
When Non-Monetary Benefits Outweigh Cost: Choosing a Pro-bono Project
Presenter(s):
Meghan Lowery, Southern Illinois University at Carbondale, mrlowery@siu.edu
Nicholas G Hoffman, Southern Illinois University at Carbondale, nghoff@siu.edu
Abstract: Decisions to take on pro-bono work can be made for many reasons. The importance of occasionally doing pro-bono work will be discussed, along with the non-monetary benefits that can be gained. The presentation will discuss previous pro-bono projects and the equivalent compensation gained from participating in these projects. Applied Research Consultants (ARC) is a graduate-student-run consulting firm at Southern Illinois University Carbondale. ARC takes on pro-bono projects for the training benefits afforded by such projects. This presentation will also address how evaluators can make sure pro-bono projects do not overrun paid project time.

Session Title: Quantifying the Evidence for Psychotherapy: The German Way to Quality Control
Panel Session 459 to be held in Panzacola Section H2 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Lee Sechrest, University of Arizona, sechrest@u.arizona.edu
Discussant(s):
Fred L Newman, Florida International University, newmanf@fiu.edu
Abstract: Psychotherapy in Germany, whether ambulatory or stationary, has come under the scrutiny of quality control. Our Mannheim research group was directly involved in developing the evaluation plans and reporting on the outcomes of three different large-scale projects. The first was a meta-analysis of stationary psychotherapy for psychosomatic patients; the second, a place-randomized trial evaluating a computerized feedback tool that maps the progress of individual patients, financed by a health care insurance company; and the third, a similar system sponsored by the Bavarian Association of Compulsory Health Insurance Physicians (KVB). The panel intends to demonstrate the quantitative methodological tools our group implemented in these different projects and the benefits that result from applying them.
Quality Assurance in Ambulatory Psychotherapy: Designs, Tools and First Results
Andrés Steffanowski, University of Mannheim, andres@steffanowski.de
Werner W Wittman, University of Mannheim, wittmann@tnt.psychologie.uni-mannheim.de
Client and therapist document the therapy process using handheld computers by answering questions about symptom severity (e.g., depression, anxiety, and stress), life satisfaction, the therapeutic relationship, and problem domains. The encrypted data are sent to the University of Mannheim via the Internet, where they are analyzed by specific software. The KVB provides the documentation software and handheld computers for a sample of 200 psychotherapists. To date, more than 1,500 patients are participating in the prospective naturalistic study, and more than 250 patients have completed their therapy. The first 1-year follow-up measurements are now being conducted. For outcome evaluation, an overall index of outcome quality is computed by aggregating single pre-post measures into a multiple outcome criterion. Of the 1,500 patients at intake, 77% are female; mean age is 40 years (SD = 12). About 47% suffer from depressive disorders, followed by anxiety disorders (19%). Outcome results for the short-term therapies completed so far (N = 250) demonstrate impressive effect sizes (Cohen's d > 1.0) on a multiple outcome criterion. Electronic documentation is well accepted by most of the participating therapists. The encrypted, computer-based documentation is a secure and convenient approach to improving transparency for therapists and patients. It provides useful information for therapy process optimization and for outcome documentation of therapy results.
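As a rough illustration of the aggregation step described above (per-domain pre-post effect sizes combined into a single multiple outcome criterion), the sketch below shows one plausible way to compute it. It is not the Mannheim group's actual scoring software; the equal weighting of outcome domains, the higher-is-better scoring, and all sample numbers are assumptions.

```python
# Illustrative sketch only: per-domain pre-post effect sizes (Cohen's d)
# averaged into one multiple outcome criterion. Equal weights and
# higher-is-better scoring are assumptions, not details from the project.
import math
import statistics

def pre_post_effect_size(pre_scores, post_scores):
    """Cohen's d for pre-post change: mean improvement / pooled SD."""
    mean_change = statistics.mean(post_scores) - statistics.mean(pre_scores)
    pooled_sd = math.sqrt(
        (statistics.variance(pre_scores) + statistics.variance(post_scores)) / 2
    )
    return mean_change / pooled_sd

def multiple_outcome_criterion(domains):
    """Aggregate per-domain effect sizes into a single index (equal weights)."""
    return statistics.mean(pre_post_effect_size(pre, post) for pre, post in domains)

# Hypothetical intake/completion scores for two outcome domains:
domains = [
    ([2.1, 1.8, 2.5, 2.0], [3.0, 2.9, 3.4, 3.1]),  # life satisfaction
    ([1.0, 1.4, 0.9, 1.2], [2.2, 2.0, 1.8, 2.4]),  # symptom relief (reverse-scored)
]
print(round(multiple_outcome_criterion(domains), 2))
```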
Monitoring Quality in Ambulatory Psychotherapy Using a Place Randomized Trial: The TK project
Manuel Voelkle, Max Planck Institute of Human Development, voelkle@mpib-berlin.mpg.de
Andrés Steffanowski, University of Mannheim, andres@steffanowski.de
Werner W Wittmann, University of Mannheim, wittmann@tnt.psychologie.uni-mannheim.de
The design and goals of this project were presented at AEA 2008 in Denver. This time we will give more details and information about the first results and about the consequences one has to consider, given that the intraclass coefficients changed substantially from the beginning to the end of therapy. It seems that psychotherapy leads not only to mean changes in outcome assessments, but also to greater similarity of patient-derived data within a psychotherapy unit. Whether this phenomenon is an artifact or a positive therapy result, i.e., that patients take over parts of the therapist's worldview, will be discussed.
Meta-analysis of Stationary Psychotherapy (MESTA) for Psychosomatic Patients: How to Fool Yourself in Not Considering Opportunity Costs
Werner W Wittmann, University of Mannheim, wittmann@tnt.psychologie.uni-mannheim.de
Andrés Steffanowski, University of Mannheim, andres@steffanowski.de
This meta-analysis encompassed 65 different studies in which over 25,000 patients had been treated in the German rehabilitation system. The effect sizes were computed immediately after the end of treatment and again one year later. The effect sizes varied between diagnosis groups; patients with a primary diagnosis of depression had the largest effect sizes (Cohen's d = .76 one year after treatment). The studies were conducted over the last 30 years. After financial problems in the health system, the dosage, in terms of number of days treated, was substantially reduced. The dosage level turned out to be the most important moderator of the effect sizes, with lower dosage leading to lower effects. It can be demonstrated that the smaller effect sizes lead to substantial opportunity costs: the money saved via the reduction of treatment dosage was substantially smaller than the money lost due to smaller effects. Without the evidence given by this meta-analysis, the opportunity costs would not have become visible.
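To make the opportunity-cost argument concrete, a back-of-the-envelope calculation of the kind implied above might look like the sketch below. All monetary figures and the reduced-dose effect size are invented placeholders, not results from the MESTA study; only the d = .76 figure comes from the abstract.

```python
# Back-of-the-envelope opportunity-cost arithmetic. All figures except the
# reported d = .76 are hypothetical placeholders, not MESTA results.
days_saved_per_patient = 7            # assumed reduction in treatment dosage (days)
cost_per_treatment_day = 150.0        # assumed cost saved per day not treated (EUR)
effect_size_full_dose = 0.76          # Cohen's d reported for depression at one year
effect_size_reduced_dose = 0.55       # assumed effect size under reduced dosage
value_per_effect_size_unit = 4000.0   # assumed value to society of one d unit (EUR)

savings = days_saved_per_patient * cost_per_treatment_day
lost_benefit = (effect_size_full_dose - effect_size_reduced_dose) * value_per_effect_size_unit
net_opportunity_cost = lost_benefit - savings  # positive => the cuts cost more than they save

print(f"Saved per patient:        {savings:.0f} EUR")
print(f"Benefit lost per patient: {lost_benefit:.0f} EUR")
print(f"Net opportunity cost:     {net_opportunity_cost:.0f} EUR")
```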

Session Title: International Perspectives in Evaluation in Higher Education
Multipaper Session 460 to be held in Panzacola Section H3 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Assessment in Higher Education TIG and the International and Cross-cultural Evaluation TIG
Chair(s):
Courtney Brown, Indiana University, coubrown@indiana.edu
Discussant(s):
George Reinhart, University of Maryland, greinhart@casl.umd.edu
The Impact of State Educational Standards' Context on Methods and Instruments of Evaluation
Presenter(s):
Victor Zvonnikov, State University of Management, zvonnikov@mail.ru
Marina Chelyshkova, State University of Management, mchelyshkova@mail.ru
Abstract: Russia's joining the common European educational area prompted the introduction of a new generation of State Educational Standards. The new Standards set requirements for educational outcomes rather than requirements for educational content, and these are described in terms of a competence-based approach to educational quality assessment. The purpose of our research was to analyze the changes in the assessment of university graduates brought about in the context of the competence-based approach. The information for the research was obtained from questionnaires given to teachers at 400 higher education institutions engaged in preparing graduates in the field of management. The analysis shows that the changes concerned methodological approaches to evaluation, the dimensionality of measurement, types of instruments, item formats, factors influencing educational quality, and scaling methods. In our paper we present the changes that have taken place in assessment as a result of the new context of the Standards.
Development and Application of the Meta-evaluation Standards to Evaluate Report Results of Internal Quality Assessment for Higher Education Institutions in Thailand
Presenter(s):
Sirinthorn Sinjindawong, Chulalongkorn University, sirinthorn.sin@gmail.com
Nuttaporn Lawthong, Chulalongkorn University, lawthong_n@hotmail.com
Sirichai Kanjanawasee, Chulalongkorn University, skanjanawasee@hotmail.com
Abstract: The purposes of this study were: 1) to develop meta-evaluation standards for the evaluation of internal quality assessment in Thai higher education institutions; 2) to validate the meta-evaluation standards for evaluating report results of internal quality assessment for higher education institutions, using external meta-evaluators; and 3) to apply the meta-evaluation standards in evaluating report results of internal quality assessment for higher education institutions. The data were collected from internal quality assessment reports, external assessors, and meta-evaluators. Three kinds of instruments were used: a meta-evaluation checklist, a meta-evaluation manual, and a meta-evaluator training curriculum. The results will benefit higher education institutions by providing a guideline to improve their quality and to ensure that the findings from meta-evaluators are efficient and effective.

Session Title: The Internal and External Context of Evaluation in the Non-profit Sector: Can Evaluation Capacity Building Help Non-profits Move From Accountability to Organizational Learning?
Panel Session 461 to be held in Panzacola Section H4 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Joanne Carman, University of North Carolina at Charlotte, jgcarman@uncc.edu
Abstract: The purpose of this panel is to highlight the unique context of evaluation in the nonprofit sector and explore the extent to which specific stakeholder groups have an effect on the evaluation expectations and requirements of nonprofit organizations. In this panel, we bring together the authors of the most recent empirical research about the evaluation practices of nonprofit organizations. Our first panelist, Deena Murphy, will discuss the specific organizational characteristics that are associated with using evaluation as an organizational learning tool, as opposed to an external accountability tool. Our second panelist, Sal Alaimo, will discuss the important role of the nonprofit's board of directors. Our third panelist, Laura Pejsa, will present case study research which highlights the extent to which the evaluation requirements of funders tend to drive evaluation capacity building efforts. Our final panelist, Joanne Carman, will explore the role of nonprofit accrediting bodies.
Context matters: Why it is Critical to Understand the Unique Context for Evaluation in the Nonprofit Sector
Deena Murphy, National Development and Research Institutes Inc, murphy@ndri-nc.org
Research on the context of evaluation in the nonprofit sector suggests that nonprofits remain primarily focused on collecting evaluation data to be used for accountability purposes. Emerging research suggests that organizational characteristics significantly influence the extent to which evaluation findings are used to support organizational learning rather than simply being a tool for accountability. Deena Murphy will draw on data gathered from 283 nonprofits across North Carolina, as well as her ongoing work in evaluation capacity building with substance abuse and mental health treatment organizations, to examine the changing context for evaluation in the nonprofit sector and highlight the implications for evaluation capacity building interventions. Specifically, she will address: What is the context for evaluation in the nonprofit sector and how is this changing? Who is engaged in the evaluation process and why is this important? What would help nonprofits conduct and use evaluation more effectively?
The Role of the Board of Directors in Setting the Context for Program Evaluation Through Evaluation Capacity Building (ECB)
Salvatore Alaimo, Indiana University, salaimo@comcast.net
The increasing call for accountability and competition for resources challenge nonprofit organizations to respond to the external pull from funders, government agencies, and accrediting bodies while developing an intrinsically motivated internal push to build long-term capacity to evaluate their programs. The Board of Directors is ultimately accountable for the organization and for fulfilling its duty of obedience, duty of care, and duty of loyalty. Salvatore Alaimo will draw on his qualitative study of one-on-one interviews with 20 board chairs and their executive directors, and on two case studies of nonprofit human service organizations, to begin to address the following questions:
- How is the capacity to evaluate nonprofit programs affected by the role of the board?
- Within that role, what motivates the board to engage in and/or support program evaluation? How do their motivations affect capacity?
- What specific actions have boards taken to be successful in ECB?
Performance and Quality Improvement Standards: Meaningful Change or Masquerade?
Joanne Carman, University of North Carolina at Charlotte, jgcarman@uncc.edu
In recent years, researchers have observed that major accrediting bodies are requiring nonprofit organizations to demonstrate that they have plans to monitor and improve the quality of the services they provide. Moreover, empirical research has indeed suggested that, for some nonprofit organizations, meeting these external expectations and standards is the primary reason for their evaluation activity. In this session, Joanne Carman will explore the following questions: 1) What types of nonprofit organizations seek accreditation, and why? 2) What are the accrediting standards related to evaluation and performance measurement? 3) In what ways do these standards affect evaluation practice? 4) What do nonprofit leaders think about these requirements? Are they useful? Or are organizations just going through the motions? The session will conclude with examples of best practices from organizations that have used these standards to promote evaluation capacity building and cultivate a culture of organizational learning.

Session Title: Federal Evaluation Policy and Performance Management
Multipaper Session 462 to be held in Sebastian Section I1 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Government Evaluation TIG
Chair(s):
David J Bernstein, Westat, davidbernstein@westat.com
Enhancing the Effectiveness of Federal Agency Performance Measures: Examples From the United States Environmental Protection Agency (EPA) Office of Inspector General (OIG)
Presenter(s):
Melba Reed, United States Environmental Protection Agency, reed.melba@epa.gov
Gabrielle Fekete, United States Environmental Protection Agency, fekete.gabrielle@epa.gov
Abstract: The U.S. Environmental Protection Agency (EPA) Office of Inspector General (OIG) Office of Program Evaluation (OPE) conducts independent evaluations of the agency's programs. Some of these evaluations have examined performance measurement as a means of improving program implementation and management practices. This presentation will share a literature review from an evaluation of an environmental regulation compliance program that led to criteria for assessing the effectiveness of performance measures. In addition to using the criteria to assess the effectiveness of performance measures for an environmental regulation compliance program, OPE also used the criteria in an evaluation of a water pollution program. Findings and recommendations from both studies will be discussed.
Assisting Federal Managers to Gather Reliable Data on Their Performance Measures
Presenter(s):
Herbert Baum, ICF Macro, drherb@jhu.edu
Andrew Gluck, ICF Macro, andrew.gluck@macrointernational.com
Abstract: The Obama Administration has made it clear that data-driven management is one of its fundamental principles. Macro International Inc. works with a number of federal agencies to obtain data for GPRA and PART measures. Data-driven management can only succeed if the data provided are of high quality, and managers need help defining and identifying what this means. As part of this work we have assisted in the refinement of a tool, the 'Data Validation and Verification Worksheet.' Managers, in conjunction with the data owners, answer a series of questions that clarify the data's assumptions, validity, reliability, timeliness of collection, accuracy, integrity, and limitations. In this presentation we share with the audience the various steps in using this tool and review its parts.
Federal Evaluation Policy: What Evaluators Say We Need
Presenter(s):
Margaret Johnson, Cornell University, maj35@cornell.edu
William Trochim, Cornell University, wmt1@cornell.edu
Abstract: This study presents the views of professional evaluators on the essential components of federal evaluation policy. In the spring of 2008, a random sample of members of the American Evaluation Association was surveyed to learn what they thought should be included in a comprehensive set of U.S. federal evaluation policies. Using the concept mapping methodology developed by Trochim, responses were grouped, rated, and analyzed. The results constitute a taxonomy of evaluation policy at the federal level, as well as a comparative analysis of views by member sub-group.
Performance Measurement in Changing Times: Crafting Goal-Based Measurement for a Federal Demonstration Program
Presenter(s):
Jon Blitstein, RTI International, jblitstein@rti.org
Kimberly Leeks, RTI International, kleeks@rti.org
Alicia Richmond, United States Department of Health and Human Services, alicia.richmond@hhs.gov
Allison Roper, United States Department of Health and Human Services, allison.roper@hhs.gov
Barri Burrus, RTI International, barri@rti.org
Abstract: Good performance measurement ties an organization's goals and objectives to measurable results that provide evidence of its level of effectiveness. Our approach to performance measurement is based on a 'mission-to-measures' alignment matrix that begins with the program's mission and, through a targeted process, establishes quantifiable and/or qualifiable goals. The mission-to-measures process includes four key steps: (1) defining mission and goals; (2) clarifying levels of accountability; (3) identifying results tied to program operations; and (4) determining indicators of success. This paper presentation provides an overview of the mission-to-measures matrix and describes how it was applied to assist the Office of Adolescent Pregnancy Programs in establishing sound performance measures for the Adolescent Family Life (AFL) Program. The AFL program supports grant-funded demonstration projects whose purpose is to develop, implement, and evaluate innovative science-based projects that increase knowledge and capacity in the field of adolescent reproductive health.

Session Title: Toward Universal Design for Evaluation: Successes and Lessons Learned in Varied Contexts
Panel Session 463 to be held in Sebastian Section I2 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Special Needs Populations TIG
Chair(s):
Jennifer Sulewski, University of Massachusetts Boston, jennifer.sulewski@umb.edu
Discussant(s):
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Abstract: Universal design refers to designing products or programs so that they are accessible to everyone. Originally conceived in the context of architecture and physical accessibility for people with disabilities, the concept of Universal Design has been adapted to a variety of contexts, including technology, education, and the design of programs and services. This panel will address the idea of Universal Design for Evaluation, drawing on the panelists' individual experiences conducting research with people with and without disabilities. Each panelist will briefly present on what he or she has learned, in the course of his or her own research, about how best to design evaluations to be inclusive of everyone, followed by a discussion of the cross-cutting issues and lessons and what they mean for the evaluation field.
Including Individuals With Disabilities in Evaluation Work: Beginning the Dialogue
Jane Minnema, Saint Cloud State University, jeminnema@stcloudstate.edu
Drawing from her experiences in researching program evaluator competencies and evaluating educational policy, Dr. Jane Minnema brings to this panel both data-based and practical perspectives on the inclusion of individuals with disabilities in evaluation plans, processes, and outcomes. Initial findings from the emergent process of identifying "essential program evaluation competencies" point to considerations of what program evaluators need to know and do to effectively include individuals with disabilities in their practice. More aptly put, these considerations are actually starting points for practitioners and scholars alike to think critically about whether this important subgroup of our society has an authentic voice in our evaluation work. Taking this premise then a step forward, Dr. Minnema will draw on her evaluation practice to identify key learnings from including adolescent English language learners with disabilities in the process of evaluating the implementation of statewide educational testing as mandated by No Child Left Behind, 2001.
Evaluation: Considering the Abilities and Preferences of Traditional and Nontraditional College Students With Disabilities
Lori Corcoran, Quinsigamond Community College, lcorcoran@qcc.mass.edu
Lori Corcoran's career has encompassed a variety of teaching and disability-related positions, particularly at Quinsigamond Community College, where she is currently the Associate Dean of Disability Services. She is a person with a disability, and her sensitivity is therefore heightened when looking at issues surrounding Universal Design. The ultimate goal of Universal Design is to be as inclusive as possible. In applying Universal Design to evaluations, her experience shows that reviewing the development of the research instruments and the mode of administration is critical to broadening participation for traditional and nontraditional college students with disabilities. Items to be discussed include involving people with disabilities in the design of the questions, the wording of the instrument, and the variety of modes of administration. Attendees will hear practical experiences that worked as well as challenges that continue in higher education.
Evaluation With Youth With Disabilities and Special Health Care Needs: Process and Context
Heather Boyd, Virginia Tech, hboyd@vt.edu
Heather H. Boyd has experience in evaluation of programs for people with special needs as an external evaluator, an internal evaluator, and a parent of children with special needs. Currently, she is a program evaluation specialist at Virginia Tech and is a past board member of the Governor's Council on Autism (Wisconsin) and the Greater Madison Autism Society. Her presentation will focus on the evaluation of a Maternal and Child Health demonstration project related to Healthy & Ready to Work. Lessons learned will focus on supports intended to maximize youth voice. The context for the project is the model of person-centered, asset-based community development that was used to create, support, and sustain the project and its outcomes. Youth in the project represented a range of diversity, including ability, health status, language, geographic location, and citizenship status. Implications of a person-centered, asset-based community development approach for Universal Evaluation Design will be shared.
Making Evaluation Instruments and Processes Fully Accessible
Richard Petty, Independent Living Research Utilization, richard.petty@bcm.edu
ILRU has conducted evaluations of its several national technical assistance and training programs. Target audiences have included many individuals with disabilities who lead and work in those programs. As director of three national programs, Richard Petty has ensured that ILRU honored its commitment to access by making program evaluation fully accessible. Petty and his ILRU team have learned several lessons that have helped make their evaluations more accessible to those in a variety of disability groups. Petty will describe how instruments can be designed to be usable by those from a variety of backgrounds and learning levels, the use of alternate formats, ways to provide assistance in completing instruments while minimizing bias on the part of the individual providing assistance, and online evaluations. Petty has also coauthored several papers on strategies for involving participants in directing their own services and in the direction of programs at the systems level.
Including People With Intellectual/Developmental Disabilities
Jennifer Sulewski, University of Massachusetts Boston, jennifer.sulewski@umb.edu
A major focus of research and evaluation at the Institute for Community Inclusion is studying and improving day and employment services for people with intellectual/developmental disabilities (I/DD). In this context, Dr. Sulewski has considerable experience conducting research and evaluation with the I/DD population. Including the viewpoint of individuals with I/DD is a challenging but important part of such research. Her presentation will focus on approaches and lessons learned interviewing and observing people with I/DD for her dissertation on community-based non-work day programs. Topics covered will include obtaining consent, designing questions and instruments to be accessible to a range of cognitive abilities, combining interviews with observations to maximize information, and the role of proxies in interviewing people with communication barriers. Dr. Sulewski also brings to the panel discussion additional experiences related to inclusion of people with various disabilities on project advisory groups, in surveys, and in dissemination of findings.

Session Title: Context-Sensitive Evaluation: Lessons Learned From Large-scale Education Initiatives
Panel Session 464 to be held in Sebastian Section I3 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Ann House, SRI International, ann.house@sri.com
Discussant(s):
Jon Price, Intel, jon.k.price@intel.com
Abstract: Thomas "Tip" O'Neill, a longtime Speaker of the House in the U.S. Congress, famously declared, "All politics is local." Although many policies and funding decisions are made at the larger state and federal levels, ultimately the impact of legislation is felt in the form of pothole repair, snowplowing, and a vast range of government functions. In the end, government must work for the people at home. The question we will address in this panel is a corollary to O'Neill's statement: when considering the challenges of implementing large-scale education programs in vastly different contexts, is all education local? How can a state-wide or global program work at the local level of a single school or a single classroom? What challenges do evaluators of these programs face? And what strategies do these programs use to make their programs work at the local level?
Localization and Global Spread: Evaluating a Top-down Education Program
Ann House, SRI International, ann.house@sri.com
As part of its ongoing work in education, Intel has committed to train more than 10 million teachers on the effective use of technology in the classroom, with the intent of helping students develop key skills needed for success in the global economy. Its education initiative operates in over 50 countries, providing a top-down set of programs and activities disseminated through a train-the-trainer approach. While Ministries of Education can decide which programs they want to implement in their countries and have some say in localization, the programs and evaluation approach are not intended to be locally determined. The challenges to an external evaluator discussed in this presentation will center on how to provide a global overview of Intel's programs that also accounts for local uses and adaptations.
Looking at a Locally Shaped Program Across The Globe: Evaluation Challenges and Solutions
Torie Gorges, SRI International, torie.gorges@sri.com
The pilot of Microsoft's Innovative Schools Program has aimed to support 12 diverse schools around the globe - secondary schools and primary schools, private and public, schools in traditional systems and schools that are breaking the mold. What the schools have in common is a desire to prepare students for the 21st century. The program provides them with frameworks for making decisions about reform and opportunities to discuss their plans with each other and with education experts. Evaluating such a locally determined program presents unique challenges; what can we say about the program as a whole that will take into account each school's unique goals? In this presentation, we will describe our ground-up process of determining evaluation benchmarks, which involved discussion with all the schools, and our distributed model of evaluation, which allows for collection of data that are at once local and global.
Evaluating a State-Wide Reform Effort: The Role of Models and Local Context
Viki Young, SRI International, viki.young@sri.com
Chris Padilla, SRI International, christine.padilla@sri.com
The Texas High School Project (THSP) is a public-private partnership aimed at ensuring that all Texas students graduate from high school prepared for college and career success. The THSP focuses on high-need schools and districts statewide, with particular emphasis on urban areas and the Texas-Mexico border. While the range of this program is located within a single state, the set of programs being implemented is quite complex, including both nationally established school models (e.g., High Schools That Work, Early College High Schools) and homegrown reform models developed by individual schools. The external evaluator of this project must look across school models, across their different goals, and across inner-city and rural locations to identify how and why THSP is making changes in the schools. This presentation will describe the successes and challenges of evaluating an intricate reform effort as a whole while accounting for distinct school models, and provide some early learnings regarding the impact of local context on school reform.

Session Title: Assessing Evaluation Needs: Multiple Methods and Implications for Practice
Panel Session 465 to be held in Sebastian Section I4 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Arlen Gullickson, Western Michigan University, arlen.gullickson@wmich.edu
Discussant(s):
Frances Lawrenz, University of Minnesota, lawrenz@umn.edu
Abstract: The Evaluation Center was recently funded by the National Science Foundation to provide evaluation-related technical assistance for one of its programs. In this project's first year, a comprehensive needs assessment was conducted to determine the types of evaluation support that were most needed by the program's grantees. In this panel session, each presenter will describe ways in which evaluation-specific needs were assessed. The first paper describes three distinct metaevaluation efforts. The second discusses how the questions posed for technical assistance were used as means for identifying needs. The last paper discusses how existing data on evaluation practices among the target audience were mined and combined with results from document reviews and interviews to provide another perspective on needs for evaluation support. Together, the panel offers a broad picture of how to assess needs and gives a rich lesson in how evaluators can better assist their clients and improve their practice.
Metaevaluation as Needs Assessment
Lori Wingate, Western Michigan University, lori.wingate@wmich.edu
This presentation describes three metaevaluation studies conducted to identify needs for evaluation resources and training among evaluators of projects funded through a National Science Foundation program. For one study, thirty raters assessed the extent to which ten evaluations met the Joint Committee's Program Evaluation Standards. A second study, conducted by one of the evaluation discipline's top scholars, used a case study approach for in-depth study of evaluation practices at two sites. A third study, conducted by another evaluation expert, was based on interviews and document reviews to assess evaluation efforts at three sites. The collective results are being used to showcase exemplary practices and guide capacity-building efforts in aspects of evaluation that are determined to be in need of improvement. The presentation illuminates different ways in which evaluation efforts can be evaluated and demonstrates how summative metaevaluations can serve a formative function to assess needs and build capacity in evaluation.
Listening to Needs: How Requests for Evaluation Assistance Can Teach Us How to Be Better Evaluators
Stephanie Evergreen, Western Michigan University, stephanie.evergreen@wmich.edu
As a National Science Foundation-funded evaluation technical assistance provider, The Evaluation Center receives numerous requests from project directors who need help evaluating their grant activities. This paper will review the most Frequently Asked Questions (FAQs) and examine them for their implications for evaluators from all backgrounds. The FAQs reveal gaps in the way evaluators communicate with clients, present their plans and work products, and make themselves known to potential clients. In general terms, evaluation clients are expressing a need for greater support in the "softer" skills, rather than more research-oriented tasks like analyzing data. For example, a question about how a project director should balance the roles of internal and external evaluators can provide insight to evaluators about places to be more proactive with a client. The presenter is a research associate at The Evaluation Center who provides technical assistance and creates resources to address evaluation needs.
Comparison of Evaluation Use and Organizational Factors as Needs Assessment
Amy Gullickson, Western Michigan University, amy.m.gullickson@wmich.edu
Regular evaluation is a requirement for projects and centers that receive National Science Foundation (NSF) funding. However, making evaluation a required activity does not mean that it will be used. This paper reports on a study based on ten years of survey data collected by The Evaluation Center, including information on needs assessment and evaluation practices. The data were used to select cases of NSF's Advanced Technological Education program grantees who described evaluation either as 'essential' or 'not useful' to their work. Review of the reports and activities from these cases revealed information about the purpose and quality of evaluation at each site. Follow-up interviews with the selected sites explored actual use (or non-use) of reports and possible influences including organizational culture and relationship with the evaluator. The comparison of 'not useful' and 'essential' cases identified factors that inhibit and enable the use of evaluation for program improvement.

Session Title: Community as Context: Evaluation of Comprehensive Community Initiatives
Panel Session 466 to be held in Sebastian Section K on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Presidential Strand
Chair(s):
Teresa Behrens, The Foundation Review, behrenst@foundationreview.org
Discussant(s):
Teresa Behrens, The Foundation Review, behrenst@foundationreview.org
Abstract: Recognizing the complexity of communities, many foundations have adopted a strategy of supporting broad or deep change in targeted geographical areas. Commonly called comprehensive community initiatives, or CCIs, these initiatives present unique evaluation challenges, including multiple interventions; often poorly defined or overly ambitious goals; and an evaluator who may become part of the change work. This panel, which includes authors represented in the first issue of The Foundation Review, will explore the ways in which evaluators met these challenges and the ways in which the community perspective can be represented in the evaluation.
Lessons From The Colorado Trust's Healthy Communities Initiative
Ross Conner, University of California Irvine, rfconner@uci.edu
Doug Easterling, Wake Forest University, dveaster@wfubmc.edu
Ross Conner and Doug Easterling lead the evaluation team for this initiative to engage communities in developing locally relevant strategies to improve health. They will share lessons about how 29 different community contexts were considered in the initiative evaluation.
Evaluating Scale, Scope and Sustainability
David Chavis, Community Science, dchavis@communityscience.com
Tina Trent, NeighborWorks, ttrent@nw.org
Trent and Chavis reviewed 11 comprehensive community initiatives to identify factors related to scale, scope and sustainability. Chavis will discuss how these constructs were evaluated and what community characteristics influenced both the evaluations and the results.
Evaluation of Foundation Readiness for Community Change
Prudence Brown, Independent Consultant, pruebrown@aol.com
Marie Colombo, Skillman Foundation, mcolombo@skillman.org
Della M Hughes, Brandeis University, dhughes@brandeis.edu
Brown, Colombo and Hughes describe how one foundation made significant changes to how it operates in order to work more effectively with the community. In this presentation they will discuss how they evaluated foundation readiness and the challenges of giving feedback to foundation staff.

Session Title: Contextual Challenges of Evaluating Democracy Assistance
Panel Session 467 to be held in Sebastian Section L1 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Rebekah Usatin, National Endowment for Democracy, rebekahu@ned.org
Abstract: Democracy assistance presents a particular set of difficulties to the field of evaluation at both the macro and micro levels. Often, the conditions under which democracy assistance projects and programs take place are challenging for political reasons. By their very nature, these types of projects and programs are extremely difficult to evaluate and attributing causality is virtually impossible. Nonetheless, both donors and implementers of democracy assistance make considerable attempts to utilize qualitative and quantitative data to determine what difference their projects and programs are making. This panel will explore the challenges of context faced by evaluation staff at three different organizations working abroad.
Evaluating Democracy Assistance Grantmaking: Challenges and Opportunities
Rebekah Usatin, National Endowment for Democracy, rebekahu@ned.org
The National Endowment for Democracy (NED) is a private, nonprofit organization created in 1983 and funded through an annual congressional appropriation to strengthen democratic institutions around the world through nongovernmental efforts. NED's grants program provides support to grassroots organizations in more than 80 countries to conduct projects of their own design. The varied political and cultural contexts of NED grantees coupled with the difficulties of attributing programmatic success to a single small grant make for a challenging evaluation context. The panelist is the sole staff member devoted to evaluation at NED. She will discuss strategies and methods employed by NED to evaluate its grants program at the micro and meso levels.
Evaluating Market-Based Democratic Reforms: The Case of the Center for International Private Enterprise
Nigina Malikova, Center for International Private Enterprise, nmalikova@cipe.org
The Center for International Private Enterprise (CIPE) strengthens democracy around the globe through private-enterprise and market-oriented reform. Through a grassroots grants program, an award-winning communications strategy, and capacity and technical assistance, CIPE works to help the business community become a leading advocate for market-oriented reform and democratic governance. A single partner grant project generally combines a mix of common functional project design components. The challenge for program officers is to disentangle these components to eliminate unnecessary overlap among project objectives; the context of CIPE's work also makes evaluation of impact extremely difficult. For example: Is it possible to measure success through enacted reform in national laws and regulations that affect the business community? Or is it better to make a link with larger social changes, such as market-based democratic reforms? How do we make those links? What other societal changes do we consider? What is CIPE's value added?
Evaluation of Democracy and Governance Programs in Post-Conflict Environments
Abigail Lewis, National Democratic Institute for International Affairs, alewis@ndi.org
Post-conflict environments present special challenges to those conducting monitoring and evaluation of democracy and governance programs where beneficiary groups could be put at increased risk during M&E processes. Sensitive political contexts, vulnerable beneficiary groups and interruptions in activities are a very real part of programming in emerging or nascent democracies. In these situations, it is crucial to be specific and clear with risk analysis and critical assumptions that must happen within the operating environment for programming to continue, let alone be successful. Extra funding must be allotted for security precautions in data collection and alternative methods may need to be employed. M&E specialists must weigh the risk to themselves and beneficiary groups when choosing such methods. This presentation will look at the available literature and case studies to draw upon best practices and lessons learned for future monitoring and evaluation of democracy and governance programs in these contexts.

Session Title: Good Practice Guidelines and Examples for Evaluating Global and Regional Partnership Programs (GRPPs)
Panel Session 468 to be held in Sebastian Section L2 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Mark Sundberg, World Bank, msundberg@worldbank.org
Abstract: The number of Global and Regional Partnership Programs (GRPPs) addressing cross-country issues such as preserving environmental commons and mitigating communicable diseases has grown exponentially since the early 1990s. While the value of periodic evaluation has been recognized, special challenges in applying standard evaluation criteria include: (a) programs evolve considerably over time; (b) results chains are complex and multi-layered; (c) central functions such as governance also need to be assessed; and (d) global partnership aspects require a tailored approach to assessing sustainability of outcomes. In collaboration with several evaluation networks, the World Bank's Independent Evaluation Group has been developing good-practice guidelines, tools and examples for evaluating GRPPs based on a survey of over 60 such evaluations, to complement the previously published Sourcebook for Evaluating GRPPs. This AEA panel, one of two highlighting the findings of this work, focuses on applying standard evaluation criteria to these challenging partnership evaluations.
Assessing the Relevance of Global and Regional Partnership Programs (GRPPs)
Chris Gerrard, World Bank, cgerrard1@worldbank.org
It is important to expand the usual concept of relevance of public good programs beyond consistency with beneficiary needs and priorities to capture some additional dimensions of relevance arising from the nature of GRPPs as international collective action designed to address global and regional concerns that the partners can only address, or address more efficiently, by acting together. These additional dimensions of relevance include (1) international consensus on the definition of the problem and strategies for action, (2) consistency with the subsidiarity principle, (3) the absence of alternative sources of supply, and (4) the extent to which the program is actually providing global or regional public goods. The presentation will provide a framework, good practice guidelines, tools and examples for addressing these additional aspects of relevance, based on a survey of over 60 GRPP evaluations.
Assessing the Efficacy and Efficiency of Global and Regional Partnership Programs (GRPPs)
Lauren Kelly, World Bank, lkelly@worldbank.org
GRPPs support diverse types of activities such as networking, advocacy, knowledge creation, technical assistance, or investments. Each type of activity presents methodological challenges in assessing the efficacy and efficiency of the program because different types of activities contribute in different ways to domestic policy and institutional reforms, human resource capacity, and investments in the sector, as well as to other objectives such as poverty reduction and improvements in welfare. GRPPs also produce and deliver goods and services at different levels, global, regional, national, and local. This presentation will provide a framework, good practice guidelines, tools and examples for assessing efficacy and efficiency, based on a survey of over 60 GRPP evaluations.
Assessing the Governance and Management of Global and Regional Partnership Programs (GRPPs)
Anna Aghumian, World Bank, aaghumian@worldbank.org
Governance is both a means and an end; both how and whether a program achieves its objectives are important. GRPPs also employ a diverse array of governance models associated with the history and culture of each program along a continuum from pure shareholder models (in which membership on the governing body is restricted to financial and other contributors) to more inclusive stakeholder models (in which membership also includes non-contributors). Therefore, this paper suggests that the performance of the governing bodies and management units in their functions should be measured against certain good governance principles such as legitimacy, efficiency, accountability, responsibility, transparency and fairness. The presentation will provide a framework, good practice guidelines, tools and examples for doing so, based on a survey of over 60 GRPP evaluations.
Assessing the Sustainability of the Outcomes of Global and Regional Partnership Programs
Elaine Wee-Ling Ooi, World Bank, eooi@worldbank.org
The extent to which the benefits arising from the activities of a GRPP are likely to be sustained in the future will depend on a number of factors such as (1) the sustainability of the program itself, (2) complementary activities undertaken by the program's global and regional partners, (3) the ability of country-level stakeholders to take over the program's country-level activities, (4) measures to strengthen local ownership and capacity of said activities, as necessary, and (5) external factors beyond the sphere of influence of the program. This presentation will provide a framework, good practice guidelines, tools and examples for assessing the sustainability of program outcomes, based on a survey of over 60 GRPP evaluations.

Session Title: Measuring Outcomes and Building Capacity Within the Informal Science Education Program at National Science Foundation: What Every Evaluator Should Know
Panel Session 469 to be held in Sebastian Section L3 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
Abstract: This session will provide participants with an overview of the Informal Science Education (ISE) Program at the National Science Foundation (NSF) and describe ongoing efforts to promote evaluation within the ISE program. NSF Program Directors will present findings from two recent portfolio review efforts: (a) a trend analysis of the ISE portfolio conducted by the Portfolio Inquiry Group [Center for Advancement of Informal Science Education (CAISE) and NSF] and (b) a portfolio review of ISE media projects. Two other major ISE evaluation activities will be presented: 1) A primary author of the Framework for Evaluating Impacts of Informal Science Education Projects-the guide that shapes project evaluation within the ISE program-will discuss what evaluators need to know about using this guide to frame evaluations; and 2) The Senior Study Director leading the development and implementation of the ISE project management system will present the Online Program Monitoring System, used to document the collective impact of the ISE portfolio of funded projects, monitor participants' activities and accomplishments, and obtain information that can inform design and implementation of future ISE projects. The implications of these ISE evaluative efforts on the field and potential ISE project evaluators will be discussed.
Trend Analyses: Understanding the Informal Science Education (ISE) Project Portfolio and Evaluations
Monya Ruffin, National Science Foundation, mruffin@nsf.gov
This session will highlight the results of a recent trend analysis of the ISE project portfolio conducted by the Portfolio Inquiry Group, consisting of representatives from the Center for Advancement of Informal Science Education (CAISE), Inverness Research, and NSF. In addition, this session will present the results of an internal evaluation of ISE media project summative evaluations submitted in the past five years for television, radio, film, and cyberlearning projects.
Using the Framework for Evaluating Impacts of Informal Science Education Projects
Sue Allen, National Science Foundation, sallen@nsf.gov
The Framework for Evaluating Impacts of Informal Science Education Projects was developed to introduce the Informal Science Education audience to evaluation and to spark a field-wide discussion about evaluation. As one of the authors of the Framework, the presenter will share how it supports ISE projects in conducting evaluations, its relationship to other ISE evaluation efforts, and how it has been influential in shaping evaluation in the ISE field.
The Informal Science Education (ISE) Project Management System: An Examination of the ISE Portfolio and the Implications of the Findings for the Field and Potential ISE Evaluators
Gary Silverstein, Westat, silverg1@westat.com
This session will provide participants with an overview of the OPMS, an online project management system collaboratively designed and implemented by NSF and Westat. The Senior Study Director will present the project management system, which is used to document the collective impact of the ISE portfolio of funded projects and monitor participants' activities and accomplishments. Discussion will include connections to the other ISE evaluation activities and the relevance of the monitoring system to evaluators of ISE projects.

Session Title: Managing Program Evaluation: Towards Explicating a Professional Practice
Multipaper Session 470 to be held in Sebastian Section L4 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Evaluation Managers and Supervisors TIG
Chair(s):
Don Compton, Centers for Disease Control and Prevention, dcompton@cdc.gov
Discussant(s):
Ann Maxwell, United States Department of Health and Human Services, ann.maxwell@oig.hhs.gov
Laura Feldman, University of Wyoming, lfeldman@uwyo.edu
Abstract: Our recent issue of New Directions for Evaluation is Managing Program Evaluation: Towards Explicating a Professional Practice. It was designed to make explicit this practice, and to address our profession with three focal questions: Should we recognize managing evaluation as a core professional expertise? Should we promote this by legitimizing the preparation of experts and expertise? And, if so, what should be the curriculum, pedagogy and learning sites? These three questions are the substance of this multipaper presentation. Baizerman begins with an overview of the purpose, core concepts and insights in the issue, and will be followed by authors of three of the issue's case studies who will present the central themes of their work. Then the co-leaders of the Evaluation Managers and Supervisors TIG, which sponsored the issue and the session, will serve as discussants. Discussion with presenters and participants about the focal questions, the case studies and the issue in general will conclude the session.
Overview of the Issue
Michael Baizerman, University of Minnesota, mbaizerm@umn.edu
Baizerman begins the session with an overview of the purpose, core concepts and insights in the issue.
Managing Studies Versus Managing for Evaluation Capacity Building
Don Compton, Centers for Disease Control and Prevention, dcompton@cdc.gov
Compton provides the key themes from his case study of managing for evaluation capacity building at the American Cancer Society.
Managing Evaluation in a Federal Public Health Setting
Michael Schooley, Centers for Disease Control and Prevention, mschooley@cdc.gov
Michael Schooley, an experienced manager of a heterogeneous evaluation group at CDC, discusses managing in a federal health setting.
Slaying Myths, Eliminating Excuses: Managing for Accountability by Putting Kids First
Robert Rodosky, Jefferson County Public Schools, robert.rodosky@jefferson.kyschools.us
Rodosky and Munoz present their case study of managing an evaluation unit in the Jefferson County public schools.

Session Title: The Logic Model and Systems Thinking: Can They Co-Exist?
Think Tank Session 471 to be held in Suwannee 11 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Robert Richard, Louisiana State University, rrichard@agcenter.lsu.edu
Abstract: The Logic Model, with its familiar Inputs-Outputs-Outcomes linearity, has found great use within Cooperative Extension program planning and evaluation. As we recognize the chaotic nature of today's world and the emergence of Chaos Theory, the question arises of how we can best utilize the linear Logic Model vis-à-vis a world that seems to change with each sunrise. How do ideas such as those suggested by Peter Senge, Otto Scharmer, Dee Hock and others impact the way Extension should engage stakeholders, and plan and deliver programs? How do program evaluation ideas such as Michael Patton's and Glenda Eoyang's blend with concepts promulgated by the Logic Model? This session will consider these questions and others as we explore the place of the Logic Model, Systems Thinking, and other concepts in keeping Extension program planning and evaluation relevant.

Session Title: Using Evaluation Methodologies in Context
Multipaper Session 472 to be held in Suwannee 12 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Social Work TIG
Chair(s):
Brian Pagkos, Community Connections of New York, bpagkos@comconnectionsny.org
Demonstrating Prevention Program Effectiveness: An Examination of the Validity of the Retrospective Pretest Design
Presenter(s):
Karin Chang-Rios, University of Kansas, kcr@ku.edu
Jacqueline Counts, University of Kansas, jcounts@ku.edu
Heather Rasmussen, University of Kansas, hrasmussen@ku.edu
Elenor Buffington, University of Kansas, elliebuf@ku.edu
Abstract: Demonstrating effectiveness is a major focus of child abuse prevention programs nationally. Most evaluators in the field use a traditional pretest-posttest design to assess impact; however, an increasing number are exploring the retrospective pretest-posttest as a viable replacement. Program staff have been particularly interested in the retrospective design because it can be administered at one point in time and overcomes the possibility of response-shift bias. This study examines the comparability and validity of the traditional and retrospective pretests using data from the national Protective Factors Survey. A total of 94 participants from seven agencies completed the Protective Factors Survey and two validity scales (mood and social desirability). This presentation summarizes the study's findings and presents recommendations for evaluators considering use of a retrospective design.
Realist Evaluation of What Works and in What Contexts in Achieving Scotland's 'Getting it Right for Every Child' Outcomes
Presenter(s):
Mansoor Kazi, University at Buffalo - State University of New York, mkazi@buffalo.edu
Jeremy Akehurst, Moray Council, jeremy.akehurst@moray.gov.uk
Abstract: Moray Council's Children & Family and Criminal Justice Service has been integrating realist evaluation (Kazi, 2003) into practice to investigate what interventions work and in what contexts to achieve the Scottish Government's 'Getting it Right for Every Child' outcomes. This strategy includes the use of reliable outcome measures repeatedly over time, the recording of children and families' contextual data, and information on the services provided. Regular analysis of patterns of change in these data enables a prospective investigation of where services are more or less likely to achieve the desired outcomes, and repeated analysis of the findings helps to better target the services for children and their families. A four-year longitudinal evaluation (n = 134) found that although the program had been effective in reducing the risk of offending and the number of offences, alcohol misuse was a significant barrier to progress for persistent young offenders.
Understanding the Diverse Contexts of a Multi-site Program: The Utility of In-Depth Telephone Interviews
Presenter(s):
Anne J Atkinson, PolicyWorks Ltd, ajatkinson@policyworksltd.org
Patricia Gonet, Patricia Gonet Ltd, patricia_gonet@earthlink.net
Abstract: Virginia's Adoptive Family Preservation Program (AFP) is a multi-site statewide post-adoption services program serving a broad range of adoptive families in more than a dozen dissimilar community settings. Consistent with a national best practice model for post-adoption services, much emphasis is placed on ensuring that services are family-centered, adoption sensitive, strength focused, and directed by families. 'Listening' to families is an important component of the program's mixed-method comprehensive evaluation; over 600 in-depth telephone interviews have been conducted with adoptive families over the past five years. These interviews have yielded a great deal of rich data on client circumstances, expectations, and needs and on the diverse contexts in which program services are delivered. This workshop focuses on the methodologies employed in the comprehensive evaluation with particular emphasis on the utility of the in-depth interviews for gaining deep insight into important aspects of program context, operation, effects, and outcomes.

Session Title: Working With Contexts Affecting Student Test Scores: Spatial Patterns, Reference Grouping Effects, and Timing
Multipaper Session 473 to be held in Suwannee 13 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Bonnie Swan, University of Central Florida, bswan@mail.ucf.edu
Discussant(s):
Aarti Bellara, University of South Florida, abellara@mail.usf.edu
Do Summer Library Reading Programs Impact Students' Reading Ability?
Presenter(s):
Deborah Carran, Johns Hopkins University, dtcarran@jhu.edu
Susan Roman, Dominican University, sroman@dom.edu
Abstract: Summer reading programs are offered by 95% of public libraries in the United States. These programs seek to create and sustain a love of reading in children, and to prevent the loss of reading skills, which research shows often occurs during the summer months. This study, commissioned by the Institute of Museum and Library Services (IMLS) under a National Leadership Grant (NLG), examines the impact of public library summer reading programs on 1) student reading achievement over the summer between third and fourth grade and 2) youth motivation to read and enjoyment of reading.
Exploring Spatial Patterns of No Child Left Behind
Presenter(s):
Kristina Mycek, University at Albany - State University of New York, km1042@albany.edu
Abstract: No Child Left Behind (NCLB) is an important piece of legislation affecting education across all states. Due to the political influence of NCLB on policy, school ratings, and monetary expenditures, it is important for evaluators to make accurate reports. This study examines the role geography plays in NCLB, particularly in identifying factors that may influence school or student performance. Spatial analysis was chosen because of the dual dimensionality of the problem: time and space. This study looks at the spatial patterns of New York State school districts' scores on English Language Arts and Mathematics standardized tests across time, while also looking at additional variables that could potentially influence test scores. Initial results suggest effects of poverty and minority population on NCLB outcomes.
Ability Grouping and Academic Self-concept: A Theory-driven Evaluation
Presenter(s):
S Marshall Perry, Dowling College, smperry@gmail.com
Abstract: This study concerns a two-year evaluation of the effects of a school's organizational structure. It aims to examine if academic tracking causes differences in student self-concept, but also seeks to understand why these self-concepts differ. Specifically, it differentiates between two potential mechanisms. Labeling is the stigmatization of a student due to membership in a track. Reference group effects stem from how a student may compare herself to or gain norms, values, and attitudes from classmates. At both the start and end of an academic year, two mixed-ability classes and two tracked classes responded to a questionnaire that rated various areas of self-concept. Sixth grade students responded a third time in seventh grade. The study demonstrates the relevance of both labeling and reference group effects but emphasizes the consideration of teacher effects and self-protective measures. By clarifying prior contradictory findings, the study offers lessons on the importance of theory-driven evaluation.

Session Title: Emerging Models and Concepts in Educational Program Evaluation
Multipaper Session 474 to be held in Suwannee 14 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Julieta Lugo-Gil, Mathematica Policy Research Inc, jlugo-gil@mathematica-mpr.com
The Use of Stakeholder-based Theories in the Evaluation of a School Policy: Proposing an Empirically Based Evaluation Framework
Presenter(s):
Shahpar Modarresi, Montgomery County Public Schools, shahpar_modarresi@mcpsmd.org
Faith Connolly, Naviance, faith.connolly@naviance.com
Abstract: A stakeholder-based theory framework was used to conduct a formative evaluation of a new grading and reporting policy in an educational setting. A multi-method approach was employed to empirically examine the extent and consistency of the policy implementation at the district level. This paper begins with a description of the policy followed by the evaluation objectives. Next, the literature pertaining to theoretical approaches to stakeholders' involvement in evaluations is described. Then, the methodology is presented, including the components of the theoretical framework used to conduct this evaluation, followed by its findings. The concluding section presents practical and structural (e.g., political, social, utilization) challenges that were encountered in the evaluation process and proposes an analytical framework to cope with these challenges.
Understanding Technology Literacy: A Contextual Framework for Evaluating Educational Technology Integration
Presenter(s):
Randall Davies, Brigham Young University, randy.davies@byu.edu
Abstract: Federal legislation currently mandates the integration of technology into the school curriculum because it is commonly believed that learning is enhanced through the use of technology. The challenge for educators is to understand how best to use technology while developing the technological expertise of their students. This session outlines a contextual framework for evaluating technological literacy designed to help us understand and promote technology integration properly. It can also be used to inform technology-enhanced instructional systems development.
Challenges of Evaluating School Reform: Lessons Learned From a Multi-State Small Learning Communities Project
Presenter(s):
Roy Kruger, Northwest Regional Educational Laboratory, krugerr@nwrel.org
Annie Woo, Northwest Regional Educational Laboratory, wooa@nwrel.org
Abstract: Challenges associated with evaluating school reform projects stem from: 1) the long-term, broad nature of the goals of such projects; and 2) conflicts associated with differences among stakeholders. Evaluators need to look at both the technical and cultural contexts of such projects. This study explored the impact of Small Learning Communities (SLC) organizational structures in assisting schools in addressing the Adequate Yearly Progress (AYP) demands of the No Child Left Behind (NCLB) legislation. The study is a research synthesis of the evaluation of the effectiveness of the SLC model in eight school districts (32 individual schools) in four states. In addition to the collection of student achievement data, surveys and interviews with administrators, teachers, and students were conducted for the purpose of measuring the effectiveness of the project in: a) developing effective smaller learning communities; b) providing students with an intellectually challenging and emotionally supportive learning environment; and c) increasing student achievement.
Response to Intervention (RTI): An Evaluation Conundrum
Presenter(s):
Bill Thornton, University of Nevada, Reno, thorbill@unr.edu
Janet Usinger, University of Nevada, Reno, usingerj@unr.edu
George Hill, University of Nevada, Reno, gchill@unr.edu
Abstract: Response to Intervention (RTI) has captured the imagination of the K-12 school system. However, RTI's widespread and somewhat random use in public schools for both students with Individual Education Plans (IEPs) and the general student population has created significant challenges for program evaluators. Because RTI follows a permissive rather than prescriptive model, at its very foundation, fidelity of implementation is elusive in that the needs of the students drive the specific interventions. Capturing what is happening at the building level does not necessarily provide sufficient evidence to suggest RTI's impact. The University of Nevada, Reno Department of Educational Leadership has had several doctoral students undertake evaluation dissertations that approach RTI from different perspectives. The purpose of this session will be to describe the various approaches that have been undertaken in evaluating RTI, as well as the results of the dissertations that have been completed.
Charter School Evaluation: Trends, Challenges, and Prospects
Presenter(s):
James Sass, Independent Consultant, jimsass@earthlink.net
Abstract: Over the past two decades, charter schools have grown from a concept to a small movement to an established constituency in public education. The increasing number and importance of charter schools provide both opportunities and challenges for evaluators. Much evaluation work is needed for analyzing the effectiveness of charter school policies, assessing the strength of individual charter schools, and promoting charter schools' improvement. Challenges include charter laws that vary greatly from state to state, the difficulty in establishing appropriate comparison groups, and the uniqueness of individual charter schools. This review will address these opportunities and challenges at three levels: summative evaluation of large-scale charter school policies and the charter school movement, high-stakes evaluation of individual schools by authorizers and accrediting agencies, and formative evaluation of individual schools and school associations. While addressing past developments and current trends, this paper will also provide prospects for the future of charter school evaluation.

Session Title: Evaluating Educational Partnership Projects: Four Approaches
Multipaper Session 475 to be held in Suwannee 15 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Joy Sotolongo, North Carolina Partnership for Children, jsotolongo@ncsmartstart.org
Discussant(s):
Cindy Beckett, Independent Consultant, cbevaluate@aol.com
Outcomes of a Character Education Program: Implications for School Districts With Increasingly Limited Resources
Presenter(s):
Tammy DeRoo, Western Michigan University, tammy.l.deroo@wmich.edu
Amy Schelling, Western Michigan University, amy.l.schelling@wmich.edu
Monica Lininger, Western Michigan University, monica.lininger@wmich.edu
Gary Miron, Western Michigan University, gary.miron@wmich.edu
Abstract: Ethical, social, and emotional development plays a significant role in the academic achievement of children. Combined, the resulting knowledge, skills, and abilities pave the road to success as a contributing member of society. This evaluation is examining the short and intermediate outcomes of a character education program implemented by the Sherman Lake YMCA Outdoor Center in conjunction with schools in Southwest Michigan. Over the course of multiple days, this unique off-site camp program engages students in experiential-based activities that teach and reinforce the principles of Honesty, Caring, Respect, and Responsibility (HCRR), based on the premise that student improvement in these areas will be evident both in their academic and social endeavors. A theory-driven model is being used to assess impact on students' problem-solving abilities, initiative, independence, and behaviors, as well as overall school climate. In addition, participating teachers will be assessed for evidence of increased skills in classroom management.
Can't See the Forest for the Trees: Looking Beyond Established Career Education Evaluation Measures Toward the Context for Program and Policy Change
Presenter(s):
Grant Morgan, University of South Carolina, morgang@mailbox.sc.edu
Mark D'Amico, University of North Carolina at Charlotte, damico_mark@yahoo.com
Abstract: For nearly 15 years, many College Tech Prep partnerships between K-12 school districts and community colleges have been evaluated using pre-determined data points. In some cases, measures are developed to comply with reporting requirements without careful consideration of individualized program interventions. The purpose of this presentation is to illustrate how further analysis on reported data elements expands the evaluation by addressing the program policy and delivery context within the College Tech Prep environment. More specifically, the study examines how looking beyond the original intent of an evaluation instrument uncovered the relationships between postsecondary education aspirations aligned with career goals and work-based learning experiences (e.g., job shadowing, internships, and/or co-operative education) or career planning assistance. Data were collected from more than 1,500 students completing a career and/or technical education course of study.
Developing a Contextual Model to Evaluate Education Partnerships in Underserved Areas
Presenter(s):
Mehmet Dali Öztürk, Arizona State University, ozturk@asu.edu
Kerry Lawton, Arizona State University, klawton@asu.edu
Abstract: Over the years, American colleges and universities have attempted to assist historically disadvantaged populations by offering their expertise in research and teaching to schools within the community. However, many university-school partnership efforts are developed without the embedded measurement and evaluation components needed to evaluate their effectiveness. This paper demonstrates the importance of thorough evaluation of university-school partnerships and outlines MEHMET, a conceptual framework that can be used to do so. In addition, this paper also focuses on the most critical step in evaluating effective university-school partnerships, the identification of the layers and levels of measurable engagements and multiple stakeholder perspectives. Last, this paper will describe methodological and statistical considerations that must be made when evaluating partnership efforts between preK-12 schools and an institute of higher education located in a culturally, linguistically, and economically diverse community.
Business Partnerships With High School Mathematics: Evaluation Towards Rigor, Relevance, and Relationships for At-Risk and High-Performing Students
Presenter(s):
Paul Gale, San Bernardino County Superintendent of Schools, ps_gale@yahoo.com
Abstract: The context of the ongoing evaluation study focuses on two initiatives targeting different high school student populations. Both initiatives encompass math/science activities that are co-developed by Southern California business partners and math content experts, who work with diverse student populations from urban high schools. Each of the initiatives has professional business partners or law enforcement investigators leading students through authentic examples of the use of math and science in their careers. The ultimate goal of the initiatives is to motivate students to pursue Science, Technology, Engineering, and Math (STEM) careers with local businesses. The study addresses five implementation questions focused on students' learning, engagement, and interests. The presentation will provide a framework for the study, present results, and examine issues raised by stakeholders.

Session Title: Lessons Learned From Working With Our Own: Reflections on How Personal Values and Experience Contribute to Working in an Indigenous Context
Multipaper Session 476 to be held in Suwannee 16 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Indigenous Peoples in Evaluation TIG
Chair(s):
Kataraina Pipi, FEM (2006) Limited, kpipi@xtra.co.nz
Abstract: Three indigenous evaluators with varying levels of expertise and knowledge in evaluation portray some of their experiences of working within their communities where the context is indigenous, and where evaluation practice aligns with their distinctive cultural values. The evaluators discuss 5-6 cultural values that underpin their practice. They provide examples of where these have been applied in different contexts, and examples of beneficial results for the outcomes of evaluations.
Pepeha: My Personal Global Positioning System in Evaluative Action
Kirimatao Paipa, Kia Maia Limited, kiripaipa@ihug.co.nz
This presentation will explore the way that cultural values and practices influenced this indigenous evaluator in an evaluation about the negative impacts of methamphetamine on indigenous families. Interview techniques used included Maori cultural aspects of rapport-building, showing empathy, encouraging story-telling, acknowledging pain and suffering, and acknowledgement of the journey to well-being.
Timely Reporting: Working With and Around Cultural Mores
Vivienne Kennedy, VK Associates Limited, vppk@snap.net.nz
This presentation will discuss a Kaupapa Maori approach to evaluating a workforce development mentoring program and how cultural norms and practices affect evaluative reporting. Discussion will cover cultural values of time and place versus the project timelines, and appropriate forms of cultural engagement. All this leads to an acknowledgement of how indigenous evaluators work with and around cultural mores.
Whanaungatanga: The Cost of Utilizing Relationships and Connections in Evaluation
Kataraina Pipi, FEM (2006) Limited, kpipi@xtra.co.nz
This presentation will reflect on the importance of relationships that need to be developed and maintained throughout and after the evaluation project. Reflections on lessons learned from a project regarding indigenous approaches to family violence are shared. Discussion will cover the advantages and disadvantages of utilising existing relationships and connections. The cultural and professional costs are considered.

Session Title: Considering Developmental Issues as Critical Facets to the Evaluation Context
Panel Session 477 to be held in Suwannee 17 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Tiffany Berry, Claremont Graduate University, tiffany.berry@cgu.edu
Discussant(s):
Katherine Byrd, Claremont Graduate University, katherine.byrd@cgu.edu
Abstract: What does training in developmental psychology afford the evaluation community? What do evaluators who work with programs serving children need to know about developmental issues? How do evaluation practices change as a result of incorporating salient developmental issues? The purpose of this panel is three-fold: (1) introduce salient developmental issues relevant to the evaluation of programs serving children and youth; (2) describe the unique developmental issues involved with evaluation methods (e.g., design, measurement, assessment, etc.); and (3) illustrate how these salient developmental issues can be integrated feasibly and efficiently into an existing evaluation framework (i.e., the Centers for Disease Control and Prevention's framework). Ultimately, considering developmental issues as critical facets to the evaluation context may improve evaluation practice, as well as the sensitivity of the evaluation for detecting program effects.
An Overview of Child Development: Theories, Domains, and Tasks
Susan Menkes, Claremont Graduate University, susan.menkes@cgu.edu
Krista Collins, Claremont Graduate University, krista.collins@cgu.edu
In the evaluations of programs serving children and youth, understanding developmental issues is paramount to delivering effective program evaluation. Thus, the purpose of this paper is to provide an overview of salient issues in child development that specifically relate to program evaluation practices (e.g., measuring outcomes, specifying a design, aligning program activities with outcomes, etc.). This paper will present developmental theory to orient the audience towards understanding how children interact with and are influenced by the program context. We will also discuss how children change across developmental domains (e.g., social-emotional, cognitive) and within key age groups (e.g., infancy, preschool, school age, and adolescence). Ultimately, this paper will describe key developmental tasks useful in establishing developmentally appropriate evaluations.
Overcoming Methodological Challenges When Evaluating Programs Serving Children
Katherine Bono, California State University Fullerton, kbono@gmail.com
This paper will focus on several methodological issues relevant to evaluating programs that serve children. First and most importantly, children are developing while participating in any program. Thus, how does an evaluator discern program effects from typical development? Simple pre-post designs without any source of comparison are usually not adequate. Another methodological issue is related to the choice of short-term versus long-term outcomes for desired program changes. It is often necessary to measure precursors to desired behavioral change rather than a behavior that may not change for months or years after the completion of the program. The evaluator must also choose measures that will most accurately indicate behavioral changes related to the program. These decisions often lead to a cost-benefit analysis regarding various measurement strategies (e.g., standardized assessment vs. teacher- or parent-report). Several examples from actual program evaluations will be discussed related to these methodological challenges.
Applying Child Development to Evaluation: Incorporating Developmental Issues Into Existing Evaluation Frameworks
Tiffany Berry, Claremont Graduate University, tiffany.berry@cgu.edu
Building upon the previous two presentations, this paper will apply developmental issues of theory, methods, and design within the context of a practical program evaluation. Specifically, we will illustrate how evaluation practices change when the developmental context is considered. The six-step evaluation framework developed by the Centers for Disease Control and Prevention (CDC) will guide our discussion of salient developmental issues. The CDC six-step evaluation framework includes the following: (1) engage stakeholders, (2) describe the program, (3) focus the design, (4) gather evidence, (5) justify conclusions, and (6) share lessons learned. Building upon the CDC's evaluation framework, we will illustrate how, when, and where salient developmental issues related to theory, methods, or design (as well as measurement, assessment, ethics, data collection strategies with youth, etc.) could be meaningfully and feasibly incorporated into evaluation practice. Ultimately, we plan to demonstrate how evaluation practice improves when practitioners consider the developmental context of evaluation.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Forming a Topical Interest Group (TIG) for Internal Evaluation
Roundtable Presentation 478 to be held in Suwannee 18 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the AEA Conference Committee
Presenter(s):
Kathleen Tinworth, Denver Museum of Nature and Science, kathleen.tinworth@dmns.org
Wendy DuBow, University of Colorado at Boulder, wendy.dubow@colorado.edu
Boris Volkov, University of North Dakota, bvolkov@medicine.nodak.edu
Abstract: Recent lively discussion on the AEA listserv and amongst colleagues nationwide has solidified the suspicion that there are indeed distinct characteristics to internal evaluation. Issues of ethics, politics, and practice play out in unique and sometimes challenging ways. Many internal evaluators are 'departments of one,' and have few opportunities to address and explore their unique role. Join internal and external evaluators alike to discuss the role of internal evaluation, its strengths, weaknesses, challenges, and importance. We will discuss whether or not forming an AEA TIG would provide community, support and focus for this subset of evaluators. Because internal evaluators work in a wide variety of settings, we will explore the extent of our commonalities to see if they justify a TIG. One of the co-hosts has experience forming an AEA TIG and will share those insights as well. All interested parties are welcome to attend, no matter your perspective.
Roundtable Rotation II: Challenges and Benefits of an Internal Evaluator: Defining Roles and Responsibilities for Optimal Effectiveness
Roundtable Presentation 478 to be held in Suwannee 18 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the AEA Conference Committee
Presenter(s):
Leslie Aldrich, Massachusetts General Hospital, laldrich@partners.org
Danelle Marable, Massachusetts General Hospital, dmarable@partners.org
Erica Clarke, Massachusetts General Hospital, esclarke@partners.org
Adriana Bearse, Massachusetts General Hospital, abearse@partners.org
Abstract: This roundtable will focus on the role of an internal evaluator for a hospital-based center that supports roughly 20 community health programs and projects. The benefits and drawbacks of being an internal evaluator will be discussed, as will situations where use of an internal evaluator might be particularly beneficial for programs. Organizational context and politics play an important part in defining the role of an internal evaluator, as organizations often use evaluators as program managers, technical experts, or community liaisons. Issues addressed will include: roles and responsibilities; objectivity; flexibility and structure; relationship building and staff acceptance; funding and cost effectiveness; and capacity building. Participants will be encouraged to discuss the pros and cons of their own experiences as internal evaluators, and will be asked to think critically about the qualities of successful internal evaluators and how to manage and negotiate conflicts that arise.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Using Evaluation to Enhance Program Implementation and Effectiveness in the National Institutes of Health (NIH) Context
Roundtable Presentation 479 to be held in Suwannee 19 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Health Evaluation TIG and the Government Evaluation TIG
Presenter(s):
Deshiree Belis, National Institutes of Health, belisd@mail.nih.gov
Rosanna Ng, National Institutes of Health, ngr@mail.nih.gov
James Peterson, National Institutes of Health, petersonjm2@mail.nih.gov
Linda Piccinino, National Institutes of Health, piccininol@mail.nih.gov
Madeleine Wallace, National Institutes of Health, wallacem2@mail.nih.gov
Abstract: The purpose of evaluation at NIH includes assessing progress toward achieving program objectives, and examining a broad range of information on program performance and its context. The diversity of objectives, scientific scope and funding levels at NIH's 27 institutes and centers creates its own set of evaluation challenges. Biomedical research programs take time to produce outcomes, and measuring progress in science can be difficult. Applying evaluation methodologies to some programs therefore requires added planning and strategic thinking. The Evaluation Branch at NIH strives to enhance program implementation and effectiveness by sharing technical expertise and practical experience with the NIH evaluation community. The presentation will cover examples of methodologies used for different types of evaluations, such as needs assessments, process and outcome evaluations, and feasibility studies. The aim is to use context-sensitive evaluation to foster accountability and transparency in program implementation, and to disseminate actionable evidence to policymakers and stakeholders.
Roundtable Rotation II: Framing Contextual Issues in an Outcome Monitoring Project: The Role of Process Monitoring
Roundtable Presentation 479 to be held in Suwannee 19 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Health Evaluation TIG and the Government Evaluation TIG
Presenter(s):
Elizabeth Kalayil, MANILA Consulting Group Inc, ehk2@cdc.gov
Tobey Sapiano, Centers for Disease Control and Prevention, gvf8@cdc.gov
Andrea Moore, MANILA Consulting Group Inc, dii7@cdc.gov
Ekaterine Shapatava, Northrop Grumman, fpk7@cdc.gov
Tanesha Griffin, Northrop Grumman, tgg5@cdc.gov
Gary Uhl, Centers for Disease Control and Prevention, gau4@cdc.gov
Abstract: The Centers for Disease Control and Prevention (CDC) is conducting the Community-based Organizations Behavioral Outcome Project (CBOP), a longitudinal outcome monitoring study on three group-level evidence-based HIV prevention interventions (EBIs) being implemented nationally by CDC directly-funded community-based organizations (CBOs). This roundtable will focus on the experiences and challenges with process monitoring in an outcome monitoring project. Process monitoring through CBOP has highlighted contextual issues including variation in the delivery of EBIs in real-world settings, which may or may not influence behavioral outcomes. Participants will discuss the extent to which process monitoring should be incorporated in studies such as CBOP. This dialog will contribute to the knowledge base in the field of evaluation by providing a forum for evaluators to exchange ideas on the optimal ways for collecting process monitoring data and how these data can be used to interpret behavioral outcomes, which are key factors in determining intervention effectiveness.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Who Sets the Goals for K-12 Instructional Coaching Programs? Evaluating State, District and School Influences on the Implementation and Impact of a School-based Coaching Program
Roundtable Presentation 480 to be held in Suwannee 20 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Mary Jean Taylor, MJT Associates Inc, mjstaylor@aol.com
Veronica Gardner, MJT Associates Inc, v@veronicagardner.com
Abstract: Instructional coaching and mentoring are strategies that have inherent appeal and would appear to be relatively easy to implement. In fact, many districts across the country have implemented coaching programs under a variety of titles, but with vague goals. The work of coaches is often viewed as hard to quantify and too far removed from students to link coaching and student achievement. This discussion explores some of the ways that state, district, school and coach influences either support and reinforce goals related to student learning or redirect and subvert the potential for change. The discussion is based on data from a three-year evaluation of an instructional coaching program in a large western U.S. school district. The program was eliminated when the economic situation changed, but the vestiges that remain are decentralized and relatively isolated from the influence of the district administrative structure. Even though changes in state funding and policy resulted in program instability and undermined the potential for impact, the evaluation detected positive relationships between coaching and student achievement in the elementary schools.
Roundtable Rotation II: Needs-Based Professional Development: Evaluating the Effects of Response to Intervention (RtI) Training Among Coaches and School Psychologists
Roundtable Presentation 480 to be held in Suwannee 20 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Amanda March, University of South Florida, amandamarch8@hotmail.com
Kevin Stockslager, University of South Florida, kstocksl@mail.usf.edu
Abstract: Evaluation of processes used to enhance student outcomes in schools is essential to determine which activities should be supported, changed, or terminated within the educational system. National policies such as the No Child Left Behind Act (NCLB) of 2002 and the reauthorization of the Individuals with Disabilities Education Improvement Act (IDEIA) of 2004 require school staff to use a Response to Intervention (RtI) framework of service delivery to enhance outcomes for all students. However, a significant amount of professional development (PD) for school staff, such as psychologists and RtI coaches, is required to successfully implement RtI processes. This evaluation utilizes fundamentals of both the management-oriented and the participant-oriented approaches to evaluate the PD offered to school psychologists and RtI coaches in one Florida school district. This Roundtable will discuss the outcomes of the evaluation, how findings were used to enhance PD activities, and implications for future PD evaluations in schools.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluating a Longitudinal Group-Randomized Sexual Abstinence Program: Approach and Challenges
Roundtable Presentation 481 to be held in Suwannee 21 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Ann Peisher, University of Georgia, apeisher@uga.edu
Virginia Dick, University of Georgia, vdick@cviog.uga.edu
Katrina Aaron, Augusta Partnership for Children Inc, kaaron@augustapartnership.org
Robetta McKenzie, Augusta Partnership for Children Inc, rmckenzie@augustapartnership.org
Abstract: This community-based evaluation research seeks to determine the effects of a comprehensive saturation approach to reducing premarital sexual activity and pregnancy among middle school youth. Research suggests that broader scope in abstinence programming and more rigorous evaluation can yield significant change (Hauser, 2004). While this evaluation-intensive initiative offers increased scope and a more rigorous evaluation, it is recognized that conducting evaluation research in the real world poses challenges. The evaluation issues being addressed in this five-year research effort include securing adequate groups for sampling; monitoring program and initiative fidelity; maintaining timely, accurate data collection for a longitudinal design; and planning and ensuring quality statistical analysis that accounts for varying group comparability, potential attrition, and missing data. Working collaboratively with the community partners to design and implement the initiative and the evaluation research can improve the capacity of the evaluation team to handle these challenges. This session will describe the approach and evaluation tools used, discuss the challenges, and entertain questions and suggestions from peer evaluators.
Roundtable Rotation II: Is Anyone Listening? Evaluating an At-risk Youth Mentoring Program
Roundtable Presentation 481 to be held in Suwannee 21 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Corina Owens, University of South Florida, cowens@coedu.usf.edu
George MacDonald, University of South Florida, macdonal@coedu.usf.edu
Abstract: Middle school students, specifically at-risk middle school students, need encouragement and guidance in navigating the treacherous terrain of adolescence. One way of assisting students through this difficult time in their lives is to involve them in mentoring programs with carefully selected and properly trained adults from the surrounding community. This paper describes the methods used to evaluate the effectiveness of a newly constituted mentoring program utilizing the Model for Collaborative Evaluations (MCE; Rodriguez-Campos, 2005). The MCE focuses on a set of six interactive components that encourage and promote collaboration among all evaluation team members. A collaborative approach, specifically the MCE, was chosen to actively engage stakeholders in the evaluation process in order to transform this formative evaluation into a learning and growing experience to better serve at-risk youth in a middle school setting.

Session Title: Advancing the Research and Culturally Responsive Evaluation Enterprise in Historically Black Colleges and Universities (HBCU) for Global Justice
Think Tank Session 482 to be held in Wekiwa 3 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Leona Johnson, Hampton University, leona.johnson@hamptonu.edu
Discussant(s):
Stella Hargett, Morgan State University, evaluations561@aol.com
Marie Hammond, Tennessee State University, mshammond@cpsy.com
Ruth Greene, Johnson C Smith University, rgreene@jcsu.edu
Kevin Favor, Lincoln University, kfavor@lincoln.edu
Warren Gooden, Cheyney State University, doctorayo@comcast.net
Abstract: This Think Tank moves forward the capacity-building initiative commenced by the American Evaluation Association and supported substantially by the National Science Foundation. Under a planning grant, six HBCUs directed an assessment of each campus community's available pool of talent, interest, resources, support, needs, and expectations; the assessment proved valuable in identifying the degree of preparedness and the challenges to be faced in actualizing the models deemed most attractive by campus collaborators. Five contextual questions that encapsulate the concerns tapped by the planning effort will be posed to five groups of attendees. Each group will be facilitated with the intent of reporting promising ideas for 1) maximizing collaborations within and across institutions, 2) identifying who should receive professional development, 3) recruiting and retaining targeted students, 4) infusing cultural knowledge and familiarity into the institutional community, and 5) addressing gate-keeping courses and administrative challenges.

Session Title: Context, Culture and Evaluation: Ethics, Politics, and Appropriateness
Multipaper Session 483 to be held in Wekiwa 4 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Paula Bilinsky, Independent Consultant, pbilinsky@hotmail.com
Discussant(s):
Paula Bilinsky, Independent Consultant, pbilinsky@hotmail.com
Evaluation Across Cultures: Implication for the Ethical Conduct of Evaluations
Presenter(s):
Paul Stiles, University of South Florida, stiles@fmhi.usf.edu
Roger Boothroyd, University of South Florida, boothroy@fmhi.usf.edu
Catherine Batsche, University of South Florida, cbatsche@fmhi.usf.edu
Abstract: As a result of the more global and cross-cultural nature of evaluations, ethical complexities have arisen. Ethical principles from 'western' research and evaluation typically have been adopted and applied globally and cross-culturally within nations. Within this context, there is debate regarding whether a Universalist or Relativist approach to research and evaluation ethics should be adopted. The proposed paper will explore these issues further, address whether the Universalist and Relativist approaches can be integrated simultaneously within a set of ethical principles, and conclude with a set of recommendations for evaluation practice that will further this debate. Discussions of these cross-contextual and cross-cultural issues are critical for the evaluation profession as globalization increases and evaluators must address such complexities with greater frequency.
Capturing the Meaning of Context for a Meaningful Evaluation
Presenter(s):
Thereza Penna-Firme, Cesgranrio Foundation, therezapf@uol.com.br
Vathsala Stone, University at Buffalo - State University of New York, vstone@buffalo.edu
Ana Carolina Letichevsky, Cesgranrio Foundation, anacarolina@cesgranrio.org.br
Angela Cristina Dennemann, Associação Educational e Assistencial Casa do Zezinho, angeladann@gmail.com.br
Abstract: That an individual evaluee is a unique composition of attributes and needs is commonly understood. What is less obvious is that programs, too, have complex personalities of their own, like the individuals that comprise them. Programs draw their needs from their own contexts; and derive meaning from an evaluation operating in these unique contexts. What is meaningful to one program may not be so for another. Evaluation is influenced by the program's context. Reciprocally, it can influence the context, too. The challenge is to fully uncover the context and its needs so the results are meaningful to the program in its sphere of influence. Three types of evaluation contexts from the Brazilian experience will be illustrated and discussed in this paper - a highly visible social program with strong political context; a low visibility program with an unclear context; and a program context with two simultaneously active components.
Negotiating Context-Appropriate Evaluation Methodology, Methods and Tools Between Western Donors and African Evaluators
Presenter(s):
Jerushah Rangasami, Impact Consulting, jerushah@impactconsulting.co.za
Anthony Gird, Impact Consulting, antgird@telkomsa.net
Abstract: The significance of the evaluation context in Africa, and more specifically South Africa, is often highlighted through interaction with donors from developed/Western countries. Donor agencies often have specific, well-grounded research design and methodological expectations based on their European or American experience. Yet in the developing context of South Africa, these specific approaches may not be feasible. In local townships and rural areas issues such as low English proficiency, low literacy levels, poor facilities and lack of infrastructure would render many traditional, well-intentioned scientific approaches deficient. This paper uses specific examples, based on experience in South Africa, to demonstrate the importance of considering local contexts before attempting unrealistic design and methodological approaches. It also stresses the need for compromise from all stakeholders involved in the research design, with a view to producing high quality evaluation research within the constraints of the context.
Translating General Essential Monitoring and Evaluation Competencies and Challenges to an International Context
Presenter(s):
Donna Podems, ICF Macro, donna.r.podems@macrointernational.com
Abstract: This article provides research findings that are part of a much larger evaluation research study that examined nonprofit and donor monitoring and evaluation (M&E) processes and the related evaluator roles in the developing world. While various evaluation research studies identify potential key competencies and general issues for evaluators working in the United States and other Western contexts, this research explores how that general knowledge translates to a specific nonprofit's donor-driven monitoring and evaluation environment in a developing country. While the research is limited to a donor-dependent NPO in a developing country, the findings could apply to other international or domestic settings.

Session Title: Involving Stakeholders in Evaluations: Alternative Views
Multipaper Session 484 to be held in Wekiwa 5 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Wes Martz, Kadant Inc, wes.martz@gmail.com
The Influence of Context and Rationale on Stakeholder Selection in Participatory Evaluation
Presenter(s):
Randi Nelson, University of Minnesota, palmfam@comcast.net
Abstract: This paper presents dissertation research on stakeholder selection in participatory evaluation, based on interviews with 17 practicing evaluators in the U.S. and Canada. Results indicated that stakeholder selection was influenced by the evaluator's rationales for conducting participatory evaluation and by multiple aspects of program and evaluation context. Evaluators were motivated by multiple, rather than single, rationales for stakeholder participation, including pragmatic, utilization, empowerment, and transformative goals. In this study, evaluators with pragmatic rationales, including utilization, were more likely to restrict stakeholder selection to program staff and managers. Evaluators with empowerment or transformative rationales also included program beneficiaries and community members on evaluation teams. In spite of these general patterns, the variability in evaluation team composition among cases with identical rationales indicates that context factors also influenced stakeholder selection. The paper identifies ten context factors that interacted with rationale to influence stakeholder selection, including organizational goals and culture, evaluation resources, and stakeholder attributes.
Using Participatory Impact Pathway Approach in the Context of Water Use Management Projects in Colombia
Presenter(s):
Diana Cordoba, International Center for Tropical Agriculture, d.cordoba@cgiar.org
Boru Douthwaite, International Center for Tropical Agriculture, b.douthwaite@cgiar.org
Sophie Alvarez, International Center for Tropical Agriculture, b.sophie.alvarez@gmail.com
Abstract: Evaluating water use management projects requires a holistic approach that takes into account the vision of the actors involved as well as the context in which they perform. The Participatory Impact Pathway Approach (PIPA) starts with a participatory workshop in which different stakeholders make explicit their program theory by describing their impact pathways. In this paper we compare and contrast the use of PIPA in the evaluations of two Challenge Program on Water and Food (CPWF) projects in Colombia. In both evaluations the initial impact pathways derived from PIPA workshops are compared with actual outcomes. This paper argues that the use of PIPA facilitates rapid and well-organized evaluation, contributing to: a) understanding the contexts in which projects operate and their influence on project outcomes, and b) empowerment of stakeholders.
Improving Stakeholders' Capacity to Track Their Own Program Data and Create Surveys: Building Capacity By Creating Useful Tools for Stakeholders
Presenter(s):
Kelci Price, Chicago Public Schools, kprice1@cps.k12.il.us
Abstract: Stakeholders tend to have a diverse set of needs with which evaluators may be expected to assist, but in practice evaluators seldom have all the resources (whether human or fiscal) to support stakeholders across all areas. This presentation provides concrete examples of tools developed by the Chicago Public Schools' Department of Program Evaluation to help stakeholders build their capacity to track program data and monitor program performance. The discussion addresses the context which led to the need for these tools, how the tools were developed, the specific content of the tools, and how the tools are integrated into evaluation services in a way designed to build stakeholder and organizational capacity.
How Analogies Can Save the Day When it Comes to Explaining Difficult Statistical Concepts To Stakeholders
Presenter(s):
Pablo Olmos, Mental Health Center of Denver, antonio.olmos@mhcd.org
Kathryn DeRoche, Mental Health Center of Denver, kathryn.deroche@mhcd.org
Christopher McKinney, Mental Health Center of Denver, christopher.mckinney@mhcd.org
Abstract: This presentation describes our experiences in presenting complex statistical terms to stakeholders with minimal background in statistics. We describe the importance of relating statistical terms/concepts to some terms that the stakeholders can understand (like IQ) and provide specific examples of how we have used some of these analogies to describe very complex concepts in statistics and measurement. We describe some of the graphical approaches (schemas, diagrams, charts) and simple tables we have created to describe some of the outcomes of our evaluation efforts, and describe our experiences with those tools. Finally, we describe some of our successes and our experiences with analogies that can be taken too far.
Building Program Capacity for Evaluating Tuberculosis Surveillance Data Accuracy
Presenter(s):
Lakshmy Menon, Centers for Disease Control and Prevention, hua2@cdc.gov
Kai Young, Centers for Disease Control and Prevention, deq0@cdc.gov
Lori Armstrong, Centers for Disease Control and Prevention, lra0@cdc.gov
Valerie Robison, Centers for Disease Control and Prevention, vcr6@cdc.gov
Abstract: Data accuracy is essential for effective program management. The goal of this evaluation is to gain insight into factors affecting the accuracy of tuberculosis data collection at the local and state program level, and to build program capacity for ongoing monitoring and evaluation of data collection processes. In-depth interviews and discussions with local stakeholders (data entry, data collection, and clinical personnel) will be conducted to document data collection activities in local and state programs, and data transfer to the national surveillance system. This step establishes a common understanding of the data collection process for all participants, and provides the foundation for evaluation. Hands-on data abstraction from primary data sources with program staff and evaluators provides opportunities for collaboration and increases buy-in. This paper will share lessons learned and demonstrate how this process engages stakeholders in evaluation and builds their capacity for ongoing data quality improvement at the local level.

Session Title: New Evaluation Techniques For Estimating the Impacts of Science and Technology Innovations
Multipaper Session 485 to be held in Wekiwa 6 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Research, Technology, and Development Evaluation TIG and the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Jerald Hage, University of Maryland, hage@socy.umd.edu
Abstract: Government concern about demonstrating the value of investments in science and technology has been heightened by the current economic crisis. This panel presents three novel approaches for evaluating the returns on investment in research and technology development from three distinct government agencies, two in the United States and one in Canada. Together these papers illustrate the importance of developing new measures for the benefits of science and technology (S&T) innovations that move beyond the traditional economic measures of the dollar value of improved productivity and revenue from sales. The methods also address health, environmental, security, and knowledge benefits in quantitative as well as qualitative ways, and get at intermediate impacts as well as global impacts, which helps attribute benefits to specific S&T programs.
A Credible Approach to Benefit-Cost Evaluation for Federal Energy Technology Programs
Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
Rosalie Ruegg, TIA Consulting, ruegg@ec.rr.com
This paper describes a methodology that improves upon an already credible approach developed for a 2001 National Research Council study, "Energy Research at DOE: Was It Worth It?" Three benefit-cost studies using this modified approach will be completed by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy in 2009. The economic performance metrics calculated are net benefits, the benefit-cost ratio, and the internal rate of return. Benefits and costs for selected technology "winners" are calculated relative to the next best alternative. Additionally, an innovative "cluster approach" is used that compares the benefits of larger elements of a program to the investment costs of the entire program. Environmental and security benefits are also assessed, as are knowledge benefits. In contrast to the 2001 NRC study, the modified approach requires a case-by-case assessment of the array of ways in which additionality can occur, that is, the difference that DOE made in the outcome.
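The three metrics named above follow standard definitions. As a minimal sketch, not drawn from the DOE studies themselves and using entirely hypothetical cash flows and an assumed discount rate, they might be computed as follows:

# Minimal sketch of the three economic performance metrics named above:
# net benefits (discounted benefits minus discounted costs), the benefit-cost
# ratio, and the internal rate of return. All figures are hypothetical.

def npv(rate, flows):
    """Net present value of a yearly cash-flow stream (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def irr(flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return via bisection on the NPV sign change,
    assuming costs come early and benefits later (NPV falls as rate rises)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical program: costs up front, benefits accruing in later years.
costs    = [100.0, 20.0, 20.0, 0.0, 0.0, 0.0]
benefits = [0.0,   10.0, 40.0, 60.0, 60.0, 60.0]
rate = 0.07  # assumed discount rate

net_flows = [b - c for b, c in zip(benefits, costs)]
net_benefits   = npv(rate, net_flows)
bc_ratio       = npv(rate, benefits) / npv(rate, costs)
rate_of_return = irr(net_flows)

print(f"Net benefits: {net_benefits:.1f}")
print(f"Benefit-cost ratio: {bc_ratio:.2f}")
print(f"Internal rate of return: {rate_of_return:.1%}")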
Techniques for Evaluating Potential Benefits of New Scientific Instruments
Jonathan Mote, University of Maryland, jmote@socy.umd.edu
Aleia Clark, University of Maryland, alclark@socy.umd.edu
Jerald Hage, University of Maryland, hage@socy.umd.edu
This paper proposes a technique for evaluating the potential impacts of new scientific instruments in a way that avoids the pitfalls of "economic-only" cost-benefit analysis and meets the needs of the customer organization and Congress. The evaluation is intended to convince Congress to fund a new suite of instruments, the Hyperspectral Sounder (HES), on a new weather satellite to be launched by the National Oceanic and Atmospheric Administration (NOAA). The evaluation will focus on improvements in warning time for severe weather events using more localized forecasting. More warning time will result in lives saved and reduced health consequences from sudden decreases in air quality. The context of the evaluation requires dealing with how the collection of regional weather data can be made compatible with current collection systems, and, of course, the question of what evidence can be marshaled for a system that is not yet operational.
A New Evaluation Strategy for Measuring the Returns on Investments in Medical Research: The Meso Level of the Treatment Sector
Jerald Hage, University of Maryland, hage@socy.umd.edu
Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
Typically, health evaluations are conducted at either the micro level of a particular treatment or the macro level of a series of health benefits. With a small grant from the Canadian Academy of Health Sciences, we developed a new strategy that allows for the synthesis of evaluations of research findings from a variety of studies, treatment sector by treatment sector. The specific metrics of the framework are 1) health care impact by stage in the treatment process; 2) research investment by arena within the production of medical knowledge in the specific treatment sector; 3) contributions to scientific knowledge; 4) network gaps in the production of innovative treatment protocols; and 5) economic and social benefits of medical research. Two unusual features are the recognition that the advantages of alternative kinds of research can be estimated, and attention to the potential 'valley of death' in the transfer of medical research into health care products.

Session Title: Contextualizing the Evaluand: Planning and Implementing an Evaluation of the Injury Control Research Center (ICRC) Program
Multipaper Session 486 to be held in Wekiwa 7 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Sue Lin Yee, Centers for Disease Control and Prevention, sby9@cdc.gov
Discussant(s):
Thomas Bartenfeld, Centers for Disease Control and Prevention, tbartenfeld@cdc.gov
Abstract: The context of a program and perspectives of key stakeholders introduce a set of assumptions that influence the planning and implementation of an evaluation and ultimate utility of the evaluation findings. In evaluations of research and technology programs, systematic attention toward these assumptions yields an evaluation that thoughtfully addresses competing realities of different contexts. In 2008, CDC's National Center for Injury Prevention and Control (NCIPC) conducted a portfolio evaluation of 12 Injury Control Research Centers (ICRC) using the CDC Framework for Program Evaluation, a utilization-focused planning tool. The evaluation team will discuss the dilemmas and resulting solutions that arose from addressing the myriad of contexts in stakeholder engagement, clarifying the evaluation focus, data collection and analysis, and communicating findings. In closing, we will offer lessons learned that will be insightful for any evaluator of research and technology seeking to maximize the utility of their evaluation.
Negotiating Diverse Contexts and Expectations in Stakeholder Engagement
Sue Lin Yee, Centers for Disease Control and Prevention, sby9@cdc.gov
In most evaluations, the contexts and perspectives of key stakeholders overlap and often compete with one another. Conducting an evaluation that is meaningful and useful to all stakeholders requires an understanding of each stakeholder's expectations from the beginning and a willingness to revisit them throughout the evaluation. In the ICRC Portfolio Evaluation, the primary stakeholders are the university grantees conducting research, training, and coordination of injury activities, and the funder, CDC's National Center for Injury Prevention and Control (NCIPC). Secondary stakeholders also play an important role in assessing the program and providing recommendations. To negotiate these diverse contexts and expectations, the ICRC Portfolio Evaluation Workgroup was established to guide the planning, implementation, and use of evaluation findings. This presentation will describe the perspectives of the major stakeholders and the contexts in which they operate and offer strategies for sustained interaction, despite the reality of varied priorities and power differentials.
Clarifying the Evaluation Focus in a Complex Program Context
Howard Kress, Centers for Disease Control and Prevention, hak6@cdc.gov
This presentation describes the iterative and dynamic processes undertaken to focus the purpose of the ICRC Portfolio Evaluation. Specifically, we developed the following tools to guide our understanding of the program context and that of the key stakeholders: (1) a hierarchical tree of the evaluation questions, (2) a program conceptual model, and (3) two logic models that describe the ICRC program. We will describe the iterative process of developing, vetting, and validating the evaluation questions with the logic models, and discuss the manner in which these tools laid the groundwork for subsequent phases of the evaluation. The presentation will close with lessons learned that should be helpful for other research and technology evaluations seeking to clarify their evaluation focus as well as negotiate complex program context.
Considering Context in Data Collection and Analysis
Jamie Weinstein, MayaTech Corporation, jweinstein@mayatech.com
The ICRC Portfolio Evaluation Team addressed the context and perspectives of the stakeholders in identifying the most appropriate data collection and analysis methods. The complex nature of the program context seemed best explored through qualitative data collection methods. The evaluation employed a four-phase data collection approach, in which each phase was designed to meet the specific needs of the evaluation and maximize the utility of the findings. Qualitative data were collected through site visits to two centers, teleconference interviews with each of the twelve participating centers, and teleconference interviews with past and current CDC staff. At every stage of data collection and analysis, iterative analyses ensured that the evaluation questions and purpose linked back to the evaluation goals and the needs of the key stakeholders. Challenges faced during the data collection process will be discussed and lessons learned will be shared.
Contextual Influences and Constraints on Communicating Findings
Kristianna Pettibone, MayaTech Corporation, kpettibone@mayatech.com
A critical component of any evaluation is to share findings with stakeholders. The ICRC Portfolio Evaluation involved multiple stakeholders who brought an array of perspectives on how the findings should be shared. As the funder, CDC's National Center for Injury Prevention and Control provided the primary context for determining the utility of the evaluation, which initially was to produce an internal report for documenting accountability and identifying areas for program improvement. Involvement of stakeholders such as the ICRC directors and CDC staff introduced another set of contextual assumptions that influenced decisions related to sharing findings. Finally, in conducting the evaluation and identifying recommendations for program improvement, we discuss other potential uses of the evaluation findings. This presentation examines the influence and constraints that key stakeholders introduced on sharing findings and proposes strategies for evaluators of research and technology on managing these competing contexts.

Session Title: Using Qualitative Methods to Evaluate Military Family Support Programs
Multipaper Session 487 to be held in Wekiwa 8 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Qualitative Methods TIG
Chair(s):
Rhoda Risner, United States Army, rhoda.risner@conus.army.mil
Conducting a Qualitative Implementation Study of a Collaborative Demonstration: The Case of the Military Spouse Career Advancement Accounts Demonstration
Presenter(s):
Heather Zaveri, Mathematica Policy Research Inc, hzaveri@mathematica-mpr.com
Linda Rosenberg, Mathematica Policy Research Inc, lrosenberg@mathematica-mpr.com
Abstract: Through the Military Spouse Career Advancement Accounts Demonstration, a collaboration of the U.S. Departments of Labor and Defense, eligible military spouses can obtain an account worth up to $6,000 for education and technical training received over a two-year period. Eight states received a grant to implement the demonstration. In turn, One-Stop Career Centers of the participating local workforce investment areas and participating military bases identified and enrolled spouses and managed their accounts. Mathematica Policy Research, Inc. conducted an implementation study of the demonstration that involved interviews and focus groups with demonstration staff from the workforce and military partners in all states. This presentation will use this study as a case study for conducting qualitative data collection and analysis of a collaborative effort between new partners. Study findings will be used to illustrate our process for ensuring the collection of high-quality data and conducting systematic analysis of data on collaborations.
Using Phenomenological Approach to Evaluate a Military Family Support Program: The Case of Operation- Military Kids
Presenter(s):
James Edwin, University of Alaska Anchorage, afjae1@uaa.alaska.edu
Abstract: This study utilized a qualitative methods research paradigm to evaluate the impact of the deployment of National Guard and Reserve military personnel on their children and the supportive role of the Operation: Military Kids (OMK) program in cushioning the parent-child separation stressors resulting from deployment. Using a phenomenological approach, data were gathered through three child-centered focus group discussions and two adult-centered focus group discussions. Results from the qualitative analysis revealed that family members were the most immediate and effective support system available to the children. However, OMK was an important support system in coping and adjusting to deployment stressors. Children qualitatively recognized the role of the OMK program in helping to alleviate their stress. The study also showed that, in addition to social support programs that address children's needs, 'suddenly military children' need interventions that specifically address the development of effective coping skills.

Session Title: Climate Change Mitigation and the World Bank
Panel Session 488 to be held in Wekiwa 9 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Cheryl Gray, World Bank, cgray@worldbank.org
Abstract: This session assesses the impact of three World Bank Group-supported activities on reducing greenhouse gas (GHG) emissions. Two of these activities - support for industrial energy efficiency and for solar power in China - had explicit GHG reduction goals. The third - support for protected areas in tropical forests - was motivated by biodiversity concerns but offers lessons for the emerging agenda on reducing emissions from deforestation.
Can Financial System Innovations Promote Energy Efficiency? An Impact Analysis for China
Hiroyuki Hatashima, International Finance Corporation, hhatashima@ifc.org
The International Finance Corporation (the private sector arm of the World Bank Group) launched an energy efficiency program (CHUEE) in China in 2004. The program's objective is to create commercially sustainable delivery mechanisms for financing energy efficiency projects. The evaluation focused on the efficacy and impact of the program, using a quasi-experimental design that compared participating and nonparticipating financial institutions.
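The abstract does not spell out the estimator behind the quasi-experimental comparison. Purely as an illustration, a difference-in-differences-style before-and-after comparison of participating and nonparticipating financial institutions might be set up as in the sketch below, with all lending figures hypothetical:

# Illustration only: the CHUEE evaluation's actual estimator is not described
# in the abstract. This sketches a simple difference-in-differences-style
# comparison of energy-efficiency lending before and after the program, for
# participating versus nonparticipating institutions. All figures are hypothetical.

# Average energy-efficiency loan volume per institution (arbitrary units).
participants    = {"before": 12.0, "after": 30.0}
nonparticipants = {"before": 11.0, "after": 16.0}

change_participants    = participants["after"] - participants["before"]
change_nonparticipants = nonparticipants["after"] - nonparticipants["before"]

# The difference in changes is attributed to the program, under the strong
# assumption that both groups would otherwise have followed parallel trends.
program_effect = change_participants - change_nonparticipants
print(f"Estimated program effect on lending volume: {program_effect:+.1f}")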
Assessing a Program to Promote the Diffusion and Adoption of Solar Photovoltaic Home Systems in Rural China
Fan Zhang, World Bank, fzhang1@worldbank.org
This presentation will sketch a general model of alternative investment instruments and technology adoption; discuss empirical strategies for evaluating technology diffusion effects; and apply the model to an evaluation of the Renewable Energy Development project, which used incentives and grants with the goal of reducing the cost and increasing the quality of solar photovoltaic systems.
Do Protected Areas Protect Areas? A Global Analysis of the Impact of Forest Protection on Deforestation
Kenneth Chomitz, World Bank, kchomitz@worldbank.org
Andrew Nelson, Independent Consultant, dr.andy.nelson@gmail.com
Tropical deforestation accounts for about one fifth of global GHG emissions, and policy attention now focuses on finding instruments to reduce these emissions. However, there is a long and largely unevaluated track record of trying to reduce deforestation as a means of protecting biodiversity. This impact evaluation applied statistical methods to remote sensing data on forest cover and forest fires to assess the impact of forest protection on reduced deforestation. The analysis controls for confounding effects, such as the tendency for protected areas to be set up in remote regions with lower pressure for deforestation.
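The abstract does not name the specific estimator used. As one hedged illustration of controlling for confounders such as remoteness, protected forest cells could be compared with observably similar unprotected cells via nearest-neighbour matching; all variable names and values in the sketch below are hypothetical:

# Illustrative sketch only: the study's exact method is not specified in the
# abstract. Compare deforestation in protected forest cells with that in
# observably similar unprotected cells (nearest-neighbour matching on
# confounders such as remoteness and slope). All data are hypothetical.
import math

# Each record: (protected?, distance_to_road_km, slope_deg, deforested 0/1)
cells = [
    (True,  45.0, 12.0, 0), (True,  30.0,  8.0, 0), (True,  10.0,  3.0, 1),
    (True,   6.0,  2.0, 0), (False, 44.0, 11.0, 0), (False, 28.0,  9.0, 1),
    (False,  9.0,  2.0, 1), (False,  5.0,  1.0, 1),
]

protected   = [c for c in cells if c[0]]
unprotected = [c for c in cells if not c[0]]

def covariate_distance(a, b):
    """Distance in roughly rescaled covariate space (road distance, slope)."""
    return math.hypot((a[1] - b[1]) / 10.0, (a[2] - b[2]) / 5.0)

# For each protected cell, find the most similar unprotected cell and take
# the difference in deforestation outcomes; average those differences.
effects = []
for p in protected:
    match = min(unprotected, key=lambda u: covariate_distance(p, u))
    effects.append(p[3] - match[3])

att = sum(effects) / len(effects)  # average effect of protection on the treated
print(f"Estimated effect of protection on deforestation: {att:+.2f}")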

Session Title: Evaluators: It's all in the Method!
Multipaper Session 489 to be held in Wekiwa 10 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Melinda Davis, University of Arizona, mfd@email.arizona.edu
Techniques for Successful Longitudinal Street Outreach Evaluations
Presenter(s):
Jeanine Hanna, Advocates for Human Potential Inc, jhanna@ahpnet.com
Abstract: Maintaining follow-up rates, a key component of any longitudinal evaluation, is discussed frequently in the evaluation literature. However, achieving successful follow-up rates within the context of street outreach programs can be especially problematic and deserves special attention. Meaningful evaluations of street outreach programs must adapt to the unique program structure and the transient nature of the population being served. This presentation will include 1) a discussion of the characteristics of street outreach programs that pose particular difficulties for evaluators, including the transient nature of the homeless population; 2) a discussion of how traditional follow-up techniques are limited in scope; and 3) suggested techniques for maintaining high follow-up rates in evaluations of street outreach programs, such as maintaining frequent contact with the street outreach team and making periodic visits to local shelters and other places frequented by the target population.
Using Participatory Evaluation to Enhance Collaboration: Changing Evaluation Strategies at the Center for Ocean Science Education Excellence-Southeast (COSEE-SE)
Presenter(s):
Jonathan Rauh, University of South Carolina, wjrauh@gmail.com
Ching Ching Yap, Savannah College of Art and Design, ccyap@mailbox.sc.edu
Abstract: With increased emphasis on science education, many professional development programs focus on involving research and applied scientists in the curriculum delivery process. Sponsored by the National Science Foundation, the Center for Ocean Science Education Excellence-Southeast (COSEE-SE) is a regional site of COSEE. COSEE-SE offers such professional development programs to facilitate collaboration between research scientists and science educators to improve the ocean science literacy of diverse populations. Traditionally, COSEE-SE utilized a participant-observer model to evaluate the effectiveness of the collaborative nature of those professional development programs. Although positive feedback and evaluation results were obtained, limited diagnostic information regarding collaboration strategies was available to inform collaboration between scientists and educators. With the intent of gaining additional insights into effective collaboration strategies in this context, this study employs participatory evaluation. By using this approach, relevant stakeholders are engaged throughout the evaluation process to continuously examine the effectiveness of collaboration strategies.
Individual vs. NVivo: A Comparative Analysis of Qualitative Analysis Methodologies
Presenter(s):
Jennifer May, University of South Carolina, jennygusmay@yahoo.com
Robert Petrulis, University of South Carolina, petrulis@mailbox.sc.edu
Abstract: Many universities provide access to NVivo software to assist in the analysis of qualitative data. The experience of working with qualitative data led this researcher to ask in what ways the use of qualitative analysis software might influence analysis results. Does NVivo change researchers' perceptions and analysis of the data, and if so, how? This study systematically compares research summaries of qualitative data written by several qualitative researcher-participants with varying degrees of experience. Researchers were assigned to a research methodology group: NVivo or manual analysis. After analyzing one transcript using their assigned methodology, researchers analyzed a second transcript using the alternative method. All researcher-participants received training in the NVivo program and were directed to analyze transcripts for the same purpose. Summaries were then compared to determine whether the content reported or excluded differed between methods of analysis. Interviews were conducted to obtain insight into researchers' perceptions of the data analysis procedures.
