Session Title: Using the Program Evaluation Standards, Third Edition, to Define and Enhance Evaluation Quality
Skill-Building Workshop 362 to be held in Lone Star A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Presidential Strand
Presenter(s):
Donald Yarbrough, University of Iowa, d-yarbrough@uiowa.edu
Lyn Shulha, Queen's University at Kingston, lyn.shulha@queensu.ca
Rodney Hopson, Duquesne University, hopson@duq.edu
Flora Caruthers, Florida Legislature, caruthers.flora@oppaga.fl.gov
Abstract: Attendees will learn about and apply the newly published (August 2010) 3rd edition Program Evaluation Standards (SAGE, 2010). The Joint Committee Task Force authors, serving as session leaders, will review quality control steps taken during the standards development process, guide individuals and groups in applications of the standards, and lead discussions about how the standards define and enhance evaluation quality in specific evaluation settings. Attendees will also have the opportunity to report their own evaluation dilemmas and discuss in small and large groups how to apply the program evaluation standards to increase and balance dimensions of evaluation quality, such as utility, feasibility, propriety, accuracy, and evaluation accountability, in these settings. The workshop will deal explicitly with metaevaluation and its role in evaluation quality improvement and accountability. Attendees will receive handouts to support reflective practice in their future evaluations and evaluation-related work.

Session Title: Systems in Evaluation TIG Business Meeting and Presentation: Meet and Greet With Systems in Evaluation Authors
Business Meeting Session 363 to be held in Lone Star B on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Systems in Evaluation TIG
TIG Leader(s):
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@pathfinderevaluation.com
Margaret Hargreaves, Mathematica Policy Research, mhargreaves@mathematica-mpr.com
Mary McEathron, University of Minnesota, mceat001@umn.edu
Presenter(s):
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@pathfinderevaluation.com
Abstract: As the Systems in Evaluation TIG continues to grow and bring in new members, we strive to keep our membership informed of advances in this rapidly developing area within evaluation. In 2010, three new books by TIG members Michael Patton, Patricia Rogers, Bob Williams, and Richard Hummelbrunner were published. We invite you to meet these authors in person to hear about their new books, hear them discuss each other's work, ask questions about their ideas, and get first-hand advice on vexing systems-related problems. It is an exciting opportunity to have them all together to talk about the cutting edge of systems and evaluation.

Session Title: The Biggest Winners: Empowerment Evaluation Exercises to Strengthen Primary Prevention Capacity
Skill-Building Workshop 364 to be held in Lone Star C on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Sandra Ortega, Ohio DELTA and Rape Prevention Education, ortega.12@osu.edu
Rebecca Cline, The Ohio Domestic Violence Network, rclineodvn@aol.com
Amy Bush Stevens, Owl Creek Consulting, amybstevens@mac.com
Abstract: The Centers for Disease Control and Prevention awarded funds for both the Rape Prevention Education and DELTA initiatives to Ohio, creating a unique opportunity to develop a collaboration among the CDC, the State of Ohio Department of Health, the Ohio Domestic Violence Network (a private, non-profit organization), and local prevention service providers to increase primary prevention capacity. The presenters will share lessons learned and the empowerment evaluation exercises they developed over the past three years for increasing the primary prevention capacity of service providers and the state violence prevention coalition. These tools have served to align national, state, and local goals, objectives, and outcomes regarding violence prevention efforts. Moreover, they have worked in tandem with the Getting to Outcomes framework to increase accountability by integrating evaluation activities into service provision and state planning. Participants will have the opportunity to work with the tools to increase their collaborative evaluation capacity-building skills in this hands-on workshop.

Session Title: Non-profits & Foundations Evaluation TIG Business Meeting
Business Meeting Session 365 to be held in Lone Star D on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
TIG Leader(s):
Lester Baxter, Pew Charitable Trusts, l.baxter@pewtrusts.org
Charles Gasper, Missouri Foundation for Health, cgasper@mffh.org
Joanne Carman, University of North Carolina at Charlotte, jgcarman@uncc.edu
Helen Davis Picher, William Penn Foundation, hdpicher@williampennfoundation.org

Session Title: Fundamentals of Power Analysis and Sample Size Determination
Demonstration Session 366 to be held in Lone Star E on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Steven Pierce, Michigan State University, pierces1@msu.edu
Abstract: In quantitative studies, statistical power (the probability of detecting an effect that actually exists) is closely tied to sample size. Evaluators can use power analysis to plan what sample size should be targeted during data collection to make best use of limited evaluation resources. This introductory session will cover the fundamental concepts involved in using power analysis and describe how power analysis can be used to improve the quality of a quantitative evaluation study. It will define key terms, explain why power analysis is important, and then discuss practical issues such as how to pick a power analysis method that matches your hypotheses, how to come up with reasonable numbers to plug into power analysis formulas, and why it is important to examine how sensitive the results are to your assumptions. Some examples will be presented, and software tools and other resources will be recommended.
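
As an illustration of the kind of calculation the session describes, the following minimal sketch uses Python's statsmodels package to find the sample size per group for a two-group comparison; the effect size, alpha, and power values are placeholders chosen for the example, not figures from the presentation.
# Hypothetical power analysis for an independent-samples t-test (statsmodels).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Sample size per group needed to detect a medium effect (d = 0.5)
# with alpha = .05 and 80% power, two-sided test.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                    alternative='two-sided')
print(round(n_per_group))  # about 64 per group

# Sensitivity check: required n grows quickly as the assumed effect shrinks.
for d in (0.3, 0.4, 0.5):
    print(d, round(analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)))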

Session Title: Cluster, Multi-site, and Multi-level TIG Business Meeting and Demonstration: A Mixed Methods Approach to Measurement for Multi-site Evaluation
Business Meeting Session 367 to be held in Lone Star F on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
TIG Leader(s):
Rene Lavinghouze, Centers for Disease Control and Prevention, shl3@cdc.gov
Martha Ann Carey, Maverick Solutions, marthaann123@sbcglobal.net
Presenter(s):
Fred Springer, Evaluation, Management & Training Associates Inc, fred@emt.org
Wendi Siebold, Evaluation, Management & Training Associates Inc, wendi@emt.org
Carrie Petrucci, Evaluation, Management & Training Associates Inc, cpetrucci@emt.org
Abstract: This mixed-method approach to measurement for use in multi-site evaluations (MSEs) treats natural diversity in program context and implementation as a learning opportunity, rather than a challenge to internal validity. Capitalizing on the differences within and across multiple sites, knowledge for evidence-based practice is built inductively by measuring naturally occurring variability across sites, and using it to identify robust relationships with program effects. Multi-level analysis examines relations and interactions at and across individual, group, process, and context levels, and provides strong tests of external validity. Measurement is developed inductively, and combines the use of available and primary data collection, including observational data, semi-structured interviews, standardized instruments, and administrative data. This multi-site, multi-level approach enhances the quality of evaluation by using a “site-level protocol” with measures that are pertinent to practice. The epistemological foundation will be discussed, followed by explicit examples of how this approach is implemented from design to analysis.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Toward Universal Design for Evaluation: Continuing the Conversation
Roundtable Presentation 368 to be held in MISSION A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Special Needs Populations TIG
Presenter(s):
Jennifer Sulewski, University of Massachusetts, Boston, jennifer.sulewski@umb.edu
Abstract: Universal design refers to designing products or programs so that they are accessible to everyone. Originally conceived in the context of architecture and physical accessibility for people with disabilities, the concept of Universal Design has been adapted to a variety of contexts, including technology, education, and the design of programs and services. At Evaluation 2009, a panel presented on the idea of Universal Design for Evaluation, drawing on the panelists’ individual experiences conducting research with people with and without disabilities. As a follow-up to last year’s session, we invite this year’s conference attendees to a discussion of our collective experiences conducting evaluations with people with disabilities and other vulnerable populations. We will give a brief recap of last year’s session but plan to spend most of the session discussing promising practices, lessons learned, and what Universal Design might look like applied to the evaluation field.
Roundtable Rotation II: Special Populations: Strategies for Collecting Data, Giving Voice
Roundtable Presentation 368 to be held in MISSION A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Special Needs Populations TIG
Presenter(s):
Sheila A Arens, Mid-Continent Research for Education and Learning, sarens@mcrel.org
Andrea Beesley, Mid-Continent Research for Education and Learning, abeesely@mcrel.org
Abstract: The Guiding Principles direct evaluators to attend to differences among participants. Paying attention to diversity and actively seeking to include voices of those who may be marginalized is not, however, just a matter of abiding by the Guiding Principles; it is a matter of technical adequacy of data and hence, the validity of evaluative endeavors. Presenters will draw on their experiences collecting data from special populations. Through a series of questions and scenarios, presenters will discuss the importance of clarification of values, issues of selecting participants (and concomitant concerns about attrition), planning for accommodations to data collection instruments, and following through. This roundtable is relevant to seasoned and newer evaluators. Discussing scenarios and sharing experiences will prepare participants for evaluations including special populations.

Session Title: Historical Shifts in Evaluation Policy and Evaluation Practice: What We've Learned About Quality Evaluation
Multipaper Session 369 to be held in MISSION B on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the
Chair(s):
Ruth Anne Gigliotti, Synthesis Professional Services Inc., rgigliotti@synthesisps.com
Discussant(s):
Catherine Callow-Heusser, EndVision Research and Evaluation LLC, cheusser@endvision.net
The Interface Between Evaluation Policy, Quality Evaluation, and Mission/Service Alignment: A Comparative Analysis of Human Service Organizations
Presenter(s):
Kristin Kaylor Richardson, Western Michigan University, kkayrich@comcast.net
Abstract: Evaluation policy can play a significant role in how, when, and why an agency conducts evaluation, as well as create a context to support quality evaluation work. This paper highlights methods and findings of a systematic, empirical comparison of the nature, scope and influence of evaluation policy in multiple human service organizations. A range of qualitative methods was used to study how evaluation policy (implicit and explicit) functions in a sample of American and Canadian organizations providing child and family mental health services. Agencies were compared on a variety of dimensions, including statement of mission, vision and values, service provision and policy, and evaluation policy and practice. Implications of study findings for improving the quality of evaluation work in human service settings, as well as for future research in the area of evaluation policy and evaluation practice, will be discussed.
The Interplay of Evaluation Requirements and Political, Economic, and Technological Developments: A Historical Study of the Elementary and Secondary Education Act From 1965 to 2005
Presenter(s):
Maxine Gilling, Western Michigan University, maxine.gilling@wmich.edu
Abstract: Program evaluation does not take place in a vacuum. Instead, evaluation is influenced by a number of political, economic, and technological factors. These influences include changes in executive and legislative branch leadership, new political coalitions, shifts in ideology, and new reform movements. The major event that influenced the establishment of contemporary educational program evaluation was the passage of the Elementary and Secondary Education Act (ESEA) of 1965. Since then there have been nine major reauthorizations of ESEA. This study examines the interplay of evaluation requirements and political, economic, and technological developments through a historical study of the evolution of Title I of the Elementary and Secondary Education Act of 1965.
Importing Randomized Evaluations From Medicine to Education and International Development: Pitfalls, Policy Implications, and Recommendations
Presenter(s):
Rahel Kahlert, University of Texas, Austin, kahlert@mail.utexas.edu
Abstract: The paper analyzes how randomized evaluations spread from medicine to education and international development. Translating the principles of randomization from biological to educational and social phenomena is difficult and more complex than testing drugs. The paper has three parts: the first part refers to Evidence-Based Medicine, which made RCTs the research method of choice but introduced impartiality bias. The second part analyzes the shift toward scientifically based, randomized evaluations in U.S. education (No Child Left Behind, 2001). Third, randomized evaluations have been on the rise in international development for the last decade. The paper concludes with a comparative analysis of what role randomized evaluations play in U.S. education and international development aid, using evidence-based medicine as a backdrop. Limitations of “transferability,” policy implications, and recommendations are discussed, especially how RCTs can be more effectively translated between policy fields and how they can be strengthened by qualitative data collection and analysis.

Session Title: Independent Consulting TIG Business Meeting
Business Meeting Session 370 to be held in BOWIE A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Independent Consulting TIG
TIG Leader(s):
Frederic Glantz, Kokopelli Associates LLC, fred@kokopelliassociates.com
Rita Fierro, Independent Consultant, fierro.evaluation@gmail.com
Michelle Baron, The Evaluation Baron LLC, michelle@evaluationbaron.com

Session Title: Evaluation Quality From a Federal Perspective
Panel Session 371 to be held in BOWIE B on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Elmima Johnson, National Science Foundation, ejohnson@nsf.gov
Abstract: “Evaluation Quality” has been identified as the conference theme, with a focus on its conceptualization and operationalization. Another area of importance is evaluation utilization. This panel will discuss definitions of evaluation and “new ways of thinking about the systematic assessment of our evaluation work” from a Federal perspective, i.e., the National Science Foundation (NSF), its grantees and contractors, and the US Government Accountability Office (GAO). The utilization of a contextual/cultural perspective will be woven throughout the discussions of the various evaluation mechanisms described.
Evaluation for Science, Technology, Engineering, and Math (STEM) Education Research and Development
Bernice Anderson, National Science Foundation, banderso@nsf.gov
This presentation will focus on issues of quality in evaluation planning for STEM education research and development programs. It will also address the challenges of evaluation quality within the context of implementation programs compared to intervention projects. These insights about evaluation quality will be drawn from recent capacity building and management strategies for the planning and oversight of selected STEM education evaluations of research and development efforts funded by the National Science Foundation in response to the Administration's call for a culture of learning and strong evidence of results from the federal investment.
National Science Foundation Committee of Visitors: Evaluation by Experts
Fay Korsmo, National Science Foundation, fkorsmo@nsf.gov
Connie Kubo Della-Piana, National Science Foundation, cdellapi@nsf.gov
Each grants program at the National Science Foundation is reviewed by an external Committee of Visitors every three or four years. Applying a common set of criteria, Committees of Visitors review (a) decision processes leading to awards or declinations of research and education proposals and (b) program management. Program managers respond to the Committee of Visitors determinations, and both the Committee of Visitors reports and the program responses are made available to the public. Does the use of Committees of Visitors lead to quality evaluation? According to Averch (1994), validity of expert judgment in program evaluation is based on acceptance of expert judgment about a program and action taken based on the judgment. As a result of the action taken, benefits are realized or costs are avoided. This presentation examines Committee of Visitors reviews in light of the heightened demand for high-quality evaluation of government programs.
Evaluation Quality: Threats and Solutions
Clemencia Cosentino de Cohen, Urban Institute, ccosentino@urban.org
Evaluation rigor and quality are central to the validity of evaluation findings on which important funding and programmatic decisions are made, but researchers often face constraints that require adjustments that may threaten the quality of their work or the rigor of their designs. In this presentation, I will identify and discuss some solutions to these “threats” as well as the window of opportunity that may be created in the process and yield advances in evaluation research. Specifically, relying on evaluations completed, ongoing, and currently being designed, I will discuss three common threats to quality: cross-sectional versus longitudinal data, confidentiality-driven restrictions on information (FERPA), and a changing policy environment (using broadening participation programs as an illustration). In so doing, I will discuss how monitoring data collections and portfolio evaluations (based on strategies employed across projects, rather than individual project evaluations) may present viable solutions to the constraints just mentioned.
Comparing Quality Standards in Audit and Evaluation
Valerie Caracelli, United States Government Accountability Office, caracelliv@gao.gov
Today, in government, there is a resurgence of interest in evaluation (see 2011 Budget Perspectives and AEA’s EPTF website) and a concomitant responsibility to provide warranted conclusions on program results. In evaluation, the Joint Committee will issue the 3rd edition of the Program Evaluation Standards, which evaluators use to inform and improve their practice. Evaluators positioned in government (e.g., GAO, the IG communities, and elsewhere) must follow the Government Auditing Standards, the “Yellow Book,” now being updated. This presentation will juxtapose the two sets of standards to discuss the values of both professions and the prominence given to particular facets of quality, such as independence, cultural responsiveness, significance, and transparency, among others. In specific instances, a conceptual framework used to define how to meet a standard will be discussed. Last, the presentation examines how the performance audit and evaluation communities ultimately assure that standards of practice are being followed, via meta-evaluation and peer review.
Assessment of Federal Contractor Evaluation Services
Elmima Johnson, National Science Foundation, ejohnson@nsf.gov
In accordance with Federal Acquisition Regulation (FAR) Subpart 42.15, Contractor Performance Information, Federal agencies are required to prepare evaluations of contractor performance for each contract in excess of $100,000 at the time the work is completed, and interim reports for contracts exceeding one year. In compliance with the FAR, NSF contracting officials complete a standard contractor performance report, which solicits a rating (unsatisfactory to outstanding) and comments in areas including Quality of Product/Service, Customer Satisfaction, Contractor Key Personnel, Timeliness of Performance, and Cost Control. (This assessment is more in line with a fiscal audit and does not attend to outcomes or implications for future Federal actions in the program area being evaluated.) This presentation will discuss the role of this report in the portfolio of evaluation assessment tools utilized by the NSF regarding its definition of evaluation quality and the context in which the information provided is utilized.

Session Title: Indigenous Peoples in Evaluation TIG Business Meeting
Business Meeting Session 372 to be held in BOWIE C on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
TIG Leader(s):
Katherine A Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
Kalyani Rai, University of Wisconsin, Milwaukee, kalyanir@uwm.edu
Joan LaFrance, Mekinak Consulting, lafrancejl@gmail.com

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Designing for Change: The Experience of the Quitline Iowa Evaluation
Roundtable Presentation 373 to be held in GOLIAD on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Disa Cornish, University of Northern Iowa, disa.cornish@uni.edu
Gene Lutz, University of Northern Iowa, gene.lutz@uni.edu
Abstract: Since 2008, the Iowa Tobacco Cessation Program Evaluation has comprehensively evaluated state-funded tobacco cessation programs. One of these programs, Quitline Iowa, is a telephone-based counseling service that also offers a free two-week supply of nicotine replacement therapy (gum, patches, or lozenges) to Iowans who are trying to quit using tobacco products. The evaluation of Quitline Iowa includes two methods: follow-up interviews with Quitline Iowa callers and secret shopper calls to 1-800-QUIT-NOW. Follow-up interviews were conducted with participants 3, 6, and 12 months after their first call to Quitline Iowa. A questionnaire developed by the evaluator assessed changes in tobacco use behaviors and aspects of the callers’ experiences. In July 2010, the evaluation follow-up interview protocol will change to a protocol designed by the Centers for Disease Control and Prevention (CDC) for all state quitline evaluations to use. This presentation will discuss challenges and lessons learned through the change process.
Roundtable Rotation II: Adapting the Strategic Prevention Framework Model for Use in Suicide Prevention and Other Abbreviated Funding Cycles Benefiting From Grantee, Stakeholder, Evaluator Collaboration
Roundtable Presentation 373 to be held in GOLIAD on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Trena Anastasia, University of Wyoming, tanastas@uwyo.edu
Trish Worley, University of Wyoming, tworley1@uwyo.edu
Abstract: This session will demonstrate a process for adapting the Strategic Prevention Framework model to abbreviated grant cycles while maintaining fidelity to the process. Ideas for front-loading the needs assessment to jump-start coalition buy-in, in an effort to move toward rapid adoption of needs-based prevention strategies, will be shared. The example shown demonstrates the adaptation to a three-year grant cycle for suicide prevention in which outcome baseline measures were identified in the needs assessment phase. Bring your ideas for building community evaluation partnerships within the constraints of funding timelines to share with the group.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluating Innovation and Capacity Building in Arts Organizations: Challenges and Lessons Learned in Capturing the Complexity
Roundtable Presentation 374 to be held in SAN JACINTO on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Evaluating the Arts and Culture TIG
Presenter(s):
Mary Piontek, EmcArts Inc, mpiontek@umich.edu
Abstract: In a time of unprecedented change for arts institutions, leaders recognize that business as usual cannot assure organizational health and success in the marketplace. Thriving organizations will be those that increase their emphasis on innovation, and make the most compelling case by demonstrating creative adaptation in their thinking and nimbleness in their response to change. This session will discuss evaluation strategies being used within the evolving, unpredictable, and non-linear context of innovation to (a) support intentionality in making change; (b) document the how, when, why, and by whom changes were made; (c) critically explore the results and learnings, expected and unexpected; and (d) assist organizations in articulating how power, decision-making processes, policies, knowledge, and resources are used to promote and institutionalize innovation. This work draws upon developmental and formative evaluation practices, traditional program design and evaluation tools, and customized instruments and indicators for assessing capacity and impact.
Roundtable Rotation II: The Beauty of Internal Evaluation in the Arts: Using Metaphors and Symbols to Develop the Evaluation Capacity of the Board and Staff
Roundtable Presentation 374 to be held in SAN JACINTO on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Evaluating the Arts and Culture TIG
Presenter(s):
Kathleen Norris, Plymouth State University, knorris@plymouth.edu
Abstract: A challenge that exists within non-profit arts organizations is the development of evaluation capacity. Though the Board and staff of the organization with whom I work have the capacity to respond to requests for data from funders of discrete projects, there is a need for a larger evaluation context that will assist with strategic planning and provide information about the organization as a whole. The use of metaphors and symbols in the process of evaluation development has engaged the board and staff in new ways and has assisted in the integration of evaluation within the routine activities of the staff. In this roundtable presentation and discussion, the metaphors and symbols and the processes used within this organization will be provided, and participants will be asked to discuss whether or not they can imagine using metaphors and symbols to engage members of their organizations and, if so, what those might be.

Session Title: Evaluation Without Borders: Lessons From Other Countries
Multipaper Session 375 to be held in TRAVIS A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Quality and University Didactics: Students’ Perspectives as Cues for Evaluation
Presenter(s):
Serafina Pastore, University of Bari, serafinapastore@vodafone.it
Abstract: Following the recent implementation of various laws, Italian universities must operate under a new, more sensitive and sophisticated system of evaluation. The current framework is complex and confusing (e.g., in the definition of the evaluand), and most efforts are directed toward identifying ways and models of evaluating the learning process. However, within these evaluation models the idea of quality seems to exclude the contextual dimension, a crucial element of any educational process. Evaluating quality involves an overlap of levels: the subjective elements (expectations) and the objective ones are grounded simultaneously in reality. The consequence is a line of analysis that is undoubtedly rich and articulated for evaluation. The paper will illustrate and problematize the results of a study of final-year students’ judgments of their curricular program of study.
Conducting Meta-evaluation to Obtain Valid Information in Competency-Based Student Assessment
Presenter(s):
Victor Zvonnikov, State University of Management, zvonnikov@mail.ru
Marina Chelyshkova, State University of Management, mchelyshkova@mail.ru
Abstract: The new State Educational Standards in Russia express all requirements for the results of student training in the form of a set of competencies, so numerous competencies must be estimated during student assessment. The difficulties in such evaluation are connected with two factors that lead to inherent biases in evaluation results. First, competencies are latent variables. Second, competencies have a delayed character of observation, so valid information about students’ competencies can be obtained only in their professional activity. In connection with these two factors, research on construct and predictive validity is needed. The purpose of our research is to develop approaches to carrying out meta-evaluation that allow high validity to be achieved during graduate certification in the context of the competency approach. We suggest a method for increasing construct and predictive validity based on a special model of meta-evaluation, and we analyze the applicability of this model at the State University of Management.
Lessons Learned From an Improvement of Student Evaluations of Faculty
Presenter(s):
Yi-Hsing Chung, National Chi Nan University, yhchung@ncnu.edu.tw
Yi-Fang Lee, National Chi Nan University, ivanalee@ncnu.edu.tw
Shiuh-Sheng Yu, National Chi Nan University, ssyu@ncnu.edu.tw
Abstract: Evaluation of university teaching has been widely conducted to review instructors’ performance in the West, but it has been less popular in Asia, owing in part to the Confucian tradition of holding teachers in high respect. As the demand for accountability increases, evaluating instructors’ teaching via student ratings has become a common activity in Asian countries. The results are used not only as feedback on faculty teaching but also as information for judging tenure. Therefore, whether the design can provide reliable data is a major concern. A literature review indicated that there has been limited empirical study exploring related issues in Asia. The intent of this presentation is to introduce a process to improve the student evaluation system for instructors at a university in Taiwan and to discuss the factors that influence student ratings and the ways we used to decrease measurement error. Lessons learned are drawn from the findings.

Session Title: Evaluation Managers and Supervisors TIG Business Meeting and Panel: Reflections on Evaluation Management Expertise and Competencies From Two Perspectives
Business Meeting with Panel Session 376 to be held in TRAVIS B on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Managers and Supervisors TIG
TIG Leader(s):
Ann Maxwell, United States Department of Health and Human Services, ann.maxwell@oig.hhs.gov
Sue Hewitt, Health District of Northern Larimer County, shewitt@healthdistrict.org
Laura Feldman, University of Wyoming, lfeldman@uwyo.edu
Chair(s):
Thomas Horwood, ICF International, thorwood@icfi.com
Abstract: “Managing evaluation is an almost invisible practice, one little studied and little written about. Illuminating everyday practice and perspectives on it serves to make the taken-for-granted, the seemingly invisible and often ineffable, available” (Baizerman & Compton, 2009, p. 8). This session will feature two evaluation managers who work together to manage several evaluation studies but who represent different perspectives. One perspective (the client) is that of an evaluation manager from a public agency who oversees multiple studies simultaneously. The other perspective (the contractor) is that of an evaluation manager from a consulting firm who also manages multiple studies at the same time. Each panelist will reflect individually on the types of evaluation and management expertise and competencies. Finally, the two panelists will compare and contrast these perspectives based on their roles in their individual organizations and end with an opportunity for attendees to offer their own observations or ask questions.
Reflections on Evaluation Management Expertise and Competencies From the Perspective of Public Agency
Jennifer Broussard, Texas Education Agency, jennifer.broussard@tea.state.tx.us
This presentation will offer reflections on evaluation management expertise and competencies from the perspective of an evaluation manager at a public agency who oversees multiple studies simultaneously. Specifically, the presenter will discuss the evaluation expertise and management expertise gained through the everyday management of evaluation studies and evaluators at a public agency and how these might relate to other evaluation managers. The goals of this presentation are to illuminate some of the challenges in managing evaluation studies and ways to overcome these challenges, provide insights into managing consultants, and discuss the specific issues related to delivering the final report to the state legislature.
Reflections on Evaluation Management Expertise and Competencies From the Perspective of a Consultant
Thomas Horwood, ICF International, thorwood@icfi.com
This presentation will offer reflections on evaluation management expertise and competencies from the perspective of an evaluation manager at a consulting firm who oversees multiple studies simultaneously. Specifically, the presenter will discuss the evaluation expertise and management expertise gained through the everyday management of evaluation studies and evaluators as a contractor to a public agency and how these might relate to other evaluation managers. The goals of this presentation are to illuminate some of the challenges in managing evaluation studies and ways to overcome these challenges, provide insights into motivating evaluation staff, and discuss the specific issues related to managing the scope of the evaluation contract on time and within budget.

Session Title: Face to Face With the Authors of the Needs Assessment Kit: Challenging Questions (With a Twist) and Hopefully Meaningful Answers
Panel Session 377 to be held in TRAVIS C on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Needs Assessment TIG
Chair(s):
James Altschuld, The Ohio State University, altschuld.1@osu.edu
Discussant(s):
Hsin-Ling Hung, University of Cincinnati, hunghg@ucmail.uc.edu
Abstract: Needs Assessment (NA) is a necessary part of the process of planning, implementing, and evaluating successful programs. The Needs Assessment KIT (five integrated books on the process) was published in late 2009; its goal was to enhance the practice of NA. This panel is an opportunity to question the authors via a lively and highly interactive format, part of which will be soliciting questions and comments before the conference, as well as gathering them from the audience at the session. The discussants will use themes from these to guide the discussion; the panelists will not have access to the issues and thoughts in advance. Thus the discussion will not be scripted and will be spontaneous in nature.
A Quick Overview of the Kit
James Altschuld, The Ohio State University, altschuld.1@osu.edu
The conceptualization and structure of the KIT will be explained, all authors will be acknowledged, and books 1-3 will be described in outline form. The rationale for why a KIT was needed and how it might be utilized will be offered. A few unique aspects of what is contained in the first three books will be discussed (steps of the three phases of the NA process, the extensive glossary of NA terms, looking at a small number of techniques but in much greater depth, record-keeping strategies for accomplishments, etc.).
Pesky Analysis and Prioritization in Needs Assessment
Jeffry White, University of Louisiana, Lafayette, jlw7049@louisiana.edu
White will explain the purpose of book 4 in the KIT, with an emphasis on how multiple forms of data might be worked with and on issues related to prioritizing needs. Included in this discussion will be how to present data to make it meaningful and useful to decision-making audiences and concerned stakeholders. Analysis and prioritization are two essential components of NA and at the same time two of the most frequently glossed-over ones.
Where the Rubber Meets the Road: Taking Action for Change
Laurel Stevahn, Seattle University, stevahnl@seattleu.edu
Jean A King, University of Minnesota, kingx004@umn.edu
Needs assessments aren’t worth very much if they don’t lead to organizational actions and eventually to change and improvement. That is the essence of book 5 in the KIT. Stevahn and King will explain their rationale for it and the interesting use of what they termed ‘the double dozen’ techniques. One other important topic in the book is the evaluation of the needs assessment enterprise itself and its outcomes. Surprisingly, the literature does not contain many exemplars for what should be an incumbent act for any individual or group doing an assessment.

Session Title: Health Indicator Systems for Evaluation of Local, State, and National Chronic Disease Prevention and Control Initiatives
Multipaper Session 378 to be held in TRAVIS D on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Todd Rogers, Public Health Institute, txrogers@pacbell.net
The First Illusion of Tobacco: Monitoring and Countering Tobacco Industry Influence Through the Application of Evaluation Indicators
Presenter(s):
Erika Fulmer, Centers for Disease Control and Prevention, duj2@cdc.gov
Todd Rogers, Public Health Institute, txrogers@pacbell.net
Martha Engstrom, Centers for Disease Control and Prevention, cpu5@cdc.gov
Shanta Dube, Centers for Disease Control and Prevention, skd7@cdc.gov
Steven Babb, Centers for Disease Control and Prevention, zur4@cdc.gov
Abstract: The passage of the Family Smoking Prevention and Tobacco Control Act (FSPTCA) in 2009 expanded the ability of the federal government to regulate tobacco. FSPTCA, in coordination with existing policy, provides an opportunity for intensifying tobacco prevention and control activities focused on reducing tobacco industry influences on tobacco use initiation and cessation. The CDC Office on Smoking and Health (OSH) is working with state and national partners to reframe the application of key outcome indicators (KOI). Initially developed to help evaluate comprehensive state tobacco control programs, OSH KOI are being used to identify surveillance gaps and drive innovation in program and policy practices. In this presentation we describe the collaborative process for reassessing the state of tobacco control science, and illustrate feasible approaches for employing indicators to assess the impact of federal, state, and local efforts to reduce tobacco industry influence.

Session Title: Quality by Design: Statewide Human Services Workforce Evaluation Using an Integrated Framework
Panel Session 379 to be held in INDEPENDENCE on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Chris Mathias, California Social Work Education Center, cmathias@berkeley.edu
Discussant(s):
Todd Franke, University of California, Los Angeles, tfranke@ucla.edu
Abstract: This state's university and human services agency partnership is a consortium of the state's schools of social work, public human service agencies, and other related professional organizations. It facilitates the integration of education and practice to assure effective, culturally competent service delivery in the human services. The partnership's goals are to: re-professionalize public human services through a specialized education program for public human services; develop a continuum that connects pre-service education to in-service training; engage in research and evaluation to develop evidence-based practices; and, finally, advocate for responsive policies and resources to support practice improvement and client outcomes. Evaluations from three of the partnership's programs will be presented. Plans for integrating the evaluations, using theoretical constructs and longitudinal design as guiding principles, will be discussed, with the goal of improving the ability to better assess the impact of these programs on practice and client outcomes.
Evaluating a Statewide Public Child Welfare Education Program
Susan Jacquet, California Social Work Education Center, sjacquet@berkeley.edu
Elizabeth Gilman, California Social Work Education Center, egilman@berkeley.edu
The child welfare educational stipend program evaluation addresses the goal of recruiting and preparing a diverse group of social workers for professional careers in public human services, with child welfare emphasis through several research questions: 1. Is the curriculum being delivered as intended? 2. Are the students learning the curriculum? 3. To what extent are the graduates able to practice what they learned within the public child welfare agencies? 4. Do graduates remain in public child welfare? 5. What effects, if any, has the project had on the public child welfare agencies and workforce? 6. Does the program have effects on child and family outcomes? Over the last 20 years the state partnership has conducted seven targeted studies and sponsored research-based curriculum projects to address these questions and evaluate the program. The evolution of these efforts and basic findings will be presented by project research and curriculum specialists.
Statewide Evaluation of In-service Training
Barrett Johnson, California Social Work Education Center, barrettj@berkeley.edu
Leslie Zeitler, California Social Work Education Center, lzeitler@berkeley.edu
Chris Lee, California Social Work Education Center, clee07@berkeley.edu
Given the resources expended on training, a systematic approach to training evaluation is called for – one that evaluates the impact of training at multiple levels, provides data on trainee learning and transfer, and provides a structure for making specific decisions about which evaluation projects to pursue and why. Such an evaluation system requires extensive planning and a strategic approach to implementation. The first five years of a comprehensive evaluation of in-service training was recently completed for a complex state-supervised, county-administered child welfare system in a large state. This portion of the panel will present the evaluation results and outline the strategic plan for the next three-year period. In addition, we will also discuss how the strategic plan for in-service training evaluation intersects with a statewide evaluation framework involving preparatory social work education stipend programs for service in public child welfare and mental health in the same state.
Evaluation of the Mental Health Educational Stipend Program
Gwen Foster, California Social Work Education Center, gwen77f@berkeley.edu
Sevaughn Banks, California Social Work Education Center, sevaughn@berkeley.edu
A new workforce development program for mental health social work professionals was introduced in California in 2005. The Mental Health Educational Stipend Program works in partnership with key state and county mental health agencies and graduate schools of social work to: (1) build and refine a mental health core curriculum that has been implemented in every participating school and internship agency, (2) distribute funds for stipends for approximately 200 students each year who have demonstrated an interest in professional careers in public or nonprofit mental health settings, and (3) conduct process and outcome studies to improve the program and evaluate its impact on workforce quantity and quality. The presenter will discuss evaluation methods and key findings that inform the further collaborative development of this innovative program that aims to enable students from diverse backgrounds to become highly skilled, culturally competent mental health social workers.
Integration Framework
Sherrill Clark, California Social Work Education Center, sjclark@berkeley.edu
Amy Benton, California Social Work Education Center, ymanotneb@berkeley.edu
An integrated evaluation framework that incorporates the child welfare and mental health education programs and in-service training is under development to address the partnership's goals. Using relevant theories as the underpinning of the framework's basic research questions, the following questions are asked: To what extent is the curriculum being delivered to the students? To what extent are the graduates prepared to practice? What factors contribute to retention of the graduates? What are the career paths of the graduates? To what extent do the graduates perceive influence on agency, program, and policy? These questions are posed at crucial points in the graduate's career, using a pre-post comparison group design administered in graduate school, during new-worker core training, and at 3, 6, and 10 years post-graduation. The intervals were chosen based on a survival analysis of 416 graduates who have been in the workforce from 5 to 15 years.

Session Title: Assessing Change Over Time
Multipaper Session 380 to be held in PRESIDIO A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Wendy Garrard, University of Michigan, wgarrard@umich.edu
Event History Analysis: Modeling Occurrences of Events Over Time
Presenter(s):
Blair Stephenson, Los Alamos National Laboratory, blairs@lanl.gov
Christine Starr, Los Alamos National Laboratory, cstarr@lanl.gov
Melissa Schaum-Nguyen, Los Alamos National Laboratory, mschaum@lanl.gov
Abstract: The present study demonstrates the use of Event History Analysis (EHA). Originally developed in the biostatistics arena (as survival analysis), EHA offers a viable methodology for understanding both the timing and etiology of qualitative outcomes in the social sciences. We demonstrate the use of a related set of techniques in the context of modeling the voluntary (non-retirement) attrition of scientists over two decades. In contrast to traditional methods (e.g., logistic regression), EHA models time to the event of interest, considers information from censored observations, and allows for the inclusion of both time-invariant (e.g., gender) and time-varying covariates (e.g., salary). We discuss basic dataset design, exploratory techniques, and popular approaches such as Cox regression, along with assumptions and alternatives such as Competing Risks Regression (CRR), which allows for an accounting of multiple possible outcomes in competition with the primary outcome of interest.
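
For readers unfamiliar with these techniques, the sketch below fits a Cox proportional hazards model in Python with the lifelines package on simulated attrition data; the variable names and values are hypothetical and are not drawn from the study described above.
# Simulated time-to-attrition data and a Cox regression (lifelines).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
female = rng.integers(0, 2, n)                               # time-invariant covariate
salary = rng.normal(80, 10, n)                               # starting salary, in $1,000s
time = rng.exponential(8, n) * np.exp(0.02 * (salary - 80))  # higher salary -> longer tenure
censor = rng.uniform(5, 20, n)                               # administrative censoring times

df = pd.DataFrame({
    "years": np.minimum(time, censor),                       # observed duration
    "exit": (time <= censor).astype(int),                    # 1 = voluntary exit observed, 0 = censored
    "female": female,
    "salary": salary,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="exit")          # censored cases still contribute information
cph.print_summary()                                          # hazard ratios, confidence intervals, p-values
Time-varying covariates and competing-risks models, also mentioned in the abstract, require a long-format dataset and specialized estimators beyond this sketch.
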
The Use of Piecewise Growth Models to Estimate a Staggered Interrupted Time Series
Presenter(s):
Keith Zvoch, University of Oregon, kzvoch@uoregon.edu
Joseph Stevens, University of Oregon, stevensj@uoregon.edu
Drew Braun, Bethel School District, dbraun@bethel.k12.or.us
Abstract: The proposed paper describes the use of piecewise growth models as a means for estimating intervention outcomes associated with a complex interrupted time series (ITS) design. The demonstration utilizes literacy data obtained on elementary school students in the Pacific Northwest. During the course of one school year, weekly literacy assessments were administered and supplemental instructional interventions were delivered to students as a means to facilitate the attainment of literacy benchmark goals. However, the timing of treatment was not constant as the onset and duration of particular instructional supplements were purposely differentiated by student. To illustrate the challenges and opportunities associated with the evaluation of staggered ITS designs, a series of multilevel growth models are presented. The demonstration shows that multilevel modeling techniques provide a flexible and powerful approach for capturing the complex structure of individualized treatment regimes while simultaneously documenting the immediate and more distal responses to intervention.
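
As a rough illustration of the modeling strategy described above, the sketch below fits a two-piece (piecewise) growth model with a random intercept per student using Python's statsmodels; the simulated data, variable names, and staggered intervention onsets are hypothetical, not the authors' literacy data.
# Piecewise growth model for a staggered interrupted time series (statsmodels MixedLM).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for student in range(60):
    onset = rng.integers(6, 12)              # week the supplemental intervention began (staggered)
    ability = rng.normal(0, 4)               # student-specific intercept
    for week in range(20):
        pre = week                           # time metric for baseline growth
        post = max(0, week - onset)          # time since this student's intervention began
        score = 20 + ability + 0.8 * pre + 1.5 * post + rng.normal(0, 2)
        rows.append({"student": student, "pre": pre, "post": post, "score": score})
df = pd.DataFrame(rows)

# The coefficient on "pre" estimates the baseline slope; the coefficient on "post"
# estimates the added growth per week after each student's intervention onset.
model = smf.mixedlm("score ~ pre + post", df, groups=df["student"])
print(model.fit().summary())
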
The Z-Kids: What Happens to Individual Clients Over Time? Outcomes With Clinical, Program, and Evaluation Salience
Presenter(s):
Richard Wood, Pima Prevention Partnership, rwood@thepartnership.us
Judith Francis, Pima Prevention Partnership, jfrancis@thepartnership.us
Abstract: This paper describes the use of a single-subject design (SSD, n=1) for measuring individual client outcomes over time using the ipsative Z test developed by Mauser, Yarnold, and Foy (1991) for autocorrelated data. This measure was tested using Global Appraisal of Individual Need (GAIN) data for 613 youth in outpatient drug treatment. The study examined changes over time (baseline, 3, 6, and 12 months) for substance use, self-reported criminal behavior, and emotional problems. The result was identification of individual clients who significantly improved, significantly deteriorated, or displayed no significant change over time for these GAIN outcomes. This approach yields two advantages. First, it can be used to test causation between an intervention and individual client change. Second, it provides clinicians with information during an intervention that can be used to modify programs to better serve client needs. The paper includes the SPSS syntax used to compute the ipsative Z score for each individual.
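
To convey the single-subject logic, the sketch below computes a simplified per-client standardized change score in Python; it is not the autocorrelation-adjusted ipsative Z of Mauser, Yarnold, and Foy (1991), nor the paper's SPSS syntax, and the scores shown are invented.
# Simplified per-client change score: last wave minus baseline, scaled by the
# client's own variability across waves (a stand-in for the ipsative idea).
import numpy as np

def simple_change_z(scores):
    """scores: one client's repeated measures, baseline first (e.g., baseline, 3, 6, 12 months)."""
    scores = np.asarray(scores, dtype=float)
    change = scores[-1] - scores[0]              # 12-month score minus baseline
    spread = np.std(scores, ddof=1)              # within-client variability
    return change / spread if spread > 0 else np.nan

# Hypothetical substance-use frequency scores for one client across four waves.
print(simple_change_z([18, 14, 9, 6]))           # large negative value suggests improvement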

Session Title: A Systems Approach to Building and Assessing Evaluation Plan Quality
Panel Session 381 to be held in PRESIDIO B on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Systems in Evaluation TIG
Chair(s):
Jennifer Urban, Montclair State University, urbanj@mail.montclair.edu
Discussant(s):
William M Trochim, Cornell University, wmt1@cornell.edu
Abstract: The Cornell Office for Research on Evaluation (CORE) uses a systems-based approach to program evaluation and planning that is operationalized through the Systems Evaluation Protocol (SEP). The SEP has been developed, tested, and refined through “Evaluation Partnerships” established with forty education programs in two contexts: Cornell Cooperative Extension, and Outreach Offices in NSF Materials Research, Science and Engineering Centers. Drawing on the SEP, evaluation theory, and experience with these Partnerships, CORE’s concept of evaluation plan quality emphasizes the quality of the program model underlying the plan; how well an evaluation “fits” the program; and the “internal alignment” of the evaluation plan. The panel presents our definition of evaluation plan quality, tools we have developed to begin to assess quality, how we operationalize and observe the development of quality in the Evaluation Partnerships, and education research on the importance of inquiry-based approaches to learning that are embedded in the Evaluation Partnerships.
The Systems Evaluation Protocol and Evaluation Plan Quality: Introduction and Definition
Monica Hargraves, Cornell University, mjh51@cornell.edu
Margaret Johnson, Cornell University, maj35@cornell.edu
The Systems Evaluation Protocol (SEP) brings a particular mix of systems thinking, complexity theory, evolutionary theory and natural selection, developmental theory and evaluation theory to the process of developing program models and evaluation plans. These shape key steps in the Protocol and yield essential elements in the development of a high-quality program model and evaluation plan. The SEP’s definition of evaluation plan quality emphasizes: • consistency with a high-quality program model (grounded in program knowledge, stakeholder perspectives, program boundaries, and underlying program theory); • how well the evaluation questions and evaluation elements “fit” the program (consistent with program context and lifecycle stage, internal and external stakeholder priorities, and priorities yielded by the program theory itself); and • the “internal alignment” of the evaluation plan (the extent to which the measurement, sampling, design, and analysis components of the plan support each other and the stated evaluation purpose and evaluation questions).
Capturing Quality: Rubrics for Logic Models and Evaluation Plans
Margaret Johnson, Cornell University, maj35@cornell.edu
Wanda Casillas, Cornell University, wdc23@cornell.edu
A notable challenge in evaluation, and particularly systems evaluation, is finding concrete ways to capture and assess quality in program logic models and evaluation plans. This presentation will describe how evaluation quality is measured in the ongoing evaluation of the Evaluation Partnership (EP), a multi-year, systems-based approach to capacity building. The development of logic model and evaluation plan rubrics for assessing quality has been funded by the National Science Foundation as part of a research grant. One of the primary aims of the research is to assess whether the SEP is associated with enhanced logic model and evaluation plan quality. This presentation focuses on how three aspects of quality (richness of program model, fitness of evaluation questions, and alignment of the plan’s evidence framework) are operationalized in our rubrics for logic models and evaluation plans. Ways of capturing the value-added of a systems-based approach to capacity building will be explored.
Inquiry in Evaluation: Connecting Capacity Building to Education Research
Jane Earle, Cornell University, jce6@cornell.edu
Thomas Archibald, Cornell University, tga4@cornell.edu
In building evaluation capacity through the Systems Evaluation Protocol (SEP), a key goal is to teach people how to ask questions. This includes questions about the program as expressed in formal Evaluation Questions, but also foundational questions about program boundaries, stakeholders, and program and evaluation lifecycles that bring people to a deeper understanding of their program and the systems in which they are embedded. In the field of education, “inquiry” refers to a pedagogical method in which students are provided with frequent opportunities to practice posing questions and strategize methods for investigating possible answers. A significant amount of research has been done on how to best facilitate the inquiry process. This presentation will explore the synergy between work in inquiry-based learning and the SEP’s approach to evaluation capacity building. The goal is to establish best practices for helping program implementers become better questioners, investigators, and evaluators.
Early Indications of Process Use Outcomes Associated With Evaluation Planning Through the Systems Evaluation Protocol
Thomas Archibald, Cornell University, tga4@cornell.edu
Jane Earle, Cornell University, jce6@cornell.edu
Monica Hargraves, Cornell University, mjh51@cornell.edu
The Systems Evaluation Protocol (SEP) lays out specific, systems-based steps that internal program evaluators complete as they define and model their programs and develop evaluation plans. Although high-quality evaluation plans are a primary goal, we have found that participants have experienced benefits that they see as valuable even before getting to the evaluation planning step. The steps of the SEP that seem critical come early in the process: the stakeholder analysis, program review, and program boundary analysis steps. These outcomes, or “‘Aha!’ moments,” are valuable in their own right, independent of their potential role in assuring high-quality evaluation plans, a novel example of process use. This presentation uses qualitative data to characterize and document “‘Aha!’ moments” among SEP participants. Patton’s enjoinment to consider process use as a sensitizing concept (New Directions for Evaluation volume 116 [2007]) offers a promising framework to help understand and contribute to these outcomes.

Session Title: Data for All: Democratizing Data Without Compromising Quality
Demonstration Session 382 to be held in PRESIDIO C on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the
Presenter(s):
Sarah Kohler Chrestman, Louisiana Public Health Institute, skohler@lphi.org
Lisanne Brown, Louisiana Public Health Institute, lbrown@lphi.org
Abstract: As the greater New Orleans (GNO) region continues to rebuild, the need for democratized data at the neighborhood level continues to grow. Rather than efforts being maximized, there is a great deal of duplication of data collection because agencies, organizations, and residents are unaware of what is already available. The Orleans Neighborhood Health Implementation Plan (ONHIP) is working to improve the availability of data through the development and use of a public website with neighborhood-specific data and interactive mapping and query capability. This presentation will discuss which data sources are publicly available, the issues to consider when democratizing data, and the benefits and challenges of doing so. Data quality is of the utmost importance, as poor data can cause more harm than good. Participants will learn methods, including education, for ensuring that data quality is maintained and for reducing opportunities to misrepresent data.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluation Goes to College: The Collaborative Evaluation of a Graduate Program
Roundtable Presentation 383 to be held in BONHAM A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Seriashia Chatters, University of South Florida, schatter@mail.usf.edu
EunKyeng Baek, University of South Florida, ebaek@mail.usf.edu
Thanh Pham, University of South Florida, tvpham2@mail.usf.edu
Yvonne Hunter, University of South Florida, yohunter@mail.usf.edu
Abstract: A need was identified at a large, public, US university to determine the level of satisfaction of students and faculty in a Counselor Education graduate program. There is a wealth of research available regarding the evaluation of K-12 programs. However, there is limited information regarding the evaluation of higher education programs, especially evaluations applying a collaborative approach. This evaluation was conducted to identify students' and faculty's satisfaction levels with the graduate program and to recognize the differences between evaluating a K-12 program and evaluating a graduate program. In order to identify the needs and concerns of all relevant stakeholders, a Collaborative Evaluation approach was utilized. We will discuss how the processes and procedures of the collaborative approach were implemented and address the strengths and challenges of utilizing this method in evaluating a graduate program.
Roundtable Rotation II: Working Together to Design Effective Evaluation Tools
Roundtable Presentation 383 to be held in BONHAM A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Rebeca Diaz, WestEd, rdiaz@wested.org
Abstract: This presentation will discuss a collaborative approach to developing effective evaluation instruments with key stakeholders carrying out a federal education grant. The main goals of the grant are to increase teacher content knowledge in U.S. history, enhance teacher practice, and increase student learning. The presenter has seven years of experience evaluating these federal grants designed to provide professional development for history teachers, and continues to explore new methods to effectively measure teacher outcomes. The evaluation approach, which consists of both qualitative and quantitative methods, is collaborative in nature. The evaluator employs an approach that involves not only project leaders but also the teachers involved in the program.

Session Title: San Antonio River Improvements Project: Field Trip to Ecosystem Restoration Sites
Demonstration Session 384 to be held in OFF SITE FIELDTRIP on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Environmental Program Evaluation TIG
Presenter(s):
Annelise Carleton-Hug, Trillium Associates, annelise@trilliumassociates.com
Abstract: This demonstration session offers an opportunity to learn about the massive multi-agency river restoration efforts underway on the San Antonio River just south of downtown San Antonio. The session includes a site visit and walking tour of the Eagleland and Mission Reach portions of the river that were previously channelized for flood control. The current project is restoring a more natural channel morphology and native plants. The tour will be led by a specialist from the San Antonio River Authority, and discussions will include the pre-project assessment, project goals, and evaluation and monitoring plans. Additional topics will be the challenges of ecosystem restoration and conservation at the urban interface and of addressing recreational uses. Space for this demonstration/field trip is limited.

Session Title: Education Evaluation: Connecting Professional Development to Changes in Classroom Practice
Multipaper Session 385 to be held in BONHAM C on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Bianca Montrosse,  Western Carolina University, bianca.montrosse@gmail.com
Discussant(s):
Susan Connors,  University of Colorado, Denver, susan.connors@ucdenver.edu
Applying Guskey’s Model for Evaluating Professional Development to a Math and Science Partnership Program: Successes and Challenges in Collecting Data Across Schools and Grade Levels
Presenter(s):
Carol Haden, Magnolia Consulting LLC, carol@magnoliaconsulting.org
Jane Kirkley, Northern Arizona University, jane.kirkley@nau.edu
Abstract: This paper describes the successes and challenges of applying Thomas Guskey’s 5-level model for evaluating professional development to a program funded by Arizona’s Math and Science Partnership program. Professional development was provided to two cohorts of K-8 teachers in Northern Arizona across two years of the project. Program goals were to increase participants’ science content knowledge while also building skills in effective science pedagogy. Evaluation activities and data collection were particularly successful for understanding participant reactions (Level 1), participant learning (Level 2), and participant use of knowledge and skills (Level 4). Participants represented many different schools, and grade levels varied from kindergarten to middle school. These unique contextual factors led to challenges in evaluating organizational support and change (Level 3) and student learning outcomes (Level 5). We describe data collection activities at each level and how lessons learned in year one informed revisions to evaluation activities for year two.
Using the Transtheoretical Model of Readiness for Change to Evaluate the True Impact of Evidence-based Professional Development in the K-12 Setting
Presenter(s):
Christa Smith, Kansas State University, christas@ksu.edu
Katherine Sprott, Kansas State University, krs8888@ksu.edu
Abstract: For professional development interventions in the K-12 setting that are delivered in varying degrees, a holistic program evaluation that measures true attitudinal and behavioral change at the individual and systems levels is necessary. The Transtheoretical Model (TTM) of stages of change (Prochaska and DiClemente, 1983, 1984, 1986) was applied as a framework to assess the impact of evidence-based professional development interventions for K-12 school personnel. This presentation will demonstrate the use of the TTM framework as illustrated through the program evaluation of evidence-based interventions implemented by a US Department of Education funded Equity Assistance Center. The presentation will discuss the use of the TTM framework in applying relevant metrics to measure the stages of attitude, knowledge, and behavior change and their relations to appropriate demographic variables and to the ultimate equity program outcome of establishing race, gender, and national origin equity in school environments.
Determining the Validity of Teacher Self-Reports as a Cost-Effective Strategy to Evaluate Teacher Performance
Presenter(s):
Kasey McCracken, David Heil & Associates Inc, kmccracken@davidheil.com
Gina Magharious, David Heil & Associates Inc, gmagharious@davidheil.com
Joe Sciulli, National Science Teachers Association, jsciulli@nsta.org
Abstract: This paper reports findings from a validation study of teacher-reported use of instructional strategies, based on independent classroom observations. The validation study was undertaken to support the evaluation of the Mickelson ExxonMobil Teachers Academy — a professional development experience serving approximately 500 teachers annually that is designed to improve elementary school teachers’ use of inquiry-based instructional strategies for teaching science. The evaluation of the Academy includes measuring changes in teachers’ reports of their instructional strategies from before to after their participation in the Academy and one year later. During Spring 2009 and 2010, independent classroom observations were conducted for a sample of 48 teachers. Teachers’ ratings of their instructional strategies were correlated with observers’ ratings. However, teachers tended to provide more favorable ratings of their instructional strategies than did the independent observers. Findings from the validation study inform strategies for designing quality educational evaluations that balance research validity and cost effectiveness.
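As a rough illustration of the kind of comparison the abstract describes (not the study's actual analysis), the sketch below pairs simulated teacher self-ratings with simulated observer ratings on the same scale, then checks both agreement in rank order (Pearson correlation) and a systematic difference in level (paired t-test). All data and variable names are hypothetical.

# Illustrative sketch: comparing self-reports with observer ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
observer = rng.normal(3.0, 0.6, size=48)             # hypothetical observer ratings
teacher = observer + rng.normal(0.4, 0.5, size=48)   # self-reports, slightly inflated

r, p_corr = stats.pearsonr(teacher, observer)         # agreement in rank order
t, p_diff = stats.ttest_rel(teacher, observer)        # systematic difference in level

print(f"correlation r = {r:.2f} (p = {p_corr:.3f})")
print(f"mean self-report minus observer = {np.mean(teacher - observer):.2f} (p = {p_diff:.3f})")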
Qualitative Analysis of Changes in Teachers' Knowledge, Beliefs and Classroom Practices Based on Three Years of Professional Development
Presenter(s):
Carol Baldassari, Lesley University, baldasar@lesley.edu
Sabra Lee, Lesley University, slee@lesley.edu
Rosalie Torres, Torres Consulting Group, rosalie@torresconsultinggroup.com
Abstract: This presentation details analysis and reporting methods for teacher case studies conducted as part of NSF’s Mathematics and Science Partnership Program. This program funds partnerships between universities and school districts to improve teacher quantity and quality. We conducted in-depth case studies of four mathematics teachers, focusing on their participation in an immersion program of mathematics professional development and their subsequent transfer of learning to the classroom. The data collected, over 1.5 to 2.5 years, included observations of their professional development sessions, review of papers they wrote, interviews with both the teachers and the faculty who taught them, and classroom observations. The data analysis and case study writing methods to be presented helped reveal the significant impact of contextual factors (such as school- and/or district-level circumstances, and teachers’ backgrounds and experiences) that bear substantially on teacher learning from professional development and how they are able to transfer it to the classroom.

Session Title: Addressing Schools as Organizations in Educational Evaluation
Multipaper Session 386 to be held in BONHAM D on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Diane Binder,  The Findings Group LLC, diane.binder@thefindingsgroup.com
Discussant(s):
Chad Green,  Loudoun County Public Schools, chad.green@loudoun.k12.va.us
Documenting Patterns in Teachers' Relationships: Their Impact on K-12 Education
Presenter(s):
Kathy Gullie, State University of New York at Albany, kp9854@albany.edu
Abstract: When looking at the impact of teachers' professional development on student outcomes, there appears to be an evaluation gap between documenting involvement in professional development and documenting outcomes that represent the acquisition of skills and the transference of those skills to the real world. Current leadership models that stress a stakeholder-based approach to establishing peer-to-peer relationships suggest a possible solution for developing evaluation criteria that will help bridge this gap. The purpose of this paper is to report on successful methods of documenting K-12 teacher interactions and their relationship to instructional practices and students' subsequent outcomes. Findings are based on the results of two consecutive, three-year Mathematics and Science Partnership grants that utilized leadership theory for pattern matching when analyzing case study, interview, observation, and focus group data.
The Use of School Climate Data for School Improvement
Presenter(s):
Sarah Gareau, South Carolina Educational Policy Center, gareau@mailbox.sc.edu
Diane Monrad, University of South Carolina, dmonrad@mailbox.sc.edu
John May, University of South Carolina, mayjr@mailbox.sc.edu
Karen Price, South Carolina Educational Policy Center, pricekj@mailbox.sc.edu
Diana Mindrila, South Carolina Educational Policy Center, 
Ishikawa Tomonori, University of South Carolina, ishikawa@mailbox.sc.edu
Abstract: Previous research suggests that school climate data can be very useful in understanding the complex dynamics of the relationships between organizational-level contexts and evaluation outcomes. While measures of school success are essential for schools to show progress under state and federal accountability requirements, assessing school climate as a critical element of school improvement has received only passing interest from policy makers. The purpose of the current collaborative work is the analysis of 2008 and 2009 school climate surveys and the development of 4-year school climate profiles (2006-2009) focused on low-performing schools, including a 4-year comparison of mean factor scores by organizational level, percentile ranks of survey factor scores by organizational level, and item-level percentage agreement indices. A discussion of the profiles’ development, meaning, and use for evaluation quality provides a practical application of school climate data.
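A minimal sketch of two of the profile elements mentioned above, assuming a tidy table of school climate factor scores with columns "year", "org_level", "school", and "factor_score": it computes mean factor scores and within-level percentile ranks by year. The column names and data are hypothetical, not drawn from the South Carolina surveys.

# Illustrative computation of mean factor scores and percentile ranks by level.
import pandas as pd

climate = pd.DataFrame({
    "year":         [2006, 2006, 2006, 2007, 2007, 2007],
    "org_level":    ["elementary", "elementary", "middle"] * 2,
    "school":       ["A", "B", "C", "A", "B", "C"],
    "factor_score": [2.8, 3.4, 3.1, 3.0, 3.6, 3.2],
})

means = climate.groupby(["year", "org_level"])["factor_score"].mean()
climate["pct_rank"] = (climate.groupby(["year", "org_level"])["factor_score"]
                              .rank(pct=True) * 100)
print(means)
print(climate)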
From Monitoring to Evaluation and Back Again: Implications for Organizational Leadership, Budget, and Success
Presenter(s):
Lisa Schmitt, Austin Independent School District, lschmitt@austinisd.org
Karen Cornetto, Austin Independent School District, kcornett@austinisd.org
Lindsay Lamb, Austin Independent School District, lindsay.lamb@austinisd.org
Abstract: Over the past decade, the Austin ISD Board of Trustees adopted a policy governance management model, under which district administration reported monthly the district’s status on indicators viewed as evidence that policies outlining student expectations and district operations were implemented. This piecemeal monitoring approach, however, led the Board to request a new format that considered a holistic picture of performance at each level (elementary, middle, and high). The transition from performance monitoring towards a systemic academic program evaluation resulted in a unique collaboration among evaluators, district administrators, and campus administrators who provided the Board an integrated, user-friendly report describing internal research on what matters most to student achievement at each level, evidence on programs that influence what matters, and plans for addressing what matters most effectively in the future. This session outlines the challenges and rewards of transitioning from monitoring to evaluation, and discusses the necessary relationship between the two.
The Utility of Situation Models for Capturing the Present State of School-Wide Initiatives
Presenter(s):
Chad Green, Loudoun County Public Schools, chad.green@loudoun.k12.va.us
Abstract: Honig’s (2008) model of central offices as learning organizations suggests that administrators can cultivate six different school capacities regardless of the school’s level of engagement in teaching and learning practices (from expert to novice). These six capacities were aligned with the NSDC’s (2001) context standards to develop an exploratory conceptual framework that guided the collection, analysis, and interpretation of data from two school-wide initiatives at different stages of implementation. Each initiative served as a leverage point for documenting the coherence and alignment of the school’s program practices within the framework. The resulting situation models (Kintsch, 1988; van Dijk and Kintsch, 1983) revealed different patterns of interrelationships for each school that corresponded to their stage of program implementation.

Session Title: Evaluating the Science of Discovery in Complex Health Systems: Challenges and Opportunities
Panel Session 387 to be held in BONHAM E on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Alison Buchan, University of British Columbia, abuchan@medd.med.ubc.ca
Discussant(s):
Alison Buchan, University of British Columbia, abuchan@medd.med.ubc.ca
Abstract: Complex health problems such as chronic disease or pandemics require knowledge that transcends disciplinary boundaries in order to generate solutions. Such transdisciplinary discovery requires researchers to work and collaborate across boundaries, combining elements of basic and applied science. At the same time, calls for more interdisciplinary health science acknowledge that there are few metrics to evaluate the products associated with these new ways of working. The Research on Academic Research (RoAR) initiative was established to evaluate the process of discovery and the impact of collaboration that emerged through the Life Sciences Institute at the University of British Columbia, a state-of-the-art facility designed to support researchers self-organized around specific health problems rather than disciplines. A logic model depicting the factors influencing such collaboration is presented, along with a multi-method evaluation plan to assist understanding of the discovery process in this new environment and to develop new metrics for assessing collaborative impact.
An Evaluation Framework for Advancing the Science of Evaluating Team Science: The Research on Academic Research Initiative (RoAR)
Cameron Norman, University of Toronto, cameron.norman@utoronto.ca
Timothy Huerta, Texas Tech University, tim.huerta@ttu.edu
Sharon Mortimer, Michael Smith Foundation for Health Research, smortimer@msfhr.org
Allan Best, Michael Smith Foundation for Health Research, allan.best@in-source.ca
Alison Buchan, University of British Columbia, abuchan@medd.med.ubc.ca
Background: In 2006 the University of British Columbia opened the Life Sciences Institute (LSI), the first building of its size at UBC developed to support cross-disciplinary bio-sciences team research. The Research on Academic Research (RoAR) initiative was launched in 2007 to serve as a platform for conducting exploratory research and evaluation of the effects that the co-location of previously disparate researchers and institutional policies have on the organization and output of scientists. Methods: A logic model was developed to guide a multi-method evaluation that aimed to assess the outcomes associated with the re-organization of the scientists from academic department buildings into integrated, problem-based research groups. These methods included a survey of psychosocial issues and social networks, an examination of publication and grant application patterns, measures of physical proximity, and investigator interviews. Discussion: Evaluation of team science activities requires a strategy that can address process and outcomes from multiple perspectives.
Advancing the Science of Evaluating Team Science: Psychosocial Factors and Related Outcomes From the RoAR Initiative
Cameron Norman, University of Toronto, cameron.norman@utoronto.ca
Timothy Huerta, Texas Tech University, tim.huerta@ttu.edu
Sharon Mortimer, Michael Smith Foundation for Health Research, smortimer@msfhr.org
Alison Buchan, University of British Columbia, abuchan@medd.med.ubc.ca
Background: Much of the research in basic science is investigator-driven and favors individual scientists or small groups. Team science is a different orientation, requiring new skills and knowledge. A survey was developed to assess investigators' level of comfort and skills in working in a team science environment over four years. Method: Investigators with the Life Sciences Institute at the University of British Columbia were surveyed annually on their attitudes, knowledge, collaborative skills, and the perceived benefits and risks associated with various models of research. Conclusions: The overall attitudes, skills, and knowledge about how to do team science, and its perceived value to academic work, changed considerably over four years. Dr. Norman will draw on his background in health behavior change, systems science, and evaluation research to discuss the measurement challenges in evaluating team science.
Advancing the Science of Evaluating Team Science: Social Network Outcomes From the RoAR Initiative
Timothy Huerta, Texas Tech University, tim.huerta@ttu.edu
Cameron Norman, University of Toronto, cameron.norman@utoronto.ca
Sharon Mortimer, Michael Smith Foundation for Health Research, smortimer@msfhr.org
Alison Buchan, University of British Columbia, abuchan@medd.med.ubc.ca
Background: The shift from individual or small-group research to team science models requires shifts in social interaction patterns. Social network analysis can help evaluate these shifts. Method: In annual surveys over four years, participants were asked to identify the investigators in the Life Sciences Institute (LSI) with whom they interacted, the nature of those interactions, and the degree to which they influenced their work. Results: An increase in the absolute number of collaborations within the LSI was observed. While overall network density increased and the frequency of inter-departmental and inter-research-group interaction grew, the size of these increases suggests a developmental, slow-growing process. Conclusions: Viewed through the Donabedian Structure-Process-Outcome model, after four years the researchers in the LSI show significant alterations in the structure and processes used to guide their interactions. Dr. Huerta will draw on more than a decade of experience with social network and systems research in exploring the role of social networks in evaluating team science.
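For readers unfamiliar with the measures mentioned above, the sketch below illustrates (with made-up data) how overall network density and counts of inter-departmental ties could be compared across two annual surveys using networkx; the node and department labels are hypothetical and do not represent LSI data.

# Illustrative density and cross-departmental tie counts for two survey waves.
import networkx as nx

def summarize(edges, dept):
    g = nx.Graph()
    g.add_nodes_from(dept)          # include isolates so density is comparable
    g.add_edges_from(edges)
    cross = sum(1 for u, v in g.edges() if dept[u] != dept[v])
    return nx.density(g), cross

dept = {"a": "biochem", "b": "biochem", "c": "micro", "d": "micro", "e": "physiol"}
year1 = [("a", "b"), ("c", "d")]
year4 = [("a", "b"), ("c", "d"), ("a", "c"), ("b", "e"), ("d", "e")]

for label, edges in [("year 1", year1), ("year 4", year4)]:
    density, cross = summarize(edges, dept)
    print(f"{label}: density = {density:.2f}, inter-departmental ties = {cross}")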
Advancing the Science of Evaluating Team Science: Scientometric-related Outcomes From the RoAR Initiative
Sharon Mortimer, Michael Smith Foundation for Health Research, smortimer@msfhr.org
Timothy Huerta, Texas Tech University, tim.huerta@ttu.edu
Bianca Cervantes, University of British Columbia, bcervantes@exchange.ubc.ca
Alison Buchan, University of British Columbia, abuchan@medd.med.ubc.ca
Background: The shift from individual or small-group research to team science models can be observed in the publication and grant application patterns of investigators over time. Method: A comparison of funding by research group in the Life Sciences Institute (LSI) was completed for the major Canadian sources: the Tri-Council agencies (roughly equivalent to the NIH), peer-reviewed grants from other sources, and research contracts. Results: The data demonstrated a significant increase in Tri-Council funding in all groups, while comparison of publication metrics indicated potential shifts after three years. Conclusions: Team science may be more promising as a discovery strategy within certain LSI areas. It is still too early to make definitive statements about the effect of the team science model at the LSI, with the exception of a rise in the impact of papers in certain fields. Dr. Mortimer will expand on her role as a funder and researcher to discuss team science evaluation and its implications.

Session Title: Government Evaluation TIG Business Meeting and Panel: Happy Anniversary to Us! Celebrating Twenty Years of Government Evaluation
Business Meeting with Panel Session 388 to be held in Texas A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG
TIG Leader(s):
Stanley Capela, HeartShare Human Services of New York, stan.capela@heartshare.org
David J Bernstein, Westat, davidbernstein@westat.com
Sam Held, Oak Ridge Institute for Science and Education, sam.held@orau.org
Chair(s):
Stanley Capela, HeartShare Human Services of New York, stan.capela@heartshare.org
Abstract: In 1990, the theme of the AEA conference was Evaluation and the Formulation of Public Policy, a topic that is central to the role of evaluation in government. During the 1990 conference, about 20 people gathered in a session to discuss the possibility of establishing a State and Local Government Evaluation Topical Interest Group (TIG), which was approved by the AEA Board in early 1991. In 2005, the TIG's focus was broadened, and the name was changed to the Government Evaluation TIG. This panel will celebrate the 20th anniversary of the TIG with three highly relevant components: a panel discussion with the current and past chairs of the Government Evaluation TIG, a keynote address by Joe Wholey, one of the leading experts on government evaluation in the United States, and the Government Evaluation TIG's annual business meeting.
Panel Discussion: How Has Government Evaluation Changed in the Last Twenty Years?
David J Bernstein, Westat, davidbernstein@westat.com
Maria Whitsett, Moak, Casey and Associates, mwhitsett@moakcasey.com
Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov
Stanley Capela, HeartShare Human Services of New York, stan.capela@heartshare.org
For the first time, a panel composed of the present chair and several past chairs of the Government Evaluation TIG will gather to address questions of interest to AEA and Government Evaluation TIG members: Over the past 20 years, how has the role of program evaluation changed within government? How do those working on government evaluation ensure quality? How have governments in different sectors (federal, state, local, tribal/indigenous, international) defined the concept of quality in evaluation processes? Do intergovernmental funding arrangements (e.g., federal, state, or local grants or contracts) change the requirements for quality evaluations, and if so, how? Stan Capela, Chair of the Government Evaluation TIG, and three past chairs, David Bernstein, Maria Whitsett, and Rakesh Mohan, will conduct a panel discussion to consider how government evaluation has changed (or not) in the last 20 years.
Keynote Address: How Has Evaluation Changed in the Last 20 to 50 Years?
Joseph Wholey, University of Southern California, joewholey@aol.com
Since the 1970s, governments and agencies at all levels have been using performance monitoring and performance management systems, in part because monitoring of progress toward program goals fits comfortably under the definition of program evaluation. By 1990, government evaluation had already been growing for thirty years. In the 2000s, high-stakes performance monitoring spread to every state in response to the No Child Left Behind Act. Evaluation is used in government to increase transparency, strengthen accountability, support decision making, and improve the performance and value of public programs. As an evaluator, government manager, political appointee, elected official, consultant, academic, researcher, and author, Joe Wholey has been a witness to, and active participant in, these and every other major change in the evaluation field over the last 40-plus years. Dr. Wholey will discuss how government evaluation has changed and what changes we might expect to see in the near future.
Government Evaluation Topical Interest Group (TIG) Business Meeting
Stanley Capela, HeartShare Human Services of New York, stan.capela@heartshare.org
David J Bernstein, Westat, davidbernstein@westat.com
Sam Held, Oak Ridge Institute for Science and Education, sam.held@orau.org
Following the panel discussion and keynote address, the Government Evaluation TIG will hold its annual Business Meeting during the AEA 2010 conference. Topics include succession planning for TIG leadership, ensuring the relevance of the TIG, "hot issues" affecting the conduct of government-sponsored evaluation work, and soliciting ideas for TIG-sponsored programs and services.

Session Title: Using Logic Models to Facilitate Comparisons of Evaluation Theory
Multipaper Session 389 to be held in Texas B on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Marv Alkin,  University of California, Los Angeles, alkin@gseis.ucla.edu
Discussant(s):
Robin Lin Miller,  Michigan State University, mill1493@msu.edu
Visual Representations of Evaluation Theories
Presenter(s):
Tanner LeBaron Wallace, University of Pittsburgh, twallace@pitt.edu
Mark Hansen, University of California, Los Angeles, markhansen@ucla.edu
Abstract: Here, we describe the development of logic models depicting selected theories of evaluation practice. We begin with a discussion of the particular theories chosen for our analysis, then outline the steps involved in constructing the models. Mark's (2008) framework for research on evaluation provides the organizing structure for this session. We present that framework and describe its relevance to the examination of evaluation theories.

Session Title: Examining the Mixing in Mixed Methods Evaluation
Multipaper Session 390 to be held in Texas C on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the
Chair(s):
Jori Hall,  University of Georgia, jorihall@uga.edu
Discussant(s):
Mika Yamashita,  Academy for Educational Development, myamashita@aed.org
Improving Public Awareness Campaign Evaluation Using Mixed Methods Design
Presenter(s):
Mary Kay Falconer, Ounce of Prevention Fund of Florida, mfalconer@ounce.org
W Douglas Evans, George Washington University, wdevans@gwu.edu
Abstract: This paper documents a triangulation mixed methods design in an evaluation of a statewide campaign to prevent child abuse and neglect. The methods include an online survey using a web-based panel of parents (quantitative) and five parent focus groups (qualitative). The online survey used an experimental design with study participants randomized into campaign and control groups. In the analysis, convergence and divergence in reactions to campaign stimuli (public service announcements and parent resource material) across methods are of interest. In addition, this design relies on data collected in the qualitative method to expand the explanation of the reactions to the campaign stimuli. This mixed methods application is an excellent illustration of how to improve quality in research when evaluating public awareness campaigns.
What’s the Right Mix? Lessons Learned Using A Mixed Methods Evaluation Approach
Presenter(s):
Nicole Leacock, Washington University in St Louis, nleacock@wustl.edu
Virginia Houmes, Washington University in St Louis, vhoumes@wustl.edu
Nancy Mueller, Washington University in St Louis, nmueller@wustl.edu
Gina Banks, Washington University in St Louis, gbanks@wustl.edu
Amy Stringer-Hessel, Missouri Foundation for Health, astringerhessel@mffh.org
Cheryl Kelly, Saint Louis University, kellycm@slu.edu
Abstract: In 2007, the Missouri Foundation for Health funded a comprehensive evaluation of a multi-site obesity prevention initiative across the state. Currently, there are 35 grantees implementing physical activity and healthy eating programs. As external evaluators, we developed measures and methods to capture not only the breadth of grantees' activities, but also key details of the implementation process. The evaluation involved a mixed-methods approach, including a web-based, quantitative data collection system to capture the breadth of program activities and a series of qualitative interviews to capture details about program context. This approach enabled us to strike a balance between a manageable and an informative evaluation. This presentation will describe the benefits of a mixed-methods approach, how each method contributed to our evaluation, and lessons learned during the evaluation process. It will also present strategies for triangulating quantitative and qualitative data to disseminate a comprehensive picture of the initiative to key stakeholders.
Triangulation in Evaluation Practice
Presenter(s):
Hongling Sun, University of Illinois at Urbana-Champaign, hsun7@illinois.edu
Nora Gannon, University of Illinois at Urbana-Champaign, ngannon2@illinois.edu
Jennifer Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Abstract: With the purpose of drawing stronger inferences through convergence of data from multiple methods, triangulation is the most popular form and rationale for mixed methods (Fidel, 2008). However, evaluators know that the practice of triangulation rarely results in convergence and more often than not we observe inconsistencies and even contradictions (Mathison, 1988). How evaluators effectively deal with those inconsistent or contradictory findings, however, is still not clear. In an empirical review of educational evaluation studies, we found triangulation continues to be the most popular stated purpose of mixed methods, consistent with previous claims. In our empirical review, we focused specifically on how evaluators with a triangulation intent actually attended to contradictory findings. Our results provide a snapshot of current practices of triangulation in mixed methods evaluation. As strong inferences are a critical measure of evaluation quality, understanding how evaluators engage with contradictory findings can improve the practice of triangulation.

Session Title: Linking Professional Associations to Advance the Study of Science and Innovation Policy
Panel Session 391 to be held in Texas D on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Susan Cozzens, Georgia Institute of Technology, susan.cozzens@iac.gatech.edu
Abstract: This panel of representatives of professional associations with interests in science and technology policy and evaluation will begin a dialogue to strengthen the linkages among these communities. After each presents an overview of their association, the topics that are central to their discussion, and “hot topics”, they will brainstorm specific ways they might interact more in the future. Organizations represented in addition to the AEA Research, Technology and Development Topical Interest Group are the Atlanta Conference on Science and Innovation, the Association for Public Policy Analysis and Management (APPAM), and the Academy of Management. Interaction with the audience will add other viewpoints such as the Technology Transfer Society. Strengthening this community is a goal of two U.S. federal initiatives, the Science of Science Policy in the White House Office of Science and Technology Policy and the Science of Science and Innovation Policy program at the National Science Foundation.
View From the Atlanta Science and Technology Policy (S&T) Conference and Others
Susan Cozzens, Georgia Institute of Technology, susan.cozzens@iac.gatech.edu
Dr. Susan E. Cozzens is Professor of Public Policy, Director of the Technology Policy and Assessment Center, and Associate Dean for Research in the Ivan Allen College. Dr. Cozzens's research interests are in science, technology, and innovation policies in developing countries, including issues of equity, equality, and development. She is active internationally in developing methods for research assessment and science and technology indicators. Her current projects are on water and energy technologies; nanotechnology; social entrepreneurship; pro-poor technology programs; and international research collaboration. She has been a primary organizer for The Atlanta Conference on Science and Innovation, which is sponsored by Georgia Tech and others, and held every two years. She has been active in numerous related professional associations and organized the workshop “Research Assessment: What Next?” in 2001 that brought together experts and practitioners from around the world to deal in part with the topic of this panel.
View From the American Evaluation Association's Research, Technology and Development Evaluation TIG
Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
Dr. Gretchen Jordan is a Principal Member of Technical Staff with Sandia National Laboratories. Gretchen works with the Sandia Science and Engineering Strategic Management Unit and the U. S. Department of Energy (DOE) on evaluation and performance measurement and innovative methods of assessing the effectiveness of research organizations. She is the North American Editor of Research Evaluation and has been active in the Washington Research Evaluation Network. She founded the American Evaluation Association’s Research, Technology, and Development Topical Interest Group in 1995 with George Teather and has been the chairperson for all but three years since. The group has grown to have more than 20 sessions at every annual conference with presenters and participants from all over the world.
View From the Association for Public Policy Analysis and Management
Julia Melkers, Georgia Institute of Technology, julia.melkers@pubpolicy.gatech.edu
Dr. Julia Melkers is Associate Professor in the School of Public Policy at the Georgia Institute of Technology. She is an elected member of two national boards: the Policy Council of the Association for Public Policy Analysis and Management (APPAM) and the American Association for the Advancement of Science Committee on Science, Engineering and Public Policy (COSEPP). She coordinates the Technology section of APPAM. APPAM is dedicated to improving public policy and management by fostering excellence in research, analysis, and education. Its activities include an annual research conference and a peer-reviewed multidisciplinary journal. Her research addresses capacity development, collaboration patterns, social networks and related outcomes of science, and issues around career development and mentoring in STEM fields, with a special focus on women and underrepresented minorities.
View From the Academy of Management
Gordon Kingsley, Georgia Institute of Technology, gordon.kingsley@pubpolicy.gatech.edu
Dr. Gordon Kingsley is Associate Professor in the School of Public Policy at the Georgia Institute of Technology. He is the past Division Chair for the Public and Nonprofit Division of the Academy of Management, a role in which he remains active. Prior to that he was elected to serve as the Program Chair. He is also a member of the Technology and Innovation Management Division of the Academy of Management, which also does work relevant to AEA. Current research projects explore the impacts of public-private partnerships on the development and allocation of scientific and technical human capital. This work is being conducted in three policy domains examining the following: 1) the impact of educational partnerships on the development of math and science instruction; 2) strategies used by a public transportation agency for effectively managing large numbers of engineering consultants and contractors drawn from the private sector; and 3) the development of hybrid organizations and network organizations designed to channel resources from the public and private sectors to stimulate technology-led economic development.

Session Title: Case Studies in Evaluation Use
Multipaper Session 392 to be held in Texas E on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Jennifer Iriti,  University of Pittsburgh, iriti@pitt.edu
Process Evaluation for Program Improvement: Lessons From the Interns for Indiana Program
Presenter(s):
Omolola A Adedokun, Purdue University, oadedok@purdue.edu
Loran Carleton Parker, Purdue University, carleton@purdue.edu
Wilella Burgess, Purdue University, wburgess@purdue.edu
Abstract: Participant perceptions of the extent to which program outcomes are achieved are undoubtedly a popular focus of both formative and summative program evaluations. However, the effects of participants' characteristics and contextual variables on program perceptions are less explored in formative evaluation studies, where the resulting feedback can be used for program improvement. The purpose of this presentation is to demonstrate how the empirical feedback from such investigations can be used to improve program implementation and the general understanding of program processes. Data for the study are from the formative evaluation of the Interns for Indiana (IfI) program, a multi-site entrepreneurial internship program designed to increase interns' desire to work in their home state after graduation by providing them with opportunities for experiential learning in small and startup companies. Multivariate linear regression analysis was employed to examine the effects of demographic and contextual variables on participants' perceptions of program outcomes.
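A minimal sketch of this kind of regression, not the IfI analysis itself: a perceived-outcome score is regressed on a few hypothetical demographic and contextual predictors using a formula-based OLS model. The variable names and data are illustrative assumptions.

# Illustrative regression of perception scores on participant characteristics.
import pandas as pd
import statsmodels.formula.api as smf

interns = pd.DataFrame({
    "outcome_score":  [4.1, 3.8, 4.5, 3.2, 4.0, 3.6, 4.4, 3.9],
    "gender":         ["F", "M", "F", "M", "F", "M", "F", "M"],
    "year_in_school": [2, 3, 4, 2, 3, 4, 2, 3],
    "company_size":   [12, 45, 8, 60, 25, 15, 9, 30],
})

model = smf.ols("outcome_score ~ C(gender) + year_in_school + company_size",
                data=interns).fit()
print(model.summary())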
Ensuring Program Quality: Lessons Learned From Implementation Evaluation of the Kentucky Alternative Certification in Special Education (KACSE) Program
Presenter(s):
Imelda Castañeda-Emenaker, University of Cincinnati, castania@ucmail.uc.edu
Norma Wheat, Campbellsville University, nrwheat@campbellsville.edu
Abstract: This presentation highlights the role of implementation evaluation in helping the manager of an alternative certification program make informed decisions about the program's direction. Changes in the economic situations of the school districts participating in the program posed challenges to the program's targeted goals, which threatened the program's major funding. Implementation evaluation and formative feedback helped the program manager improve programming and act proactively to avert the negative effects of delayed placements of certified teachers in the economically challenged school districts. Implementation evaluation also provided the rationale for the program changes made and helped ensure appropriate program accountability without jeopardizing the program's status with its funder.
Evaluation Synergy: A Multi-purpose Evaluation Design for Fundraising, Applied Learning, and Community Service
Presenter(s):
Nancy Rogers, University of Cincinnati, nancy.rogers@uc.edu
Jennifer Williams, University of Cincinnati, jennifer.williams@healthall.com
Brian Powell, University of Cincinnati, powellbb@mail.uc.edu
Abstract: Most non-profit programs recognize the value of evaluation, but often they do not have the resources to finance one. Expanding the use of evaluation beyond simply providing program feedback or outcome data could prove advantageous to small programs on a limited budget. One small violence intervention program leveraged the resources of a local university to involve students in service learning, increase community awareness, and provide applied learning, all while evaluating fundraising materials and raising money for the program. The presenters will share how they crafted a meaningful collaboration, using social psychological research to develop and evaluate several fundraising appeals while providing an applied educational experience that resulted in multiple wins.
The Use of Evaluation in Agricultural Policy Making: The Case of Mexico
Presenter(s):
Alfredo Gonzalez Cambero, Food and Agriculture Organization of the United Nations, agonzalez@fao-evaluacion.org.mx
Salomon Salcedo Baca, Food and Agriculture Organization of the United Nations, salomon.salcedo@fao.org
Abstract: Within the framework of international technical cooperation, the Food and Agriculture Organization of the United Nations (FAO) has conducted evaluations for the Mexican Government for more than ten years. Although many evaluations have been carried out in the field of agricultural policy, little use has been made of the results. While both FAO and the Mexican Government have intended to use evaluation to advance their common objectives of rural development, the evaluations themselves have been used to serve business as usual rather than to improve the programs through the implementation of findings or to better understand the conceptual underpinnings of the programs. The paper looks into building a virtuous circle of evaluation quality and use, identifying factors within the evaluation process itself as well as institutional constraints, in order to make better use of evaluation as a valuable tool for policy making.

Session Title: Research on Evaluation Standards and Methods
Multipaper Session 393 to be held in Texas F on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Matthew Galen,  Claremont Graduate University, matthew.galen@cgu.edu
Social Science Standards and Ethics: Development, Comparative Analysis, and Issues for Evaluation
Presenter(s):
Linda Mabry, Washington State University, Vancouver, mabryl@vancouver.wsu.edu
Abstract: This paper presents a comparative analysis of the Guiding Principles for Evaluators (AEA, 1995), The Program Evaluation Standards (Joint Committee, 1994), and the codes of ethics and standards established by the American Psychological Association (APA, 2007), the American Sociological Association (ASA, 1997), and other social science organizations. The theoretical contexts for the cross-codes analysis are Kohlberg's (1984) theory of human moral development and Rawls's (1971) theory of justice; the historical context is the origin and development over the past half-century of ethical codes by governments and international agencies. Three issues specific to evaluation are discussed: (1) potential conflicts between evaluation's codes of conduct and U.S. legal requirements regarding ethics in social science, (2) the cultural sensitivity and appropriateness of codes of conduct in international and transnational evaluations, and (3) the possibility and advisability of enforcing evaluation codes of conduct.
Insight Into Evaluation Practice: Results of a Content Analysis of Designs and Methods Used in Evaluation Studies Published in North American Evaluation-Focused Journals
Presenter(s):
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Dreolin Fleischer, Claremont Graduate University, dreolin@gmail.com
Abstract: To describe the recent practice of evaluation, specifically method and design choices, we performed a content analysis of 117 evaluation studies published in eight North American evaluation-focused journals over a 3-year period (2004-2006). We chose this time span because it follows the scientifically-based research (SBR) movement, which prioritizes the use of randomized controlled trials (RCTs) to study programs and policies. The purpose of this study was to determine the designs and data collection methods reportedly used in evaluation practice in light of federal guidelines enacted prior to 2004. Results show that, in contrast to the movement, non-experimental designs dominate the field, that mixed-methods approaches only barely edged out qualitative methods as the most commonly used, and that the majority of studies reporting statistical significance indicated mixed significance.
Standards for Evidence-based Practices and Policies: Do Campbell Collaboration, Cochrane Collaboration, and What Works Clearinghouse Research Reviews Produce the Same Conclusions?
Presenter(s):
Chris Coryn, Western Michigan University, chris.coryn@wmich.edu
Michele Tarsilla, Western Michigan University, michele.tarsilla@wmich.edu
Abstract: Interventions intended to ameliorate, eliminate, reduce, or prevent some persistent, problematic feature of the human condition have existed for millennia. In a climate of increasingly scarce resources and greater demands for accountability, now, more than ever, policy makers and practice-based disciplines and professions are consistently seeking high-quality, non-arbitrary, and defensible evidence for formulating, endorsing, and enforcing “best” policies and practices. In the last few decades, randomized experiments, randomized controlled trials, and clinical trials have universally become the standard for supporting inferences and claims regarding the efficacy, effectiveness, and, to a lesser extent, generalizability of such actions. In this presentation the authors will present a study of the degree to which the standards applied by major repositories for evidence-based practices and policies produce the same conclusions about the same studies.
Can Systematic Measurement of an Evaluation's Goodness of Fit and Its Influence Determine Quality?
Presenter(s):
Janet Clinton, University of Auckland, j.clinton@auckland.ac.nz
Abstract: Given the expanding and influential role of evaluation, it is critical that the quality of the evaluation process be more fully scrutinized. While we monitor the quality of evaluation processes, it is rare to consider outcomes that are attributable to the evaluation itself. To understand quality we must take into account the impact of evaluation on a program's effectiveness and efficiency. This paper uses a heuristic to illustrate how program components and evaluation processes can be combined to produce an explanation of effectiveness. The goodness of fit of an evaluation process and an evaluation's influence can be analysed with appropriate weightings to produce information that ensures quality judgements occur. A number of evaluation cases are used to demonstrate a method of monitoring and measuring evaluation influence. It is argued that a judgement of evaluation quality lies in systematic measurement of an evaluation's goodness of fit and its influence.

Session Title: Evaluation Methodology
Multipaper Session 394 to be held in CROCKETT A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Paul Pope,  Texas A&M University, ppope@aged.tamu.edu
Developing Surveys for Low-Literate Adults Receiving Extension Education Classes
Presenter(s):
Karen Franck, University of Tennessee, Knoxville, kfranck@utk.edu
Abstract: Developing effective surveys for low-literate adults is a challenge for program evaluators. This paper examines patterns of missing values in surveys completed by low-income adults. The survey included three question types: language-only behavior questions, language-only attitude questions, and combined photographic and language behavior questions. Over 300 adults completed surveys at the beginning of an Extension nutrition education intervention. This was a diverse, high-risk group (30% minorities, 79% food insecure, 35% high school dropouts, 80% unemployed). Participants with lower levels of education and those who spoke English as a second language were more likely to skip questions about attitudes (“It costs more to eat healthy foods”) and write-in questions about behaviors, even when these were combined with photographs (“How many servings of fruit do you eat each day?” accompanied by a photograph of fruit). This paper will discuss implications of these findings for developing evaluation tools for low-literate adults.
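To show how such missing-value patterns might be tabulated, here is a minimal sketch assuming a survey file in which skipped items are coded as missing; it computes item nonresponse rates overall and by education level. The column names and data are hypothetical, not taken from the study instrument.

# Illustrative item-nonresponse tabulation for a survey data set.
import numpy as np
import pandas as pd

surveys = pd.DataFrame({
    "education":      ["<HS", "HS", ">HS", "<HS", "HS", ">HS"],
    "attitude_cost":  [np.nan, 3, 4, np.nan, 2, 5],   # attitude item
    "fruit_servings": [2, np.nan, 3, np.nan, 1, 2],   # write-in behavior item
    "photo_behavior": [1, 2, 3, 2, np.nan, 3],        # photo-supported item
})

items = ["attitude_cost", "fruit_servings", "photo_behavior"]
print("overall % missing:")
print((surveys[items].isna().mean() * 100).round(1))
print("% missing by education level:")
print(surveys.groupby("education")[items].apply(lambda d: d.isna().mean() * 100).round(1))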
Optimizing Conditions for Success: An Extension Case Study in Cross-program Surveys
Presenter(s):
Gwen Willems, University of Minnesota, wille002@umn.edu
Abstract: Surveying is a highly popular method that has been used for decades to gather evaluative data. The advantages are numerous: it is a relatively low-cost and straightforward way to obtain data from many people in a short period of time, as described by Gary Henry. One of the difficulties is standardizing surveys across a variety of programs. Much of the literature and discussion of surveys gives attention to survey design, error reduction, the audience, and sampling of respondents. This presentation will step back, instead offering a meta-level analysis of the process and environment that eventually led to the successful design and adoption of cross-program end-of-educational-session and follow-up surveys for a section of University of Minnesota Extension. The presenter will describe this case study, the multi-activity process she used with Extension educators, challenges in that environment, and factors that contributed to the success of the cross-program survey initiative.
Creating a Cost-Benefit Analysis Calculator for Extension Nutrition Education Programs
Presenter(s):
Karen Franck, University of Tennessee, Knoxville, kfranck@utk.edu
Joseph Donaldson, University of Tennessee, jldonaldson@tennessee.edu
Abstract: The 1998 Virginia Tech cost-benefit analysis study remains the standard for measuring the economic impact of the Expanded Food and Nutrition Education Program (EFNEP), a federally funded national nutrition education program for low-income parents. Since 1998, several other states have used these methods to estimate the economic impacts of EFNEP in their communities. This paper will discuss a project that built on the Virginia Tech study in three important ways: first, the 1998 direct and indirect costs were updated to reflect current dollars; second, the target audience was expanded to include all adults who receive nutrition education programming through a university Extension program; and third, an online calculator was created to capture the economic impacts of nutrition programs at the county, regional, and state levels. This calculator provides program evaluators with an effective method to measure economic impact, compare the impact of programs across different areas, and identify areas for program improvement.
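As a simplified, hypothetical illustration of the kind of calculation such a calculator might perform, the sketch below inflates 1998 costs to current dollars with a CPI ratio and then computes a benefit-cost ratio for a single county program. The dollar amounts and CPI values are illustrative assumptions, not EFNEP figures.

# Hypothetical benefit-cost calculation with an inflation adjustment.
def adjust_to_current_dollars(amount_1998, cpi_1998, cpi_now):
    """Scale a 1998 dollar amount by the ratio of current to 1998 CPI."""
    return amount_1998 * (cpi_now / cpi_1998)

def benefit_cost_ratio(benefits, direct_costs, indirect_costs):
    return benefits / (direct_costs + indirect_costs)

direct = adjust_to_current_dollars(50_000, cpi_1998=163.0, cpi_now=218.1)
indirect = adjust_to_current_dollars(20_000, cpi_1998=163.0, cpi_now=218.1)
estimated_benefits = 700_000  # e.g., projected health-care costs avoided (assumed)

print(f"benefit-cost ratio: {benefit_cost_ratio(estimated_benefits, direct, indirect):.2f}")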
Utilizing the Delphi Method to Identify Competencies and Training to Help Reduce Turnover Among County Extension Faculty
Presenter(s):
Diane Craig, University of Florida, ddcraig@ufl.edu
Abstract: Turnover of county Extension faculty costs Extension millions of dollars per year (Ramlall, 2004). Employee turnover occurs for a multitude of reasons, including lack of job satisfaction, organizational commitment, and job embeddedness (Phillips & Connell, 2003). These factors can be addressed through proper training and strategic techniques that improve employees' critical competencies, socialization, and job satisfaction. The University of Florida conducted a Delphi study to explore the perceptions of county Extension faculty regarding job satisfaction, training competencies, and social connectedness during their first three years after hire. A Delphi study is designed to gain consensus among experts on a given topic; in this case the experts were county Extension faculty and their supervisors. The goal of this study was to determine the optimal training content and training schedule for newly hired Extension faculty in order to increase job satisfaction and job embeddedness and decrease turnover.
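An illustrative sketch of one Delphi-style consensus check, assuming panelists rate the importance of each proposed training competency on a 1-5 scale and consensus is defined as an interquartile range of 1 or less. The competency names, ratings, and consensus rule are assumptions for illustration, not the study's actual items or criteria.

# Hypothetical Delphi round: medians and IQR-based consensus flags.
import numpy as np

round_two_ratings = {
    "program development":      [4, 5, 4, 4, 5, 4, 4],
    "volunteer management":     [3, 5, 2, 4, 5, 3, 2],
    "reporting and evaluation": [5, 5, 4, 5, 4, 5, 5],
}

for competency, ratings in round_two_ratings.items():
    q1, median, q3 = np.percentile(ratings, [25, 50, 75])
    consensus = (q3 - q1) <= 1.0     # assumed consensus threshold
    print(f"{competency}: median = {median:.1f}, IQR = {q3 - q1:.1f}, consensus = {consensus}")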

Session Title: Process Lessons for Applied Research and Evaluation from Capital Projects
Demonstration Session 395 to be held in CROCKETT B on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Business and Industry TIG
Presenter(s):
Kate Rohrbaugh, Independent Project Analysis, krohrbaugh@ipaglobal.com
Abstract: Capital projects in industry are projects that require the investment of significant capital to maintain or improve a capital asset. In this demonstration, the presenter will provide an overview of the practices that are considered adequate planning in the world of capital projects and identify the parallels (of which there are many) for applied research and evaluation. These parallels were identified during the development of a research work process intended to improve and maintain the intellectual assets of Independent Project Analysis (IPA), a management consulting firm in Virginia that offers evaluation services and conferences for companies in the process industries. During this demonstration, the audience will become familiar with the practices and phases of capital projects and how they apply to research and evaluation. Additionally, the presenter will identify areas of divergence and share implementation challenges.

Session Title: Improving the Quality of Peacebuilding Evaluation
Demonstration Session 396 to be held in CROCKETT C on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Cheyanne Scharbatke-Church, CDA Collaborative Learning Projects, cheyanne.church@tufts.edu
Abstract: This demonstration session will outline two new tools specific to the evaluation of peacebuilding in the international arena. The demonstration is targeted at evaluators working in conflict and post-conflict settings who evaluate peacebuilding projects, though evaluators of all international social change projects may find it useful. The presentation is based on the results of over seven years of collaborative learning with peacebuilding practitioners and donors, led by the Reflecting on Peace Practice project of CDA. This collaborative learning included, but was not limited to, more than 15 evaluations that sought to apply these tools and lessons, as well as the experience of incorporating key findings into the OECD-DAC guidance on peacebuilding evaluation. The demonstration will focus specifically on how to assess the effect of peacebuilding programming at the societal level, often called Peace Writ Large. It will also demonstrate how to adopt a systemic approach to the evaluation of peacebuilding.

Session Title: Haiti: Challenges in Emergency Response and Recovery Bring Challenges (and Innovation) in Evaluation
Panel Session 397 to be held in Crockett D on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Nan Buzard, American Red Cross, buzardn@usa.redcross.org
Abstract: On January 12, 2010, a magnitude 7 earthquake struck the Haitian coast 10 miles from the capital of Port-au-Prince, causing massive damage and significant loss of life. The American Red Cross (ARC) delegates working in Haiti were among the first to respond, and the public was generous in providing funds to ARC for emergency response and recovery programs. In coordination with the International Federation of Red Cross and Red Crescent Societies (IFRC), the International Committee of the Red Cross, and UN and US Government agencies, ARC continued the immediate emergency response while an IFRC needs assessment team deployed to conduct field assessments in eleven sector areas and begin defining medium-term recovery priorities. This panel by ARC staff (including the Lead of the IFRC Recovery Assessment Team) will discuss how this unprecedented urban disaster challenged existing models of needs assessment, data collection and analysis, and M&E design, leading to useful innovation.
Six Weeks After Haiti Disaster: The Challenge of Leading a Multi-donor Emergency Recovery Needs Assessment
Michael Zeleke, American Red Cross, zelekem@usa.redcross.org
When the Haiti Recovery Assessment Team of the International Federation of Red Cross and Red Crescent Societies (IFRC) was organized, the presenter was recruited as an advisor. Several days before departure, he was asked to lead the team. During the following week, the team grew from an estimated six members to more than 28, as various Partner National Red Cross Societies that were donating funds and supplies asked to join. The challenge of managing this large team also encompassed: seeking model approaches for sector and geographic coverage; question definition and analysis; coordination with the UN and other international actors; deployment of multisectoral teams to respect counterpart and survey respondent time; and working within resource and time constraints in a climate of high expectations. This presentation will give a flavor of how these challenges were met and what can be learned from them for future urban disasters.
Innovation in Collecting and Analyzing Geographic Information: Immediate Contributions to Recovery Efforts and Potential Contributions to Monitoring and Evaluation of Results in Haiti
Dale Hill, American Red Cross, hilldal@usa.redcross.org
After the Haiti earthquake, worldwide interest and compassion, together with technological innovation, catalyzed an unusual set of collaborative initiatives focused on geographic information. For example, 500 technical experts from more than 22 countries were mobilized to interpret data on the earthquake's impact in Haiti, feeding a comprehensive damage assessment that was accomplished in weeks rather than months. In addition, some non-profit organizations brought a different skill set to disaster response, using social networking to develop technical solutions, such as usable street maps of Port-au-Prince, that volunteers implemented in days. These tools and the information they produced were put to immediate use on the ground in Haiti. The presenter brings perspectives from both development and relief organizations to examine whether these innovative tools, devised for near-term damage and needs assessment, can also be applied over the long term to monitoring and evaluation of relief and recovery projects.
Haiti: From Response to Recovery: Determining Sectoral Priorities and Beginning the Monitoring Process
Amy Gaver, American Red Cross, gavera@usa.redcross.org
The magnitude of damage from the Haiti earthquake required an immediate, large-scale emergency response: food, medical assistance, safe water supplies, sanitation, and shelter. But the disaster also destroyed assets and sources of livelihood required for the longer-term resumption of economic development: medical facilities, schools (87% destroyed in Port-au-Prince), and markets. Learning from its experience with other disaster relief operations, the American Red Cross (ARC) response program emphasized the transition from emergency relief to recovery early on. To respond to emergency and early recovery needs, however, ARC needed to accelerate and innovate in several of its traditional sectors, such as cash transfer programs and livelihood support for beneficiaries. The presenter, the ARC manager in charge of recovery programming, will discuss how the IFRC needs assessment was used to help define sectoral priorities and ensure that early monitoring systems supported learning from innovative programs.
Recovery Program Monitoring and Evaluation Design: Challenge of an Impacted Population on the Move in Haiti
Christine Connor, American Red Cross, connorch@usa.redcross.org
The Haiti earthquake was unprecedented in that it struck so close to the major urban center of a country with great poverty. Not only homes but also major buildings and facilities serving the government and the market economy were destroyed, affecting livelihoods and services. A large number of people remain displaced, and some have opted to stay with relatives in areas outside Port-au-Prince. As recovery efforts continue, the “market pull” of jobs and improved services can attract residents to resettle outside their original home base. This creates special challenges for monitoring and evaluation (M&E) design. The presenter will draw on her experience with other projects involving refugees and displaced persons to discuss how M&E design for projects supporting Haiti’s shifting displaced population presents similarities, as well as differences that must be taken into account in the context of the Haiti earthquake recovery program.

Session Title: Student Centered Issues in Evaluation
Multipaper Session 398 to be held in Seguin B on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Thelma Woodard, University of Tennessee, Knoxville, twoodar2@utk.edu
Guidelines for Conducting a High-Quality Mixed Methods Dissertation
Presenter(s):
Rebecca Glover-Kudon, University of Georgia, gloverku@uga.edu
Abstract: Mixed-methods research has emerged as a distinct methodology. As a result, graduate students in the evaluation field are increasingly interested in learning how to combine quantitative and qualitative methods in a single study. Because faculty mentors have typically specialized in either quantitative or qualitative methods, they may lack experience conducting mixed-methods research and feel reluctant to guide students’ mixed-methods endeavors. This paper summarizes the literature on how to conduct and produce high-quality mixed-methods research and suggests criteria for assessment. Specifically, the paper details core elements of mixed-methods research proposals, including study design and procedures, explicit rationale for data mixing, and standard notation for data prioritization, sequencing, and integration. Features and challenges of various designs are also discussed. The intended audience for this paper and presentation is graduate students contemplating or conducting mixed-methods studies and the faculty members who advise them.
Assessing the Needs of Students in an Evaluation, Statistics and Measurement Doctoral Program: Results and Lessons Learned
Presenter(s):
Susanne Kaesbauer, University of Tennessee, Knoxville, skaesbau@utk.edu
Thelma Woodard, University of Tennessee, Knoxville, twoodar2@utk.edu
Abstract: A needs assessment was developed to assess the perceptions and needs of current and former students in a Ph.D. program in Evaluation, Statistics and Measurement. The goal of the needs assessment is to inform faculty of current and former students’ perceptions of their experiences, as well as the program's strengths, weaknesses, and opportunities for improvement. This presentation will describe the aggregated results of the data collected using the needs assessment. In addition, lessons learned from developing the needs assessment and from the data collection process will be discussed.
Evaluation Quality: A Model for Reflecting on Evaluation for Evaluation Students
Presenter(s):
Thelma Woodard, University of Tennessee, Knoxville, twoodar2@utk.edu
Andrea Souflee, United Way of Dallas, asouflee@unitedwaydallas.org
Abstract: This reflective practice model provides a framework for evaluation students to reflect on the quality of their evaluation skills and their development. Developers of professional evaluator programs stress the importance of developing a reflective practice, and proponents of evaluator competencies suggest that reflecting on individual strengths and weaknesses benefits evaluator development (Stevahn, King, Ghere, & Minnema, 2006). Developing a reflective practice can be difficult, however, because reflective practice itself is difficult to define: the literature of the past several decades offers many abstract, complex, and conflicting definitions. While reflection is acknowledged as an important part of professional development, when reflective practice instruction is lacking, students are left to determine how to reflect on their own. This presentation will detail a model that evaluation students may use to develop a reflective professional practice.

Session Title: Impact Evaluation at the Millennium Challenge Corporation (MCC): Theory, Application, and Complications
Panel Session 399 to be held in Republic A on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Marc Shapiro, Millennium Challenge Corporation, shapiromd@mcc.gov
Discussant(s):
Jack Molyneaux, Millennium Challenge Corporation, molyneauxjw@mcc.gov
Abstract: The Millennium Challenge Corporation (MCC) is committed to conducting rigorous independent impact evaluations of its programs as an integral part of its focus on results. MCC expects that the results of its impact evaluations will help guide future investment decisions and contribute to a broader understanding in the field of development effectiveness. MCC’s impact evaluations involve a variety of methods chosen as most appropriate to the context. This panel first provides an overview of evaluation at MCC, including the definition of evaluation objectives, the number and variety of evaluation approaches supported, the criteria that underlie decisions about whether and how to evaluate, and the linkages between decisions to fund projects and eventual evaluation results. Next, the panel provides three examples of evaluations being conducted across sectors and countries, involving different methods. The presenters will discuss the challenges involved in implementing these evaluations and lessons learned.
MCC Project Evaluations: To Err Is Human
Jack Molyneaux, Millennium Challenge Corporation, molyneauxjw@mcc.gov
MCC was established in January 2004 with the objectives of promoting economic growth and reducing poverty by learning about, documenting, and using approaches that work. MCC plans to complete 35 rigorous impact evaluations of international development projects over the next two to three years, and the rate of project evaluations is likely to double in the years that follow. This growing pipeline of rigorous evaluations is a critical component of achieving these objectives. Another essential component, MCC’s cost-benefit approach, defines the objectives of these impact evaluations. The results of these emerging evaluations will soon shape the selection and design of future projects. This presentation will briefly describe the Economic Rate of Return (ERR) analyses that forecast expected project costs and benefits to inform project selection and define impact evaluation objectives. Examples of these models will provide the motivation for the impact evaluations described in the subsequent presentations.
On the Rights Path to Evaluating a Property Rights Project
Marc Shapiro, Millennium Challenge Corporation, shapiromd@mcc.gov
In peri-urban rangelands, Mongolia’s tradition of open-access pasture use, combined with the influx of migrants’ herds, has led to severe overgrazing. The Peri-urban Property Rights Project funded by MCC introduces a system of leasing peri-urban rangelands to herder groups and provides infrastructure and training to improve livestock management, productivity, and income. The project’s impact evaluation uses a randomized selection process to determine which herder groups will receive the available leasing slots and attempts to measure spillovers/externalities. Data collection involves household surveys and direct measures of changes in land quality. Key problems encountered include large population shifts that rendered early designs irrelevant, a lack of ownership of the evaluation design by the implementing unit, and a multiple-step design that ended up reducing statistical power when herder productivity changed exogenously.
A Scholarly Assessment of the Impact of Scholarships
Rebecca Tunstall, Millennium Challenge Corporation, tunstallrh@mcc.gov
Average educational attainment in El Salvador’s northern zone is only 4.3 years of schooling, more than 1.5 years lower than in the rest of the country. To increase school enrollment and keep teenagers in school, MCC is providing scholarships to poor students to cover the cost of books, uniforms, room and board, and transportation. To evaluate the impact of the scholarships, the Government of El Salvador agreed to conduct a lottery among applicants who meet the eligibility criteria. The students not selected in the lottery become the control group for the evaluation. Data collection involves administrative data from the Ministry of Education on grade completion and graduation, in addition to household surveys that track employment and income. Key problems encountered during implementation of the evaluation include lower demand for scholarships than initially anticipated, poor management of implementation contractors, and political pressure to provide scholarships to the control group.
An Agricultural Evaluation Standing Out, If Not Outstanding, in the Field
Mamuka Shatirishvili, Millennium Challenge Account-Georgia, m.shatirishvili@mcg.ge
Marc Shapiro, Millennium Challenge Corporation, shapiromd@mcc.gov
The Agribusiness Development Activity in the Republic of Georgia awards grants to small farmers, value-adding enterprises, and service centers that sell to farms. The impact evaluation examines the project’s effects on income and job creation. For farmers, the evaluation was planned as an experimental design across nine rounds of grantees: those randomly selected initially received grants immediately, while the others receive grants later. Farm service center and value-adding enterprise grantees are evaluated by pairing recipients with similar control enterprises using propensity score matching. Data collection involves augmenting the Georgian Department of Statistics’ household survey and using local contractors to collect household-level information from direct and indirect beneficiaries. Problems recruiting farmer grantees in early rounds led to a change to a pipeline design. The confounding effects of military conflict and the financial crisis, along with project delays, required adjustments to data collection schedules and extended timelines.

Session Title: Health Evaluation TIG Business Meeting and Presentation: Bridging the Evidence Gap in Obesity Prevention - A Framework to Inform Decision Making
Business Meeting Session 400 to be held in Republic B on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
TIG Leader(s):
Robert LaChausse, California State University, San Bernardino, rlachaus@csusb.edu
Jenica Huddleston, University of California, Berkeley, jenhud@berkeley.edu
Debora Goldberg, Virginia Commonwealth University, goetzdc@vcu.edu
Chair(s):
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Presenter(s):
Shiriki Kumanyika, University of Pennsylvania, skumanyi@mail.med.upenn.edu
Discussant(s):
Jennifer Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Madhabi Chatterji, Columbia University, mb1434@columbia.edu
Abstract: In 2008, the Institute of Medicine of The National Academies convened a committee of experts to examine innovative ways in which the existing evidence base and research on obesity and obesity prevention programs could be accessed, evaluated, and made useful to a wide range of policy-makers and decision-makers. The charge was to develop a framework to guide decision-makers in locating and using evidence to make effective decisions. The committee produced a practical, action-oriented framework that guides policymakers in using the available base of research evidence and supplementing it with complementary forms of credible evidence relevant to problem-solving and decision-making in obesity-prevention contexts. The framework, contained in the report “Bridging the Evidence Gap in Obesity Prevention: A Framework to Inform Decision Making,” was released in April 2010. This session will unveil the L.E.A.D. framework for evidence-based decision-making, which speaks directly to the theme of the 2010 AEA conference: Evaluation Quality.

Session Title: Building Capacity for Youth Participatory Evaluation
Panel Session 401 to be held in Republic C on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Jane Powers, Cornell University, jlp5@cornell.edu
Discussant(s):
Shep Zeldin, University of Wisconsin, Madison, rszeldin@wisc.edu
Abstract: As the field of Youth Participatory Evaluation (YPE) has grown, a variety of resources have been developed to support its implementation and enhance its practice. Our four panelists have extensive experience engaging youth in a variety of participatory evaluation projects, roles, and contexts (including cyber environments). They will describe effective training approaches and strategies and share their collective lessons learned in building the capacity of youth and adults to carry out YPE efforts. This will include their experience with proven curricula, tools, and processes, and their recommendations on how to conduct YPE in a high-quality, authentic manner. There will be ample time for dialogue with the audience about applying these resources to potential YPE efforts.
Putting Evaluation Into the Hands of Children and Youth: Are We Ready?
Kim Sabo Flores, Evaluation Access and ActKnowledge, kimsabo@aol.com
Over the years, my colleagues at the Center for Human Environments and I have been developing toolkits and curricula to support both national and international efforts in youth-led evaluation and research. YEA! (Youth Evaluation Access) is the result of our research, exploration, and experimentation in working with hundreds of youth-led evaluation and research teams. YEA! is an online learning community that offers workshops, coaching, and resources for children, youth, and their adult allies to evaluate their own programs and organizations. The presenter will share the various elements of this online coaching and training tool, including the real-time face-to-face meeting space, the interactive website, blogging and networking features, and the resource center. In addition, the presenter will discuss some of the possibilities and challenges of working with youth evaluation teams in this type of cyber environment.
Personal and Contextual Relationships That Affect Youth Participatory Evaluation
David White, Oregon State University, david.white@oregonstate.edu
Youth participatory evaluation (YPE) is an inherently personal and social enterprise. Youth conducting research and evaluation bring with them the totality of their personal and contextual relationships. Why do some youth achieve a level of success as youth researchers and evaluators while others do not? Adolescent realities and adultism significantly impact youth-led initiatives. Youth conducting research and evaluation still operate in an adult-led world where the power differential is a significant barrier to YPE. With adult assistance, motivation, and encouragement, our adolescent colleagues can see youth-led research and evaluation projects through tough and trying times. Several procedural and methodological recommendations are proposed that are intended to improve the transformative and practical benefits of youth participatory evaluation. These recommendations are based on experience working with senior 4-H youth trained in YPE using Participatory evaluation with youth: Building skills for youth community action by Arnold & Wells (2007).
Participatory Evaluation With Youth: Education, Training, and Capacity Building for Change
Katie Richards-Schuster, University of Michigan, kers@umich.edu
Barry Checkoway, University of Michigan, barrych@umich.edu
In this presentation, we share lessons learned from a three-year program to build the capacity of young people to engage in participatory evaluation research. The program, funded by the W.K. Kellogg Foundation, engaged youth-led and intergenerational teams in intensive regional and national education and training workshops. Using our workbook Participatory Evaluation With Young People as a basic framework, we developed a set of workshop principles and a basic curriculum. We collaborated with community partners to host local, regional, and national workshops focused on building youth leadership for change, developing practical skills for research and evaluation, and creating evaluation plans for community action. This presentation will describe the program’s goals, activities, and outcomes; analyze factors that facilitate education and training programs; and provide observations about lessons learned for future efforts.
Enhancing Program Quality Through Engaging Youth in Evaluation
Jane Powers, Cornell University, jlp5@cornell.edu
Shep Zeldin, University of Wisconsin, Madison, rszeldin@wisc.edu
The Youth Adult Leaders for Program Excellence (YALPE) Resource Kit helps youth-serving organizations enhance program quality by building their capacity to engage youth in evaluation and research. This innovative resource provides a simple, structured way for youth-serving organizations to conduct rigorous self-assessment in order to strengthen their programming, including maximizing youth participation, youth voice, and youth/adult partnerships. A key component of the approach is the formation of youth/adult teams that are trained to lead organizations through an evaluation process involving self-assessment and reflection. The kit provides easily accessible instruction on how to use assessment findings to guide action planning and create organizational change. YALPE has been used across a full range of contexts, from after-school programs to community-based organizations to residential settings. Examples of its successful use will be presented, highlighting how it enhances youth participation and builds programs' capacity to conduct Youth Participatory Evaluation.
