2011

Session Title: Incorporating Values Into Program Theory and Logic Models
Demonstration Session 351 to be held in Avalon A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Presenter(s):
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Abstract: Logic models are commonly used to communicate how programs and projects are understood to work - articulating the change mechanisms involved in producing the intended outcomes, and how program activities have been constructed to activate these mechanisms. However, it can be difficult to identify diverse values about what constitute desirable and undesirable standards of performance, outcomes/impacts, processes, and distributions of costs and benefits, and even harder to represent these in logic models and use them in evaluations. This demonstration will share examples where diverse values have not only been identified, but incorporated into logic models, program logic matrices, and evaluation plans. It includes examples from diverse sectors including labor, aged care, early childhood early intervention, and natural resource management. The session will discuss how values might be addressed differently depending on the intended use for program theory and the nature of the intervention - in particular its complicated or complex aspects.

Session Title: Introducing Tools to Measure Trainers' Cultural Competence in Training Events with Community Organizations
Demonstration Session 352 to be held in Avalon B on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Jeanette Treiber, UC Davis Center for Evaluation and Research, jtreiber@ucdavis.edu
Robin Kipke, University of California, Davis, rakipke@ucdavis.edu
Veronica Acosta-Deprez, California State University, Long Beach, vacosta@csulb.edu
Abstract: In 2010 the Center for Evaluation and Research at UC Davis tested a training observation instrument that measures trainers' cultural competence with a Los Angeles community organization. In May 2011 an enhanced version of this tool, along with a corresponding trainer self-assessment and participant questionnaire, will be tested in three different training events in California and then finalized and released for use. In this AEA demonstration session a trainer will first introduce the tools, which focus on the relationship between trainer and participants (e.g., creating an environment where participants feel comfortable asking questions; using examples that are relevant to the audience; incorporating learners' knowledge and background into training), followed by a brief training demonstration with participants on the subject of cultural competence in evaluation. Then the trainer, participants, and an observer will rate the cultural competence of the session using the tools. This will be followed by a discussion.

Session Title: Beyond PowerPoint: Keep the Big Picture in Your Presentation Without Losing the Details
Demonstration Session 353 to be held in California A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Data Visualization and Reporting TIG
Presenter(s):
Lyn Paleo, First 5 Contra Costa, lpaleo@firstfivecc.org
Abstract: Imagine you are preparing for a presentation. Rather than a pack of PowerPoint slides that slice and dice your presentation into small, equally sized rectangles, you get an enormous whiteboard onto which you place a title, text, sets of graphs, photos, PDF files, videos, and portions of an Excel sheet. You show the audience the "big picture", and with a click zoom in to examine details such as a paragraph of a document, then zoom out to show how this detail relates to the big-picture concept. Prezi is a new alternative to PowerPoint: free, simple to use, and engaging. It extends the principles of Tufte and Few from graphs to entire presentations. This demonstration will walk through a Prezi presentation related to evaluation concepts and findings, then show how simple it is to make a presentation. By the end of the session, participants will be able to create their own presentations.

Session Title: Moving Beyond Fragmentation: Building a Set of Participatory Principles for Evaluation
Think Tank Session 354 to be held in California B on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Elizabeth Whitmore, Carleton University, elizabeth_whitmore@carleton.ca
Lyn Shulha, Queen's University, lyn.shulha@queensu.ca
J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca
Discussant(s):
Lyn Shulha, Queen's University, lyn.shulha@queensu.ca
Elizabeth Whitmore, Carleton University, elizabeth_whitmore@carleton.ca
Michael Harnar, Claremont Graduate University, michaelharnar@gmail.com
Abstract: The last 25 years have seen increasingly purposeful engagement of stakeholders in evaluation. Given recent efforts to make definitive distinctions among different approaches to this work, now seems like an appropriate time to consider whether there may be any merit in moving beyond the refinement of unique collaborative models. In this session, we propose to explore with participants the potential of a coherent overarching set of principles that would encompass the wide range of existing participatory and collaborative approaches. Initially working in small, facilitated groups, participants will be asked to draw on their own experiences and to record their ideas on the wide variety of decision points embedded in collaborative/participatory work. The session will conclude with a review of responses and a discussion of whether there is value in identifying a set of principles about the nature of collaborative evaluative inquiry to be tested out in future discussions.

Session Title: Using R for the Management of Survey Data and Statistics in Evaluation
Demonstration Session 355 to be held in California C on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Lindsey Dunn, University of North Carolina, Greensboro, l_dunn@uncg.edu
Lauren Fluegge, University of North Carolina, Greensboro, lbfluegg@uncg.edu
Korinne Chiu, University of North Carolina, Greensboro, k_chiu@uncg.edu
Abstract: R is a free, open-source software environment for statistics and graphics that offers many statistical packages for analyses useful to evaluators. This session will include an introduction to R, how to import data files into R, and how to export data files to other formats. In addition, we will demonstrate how to compute descriptive statistics, recode and re-label variables, and produce descriptive tables in R. We will also provide an overview of R's more recent qualitative data analysis capabilities. R syntax will be reviewed, and results from R will be compared with those from familiar packages such as SPSS and Excel.
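For readers new to R, a minimal sketch of the kind of workflow the abstract describes - importing a data file, recoding and re-labeling a variable, producing a descriptive table, and exporting to another format - might look like the following (the file and variable names are hypothetical):

    # Import a CSV file into a data frame (file name is hypothetical)
    survey <- read.csv("survey_responses.csv")

    # Recode a numeric item into a labeled factor
    survey$satisfaction <- factor(survey$satisfaction,
                                  levels = c(1, 2, 3),
                                  labels = c("Low", "Medium", "High"))

    # Descriptive statistics for every variable
    summary(survey)

    # A simple frequency table, comparable to SPSS FREQUENCIES output
    table(survey$satisfaction)

    # Export the recoded data in a format readable by Excel or SPSS
    write.csv(survey, "survey_recoded.csv", row.names = FALSE)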

Session Title: Influencing Evaluation Policy and Evaluation Practice: A Progress Report From the American Evaluation Association's (AEA) Evaluation Policy Task Force
Panel Session 356 to be held in California D on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the AEA Conference Committee
Chair(s):
Patrick Grasso, World Bank Group, pgrasso45@comcast.net
Discussant(s):
Jennifer C Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Abstract: The Board of Directors of the American Evaluation Association (AEA) established the Evaluation Policy Task Force (EPTF) in September 2007 to enhance AEA's ability to identify and influence policies that have a broad effect on evaluation practice and to establish a framework and procedures for accomplishing this objective. The EPTF has issued key documents promoting a wider role for evaluation in the Federal Government, influenced federal legislation and executive policy, and informed AEA members and others about the value of evaluation through public presentations and newsletter articles. This session will provide an update on the EPTF's work and invite member input on its plans and actions.
Introduction to the Evaluation Policy Task Force
Patrick Grasso, Independent Consultant, pgrasso45@comcast.net
As Chair of the EPTF, Mr. Grasso will present an overview of the EPTF and of recent broad events surrounding it, including AEA member approval and the publication of the Evaluation Roadmap for a More Effective Government, guidance on cardinal evaluation policies to be used as a frame of reference for explaining evaluation policies to outside contacts, vetting of public evaluation policy statements with AEA members and the Board, and evaluating the EPTF.
Activities and Plans for the Evaluation Policy Task Force
George Grob, Center for Public Program Evaluation, georgefgrob@cs.com
As Consultant to the EPTF, Mr. Grob will facilitate a discussion involving EPTF members and the audience about the recent activities and future plans of the EPTF. This will include a discussion of successes over the last year related to evaluation policy for international assistance.

Session Title: Examining Intercultural Communications and Its Implications for Effective Evaluation With Stella Ting-Toomey
Expert Lecture Session 357 to be held in Pacific A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Presidential Strand
Chair(s):
Melvin Hall, Northern Arizona University, melvin.hall@nau.edu
Presenter(s):
Stella Ting-Toomey, California State University, Fullerton, sting@exchange.fullerton.edu
Discussant(s):
Melvin Hall, Northern Arizona University, melvin.hall@nau.edu
Abstract: Stella Ting-Toomey, an internationally recognized expert in intercultural communication, will participate in an interactive dialogue with AEA member and presidential strand co-chair, Melvin Hall. This session will explore issues and insights associated with conducting evaluations using a culture-sensitive theoretical framework. Improving intercultural communication competence will be the focus of the dialogue, with key concepts explained including: cultural value dimensions, identity layers, and communication styles. Each aspect of the dialogue will be grounded in evaluation practice issues. Ting-Toomey is perhaps best known for her work on "mindfulness" and "facework" in cross-cultural communication, in particular her face-negotiation theory which deals with ways people negotiate their communication identities during interactions with each other. The author or editor of 15 books and over 90 journal articles and chapters, Ting-Toomey is an active trainer, consultant, and certified mediator who has conducted intercultural conflict competence workshops for corporations, universities, and non-profit centers/institutes.

Session Title: Extending the Focus Group Method
Skill-Building Workshop 358 to be held in Pacific B on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Katherine Ryan, University of Illinois at Urbana-Champaign, k-ryan6@illinois.edu
Peter Muhati, University of Illinois at Urbana-Champaign, mmuhati2@illinois.edu
Tysza Gandha, University of Illinois at Urbana-Champaign, tgandha2@illinois.edu
Abstract: Focus groups are fundamentally group interviews or collective conversations about a limited set of topics (Bloor, Frankland, Thomas, & Robson, 2001; Kamberelis & Dimitriadis, 2005; Morgan, 2002). While this method encompasses several approaches, this workshop will specifically compare two focus group types: 1) the more common 'theory-building' focus group, and 2) the 'narrative' focus group which evaluators might be less familiar with. By (re)introducing attendees to multiple focus group approaches, we hope to expand the possibilities for evaluators to use focus groups to gather rich and meaningful data. In this interactive session, brief protocol and data analysis examples will be shared to illustrate each focus group type. Attendees will have the opportunity to practice writing questions/probes for the narrative focus group and to participate in analyzing a taped excerpt from a narrative focus group. References to additional information and tools will be provided for further study.

Session Title: Maximizing Validity for Evaluation Quality
Multipaper Session 359 to be held in Pacific C on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Tony Lam, University of Toronto, tonycm.lam@utoronto.ca
Research Validity: State-of-the-Art and Barriers
Presenter(s):
Tony Lam, University of Toronto, tonycm.lam@utoronto.ca
Flanny Alamparambil, University of Toronto, flannya@hotmail.com
Abstract: The concept of validity in quantitative research methodology and program evaluation has evolved over several decades, and its most current structure and content have been developed and presented by Shadish, Cook, and Campbell (SCC) in their 2002 publication Experimental and Quasi-Experimental Designs for Generalized Causal Inference. However, we have found that quantitative methods books published as recently as 2010 have ignored SCC's conceptualization of validity. We have also noticed that some authors of published articles do not adhere to the modern framework in discussing validity. It appears that researchers continue to embrace obsolete validity types and their associated threats. Our research assesses the extent to which the field of quantitative methods espouses SCC's validity typology, and the misconceptions and sources of confusion about this validity framework. We also offer elaborations about validity and validity threats and explain how measurement validity is addressed within SCC's validity framework.
Minimizing Pregroup Differences with Matching and Adjustment
Presenter(s):
William Holmes, University of Massachusetts, Boston, william.holmes@umb.edu
Abstract: This presentation examines the combined use of matching and regression adjustment to produce results superior to matching or adjustment alone. It will discuss the strengths and weaknesses of each procedure and the circumstances in which one strategy performs better than the other. It will explain why the combined use of matching and adjustment produces superior results in reducing pregroup differences. The presentation provides an example of its use and diagnostic evidence as to whether the results are reliable. The combined use improves estimates of treatment effects and reduces bias from pregroup differences. The example uses data from a dose-response evaluation of a family services program providing intensive case management to substance-abusing families that have been substantiated as having abused or neglected their children. The findings show the program has positive effects even after removing pregroup differences.
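As an illustration of the strategy the abstract describes, a minimal sketch of combining nearest-neighbor matching with regression adjustment in R, using the widely available MatchIt package, might look like this (the data set and covariate names are hypothetical):

    # Load the MatchIt package for matching on observed covariates
    library(MatchIt)

    # 1. Match treated and comparison families on observed covariates
    m.out <- matchit(treat ~ age + income + prior_services,
                     data = families, method = "nearest")
    matched <- match.data(m.out)

    # 2. Regression adjustment on the matched sample to remove
    #    residual pregroup differences
    fit <- lm(outcome ~ treat + age + income + prior_services,
              data = matched, weights = weights)
    summary(fit)

The coefficient on treat then estimates the treatment effect after both matching and covariate adjustment - the combined approach the paper argues outperforms either procedure alone.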

Session Title: One Size Does Not Fit All: Capturing Change in Non-profit Capacity
Panel Session 360 to be held in Pacific D on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Clare Nolan, Harder+Company Community Research, cnolan@harderco.com
Discussant(s):
Len Finocchio, The California HealthCare Foundation, lfinocch@chcf.org
Shane Goldsmith, Liberty Hill Foundation, sgoldsmith@libertyhill.org
Abstract: As more foundations become interested in capacity-building as a means of helping nonprofits cope with current economic pressures, evaluation has been challenged to find meaningful measures of organizational effectiveness, along with methods that capture the true impact of capacity-building interventions. This session will compare and contrast approaches used to evaluate two very different foundation-sponsored initiatives to increase nonprofit capacity: (1) Liberty Hill Foundation's efforts to strengthen fundraising and advocacy capacity among minority-led and minority-serving organizations in Los Angeles, and (2) California HealthCare Foundation's efforts to strengthen the management and financial capacity of California safety net dental clinics. The presentations will highlight how different objectives of capacity-building and the strategies used to achieve these objectives affect evaluation design and measurement. Merits and limitations of standardized nonprofit capacity metrics will also be discussed. Staff from both foundations will serve as discussants for this session, which will also invite audience discussion and feedback.
A Fine Balance: Mitigating the Financial Changes Faced by Safety Net Dental Clinics
Fontane Lo, Harder+Company Community Research, flo@harderco.com
The recent economic downturn and state budget crisis have made it increasingly difficult for community dental practices that serve low-income populations to uphold their mission and still keep their doors open. To address this issue, the California HealthCare Foundation and the California Pipeline Program funded a demonstration project to test the effectiveness of practice management consulting, a relatively new model for strengthening the viability of safety net dental practices, as a strategy for helping California's community clinics survive and thrive. In this presentation, the evaluator will describe how the use of a multi-case study design combined with analysis of clinic financial data provided insights about the success of this capacity-building model. Lessons learned include the importance of context in interpreting financial metrics, the necessity of building collaborative relationships with technical assistance consultants and target nonprofit organizations, and the role of qualitative methods in understanding the nuances of capacity-building efforts.
Building the Capacity of Minority-Led and Minority-Serving Organizations
Sonia Taddy, Harder+Company Community Research, staddy@harderco.com
In 2009, the California Wellness and Weingart foundations co-funded the Liberty Hill Foundation Leadership Institute for Change to provide capacity-building assistance to minority-led and minority-serving nonprofits in Los Angeles. The Institute focuses on providing community organizers with training in community organizing, grassroots fundraising, board development, and communications. In this presentation, the evaluator will describe lessons learned based on a mixed-methods approach to evaluation. Lessons learned include the importance of a robust evaluation approach that combines diverse inquiry methods, the need to combine nonprofit self-assessments with more objective comparisons of organizational development, and how to ensure that measurement strategies capture outcome indicators specific to each organization's capacity needs.

Roundtable: A Logic Model Framework for Programs with Systems Change Intents
Roundtable Presentation 361 to be held in Conference Room 1 on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Anna Lobosco, New York State Developmental Disabilities Planning Council, anna.lobosco@ddpc.ny.gov
Dianna Newman, University at Albany, SUNY, dnewman@uamail.albany.edu
Abstract: Recently, strands of several frameworks have been combined into a global logic model framework to guide systemic change efforts. These include: a) the Route to Success framework (PADDC, 2009) for effecting systems change, which includes improving the knowledge base, selecting social strategies, engaging stakeholders, supporting policy entrepreneurs, and using unexpected events (or 'tipping points'); b) a model for change (Newman & Lobosco, 2008) that identifies four domains of successful systems change: policies and procedures, infrastructure, design and delivery of services, and expectations of outcomes and experiences; and c) Scheirer's (2010) delineation of four dimensions of sustainability (individual, organization, community, and population) that emphasizes concepts or programmatic philosophy rather than funding. These strands, when braided together, form a logic model framework useful for education and human services programs with systems change intents. The purpose of this paper is to summarize the strands, define the logic model, and provide examples of use.

Roundtable: Measuring Changes in Organizational Capacity: Recent Findings and Ideas for the Future
Roundtable Presentation 362 to be held in Conference Room 12 on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Amy Minzner, Abt Associates Inc, amy_minzner@abtassoc.com
Kimberly Francis, Abt Associates Inc, kimberly_francis@abtassoc.com
Abstract: Organizational capacity building has become an accepted mechanism to increase the quality and quantity of services delivered by nonprofit organizations. While anecdotal information abounds about capacity building's effectiveness, rigorous evaluations have been limited. Capacity building is difficult to evaluate, which has constrained the number of evaluations completed. In this context, three recent evaluations of large-scale capacity building programs are notable. The roundtable presentation will present the evaluation designs and research findings of these evaluations (the Compassion Capital Fund Demonstration outcome and impact evaluations and the Communities Empowering Youth longitudinal outcome evaluation). It will also discuss the authors' measurement challenges, lessons learned, and ideas about moving the field forward in terms of measurement. The discussion portion of the roundtable will focus on participants' questions, experiences measuring capacity, ideas about defining a limited core set of organizational capacities to be used as indicators of broad capacity, and recommendations for future evaluation.

Session Title: Evaluating Sustainability of Programs in Developing Countries: What do we Measure and How? The Case of Healthcare Quality Improvement in Niger
Expert Lecture Session 363 to be held in Conference Room 13 on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Lynne Franco, EnCompass LLC, lfranco@encompassworld.com
Abstract: All public health programs seek to have lasting positive impact on the populations they serve, particularly in developing countries where the need is great. Yet little evaluation and research has been done to characterize the extent of and factors for sustainability or institutionalization. Institutionalization can be conceptualized as establishing and maintaining something as an integral, sustainable part of a system or organization, woven into the fabric of daily activities and routine. This expert lecture will explore several key issues with evaluation of institutionalization: 1) what is our desired result - how would institutionalization manifest itself at various levels of the system? 2) what evidence can we gather or how can we measure institutionalization? Based on an evaluation case of institutionalization of quality improvement approaches in the Niger Healthcare system, a framework for evaluating institutionalization will be discussed, as well as illustrative findings presented from the application of this framework.

Session Title: Using Bibliometrics for Research Evaluation of Countries, Institutions, and Researchers: A Review of Statistics, Visualizations, and Guidelines
Demonstration Session 364 to be held in Conference Room 14 on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Presenter(s):
Ann Kushmerick, Thomson Reuters, ann.kushmerick@thomsonreuters.com
Abstract: This demonstration will review a selection of metrics and visualizations used in bibliometric research evaluation. Thomson Reuters, formerly the Institute for Scientific Information, is the originator of the first multidisciplinary citation index (Science Citation Index) and has been working in the field of bibliometrics for over 50 years. Over this period, the use of bibliometric techniques to measure the output and impact of scholarly research by countries, institutions, research groups, and individuals has become an established method of quantitative research assessment. Bibliometrics have been adopted widely around the globe because they reflect key values of sound assessment: they are repeatable, transparent, and easily understood. We will review well-established metrics such as the journal impact factor, as well as newer metrics like the h-index, Eigenfactor™, and the aggregate performance indicator. Methods of visualizing bibliometric data for the purposes of decision making will also be discussed.
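To make one of the newer metrics concrete: a researcher's h-index is the largest number h such that h of their papers have at least h citations each. A minimal sketch of the computation in R (the citation counts are hypothetical):

    # h-index: largest h such that at least h papers have >= h citations
    h_index <- function(citations) {
      sorted <- sort(citations, decreasing = TRUE)
      sum(sorted >= seq_along(sorted))
    }

    h_index(c(10, 8, 5, 4, 3))  # 4: four papers each have at least 4 citations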

Session Title: Translating Science to Practice: An Evaluation Perspective
Panel Session 365 to be held in Avila A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Evaluation Use TIG and the Health Evaluation TIG
Chair(s):
Stephanie Gruss, Centers for Disease Control and Prevention, inf6@cdc.gov
Abstract: The two presentations in this session explore the evaluation approaches and methods used in the translation of science into practice in areas of public health. The projects included are at different points on the continuum of the translation process, from translation of knowledge into actionable products, to implementation and institutionalization. Evaluative criteria based on the RE-AIM framework have been used to identify, rank, and prioritize the most effective strategies for diabetes prevention and control; the IOM recommendations project is using evaluation methods such as focus groups, key informant interviews, and surveys to assess the key steps in bridging the gaps between science and practice. The audience will learn the different evaluative approaches and methods used in various aspects of translation work.
Translating the Evidence-base for Diabetes Prevention and Control: An Evaluation Perspective
Stephanie Gruss, Centers for Disease Control and Prevention, inf6@cdc.gov
Bina Jayapaul-Philip, Centers for Disease Control and Prevention, ify3@cdc.gov
The Centers for Disease Control and Prevention's Division of Diabetes Translation, with the Research Triangle Institute, has begun to evaluate the strategies by which state Diabetes Prevention and Control Projects (DPCPs) have translated Evidence-Based Programs, Policies, and Practices (EBPPPs) into effective community and state interventions. This process has involved selecting DPCPs that have successfully implemented EBPPPs and have an accompanying outcome evaluation. Using an interview protocol similar to that proposed by RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance), we will conduct key informant interviews with these DPCPs. This will allow us to assess each EBPPP's reach, impact/evaluation, adoption, implementation, and sustainability. An expert panel will review the resulting data, ranking and prioritizing strategies across each EBPPP. We will produce a list of effective strategies that DPCPs can consider implementing and evaluating. We will share methods, interview protocols, sample interview responses, and expert panel findings.
Using Evaluation to Assess Support Systems Needed to Move From Scientific Recommendations to Implementation
Rashon Lane, Centers for Disease Control and Prevention, rlane@cdc.gov
Judy Berkowitz, Battelle Memorial Institute, berkowitzj@battelle.org
Steve Sullivan, Cloudburst Consulting, 
John Rose, Battelle Memorial Institute, 
Alessandra Favoretto, Battelle Memorial Institute, 
Tiffiny Bernichon, Battelle Memorial Institute, 
Eileen Miles, Battelle Memorial Institute, 
While evidence-based recommendations are intended to provide guidance on effective strategies, the support factors needed to bridge the gap from science to practice are often not explored. The Division for Heart Disease and Stroke Prevention at the Centers for Disease Control and Prevention commissioned an evaluation to assess the uptake, use, and implementation of recommendations for a population-based policy and systems approach to the prevention and control of hypertension released by the Institute of Medicine. Multiple evaluation methods, such as focus groups, key informant interviews, and surveys, were used to assess the critical steps to implement the recommendations. Initial data indicate that support factors such as intervention-specific guidance, training, and technical assistance are needed to move evidence-based recommendations to full implementation. This presentation will highlight barriers and successes in developing an evaluation design that captures meaningful results on the dissemination, translation, and implementation of public health recommendations.

Roundtable: Whose Evaluations Count? Lessons Learned From Process Facilitation Applied in Evaluation
Roundtable Presentation 367 to be held in Balboa A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Ritu Shroff, Independent Consultant, ritushroff2003@yahoo.co.uk
Bob Williams, Independent Consultant, bob@bobwilliams.co.nz
Abstract: Methods drawing on systems thinking and process facilitation for designing evaluation frameworks and areas of inquiry, data collection, and data analysis and generation of recommendations have been successfully applied in conducting evaluations in Africa and Asia. These evaluation methods facilitate exploring inter-relationships (between stakeholders and between intervention and context), eliciting and engaging with perspectives and stakes from multiple stakeholders, and undertaking boundary critique. Experiences with a) multi-stakeholder engagement on program theory of change, b) mixing photography and theatre for development techniques with other methods for data collection, and c) large group facilitation for data analysis and recommendations offer lessons learned on consequences/effects of evaluation, particularly usefulness and uptake. The roundtable will focus on the values that such methods are based on and relationships that they foster between evaluator and client, and effects such as greater internationalization and application of findings and recommendations.

Session Title: Government Evaluation Issues in K-12 Education
Multipaper Session 368 to be held in Balboa C on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Government Evaluation TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Samuel Held, Oak Ridge Institute for Science and Education, sam.held@orau.org
Federal Legislation and State Standards for Accountability in Assessment Education: Evaluating Policy Implementation in Teacher Preparation Programs
Presenter(s):
Aarti Bellara, University of South Florida, abellara@usf.edu
Christopher Deluca, University of South Florida, cdeluca@usf.edu
Abstract: Since the onset of the accountability movement in education, states have significantly increased their use of large-scale standardized assessments as measures of student achievement, teacher effectiveness, and instruments of public policy (Mazzeo, 2001; McMillan, 2008). Moreover, under current federal legislation (NCLB) there has been a growing emphasis on teachers' use of classroom assessment information to guide instruction and individualize student programming. Despite the current emphasis on assessment, little research has been conducted that examines the alignment between national and state policies and teacher preparation in assessment. The purpose of this paper is to evaluate the alignment between preparatory assessment education courses and current assessment policies, including National Council for Accreditation of Teacher Education and Florida Department of Education teacher education accreditation standards.
Federal Policy and Program Educational Evaluations: A Review of Evaluation for Multiple Federal Agencies
Presenter(s):
Julie Gloudemans, University of South Florida, julie.gloudemans@gmail.com
Abstract: This paper reviews 87 publicly accessible federal educational evaluations conducted during a three-year period (2007 to 2010). These evaluations span four different federal agencies: the Office of Management and Budget; Policy and Program Studies Services within the Office of Planning, Evaluation and Policy Development; Evaluation, Inspection, and Management Services within the Office of Inspector General; and the Government Accountability Office. The research was limited to publicly accessible evaluations and did not include federally funded local evaluations. The paper determined that there were some overall similarities (e.g., most evaluations focused on outcomes and specified evaluation questions). However, there were differences between departments (e.g., Policy and Program Studies Services used third-party contractors to conduct evaluations that largely used secondary data and sophisticated statistical techniques).

Session Title: How to Infuse Learning in Non-profits: Developing a Framework for Learning Based on Three Non-profit Case Illustrations
Think Tank Session 369 to be held in Capistrano A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Joelle Cook, Organizational Research Services, jcook@organizationalresearch.com
Discussant(s):
Astrid Hendricks, Hendricks Consulting, ahendricks2@gmail.com
Cameron Clark, Organizational Research Services, cclark@organizationalresearch.com
Abstract: Evaluators talk about "learning" in many ways (e.g., learning organizations, organizational learning, strategic learning). To date, most of the relevant literature and work around organizational learning has focused on foundations and for-profit companies. The goal of this think tank is to discuss and develop frameworks for learning through evaluation in a nonprofit setting. Session facilitators will share an overview of relevant literature and frameworks and present three case examples of advocacy organizations that have successfully used evaluation to support learning in different ways: 1) for operations, 2) to measure the effectiveness of programs and tactics, and 3) to define strategy (ORS, 2010). Following this presentation, session facilitators will lead participants in an exercise to further define a useful, unified framework for learning in these different areas and to gather input on other learning models and practices that would benefit nonprofits that wish to develop this capacity.

Session Title: Developing Internal Evaluation Capacity
Multipaper Session 370 to be held in Capistrano B on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Internal Evaluation TIG
Chair(s):
Amanda Greene, National Institutes of Health, amanda.greene@nih.gov
Discussant(s):
Boris Volkov, University of North Dakota, boris.volkov@med.und.edu
Building Project Leaders' Evaluation Capacity Through a Mentorship Group Model: Critical Success Factors and Lessons Learned
Presenter(s):
Flora Stephenson, Alberta Health Services, flora.stephenson@albertahealthservices.ca
Leslie Barker, Alberta Health Services, leslie.barker@albertahealthservices.ca
Abstract: While project leaders typically do not have a formal evaluation background, their roles often include planning and implementing an evaluation of their projects. To address this knowledge gap, an evaluation mentorship group was formed to build capacity among project leaders. This session will explore this model and share the experiences of a facilitator and a participant. The mentorship group effectively brought project leaders together to improve their understanding of, and commitment to, project evaluation, as well as to build their capacity to be 'local experts' for other project leaders. Project leaders who benefitted the most were individuals with an intrinsic interest in evaluation, a commitment to learning, and an active project. Having projects in different stages of development was a challenge to effective mentoring in a group setting. Suggestions for changes to the mentorship group model and ideas on how the model can be applied in a different setting will be discussed.
Evaluation Capacity Building (ECB) in Complex Systems: Role of Internal Evaluators
Presenter(s):
Stacey Friedman, Foundation for Advancement of International Medical Education & Research, staceyfmail@gmail.com
Abstract: The aim of capacity building is to enable individuals and groups to more effectively use resources to problem solve and effect desired changes. FAIMER supports six fellowship programs for health professions faculty from developing regions of the world. Evaluation capacity is needed at multiple levels. Fellows need to learn how to evaluate their education innovation projects, program faculty need to be able to teach fellows about evaluation, the programs need to be evaluated, and FAIMER needs evaluation of cross-program strategies. FAIMER employs an internal evaluator who supports capacity building at multiple levels, including via program faculty development, curriculum consultation, and facilitation of an evaluation advisory group. ECB has progressed in stages as capacity, needs, and contexts for the programs and organization have changed over time. This adaptive and evolving ECB model offers a way to capitalize on internal evaluation to allow collaborative learning and support innovation within complex systems.

Session Title: Assessing Your Agency's Capacity to Serve Injecting Drug Users
Demonstration Session 371 to be held in Carmel on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Adam Viera, Harm Reduction Coalition, viera@harmreduction.org
Abstract: For nearly all of the CDC-identified evidence-based interventions (EBIs) and public health strategies, organizations can avail themselves of a readiness assessment to determine their existing capacity and capacity-building needs. However, these tools often assume a previous relationship with injection drug users (IDU) and neglect to assess the staff and agency capacity around working with injecting drug users and communities impacted by drug use. Harm Reduction Coalition (HRC) will present a conceptual framework for assessing organizational capacity to serve populations of injecting drug users. This conceptual framework establishes four interrelated levels to be assessed (termed assessment domains), which include the individual staff level, the program level, the organizational level and the community engagement level. Within each assessment domain, facilitators will discuss the various instruments and methods available for assessment at that level, as well as the different areas of capacity (termed capacity domains). HRC will present corresponding examples of capacity domains for each assessment domain, along with sample questions. HRC will also close with a discussion regarding resources for assessing and building capacity to serve injecting drug users.

Session Title: Technology and Classroom Observation: Bringing the ICOT Up to Date
Demonstration Session 372 to be held in Coronado on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Presenter(s):
Talbot Bielefeldt, International Society for Technology in Education, talbot@iste.org
Clare Strawn, International Society for Technology in Education, cstrawn@iste.org
Brandon Olszewski, International Society for Technology in Education, brandon@iste.org
Abstract: For several years, the ISTE Classroom Observation Tool (ICOT) has provided a convenient platform for conducting evaluation observations in technology-using schools. However, the application has aged quickly. With the loss of funding for programming an upgrade, evaluators moved the protocol into a common spreadsheet, incorporating the latest educational technology standards, eliminating glitches from the original tool, and providing a free and flexible platform that can be easily modified by users without programming. This presentation will introduce users to the ICOT, enable them to download the tool, and discuss how to incorporate the tool into evaluation logic models, achieve reliability across observers, aggregate data for analysis, and present results.

Session Title: Map it Out: A Visual and Physical Participatory Method for Data Collection
Demonstration Session 373 to be held in El Capitan A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Jill Lipski Cain, The Improve Group, jill@theimprovegroup.com
Stacy Johnson, The Improve Group, stacyj@theimprovegroup.com
Abstract: Learn about a hands-on tool that engages stakeholders in creating a comprehensive inventory of assets and areas of need. Two evaluators from the Improve Group will demonstrate the community mapping process they developed to help local practitioners assess health assets and needs in Faribault, Martin, and Watonwan Counties. This work was done for a broader health needs assessment funded by Minnesota's Statewide Health Improvement Program. Stakeholders used icons to map the depth, connectedness, and gaps in resources in their rural communities. Findings were used to identify activities that were most effective in addressing the problems of obesity and tobacco use. This session will address what materials and resources are needed, which stakeholders should participate, what can be measured using this tool, how its findings can be used to inform a broader needs assessment, and how this process can be adapted and applied to other contexts.

Session Title: Stages of Evaluation Development in Kazakhstan: Methodology of State Programs Evaluation
Demonstration Session 374 to be held in El Capitan B on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Sergey Gulyayev, Decenta Public Foundation, sergey@decenta.org
Abstract: Program evaluation was first practiced in Kazakhstan in the late 1990s, for several reasons: international projects were being implemented in Kazakhstan; foreign companies brought the tradition with them; Kazakhstani specialists gained experience by attending trainings and seminars on evaluation; and the first local evaluators appeared as representatives of a profession. In the early 2000s, large projects financed by foreign investors and donors were evaluated as an obligatory condition set by the financing party. In 2002-2005, growing discussions, initiated by the NGO sector, raised the necessity of evaluating the impact of state-financed programs. By 2006 there were examples of evaluation of state budget programs. During 2007-2009, the professional community of evaluators from NGOs lobbied for amendments to the national legislation on evaluation of governmental programs. In 2010 the national legislation was amended, and evaluation of state programs' efficiency, effectiveness, and impact became obligatory. In 2011 it became legally possible to evaluate these programs by contracting evaluators from NGOs.

Roundtable: A Qualitative Evaluation of Second-Step: A School Violence Prevention Program in Southern Mexico
Roundtable Presentation 375 to be held in Exec. Board Room on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Enrique Polanco-Cabrales, Centro de Estudios de Las Americas, esantacruz@cela.edu.mx
Edith Cisneros-Cohernour, University of Yucatan, Merida, cchacon@uady.mx
Abstract: This paper presents the findings of a qualitative evaluation study on the implementation of Second-Step, a bullying and violence prevention program, in a private school in Southern Mexico. Stake's responsive evaluation approach was used to examine the pertinence of the program to the Mexican context and culture and the quality of its implementation. Findings of the study indicate that the program has increased students' pro-social behavior and helped to reduce discipline problems and emotionally violent behavior.

Session Title: 25 Low-cost/No-cost Tools for Evaluation
Demonstration Session 376 to be held in Huntington A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
LaMarcus Bolton, American Evaluation Association, marcus@eval.org
Susan Kistler, American Evaluation Association, susan@eval.org
Abstract: Join us for a review of over 25 low-cost/no-cost tools that are useful, used, and user-friendly. Who isn't short on time, short on funds, and short on the patience needed to decide which tools are worth investing the time needed to access and use? Drawing on contributions from over 20 colleagues in different contexts, we'll show examples of tools that are used by practicing evaluators to conduct background research; create and document evaluation plans and logic models; facilitate data cleaning, exploration and analysis; listen to and learn from online exchanges; and promote and enhance collaboration. Each attendee will leave with a handout identifying the tool, its uses, the time needed to learn to use it, and any special considerations. The session will be supplemented by a website with live links to each tool and where and how to learn more.

Session Title: Understanding Sponsors and Stakeholders' Interests and Values in High Stakes Evaluations
Panel Session 377 to be held in Huntington B on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov
Discussant(s):
Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov
Abstract: The word fact comes from the Latin factum, meaning something that actually took place or is taking place. In the world of performance evaluations, with their focus on evidence-based findings, uncovering the facts is the first step along the path that ultimately leads to the formulation of recommendations - statements about what should be done. However, as the philosopher David Hume argued, there is no logical, certain or obvious way to make value statements about what should be done based on any set of facts. Whether one agrees with this or not, it is useful to recognize that well-reasoned arguments and fact-based evidence may not be enough to gain acceptance of recommendations. This panel explores how knowledge and consideration of key stakeholders' interests and values, and how these values influence their interpretation of the facts, can help to inform the conduct and contribute to the success of utilization-focused evaluations.
Tools for Teasing Out Transparency
Roberta Manshel, Kal Krishnan Consulting Services Inc, roberta.manshel@kkcsworld.com
This presentation explores emerging evaluation tools designed to measure productivity on transportation infrastructure projects and how these tools are being used to help evaluators navigate partisan environments and promote the delivery of economical, quality programs. Understanding the policy environment and the ideological drivers is essential to executing a meaningful assessment. Often in the public sector, particularly in the realm of infrastructure investments, budget and policy priorities are driven by opinions and political dogma. The evaluator's challenge is to provide a fact-based framework that fosters public deliberations able to transcend ideology and focus on performance. The presenter will provide examples of emerging evaluation tools used to measure the effectiveness of infrastructure development, focusing on evolving analytical tools that help to tease out the bias that often leads to unrealistic, politically driven assumptions, schedules, and budgets. The presentation will also provide a case study that reveals how missteps in the characterization and dissemination of findings can result in unintended programmatic consequences. In particular, the presenter will describe the development of the Federal Transit Administration's project oversight auditing framework, including tools such as risk assessments and capability and capacity reviews, and will explore how these tools are being used to effectively evaluate performance, balance competing stakeholder concerns, and contribute to the successful delivery of major transit projects (including the $900M East Side Light Rail Extension in Los Angeles) and to prompt improvements in complex and varied environments such as Washington, British Columbia, Idaho, and North Carolina.
Changing Political Values and Their Impact on Evaluation
Jim Brock, Avant Infrastructure Management Consulting, jbrock@avantimc.com
Political values shift according to the prevailing socio-economic circumstances. Because transportation agencies (local, regional, state, and federal) have a variety of stakeholder allegiances and political influences, operating and funding decisions undergo a complex process. Values have changed in recent years: as states' funds have dried up, stakeholders and evaluation sponsors tend to follow the federal lead (because that's where the money is). This presentation will discuss the evolution in political values due to significant economic change, and how this change has impacted performance evaluations in recent years. Economic scarcity has raised the visibility of how economically sound operating decisions are made, and of the very nature of best practices. As transportation agencies struggle to adjust to a new operating and funding environment, comparison to "peer" agencies may not be adequate. New "best" practices are being formulated, and comparative performance evaluations will need to consider the shift to new operating practices.

Session Title: A Complex Youth Competency Initiative in Cleveland, OH
Panel Session 378 to be held in Huntington C on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Deborah Volk, Cuyahoga County Family and Children First Council, dvolk@cuyahogacounty.us
Abstract: My Commitment, My Community (MyCom) is an eight-community youth competency initiative in Cleveland, Ohio, funded by The Cleveland Foundation and Cuyahoga County Family and Children First Council. MyCom is composed of 6 major technical assistance organizations, 8 community "lead" agencies (one in each of the eight communities), and 87 funded and volunteer youth service delivery agencies and organizations. This presentation discusses challenges of evaluation ranging from process and outcome measures to organizational structure, organizational capacity, and internal morale, among other elements affecting a geographically widespread and organizationally complex youth program. Members of the evaluation team will review challenges and discuss the development of specifically designed capacity instruments and the application of social network analysis to enhance organizational structure and organization.
Challenges to Evaluation in a Complex Youth Competency Initiative in Cleveland, OH
Deborah Volk, Cuyahoga County Family and Children First Council, dvolk@cuyahogacounty.us
My Commitment, My Community (MyCom) is an eight-community youth competency initiative in Cleveland, Ohio, funded by The Cleveland Foundation and Cuyahoga County Family and Children First Council. MyCom is composed of 6 major technical assistance organizations, 8 community "lead" agencies (one in each of the eight communities), and 87 funded and volunteer youth service delivery agencies and organizations. This presentation discusses challenges of evaluation ranging from process and outcome measures to organizational structure, organizational capacity, and internal morale, among other elements affecting a geographically widespread and organizationally complex youth program.
Development and Implementation of Evaluation Tools in a Complex Youth Competency Initiative in Cleveland, OH
Mark Fleisher, Case Western Reserve University, mfleishe@kent.edu
The MyCom evaluation team developed program-specific evaluation tools to fit MyCom's organizational structure. Three instruments were developed; these were self-report tools as well as structured and semi-structured instruments to assess agencies' capacity to meet MyCom's goals. The evaluation team also developed a social network instrument that assessed the degree of relationship among agencies and organizations. That assessment was used as the blueprint for remediation of MyCom's organizational design and its requirements for process and outcome data. The organizational assessment's after-action plan not only provided substantive process and outcome data but also functioned to create informal internal social ties that facilitated communication and shared activities among organizations and agencies.

Session Title: Exploring Praxis in Evaluation
Multipaper Session 380 to be held in Laguna A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Bianca Montrosse, Western Carolina University, bemontrosse@wcu.edu
Values & Validity
Presenter(s):
James Griffith, Claremont Graduate University, james.griffith@cgu.edu
Bianca Montrosse, Western Carolina University, bemontrosse@wcu.edu
Abstract: A panel at last year's conference featured contemporary responses to Ernie House's classic Evaluating with Validity. House's original discussion and last year's panel focused on which priorities should be emphasized. This paper extends that discussion by analyzing the values inherent in validity and in preferences for truth, beauty, and justice. What values are we accepting, rejecting, or balancing when we choose between truth, justice, and beauty? Similarly, what values are inherent in refusing to make the choice? In our analysis, we draw not only from House's writings and other evaluation theory classics, but also from more contemporary ideas, such as multicultural validity, cultural competence, and praxis. An interesting feature of this discussion is that unlike other theoretical discussions about evaluation, theory and practice are intertwined here. This discussion centers on the collision of theory with the constraints of reality, hence the evaluator's concern about choosing when values compete.
How Can Social Theory Influence an Evaluation Design: Discussion of Marxism, Postpositivism and Constructivism
Presenter(s):
Fatma Ayyad, Western Michigan University, f4ayyad@wmich.edu
Julien Kouame, Western Michigan University, j5kouame@wmich.edu
Abstract: In this paper, three different paradigms (postpositivism, Marxism, and constructivism) were used separately to construct three frameworks for the same evaluand. Our experience suggests that, though all evaluators are guided by a common logic, the depth and the outcome of their evaluation could be influenced by the paradigm that guides their thoughts.

Session Title: Rigorous Design for Evaluating Vulnerable Populations
Multipaper Session 381 to be held in Laguna B on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Disabilities and Other Vulnerable Populations TIG
Implementation, Systems Change, and Customer Outcomes: Using Mixed-Methods and Random Assignment to Understand the Disability Employment Initiative
Presenter(s):
Anne Chamberlain, Social Dynamics LLC, achamberlain@socialdynamicsllc.com
Douglas Klayman, Social Dynamics LLC, dklayman@socialdynamicsllc.com
Teserach Ketema, United States Department of Labor, ketema.teserach@dol.gov
Richard Horne, United States Department of Labor, horne.richard@dol.gov
Abstract: The Disability Employment Initiative (DEI) is funded by the U.S. Department of Labor, Employment and Training Administration and the Office of Disability Employment Policy to effect system change in nine states. The goal of that system change is to improve access to educational, training, and employment opportunities for individuals with disabilities. The DEI Evaluation involves random assignment of Local Workforce Investment Areas (LWIAs) to pilot and comparison groups. This paper will describe how the evaluation design responds to the request to measure program impact in a complex systems-change initiative. Progress with multi-level buy-in, site visits, interview and focus-group instrumentation, and the development of a DEI Database will be part of the review of this unique evaluation.
Research and Evaluation as Public Health Program Development Tools
Presenter(s):
Louise Palmer, KDH Research & Communication Inc, lpalmer@kdhrc.com
Jana Eisenstein, KDH Research & Communication Inc, jeisenstein@kdhrc.com
Kristen Holtz, KDH Research & Communication Inc, kholtz@kdhrc.com
Eric Twombly, KDH Research & Communication Inc, etwombly@kdhrc.com
Abstract: This paper explains the importance of iterative waves of research and evaluation in developing the Cochlear Implant (CI) School Toolkit as an efficacious program to help children with CIs successfully enter mainstream school. Formative research included focus groups with parents and teachers experienced and inexperienced in mainstreaming children with CIs to determine the pilot program content and format. A quasi-experimental feasibility evaluation used pretest/post-test surveys with teachers to determine pilot efficacy in increasing knowledge, attitudes, and self-efficacy in teaching a child with a CI. A post-test only was implemented with parents to determine if the pilot was useful, comprehensive, and appropriate. Findings helped further program development. A quasi-experimental outcome evaluation used pretest/post-test surveys with teachers and parents to assess the impact of the final program in changing knowledge, attitudes, and self-efficacy among teachers and parents. The combination of formative research, feasibility, and outcome evaluations resulted in an evidence-based, efficacious program.

Roundtable: Evaluation of Study Abroad Outcomes
Roundtable Presentation 382 to be held in Lido A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Julia Shaftel, University of Kansas, jshaftel@ku.edu
Tim Shaftel, University of Kansas, tshaftel@ku.edu
Abstract: The Intercultural Student Attitude Scale (ISAS) is intended for use by college and university study abroad programs to assess student outcomes of international study and evaluate program goals related to student growth in cross-cultural skills and attitudes. The assessment of attitude change as a result of study abroad is in its infancy, and no other validated instrument exists for this purpose. The authors plan to make the ISAS available in the public domain for use by university international study offices and individual study abroad programs, as well as for research applications. The audience will learn about the development and validation of the ISAS, the role of sex and personality factors in self-selection for international study, and how student attitudes change with study abroad experience.

Session Title: Measurement of Interprofessional Education and Health Care Collaboratives
Multipaper Session 383 to be held in Lido C on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Robert LaChausse, California State University, San Bernardino, rlachaus@csusb.edu
Looking Above the Low-Hanging Fruit: The Importance of Capturing Health Outcomes in Evaluations of Healthcare Collaboratives
Presenter(s):
Ryan Burke, Health Policy Research Northwest, rburke@hprnw.org
Heidi Hascall, Health Policy Research Northwest, hhascall@hprnw.org
Clara Williams, Health Policy Research Northwest, cwilliams@hprnw.org
Erin Owen, Health Policy Research Northwest, eowen@hprnw.org
Abstract: In the midst of a developing national healthcare strategy, communities are forming healthcare collaboratives to meet the needs of their uninsured and underinsured populations. These collaboratives are tailored to their communities and specific populations, making each program unique. However, their goal is generally the same: to improve their clients' health and quality of life. Despite this goal, their program evaluations often focus on low-hanging fruit: process outcomes and outputs that are easily captured (e.g., number of clients served, quantity of services provided). While process outcomes are important for program planning, health outcomes are crucial for measuring the program's impact on the community. Health Policy Research Northwest (HPRN) has worked with healthcare collaboratives in Oregon to conduct comprehensive evaluations that include process and health outcome measures. This presentation will discuss HPRN's methods for measuring health outcomes and elicit ideas from audience members about how these outcomes are captured in other fields.
Inventory of Quantitative Instruments to Measure Interprofessional Education and Collaborative Practice in Health Care
Presenter(s):
Lynda Weaver, Bruyere Continuing Care, lweaver@bruyere.org
Rebecca Law, Memorial University, rlaw@mun.ca
Jana Lait, Alberta Health Services, jana.lait@albertahealthservices.ca
Robin Roots, Northern Health, roots@island.net
Luljeta Pallaveshi, University of Western Ontario, lpallave@uwo.ca
Patti McCarthy, Memorial University, pattimccarthy@mun.ca
Siegrid Deutschlander, Alberta Health Services, siegrid.deutschlander@albertahealthservices.ca
Esther Suter, Alberta Health Services, esther.suter@albertahealthservices.ca
Nancy Arthur, University of Calgary, narthur@ucalgary.ca
Judy Burgess, University of Victoria, jburgess@uvic.ca
Abstract: Recent emphasis from the Canadian federal and provincial governments on improving interprofessional teamwork in health care settings has driven a surge of changes in health care professionals' education and ways of practice. Evaluation of these endeavours is vital to map their progress and impact. To support such evaluation efforts, a subcommittee of the Canadian Interprofessional Health Collaborative (CIHC) conducted a comprehensive literature search to compile a state-of-the-art inventory of quantitative evaluation instruments related to interprofessional education and practice. The completed table contains over 120 instruments measuring six modified Kirkpatrick evaluation outcome domains: attitudes and perceptions; knowledge, skills, and abilities; behaviour; organizational practice; patient satisfaction; and provider satisfaction. This inventory, two years in the making, will reside on the CIHC website. In addition to describing the search process and results, this paper will illustrate the subsequent value for educators, administrators, practitioners and evaluators in finding tools that meet their evaluation needs.

Session Title: Using Logical Framework to Identify Outcomes and Develop Performance Indicators in Science & Technology Program Proposals
Skill-Building Workshop 384 to be held in Malibu on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Presenter(s):
Shan Shan Li, Science & Technology Policy Research and Information Center, ssli@stpi.narl.org.tw
Ling-Chu Lee, Science & Technology Policy Research and Information Center, lclee@stpi.narl.org.tw
Abstract: Ex ante evaluation is a fundamental tool for effective management and a formal requirement. A good ex ante evaluation rests on a well-built framework that makes the logical connections between a program proposal's goals and its indicators explicit. The session is divided into two parts. The first explains the main concepts of the Logical Framework Approach (LFA) and walks through its procedures step by step, including the practical "tricks" of each step. The second invites the audience to experience, participate in, and discuss the whole process themselves, using a case to demonstrate it from start to finish; a generic sketch of the underlying matrix appears below. After the session, we welcome continued discussion of the method via email, and related documents are available as references. Future work will develop the method in greater detail, for example how to apply SWOT analysis within LFA and how to develop hypotheses.
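For orientation, here is a minimal sketch of the standard four-by-four logframe matrix on which LFA builds; it is a generic illustration, not the presenters' material, and the example entries are placeholders.

Level       | Narrative summary      | Indicators           | Means of verification | Assumptions
Goal        | long-term objective    | impact indicators    | statistics, studies   | external conditions hold
Purpose     | direct program effect  | outcome indicators   | surveys, records      | purpose contributes to goal
Outputs     | deliverables produced  | output indicators    | program reports       | outputs achieve purpose
Activities  | tasks and inputs       | budgets, milestones  | management records    | activities yield outputs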

Session Title: I'm With the Brand: A Behind the Scenes Look at the Three T's of Creating an Effective Internet Presence
Demonstration Session 385 to be held in Manhattan on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Independent Consulting TIG
Presenter(s):
Richard Eddy, Cobblestone Applied Research & Evaluation Inc, rich.eddy@cobblestoneeval.com
Abstract: The purpose of this presentation is to provide an overview of the tools, techniques, and talents useful in developing an effective online presence. An established Internet presence can be used by an independent consultant to build credibility, establish a unique brand, and connect with existing and prospective clients. The presenter will outline a model that implements some of the ideas an independent practitioner or small consulting practice can use to leverage their online identity, specifically demonstrating what this online presence should look like, including the integration of popular social media platforms such as blogs (Wordpress, Blogger, etc.), Facebook, Twitter, and LinkedIn. Some of the many ways these tools can be used to enhance the success of an independent consulting practice will also be discussed, along with privacy, security, and reputation management.

Session Title: A Mixed-Methods Approach to Understanding the Impact of Requiring Citizenship Documentation for Medicaid Enrollment
Panel Session 386 to be held in Monterey on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Mixed Methods Evaluation TIG
Chair(s):
Robert Phillips, The California Endowment, rphillips@calendow.org
Abstract: The federal Deficit Reduction Act of 2005 (DRA) requires citizens applying for or renewing Medicaid coverage to provide documentation establishing citizenship and identity. Implementing this policy affected state and county Medicaid administrators who had to modify existing enrollment processes, as well as current and potential beneficiaries who faced an additional application step before obtaining coverage. This session presents findings from a comprehensive mixed-methods evaluation of the impact of DRA implementation in California, a state that aimed to implement the DRA with as much flexibility as possible to avoid the negative consequences for enrollment that some states reported. To assess the impact of DRA implementation on counties and clients, two surveys of county-level administrators and site visits to six counties were conducted. In combination with Medicaid enrollment records, data collected through the surveys enabled a rigorous quantitative estimate of the DRA's impact on enrollment and retention trends for Medicaid beneficiaries.
The Medicaid Deficit Reduction Act of 2005 (DRA) Citizenship and Identity Documentation Requirements: Findings From a Survey of California Counties
Dana Hughes, University of California, San Francisco, dana.hughes@ucsf.edu
Vernon Smith, Health Management Associates, vsmith@healthmanagement.com
Carolina Davis, Health Management Associates, cdavis@healthmanagement.com
To evaluate the impact of the DRA requirements on California, two statewide surveys of all 58 county social service directors were conducted, in 2008 and 2009. The first survey gathered information during early implementation, while the second was conducted after full operationalization. According to respondents, the DRA requirements had little effect on excluding undocumented residents from Medicaid coverage, the policy's primary intended objective, largely because the undocumented did not apply for Medicaid prior to the new requirements. Rather, the new requirements created burdens on citizens and nationals, possibly leading to delayed entry into care, and caused new administrative burdens and greater costs for counties. The survey findings suggest that policy and procedural changes to the DRA requirement could ease the burdens on counties and Medicaid clients while still assuring the integrity of the Medi-Cal eligibility process and supporting federal efforts to streamline Medicaid enrollment.
Impact of the DRA Citizenship and Identity Documentation Requirement on Enrollment and Retention in Medi-Cal
Margaret Colby, Mathematica Policy Research, mcolby@mathematica-mpr.com
Brittany English, Mathematica Policy Research, benglish@mathematica-mpr.com
Between June 2007 and September 2008, California's 58 counties began implementing the DRA's citizenship and identity documentation requirement for beneficiaries seeking Medi-Cal enrollment or renewal. Using enrollment data from the Medi-Cal Eligibility Data System from May 2007 through March 2009, we conducted multivariate regression analyses to estimate average county-level monthly changes in retention, full scope enrollment, and restricted scope enrollment. Models included county and month fixed effects and an indicator for DRA implementation. Separate regressions were run for populations subject to and exempt from the DRA (i.e. current Medicare beneficiaries) and for subgroups defined by age and primary household language. Estimates suggest that DRA implementation did not impact Medi-Cal retention or restricted scope enrollments. However, enrollment for full scope beneficiaries subject to the DRA decreased by 3.8 percent (p=0.019), with larger effects for children. This estimate translates into about 60,000 fewer enrollments than expected in the year following DRA implementation.
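A minimal sketch of the kind of fixed-effects specification the abstract describes, with notation assumed here rather than taken from the paper:

$$Y_{ct} = \alpha_c + \delta_t + \beta\,\mathrm{DRA}_{ct} + \varepsilon_{ct}$$

where $Y_{ct}$ is the enrollment or retention outcome for county $c$ in month $t$, $\alpha_c$ and $\delta_t$ are county and month fixed effects, $\mathrm{DRA}_{ct}$ indicates whether county $c$ had implemented the requirement by month $t$, and $\beta$ is the estimated implementation effect.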

Session Title: Video in Evaluation: Methodological Opportunities and Technical Tips
Demonstration Session 387 to be held in Oceanside on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Corey Newhouse, Public Profit, corey@publicprofit.net
Abstract: This session will explore opportunities to incorporate video into evaluation, both as a means to complement written findings and as a cognitive elicitation technique. Drawing on the presenter's experiences in an evaluation of a teacher coaching initiative, the session will include a brief summary of the literature, exemplars of video use in evaluation, and tips for successful implementation of this method. The session content is geared toward evaluators who are considering incorporating video into their practice or have just begun to do so.

Session Title: Assessing Coalition Building and Relationships Through Social Network Analysis
Expert Lecture Session 389 to be held in Palos Verdes A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Social Network Analysis TIG
Presenter(s):
Todd Honeycutt, Mathematica Policy Research, thoneycutt@mathematica-mpr.com
Marykate Zukiewicz, Mathematica Policy Research, mzukiewicz@mathematica-mpr.com
Debra Strong, Mathematica Policy Research, dstrong@mathematica-mpr.com
Discussant(s):
Tom Bartholomay, University of Minnesota, barth020@umn.edu
Abstract: Among the many objectives of funding a program, funders and participants often hope to build relationships among those involved that will last beyond the initial project funding. The Consumer Voices for Coverage program, funded by the Robert Wood Johnson Foundation initially for three years, sought to help 12 state-level consumer advocacy coalitions address health policy in their states as well as strengthen the relationships among participating organizations. As part of a larger multi-mode evaluation, we used social network analysis (SNA) methods to assess the extent to which the participating organizations of each coalition had worked together before the grant and how organizations communicated and worked collaboratively with each other in the first and third grant years. This presentation will describe how we used SNA for the evaluation and compare our findings on coalition building and relationships with other results from the evaluation.
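As a hedged illustration of the wave-to-wave comparisons SNA supports, the following minimal Python sketch uses the networkx library; the organization names and ties are hypothetical placeholders, not the study's data.

import networkx as nx

# Hypothetical collaboration ties reported by coalition members in each wave
year1_ties = [("OrgA", "OrgB"), ("OrgB", "OrgC")]
year3_ties = [("OrgA", "OrgB"), ("OrgB", "OrgC"), ("OrgA", "OrgC"), ("OrgC", "OrgD")]

for label, ties in [("Grant year 1", year1_ties), ("Grant year 3", year3_ties)]:
    network = nx.Graph(ties)
    # Density: the share of possible pairwise ties that actually exist
    print(label, "density =", round(nx.density(network), 2))
    # Degree centrality: how connected each organization is, normalized
    print(label, "centrality =", nx.degree_centrality(network))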

Session Title: The Canadian Federal Evaluation Policy: Drivers, Features for Supporting Quality and Use of Evaluation, and Lessons Learned
Expert Lecture Session 391 to be held in Redondo on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Evaluation Policy TIG
Chair(s):
Anne Routhier, Treasury Board of Canada Secretariat, anne.routhier@tbs-sct.gc.ca
Presenter(s):
Anne Routhier, Treasury Board of Canada Secretariat, anne.routhier@tbs-sct.gc.ca
Abstract: On April 1st, 2009, a renewed Canadian federal Policy on Evaluation, along with a Directive and Standard, came into effect. The objective of this policy, which applies to most departments and agencies across the Government of Canada, is to create a comprehensive and reliable base of evaluation evidence that is used to support policy and program improvement, expenditure management, Cabinet decision making, and public reporting. In this presentation, the Senior Director of the Treasury Board of Canada Secretariat's Centre of Excellence for Evaluation will provide participants with an overview of the drivers of the renewed policy. Moreover, this presentation will explore emerging trends in evaluation quality and utilization since the introduction of the policy in 2009, as assessed through the annual review of evaluation functions in federal departments and agencies. Finally, this presentation will review lessons learned that may inform evaluation policy development in other jurisdictions.

Session Title: Methods and Models for Evaluating Training Programs
Multipaper Session 392 to be held in Salinas on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the AEA Conference Committee
Evaluation of an Internal Training Program: Going Beyond Kirkpatrick: A Case Study of a Federal Training Program and What We Can Learn From It in the Evaluation of Training Programs
Presenter(s):
Anna Dor, Claremont Graduate University, annador@hotmail.com
Abstract: There is great demand for filling a gap in the field of evaluation pertaining to the evaluation of training programs. This paper outlines an internal evaluation program implemented within a component of the Department of Homeland Security to evaluate internal training delivered to employees. The author outlines the process and presents the strengths and weaknesses of the model. The process goes beyond the traditional Kirkpatrick model of training evaluation: the author argues that the model presented in this case study builds evaluation capacity and organizational learning, drawing on adult learning methodologies and on stakeholder engagement and empowerment. Learners, trainers, and management are all active participants in the evaluation of the training programs. As a result, the training programs are continuously enhanced to meet the needs of the learners and the organization.
Evaluation of Cost and Social Effects on Short-term Training Programs
Presenter(s):
Reiko Kikuta, Tokyo Institute of Technology, kikuta.r.aa@m.titech.ac.jp
Hiromitsu Muta, Tokyo Institute of Technology, muta@hum.titech.ac.jp
Abstract: This study proposes a method for measuring the cost and social effects of short-term training programs, such as the transfer of knowledge and skills acquired through a program, the application of such knowledge and skills to new development activities, and the monetary effects of a program. The data were collected through questionnaires from 404 participants, from 22 countries, in Asian Productivity Organization training programs. The results showed that the knowledge and skills covered by a program increased participants' annual income and produced other effects, including social effects such as the transfer of the knowledge and skills acquired through the program to others and their application to new development activities. The rate of return of the contribution to new development activities was higher than both the rate of return of the increase in participants' annual income and that of the effects of transferring knowledge and skills.
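As a worked illustration of one common meaning of "rate of return" in training evaluation (a generic formulation; the authors' exact calculation is not given in the abstract), the rate of return $r$ can be defined as the discount rate that equates program cost with the stream of monetized benefits:

$$C = \sum_{t=1}^{T} \frac{B_t}{(1+r)^{t}}$$

where $C$ is the cost of training a participant and $B_t$ is the monetized benefit in year $t$, whether an increase in the participant's own income or the valued effects of transferring knowledge and skills to others. Comparing the $r$ implied by each benefit stream is one way to make the comparison the paper describes.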

Session Title: Methods in Evaluating Advocacy Efforts: Grantmakers' Perspectives
Multipaper Session 393 to be held in San Clemente on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Aimee White, Trident United Way, awhite@tuw.org
A Clash of 'Policy Change' Values: The Challenges of Attempting to Evaluate Everything That The Diana, Princess of Wales Memorial Fund Does
Presenter(s):
Andrew Cooper, The Diana, Princess of Wales Memorial Fund, andrew.cooper@memfund.org.uk
Abstract: The Diana, Princess of Wales Memorial Fund ('the Fund') is undertaking a major evaluation of our work as we spend our remaining capital on policy change and move towards closure in 2012. Conducting a Fund-wide evaluation raises many challenges, including how our contribution to policy change is measured, how we compare different programme areas, and how to share learning externally. However, the greatest challenges relate to the issue of values. For example, which stakeholders is the evaluation serving: our Board, beneficiaries, grantees, or the broader philanthropy sector? As we are spending out, the purpose of the evaluation is to influence other funders and philanthropists. This raises tricky questions about the role of evaluation in policy change and influencing, and the need to be transparent about our successes and failures without damaging the reputations of others. This session will explore how we are tackling the potential clash of values in relation to this major evaluation of our work.
Taking the Measure of "Role" And "Contribution": A Mixed Methods Approach to Policy Evaluation
Presenter(s):
Bronwyn Mauldin, First 5 LA, bmauldin@first5la.org
Teryn Mattox, First 5 LA, tmattox@first5la.org
Abstract: This case study describes an evaluation of five years of work toward three policy goals by a public grantmaking agency. We began by interviewing internal and external stakeholders to determine what progress had been made toward each policy goal and to understand perceptions of our agency's role in and contribution to that progress. In addition, policy department staff were asked to identify their start and end points on a continuum of policy change and to provide quantitative data on variables such as dollar amount invested and staff effort. Findings from the two methods were not identical, but complementary. We found that stakeholders perceived the agency's role as a funder to have had the greatest impact on its policy goals. As a result, we recommended an increase in policy grantmaking. While this effort to quantify "role and contribution" in policy change provided a deeper understanding of the agency's work, the model will need further development.

Session Title: Responding to Insufficient RFPs
Skill-Building Workshop 394 to be held in San Simeon A on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Independent Consulting TIG and the Graduate Student and New Evaluator TIG
Presenter(s):
Nakia James, Western Michigan University, nakia.s.james@wmich.edu
Abstract: It is often necessary for nonprofit organizations (NPOs) to formulate a Request for Proposals (RFP) to procure essential services in fulfillment of their grant obligations. This is most common when seeking the services of an external evaluator. Since most NPOs do not have internal evaluation staff, developing an appropriate RFP can be quite challenging, and this may be the first obstacle encountered by both the NPO and the potential consultant. Many NPOs are unaware of how to appropriately include key components in the RFP, yet the quality of the bidder's response is highly dependent on the quality of the RFP. Accordingly, it is imperative that we strive to better understand the RFP process from its conception to its fulfillment. The experience of potential consultants will also be quite useful in further understanding how RFPs are developed and how to appropriately address and respond to any inadequacies in them.

Session Title: Recruiting Participants for Education Studies: Practical Strategies and Advice
Demonstration Session 395 to be held in San Simeon B on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Elizabeth Autio, Education Northwest, elizabeth.autio@educationnorthwest.org
Jason Greenberg Motamedi, Education Northwest, j.g.motamedi@educationnorthwest.org
Abstract: Recruitment of participants to take part in a study is a challenge many evaluators face, and failure to accomplish this task can result in the demise of a study before it even begins. In education, recruitment can be particularly daunting: multiple layers of stakeholders must buy into the study, there are sometimes ethical concerns about providing or denying the treatment to students, and a plethora of programs and initiatives compete for educators' time and attention. In this session, evaluators who successfully recruited schools for a randomized controlled trial (RCT) of an instructional model share their lessons learned. The session will include practical strategies and concrete advice for other evaluators, covering budgeting, developing an approach, creating systems and materials, outreach, interacting with potential participants, delivery of the message, and welcoming, tracking, and retaining participants. These lessons can be applied to recruitment for all studies, and to RCTs in particular.

Roundtable: Evaluation of a Multi-Site Caregiver Training Program in Rural Arkansas: Challenges and Lessons Learned
Roundtable Presentation 396 to be held in Santa Barbara on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Jasna Vuk, University of Arkansas, jvuk@uams.edu
Amyleigh Overton-McCoy, University of Arkansas, overtonmccoyamyl@uams.edu
Sherry White, University of Arkansas, slwhite2@uams.edu
Robin McAtee, University of Arkansas, mcateerobine@uams.edu
Abstract: The Home Caregiver Training Project was funded in 2009, for a three-year period, at four Regional Centers on Aging in the state of Arkansas for the purpose of training caregivers for the elderly in their homes. The project replicated an existing training program, the Schmieding Home Caregiver Training Program in Northwest Arkansas. The evaluation plan used the Logic Model framework and was designed to facilitate judgments regarding the merit, value, and impact of the program; guide successful replication; and collect evidence to justify the funding of additional training programs at four other locations. A comprehensive evaluation plan was developed; however, evaluating the program at four sites over multiple years has been challenging. Lessons have been learned from implementing the program and evaluating the quality of its training. The value and impact of the program on communities, caregivers, and the elderly who hire trained caregivers present new challenges for evaluation.

Session Title: Hierarchical Linear Modeling as a Valuable Tool in Evaluation
Multipaper Session 397 to be held in Santa Monica on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Investigating the Experimental Impact of the Scholastic READ 180 on Low-Achieving Incarcerated Youth
Presenter(s):
Jing Zhu, Metis Associates, jzhu@metisassoc.com
William Loadman, The Ohio State University, loadman.1@osu.edu
Richard Lomax, The Ohio State University, lomax.24@osu.edu
Raeal Moore, National Center for Educational Achievement, rmoore@nc4ea.org
Abstract: Greater emphasis has been placed on enhancing the literacy skills and achievement of struggling adolescent readers. This study investigated whether the Scholastic READ 180 program had a meaningful impact on the reading achievement of low-performing incarcerated youth in the state of Ohio, controlling for salient covariates. The study was based on a randomized controlled trial design in which intent-to-treat (ITT) youth were randomly assigned to either the experimental group or a comparison group receiving traditional English instruction. Hierarchical linear modeling was conducted on data collected during four school years (2006-2010). Both cross-sectional and longitudinal analyses indicated that the ITT incarcerated youth exposed to the READ 180 program significantly outperformed their comparison counterparts, and the findings from analyses of multiple reading outcomes consistently supported this conclusion. Issues related to multiple comparisons are also discussed.
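For readers unfamiliar with hierarchical linear modeling, a minimal sketch of a generic two-level model of the kind the abstract implies follows; the notation is assumed for illustration, not the study's actual specification.

Level 1 (youth $i$ in site $j$): $Y_{ij} = \beta_{0j} + \beta_{1j}T_{ij} + \beta_{2j}X_{ij} + r_{ij}$

Level 2 (site): $\beta_{0j} = \gamma_{00} + u_{0j}$, with $\beta_{1j} = \gamma_{10}$ and $\beta_{2j} = \gamma_{20}$

where $Y_{ij}$ is a reading outcome, $T_{ij}$ indicates assignment to READ 180, $X_{ij}$ is a salient covariate such as a pretest score, and $\gamma_{10}$ is the program effect estimated while accounting for the nesting of youth within sites.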

Session Title: Finances and Institutions: Finding Value Through Evaluation
Multipaper Session 398 to be held in Sunset on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Business and Industry TIG
Chair(s):
Thomas Ward II, United States Army, thomas.wardii@cgsc.edu
Why Financial Institution Evaluation Is Needed: The Financial Crisis in Afghanistan and Its Impact on SMEs and Economic Growth
Presenter(s):
Noorullah Noori, Afghan Growth Finance, noorullah.noori2007@gmail.com
Abstract: Until recently, the evaluation of financial institutions, and the development and use of such evaluations, has not been widely discussed, yet it is extremely important for determining how best to implement robust SME financing programs. This paper describes and defines financial institution evaluation and its importance. It describes why financial institutions and commercial banks in Afghanistan avoid lending to SMEs, viewing them as costly, high-risk credits, and where they prefer to invest instead. It describes decision makers, networks, and decision outcomes. It then presents examples of several commercial banks and financial institutions illegally using public capital to generate income inside and outside the country. More specifically, it shows how this illegal use of public capital has affected economic growth and SMEs in the country and diverted the prospects for sustainable development by ignoring the need for growth through bottom-up capital formation.
Inclusive Growth and Private Sector Development: Evidence from Evaluation
Presenter(s):
Izlem Yenice, World Bank, iyenice@ifc.org
Adesimi Freeman, World Bank, afreeman@ifc.org
Abstract: There is growing consensus that the private sector is essential for growth and poverty reduction. Yet the link from growth to poverty reduction is neither automatic nor universal. Growth is good for the poor, but the impact of growth on poverty reduction depends on both the pace and the pattern of growth. This paper assesses the effects of interventions by the International Finance Corporation (IFC) on growth and on the distributional patterns of growth. Data for the assessment come from a random sample of 158 projects evaluated by the Independent Evaluation Group. The evaluation shows that the type of growth that IFC supports matters for poverty reduction. An enhanced focus on growth and its distributional aspects need not come at the expense of financial profitability. Most IFC investment projects generate satisfactory returns, but it has been challenging to incorporate distributional issues in its interventions.

Session Title: Selecting Measures and Instruments for Studying Fidelity and Outcomes in Education: Lessons Learned From an Evaluation of a Teacher Professional Development Program
Multipaper Session 399 to be held in Ventura on Thursday, Nov 3, 1:35 PM to 2:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Castle Sinicrope, Social Policy Research Associates, csinicrope@spra.com
Yasuyo Abe, Berkeley Policy Associates, yasuyo@bpacal.com
Abstract: With the current emphasis on rigorous research and experimental designs in education, evaluators must often negotiate different values and perspectives on measuring implementation and outcomes. While outcomes continue to be a primary focus in education evaluation, there is a growing recognition of the importance of understanding fidelity of implementation. In this session, presenters share lessons learned from measuring implementation fidelity and outcomes in a random assignment evaluation of a multi-year teacher professional development program. The presentation broadly reflects on balancing different stakeholder values in selecting measures and instruments to answer questions about fidelity and outcomes. Specific challenges to be covered in the session include identifying critical program components, selecting fidelity criteria, specifying outcome measures, and deciding between developing, adapting, or adopting existing instruments. This session will also share lessons learned from timing data collection efforts relative to program delivery and reporting implementation and outcome findings.
Lessons Learned From Documenting Implementation Fidelity: Developing Implementation Fidelity Measures for a Teacher Professional Development Program
Vanora Thomas, Berkeley Policy Associates, vanora@bpacal.com
Yasuyo Abe, Berkeley Policy Associates, yasuyo@bpacal.com
There is a growing recognition of the value of measuring fidelity of implementation in education research. An understanding of fidelity of implementation, the degree to which an intervention is delivered as intended, is essential to a comprehensive understanding of intervention outcomes and the identification of contextual factors that support or hinder implementation. However, few published studies provide details on the development and use of program-specific fidelity measures for intensive school-based interventions. This presentation will outline the lessons learned from measuring implementation fidelity in a multi-year random assignment evaluation of a teacher professional development program. Topics to be covered include the identification of the program's critical components and processes, selection of fidelity criteria, and the development of instruments to measure and monitor implementation. This session will also include the challenges associated with the collection of detailed program records and the potential roadblocks faced when presenting implementation findings.
Measuring Program Outcomes: Defining and Focusing Outcomes for a Teacher Professional Development Program
Castle Sinicrope, Social Policy Research Associates, csinicrope@spra.com
While outcomes are widely studied in education, selecting and measuring outcomes continues to pose challenges for evaluators. This presentation reflects on challenges faced in defining and measuring teacher and student outcomes in a recent multi-year teacher professional development evaluation. Key challenges included identifying outcomes that were aligned with the program theory of change and the values and priorities of the different stakeholders. Topics to be covered during the presentation include selecting short-term versus long-term outcomes, limiting and prioritizing outcomes to minimize multiple comparisons, timing data collection relative to program implementation, and reporting outcome findings. This presentation will also reflect on three additional considerations when selecting teacher and student outcomes for evaluating teacher professional development programs: 1) using established, national student assessments versus local, state-level student assessments; 2) adapting existing instruments versus developing new instruments; and 3) challenges posed by under- and over-alignment of instruments with programs.
