Session Title: Understanding the Link Between Research and Practice
Multipaper Session 501 to be held in International Ballroom A on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
John Gargani,  Gargani & Company Inc,  jgargani@berkeley.edu
The Golden Spike: Creating the Link Between Research and Practice
Presenter(s):
Jennifer Brown,  Cornell University,  jsb75@cornell.edu
William Trochim,  Cornell University,  wmt1@cornell.edu
Abstract: The need for evidence-based practice is well established, and many efforts are underway to facilitate the transition toward practice that is firmly grounded in solid evidence. However, this transition has focused almost exclusively on how research can be more effectively disseminated to a practitioner audience. For evidence-based practice to truly succeed, a better link must be established between the questions that arise in practice and the research available to address them. This paper will outline “The Golden Spike” method for linking an evidence base with program theory, which will help streamline evaluation by creating a bidirectional system for research-practice integration.
Implications of Scientific versus Stakeholder Theory in Formulating Program Theory and Designing Theory-driven Evaluation
Presenter(s):
Huey T Chen,  University of Alabama, Birmingham,  hchen@uab.edu
Abstract: This article will apply the conceptual framework of program theory to systematically compare a scientific theory-based program and a stakeholder-based program. The comparisons include all the components of the change model (intervention, determinants, and outcomes) and all the components of the action model (implementing organization, implementers, intervention and service delivery protocols, associated organizations/community partners, ecological context, and target population). Concrete examples will be used to illustrate the differences. The article will provide insight into the nature and characteristics of the scientific theory and stakeholder theory traditions of program theory and the implications of these differences for program design and evaluation. It will also contribute by highlighting the pros and cons of these two theory traditions, illuminating potential strategies to reconcile conflicts between them, and proposing methods and strategies that evaluators could use to evaluate such programs.

Session Title: Principles of System Change
Think Tank Session 503 to be held in International Ballroom C on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Teresa Behrens,  W K Kellogg Foundation,  tbehrens@wkkf.org
Discussant(s):
Pennie G Foster-Fishman,  Michigan State University,  fosterfi@msu.edu
Branda Nowell,  North Carolina State University,  blnowell@chass.ncsu.edu
Abstract: Although system change efforts in the human services or community change fields have become increasingly popular, this popularity has significantly outpaced their proven success. In fact, many systems change efforts report outcomes that are far less than what was promised or hoped for (e.g., Amado & McBride, 2002; King-Sears, 2001; Traynor, 2000). We will explore in this think tank the value of incorporating a systems thinking approach to the design and evaluation of systems change efforts. In small breakout groups we will examine specific principles from the systems thinking world and their potential application to systems change efforts. Groups will explore the value of these principles and identify strategies and methods for incorporating them into the design and evaluation of systems change efforts. The costs and benefits of applying a systems thinking perspective will be explored following these breakout sessions.

Session Title: Shifting the Bell Curve: The Benefits and Costs of Raising Student Achievement
Expert Lecture Session 504 to be held in  Liberty Ballroom Section A on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Brian Yates,  American University,  brian.yates@mac.com
Presenter(s):
Stuart Yeh,  University of Minnesota,  yehxx008@umn.edu
Abstract: Benefit-cost analysis was conducted to estimate the increase in earnings, increased tax revenues, value of less crime, and reductions in welfare costs attributable to nationwide implementation of rapid assessment, a promising intervention for raising student achievement in math and reading. Results suggest that social benefits would exceed costs by a ratio of 3157. Fiscal benefits to the federal government would exceed costs by a ratio of 738. Fiscal benefits for all but two state governments would exceed costs by a ratio of 88. Sensitivity analyses suggest that the findings are robust to an 88-fold change in the underlying parameters, despite numerous conservative assumptions.

Session Title: What Counts as Credible Evidence in Contemporary Evaluation Practice?
Think Tank Session 505 to be held in Liberty Ballroom Section B on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Theories of Evaluation TIG
Presenter(s):
Stewart I Donaldson,  Claremont Graduate University,  stewart.donaldson@cgu.edu
Discussant(s):
Christina Christie,  Claremont Graduate University,  tina.christie@cgu.edu
Sandra Mathison,  University of British Columbia,  sandra.mathison@ubc.ca
Gary T Henry,  University of North Carolina, Chapel Hill,  gthenry@email.unc.edu
George Julnes,  Utah State University,  gjulnes@cc.usu.edu
Debra Rog,  Westat,  debrarog@westat.com
Leonard Bickman,  Vanderbilt University,  leonard.bickman@vanderbilt.edu
Michael Scriven,  Western Michigan University,  scriven@aol.com
Sharon Rallis,  University of Massachusetts, Amherst,  sharonr@educ.umass.edu
Thomas Schwandt,  University of Illinois at Urbana-Champaign,  tschwand@uiuc.edu
Jennifer Greene,  University of Illinois at Urbana-Champaign,  jcgreene@uiuc.edu
Abstract: This session is designed to engage a fundamental issue facing evaluation practice today: what counts as sound evidence for decision making? A renowned group of evaluation scholars will serve as breakout group leaders, and we will explore a wide range of issues that address the fundamental challenges of designing and executing high-quality applied research and evaluation studies. Each of these leading scholars will offer diverse views about the range of evaluation methods that are likely to produce credible evidence that is both trustworthy and influential. The session facilitators will offer an introduction to the broad range of issues that will be addressed by each breakout session leader, setting the context for answering the question "what counts as evidence." Participants will leave this session with a new understanding of the philosophical, theoretical, methodological, political, and ethical dimensions of gathering credible evidence to answer fundamental evaluation questions across real-world contexts. Evaluation scholars associated with this session: Gary Henry, George Julnes & Debra Rog, Len Bickman, Michael Scriven, Sharon Rallis, Thomas Schwandt, Jennifer Greene, Sandra Mathison, and Mel Mark.

Session Title: Designing Federal Evaluations: Developing Good Project Objectives and Performance Measures
Demonstration Session 506 to be held in Mencken Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Government Evaluation TIG
Presenter(s):
Courtney Brown,  Indiana University,  coubrown@indiana.edu
Mindy Hightower King,  Indiana University,  minking@indiana.edu
Marcey Moss,  Indiana University,  marmoss@indiana.edu
Abstract: Federal grants and programs are placing increasing emphasis on evaluation. Strong, measurable project objectives and performance measures are critical to both good proposals and successful evaluations. This demonstration focuses on key criteria for high-quality project objectives and performance measures, using federal government language and criteria. It will help increase understanding of the relationships between project activities and intended program outcomes, which in turn supports evaluation designs that are easier to implement and run. Participants will receive a framework used for federal government evaluations and the information needed to develop strong, measurable project objectives and performance measures. In addition, they will be provided with practical strategies and planning devices to use when writing project objectives and measures.

Session Title: Energy Efficiency, Education, and Intention: Cradle to Grave
Multipaper Session 507 to be held in Edgar Allan Poe Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Annelise Carleton-Hug,  Trillium Associates,  annelise@trilliumassociates.com
Is Our Children Learning? Barriers to K-12 Energy Efficiency Education in Connecticut
Presenter(s):
Timothy Pettit,  Nexus Market Research Inc,  pettit@nexusmarketresearch.com
Charles Bruckerhoff,  Curriculum Research & Evaluation Inc,  charles@creus.com
Abstract: The program evaluated in this paper, eeSmarts, is a K-12 energy efficiency learning initiative implemented by the investor-owned utilities in Connecticut. The program's goals are to develop an energy-efficient ethic among all school-age students, encouraging them to incorporate energy-efficient practices and behaviors into their lives at home and at school. This process evaluation focused on assessing adequate delivery of the program, identifying barriers to implementation, and evaluating progress toward stated goals. The evaluators concluded that although the program was being delivered as designed, it was not progressing toward its stated goals. The evaluators therefore recommended redesigning the program to emphasize activities with more certain and measurable short-term indicators of progress, such as the number of trained teachers using the program materials, rather than nearly unmeasurable long-term indicators such as energy savings attributable to the program.
If You Offer it, Will They Buy it?: Differentiating the Program From the Market in a Voluntary Clean Energy Purchasing Program
Presenter(s):
Greg Clendenning,  Nexus Market Research Inc,  clendenning@nexusmarketresearch.com
Bob Wall,  Connecticut Clean Energy Fund,  bob.wall@ctinnovations.com
Timothy Pettit,  Nexus Market Research Inc,  pettit@nexusmarketresearch.com
Lynn Hoefgen,  Nexus Market Research Inc,  hoefgen@nexusmarketresearch.com
Abstract: Voluntary clean energy purchasing programs are available to more than one-half of all U.S. electricity consumers across 34 states. We evaluated the impacts of the Connecticut Clean Energy Fund's (CCEF) voluntary clean energy purchasing program by comparing public awareness of and attitudes toward clean energy among residents of Connecticut and the U.S. using data from annual surveys conducted in the spring of 2005 and 2006, and a planned survey in 2007. One year into CCEF's campaign, awareness of clean energy, ratings of the importance of global warming, recognition of CCEF and affiliated organizations, and personal actions related to clean energy have all increased among Connecticut residents. However, willingness to pay a premium for clean energy has remained largely the same and appears to be limited by lack of awareness of clean energy and its availability as well as skepticism that clean energy can be produced in sufficient quantities.

Session Title: Evaluation of Community-based Participatory Research and Community Mobilization Strategies to Prevent Chronic Disease and Youth Violence: Advances and Lessons Learned by Two Research Center Programs at the Centers for Disease Control and Prevention
Panel Session 508 to be held in Carroll Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Alicia Norris,  Centers for Disease Control and Prevention,  anorris@cdc.gov
Abstract: Of the six key strategies that the Centers for Disease Control and Prevention (CDC) developed to guide decisions and priorities, one relates to public health research: "create and disseminate the knowledge and innovations people need to protect their health." Both the National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) and the National Center for Injury Prevention and Control (NCIPC) at CDC support national research center programs to develop public health interventions. In both programs, academic, community, and public health partners collaborate to actively engage communities in research. Because of these programmatic similarities, common evaluation strategies have developed for the Prevention Research Centers (PRC) Program and the National Academic Centers of Excellence on Youth Violence Prevention (ACEs) Program. This panel presents advances and lessons learned from recent evaluative work conducted to better understand both the PRCs' and ACEs' community-based participatory research and community mobilization strategies.
Using Document Review and Data Abstraction to Inform Management of a Federal Research Program: Lessons, Benefits, and Challenges Found by the Centers for Disease Control and Prevention's Prevention Research Centers Program
Demia S Wright,  Centers for Disease Control and Prevention,  amy7@cdc.gov
The Prevention Research Centers (PRC) Program at the Centers for Disease Control and Prevention funds 33 academic health centers to conduct health promotion and disease prevention research. CDC implemented a multi-component national evaluation consisting of quantitative and qualitative methods. Evaluators completed a document review and data abstraction for all 33 PRCs to understand several components of the program: PRCs' infrastructure and staffing; university resources; PRC partner communities; governance of community committees; and PRCs' prevention research topics, designs, and methods. Documents and data sources consisted of cooperative agreement applications, progress reports, community committee guidelines, budgets, U.S. Census, Bureau of Labor Statistics, and more. The presentation will describe which data sources were most informative, what findings were most useful for understanding the research program, and benefits (low burden on grantees, informing future program requirements) and challenges (inconsistency within data sources, defining each grantee's 'community') of using the methodology.
An Evaluation of Community Based Participatory Research and Community Mobilization: Formative Research Results From the National Academic Centers of Excellence on Youth Violence Prevention
Nancy Stroupe,  Centers for Disease Control and Prevention,  nstroupe@cdc.gov
CDC funds ten Academic Centers of Excellence on Youth Violence Prevention (ACEs). These Centers are designed to foster interdisciplinary research and promote stable, long-term strategies to address youth violence. ACE Centers are required to demonstrate their ability to design and implement community-based participatory research (CBPR) and community mobilization activities in their defined communities. In 2006, a formative evaluation of the community mobilization and CBPR components of the ACEs began. Applications, progress reports, and data from an information system were used to assess the extent of these activities. Semi-structured interviews with principal investigators, community mobilization directors, and community partners were conducted to gain additional information regarding perceptions, strategies, lessons learned, and outcomes of community mobilization and CBPR activities. This presentation will discuss the design and methods used for the evaluation, barriers and lessons learned, and how the information is being used to shape program policy.

Session Title: Successful Strategies for Developing Evaluation Instruments Using a Web-based System
Demonstration Session 509 to be held in Pratt Room, Section A on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Jennifer Bentlejewski,  University of Maryland Cooperative Extension,  jthorn@umd.edu
Abstract: As demands for accountability and improvement continue to intensify, effective evaluation programs are increasingly essential. Results from a study conducted with University of Maryland Cooperative Extension faculty revealed that lack of time, support, and training were challenges to conducting effective evaluation. In response to these results, an advisory group was formed to direct the development of a web-based system for faculty to use in creating evaluation instruments. The faculty involved in this pilot project were part of the Family and Consumer Sciences program area of the University of Maryland Cooperative Extension. This session will demonstrate how a web-based system enables users to select content-specific questions from a standardized set, automatically creating pre- and post-evaluation instruments. The data are then analyzed to produce statewide impact findings based on pre- and post-program participant behaviors. Strategies for how these experiences could be applied by others will be shared.

Session Title: Assessment of Stakeholder Needs and Evaluation Use in an Organizational Context: The Real World
Demonstration Session 510 to be held in Pratt Room, Section B on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Business and Industry TIG
Presenter(s):
Amy Gullickson,  Western Michigan University,  amy.m.gullickson@wmich.edu
Judith Steed,  Center for Creative Leadership,  steedj@leaders.ccl.org
Kelly Hannum,  Center for Creative Leadership,  hannumk@leaders.ccl.org
Abstract: Organizations frequently employ processes to evaluate their training programs. However, stakeholder groups have evaluation needs, concerns, and intended uses that may be not only incompatible or misaligned, but occasionally in direct conflict. In this demonstration session we describe the tools and process used to identify stakeholders and to assess their needs and planned uses for the evaluation data gathered. Using the checklists available on the website of the Evaluation Center at Western Michigan University and the program evaluation standards, an organization's internal evaluators and a doctoral student at Western Michigan University sought to apply the best available guidance for evaluators in a real-life organizational setting. In this session we demonstrate the processes we used and offer practical advice.

Roundtable: Ensuring Fidelity of a Computer Aided Reading Intervention in a Randomized Controlled Study
Roundtable Presentation 511 to be held in Douglas Boardroom on Friday, November 9, 9:25 AM to 10:10 AM
Presenter(s):
Joyce Serido,  University of Arizona,  jserido@email.arizona.edu
Mari Wilhelm,  University of Arizona,  wilhelmm@ag.arizona.edu
Abstract: Technology, specifically computer-assisted instruction (CAI) programs, may offer a cost-effective solution to meet the needs of the eight million young people between fourth and twelfth grade struggling to read at grade level (Biancarosa & Snow, 2004). While evaluation studies demonstrate small but consistent effect sizes for CAI programs aimed at beginning reading instruction, the poor quality of these studies may mask the true effects of the intervention. In addition to improved study design (e.g., larger sample sizes, use of control groups, and randomized assignment), fidelity of implementation is essential for assessing the impact of an intervention on student outcomes. In this roundtable, we outline our approach for improving fidelity of implementation in a multi-state, multi-site randomized controlled study of struggling readers in Grades 1-12.

Session Title: Moral Knowledge and Responsibilities in Evaluating Programs for Youth
Expert Lecture Session 512 to be held in  Hopkins Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Qualitative Methods TIG
Chair(s):
Melissa Freeman,  University of Georgia,  freeman9@uga.edu
Presenter(s):
Melissa Freeman,  University of Georgia,  freeman9@uga.edu
Judith Preissle,  University of Georgia,  jude@uga.edu
Steven Havick,  University of Georgia,  havick74@yahoo.com
Abstract: In a retrospective assessment of a responsive evaluation conducted on an interfaith youth program designed to educate young people about religious freedom, this paper considers the moral responsibilities inherent in both the program and evaluation practices and ways in which they support and constrain each other. Four ethical dimensions are discussed: the rights and responsibilities of the client and evaluator groups, the complication of moral discourse by an ethic of care, the blurring of boundaries between pedagogy and indoctrination, and the exploitation of those involved for public relations purposes.

Session Title: Teaching Program Evaluation for Diverse Adult Learners Using a Nine-step Evaluation Plan Project
Demonstration Session 514 to be held in Adams Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
Annalisa Batson,  HB Consultation & Evaluation Associates LLC,  annalisa@hbassociates.us
Carla Hess,  HB Consultation & Evaluation Associates LLC,  carla@hbassociates.us
Abstract: This session on teaching program evaluation for diverse adult learners will provide attendees with tools refined over the past ten years of classroom use by the presenters. Handouts will include the course syllabus, a schedule of content for 15 weeks of classes, and the detailed nine-step evaluation plan template that students complete as a semester-long project forming the backbone of the course. The presentation will include tips and hints for working with classes of graduate students who are at varied life stages and bring a wide range of previous educational experiences from diverse fields of study. We will also provide examples of how we incorporate problem-based learning, differentiated instruction, cooperative learning, discovery learning, storytelling, and lecture to create an optimal learning environment.

Roundtable: From Insight to Action: New Directions in Foundation Evaluation
Roundtable Presentation 515 to be held in Jefferson Room on Friday, November 9, 9:25 AM to 10:10 AM
Presenter(s):
Rebecca Graves,  Foundation Strategy Group Social Impact Advisors,  becca.graves@fsg-impact.org
Leigh Fiske,  Foundation Strategy Group Social Impact Advisors,  leigh.fiske@fsg-impact.org
Abstract: What does “evaluation” mean to foundations and how can foundations use evaluation to continuously improve results from philanthropic investments? This session will explore FSG's recent study of practical evaluation approaches that serve to improve foundations' effectiveness – rather than simply documenting what happened as the result of individual grants. Several examples will be shared, from foundations of all sizes and types. FSG believes the foundation field as a whole needs to develop better ways of sharing successful evaluation techniques, so that individual funders can understand the range of choices, and identify which approach best fits their purpose. Our hope is that FSG's research can be the starting point for a process of organizing and sharing simple, timely, and useful techniques for evaluation that can help foundations achieve greater social impact. Roundtable participants will be asked to talk about evaluation processes that have led to more informed decision-making.

Session Title: Recidivism and Re-entry
Multipaper Session 516 to be held in Washington Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Crime and Justice TIG
Chair(s):
Roger Przybylski,  RKC Group,  rogerkp@comcast.net
Evaluating a Cross-systems Training Approach to Prepare Communities to Better Serve the Needs of Justice-involved Individuals with Co-occurring Disorders
Presenter(s):
Chanson Noether,  Policy Research Associates,  cnoether@prainc.com
Wendy Vogel,  Policy Research Associates,  wvogel@prainc.com
Abstract: This session will present the results of an evaluation of the ACTION approach to cross-systems collaboration among the criminal justice, mental health, and substance abuse systems, which was developed through the NIMH-funded Adult Cross-Training Curriculum (AXT) Project. The goal of this approach is to promote recovery for incarcerated people with co-occurring disorders re-entering the community through education, facilitated strategic planning, and follow-up technical assistance. A comprehensive process and outcome evaluation methodology was developed to assess the effectiveness and impact of the AXT Project, as well as the value added by follow-up technical assistance. As a direct result of participation, communities implemented an array of systems-level changes over the 10-month period following the cross-training. Sites that received follow-up technical assistance reported an even greater level of achievement. Changes reported by communities were primarily associated with transition planning, community preparedness for re-entry, and development of effective cross-systems collaboration strategies.
Explaining Program Outcomes: Analyzing the Joint Effects of Individual, Program and Neighborhoods With Cross-classified Hierarchical Generalized Linear Modeling
Presenter(s):
Heidi Grunwald,  Temple University,  grunwald@temple.edu
Philip Harris,  Temple University,  phil.harris@temple.edu
Jeremy Mennis,  Temple University,  jeremy.mennis@temple.edu
Zoran Obradovic,  Temple University,  zoran.obradovic@temple.edu
Alan Izenman,  Temple University,  alan@temple.edu
Brian Lockwood,  Temple University,  brian.lockwood@temple.edu
Abstract: Risk measures have been used in program evaluation both to examine changes in risk and to control for risk when analyzing program outcomes. These studies have been criticized for ignoring the impact of neighborhoods on recidivism. Programs are, in effect, competing with environmental forces. The study reported here is part of a larger NIJ-funded study to develop spatially-integrated models of juvenile recidivism incorporating neighborhood, program, and individual characteristics. Using a sample of 11,659 Philadelphia male delinquents nested in 35 programs, we explore the following questions: 1) Which individual traits are the strongest predictors of recidivism ignoring neighborhood and program effects? 2) Do recidivism rates vary across neighborhoods? 3) If so, controlling for individual traits, what neighborhood contexts predict recidivism? 4) Do recidivism rates vary across programs? 5) If so, controlling for individual traits, what program characteristics predict recidivism? 6) Do neighborhood contexts and/or program characteristics predict relationships between individual traits and recidivism?

Session Title: Multiethnic Issues Dialogue on Graduate Education and Mentoring
Think Tank Session 517 to be held in D'Alesandro Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Craig Love,  Westat,  craiglove@westat.com
Tamara Bertrand,  Florida State University,  tbertrand@admin.fsu.edu
Presenter(s):
Deirdre Sharkey,  Texas Southern University,  owensew@tsu.edu
Maurice Samuels,  University of Illinois at Urbana Champaign,  msamuels@uiuc.edu
Discussant(s):
Elmima Johnson,  National Science Foundation,  ejohnson@nsf.gov
Howard Mzumara,  Indiana University Purdue University Indianapolis,  hmzumara@iupui.edu

Session Title: Understanding Terminology in Multi-level Modeling for Program Evaluation
Demonstration Session 518 to be held in Calhoun Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Caroline Wiley,  University of Arizona,  crhummel@u.arizona.edu
Mei-kuang Chen,  University of Arizona,  kuang@u.arizona.edu
Julius Najab,  George Mason University,  jnajab@gmu.edu
Abstract: In program evaluation, especially in education, multilevel modeling is often used to analyze nested data structures. However, the terminology related to multilevel modeling, also known as linear mixed modeling, is somewhat confusing. For example, the random coefficient model, slopes-as-outcomes model, random effects model, hierarchical linear model, variance components model, and value-added model are all slightly different models within the family of linear models. In this demonstration, these commonly used statistical terms will be presented in the context of program evaluation and of conducting and interpreting statistical analyses. Understanding the meaning of these terms is pertinent to planning evaluations and to using various software packages.

Session Title: Lessons From Evaluation Use at the United Kingdom National Audit Office and the World Bank Group
Panel Session 519 to be held in McKeldon Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Evaluation Use TIG
Chair(s):
Keith MacKay,  World Bank,  kmckay@worldbank.org
Abstract: This session will compare and contrast experiences related to the use of evaluation findings at the United Kingdom's National Audit Office (NAO) and the World Bank's Independent Evaluation Group (IEG). The comparison will broadly follow these lines: clarifying what is meant by learning, the appropriate institutional context (in particular, taking into account the accountability roles of both organizations), and any related prerequisites; identifying key lessons resulting from the various approaches to learning from evaluation findings; and examining how to measure the effectiveness of using evaluation as a learning tool.
Ensuring Learning in an Accountability Setting: Lessons from Performance Audit as a Form of Evaluation
Jeremy Lonsdale,  United Kingdom National Audit Office,  jeremy.lonsdale@nao.gsi.gov.uk
This paper focuses on the role that performance auditors play in evaluating government programmes and the extent to which, and how, they contribute to learning processes. It focuses on the experience of the United Kingdom and, in particular, considers: what is meant by 'learning' from the performance audit undertaken by the state audit body, the National Audit Office, and how it is meant to take place; how those carrying out the work seek to maximize and sustain learning within government (as part of, and in parallel with, their formal roles in the accountability process), including, for example, new forms of output and greater involvement with policy-makers and operational staff to take forward change; and whether there is evidence that the range of approaches taken in recent years to make audit work more 'useful' for learning purposes has been effective, and what more can be done to support learning in government.
An Empirical Review of the Utilization of Evaluation Knowledge at the World Bank
Klaus Tilmes,  World Bank,  ktilmes@worldbank.org
This paper looks at several initiatives from the World Bank's Independent Evaluation Group (IEG) designed to foster utilization of findings about what works and why, which complement the unit's accountability role. The actions reviewed hail from recent evaluation work and center on stimulating the use of IEG knowledge by both World Bank staff and the global development community. Findings and lessons would be summarized along the following process dimensions: strengthening the assessment of results-oriented monitoring and evaluation (M&E) in Bank operations; introducing new, quick-turnaround products addressing immediate needs for evaluative findings and lessons from experience; issuing annual 'Good Practice Awards' for operations that exemplify strong performance, to heighten their profile and recognition; intensifying communications and outreach efforts to build up stakeholder relationships; and following up on recommendations to management as part of country, sector, thematic, and corporate evaluations (Management Action Record, or MAR).

Session Title: Emerging Perspectives on Curriculum, Pedagogy, and Lesbian, Gay, Bisexual, and Transgender Youth
Multipaper Session 520 to be held in Preston Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Chair(s):
Sheryl Hodge,  Kansas State University,  shodge@ksu.edu
Queering Evaluation: Understanding and Sensibility
Presenter(s):
Francis Broadway,  University of Akron,  fsb@uakron.edu
Abstract: By queering the evaluation mantra of what students know and are able to do, understanding and sensibilities serve as critiques of evaluation. For Wiggins and McTighe (2005), understanding is not limited to explanation (knowledge), interpretation, and application, but also includes having a perspective, empathizing, and having self-knowledge. Sensibility, a construct that makes the invisible visible and gives voice to non-dominant students by reclaiming curriculum and pedagogy and being more inclusive of the individual as a philosophical, emotional, historical, social, political, and sexual being, permits understanding to be evaluated. If sensibilities, as personal transactions, creations, and stories, are realities that observers form through transactions with the world in which they exist, and understandings are “getting below the surface or achieving greater nuance and discrimination in judgment…insight and wisdom” (p. 41), then sensibilities not only make possible the evaluation of understanding but also enable curriculum and pedagogy to resurface in schools.
Self-identified Gay Youth: What is Happening to Them in the Mathematics Classroom?
Presenter(s):
David Fischer,  University of Minnesota,  fisch413@umn.edu
Abstract: This paper explores the connection between what happens to gay youth when they self-identify and the path they take in the mathematics classroom. The discussion will examine the history of gay youth in the mathematics classroom as indicated in the literature and the implications of that history, or the lack of one, for this proposed research project. The methodology and instruments to be used will be discussed in this paper session, along with the reasoning and justification for expanding research on the mathematics classroom to include self-identified gay youth.

Session Title: The AEA Ethics Committee's Comparative Analysis of International Evaluation Associations' Ethical Guidelines: Similarities, Differences and Lessons Learned
Expert Lecture Session 521 to be held in  Schaefer Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the AEA Conference Committee
Chair(s):
Valerie J Caracelli,  United States Government Accountability Office,  caracelliv@gao.gov
Presenter(s):
Scott Rosas,  Nemours Health and Prevention Services,  srosas@nemours.org
Discussant(s):
Jules M Marquart,  Centerstone Community Mental Health Centers Inc,  jules.marquart@centerstone.org
Abstract: This session focuses on the results of a comparative analysis of the various ways ethical evaluation practice is codified as a set of ethical standards or principles across professional evaluation organizations throughout the world. More than 30 national and international evaluation organizations were queried as to the mechanisms by which they specify and communicate ethical evaluation practice. Two major foci were considered: 1) the extent to which individual evaluator behavior is promoted and how this varies across national and international evaluation organizations, and 2) the extent to which evaluation practice as a social endeavor is highlighted and how the normative orientations of evaluation practice vary across national and international evaluation organizations. Drawing on comparative research methods used in cross-cultural studies to identify, analyze, and explain similarities and differences across societies, this work compares and contrasts the different configurations of ethical practice and previews the implications of the findings.

Roundtable: Increasing Stakeholders' Understanding of Evaluation Results: How We Report Matters!
Roundtable Presentation 522 to be held in Calvert Ballroom Salon A on Friday, November 9, 9:25 AM to 10:10 AM
Presenter(s):
Heidi Sweetman,  University of Delaware,  heidims@udel.edu
Ximena Uribe,  University of Delaware,  ximena@udel.edu
Abstract: Using the results from the Math Partnership Evaluation as a springboard, this roundtable discussion will focus on how the manner in which evaluation results are reported affects what stakeholders learn. Presenting the results of the Math Partnership Evaluation through PowerPoint, verbal presentation, and different report formats, participants will analyze and discuss how the reporting method, whether an in-person presentation, a paper report, or a digital presentation such as PowerPoint, influences the learning that stakeholders experience as they are exposed to evaluation results. The discussion will highlight the importance of identifying findings that are salient to different groups of stakeholders and of presenting results in innovative ways that foster accurate understanding.

Roundtable: Mentoring and Growing Local Affiliates of the American Evaluation Association (AEA)
Roundtable Presentation 523 to be held in Calvert Ballroom Salon B on Friday, November 9, 9:25 AM to 10:10 AM
Presenter(s):
Deborah Loesch-Griffin,  University of Nevada, Reno,  trnpt@aol.com
Rachel Hickson,  Montgomery County Public Schools,  rhickson731@yahoo.com
Abstract: Mirroring the growth of the evaluation profession and of AEA, the number of local affiliates forming and aligning with AEA has grown rapidly in the past five years. As part of a continuing series of sessions on developing viable affiliates of AEA, this roundtable will share challenges and successful strategies in critical activities such as recruitment, membership, and programming. A panel of representatives from new and young affiliates will present real-life challenges and success stories. Invited mentor affiliate representatives will be on hand to share their stories in response to the panel's challenges. There will be an opportunity to connect new and developing affiliates with mentors in established affiliates who can provide practical support. Resources including the New Affiliate Tool Kit and Evaluation Conference Tool Kit, both products of the Local Affiliates Collaborative, will be shared. New and young affiliates are especially encouraged to send a representative to this session.

Roundtable: Addressing Evaluation Costs: Producing Rigorous Evaluations on a Shoestring Budget
Roundtable Presentation 524 to be held in Calvert Ballroom Salon D on Friday, November 9, 9:25 AM to 10:10 AM
Presenter(s):
Dennis W Rudy,  Lakehouse Evaluation Inc,  drudy@lakehouse.org
Abstract: Completing rigorous evaluations in a cost-saving manner is a goal for many in the non-profit and public sectors. Examples of completed evaluations that utilized various cost-saving procedures will be noted, along with a summary of effective strategies that produce evidence-based research and program evaluation studies in a cost-effective manner. Best-practice examples include quasi-experimental methods, comparison group designs, and mixed-methods approaches to program evaluation.

Roundtable: A Dialogue About Building the Cross-cultural Competency of Evaluators
Roundtable Presentation 525 to be held in Calvert Ballroom Salon E on Friday, November 9, 9:25 AM to 10:10 AM
Presenter(s):
Kien Lee,  Association for the Study and Development of Community,  kien@capablecommunity.com
Nancy Csuti,  The Colorado Trust,  nancy@coloradotrust.org
Lutheria Peters,  Association for the Study and Development of Community,  lpeters@capablecommunity.com
Abstract: Evaluators have to understand how a group of people perceive an intervention, communicate their views, and act on the knowledge gained from the evaluation. This process of information exchange, interpretation, and application of knowledge is influenced by the cultures of the participants, including the evaluator. Because of this, cross-cultural competency is essential in evaluation and among evaluators. The facilitators will present key points from a recent report sponsored by The Colorado Trust about ways to develop and conduct a cross-culturally competent evaluation. The purpose of the roundtable is to expand on this report by soliciting feedback from evaluators about what it actually takes to build evaluators' capacity to be cross-culturally competent.

Session Title: Teacher Evaluation
Multipaper Session 526 to be held in Fairmont Suite on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Darlene Thurston,  Jackson State University,  darlene.a.thurston@jsums.edu
Defining, Assessing, and Developing Teacher Expertise: Using Evidence to Both Assess and Assist Teachers
Presenter(s):
Richard West,  University of Georgia,  rickwest@uga.edu
Bruce Gabbitas,  University of Georgia,  gabbitas@uga.edu
Arthur Recesso,  University of Georgia,  arecesso@uga.edu
Michael Hannafin,  University of Georgia,  hannafin@uga.edu
Abstract: Teacher development and performance assessment of classroom practices are often addressed separately. In our presentation we will discuss a new model for teacher assessment developed to answer the call for greater accountability, as well as the need to support teachers as they develop. This model includes a description of teacher expertise with over 40 specific attributes of quality teaching. These attributes are differentiated along a continuum from emerging skills to excelling in practice. Attached to each attribute are descriptions of what might qualify as credible evidence that a teacher exhibits that attribute. Empowered by this model, principals and teacher mentors can identify the most powerful evidence to describe a particular teaching practice or event. In addition, teachers can use the model in self-assessment to identify a trajectory of specific areas for improvement.
Evaluating With Lenses to Capture the Multi-faceted Nature of Teacher Performance
Presenter(s):
Bruce Gabbitas,  University of Georgia,  gabbitas@uga.edu
Richard West,  University of Georgia,  rickwest@uga.edu
Arthur Recesso,  University of Georgia,  arecesso@uga.edu
Michael Hannafin,  University of Georgia,  hannafin@uga.edu
Abstract: Education departments and school administrators are under increasing pressure to improve accountability through teacher assessment. However, most assessment methods fall short because they fail to recognize the multi-faceted nature of teaching. We will present a different model that allows schools, teachers, and education agencies to engage in a more complete assessment of teacher performance. Part of this model involves using lenses to focus the collection, analysis, and interpretation of evidence around very specific criteria. By using a set of defined lenses, evaluators can clarify which assessment criteria are important to school stakeholders and engage in an evaluation that is sensitive to the complex nature of teaching yet remains focused on local needs. In this presentation we will explain the lens metaphor and present results from implementing this model with teachers and administrators.

Roundtable: Global Activities of United States Based Social Work Faculty: Missed Opportunities for Research and Evaluation
Roundtable Presentation 527 to be held in Federal Hill Suite on Friday, November 9, 9:25 AM to 10:10 AM
Presenter(s):
Goutham Menon,  University of Texas, San Antonio,  goutham.menon@utsa.edu
Abstract: With “globalization” entering our common lexicon in the past few years, we have seen a rush within academia to tout international programs as a means of showcasing the work done by faculty and students. Ranging from the academic tourism of study abroad to full-fledged program offerings in other countries, we are seeing the whole gamut of experimentation, all in the name of globalization. This roundtable highlights the work being done by US-based social work faculty and where we need to go with our work with oppressed populations.

Session Title: Promising Approaches to the Evaluation of Social Policy
Multipaper Session 528 to be held in Royale Board Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Roger Boothroyd,  University of South Florida,  boothroy@fmhi.usf.edu
Discussant(s):
Margaret Polinsky,  Parents Anonymous Inc,  ppolinsky@parentsanonymous.org
Archival Research and Evaluation: Utilization of Federal and State Court Data in Evaluating Welfare Policies and Programs
Presenter(s):
Elizabeth Hayden,  Northeastern University,  hayden.e@neu.edu
Abstract: When evaluating welfare policies and programs, researchers can access state and federal court records to determine the fairness of service delivery. Is the current reform working? Are minority welfare recipients more likely than white recipients to be subject to termination of benefits, insufficient job training and placement, and transportation difficulties? Utilizing national and court databases, evaluators can assess the nature and frequency of discriminatory practices pre- and post-PRWORA reform. Through archival research and document analysis, I will examine (1) what legally constitutes effective and fair practice in welfare-to-work programs and (2) whether welfare policy favors some client groups over others.
Learning From Service Users: Measuring the Well-being of Children and Families, the Elderly, and the Community
Presenter(s):
Tina Olsson,  Göteborg University,  tina.m.olsson@telia.com
Rebecka Arman,  Göteborg University,  rebecka.arman@handels.gu.se
Anna Johansson,  Göteborg University,  anna.johansson@gri.gu.se
Abstract: Evaluation is concerned with describing “what works”. In the case of social policies, this can be expanded to “what works” to improve welfare. Social policies are deliberately designed to effect social change, thereby increasing the welfare or well-being of individuals and society as a whole. The current evaluation environment is fraught with debates regarding evidence, outcomes, and accountability, yet both proponents and opponents in these debates are concerned with improving the lives of service users. Against this backdrop, this paper examines the methods being used to assess service user well-being and to integrate measures of well-being into assessments of program outcomes. The paper reviews the literature from 1990 to the present in three specific areas of social policy, children and families, the elderly, and the community, in order to assess the extent to which evaluation is learning from service users by integrating measures of well-being into outcome evaluations.

Session Title: Real-life Lessons Learned in Building Capacity for Advocacy and Policy Evaluation
Panel Session 529 to be held in Royale Conference Foyer on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Jane Reisman,  Organizational Research Services,  jreisman@organizationalresearch.com
Abstract: The Annie E. Casey Foundation is developing its understanding of how best to approach advocacy and policy evaluation work in "real time" and with "realistic resources." This panel will share lessons learned in the process of developing advocacy and policy change outcomes and measurement strategies in a variety of KIDS COUNT advocacy settings. KIDS COUNT, a project of the Annie E. Casey Foundation, is a national and state-by-state effort to track the status of children in the U.S. Four grantees volunteered to develop an evaluation strategy for a particular advocacy campaign promoted by their organization: Children Now (CA), Family Connection Partnership (GA), Michigan League of Human Services, and Action for Children North Carolina. This panel will share lessons learned about numerous topics including outcome selection, direction for evaluation, selection of data collection tools, resource requirements, how to use evaluation to strengthen strategic directions, implementation, and working with partnerships.
Evaluating Policy Advocacy Grantmaking: One Foundation's Call to Action
Thomas Kelly,  Annie E Casey Foundation,  tkelly@aecf.org
The Annie E. Casey Foundation dedicates its grantmaking to improving outcomes and life chances for the most vulnerable children in the US. To accomplish this, we believe we need to address the roots of social problems and inequity through public policy advocacy at local, state, and national levels. This has included both state-level policy advocacy through the KIDS COUNT network and empowering small community service organizations to engage in relevant policy advocacy at the local level. However, advocacy grants are not easily monitored and assessed using traditional program evaluation techniques. The importance of advocacy grantmaking has forced Casey to begin to identify relevant evaluation frameworks that help grantees assess their progress and effectiveness and help the foundation evaluate our overall policy strategy. This has also required us to first get clearer about our intent, including evaluation goals, expectations, methodologies, audiences, and uses.
What Do Advocacy and Policy Organizations Need in Order to Successfully Carry Out Evaluation?
Cory Anderson,  Annie E Casey Foundation,  canderson@aecf.org
Don Crary,  Annie E Casey Foundation,  dcrary@aecf.org
As an integral part of the public policy formation process, advocacy organizations work at the whim of frequent changes in policy priorities and have in many cases developed highly successful organizations able to quickly adapt to those changes and capitalize on new opportunities. Measuring their work within those ever-changing contexts is difficult and requires a clear sense of the most effective strategies, different ways to document the use and success of those strategies, and, perhaps most importantly, more than one way to measure success. Over the past several years, the Annie E. Casey Foundation has been working with the KIDS COUNT Network of grantees to address these issues in a way that will provide support both to the KIDS COUNT Network and to the field of policy advocacy in general. This presentation will address: What challenges and successes have State KIDS COUNT grantees experienced?
How to Guide Advocacy and Policy Evaluation Organizations in Successful Evaluations: Lessons Learned From KIDS COUNT Grantees
Jane Reisman,  Organizational Research Services,  jreisman@organizationalresearch.com
Anne Gienapp,  Organizational Research Services,  agienapp@organizationalresearch.com
Corey Newhouse,  Children Now,  cnewhouse@childrennow.org
Julie Sharpe,  Family Connection Partnership,  jksharpe@friendlycity.net
This presentation shares lessons gained from applying the practical guidance presented in A Guide to Measuring Advocacy and Policy. This Guide offers approaches to classifying outcomes, developing a theory of change, selecting a course for evaluation, and making practical selections of evaluation tools. Organizational Research Services is piloting the content of this Guide with four KIDS COUNT grantees. This pilot will be the basis for distilling specific, practical lessons about what works and what is needed to design and carry out evaluation in real time and with realistic resources.

Session Title: Who Needs a College Goal Sunday? Using Evaluation to Expand and Improve a Large-scale Financial Aid Awareness Program
Panel Session 530 to be held in Hanover Suite B on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the College Access Programs TIG
Chair(s):
Wendy Erisman,  Institute for Higher Education Policy,  werisman@ihep.org
Abstract: While many college access programs work with one or more cohorts of students over an extended period of time, others take the approach of holding one-time workshops for large numbers of students in order to pass along crucial information about the college admissions and financial aid process. This limited contact with students and their families, combined with the extensive use of volunteers to provide program content, makes it challenging to assess the effectiveness of such programs. College Goal Sunday, which offers one-day financial aid workshops for low-income, minority, and first-generation college students at more than 630 sites in 33 states, has worked over the past three years to develop an evaluation model that addresses these challenges. The findings from this evaluation have been used to increase participation by the program's target audience while also building a stronger and more engaged volunteer base.
The Challenges of Evaluating an Annual One-day Multi-site Financial Aid Awareness Event
Wendy Erisman,  Institute for Higher Education Policy,  werisman@ihep.org
Wendy Erisman is Senior Research Analyst and Director of Evaluation at the Institute for Higher Education Policy (IHEP) in Washington, DC. She received her Ph.D. in cultural anthropology from the University of Texas at Austin, where her research focused on organizational cultures. Prior to her appointment with the Institute, she was a member of the faculty at Duke University in Durham, NC, and St. Edward's University in Austin, TX. Dr. Erisman currently manages several major projects for IHEP, including a nationwide evaluation of the College Goal Sunday financial aid access program and an evaluation of the USA Funds Access to Education scholarship program. She is also project director for the Institute's new role as manager of the Bill & Melinda Gates Foundation's research and evaluation program on its two major scholarship initiatives.
College Access Marketing: Putting Evaluation Data Into Action
Marcia Weston,  National Association of Student Financial Aid Administrators,  westonm@nasfaa.org
Marcia Weston is Director of College Goal Sunday Operations for the National Association of Student Financial Aid Administrators (NASFAA). She is responsible for overall management of the College Goal Sunday program at the national level, with emphasis on program maintenance and expansion, fundraising, public relations, and reporting. Since starting this position in 2004, she has overseen the expansion of the College Goal Sunday program to 33 states and the District of Columbia. Ms. Weston came to NASFAA from the Finance Authority of Maine, where she served as manager of education outreach programs since 1998. She wrote Maine's successful College Goal Sunday grant proposal and served as statewide coordinator for the program.

Session Title: The Next Generation of Learning Measurement: Measuring and Communicating the Value of the Learning Function
Expert Lecture Session 531 to be held in  Baltimore Theater on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Presidential Strand
Presenter(s):
Daniel Blair,  American Society for Training & Development,  dblair@astd.org
Abstract: For the past 50 years, the standard measures used to report the effectiveness and efficiency of the learning function have been consumption of learning programs, their costs, time spent in the classroom, and participant satisfaction. With the shift in most businesses’ balance-sheet valuation from tangible to intangible assets, the exclusive use of cost and consumption measurement is becoming increasingly insufficient. Today’s learning executives have a mandate to communicate not only the cost of training, but also the value that results from the learning investment made in the organization. This interactive presentation will provide a forum to discuss how organizations are using both cost and value measurement strategies to communicate the value of the learning function.

Session Title: Challenges and Prospects in the Evaluation of Housing Programs
Multipaper Session 532 to be held in International Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Paul Longo,  Touro Infirmary,  longop@touro.com
Discussant(s):
Paul Longo,  Touro Infirmary,  longop@touro.com
Housing Stability Among Homeless Individuals With Serious Mental Illness Participating in Housing First Programs
Presenter(s):
Carol Pearson,  Walter R McDonald & Associates Inc,  cpearson@wrma.com
Ann Elizabeth Montgomery,  University of Alabama, Birmingham,  annelizabethmontgomery@gmail.com
Abstract: This paper presents findings from an exploratory study of three programs using the Housing First approach to provide permanent supportive housing for single, homeless adults with serious mental illness and often co-occurring substance-related disorders. This approach provides direct, or nearly direct, access to housing that is perceived to be permanent, without requiring sobriety or psychiatric treatment. Most of the research to date has focused on one variation of the Housing First approach, that used at Pathways to Housing in New York City. This exploratory study examined and compared three Housing First programs, including Pathways to Housing, that varied in their key features, and it described program characteristics that appear to influence housing tenure, stability, and other positive outcomes. This presentation will highlight the study's method, describe the characteristics of the Housing First programs selected for study and the study participants, and discuss findings related to housing stability.
Using a Panel Study of Residents Relocated From Low-Income Housing to Generate Actionable Information for Evaluation Stakeholders
Presenter(s):
Laurie Dopkins,  George Mason University,  ldopkins@gmu.edu
Abstract: What happens to residents who are forced to relocate as part of neighborhood redevelopment? Working with the Annie E. Casey Foundation, the Center for Social Science Research at George Mason University is tracking families from a low-income housing project to: (1) monitor the effect of relocation on relocated residents, (2) stay in contact with residents so they have an opportunity to return to the new development, and (3) provide information to community-based organizations that can provide needed services, including assistance in meeting eligibility criteria for the mixed-income housing complex. This paper focuses on how findings from a panel study tracking 98 low-income households since 2004 generate information for decision making and action by evaluation stakeholders. Multiple methods for collecting data at regular intervals through household surveys and administrative records are described, and the ways in which information is provided to and used by the foundation and community-based organizations are discussed.

Session Title: Engaging Stakeholders in the Evaluation Process
Multipaper Session 533 to be held in Chesapeake Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the Qualitative Methods TIG
Chair(s):
Tessie Catsambas,  EnCompass LLC,  tcatsambas@encompassworld.com
Building on the Best: Using Appreciative Inquiry to Evaluate Worker-Trainer Led Health and Safety Training Programs
Presenter(s):
Katherine King,  University of Michigan,  krking@umich.edu
Judith Daltuva,  University of Michigan,  jdal@umich.edu
Thomas Robins,  University of Michigan,  trobins@umich.edu
Abstract: Since 1990 the United Automobile Workers (UAW) union has received federal funding to provide industrial emergency response training to its members. The program relies heavily on a cadre of worker (peer) trainers. Since its inception, the program has benefited from a third party evaluation process. In the early formative years, evaluation was critical to documenting the effectiveness of the worker-led programs and to providing feedback leading to improvements. Now that the program is firmly established, evaluation has taken on the additional role of building on the best for continual improvement. When compared to more traditional evaluation approaches, recent use of Appreciative Inquiry based evaluation strategies (positively worded inquiry processes that identify and build on successes) has resulted in a dramatic increase in the number of constructive comments on how to improve the programs. Further, the participants themselves enjoy the process, resulting in more lively and empowering responses.
Evaluators Train Stakeholders to Understand Data Collection Strategies and to Use Data Base Management Systems: What are the Lessons to be Learned?
Presenter(s):
Janice Fournillier,  Georgia State University,  jfournillier@gsu.edu
Sheryl Gowen,  Georgia State University,  sgowen@gsu.edu
Abstract: Evaluation no longer takes for granted the various roles of the participants from whom data are collected, the stakeholders who are increasingly being asked to play a role in the data collection process, and the evaluators themselves. Evaluation now adopts and adapts more participatory, collaborative, and dialogic approaches. Little, however, is known about what is learned in situations where evaluators find themselves assuming the additional role of training the stakeholders. The assumption often is that once training takes place, the participants will be better able to deliver the goods. In this paper we focus on the kinds of learning that do or do not take place within and outside the training processes; the impact that the use of the materials and the training sessions has on the quality of the data collected; and the kinds of understandings the participants gain about the methodological processes involved in the evaluation.

Session Title: Fulfilling the Promise: An Alternative to the Traditional Literature Review
Demonstration Session 534 to be held in Versailles Room on Friday, November 9, 9:25 AM to 10:10 AM
Sponsored by the AEA Conference Committee
Presenter(s):
Brian Marriott,  Calgary Health Region,  brian.marriott@calgaryhealthregion.ca
Christopher Cameron,  Calgary Health Region,  christopher.cameron@calgaryhealthregion.ca
Abstract: External information is commonly collected for, and provided to, evaluation stakeholders without giving due consideration to their precise needs. As a result, evaluation resources are often consumed ineffectively, and the impact of the evaluation process is diminished as the utility of the information is brought into question. A methodology is offered that enhances both the efficiency and the efficacy of the information searching and formatting process. Applicable to all projects that stand to benefit from objectively collected external information, the ISO/IFO (information search order/information format order) process provides a framework for searching for and presenting information-based data while recognizing that the intricacies of these activities are subject to personal preferences. The framework stresses that the information searcher work collaboratively with his or her stakeholders to ensure that their informational needs are well met, the information is presented in a manner that is useful to them, and, ultimately, the searcher's promise is fulfilled.
