
Session Title: Research on Evaluation TIG Business Meeting
Business Meeting Session 567 to be held in International Ballroom A on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Research on Evaluation TIG
TIG Leader(s):
Tarek Azzam,  University of California, Los Angeles,  tazzam@ucla.edu
Christina Christie,  Claremont Graduate University,  tina.christie@cgu.edu

Session Title: Yes, When Will We Ever Learn? How Evaluators Can Learn Better Ways to Understand Cause and Effect
Expert Lecture Session 569 to be held in International Ballroom C on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Bob Williams,  Independent Consultant,  bobwill@actrix.co.nz
Presenter(s):
Patricia Rogers,  Royal Melbourne Institute of Technology,  patricia.rogers@rmit.edu.au
Discussant(s):
Bob Williams,  Independent Consultant,  bobwill@actrix.co.nz
Abstract: The substantial international efforts currently underway to improve the quality of evaluations, particularly in international development, have drawn attention to inadequacies in providing credible evidence of impact - most notably in the report "When Will We Ever Learn?". Remarkably, these efforts have focused almost exclusively on the use of randomized controlled trials, with little or no recognition of their limitations or of the development of alternatives better suited to the evaluation of complex interventions in open implementation environments. This session will turn the question of evaluation and learning onto the evaluation community itself and ask why the theory and practice of evaluation has been so slow to learn from current scientific thought and remains largely bogged down in outdated approaches to causal attribution. Advocates of so-called scientific approaches to impact evaluation rely exclusively on the counterfactual argument for causal attribution - developing information about what would have happened in the absence of the intervention. This type of analysis fails to take into account more complex causal relationships - such as where an intervention is necessary but not sufficient (with other contributing factors needed for success), or sufficient but not necessary (with alternative causal paths available), or where the causal relationships are ones of interdependence rather than simple linear causality. This paper compares examples of the logic and methods of causal analysis in traditional 'scientific' evaluation with those that draw on complexity science. It discusses possible reasons for the failure of advocates of 'scientific' evaluation to learn from current scientific thinking and how this might be done.

Session Title: Exploring the Sacrifice Fly Phenomenon in Evaluation Use
Think Tank Session 570 to be held in International Ballroom D on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Evaluation Use TIG
Presenter(s):
Emmalou Norland,  Institute for Learning Innovation,  norland@ilinet.org
Discussant(s):
Joe Heimlich,  The Ohio State University,  heimlich.1@osu.edu
Beverly Sheppard,  The Institute for Learning Innovation,  sheppard@ilinet.org
Julia Washburn,  National Park Service,  julia@jlwashburn.com
Abstract: The Parks as Resources for Knowledge in Science (PARKS) project evaluation, completed in 2000, is one of the largest and most sophisticated cluster evaluations ever conducted of US National Park Service education programs. A stakeholder approach guided the planning, implementation, and sharing of the findings. Methodologically, it had all the bells and whistles. Results showed impact across programs. So why, several years later, were the evaluation and its results relatively unknown to those who could have benefited the most? Evaluators followed all the 'rules' that should have placed it in the 'home run' category of evaluation use. Instead, until October 2006, it wasn't even sitting on the shelves of likely users of the information. Building upon this example and others, participants in the think tank will wrestle with the issues of evaluations in which immediate evaluation use seems to be sacrificed for future evaluation influence.

Session Title: Lessons Learned From Evaluation Practice
Multipaper Session 571 to be held in International Ballroom E on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Gary Miron,  Western Michigan University,  gary.miron@wmich.edu
Evaluation to Go: Problems and Solutions of Consulting With Time Constraints
Presenter(s):
Steven Middleton,  Southern Illinois University, Carbondale,  scmidd@siu.edu
Joel Nadler,  Southern Illinois University, Carbondale,  jnadler@siu.edu
Nicole Cundiff,  Southern Illinois University, Carbondale,  karim@siu.edu
Abstract: Applied Research Consultants, a student-run consulting firm, was contracted to evaluate Southern Illinois University Carbondale's main website using quantitative and qualitative methods. The purpose of the evaluation was to gather a quick understanding of what users found useful and appealing about the website. This was performed in order to make an informed decision about how best to develop a new website design for the university. The study required a quick turnaround from the development of an online survey to the finished report, due to time constraints placed on the final decision. The evaluators had to overcome a poor response rate that restricted the interpretation of the quantitative results. Because of this problem, the evaluators focused on qualitative methods to help provide the information desired by the client. Difficulties and solutions in evaluating under time constraints will be discussed.
Evaluation for Educational Accountability: Local Impact of No Child Left Behind
Presenter(s):
Linda Mabry,  Washington State University, Vancouver,  mabryl@vancouver.wsu.edu
Abstract: In this paper, I propose to report findings from four years of an ongoing evaluation of the local effects of the "No Child Left Behind" (NCLB, 2001) federal legislation. Conducted in two school districts with relatively high scores on state tests, the evaluation documents increasing classroom test-preparation practices as well as district administrators' concerns and predictions of adjustments to the law's requirements. Conflicts between the law's requirements and teachers' personal philosophies and understandings of best practice and child development were universal in the first two years, but in the third, teachers in one district began to report positive curricular outcomes. Data were analyzed in terms of Knapp's (1997) four levels of educational reform; Bronfenbrenner's (1979) stages of ecological analysis, with emphasis on working relationships in the exosystem; and three theories of change articulated in the literature of educational reform and accountability (NRC, 1999; Marion & Gong, 2003; Mabry, Poole, Redmond, & Schultz, 2003).

Session Title: Measuring Effectiveness, Efficiency, and Sustainability in Innovative Health Programs Reaching the Underserved
Multipaper Session 572 to be held in Liberty Ballroom Section A on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Samuel Bickel,  United Nations Children's Fund,  sbickel@unicef.org
Cost and Effectiveness of Health Delivery in Underserved Communities: The Evaluation of Education, Community Health Outreach (ECHO-2) in North Carolina
Presenter(s):
Anne D'Agostino,  Compass Consulting Group,  anne-d@mindspring.com
Sarah Heinemeier,  Compass Consulting Group,  sarahhei@mindspring.com
Amy Germuth,  Compass Consulting Group,  agermuth@mindspring.com
Abstract: The John Rex Endowment offers funding for activities, programs, and organizations with the goal of improving the health of underserved people in the Triangle area of North Carolina. One such program, ECHO-2 (Education, Community Health Outreach), is a 2-year extension of the ECHO program provided by County Human Services and is scheduled to end in June 2007. ECHO-2 is a comprehensive system designed to bridge gaps in the health delivery system in six underserved, primarily Latino neighborhoods in the county. In 2006, the Endowment requested an evaluation of ECHO-2 to assess its effectiveness and efficiency in achieving its goals. The evaluation first entailed conducting in-depth interviews with program staff as well as members of the population served. Next, information gained through cost-effectiveness analyses was reconciled within the complex cultural context of the ECHO-2 program recipients. In this presentation, Compass will discuss the value of combining these approaches to evaluation for maximizing learning about program costs and benefits; using evaluation results to facilitate discussions with the larger community around improving, enhancing, and/or changing “traditional” methods of providing health care services; and some of the ways in which the ECHO-2 evaluation has already changed health delivery practices within the county's Human Services organization.
Measuring Program Support Using the Quantification of Leveraged Resources
Presenter(s):
Antoinette Brown,  Independent Consultant,  antoinettebbrown@juno.com
Abstract: In 2006 the North Carolina legislature created the Initiative to Eliminate Health Disparities. Twenty-three grantees were funded to implement a variety of activities intended to eliminate health disparities in the areas of diabetes, cardiovascular disease and cancer. An evaluation was designed to capture outcomes of the initiative. Among the outcomes were behavioral and clinical changes, policy changes and support for the initiative. The indicator selected to measure initiative support was the amount of resources that grantees were able to leverage. Resources leveraged included volunteer hours, in-kind contributions, and monetary resources. Grantees entered data about participant behaviors, clinical indicators, policy activities and resources monthly into a web-based database. Grantees exhibited a wide range of resource acquisition but were generally successful in securing volunteers from the community and in-kind corporate contributions.

Session Title: Learning (More) About Evaluation: Unfinished Business
Panel Session 573 to be held in Liberty Ballroom Section B on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Thomas Schwandt,  University of Illinois at Urbana-Champaign,  tschwand@uiuc.edu
Abstract: This panel invites the audience to think about two ways in which we learn about and develop a self-understanding of evaluation. One commonly accepted self-understanding of evaluation is that it is a logic, set of methods, procedures, and evaluation models used by an individual evaluator (agent) or team of evaluators (agents) to judge merit and worth, and that evaluations are 'used' for the purposes of 'improvement' or 'betterment' (variously understood). Another way of learning about evaluation is to regard it as a socially constituted discursive practice (or set of practices) and to ask 'what is accomplished in the name of evaluation' and 'how do social practices of evaluation shape other social practices in education, health care, public administration, and social service.' In this panel we draw out differences between these self-understandings of evaluation and point to some consequences for what it means to 'learn about' evaluation.
Tools to Evaluate Evaluands?
Peter Dahler-Larsen,  University of Southern Denmark,  pdl@sam.sdu.dk
Dahler-Larsen will explain several assumptions embedded in the idea of evaluation as a set of tools and associated ideas of the 'use' of evaluation. Typically, evaluation is presented as something that is both neutral and rational: a tool that serves to examine the value of means or to bridge the gap between goals and their accomplishment; in short, a rational way of getting things done.
Practices That Evaluate Practices
Thomas Schwandt,  University of Illinois at Urbana-Champaign,  tschwand@uiuc.edu
Schwandt will discuss what it means to think of evaluation as a set of socially constituted practices that engage and influence other kinds of social practices. For example, there is a strong sense in which evaluation is actually agentless; that is, it is a set of processes, ways of proceeding, and ways of thinking that govern or otherwise influence practices of public administration, teaching, social work, and so on. Evaluation practices serve to mobilize languages of science, reason, and common sense and thereby shape the way we think about practices of public health, social services, education, the environment, and so on. In this way, we look at what is accomplished or done in the name of evaluation other than as a contractual relationship between evaluator and client.

Session Title: Strategies for Building and Evaluating Organizational Capacity: A Case Study of 30 Children's Residential Homes Utilizing Strategies to Address Childhood Obesity
Panel Session 574 to be held in Mencken Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Toni Freeman,  The Duke Endowment,  tfreeman@tde.org
Discussant(s):
Toni Freeman,  The Duke Endowment,  tfreeman@tde.org
Abstract: Structural or environmental interventions demonstrate promise in addressing obesity in young people. However, these interventions, which focus on organizational change rather than individual change, are challenging to design, implement and evaluate. Effective programs of these types require the development and maintenance of numerous partnerships, including participating organizations, funders, and other key stakeholders. This presentation will describe the approach utilized in The ENRICH (Environmental Interventions in Children's Homes) Duke Endowment Wellness Program for developing and evaluating a structural intervention to promote and support physical activity and healthful nutrition among children and adolescents residing in approximately 30 residential children's homes (RCHs) in North Carolina and South Carolina. The presenters will discuss the successful implementation of the processes described above. Additionally, they will provide copies of the project's conceptual framework, logic model, and comprehensive evaluation plan. Copies of project instruments will also be available for participants to review.
An Implementation and Evaluation Planning Process for Structural Interventions
Ruth Saunders,  University of South Carolina,  rsaunders@sc.edu
Involving the stakeholders in developing a logic model for any intervention is important. Logic models are particularly useful in structural interventions as a framework to integrate multiple elements and to organize measures from multiple data sources. A structural intervention includes the desired organizational changes based on the conceptual framework as well as the influence of the changed environment on individual behavior. This information forms the basis for outcome or impact evaluation. The structural logic model also includes the information needed to facilitate organizational change, which forms the basis for process evaluation. Once both the conceptual framework and logic model are well defined and acceptable to key stakeholders, it is imperative to identify and develop appropriate process and outcome measures. The logic model allows evaluators to easily identify what needs to be measured or documented. The conceptual framework provides the foundation for what questions to ask or what to observe in the participating organizations.
Environmental Intervention in Children's Homes (ENRICH) Process Evaluation: Implementation Monitoring Results
Kelli Kenison,  University of South Carolina,  enrichkelli@aol.com
In this presentation, the ENRICH Program will be described, along with the methods for the implementation monitoring aspect of the process evaluation plan. Results of the implementation monitoring, which includes measurement of dose delivered, dose received, fidelity, and completeness, will be shared. In ENRICH, the Wellness Teams (WTs), composed of key staff from each children's home, are trained to assess the physical activity and nutrition environments in their homes and then develop a Wellness Plan specific to their home. The WTs carry out the plan with administrative support from the home and with community resources they have identified, as well as with on-going technical support from USC ENRICH staff. In End of Year Surveys, WT Coordinators indicate that the majority of the teams successfully developed and implemented plans that had a substantial effect on the physical activity and fruit and vegetable consumption of their residents.

Session Title: The South Central Center for Public Health Preparedness Training Evaluation Process: A Comprehensive Approach to Evaluating the Effectiveness of Emergency Preparedness and Response Training
Demonstration Session 575 to be held in Edgar Allan Poe Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Presenter(s):
Sue Ann Sarpy,  Tulane University,  ssarpy@tulane.edu
Laurita Santacaterina,  Tulane University,  lsantaca@tulane.edu
Abstract: The literature has long recognized the need for comprehensive, systematic evaluations of training effectiveness with respect to training-related knowledge, performance, and desired outcomes. In an attempt to address this need, an evaluation process was established for assessing the effectiveness of training delivered at the South Central Center for Public Health Preparedness (SCCPHP). The SCCPHP is an academic/practice partnership that provides competency-based training via distance delivery methods to prepare the public health workforce to respond to public health threats and emergencies including biological, chemical, nuclear, radiological, terrorism, and mass trauma. This presentation will demonstrate the SCCPHP training evaluation process that was developed, including the standardized measures and procedures associated with its use. Further, applied examples of use of the SCCPHP evaluation process in assessing the effectiveness of various emergency preparedness and response training initiatives will be presented.

Session Title: Arkansas Evaluation Center and Empowerment Evaluation: We Invite Your Participation as We Think About How to Build Evaluation Capacity and Facilitate Organizational Learning in Arkansas
Think Tank Session 576 to be held in Carroll Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
David Fetterman,  Stanford University,  profdavidf@yahoo.com
Discussant(s):
Linda Delaney,  University of Arkansas,  linda2inspire@earthlink.net
Abstract: A new Arkansas Evaluation Center will be housed at the University of Arkansas Pine Bluff. The Center emerged from empowerment evaluation training efforts in a tobacco prevention program (funded by the Minority Initiated Sub-Recipient Grant's Office). The aim of the Center is to help others help themselves through evaluation. The Center is designed to build local evaluation capacity in the State to help improve program development and accountability. The Center will consist of two parts. The first is an academic program, beginning with a certificate program and later offering a master's degree, that will combine face-to-face and distance learning. The second will focus on professional development, including guest speakers, workshops, conferences, and publications. The Center will be grounded in an empowerment evaluation philosophical orientation and guided by pragmatic mixed-methods training. In addition, it will help evaluators learn how to use new technological and web-based tools.

Session Title: Using Images as Catalysts for Expression in Evaluation: A Demonstration of Photolanguage
Demonstration Session 577 to be held in Pratt Room, Section A on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Rebecca White,  Louisiana State University,  bwhite@agctr.lsu.edu
Diane Sasser,  Louisiana State University,  sdasser@agctr.lsu.edu
Katherine Pace,  Louisiana State University,  kpace@agcenter.lsu.edu
Emily Braud,  Louisiana State University,  elejeune@agcenter.lsu.edu
Abstract: Finding ways to encourage expression among evaluation research participants who are young, shy, reticent, or have limited verbal abilities can be challenging for evaluators. Often evaluation participants find it difficult to address certain sensitive topics or issues. Photolanguage is a resource that evaluators can use to aid personal expression and small group interaction. During this demonstration participants will learn to use Photolanguage as a tool to enhance qualitative evaluation activities. Participants will experience an innovative evaluation process that utilizes black and white photographic images, specifically chosen for their aesthetic qualities, their ability to stimulate emotions, memory and imagination, and their capacity to stimulate reflection in the viewer.

Session Title: Business and Industry TIG Business Meeting
Business Meeting Session 578 to be held in Pratt Room, Section B on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Business and Industry TIG
TIG Leader(s):
Amy Gullickson,  Western Michigan University,  amy.m.gullickson@wmich.edu
Sheri Hudachek,  Western Michigan University,  sherihudachek@yahoo.com
Eric Graig,  Usable Knowledge LLC,  egraig@usablellc.net
Otto Gustafson,  Western Michigan University,  ottonuke@yahoo.com

Roundtable: Evaluating Collaboration Between Science, Technology, Engineering and Mathematics Programs in the National Girls Collaborative Project
Roundtable Presentation 579 to be held in Douglas Boardroom on Friday, November 9, 11:15 AM to 12:00 PM
Presenter(s):
Brenda Britsch,  Puget Sound Center for Teaching, Learning and Technology,  bbritsch@psctlt.org
Karen Peterson,  Puget Sound Center for Teaching, Learning and Technology,  kpeterson@psctlt.org
Carrie Liston,  Puget Sound Center for Teaching, Learning and Technology,  cliston@psctlt.org
Vicky Ragan,  Puget Sound Center for Teaching, Learning and Technology,  vragen@psctlt.org
Abstract: Collaboration and its effects can be difficult to define, observe, and measure. Based on an evaluation of the National Girls Collaborative Project, a project structured to bring organizations that serve girls in science, technology, engineering and mathematics (STEM) together to compare needs and resources, share information, and plan strategically, we will discuss the measurable aspects of collaboration and initial and expected outcomes stemming from the effort to encourage organizations to work together in more complex ways. We will look at a “collaboration rubric”, adapted from the work of Hogue (1993), Borden and Perkins (1988, 1999) and Frey, Lohmeier, Lee, Tollefson & Johanning (2004), developed to capture increasing levels of collaboration between different groups and discuss preliminary results. The rubric describes five levels of collaboration, based on Hogue's Levels of Community Linkage model: networking, cooperation, coordination, coalition, and collaboration.

Session Title: Building Evaluation Capacity at the Society for Advancement of Chicanos and Native Americans in Science (SACNAS)
Multipaper Session 580 to be held in Hopkins Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Jack Mills,  Independent Consultant,  jackmillsphd@aol.com
Abstract: This session will describe the challenges, opportunities and lessons learned in using evaluation results to evolve services provided to underrepresented minority (URM) scholars in higher education. The Society for Advancement of Chicanos and Native Americans in Science (SACNAS) is nationally recognized for initiating and executing effective programs that provide underrepresented minority (URM) students and young scientists with the tools they need to successfully advance in the sciences and related technical fields. This session features a dialogue and interplay between the immediate past president of this nationally prominent minority serving organization and its external evaluator. Members of the audience will be able to hear from the organization's perspective what steps were taken to prepare for a major evaluation initiative as well as challenges, opportunities and lessons learned from the perspective of the external evaluator.
Preparing the Way for Evaluation: The Experience of the Society for Advancement of Chicanos and Native Americans in Science (SACNAS)
Marigold Linton,  University of Kansas,  mlinton@ku.edu
Jack Mills,  Independent Consultant,  jackmillsphd@aol.com
Over the past five years, program evaluation research has become a critical strategic initiative for the Society. Moving an organization toward increasingly sophisticated evaluation approaches requires commitment on a number of levels. At the board of directors level, there was a need to establish evaluation as a priority, allocate program resources to evaluation that otherwise might provide direct service to clients, and be open to findings that might challenge traditions that evolved over a number of years. At the program staff level, there was a need to establish a degree of comfort in working with a paid skeptic: someone who would bear good news about the program's successes while pointing out areas in which program operations could be strengthened. We will discuss ways in which evaluation has affected many aspects of program operations and future evaluation directions as we progress up the organizational learning curve.
A Theory-based Approach to Measuring Minority Career Advancement in the Sciences: A Case Study of the Society for Advancement of Chicanos and Native Americans in Science (SACNAS)
Jack Mills,  Independent Consultant,  jackmillsphd@aol.com
Marigold Linton,  University of Kansas,  mlinton@ku.edu
This presentation will describe the development of program evaluation research at SACNAS. We have distilled a model of program theory for the factors we believe are essential in helping URM students navigate scientific and technical careers. With this model in hand, we began to develop multiple methods to measure SACNAS' impact, including a survey of students' developmental assets prior to SACNAS involvement, focus groups, interviews, and participant observation. The Society is also developing a web-based process to track career progress outcomes. The presenters will describe how the program evaluation methodology has evolved at SACNAS and the future directions we are taking to strengthen the organization's evaluation practice. The two SACNAS presentations in this session will highlight the dynamic interplay and the beneficial impact on strategy that can emerge when the leadership of an organization and an external evaluator develop a strong collaboration.

Session Title: Monitoring and Evaluation (M&E) in Sector-wide Approaches (SWAps): A New Way of Thinking About Monitoring and Evaluation in the New International Development Framework
Expert Lecture Session 581 to be held in Peale Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Nino Saakashvili,  Horizonti Foundation,  nino.adm@horizonti.org
Presenter(s):
Ryoh Sasaki,  Western Michigan University,  ryoh.sasaki@wmich.edu
Abstract: This session discusses lessons learned from M&E activities conducted in Sector-Wide Approaches (SWAps), a new approach in international development. The lessons are drawn from the author's field experience as an M&E specialist in Tanzania's agricultural sector, as well as from an extensive review of study reports on SWAps. Based on the lessons learned, a new system of M&E is proposed, consisting of: (i) prior (but flexible) mutual agreement on goals, indicators, and target values among all relevant stakeholders; (ii) a focus on outputs by local entities and on outcomes by central governments; (iii) a mixed use of review and evaluation; and (iv) a reflection of local values through periodic, systematic needs assessments. Finally, the presenter will compare the proposal with other evaluation practices in the field, such as results-based M&E, evidence-based evaluation, and the DAC evaluation criteria.

Session Title: Real Application of a Policy Advocacy Evaluation Tool
Demonstration Session 582 to be held in Adams Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Rhonda Ortiz,  The California Endowment,  rortiz@calendow.org
Sue Hoechstetter,  Alliance for Justice,  sue@afj.org
Traci Endo Inouye,  Social Policy Research Associates,  traci@spra.com
Catherine Crystal Foster,  Blueprint Research & Design Inc,  catherine@policyconsulting.org
Justin Louie,  Blueprint Research & Design Inc,  justin@blueprintrd.com
Abstract: This demonstration will present Alliance for Justice's Advocacy Evaluation Tool. The session will begin with an overview of how the tool was developed, what it is, and how it can be used. This will be followed by two examples of how it is currently being used in the field. One example, from The California Endowment's Hmong Health Collaborative, will show how the evaluators of that program adapted the tool to make it more applicable for the community. The other will highlight how evaluators contracted by The California Endowment to provide technical assistance to community-based organizations working on comprehensive health care access in the San Francisco Bay Area have used the tool.

Roundtable: Challenges Faced by an External Evaluator in Evaluating a Multi-site Program: Lessons Learned
Roundtable Presentation 583 to be held in Jefferson Room on Friday, November 9, 11:15 AM to 12:00 PM
Presenter(s):
Mary Poulin,  Justice Research and Statistics Association,  mpoulin@jrsa.org
Abstract: This roundtable will explore both the challenges faced and the solutions selected in the ongoing implementation of a federally-funded, quasi-experimental design evaluation of a juvenile mentoring program for at-risk youths in Utah. Challenges that will be discussed include: evaluation design planning, program funding, geographical distance, cross-site variation in program implementation, fostering commitment to data collection, and sample size. Particular attention will be paid to issues pertaining to the needs of the three primary clients of the evaluation—the agency funding the evaluation, the program being evaluated, and the program participants.

Session Title: Conducting a Process Evaluation of a Prisoner Reentry Initiative
Think Tank Session 584 to be held in Washington Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Crime and Justice TIG
Presenter(s):
Aisha Nyandoro,  Michigan State University,  smithai1@msu.edu
Discussant(s):
William Davidson,  Michigan State University,  daviso7@msu.edu
Abstract: Almost every state and the federal system have some form of reentry initiative designed to facilitate prisoners' transition back to society. While whether a reentry initiative "works" is certainly an important concern, a range of equally critical issues must also be addressed. Before any results can be attributed to a particular initiative, it is imperative that the evaluation document the activities conducted and determine whether they have been implemented in accord with the program design. The session will divide into work groups; each group will discuss: Is it possible to conduct a process evaluation for a reentry initiative? If so, what are the steps in conducting this type of evaluation? Why is it important to examine the process of model implementation? Who are the stakeholders that should be involved in this process? What are some of the possibilities and challenges?

Session Title: Implementing Process Evaluation in a Dispersed State Program
Demonstration Session 585 to be held in D'Alesandro Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Richard Bowman,  University of Arizona,  rbowman@email.arizona.edu
Michele Walsh,  University of Arizona,  mwalsh@u.arizona.edu
Abstract: Implementing process evaluation measures involves a wide variety of stakeholders and requires strategic compromises. The demonstration will use our experience over the last three years with the Arizona Tobacco Education and Prevention Program to highlight several critical issues - balancing the needs of evaluation and program monitoring, and the needs of central administrators and local service delivery staff - and will outline the steps required to build an organization-wide data system: selling the idea of process evaluation, constructing and piloting the instruments, implementing the systems, and using the data. The delivered process evaluation system centers on an "event report" that is submitted by all service providers via a web-based tool, making systematic and continuous program assessment and feedback feasible and effective.

Session Title: Using Multilevel Discrete-time Survival Models to Predict Whether and When Events Occur
Demonstration Session 586 to be held in Calhoun Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Steven Pierce,  Michigan State University,  pierces1@msu.edu
Abstract: The session will introduce the audience to discrete-time survival analysis, which builds on logistic regression to predict not only whether an individual will experience an event, but also when the individual experiences the event. The session will then illustrate how to extend the technique to handle the multilevel case where individuals are drawn from a clustered sample and both individual-level and cluster-level characteristics are used to predict the occurrence of the event. The example data examine whether a door-to-door outreach effort affected whether and when households drawn from 52 different neighborhoods in a small city responded to a survey about neighborhood conditions. The session will cover the data needed to do these models, the concept of censoring, exploratory analysis, software tools, testing multilevel survival models, interpreting output, graphical methods for displaying the results, and recommended resources for those who want to learn more about the technique.
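For readers unfamiliar with the technique described in this session, the sketch below shows the basic discrete-time survival setup: each case is expanded into a person-period data set, and a logistic regression models the period-specific probability (hazard) of the event. The simulated household/outreach variables are only loosely inspired by the example in the abstract; the data, variable names, and software choice (Python with statsmodels) are illustrative assumptions rather than the presenter's materials, and the multilevel extension (e.g., neighborhood random effects) would be layered on top of this via a mixed-effects logistic model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300                                     # hypothetical households
outreach = rng.integers(0, 2, n)            # 1 = received door-to-door outreach
hazard = np.where(outreach == 1, 0.30, 0.15)  # assumed weekly response probabilities

# Person-period expansion: one row per household per week until it responds
# (the event) or reaches the end of follow-up (censored).
rows = []
for i in range(n):
    for week in range(1, 9):                # follow households for up to 8 weeks
        responded = rng.random() < hazard[i]
        rows.append({"id": i, "week": week,
                     "outreach": int(outreach[i]),
                     "responded": int(responded)})
        if responded:                       # no more person-periods after the event
            break
person_period = pd.DataFrame(rows)

# Discrete-time hazard model: logistic regression of the period-specific event
# indicator on time (baseline hazard) and covariates.
model = smf.logit("responded ~ week + outreach", data=person_period).fit(disp=False)
print(model.summary())

# A multilevel version, as discussed in the session, would add cluster-level
# predictors and random effects (e.g., for neighborhoods) using a mixed-effects
# logistic regression routine rather than plain logit.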

Session Title: Using Technology to Enhance Aboriginal Evaluations
Expert Lecture Session 587 to be held in McKeldon Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Chair(s):
Joan LaFrance,  Mekinak Consulting,  joanlafrance1@msn.com
Presenter(s):
Andrea L K Johnston,  Johnston Research Inc,  andrea@johnstonresearch.ca
Discussant(s):
Katherine Tibbetts,  Kamehameha Schools,  katibbet@ksbe.edu
Abstract: Johnston Research Inc., an Aboriginal-owned and directed company, has made use of technology in several evaluation projects. This presentation will discuss the relevance of using technology in Aboriginal contexts. Technology assists in honoring the audio and visual communication of Aboriginal people. In particular, we will discuss relevant mediums and approaches. It is not so much the use of technology as the manner in which it is employed; we are concerned with the content, such as adding audio to visual presentations. In addition to looking at actual visual and audio examples used by Johnston Research Inc., we will discuss other questions. Is the technology easy to use? Can it be adapted to suit the needs of other programs? What about funding for high-tech research? How do you support technology in northern communities? We will discuss our recent experience with using technology and models.

Session Title: Programs for Lesbian, Gay, Bisexual, and Transgender Students: Interventions for Diverse Populations
Multipaper Session 588 to be held in Preston Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Sylvia Fisher,  United States Department of Health and Human Services,  sylvia.fisher@samhsa.hhs.gov
Making Schools Safe for All Students: Assessing the Utility of Supportive School Resources for Lesbian, Gay, Bisexual and Transgender Students of Color
Presenter(s):
Elizabeth Diaz,  Gay, Lesbian and Straight Education Network,  ediaz@glsen.org
Joseph Kosciw,  Gay, Lesbian and Straight Education Network,  jkosciw@glsen.org
Riley Snorton,  University of Pennsylvania,  rsnorton@asc.upenn.edu
Abstract: Advocates for safer schools often stress the importance of resources that improve school climate for and provide support to lesbian, gay, bisexual and transgender (LGBT) students, such as comprehensive school anti-harassment policies and Gay-Straight Alliances. Some research demonstrates the potential effectiveness of these types of resources. However, much of this research is based on findings obtained from samples of predominantly white LGBT students. The small body of literature that exists about LGBT students of color demonstrates that they often face additional challenges related to their race and ethnicity. Whether the resources traditionally supported by safer schools advocates meet the needs of LGBT students of color is an issue that remains largely unexplored. Using data from a national survey and focus groups with LGBT students of color, this study examines the accessibility and utility of traditionally recommended resources and the ways in which LGBT students of color seek support in school.
Jump-starting Student Leaders for Creating Safer Schools: An Evaluation of a Student Leadership Program for Addressing Lesbian, Gay, Bisexual and Transgender Issues in Secondary Education
Presenter(s):
Joseph Kosciw,  Gay, Lesbian and Straight Education Network,  jkosciw@glsen.org
Elizabeth Diaz,  Gay, Lesbian and Straight Education Network,  ediaz@glsen.org
Abstract: This paper will examine findings from an evaluation of a national leadership program for secondary school students. The purpose of JUMP-START, a project of the Gay, Lesbian and Straight Education Network (GLSEN), is to create a team of student leaders committed to creating safer schools for all students, with particular attention to the unique problems faced by many lesbian, gay, bisexual and transgender (LGBT) students. Using a quasi-experimental design, the evaluation examines differences between these student leaders and a comparison group of other students doing similar safe schools work in their communities. In particular, we will examine the effectiveness of the program related to the students' community involvement, civic engagement, self-efficacy, well-being and academic success. This paper will also examine how local context (e.g., family, school and community climate re: LGBT issues) may affect program delivery and the ability of the leadership program to effect positive change for the participants.

Session Title: Organizational Learning in the Context of Higher Education Institutions
Expert Lecture Session 589 to be held in Schaefer Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Assessment in Higher Education TIG
Chair(s):
Denise Seigart,  Mansfield University,  dseigart@mansfield.edu
Presenter(s):
Susan Boser,  Indiana University Pennsylvania,  sboser@iup.edu
Discussant(s):
William Rickards,  Alverno College,  william.rickards@alverno.edu
Abstract: Tensions currently exist over the role assessment will play in higher education. Regional accreditation bodies urge the use of assessment for organizational learning at the institutional level. The federal government seeks to require standardized, comparative assessment across institutions, linking funding to results. At stake is who will determine what the curriculum should be, with what standards, measured how, and how findings will be used; this goes to the heart of academic freedom. Yet faculty often resist movement toward assessment at all, despite the potential negative consequences and, curiously, despite higher education's mission regarding learning and research. This paper will 1) examine the characteristics of the higher education context that resist, and those that enable, using evaluation for learning, 2) explore how our learning about evaluation capacity-building and organizational learning might inform the current conflict, and 3) propose how this conflict might advance the theory and practice of organizational learning.

Session Title: Issues in Early Childhood and Preschool Evaluation
Multipaper Session 590 to be held in Fairmont Suite on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Michael P Mueller,  The Hospital for Sick Children,  michael.mueller@sickkids.ca
School and Community-based Early Education Programming: Implications for Evaluation
Presenter(s):
Kelly Hallberg,  Learning Point Associates,  khallberg@learningpt.org
Abstract: With increasing attention being paid to early childhood education, many state and federal policy makers are debating whether these services are best provided within traditional K-12 schools or by community-based private or non-profit organizations. This presentation will consider the implications of these settings for the evaluation of early education programs and services. Data will be drawn from two evaluations of Early Reading First, a federal grant program designed to improve early language and literacy development in low-income preschools. In one evaluation, the program is being implemented in a large urban school district. In the other, the program is being implemented in privately owned preschool centers within a large urban metropolitan area with support from a local non-profit organization. Consideration will be given to the impact of setting on evaluation methodology and communication of findings as well as initial differences and similarities in program implementation.
The Michigan School Readiness Program Longitudinal Evaluation: Hierarchical Models for Multinomial and Binary Outcomes
Presenter(s):
Elena Malofeeva,  High/Scope Educational Research Foundation,  lenam@highscope.org
Marijata Daniel-Echols,  High/Scope Educational Research Foundation,  marijatad@highscope.org
Abstract: The High/Scope Educational Research Foundation has been at the forefront of early childhood education for more than 35 years. The Research Department at High/Scope engages in curriculum research, program evaluation, and instrument development activities to serve the needs of practitioners, policy makers, other researchers, and community stakeholders. Our projects focus on identifying best practices in a full range of contexts—local, state, national, and international; early childhood, youth, and adult learning; and both privately and publicly funded programs. In this presentation, we will share some of the longitudinal results of the evaluation of the Michigan School Readiness Program, a state-funded preschool program offered through school districts and community agencies to help children who are poor or otherwise at risk of school failure start school ready to learn. Special attention will be paid to issues related to the analysis of binary and multinomial outcomes in multi-level data.

Roundtable: Measuring Success in Professional Exchange: International Visitor Leadership Program
Roundtable Presentation 591 to be held in Federal Hill Suite on Friday, November 9, 11:15 AM to 12:00 PM
Presenter(s):
Liudmila Mikhailova,  Delphi International of World Learning,  liudmila.mikhailova@worldlearning.org
Abstract: This paper discusses monitoring and evaluation techniques designed for the International Visitor Leadership Program (IVLP), a State Department-sponsored program for professional exchange. The paper explores a unique model of effective alliance between the U.S. government, national program agencies, and thousands of volunteers across the country (Centers for International Visitors) that work together to administer IVLP. Started in 1940 with Inter-American exchange, IVLP annually brings to the U.S. about 4,500 promising leaders from 185 countries in 50 areas of expertise. The paper analyzes M&E criteria for measuring IVLP success and discusses its findings. The evaluation design is crafted to measure short- and long-term outcomes in light of the State Department's program objectives. Program success is measured at four major levels: satisfaction with the program; acquisition of new subject-related knowledge; changed attitudes and increased civic responsibility; and organizational change. A multi-attribute evaluation model to measure success in international exchange will be presented and discussed.

Session Title: Peer Review and Learning: New Uses
Multipaper Session 592 to be held in Royale Board Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
David Roessner,  SRI International,  david.roessner@sri.com
Peer Review of Transformative Research: Strategies and Challenges in Identifying Innovation in Ex Ante Evaluation
Presenter(s):
Elmer Yglesias,  Science and Technology Policy Institute,  eyglesia@ida.org
David Kaplan,  Case Western Reserve University,  drk5@case.edu
Abstract: Scientific advancement is one of the keys to US competitiveness in a global marketplace. A growing priority for federal agencies is to identify and provide grants for transformative research, research that could establish a new paradigm and create a pathway to new scientific frontiers. The authors review current federal initiatives designed to support transformative research, propose a set of guidelines and practices for implementing a robust review system, and perform a first test of a statistical hypothesis that could be used in ex ante evaluation to identify transformative research.
Peer Reviews or Peers Reviewing? Peer Review as Policy Learning in Innovation, Research and Education
Presenter(s):
Erik Arnold,  Technopolis,  erik.arnold@technopolis-group.com
Isabelle Collins,  Technopolis,  erik.arnold@technopolis-group.com
Abstract: Increasing use is being made of peer review in research and education policy - not simply as a quality control measure but as a mechanism for mutual learning. The paper addresses the use of peer review in evaluation and learning exercises ranging from the 5-year assessments of the EU Framework Programmes and reviews of national innovation systems by the OECD to the so-called 'policy mix' reviews organized by the European Commission in the context of the 'Open Method of Coordination' of research and innovation policy, and peer review of university policies and practices. The paper discusses the strengths and weaknesses of the approach as a policy learning tool.

Session Title: Challenges Associated With the Implementation and Use of a Statewide Substance Abuse and Mental Health Outcome and Program Performance System
Panel Session 593 to be held in Royale Conference Foyer on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Robert Hubbard,  National Development and Research Institutes Inc,  hubbard@ndri-nc.org
Abstract: In North Carolina, outcomes for consumers with diagnoses of mental illness and/or substance abuse are monitored through the North Carolina Treatment Outcomes and Program Performance System (NC-TOPPS). NC-TOPPS started in 1997 as a paper-based instrument to collect data on individuals receiving specific substance abuse services. In 2005, this was expanded into a web-based system that is now used to collect information on the life outcomes of all individuals (ages 6+) receiving publicly funded mental health and substance abuse services. In the last two years, over 1000 providers have participated in this system, providing a pool of data that can be used to inform both research and the continuous improvement of the public service system. This panel presents some of the challenges associated with the implementation and use of this statewide system, highlighting the tensions between providers, policymakers and researchers at the local, regional and state level.
The Multi-leveled Tensions Impacting the Implementation and Use of the North Carolina Treatment Outcomes and Program Performance System (NC-TOPPS)
Margaret Cawley,  National Development and Research Institutes Inc,  cawley@ndri-nc.org
Gail Craddock,  National Development and Research Institutes Inc,  craddock@ndri-nc.org
The North Carolina Division of Mental Health, Developmental Disabilities and Substance Abuse Services adopted NC-TOPPS as a tool to provide benchmark data and measure progress toward positive treatment goals for North Carolina's substance abuse and mental health services. NC-TOPPS data are collected by clinicians through interviews with all individuals receiving public services at intake, three months, six months, one year, and every six months thereafter. This provides the state and local management entities with regular progress measures, creating a feedback loop for the continuous improvement of the system. Margaret Cawley is the Project Director for NC-TOPPS and will discuss the multi-leveled challenges resulting from the political context in which NC-TOPPS is implemented and used. Gail Craddock is the Senior Research Analyst connected to NC-TOPPS and will discuss the methodological tensions between researchers, practitioners and policymakers over the use of NC-TOPPS data.
The Factors That Facilitate and Impede the Use of the North Carolina Treatment Outcomes and Program Performance System by Multiple Stakeholders
Robert Hubbard,  National Development and Research Institutes Inc,  hubbard@ndri-nc.org
Deena Murphy,  National Development and Research Institutes Inc,  murphy@ndri-nc.org
Robert Hubbard is Director of the Institute for Community-Based Research of the National Development and Research Institutes, Inc. (NDRI-NC) and played a key role in the research and development of the North Carolina Treatment Outcomes and Program Performance System (NC-TOPPS) as a systematic tool to improve outcomes and enhance program performance as part of a total quality improvement approach. Deena Murphy is a Principal Research Analyst at NDRI-NC and her dissertation work involved looking at the organizational factors that facilitate use of evaluation data for learning. Based on these data and Robert Hubbard's extensive experience in the substance abuse field, the factors that facilitate and impede the use of NC-TOPPS by multiple stakeholders will be discussed.

Session Title: Building a World-wide Context for Evaluation: A Discussion With the American Evaluation Association's International Committee
Think Tank Session 594 to be held in Hanover Suite B on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Donna Podems,  Macro International Inc,  donna@otherwise.co.za
Discussant(s):
Ross Conner,  University of California, Irvine,  rfconner@uci.edu
Alexey Kuzmin,  Process Consulting Company,  alexey@processconsulting.ru
Thomas E Grayson,  University of Illinois at Urbana-Champaign,  tgrayson@uiuc.edu
Gail Barrington,  Barrington Research Group Inc,  gbarrington@barringtonresearchgrp.com
Abstract: The AEA values a multicultural, global and international understanding of evaluation practices and has a commitment to understand and build awareness of the worldwide context for evaluation. During this session, AEA's International Committee will facilitate an open discussion to gather a broad range of insights regarding how AEA should consider operationalizing its mission statement with regard to AEA's international role. Specifically, how should AEA, if at all, support and develop relationships and collaborations with evaluators around the globe to gain a better understanding of international evaluation issues?

Session Title: Beyond the Report: Using Evaluations to Create a College-going Culture
Panel Session 595 to be held in Baltimore Theater on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Presidential Strand and the College Access Programs TIG
Chair(s):
Janet Usinger,  University of Nevada, Reno,  usingerj@unr.edu
Abstract: Evaluations are resource intensive. Ideally, the program and organization are the beneficiaries of this intense resource commitment through meaningful dialogue between evaluators and project staff and effective feedback. More often, however, evaluations are directed upstream to policy-makers and not necessarily toward the individuals directly involved in the day-to-day activities of the project. Two Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) projects have designed their evaluations to serve two roles: to capture the impact of the project activities and to promote organizational learning. One project uses a logic model to create a common understanding of the theoretical grounding of project activities. Another uses a longitudinal case study as a means of reflecting organizational growth and development back to the instructional leadership team of participating schools. This panel will present details of the design and implementation of these two approaches that use evaluations to create a college-going culture.
Using a Logic Model for Program Development, Evaluation and Organizational Learning
Mari Wilhelm,  University of Arizona,  wilhelmm@ag.arizona.edu
Margaret R Stone,  University of Arizona,  mrstone@ag.arizona.edu
The Tucson GEAR UP program utilizes a logic model for building trust with school and program partners, determining an evaluation protocol, providing a guide for short- and long-term outcome assessment, disseminating findings, and exploring longer-term sustainability. This presentation will provide a strategy for incorporating theories of positive youth development and change into a logic model. Examples will be provided as to how such a logic model is used to create dialogue with partners regarding their own beliefs, expectations and hopes for youth. The goal is to initiate a shared perspective relative to the project's objectives. Examples will also be provided as to how concepts within the logic model are used to guide the selection of survey items and how results are shared with partners to enhance program development and, over time, observe change. Finally, a plan for use of the logic model to assess system change and sustainability will be discussed.
Using a Longitudinal Case Study Design for Evaluation and Organizational Learning
Janet Usinger,  University of Nevada, Reno,  usingerj@unr.edu
Bill Thornton,  University of Nevada, Reno,  thorbill@unr.edu
Edith Rusch,  University of Nevada, Las Vegas,  edith.rusch@ccmail.nevada.edu
The Nevada State GEAR UP project includes a longitudinal case study methodology as a means of providing feedback for the six years of the project. Middle school principals and instructional leaders have used the results to inform educational practice and create an environment of organizational learning. This presentation will provide details about the rubrics used for analyzing ethnographic observations and documents to assess a college-going culture of the middle school. The interview protocol will be detailed, as will the survey instruments used to assess academic optimism. In addition to the instruments and analytical processes, the feedback mechanism, structured dialogue and appreciative inquiry processes will be described. The two presenters approach the case study from different perspectives. One has been involved in university outreach and qualitative research; the other has experience as a principal, superintendent, university academic, and quantitative evaluator.

Session Title: Improving Payment Accuracy in the Child Care Program: Error Rate Measurement in the Child Care and Development Fund (CCDF)
Demonstration Session 596 to be held in International Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Carol Pearson,  Walter R McDonald & Associates Inc,  cpearson@wrma.com
Harry Day,  Walter R McDonald & Associates Inc,  hday@wrma.com
Abstract: This demonstration will review a methodology for measuring improper payments in the Child Care and Development Fund (CCDF). The CCDF is a $5 billion block grant that allows States maximum flexibility to set critical policies such as establishing eligibility criteria. The US Department of Health and Human Services (USDHHS) Child Care Bureau (CCB) is following the rulemaking process to impose error rate measurement and reporting requirements on all States receiving CCDF funds. The presentation will review the development, implementation, and utility of the following key components of the error rate methodology, based on a pilot study implemented in nine States: sampling procedures; computation of five error rate measures; data collection instruments; the record review process; computation of a national estimate of the annual amount of improper payments in the CCDF; and a potential evaluation methodology for estimating cost savings through program intervention.

Session Title: Quality Indicators in Health Care: From Training to Accreditation
Multipaper Session 597 to be held in Chesapeake Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Molly Engle,  Oregon State University,  molly.engle@oregonstate.edu
Anglo-Saxon Conceptualizations of Performance in Accreditation
Presenter(s):
Pernelle Smits,  University of Montreal,  pernelle.smits@umontreal.ca
François Champagne,  University of Montreal,  francois.champagne@umontreal.ca
Damien Contandriopoulos,  University of Montreal,  damien.contandriopoulos@umontreal.ca
Claude Sicotte,  University of Montreal,  claude.sicotte@umontreal.ca
Johanne Préval,  University of Montreal,  johanne.mc.preval@umontreal.ca
Abstract: Accreditation processes are growing worldwide as a means to assess healthcare organizations' overall performance, and different countries conceptualize performance in different ways. Our comparative analysis of the accreditation manuals from Canada, the USA, and Australia distinguishes several conceptualizations of performance. The accreditation manuals were selected from the 2003 WHO report on quality and accreditation in healthcare services. For each manual, the standards related to management were classified by two reviewers according to a selected integrative framework that defines health care organizations' performance with four dimensions and their alignments. Such comparative analysis is a first step toward better understanding the relationship between concepts of performance and associated management styles.
Developing Quality Leaders in Healthcare: Evaluating the Impact of a Multi-faceted Learning Intervention
Presenter(s):
Daniel McLinden,  Cincinnati Children's Hospital Medical Center,  daniel.mclinden@cchmc.org
Gerry Kaminski,  Cincinnati Children's Hospital Medical Center,  gerry.kaminski@cchmc.org
Abstract: Embedding quality improvement (QI) methods in a large organization is both an important and a complex undertaking. In a medical center, the importance is heightened due to the potential to impact the medical and quality-of-life outcomes of patients and families. The session presents the results of a multifaceted organizational intervention to develop quality leaders. In addition to sharing the results, this project also offers a unique insight into the relationship between QI methodology and program evaluation and how evaluators can learn from this measurement-based approach to intervening in organizations.

Session Title: Learning From Leaders: Evaluating Popular Culture Artifacts as a Development Tool
Panel Session 598 to be held in Versailles Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Jamie Callahan,  Texas A&M University,  jcallahan@tamu.edu
Discussant(s):
Kelly Hannum,  Center for Creative Leadership,  hannumk@leaders.ccl.org
Abstract: This panel will engage participants in a series of storytelling experiences that emphasize the evaluation of leadership in characters and self in the pursuit of development of personal leadership. The panel begins with an exploration of the link between leadership, learning, and evaluation. We then share a series of leadership stories drawn from a leadership development program that uses popular culture artifacts such as film, television, fiction, and non-fiction as learning vehicles. The discussant will integrate these presentations by demonstrating the role of evaluation in learning to lead and in developing others to lead. We conclude the panel by engaging audience members to share their self-evaluations of leadership.
Leadership, Learning, and Evaluation
Jamie Callahan,  Texas A&M University,  jcallahan@tamu.edu
This presentation describes the interconnection between leadership, learning, and evaluation. Evaluation is an essential component of leadership development, not only from a program design and management perspective, but from an individual learning perspective as well. Enhancing participants' ability to use evaluative techniques to increase understanding of self and context is an important aspect of leadership development. However, the opportunity to learn about and apply evaluation skills is not often an aspect of leadership development programs. Examples of ways in which participants can learn about and apply evaluation skills in leadership development programs are shared.
Popular Culture as a Means to Enhance Learning
Manda Rosser,  Texas A&M University,  mrosser@tamu.edu
Popular culture artifacts provide rich information that can be applied to prompt deeper self-reflection and contextual learning. Using popular culture artifacts as learning tools is helpful in that they provide a common reference point for conversations about topics that may be otherwise difficult to discuss. Another advantage of using popular culture artifacts is they are not directly tied to participants' experiences and, as a result, an honest conversation can occur about "those people" or "that situation" which might otherwise provoke heated and unproductive interactions. A third advantage is that popular culture artifacts are often engaging and exciting for learners, invoking a wide array of emotions. Examples of ways in which participants can learn using popular culture artifacts combined with evaluative methods are shared.
