
Session Title: Learning Systems and Systems of Learning in Practice
Skill-Building Workshop 372 to be held in International Ballroom A on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Presidential Strand
Presenter(s):
Bob Williams,  Independent Consultant,  bobwill@actrix.co.nz
Kate McKegg,  The Knowledge Institute Ltd,  kate.mckegg@xtra.co.nz
Abstract: The knowledge management field has moved beyond dumping wheelbarrows of data into people's laps. It now recognizes that learning is essentially a social process, in which communities of practitioners work with data to develop new understandings and meanings. Knowledge management is more about the social process of facilitating learning communities than the technical process of delivering data. In contrast, the evaluation field still predominantly operates from a technical paradigm. Few established evaluation methodologies are explicitly based on learning theories, and most debates focus on improving wheelbarrow technique. This workshop will explore in detail the question of how evaluation can promote learning and will demonstrate a method drawn from learning and systems theories.

Session Title: Distributed Leadership & Social Network Analysis in K-12 Education
Multipaper Session 373 to be held in International Ballroom B on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Becky Cocos,  Georgia Institute of Technology,  becky.cocos@ceismc.gatech.edu
Abstract: Social network analysis (SNA) is gaining popularity in K-12 evaluation, and this technique allows us to measure the degree of communication, growth, collaboration, and leadership in and between schools, districts, and partnering organizations. This multi-paper session will review the literature regarding SNA in K-12 education, introduce the steps and preliminary findings involved in using SNA to measure district-wide shared leadership, and present findings discovered through SNA of growth in a K-12 school-university partnership over four years. SNA theory, methodology, survey samples, results, and software tools will be discussed in the context of their use in two specific studies, as well as in other studies being conducted around the nation. SNA has the potential to reveal and document the different types of social networks within schools, districts, and partnerships that either accelerate or block increased student achievement.
Social Network Analysis in K-12 Education Literature Review
Andrew Kerr,  Georgia Institute of Technology,  andrew.kerr@ceismc.gatech.edu
Becky Cocos,  Georgia Institute of Technology,  becky.cocos@ceismc.gatech.edu
Recent interest in social network analysis (SNA) has been spurred largely by the development of easily available, simple-to-use analysis software, plus a growing awareness of the importance of understanding how ideas are transferred across networks of people. More than ever, evaluators are seeing a need for SNA in schools. This paper will survey the history of SNA through its role in understanding and improving K-12 education. The philosophies and science driving school-based SNA in recent years will be discussed through a review of the related literature. Many evaluators are making great strides using SNA in schools to examine networks of relationships, leadership, and connected ties. This paper will serve as a prelude to a presentation of our own SNA research findings in a Georgia school system as well as in a higher education partnership with a K-12 school.
Distributed Leadership and Social Network Analysis at the School District Level
Becky Cocos,  Georgia Institute of Technology,  becky.cocos@ceismc.gatech.edu
Andrew Kerr,  Georgia Institute of Technology,  andrew.kerr@ceismc.gatech.edu
Tom McKlin,  Georgia Institute of Technology,  tom.mcklin@gatech.edu
Many educational leadership theorists make the case that shared leadership structures within a school improve student performance. Guided by this philosophy, Georgia's Leadership Institute for School Improvement (GLISI) is piloting a shared leadership program in Georgia's Paulding County school system. Understanding the distribution of leadership across a school system requires understanding the structure of the social network of that system. Social network analysis (SNA) is a method by which GLISI assesses a school's level of shared leadership. SNA reveals not only the structure of a school district's network in terms of relationships and interdepartmental ties, but also which teacher leaders are emerging and what environmental standards need to be implemented to encourage shared leadership. This paper will use GLISI's analysis of Paulding's school system as a dynamic example of how SNA augments our understanding of education, and how that understanding in turn leads to increased student performance.
Evaluating University-High School Partnerships Using Social Networks and Graph Analysis
Marion Usselman,  Georgia Institute of Technology,  marion.usselman@ceismc.gatech.edu
Donna Llewellyn,  Georgia Institute of Technology,  donna.llewellyn@cetl.gatech.edu
Gordon Kingsley,  Georgia Institute of Technology,  gordon.kingsley@pubpolicy.gatech.edu
Educational partnerships created between institutions of higher education and K-12 educational communities are complicated entities that defy easy assessment. Good partnerships have a tendency to grow and develop in wholly unanticipated directions, forming networks and having effects far beyond the scope of the initial project. This growth is the basis for a partnership that can be sustained beyond the time limits of the initial project's external funding. Using mathematical graph analysis and social science network theory, Georgia Tech has begun to study the network of interactions that developed as part of one specific university-high school NSF GK-12 partnership program, the Student and Teacher Enhancement Partnership (STEP) program. This paper will analyze the growth from Year 1 to Year 5 of the partnership between Georgia Tech and a 99% African American metro-Atlanta high school to track partnership growth, health, and structure and to correlate this growth with the attainment of educational objectives.

Session Title: Evaluation From a Self-organizing Versus Predictive Systems Perspective: Examples From the Field
Panel Session 374 to be held in International Ballroom C on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Beverly Parsons,  InSites,  bevandpar@aol.com
Abstract: Using three cluster evaluations, the presenters illustrate the importance of investigating the unplanned and unpredictable dimensions as well as the planned and predictive dimensions of complex human systems. They explore distinctions between organized and self-organizing system dynamics and how evaluation designs need to consider which aspect of the system is being addressed. They emphasize self-organizing systems and how evaluations can be designed to address these self-organizing systems that persist in a state of disequilibrium and have a densely intertwined web of interacting agents (e.g., subgroups, individuals). Each agent is responding to other agents and the environment as a whole and continually adapting. The complexity of such systems prevents predictions using models based on a few variables as can be done for organized dimensions of a system. The session is designed to include small group interactions around each case and a concluding discussion to integrate ideas across the three cases.
Contrasting Evaluation Designs for Predictive and Self-organizing Dimensions of Complex Human Systems
Beverly Parsons,  InSites,  bevandpar@aol.com
This presentation provides a framework for understanding how differences in dynamics within a given system can be viewed through different evaluation approaches. Many traditional evaluation methods are grounded in the assumption that evaluators are evaluating predictable, controllable systems where cause-and-effect and rational principles can be tested, applied, repeated, and validated with comfortable regularity. In fact, most of the systems that evaluators examine today are complex adaptive systems. Complex adaptive systems include a self-organizing dimension that is emergent rather than hierarchical and whose future is often unpredictable. Such a system can well be a place of high creativity, innovation, and new modes of operation, but these features may be missed if the evaluation is approached only through the traditional lens. The presenter will illustrate the use of both predictive and self-organizing evaluation designs using a cluster evaluation example involving eight complex partnerships in a Canadian provincial health care initiative.
Dynamic Evaluation
Glenda Eoyang,  Human Systems Dynamics Institute,  geoyang@hsdinstitute.org
A provincial Canadian health service conducted a research project to explore the influence of lateral mentorship and communities of practice on the development of interprofessional practice among allied health professionals. Suspecting that the project would set conditions for self-organizing dynamics, the team chose to implement an evaluation to uncover and articulate patterns that self-organized over time, look for unexpected consequences, investigate relationships among individuals and sites, and assess the system-wide influence of the project. This presentation describes the process of collaborative design, data collection, and analysis used to track and analyze shifts that occurred over the course of the project across multiple sites, levels, and constituencies.
Co-evolving Evaluation
Patricia Jessup,  InSites,  pat@pjessup.com
Over a period of years, InSites has conducted evaluations of foundation-funded programs related to incorporating Asia into the curriculum of K-12 schools in the U.S. In the cluster evaluation of organizations providing study tour programs to Asia for K-12 educators, predictive methods initially were used to look at program outcomes for the educators. Then the focus of the evaluation shifted to self-organizing dynamics related to sustainability of attention to Asia in schools. Evaluation findings were reported to program leaders through means that engaged the leaders in dialogue and bridged previously uncrossed boundaries. Program leaders self-adjusted as did the evaluation team. This presentation illustrates how reflective questions were used to engage in dialogue about evaluation findings, how the users of the evaluation developed their own understanding of how those findings could shape their work, and how this led to a shift in the evaluation.

Session Title: Building Evaluation Capacity in Youth Serving Organizations for Bullying Prevention
Panel Session 375 to be held in International Ballroom D on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Nancy Csuti,  The Colorado Trust,  nancy@coloradotrust.org
Discussant(s):
Nancy Csuti,  The Colorado Trust,  nancy@coloradotrust.org
Abstract: The Colorado Trust has been supporting evaluation capacity building in youth-serving initiatives since 2000. The After-School Initiative (2000-2005) laid the groundwork for learning from evaluation, lessons that have been carried over into the current Bullying Prevention Initiative (BPI). Over 45 grantees working in 100 schools and non-profit organizations are infusing cultural competency and positive youth development into bullying prevention programming. Evaluation is a major component of the initiative. This session will feature the program officer, who will present an overview of BPI and describe the role evaluation and evaluation capacity building have played in this initiative. The evaluator will describe how technology as well as evaluation technical assistance has supported evaluation capacity building. Finally, a representative of the program technical assistance team will describe how the ongoing evaluation results inform their work as well as the work of the grantees. Insights into the sustainability of the grantee evaluation component once the funding ends will be discussed. Challenges and lessons learned will be shared.
Incorporating Evaluation into Bullying Prevention Programming
Ed Lucero,  The Colorado Trust,  ed@coloradotrust.org
This person was selected to be a panel member because of his experience as the initiative program officer and as the individual who plays the lead role in ensuring that learning through evaluation is a crucial component of the process. His professional background in positive youth development and a strength-based focus, along with an openness to try new ideas and directions that evaluation uncovers, makes him an essential part of the panel. He will talk about the history of the initiative, why the foundation chose to fund in this area, how learning from past initiatives (specifically the After-School Initiative) served to inform this one, and the role of the mixed-methods evaluation in the ongoing Bullying Prevention Initiative process.
Providing Customized Technical Assistance within a Foundation: Directed Evaluation Framework
Robin Leake,  JVA Consulting LLC,  robin@jvaconsulting.com
This person leads the independent evaluation team that is implementing the statewide initiative's evaluation as well as the customized web-based systems for each grantee organization. Over 75 individual sites across Colorado participate in the initiative evaluation. Two surveys are conducted each school year, data are made available on the web-based system immediately, and assistance is provided to each grantee in creating useful reports. The evaluation must by necessity be flexible enough to meet the grantees' individual needs while also meeting the overarching needs of the funder. Within the constraints of the evaluation design created by the foundation, the independent evaluation team has created a system by which individual grantee needs can be met. The challenges, and how they are being overcome, will be highlighted in this presentation.
Using Evaluation to Guide Program Related Technical Assistance: The Real World Intervenes
Jill Adams,  Colorado Foundation for Families and Children,  jilljadams@msn.com
While using evaluation findings to frame technical assistance for grantees sounds like a solid and feasible use of evaluation, the realities presented in schools and communities can overshadow good intentions. This presenter represents a team of technical assistance providers who offer opportunities to schools and community-based organizations across the state to improve their bullying prevention work. At the same time, schools are trying hard to relate all non-academic programs to improved academic performance. How to balance the school's need for academic outcomes with the long-term needs of the youth being served is the challenge facing this presenter. She will discuss her experiences with the evaluation, what factors influence how well the evaluation is received by the grantees, and what factors influence the ultimate use of evaluation findings. Challenges in implementing "evidence-based" programs, as well as how some evidence-based programs have been enhanced through the cultural competency technical assistance, will be discussed.

Session Title: Looking Inside the Research Center Black Box: Using Evaluation Research to Promote Organizational Effectiveness of Scientific Research Centers
Multipaper Session 376 to be held in International Ballroom E on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Denis Gray,  North Carolina State University,  denis_gray@ncsu.edu
Discussant(s):
Gretchen Jordan,  Sandia National Laboratories,  gbjorda@sandia.gov
Abstract: Surprisingly, most evaluations of research centers (including collaborative centers) have been conducted at the program level of analysis. That is, they have examined processes and outcomes for a whole program while ignoring variation across and within centers (e.g., for involved students). This session will show how one can move beyond this "black box" approach to evaluation by describing an ongoing NSF-sponsored center evaluation effort that attempts to improve center management through a combination of short-term survey feedback methodology and a series of more targeted studies. These studies attempt to link specific center- and/or individual-level variation with important center and stakeholder outcomes. Targeted studies will examine: factors that affect graduate student outcomes; the role of leadership; factors affecting the success of multi-institutional center partnerships; and factors affecting the success and survival of "graduated" centers. Implications for organizational learning and future research will be discussed.
Evaluating Leadership Development in a Research and Development (R&D) Context: Assessing Alpha, Beta, and Gamma Change
Bart Craig,  North Carolina State University,  bart_craig@ncsu.edu
Leaders in research and development (R&D) settings face specialized challenges that require them to motivate and support creative professionals, obtain resources in unstructured environments, and measure success in the absence of well-defined metrics. This presentation will report on the evaluation of a leadership development intervention specifically tailored to the needs of directors of industry-university cooperative research centers (IUCRCs). Directors received feedback and coaching based on results from 360-degree performance assessment and personality assessment. The 360-degree assessment instrument was readministered approximately four months after the initial feedback. The self-ratings from the two administrations will be compared for evidence of beta and gamma change using item response theory and confirmatory factor analysis. Alpha change will be assessed after controlling for beta and gamma change, if those are found to have occurred. Results will be interpreted in terms of implications for evaluating leadership development in this specialized setting.
A Multi-variate Study of Graduate Student Satisfaction and Other Outcomes Within Cooperative Research Centers
Jennifer Schneider,  North Carolina State University,  jsschnei@ncsu.edu
While it is assumed that the multidisciplinary, team-based, experiential elements of center-based training contribute to a variety of educational advantages for graduate students, including career opportunities, increased scholarly productivity, and development of soft skills (teamwork, communication, leadership), there is little empirical data to support these effects and less identifying which center features are truly instrumental. A cross-sectional predictive analysis was conducted to identify which individual center mechanisms positively or negatively influence graduate student outcomes. Data were collected from graduate students (n=190, 37% useable response rate) working in the National Science Foundation's I/UCRC and STC programs (34 centers, 87% response rate) via a web-based questionnaire. Student outcomes include satisfaction, perceived skills, organizational commitment, scholarly achievements, career goals, and feelings of a competitive advantage. Results indicate that consistent and powerful predictive variables include: Multidisciplinary Center Experience, Experiential Expanded Center Experiences, Technical Project Involvement, and frequency of interactions with thesis/dissertation committee and Center industry members.
Enhancing Collaboration Between Historically Black Colleges and Universities (HBCU) and Research Extensive Universities
Andrea Lloyd,  North Carolina State University,  tejidos24@yahoo.com
Collaborative partnerships between historically black colleges and universities (HBCUs) and research extensive universities (REUs), often through involvement in centers, are regarded as a very promising mechanism for having the dual effect of maintaining the S&E focus of African-American students throughout their educational pursuits and of strengthening the institutions, HBCUs, that prepare a significant number of our African-American scientists (Tanaka & Gladney, 2004). The present research is a mixed methods study that explores the factors relevant to the adoption of a partnership strategy and to the success or failure of the sampled HBCU/REU partnerships, and compares factors between HBCU/REU partnerships and REU/REU partnerships. Data have been collected from 29 partnership participants across several universities. Key constructs to be examined include satisfaction, perceived obstacles and facilitators.
Predictors of Cooperative Research Center Post-Graduation Survival and Success
Lindsey McGowen,  North Carolina State University,  lindseycm@hotmail.com
Industry/University Cooperative Research Centers are supported by funding from NSF but, like other center programs, are expected to achieve self-sufficiency after a fixed term (ten years). However, there is little evidence about the extent to which government-funded programs are able to make this transition. This study attempts to identify the factors that predict center survival and success after centers have graduated from NSF funding. Archival data and qualitative interviews with Center Directors are used to explore the fate of I/UCRCs post-graduation. The study examines infrastructure, transition planning, center management, faculty involvement, institutional factors, research area, industrial factors, and educational programs to determine whether these constructs predict center success in terms of financial viability, industry engagement and support for multidisciplinary collaborative research, university support, faculty satisfaction and commitment, support for students, and technology transfer.

Session Title: Organizational Learning and Evaluation Capacity Building TIG Business Meeting and Presentation: Learning and Meaning in Organizations: How Evaluation Stops the DRIP
Business Meeting Session 377 to be held in Liberty Ballroom Section A on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
TIG Leader(s):
Susan Boser,  Indiana University of Pennsylvania,  sboser@iup.edu
Jean King,  University of Minnesota,  kingx004@umn.edu
Rebecca Gajda,  University of Massachusetts, Amherst,  rebecca.gajda@educ.umass.edu
Emily Hoole,  Center for Creative Leadership,  hoolee@leaders.ccl.org
Presenter(s):
Rebecca Gajda,  University of Massachusetts, Amherst,  rebecca.gajda@educ.umass.edu
Sharon Rallis,  University of Massachusetts, Amherst,  sharonr@educ.umass.edu
Abstract: Organizational stakeholders often find themselves in the state of being data rich, but information poor. Evaluation, the process through which meaning is made about the merit, quality and worth of programs and activities, has the capacity to transform data into information that can be used by organizational personnel to prevent conditions from eroding, address challenges as they arise, improve organizational adaptation, and sustain those changes that have been determined to be worthwhile. In this presentation, Rebecca Gajda and Sharon Rallis will demonstrate through an interactive skit the issues and tensions regarding the role of evaluation (and the evaluator) in building an organization’s capacity for learning, and present frameworks based on their own evaluation practice for thinking about how evaluation might be utilized to stop organizational DRIP.

Session Title: Cost and Sustainability Checklists: Theory and Practice
Panel Session 378 to be held in Liberty Ballroom Section B on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Daniel Stufflebeam,  Western Michigan University,  dlstfbm@aol.com
Discussant(s):
Michael Scriven,  Western Michigan University,  scriven@aol.com
Brian Yates,  American University,  brian.yates@mac.com
Mary Ann Scheirer,  Scheirer Consulting,  maryann@scheirerconsulting.com
Abstract: Checklists in evaluation are tools to support practitioners in the design, implementation, and metaevaluation of evaluations. They may not quite represent a methodology or theory in their own right, but they commonly incorporate a set of components, dimensions, criteria, tasks, and strategies that are critical in the evaluation of programs, personnel, systems, etc. (cf. Scriven, 2005). This session introduces two checklists and implications for their application in the field. First, Nadini Persaud introduces her checklist for conducting financial and economic analyses. Second, Daniela Schroeter will share her checklist for evaluating sustainability in light of the logic of evaluation and the social, economic, and environmental dimensions promoted in the field of sustainable development. Third, Tererai Trent and Otto Gustafson discuss challenges and opportunities in applying these checklists in evaluation practice, using the example of an international development impact evaluation. Each paper will be scrutinized by an expert discussant.
A Cost Analysis Checklist Methodology for Use in Program Evaluations
Nadini Persaud,  Western Michigan University,  npersaud07@yahoo.com
A fundamental but often overlooked problem in professional evaluation is the exclusion of serious and sophisticated cost analysis studies. Knowing that a program is responsible for certain outcomes is of little value in a political environment; the quintessential question at the end of the day is "costs" (Chelimsky, 1997). Is the program cost-effective and cost-feasible, and how cost-effective is it compared to similar programs? This paper will present a cost analysis checklist methodology that can be used by novice evaluators to conduct financial and economic analyses. The checklist is divided into six main sections: types of costs and benefits, valuation of costs and benefits, other issues that need to be considered in cost analysis, the discount rate, project appraisal methodologies, and reporting. The cost analytical methodology must be selected with great care, because procedures chosen on the basis of popularity, evaluator familiarity, or ease of use may lead to fallacious evaluative conclusions.
The Logic and Methodology of Sustainability Evaluation: A Checklist Approach
Daniela C Schroeter,  Western Michigan University,  daniela.schroeter@wmich.edu
The logic of evaluation is generally subsumed within four steps: (i) identification of criteria, (ii) determination of performance standards on each dimension, (iii) collection and analysis of factual data on each dimension, and (iv) synthesis of results into statements of merit, worth, and significance (cf. Scriven, 1982; Fournier, 1995). This checklist builds on the logic of evaluation and applies it specifically to the evaluation of sustainability. Evaluations of sustainability encompass a wide range of evaluands in numerous contexts, including, for example, policies, systems, institutions, communities, programs, and personnel. The checklist incorporates the relevant literature, research, and evaluations, merging best practices with current thought in the logic and methodology of sustainability evaluation. It is a tool for holding evaluands accountable and suggesting improvements, in light of growing demands to meet consumers' needs as well as the social, economic, and environmental aspects of evaluands.
The Validity and Utility of the Cost and Sustainability Checklists: A Field-Trial in an International Aid Evaluation
Otto Gustafson,  Western Michigan University,  ottonuke@yahoo.com
Evaluators are inundated with various checklists and other tools, some of which are more academic than pragmatic. In order to assess the validity and utility of evaluation checklists, it is imperative that they be field tested. Additionally, when using checklists in developing countries it is necessary to examine whether these checklists are compatible with country-specific contexts and values, as opposed to simply satisfying donor-driven accountability mechanisms. This paper will present a case study of the application of the previously discussed cost analysis and sustainability checklists in the context of an international aid program evaluation conducted in Africa. The presenters will reflect on the soundness and utility of both checklists as well as the challenges and advantages experienced in integrating and implementing the checklists during the planning, execution, and metaevaluation of the evaluation. Finally, this paper will discuss implications of the field-testing results for improvement.

Session Title: Evaluating Department of Justice Faith and Community-Based Initiatives That Serve Victims of Crime
Panel Session 379 to be held in Mencken Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Carrie Mulford,  United States Department of Justice,  carrie.mulford@usdoj.gov
Abstract: This panel will provide an overview of two National Institute of Justice evaluations of Department of Justice faith-based and community organization (FBCO) initiatives targeting victims of crime. The first is the evaluation of the Office on Violence Against Women's Rural Domestic Violence and Child Victimization Enforcement Grant Program Special Initiative: FBCO Pilot Program that provides funding and technical assistance to 54 organizations that serve domestic violence victims in rural areas via three intermediary organizations. The second is the evaluation of the Office for Victims of Crime's Helping Outreach Programs to Expand II Program that funds 28 organizations that serve urban underserved victims using a single national intermediary. Both presentations will focus on the value added of using an intermediary model to provide funding and technical assistance to small faith-based and community organizations. The evaluators will also discuss the challenges of designing and implementing evaluations of Federal FBCO initiatives.
Evaluating Department of Justice Faith-based Programs: An Overview
Carrie Mulford,  United States Department of Justice,  carrie.mulford@usdoj.gov
Dr. Carrie Mulford will provide an overview of the grant programs that are being evaluated under the Department of Justice's faith-based initiative. The two evaluations that make up this panel came out of a congressional mandate that the National Institute of Justice (NIJ) conduct evaluations of the faith-based and community organization (FBCO) initiatives of the Office on Violence Against Women (OVW) and the Office for Victims of Crime (OVC). The first presentation will give a brief overview of OVW's Rural Pilot Program and OVC's HOPE II Program. Due to the politically sensitive and high profile nature of the initiatives and the mandate for evaluation, the evaluators of these programs have faced unique challenges in developing and implementing their evaluation designs. This presentation will also highlight some of the ways in which the political climate has affected the evaluation efforts.
Evaluation of the Rural Domestic Violence and Child Victimization Enforcement Grant Program Special Initiative: Faith and Community-based Pilot Program
Andrew Klein,  Advocates for Human Potential,  aklein@ahpnet.com
Dr. Andrew Klein will present on the Rural Pilot Program Evaluation. The Rural Pilot Program involves three intermediaries: one regional (three Idaho counties), one statewide (Wyoming), and one national. This presentation will focus on research findings around two issues. First, after describing the three intermediary agencies selected by the Office on Violence Against Women, the presentation will compare the intermediaries in terms of each one's success at recruiting faith- and community-based organizations in rural America to deliver services to victims of domestic violence. Second, it will discuss some of the issues involved in reaching out to 'faith-based' organizations, including definitional issues as well as the perhaps inherent tension arising from differences in the philosophies and orientations of the faith and secular domestic violence advocacy communities in their understanding and delivery of services for domestic violence victims.
Helping Outreach Programs to Expand II (HOPE II): Faith-based and Community Organization Program Evaluation
Carrie Markovitz,  Abt Associates Inc,  carrie_markovitz@abtassoc.com
Dr. Carrie Markovitz will present on the HOPE II Evaluation. The HOPE II evaluation focuses on the ability of a single national intermediary to improve the capacity of small FBCOs as compared to a comparison sample of organizations that did not receive funding. Dr. Markovitz will discuss the inherent difficulties in evaluating capacity changes among organizations of varying sizes, sophistication, and aspirations. She will also discuss the grant design, specifically the 'intermediary model,' and the evaluation's findings on its effectiveness in building capacity. Dr. Markovitz is the project director for the HOPE II evaluation and she has extensive experience evaluating faith-based and community organizations. Dr. Markovitz currently contributes to two additional evaluations of government faith-based initiatives.

Session Title: Connecting People and Nature: Models of Environmental Education
Multipaper Session 380 to be held in Edgar Allan Poe Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
William Michaud,  SRA International Inc,  bill_michaud@sra.com
Assessing the Effectiveness of a Place-based Conservation Education Program by Applying Utilization-focused Evaluation
Presenter(s):
Lisa Flowers,  University of Montana,  flowers@boone-crockett.org
Abstract: This study assessed the effectiveness of a place-based conservation education program called Hooked on Fishing, modeled after the national Hooked on Fishing, Not on Drugs program. Using a quasi-experimental nonequivalent group design, students received a pre-survey, post-survey, and an extended post-survey. Teachers voluntarily participated in an Internet survey, and program instructors voluntarily participated in a structured open-ended telephone interview. The research question was: Does the frequency of outdoor experiences have significant effects on students' knowledge, skills, attitudes, and intended stewardship behaviors? A key component of this study was the decision to conduct the evaluation using the utilization-focused evaluation approach developed by Michael Q. Patton. The motive for using this approach was to promote the usability and accuracy of the evaluation results. The results are considered to have a better chance of being applied by program stakeholders not only to gauge program effectiveness but also to improve the program.
The Challenges of Evaluating Campus-Community Partnerships for an Environmental Service-learning Program
Presenter(s):
Christa Smith,  Kansas State University,  christas@ksu.edu
Christopher Lavergne,  WaterLINK,  lavergne@ksu.edu
Abstract: Evaluators frequently perform formative evaluations more intensively during the beginning stages of a project, although it is also common to conduct formative aspects of evaluation throughout the life of the project (Frechtling et al., 2002). Process evaluation, a form of formative evaluation, focuses on the impact the program has on the participants at various stages of implementation. The current study focuses on the unexpected changes that arise during the course of a “real-world” evaluation project and how evaluators adapt their formative evaluation to assist clients in meeting their short-term and long-term goals and objectives.
Second Year Evaluation of an Outdoor Recreation Program for At-risk 5th Graders in an Urban School District: Adding Teacher and Parent Assessment Measures
Presenter(s):
Gregory Schraw,  University of Nevada, Las Vegas,  gschraw@nevada.edu
Lori Olafson,  University of Nevada, Las Vegas,  lori.olafson@unlv.edu
Michelle Weibel,  University of Nevada, Las Vegas,  michelle.weibel@unlv.edu
Daphne Sewing,  University of Nevada,  daphne.sewing@univ.edu
Abstract: Discover Mojave Outdoor World (DMOW) is a hands-on outdoor recreation program for urban, economically disadvantaged youth. In Year One of the program, knowledge, attitude, and performance assessments were developed to document the effectiveness of program events over the duration of the program. Year One findings revealed that knowledge, attitudes, and performance increased substantially as a result of participating in the outdoor recreation events. The assessment plan was modified in Year Two by creating assessments for teachers and parents. Findings from Year Two's assessment plan again demonstrated the effectiveness of Discover Mojave Outdoor World in that participants' knowledge, attitudes, and performance increased over the course of program events. Additionally, results demonstrated that teachers and parents had very favorable attitudes towards the program.
Lessons Learned From a Mixed-methods Evaluation of an Online Environmental Education Program
Presenter(s):
Annelise Carleton-Hug,  Trillium Associates,  annelise@trilliumassociates.com
J William Hug,  Trillium Associates,  billhug@trilliumassociates.com
Abstract: In 2001, the interpretation staff of Yellowstone National Park launched a new interpretive program, “Windows into Wonderland” (WIW), providing online electronic “field trips” (eTrips) to teach web visitors about various aspects of Park ecology, geology, and cultural history. This presentation reports on the first formal evaluation of the WIW program. Multiple methods of qualitative and quantitative data collection were used, including an online survey sent to over 4,000 registered visitors to the WIW site, semi-structured interviews with classroom teachers who had used the field trips, and observations of middle school students using WIW eTrips. The presentation will provide an overview of the evaluation methods used as well as lessons learned from this mixed-method approach. Implications for other distance-delivered program evaluation designs will be discussed.

Session Title: Evaluating Policy and Advocacy Organizations Through Short Term Measures of Organizational Capacity
Panel Session 381 to be held in Carroll Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Astrid Hendricks,  The California Endowment,  ahendricks@calendow.org
Discussant(s):
Don Crary,  Annie E Casey Foundation,  dcrary@aecf.org
Astrid Hendricks,  The California Endowment,  ahendricks@calendow.org
Abstract: The policy and advocacy field has struggled with developing evaluation approaches, measures and tools that are relevant to its information needs, particularly regarding organizational capacity of policy and advocacy organizations, which ultimately leads to successful policy and advocacy work. Drawing from our recent work in evaluating policy and advocacy organizations that received general support grants from The California Endowment, we highlight the importance of organizational capacities for policy and advocacy organizations in four areas: leadership, management, adaptive and technical. We discuss how this core capacity framework can benefit both funders and advocates in assessing readiness factors of organizations. We also present the Advocacy Core Capacity Assessment Tool (ACCAT) as one mechanism for measuring organizational capacity of policy and advocacy organizations. Discussants will offer their reactions and insights about how this core capacity framework and assessment tool could fill the current knowledge gap about how to evaluate policy and advocacy organizations.
The Context for Evaluating Policy and Advocacy Organizations: Challenges and Limitations
Shao-Chee Sim,  TCC Group,  ssim@tccgrp.com
We begin by providing an overview of the changing political, economic, and social contexts of the work of policy and advocacy organizations. We then discuss the increasing trend in philanthropy towards making greater investments in policy and advocacy organizations. Along with this increased attention have come greater efforts to understand the effectiveness of policy and advocacy organizations. Drawing from recent work in the evaluation literature and our collective experience in this area, we discuss the unique challenges and limitations in evaluating the outcomes of policy and advocacy organizations. More importantly, we argue that the current missing link is that funders, advocates, and evaluators have overlooked efforts to strengthen the left side of the logic model: organizational readiness to undertake important policy and advocacy activities.
Proposing A Core Organizational Capacity Framework to Evaluate Short-term Outcomes of Policy and Advocacy Organizations
Pete York,  TCC Group,  pyork@tccgrp.com
Building upon its work evaluating the organizational capacity of nonprofit organizations, TCC brings an important element of understanding key organizational traits for conducting policy and advocacy activities. We present a core capacities framework that views effective nonprofit organizations as comprising four capacities (Leadership, Adaptive, Management, and Technical). We also include organizational culture as a component of the assessment, since it has a significant impact on each of the above core capacities. We also discuss the similarities and differences between TCC's organizational capacity framework and the advocacy assessment tool developed by the Alliance for Justice. Building on that framework, we developed a methodology for evaluating specific capacities critical to policy and advocacy organizations, which is intended to assist funders in assessing the readiness of potential grantee organizations and to assist advocates in understanding how to run an effective organization.
Advocacy Core Capacity Assessment Tool: One Mechanism to Measure Organizational Capacity of Policy and Advocacy Organizations
Jared Raynor,  TCC Group,  jraynor@tccgrp.com
Drawing from TCC's work in strengthening organizational capacity of nonprofit organizations, we present the Advocacy Core Capacity Assessment Tool (ACCAT) as a tool for measuring organizational capacity of policy and advocacy organizations. Specifically, in the area of leadership capacity, we examine advocacy board leadership, leadership persuasiveness, community credibility, external credibility, leadership strategic vision and leadership distribution. In the area of adaptive capacity, we examine strategic partnership, measuring policy/advocacy progress, strategic positioning, and funding flexibility. In the area of management capacity, we examine staff roles and management, management systems, staff coordination, and resource management. In the area of technical capacity, we examine legal understanding, general staffing level, policy issue and theory knowledge, stakeholder management skills, stakeholder analysis, media skills, knowledge generation skills, and information dissemination skills. We also share our experiences using this specific tool with major national and state policy and advocacy organizations.

Session Title: Making Health Evaluation More Culturally Competent Using Mixed Methods and Case Studies
Multipaper Session 382 to be held in Pratt Room, Section A on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Kathryn L Braun,  University of Hawaii,  kbraun@hawaii.edu
Use of the Case Study to Understand Program Processes
Presenter(s):
Louise Miller,  University of Missouri, Columbia,  lmiller@missouri.edu
Constance W Brooks,  University of Missouri, Columbia,  brookscw@missouri.edu
Abstract: The Missouri Department of Health & Senior Services (MODHSS) oversees federal and state funded contracts to provide public health services across the 114 counties in the state. While substantial amounts of data are reported from local health departments, physician offices, hospitals, and other health sites, these numerical data often do not adequately answer questions about program effectiveness. The MODHSS is currently using the case study method of evaluation to better understand key processes that determine whether or not contract deliverables and outcomes are met. In this session, we will describe two case studies used to evaluate the implementation of clinical guidelines for the care of adult patients with diabetes and a pilot school health program for at-risk students and families. Insights gained and programmatic decisions made at the state level based on these data will be discussed.
Learning From a Community-based Evaluation: The HealthConnect in our Community Evaluation Experience
Presenter(s):
Louise Palmer,  The Urban Institute,  lpalmer@ui.urban.org
Embry Howell,  The Urban Institute,  ehowell@ui.urban.org
Gloria Deckard,  Florida International University,  deckardg@fiu.edu
Carladenise Edwards,  The Bae Company,  edwardshc@earthlink.net
Anna Sommers,  The Urban Institute,  asommers@ui.urban.org
Lee Saunders,  University of Miami,  leesanders@miami.edu
Abstract: HealthConnect in Our Community (HCioC) aims to connect children and their families in Miami-Dade County, Florida with medical homes and health insurance coverage. Funded by The Children's Trust, the program is in its first of at least three years and is implemented through six diverse community-based organizations (CBOs). The Urban Institute and its local consultants were contracted to conduct a formative evaluation of HCioC to assess implementation in the first year and to identify potential improvements. A secondary goal was to examine the data capabilities of CBOs and recommend a feasible outcome data system for HCioC. The evaluation included comprehensive site visits to all CBOs; observation of health workers in diverse settings; in-depth interviews with key community stakeholders; and focus groups with HCioC clients and staff. Findings emphasize the importance of HCioC in improving health access, as well as several key ways in which the program and data collection can be improved.
The Minnesota Healthcare Disparities Task Force: A Case Study of the Use of Complexity Science Based Developmental Planning and Evaluation Methods
Presenter(s):
Meg Hargreaves,  Abt Associates Inc,  meg_hargreaves@abtassoc.com
Abstract: In 2004 the Governor of Minnesota created the Healthcare Disparities Task Force, calling on the industry-wide group to take action to ensure that culturally competent healthcare is provided to all Minnesotans. Hired as a consultant to the project, the author created and piloted new complex-adaptive planning and evaluation methods to support the development and implementation of the task force. This presentation will describe the rationale and purpose of the new methods, how they were used, and the early impacts of their use on the task force's adoption of CLAS (Culturally and Linguistically Appropriate Services) standards across the state and on the task force's involvement in the state's health care insurance policy debate. Use of developmental planning and evaluation methods during the start-up of the task force helped the task force have an immediate impact on state policy and led to the rapid adoption of culturally competent healthcare practices statewide.
Lessons Learned on Use and Integration of System-Level Data From the Evaluation of a Public Health Demonstration Project
Presenter(s):
Bernette Sherman,  Georgia State University,  bernette@gsu.edu
Amanda Phillips Martinez,  Georgia State University,  aphillipsmartinez@gsu.edu
Angela Snyder,  Georgia State University,  angiesnyder@gsu.edu
Dawud Ujaama,  Georgia State University,  alhdau@gsu.edu
Mei Zhou,  Georgia State University,  alhmzzx@langate.gsu.edu
Abstract: The Public Health Division of Georgia's Department of Human Resources (DHR) partnered with county public health offices to deliver the Integrated Family Support home visiting demonstration project. System-level evaluation challenges arose related to identifying and tracking controls. At the state level, there was a need to gain permission for and access to data from multiple agencies within DHR. At the local level, there was a need to use and integrate data systems that could identify control families and track participant and control data and outcomes across multiple years. This paper describes how an external evaluator worked with state and local public health agencies to design and implement a rigorous evaluation amid system-level challenges related to (1) identifying controls, (2) ensuring access to state and local multi-agency datasets, and (3) varying local capacity for increased data collection and service delivery.
Using Evaluation Techniques to Conduct a Community-specific Needs Assessment
Presenter(s):
Kristi Lewis,  James Madison University,  lewiskristi@gmail.com
Abstract: A community assessment was conducted using community-specific indicators to evaluate quality of life. Quality of life was evaluated based on eight categories: 1) education, 2) health, 3) mental health, 4) environment, 5) youth, 6) economy, 7) social well-being, and 8) community infrastructure. Evaluation methods used to conduct the assessment included an English-language assessment instrument, a Spanish-language assessment instrument, and existing data sources from both state and national agencies (i.e., state health department data). Of the 1,645 English-language assessment instruments mailed, 684 were returned (42% response rate; sampling error of 3.8%). Assessment results revealed that 83% of Hispanic women stated that they had obtained prenatal care during the first trimester, below the Healthy People 2010 goal of 90%. Alcohol consumption among adults increased from 30.2% in 2001 to 57% in 2006, and among seniors (65 and older) from 11.3% in 2001 to 31.7% in 2006.

Session Title: Evaluation to Promote Collective and Individual Learning: Applications in the Human Services
Multipaper Session 383 to be held in Pratt Room, Section B on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Lois Thiessen Love,  Uhlich Children's Advantage Network,  lois@thiessenlove.com
Discussant(s):
Ellen L Konrad,  Independent Consultant,  elkonrad@yahoo.com
Child Welfare Caseworker Pre-Service Training: Evaluating Transfer of Skills to the Job
Presenter(s):
Chris Hadjiharalambous,  University of Tennessee, Knoxville,  sissie@utk.edu
Chris Pelton,  University of Tennessee, Knoxville,  peltonc@sworps.utk.edu
Abstract: Traditionally, the scope of training evaluation efforts in the field of human services was limited to measuring participant satisfaction and assessing the knowledge and skills acquired. It is only in the last decade that we see agencies investing in assessing on-the-job use of newly acquired knowledge and skills and then using that information as feedback for improving curricula for newly hired workers, as well as for changing organizational practices that may hinder transfer of knowledge and skills to the job. The purpose of the present proposal is to share findings related to the transfer of knowledge and skills to the job for two cohorts of new child welfare workers. Evaluation findings relate to new workers' acceptance and use of key skills taught in pre-service training, as well as the degree to which they felt supported by supervisors and senior fellow workers to employ these skills in their daily work with children and families.
Challenges in Evaluating Adult Education Programs: How Theory Can Help Fill in the Gaps and Connect the Dots
Presenter(s):
Noelle Rivera,  University of California, Los Angeles,  novari320@hotmail.com
Abstract: Evaluating adult education programs has often presented problems with choosing and defining appropriate goals and outcomes. In general, the goal of the vast majority of adult education programs is to educate adults in some vital capacity. For instance, a goal of adult literacy programs is for adult students to become literate; similarly, transition-to-workplace programs intend for adults to successfully gain employment. Official program goals such as these may present challenges to the evaluator in that they may not be easily measurable or consistently defined. Moreover, program goals may not align with student goals or address the potential effects that stem from program participation, which in turn may result in negative or no-effect evaluation findings. This presentation proposes the use of adult learning and development theories to enhance the scope of adult learning program evaluations, examine the potential outcomes of such programs for adult students, and inform decision-making and program development.
Mainstreaming Training Evaluation at the New York City Administration for Children's Services: An Interpersonal Process
Presenter(s):
Henry Ilian,  New York City Administration for Children's Services,  henry.ilian@dfa.state.ny.us
Abstract: In the human services, use of training evaluation results is not automatic. Unless the evaluation is undertaken with a specific plan to apply the results, they are used only when the evaluator can find managers interested in using them. This paper presents a case history that focuses on the interpersonal dimension of ensuring that training evaluation is used. It traces a progression in which the evaluators, the Assessment & Evaluation Department of the training Academy of a major metropolitan public child welfare agency, sought to mainstream the evaluation of trainees' performance and make the results a component in organizational decision-making. As a result of this process, evaluation results now influence both what occurs in the classroom and the activities of training supervisors at trainees' initial field assignments. At each stage, results depended on the specific contacts that were made and the insights and abilities of participants in the process.
A Process Evaluation of a Community Organizing Agency
Presenter(s):
Ayana Perkins,  Georgia State University,  ayanaperkins@msn.com
Mary Wilson,  East Point Community Action Team,  mwilson@ep-cat.org
Abstract: The utility of community organizing agencies is exemplified in the civil rights and environmental justice movements, where change was initiated by equipping vulnerable populations with the skills and resources to improve their conditions. The biggest strength of community organizing agencies is their ability to promote self-sustaining practices within a community. These self-sustaining practices are strengthened through the active support of community linkages. The challenge facing these organizations is that their unconventional role in assisting communities easily becomes confusing to the general public. Using a case study of a community organizing agency in East Point, Georgia, a process evaluation examines critical strategies for achieving the standard outcomes for this type of agency: community ownership, community leadership, and social capital development.

Roundtable: Feminist Evaluation and Accreditation Efforts: What Standards?
Roundtable Presentation 384 to be held in Douglas Boardroom on Thursday, November 8, 1:55 PM to 3:25 PM
Presenter(s):
Denise Seigart,  Mansfield University,  dseigart@mansfield.edu
Abstract: This paper will discuss the implications of attempting to implement feminist evaluation principles while conducting ongoing accreditation monitoring for organizations such as NCATE (the National Council for Accreditation of Teacher Education), the NLN (National League for Nursing), or Middle States. While evaluators are often involved in the preparations and ongoing monitoring required for various accreditation reviews, implementing the philosophies and methods associated with feminist evaluation or other participatory approaches can be challenging. This paper discusses my experiences and reflections with regard to my involvement in accreditation preparations for reviews by the NLN, NCATE, the Pennsylvania Department of Education, and Middle States, and provides suggestions for future feminist evaluation efforts.

Session Title: Technological Tools That Build Evaluation Capacity: The Power of Blogs, Clickers and Web-based Customized Reports
Multipaper Session 385 to be held in Hopkins Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Paul Longo,  Touro Infirmary,  longop@touro.com
Evaluating Online Community: Finding Connections in an Informal Blog-based Network
Presenter(s):
Vanessa Dennen,  Florida State University,  vdennen@fsu.edu
Abstract: Online communities have become increasingly prevalent with each passing year. As organizations have come to develop and rely on web-based interaction and communities to support activities such as communication, learning, knowledge management, and marketing, the need to evaluate these communities has arisen. Evaluating the dynamic relationships within these communities – their strength, their meaning, and the relative power among participants – can prove challenging. This session will provide a review of methods that have been used to map online communities and demonstrate how community network mapping was done in an evaluation of a loosely-defined blog-based community of practice.
Assessing Intuitive Responding as a Function of a Technological Classroom Initiative: Attributes and Values of Computer-assisted Data Collection
Presenter(s):
Sheryl Hodge,  Kansas State University,  shodge@ksu.edu
Iris M Totten,  Kansas State University,  itotten@ksu.edu
Christopher L Vowels,  Kansas State University,  cvowels@ksu.edu
Abstract: While much of the recent evaluation data collection literature encompasses Web-based surveying, there has been little focus on the attributes of other computer-assisted data collection strategies. For the university instructor, the advantages of immediate student feedback are substantial; moreover, for evaluators, computer-assisted data collection techniques produce a plethora of valuable information related to important aspects of data collection protocols. Specifically, such indices as time required to answer cognitive questions as well as frequency of changes in response may provide key evaluation protocol information regarding age-old assumptions of test-taking strategies and their relation to student outcomes. These indices, characterized collectively as indicators of intuitive decision-making, stand to inform evaluators about the value of this data collection tool for evaluation practice. As such, this study will examine individual differences between undergraduates' computer-assisted classroom response characteristics and their accompanying perceptions associated with using this tool within large university classrooms.

Session Title: Learning From Each “Other”: Should The Cultural Characteristics of the Evaluator Match the Cultural Characteristics of the Population of Interest?
Debate Session 386 to be held in Peale Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Susan Connors,  University of Colorado, Denver,  susan.connors@cudenver.edu
Presenter(s):
Gregory Diggs,  University of Colorado, Denver,  shupediggs@netzero.com
Wendy Dubow,  National Research Center Inc,  wendy.dubow@n-r-c.com
Abstract: To infuse concepts of deliberative democracy into program evaluation, evaluators should include stakeholders from different cultural backgrounds in evaluation activities. But how do we move from theories of inclusion to actually recruiting and engaging this diversity? Referencing experience in conducting focus groups with diverse communities, two evaluators will debate the need for a match between the focus group facilitator and participants.  Does the inclusion of diverse stakeholders in program evaluation provide adequate voice and representation for deliberative democracy?  Do cross-cultural matches inhibit the voice and participation of stakeholders who represent diversity? Dr. DuBow argues that a demographic and/or cultural match is very important in order to demonstrate interest, respect and cultural responsiveness to the groups of interest. Because there are few trained facilitators of color, Dr. Diggs argues that a cultural match is not always practical. Technical skill should not be traded in the service of cultural responsiveness.

Session Title: Learning Through Practice: Developing Evaluation Knowledge Across Settings
Multipaper Session 387 to be held in Adams Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Neva Nahan,  Wayne State University,  n.nahan@wayne.edu
How do People Learn to Evaluate? Lessons drawn from work analysis
Presenter(s):
Claire Tourmen,  Ecole Nationale d'Enseignement Supérieur Agronomique de Dijon,  klertourmen@yahoo.fr
Abstract: How do people learn to evaluate? We seek to bring new answers to this old question by looking at real evaluation activities. We have studied the way evaluation is practiced and learned by beginners, and the way it is practiced and has been learned by experts. The use of models and methods drawn from work psychology highlights some specificities of evaluation practices and learning processes. It shows the importance of specific know-how that goes beyond the technical application of procedures. The best way to help beginners is therefore not to overemphasize the step-by-step progression of a method, but to have them practice methods in varied situations and toward varied goals.
Experiential Lessons From a Five Year Program Evaluation Partnership
Presenter(s):
M Brooke Robertshaw,  Utah State University,  robertshaw@cc.usu.edu
Joanne Bentley,  Utah State University,  kiwi@cc.usu.edu
Heather Leary,  Utah State University,  heatherleary@gmail.com
Joel Gardner,  Utah State University,  jgardner@cc.usu.edu
Abstract: Evaluation is a necessary core competency for instructional designers and instructional technologists to develop. A partnership was developed between the Utah Library Division and the Utah State University Instructional Technology Department to evaluate the impact of federal monies for technology throughout the state of Utah. These evaluations took place in 2001 and 2006. Many expected and unexpected experiential lessons were learned, including why service learning is an important tool when training instructional designers and instructional technologists.
Developing Understanding: A Novice Evaluator and an Internal, Participatory and Collaborative Evaluation
Presenter(s):
Michelle Searle,  Queen's University,  michellesearle@yahoo.com
Abstract: Cousins and Earl (1992) indicated that research about the “unintended effects” of participatory evaluations is needed (p. 408). Although they seem to refer primarily to clients, it is interesting to consider the effects of an internal, participatory evaluation on novice evaluators and their professional growth. This presentation examines a collaborative and participatory evaluation project that provided fertile ground for a novice evaluator to understand evaluative inquiry by engaging in all aspects of the evaluation process. Empirical data about the evaluation process can help in developing understanding of, and planning for, the growth of novice researchers. Clandinin and Connelly (2000) state that narrative research is an immersion into the stories of others. Exploring an evaluative context using narrative strategies in data collection, analysis, and reporting allows this research to reveal the dynamics of a collaborative and participatory evaluation as a learning tool for the researcher, the evaluators involved, and aspiring evaluation researchers.
What do Stakeholders Learn About Program Evaluation When Their Programs are Being Evaluated?
Presenter(s):
Jill Lohmeier,  University of Massachusetts, Lowell,  jill_lohmeier@uml.edu
Steven Lee,  University of Kansas,  swlee@ku.edu
Abstract: A qualitative analysis of knowledge and beliefs about program evaluation was conducted through pre- and post-interviews with key stakeholders as part of a five-year, objectives-based Safe Schools/Healthy Students evaluation. The purpose of the interviews was twofold: 1) to assess how stakeholders' views and knowledge about program evaluation changed over the course of the evaluation; and 2) to identify important aspects of knowledge and beliefs about program evaluation that can be used to develop an instrument to measure these constructs. The findings will show how knowledge and beliefs changed over the course of the evaluation. Additionally, we will consider how the results will be used to develop a tool for assessing how, and what, stakeholders learn about evaluation when their programs are being evaluated. Limitations of the study and plans for future research will be discussed.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Fostering Learning for All Stakeholders: Lessons From an Evaluation of the Historical Literacy Project
Roundtable Presentation 388 to be held in Jefferson Room on Thursday, November 8, 1:55 PM to 3:25 PM
Presenter(s):
Heidi Sweetman,  University of Delaware,  heidims@udel.edu
Ximena Uribe,  University of Delaware,  ximena@udel.edu
Abstract: Evaluation provides the unique opportunity for all stakeholders, not simply the evaluators, to become a part of the learning process that is research. Using the ongoing evaluation of the Historical Literacy Project as real-world grounding for the discussion, this roundtable session will focus on how the timing and way in which the evaluation team is involved in program development can greatly increase the learning all stakeholders take away from the evaluation process. Experiences from the evaluation of the Historical Literacy Project indicate that involving evaluators early in program development in a manner that fosters collaboration between program developers and evaluators across all aspects of program development can lead to maximized learning for all stakeholders involved.
Roundtable Rotation II: Cozy Up and Read With Us at the Book Café: An Evaluation of a Middle School Mentor/Mentee Literacy Program
Roundtable Presentation 388 to be held in Jefferson Room on Thursday, November 8, 1:55 PM to 3:25 PM
Presenter(s):
Connie Walker-Egea,  Western Michigan University,  cwalkerpr@yahoo.com
Nakia James,  Western Michigan University,  nakiasjames@sbcglobal.net
Abstract: With literacy being an issue that continues to plague our students, the Book Café Mentor/Mentee program evaluation emphasizes the need for programs that foster a community of readers. This formative evaluation was conducted to examine the impact of positive peer interaction on improving the critical reading skills of sixth-grade remedial reading students. Though the initial focus of the pilot program was to improve language arts class performance, this evaluation was designed to guide the decision-making process for determining future program expansion into additional core content classes. For this reason, in collaboration with the school staff and other stakeholders, we examined the objectives established for the program to determine whether they had been effectively achieved. After developing our evaluation questions, we utilized a cross-sectional survey design to gain pertinent information from both the student participants and the teachers directly involved with these students.

Session Title: Outbreaks: How Do You Evaluate Responses to the Unexpected?
Panel Session 389 to be held in Washington Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Thomas Chapel,  Centers for Disease Control and Prevention,  tchapel@cdc.gov
Abstract: Outbreaks are, by definition, sudden, unexpected increases in the prevalence of disease. When responding to outbreaks, public health programs cannot follow standard program protocols; an intensive intervention must be rapidly deployed. Interventions must be flexible and tailored to the specific context of the outbreak, making each response unique. Outbreak response is often labor and resource intensive. Public health programs have a responsibility to evaluate outbreak responses and ensure that they are conducted efficiently and effectively, but the unique nature of these responses makes this difficult. This panel examines different approaches to meeting the challenges of evaluating these unusual interventions.
Using Evaluation Data for Estimating Workforce Capacity Requirements for Emergency Response
Joan Cioffi,  Centers for Disease Control and Prevention,  jcioffi@cdc.gov
Christine Rosheim,  Centers for Disease Control and Prevention,  crosheim@cdc.gov
Can an evaluation of historical emergency response efforts be used to predict the size and skills needed for a scalable public health response? This discussion describes the use of historical data and exercise feedback to design personnel teams to function as the building blocks of a public health response. A model was developed and tested on the basis of agency response to Hurricane Katrina in 2005, severe acute respiratory syndrome (SARS) in 2003, and pandemic influenza planning in 2006. The model for estimating workforce requirements was tested on three hypothetical scenarios - anthrax, smallpox, and viral hemorrhagic fever. The tool is being pilot-tested during upcoming emergency preparedness exercises but has already been useful in identifying potential gaps in the numbers and skill sets needed for operations and logistics, laboratory testing, and public information functions and roles. Although the model was specific to the Centers for Disease Control and Prevention, it can be tailored to other agencies or needs. The project demonstrates the usefulness of evaluation data for model development.
Evaluating the Division of Tuberculosis Elimination Outbreak Response: Is Value Added?
Maureen Wilce,  Centers for Disease Control and Prevention,  mwilce@cdc.gov
Brandii Mayes,  Saint Louis City Department of Health Communicable Disease,  mayesb@stlouiscity.com
Maryam Haddad,  Centers for Disease Control and Prevention,  mhaddad@cdc.gov
John Oeltmann,  Centers for Disease Control and Prevention,  joeltmann@cdc.gov
Kashef Ijaz,  Centers for Disease Control and Prevention,  kijaz@cdc.gov
Although TB is curable and preventable, 14,097 persons were reported with TB in the United States during 2005. Each year, dozens of TB outbreaks occur in diverse communities with differing public health contexts. Complex outbreaks often require the intervention of DTBE's Outbreak Investigations Team (OIT), deployed at the request of state health departments. During an outbreak response, the OIT performs numerous epidemiology-driven investigative activities and provides recommendations for program improvement to prevent future outbreaks. Although substantial resources are invested in these activities, evaluating their programmatic impact is difficult due to the diversity of situations and tailored responses. In 2006, DTBE conducted a retrospective evaluation, using a utilization approach, to examine the impact and value of outbreak response efforts. This presentation will discuss the benefits and limitations associated with selected methods for data collection and analysis. It will also address challenges associated with evaluating unique and tailored activities in the aggregate.
Assessment of a Customizable Tuberculosis Outbreak Response Plan
Laura Freimanis Hance,  Westat,  laurafreimanis@westat.com
Karen R Steingart,  Francis J Curry National Tuberculosis Center,  karenst@u.washington.edu
Christine Hahn,  Idaho Department of Health and Welfare,  hahnc@dhw.idaho.gov
Lisa Pascopella,  Francis J Curry National Tuberculosis Center,  lpascopella@nationaltbcenter.edu
Charles Nolan,  Public Health - Seattle and King County,  charles.nolan@metrokc.gov
A customizable TB outbreak response plan (ORP) was developed for a low-incidence region. We assessed the usefulness of the ORP by comparing local outbreak response activities to ORP guidance. All key stakeholders involved in the outbreak response were interviewed using a semi-structured questionnaire. Common themes were used to assess the validity of, and identify gaps in, the ORP. A subset of participants provided structured feedback on the ORP. We interviewed 11 public health and six community stakeholders. The ORP mirrored response activities. Stakeholders considered the ORP sections on outbreak definition and communication most helpful (the ORP introduced criteria for defining, and guidelines for communicating about, a TB outbreak). We added an appendix with ten steps to take for a suspected TB outbreak and an evaluation checklist. Interactive assessment of the ORP revealed the importance of a standard TB outbreak definition, response steps, and evaluation framework. This assessment by end users informed changes that improved the usefulness of the programmatic tool.
Challenges of Evaluating Rapid Responses to Syphilis Outbreaks
Betty Apt,  Centers for Disease Control and Prevention,  bapt@cdc.gov
Syphilis is a sexually transmitted disease that can be cured with antibiotics, and transmission can be halted during a brief incubation period. Responding rapidly to outbreaks of syphilis can significantly reduce the spread of the disease; however, due to the nature of outbreaks, it can be difficult to determine the effectiveness of the interventions that are used. It is critical that health departments quickly identify, contact, and treat sex partners, particularly pregnant women, to prevent acquisition or transmission of syphilis. When an outbreak occurs, CDC's Division of STD Prevention may send a Rapid Response Team to assist the project area with control activities. However, evaluating the effectiveness of the Response Team has been difficult due to the uniqueness of every outbreak and the focus on providing services rather than collecting data. This presentation will describe how a retrospective assessment led to proactive changes that will allow better evaluation of each response.

Session Title: Learning From Each Other: Cross-cutting Issues and Opportunities
Multipaper Session 390 to be held in D'Alesandro Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Elvis Fraser,  Evaluation and Knowledge Services Group,  efraser@qedgroupllc.com
Aligning and Employing Public Use Data for School-level Analyses: A Quasi-experimental Study of Communities In Schools (CIS)
Presenter(s):
Sarah Decker,  Caliber an ICF International Company,  sdecker@icfi.com
Kelle Basta,  Caliber an ICF International Company,  kbasta@icfcaliber.com
Jill Berger,  Caliber an ICF International Company,  jberger@icfi.com
Susan Siegel,  Communities In Schools,  siegels@cisnet.org
Abstract: Communities In Schools, Inc. (CIS) is a nationwide initiative to connect community resources with schools to help at-risk students successfully learn, stay in school, and prepare for life. One of the key components of our three-year, comprehensive evaluation of CIS is a quasi-experimental study of 300 schools implementing CIS and 300 matched comparison schools. This study was designed to ascertain the impact of CIS on a number of school-level outcomes, including dropout, attendance, achievement, behavioral outcomes, suspension, promotion, SAT scores, graduation, and post-graduation placement. In this presentation, we will describe a rigorous methodology that utilizes public use data to assess the effectiveness of a diverse network of CIS programs. We will address issues concerning the alignment and standardization of public use files across seven states over multiple school years. The presentation will also include preliminary findings from this study.
Think Globally, Act Accountably: An Exploration of Cross-cutting Issues in Domestic and International Nonprofit Evaluation
Presenter(s):
Monica Oliver,  Georgia State University,  bla1@student.gsu.edu
Shena Ashley,  Georgia State University,  padsra@langate.gsu.edu
Abstract: Globalization, coupled with media attention to the 2004 tsunami and other events and situations highlighting areas of need, has heightened the efforts of corporations, nonprofits, and foundations to respond to poverty issues in urban America and around the world. As more and more foundations and corporations choose to fund both domestic and overseas assistance projects, it becomes essential to evaluate such programs effectively so that they remain accountable both to their beneficiaries and to the corporations, nonprofits, and foundations that fund them. This raises the question: given their disparate forms of governance, what are the fundamental similarities and differences in evaluating an international versus a domestic social assistance effort? What can international and domestic funders and implementers take from one another's practice? This paper looks at existing reporting and governance mechanisms to explore the distinct and cross-cutting elements of domestic and international social program evaluation.
Effective Communication Strategies for Large Cross-site Evaluations: Lessons Learned From the National Evaluation of Communities in Schools
Presenter(s):
Kellie Kim,  Caliber an ICF International Company,  kkim@icfi.com
Melissa Busch,  Caliber an ICF International Company,  mbusch@icfi.com
Susan Siegel,  Communities in Schools,  siegels@cisnet.org
Abstract: Effective communication strategies are imperative in the planning and execution of rigorous evaluations, especially those of national organizations with complex organizational structures. Communities In Schools, a nationwide initiative to help at-risk students successfully learn, stay in school, and prepare for life, is in the midst of a three-year national evaluation that focuses on all levels of the Network: National, State, Affiliate, and Site. This research requires evaluation staff to work with, and be accountable to, many different entities and stakeholders. Given the complexity of this evaluation, we have been challenged to develop innovative strategies to engage the “front lines” of the Network, obtain “buy-in” for evaluation activities, manage expectations, and effectively communicate evaluation findings. We expect that sharing the lessons learned and creative strategies from this study for maintaining ongoing communication with diverse entities throughout all phases of an evaluation will benefit other evaluators.

Session Title: Innovative Approaches to Impact Assessments
Multipaper Session 391 to be held in Calhoun Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Keith Zvoch,  University of Nevada, Las Vegas,  zvochk@unlv.nevada.edu
Treatment Fidelity in Multi-site Evaluation: A Multi-level Longitudinal Examination of Provider Adherence Status and Change
Presenter(s):
Keith Zvoch,  University of Nevada, Las Vegas,  zvochk@unlv.nevada.edu
Lawrence Letourneau,  University of Nevada, Las Vegas,  letourn@unlv.nevada.edu
Abstract: Program implementation data were obtained from repeated observations of teachers delivering one of two early literacy programs to economically disadvantaged students in a large school district in the southwestern United States. These data were analyzed with multilevel modeling techniques to estimate the status of, and change in, provider adherence to program protocol across the intervention period. Results indicated that fidelity to program protocol varied within and between treatment sites and across adherence outcomes. An exploratory examination of selected provider and site characteristics indicated that the professional preparation of providers and the particular treatment intervention adopted were associated with fidelity outcomes. These results provide some insight into the range of factors associated with protocol adherence and highlight the challenge of achieving and maintaining fidelity to a treatment intervention that is delivered by multiple providers over multiple treatment sites. Consequences for evaluation theory, design, and practice are discussed.
Multiple Random Sampling When Treatment Units are Matched to Numerous Controls Using Propensity Scores
Presenter(s):
Shu Liang,  Oregon Department of Corrections,  shu.liang@doc.state.or.us
Paul Bellatty,  Oregon Department of Corrections,  paul.t.bellatty@doc.state.or.us
Abstract: The Oregon Department of Corrections offers various programs to minimize the likelihood of future criminal activity. These programs can be expensive, and resources are limited. Identifying the most effective programs and eliminating the ineffective ones is essential for more efficient use of limited resources. Randomly assigning inmates to treatment or control groups provides an effective means of quantifying program effectiveness, but pragmatic and ethical considerations often prohibit its application. Non-random designs that do not account for preexisting group differences can inappropriately attribute those differences to treatment effectiveness. Propensity score matching is a useful method for establishing group comparability in non-random designs. A highly refined matching process can eliminate too many individuals from the treatment group; a less refined matching process may retain all treatment individuals but create too many matches for some of them. Multiple random sampling of one-to-one matches enables researchers to retain more treatment individuals while providing less biased estimates of program effectiveness.
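To make the sampling step concrete, here is a minimal sketch (hypothetical data and function names, not the Department's actual procedure) of multiple random sampling over one-to-one matches: each treated individual keeps all of its acceptable propensity-matched controls, and the analysis repeatedly draws one control at random per treated case, computes the treatment-control difference, and averages the estimates across draws.

import random
from statistics import mean

def sample_effect(matches, outcome, rng):
    """One draw: pick a single matched control per treated unit and return the mean difference."""
    diffs = []
    for treated_id, control_ids in matches.items():
        control_id = rng.choice(control_ids)  # random one-to-one match for this draw
        diffs.append(outcome[treated_id] - outcome[control_id])
    return mean(diffs)

def multiple_random_sampling(matches, outcome, n_draws=1000, seed=42):
    """Average the estimated treatment effect over many random one-to-one matchings."""
    rng = random.Random(seed)
    return mean(sample_effect(matches, outcome, rng) for _ in range(n_draws))

# Hypothetical example: treated units T1 and T2, each with several acceptable controls;
# outcomes might be recidivism rates, so negative differences favor the treatment.
matches = {"T1": ["C1", "C2", "C3"], "T2": ["C4", "C5"]}
outcome = {"T1": 0.20, "T2": 0.35, "C1": 0.30, "C2": 0.25,
           "C3": 0.40, "C4": 0.45, "C5": 0.50}
print(multiple_random_sampling(matches, outcome))

Because every treated case is retained in every draw, averaging across draws keeps the full treatment group while avoiding the bias of weighting a single arbitrary control too heavily.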
A Quantitative Evaluation Utilizing the 'Ground Effect' Unit: Application to the Evaluation of Foreign Student Policy and Regional Cooperation Program
Presenter(s):
Yuriko Sato,  Tokyo Institute of Technology,  yusato@ryu.titech.ac.jp
Abstract: 'Ground effect' is the (weighted) average change in key indicators of the intervened group relative to a non-intervened control group. It is measured by dividing the difference between the mean of the key indicators of the intervened group (M) and that of the control group (M') by M', that is, (M/M') - 1 = (M - M')/M'. 'Ground effect' is treated as a unit and is given the unit name 'effect'. Impact is calculated by multiplying the 'ground effect' by the intervened population, on the assumption that the impact is the sum total of the change brought about by the intervention in the target population. Efficiency is calculated by dividing this impact by the total input expressed in a monetary unit. Two cases applying this method will be introduced: the evaluation of Japan's Foreign Student Policy and that of a Regional Reproductive Health Program in the Philippines.
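The arithmetic described above can be illustrated with a short sketch (hypothetical numbers, not figures from either case study): the ground effect is computed from the intervened and control group means, the impact by scaling the effect to the intervened population, and efficiency by dividing the impact by the total input cost.

def ground_effect(m_intervened, m_control):
    """(M/M') - 1 = (M - M')/M', expressed in 'effect' units."""
    return (m_intervened - m_control) / m_control

def impact(effect, intervened_population):
    """Impact = ground effect multiplied by the size of the intervened population."""
    return effect * intervened_population

def efficiency(total_impact, total_input_cost):
    """Efficiency = impact per monetary unit of input."""
    return total_impact / total_input_cost

# Hypothetical values: intervened-group mean 75, control-group mean 60,
# 10,000 people reached, total input of 500,000 monetary units.
effect = ground_effect(75.0, 60.0)        # 0.25 'effect'
total_impact = impact(effect, 10_000)     # 2,500
print(effect, total_impact, efficiency(total_impact, 500_000))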
Alternative Choices When a Comparison/Control Group is Desired but not Planned
Presenter(s):
Deborah Carran,  Johns Hopkins University,  dtcarran@jhu.edu
Stacey Dammann,  York College of Pennsylvania,  sdammann@ycp.edu
Abstract: Evaluations of programs/projects are often planned without the benefit of a comparison or control group, resulting in reports that lack internal validity. Alternative methodologies that should be considered include (a) reporting within-sample comparisons of participant characteristics and (b) comparing published results from similar programs/projects. This presentation will offer examples of both. Reporting on within-sample outcome differences requires establishing relevant characteristics that warrant comparison; characteristics compared include demographic factors (e.g., experience) or program/project participation level. Two examples of within-sample comparisons that have been used in published results will be presented. The second technique identifies published studies from similar programs/projects for comparison purposes; for this technique, it is critical to establish similarity between the comparison studies and the target program/project. Two examples will be presented for this demonstration. The use of such nationally based studies with weighted results has been informative for comparison purposes.

Session Title: Indigenizing Approaches to Evaluation in American Indian, First Nations, and Native Hawaiian Communities
Multipaper Session 392 to be held in McKeldon Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Chair(s):
Joan LaFrance,  Mekinak Consulting,  joanlafrance1@msn.com
Continuous Evaluation of the Use of Problem-based Learning to Engage Native American Students in Environmental Issues
Presenter(s):
MaryLynn Quartaroli,  Northern Arizona University,  marylynn.quartaroli@nau.edu
Abstract: There is a tremendous need for Native American students to pursue careers in science, mathematics, and technology. Tribal agencies desperately search for tribal members who are qualified for professional positions that are crucial to resolving community environmental problems. If these positions are to be filled, educators must find ways to develop enthusiastic, culturally and scientifically knowledgeable students. Can problem-based learning (PBL) serve to engage Native students in the study of environmental issues while simultaneously bridging mainstream and Native cultural differences? For four years, a university-based environmental education outreach program conducted a series of one-week, on-campus Summer Scholars sessions that used PBL with Native American middle and high school students, incorporating multiple formative evaluation strategies for program development and improvement. Continuous evaluation resulted in the successful use of PBL to investigate current and important tribal issues in ways that honored students' backgrounds while engaging them in authentic scientific inquiry.
The Necessity of Indigenizing Accountability and Assessment
Presenter(s):
Katherine Tibbetts,  Kamehameha Schools,  katibbet@ksbe.edu
Maenette Benham,  Michigan State University,  mbenham@msu.edu
Abstract: Like indigenous and minority communities worldwide, the North American indigenous community is striving to understand and articulate what it means to do evaluation grounded in traditional, culturally-specific values and ways of knowing. This paper presents the reflections of a group of Native American researchers and evaluators on the implications of traditional values for the practice of evaluation and assessment in culturally-based educational contexts. We argue that an indigenous system of education requires an indigenized framework for accountability and assessment. We offer a perspective on accountability that is always present, always personal, and that reflects the importance of the connections to people, place, spirituality, and time. We also share our views on indigenous approaches to assessment and the importance and power of rigor, respectful relationships, relevance, and reciprocity in assessment.

Session Title: Collaborative Evaluations: A Step-by-Step Model for the Evaluator
Skill-Building Workshop 393 to be held in Preston Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Liliana Rodriguez-Campos,  University of South Florida,  lrodriguez@coedu.usf.edu
Abstract: This highly interactive skill-building workshop is for those evaluators who want to engage and succeed in collaborative evaluations. In clear and simple language, the presenter outlines key concepts and effective tools/methods to help master the mechanics of collaboration in the evaluation environment. Specifically, the presenter is going to blend theoretical grounding with the application of the Model for Collaborative Evaluations (MCE) to real-life evaluations, with a special emphasis on those factors that facilitate and inhibit stakeholders' participation. The presenter shares her experience and insights regarding this subject in a precise, easy to understand fashion, so that participants can use the information learned from this workshop immediately.

Session Title: Evaluation Managers and Supervisors TIG Business Meeting and Presentation: Managing Evaluation: Towards a Text for Practitioners
Business Meeting Session 394 to be held in Schaefer Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Evaluation Managers and Supervisors TIG
TIG Leader(s):
Robert Vito,  Office of Inspector General,  robert.vito@oig.hhs.gov
Sue Hewitt,  Health District of Northern Larimer County,  shewitt@healthdistrict.org
Ann Maxwell,  United States Department of Health and Human Services,  ann.maxwell@oig.hhs.gov
Presenter(s):
Donald Compton,  Centers for Disease Control and Prevention,  dcompton@cdc.gov
Michael Baizerman,  University of Minnesota,  mbaizerm@che.umn.edu
Abstract: Building on our text, The Art, Craft, and Science of Evaluation Capacity Building, we are co-editing another volume on managing evaluation. It will use three case studies in public health, education, and foundation work to ground analyses of evaluation management for accountability, program improvement, and evaluation capacity building. We begin by briefly contrasting social research with program evaluation, move to briefly reviewing the literature on managing social research, and point out that there is no in-print text and only a few articles on managing program evaluation. After suggesting why this state of affairs exists, we discuss our current understanding of managing the three evaluations, including draft principles and practices. In this, we are in part enriching our New Directions for Evaluation issue on evaluation capacity building, as well as framing inquiry about the management of evaluation and offering suggestions for practice.

Session Title: Evaluation, Learning, and Training in Business Industry Settings
Multipaper Session 395 to be held in Calvert Ballroom Salon B on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Business and Industry TIG
Chair(s):
Eric Graig,  Usable Knowledge LLC,  egraig@usablellc.net
Developing Evaluation Tool for E-Learning
Presenter(s):
Ga-jin In,  Ewha Womans University,  gahjin@gmail.com
Abstract: As human capital becomes more important in creating a firm's value, e-learning is a core method for increasing a firm's competitiveness in a rapidly changing environment. Many firms have converted employee education to digital delivery and increased their investments in e-learning. This study develops a KAI (Key Activity Indicator)-oriented 'Evaluation Tool for e-Learning' as a guideline for continuously checking a firm's e-learning process. The evaluation framework was developed in six steps: five key activity dimensions and their sub-activities (factors) were identified; conceptual and operational definitions of the factors were presented; measurement items for each factor were created and verified using the Delphi method; and the 'Evaluation Tool for e-Learning' was finalized, comprising 5 key activity dimensions, 13 sub-activities (factors), and 39 measurement items. The tool shows managers the present condition of their e-learning operations and enables them to monitor every sub-activity of e-learning and provide immediate feedback.
Making e-Learning More Effective Through Evaluation
Presenter(s):
Carl Hanssen,  Hanssen Consulting LLC,  carlh@hanssenconsulting.com
John Mattox,  KPMG,  jmattox@kpmg.com
Nicole Green,  KPMG,  ncsantevari@kpmg.com
Heather Maitre,  KPMG,  hmaitre@kpmg.com
Abstract: KPMG trains more than 20,000 US employees annually, delivering more than 1 million hours of training. E-Learning comprises 25% of all credits earned. An evaluation was conducted to determine whether the three e-learning modalities (live facilitated events, recorded events, and web-based training [WBT]) were differentially effective. Analysis consisted of comparisons of satisfaction ratings and test scores for individual courses across modality and years. Three findings were useful for improving future e-learning efforts. First, efforts to train instructors how to use the e-learning platform yielded substantial improvements in satisfaction ratings. Second, WBT and recorded events were equally effective and received better satisfaction ratings than live events. Third, WBT courses were completed in half the time of other modalities. Course developers who opt for interactive WBT over other asynchronous delivery methods will see no drop in effectiveness of the new courses, with the added benefit that WBT may be more cost effective.
Learning for Results: How Evaluation Can Help Corporations in Their Quest for Learning and Business Results
Presenter(s):
Vanessa Moss-Summers,  Xerox Corporation,  vanessa.moss-summers@xerox.com
Abstract: The purpose of this presentation is to share the Learning for Results Evaluation Model. This model was created to address the gaps in existing corporate evaluation models and frameworks. The focus of the model is to help determine the extent that a human performance intervention has an impact on business results.

Session Title: Putting Your Program Logic Model to Use
Panel Session 396 to be held in Calvert Ballroom Salon C on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the AEA Conference Committee
Chair(s):
Susan Ladd,  Centers for Disease Control and Prevention,  sladd@cdc.gov
Abstract: How many times have you, as the program evaluator, encouraged programs to develop a logic model as part of the program planning or evaluation planning? How many times have they followed your advice and actually used the model? This panel will present examples from three CDC funded state health department programs that not only developed program logic models but used their logic model to guide development of a program evaluation plan. The three states are diverse in their evaluation capacity, program focus and planning framework. Each presentation will describe the evaluation approach used, the role the program logic model played in developing the evaluation, and lessons learned from the process.
Using Logic Models for Program Evaluation
Jan Jernigan,  Centers for Disease Control and Prevention,  jjernigan1@cdc.gov
Susan Ladd,  Centers for Disease Control and Prevention,  sladd@cdc.gov
Michael Schooley,  Centers for Disease Control and Prevention,  mschooley@cdc.gov
The Centers for Disease Control and Prevention (CDC) provides funding to state health departments to implement disease prevention and control programs. CDC Divisions encourage their grantees to develop logic models as tools for program planning and evaluation. This introductory presentation will briefly talk about the 'what' and 'why' of logic models as well as describe different types of logic models and when to use each. Next, the ways in which logic models can be used for program evaluation will be discussed, providing a foundation for the following presentations.
Logic Models as a Platform for Program Evaluation: The Washington State Experience
Marilyn Sitaker,  Washington State Department of Health,  marilyn.sitaker@doh.wa.gov
Jan Jernigan,  Centers for Disease Control and Prevention,  jjernigan1@cdc.gov
Susan Ladd,  Centers for Disease Control and Prevention,  sladd@cdc.gov
The CDC Division for Heart Disease and Stroke Prevention (DHDSP) funds state health departments to implement interventions that address heart disease and stroke prevention. All funded programs are encouraged to develop logic models as a way to describe their program and to assist in both planning and evaluation. Logic models provide an important foundation for program evaluation by generating evaluation questions that most appropriately assess program processes and outcomes, and guiding measurement decisions. This presentation will describe how the Washington State Heart Disease and Stroke Prevention program used their logic model to generate an overall program evaluation plan, including the identification of evaluation questions and development of measures to effectively track progress. The use of evaluation results will be described as well as steps state programs can take to utilize logic models in program evaluation.
Logic Models as an Evaluation Planning Tool: The Massachusetts Face Arm Speech Time (F.A.S.T.) Stroke Awareness Campaign
Hilary Wall,  Massachusetts Department of Public Health,  hilary.wall@state.ma.us
The Massachusetts Department of Public Health receives funding from the CDC to implement the Heart Disease and Stroke Prevention and Control Program. One of the many interventions supported by the Program is the stroke awareness campaign entitled F.A.S.T. The Massachusetts program began by collaboratively developing a logic model that described the resources, activities, and expected outcomes for the campaign. This presentation will describe logic model development and stakeholder engagement; how the logic model was used to guide evaluation planning, identify indicators for measurement, and inform decision making; and will highlight lessons learned from this process.
The Logic Model as a Mechanism for Evaluating New Jersey's Statewide Tobacco Control Program
Mary Hrywna,  University of Medicine and Dentistry of New Jersey,  hrywnama@umdnj.edu
Hila Feldman Berger,  University of Medicine and Dentistry of New Jersey,  feldmahi@umdnj.edu
Cristine Delnevo,  University of Medicine and Dentistry of New Jersey,  delnevo@umdnj.edu
Uta Vorbach,  New Jersey Department of Health and Senior Services,  uta.vorbach@doh.state.nj.us
This presentation discusses the logic model as a mechanism for evaluating the New Jersey Comprehensive Tobacco Control Program (CTCP). A brief description of NJ's evaluation process will be provided, along with a discussion of how this process was enhanced through use of the logic model. The logic model provides the conceptual framework for identifying and measuring the impact of activities on CTCP program goals (e.g., reducing initiation of tobacco use). The evaluation plan focuses on the activities, outputs, and outcomes outlined in the CTCP logic model to direct measurement activities. Using CDC's Key Outcome Indicators for Evaluating Comprehensive Tobacco Control Programs as a resource, we identified relevant indicators for NJ and applied evaluation data directly to the model. The presenter is with the Tobacco Surveillance & Evaluation Research Program at the UMDNJ-School of Public Health, the independent evaluator of the CTCP.

Session Title: Evaluation Specialists: How Those who Evaluate Cooperative Extension Services and Other Educational Organizations Define and Design Their Job
Panel Session 397 to be held in Calvert Ballroom Salon E on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Daniel McDonald,  University of Arizona,  mcdonald@ag.arizona.edu
Abstract: This panel of in-house evaluation specialists will discuss the roles and responsibilities of evaluators who work within educational organizations such as, but not exclusively, Cooperative Extension. According to a recent study entitled "An Exploratory Profile of Extension Evaluation Professionals," the job responsibilities of in-house evaluators differ based on organizational expectations as well as where the specialist is located within the organization. In Extension, many evaluators work system-wide and are housed in a separate program evaluation or administrative unit. A small but significant number, however, serve only a single program area and are housed with program-specific specialists. The panel will discuss the benefits and disadvantages of their own working arrangements and will invite the audience to discuss implications and recommendations.
A 4-H Evaluation Specialist: Building Evaluation Capacity From Within a Program Unit
Mary Arnold,  Oregon State University,  mary.arnold@oregonstate.edu
Cooperative Extension has unique and pressing evaluation needs, and these needs are met in many ways. One way is through the creation of an evaluation specialist position within the organization, and in at least one state, internal evaluator positions have been created within specific program areas. This presentation will outline the unique set of evaluation responsibilities covered in the presenter's job as an internal evaluator for the Oregon 4-H youth development program. Topics to be covered include position responsibilities and how the specialist works within the 4-H program and the larger Extension organization. The presenter will also share lessons learned and ideas for being a successful internal evaluator with multiple roles and responsibilities.
Evaluation/Program Development/Institutional Research: Evaluators Wear Many Hats
Heather Boyd,  Virginia Tech,  hboyd@vt.edu
My role as a program evaluator at Wisconsin Cooperative Extension positions me to work with program-related teams and individuals by providing technical assistance, guidance, and professional development in program development and evaluation. However, I also facilitate and lead program development and evaluation of organization-wide initiatives and participate in institutional research. My training in quantitative methods makes me a source of information on hypothesis testing and statistical analysis when faculty and staff have questions in this area. I also serve on cross-divisional teams across all four divisions of Wisconsin Extension.
An In-house Product Researcher: Determining the Efficacy of Educational Products and Services by Introducing Randomized Controlled Trials
James Demery,  McGraw-Hill,  james_demery@mcgraw-hill.com
Since the passage of NCLB in 2001 and the Education Sciences Act of 2002, publishers of educational products and services have noticeably increased their efforts to ensure their products are backed by research that meets the federal government's requirements for scientifically based research. To that end, SRA/McGraw-Hill, a division of the McGraw-Hill Companies, created a department that would be responsible for initiating randomized controlled efficacy studies of its major products and conducting quasi-experimental studies to support its supplemental line of products. As is expected with any new position, the director of product research has to cultivate an environment that will be receptive to what many see as a whole new way of thinking about research. The resources, particularly time and money, needed to conduct solid experimental research that can produce internally and externally valid results are not closely scrutinized by some who may view the whole process as overkill.
A State Evaluation Leader: Providing Evaluation Leadership to Cooperative Extension Statewide
Koralalage Jayaratne,  North Carolina State University,  jay_jayaratne@ncsu.edu
The State Evaluation Leader works closely with Program Leaders, District Extension Directors, and State Extension Specialists to facilitate the evaluation process within the organization. This position was created to streamline the evaluation process across the state. The State Evaluation Leader is a tenure-track faculty position with 70% extension and 30% teaching responsibility in the Department of Agricultural and Extension Education, an academic department within the College of Agriculture and Life Sciences. This position is responsible for providing evaluation leadership to county and state Extension faculty. The State Evaluation Leader works with Extension faculty to conduct needs assessments, develop faculty expertise in evaluation, and provide assistance in grant development. Balancing 70% extension with 30% teaching responsibility is a challenge; using the teaching responsibility to complement the Extension evaluation training responsibility is the best approach to meeting this challenge.
Guiding Team Evaluation: Building Capacity for Evaluation Within Program Area Teams
Allison Nichols,  West Virginia University,  ahnichols@mail.wvu.edu
The Extension system in West Virginia is organized around 15 program area teams that encompass four program units. The evaluation specialist, although housed in one of those units, provides evaluation and research training and technical assistance organization-wide. Working within teams allows the evaluation specialist not only to guide individual evaluation projects, but also to assist with the creation of program indicators that have become the basis of the Plan of Work reported to USDA. A position that began as a support to a limited number of grant-funded initiatives has grown into a position that supports the entire organization. There are obvious implications for restructuring and adding additional personnel.

Session Title: Weaving Collaborative Learning Principles into a Multi-dimensional Evaluation of an Early Learning Partnership
Panel Session 398 to be held in Fairmont Suite on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Allison Titcomb,  LeCroy & Milligan Associates Inc,  allison@lecroymilligan.com
Discussant(s):
LaVonne Douville,  United Way of Tucson,  ldouville@unitedwaytucson.org
Abstract: Early learning partnerships seek to improve the quality of early education for children. The United Way of Tucson and Southern Arizona's First Focus on Quality project illustrates how evaluation design and methods can be intentionally crafted to enhance the collaborative learning and relationships among partners as well as long-term planning and resource development. Session presenters will describe how evaluators and evaluation methods were integrated with each step of the process, from conceptualization of the quality rating system to sustainable early learning efforts, and with partners ranging from child care center staff through the Arizona governor's office. The purpose of this session is to increase awareness of the connection between evaluation and collaborative learning for novice evaluators and to expand the "tool box" of strategies for seasoned evaluators. Active discussion and reflection from session participants will be encouraged.
Improving Early Learning Quality: A Long Range Vision
LaVonne Douville,  United Way of Tucson,  ldouville@unitedwaytucson.org
LaVonne Douville will provide an overview of the history of the project, the vision and mission of the early learning efforts from United Way of Tucson and Southern Arizona, and will discuss the relationship between the project and the governor's office and other state agencies.
Collaborative Learning and Evaluation Design
Allison Titcomb,  LeCroy & Milligan Associates Inc,  allison@lecroymilligan.com
Allison Titcomb served as the lead evaluator and will describe the utilization-focused and learner-centered perspectives that were included in the evaluation design. She will also give examples of the different roles the evaluation team was asked to fill and short descriptions of the evaluation methods and tools used to serve the project.
Using Quality Ratings and Feedback to Assist Center Learning
Allyson Baehr,  LeCroy & Milligan Associates Inc,  allyson@lecroymilligan.com
Allyson Baehr served as an evaluator using the quality rating tools and provided written and verbal reports to more than 25 centers as part of the ELOA grant. She helped fine-tune the implementation of the ratings and was instrumental in providing feedback both to the centers and to the partnering agencies involved in the project.
Learning From Quality Rating Results and Other Partner Evaluations
Jen Kozik,  LeCroy & Milligan Associates Inc,  jen@lecroymilligan.com
Jen Kozik served as the evaluator for one of the partners and completed the statistical analysis of the pre- and post-quality ratings for the overall project. She will provide a perspective on the challenges encountered with data analysis and how interpretation of the results was communicated back to the individual partners to maximize program improvement decisions.
A Practitioner's View
Ellen Droegemeier,  Tucson Unified School District,  eleanor.droegemeier@tusd.k12.az.us
Eleanor Droegemeier has been involved with local efforts to improve quality child care since 1999. She is a Preschool Program Director for the Tucson Unified School District and is a Chair of the United Way of Tucson and Southern Arizona's First Focus on Kids Impact Council. She brings a concrete perspective to the results of the ELOA grant and will provide a practitioner's view on the value of the collaborative approach used in the grant and other United Way efforts.

Session Title: Using Mixed Methods to Evaluate the North Carolina Disadvantaged Student Supplement Fund (DSSF) on Academically Disadvantaged Students
Multipaper Session 399 to be held in Federal Hill Suite on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Gary T Henry,  University of North Carolina, Chapel Hill,  gthenry@email.unc.edu
Abstract: In 2004, to improve the education of academically at-risk students, the North Carolina Board of Education initiated a pilot program, the Disadvantaged Student Supplement Fund (DSSF). The evaluation team designed and implemented a mixed-methods approach to obtain a deeper understanding of the overarching goal of the DSSF. The evaluation estimates the effects on the learning and performance of students, especially disadvantaged students. Paper one presents a cross-cutting analysis of DSSF implementation in the 16 pilot districts, highlighting the process of and barriers to program implementation. Paper two estimates the impacts of DSSF on high school students' performance on end-of-course exams. Paper three compares process results and intermediate outcome measures by examining students' exposure to higher-quality teachers in the pilot districts and in the rest of the state. Paper four analyzes change in student outcomes over the two-year period using end-of-grade test scores.
Implementation of the North Carolina Disadvantaged Student Supplement Fund
Charles Thompson,  East Carolina University,  thompsonchar@ecu.edu
As the 2004-2005 school year began, the 16 Disadvantaged Student Supplement Fund (DSSF) pilot districts in North Carolina were notified that they would receive additional funds to improve the education of academically disadvantaged students in their schools. Qualitative data were gathered through extensive fieldwork in each of the 16 pilot districts, including interviews with superintendents and principals and teacher focus groups. A detailed analysis of the districts' financial records was also completed. In this paper we describe: 1) the specific educational problems that the districts sought to remedy using the DSSF; 2) the implementation of their plans; and 3) the way the districts actually expended their funds in 2004-2005.
Impacts of the North Carolina Disadvantaged Student Supplement Fund on High School Student Achievement
C Kevin Fortner,  Georgia State University,  dpockfx@langate.gsu.edu
This paper examines the impacts of DSSF monies on high school student achievement using a rigorous, quasi-experimental approach. The detailed nature of the dataset available for this analysis provides the ability to specify a complete education production function for student achievement which includes the effects of school, classroom, individual, and peer inputs with specification of DSSF resources at the classroom level. This paper uses a Hierarchical Linear Modeling approach to estimate impacts on student achievement across the state in three core subject areas using standardized End of Course tests (English I, Algebra I, and Biology I). Detailed student and teacher administrative records and classroom assignments allow an unprecedented level of detail to be integrated into models estimating the effects of DSSF expenditures.
Process Quality - Student's Exposure to Higher Teacher Quality
Dana Rickman,  Georgia State University,  drickman@gsu.edu
A number of studies have shown differences in the quality of teachers for schools with high concentrations of economically disadvantaged students and students of color, suggesting that these students are not being taught by their fair share of higher quality teachers, as defined by teachers' general academic ability, mastery of content, experience, and pedagogical skill (Peske and Haycock, 2006). This paper will assess the allocation of quality teachers. We will show the percentage of time academically disadvantaged students have access to teachers of high quality when compared to their proficient peers. We compare the exposure to high quality teachers within the 16 districts with the exposure in the remainder of the state. In addition, we compare the same types of access rates for students with economic disadvantages and their more advantaged peers as well as the rates for White, African American, and Hispanic students.
Disadvantaged Student Trend Data
Kelley Dean,  Georgia State University,  padkmdx@langate.gsu.edu
Currently the DSSF program concentrates on providing resources to students who are below the state's sufficiency criteria. Comparing 2005 and 2006 data, we will provide descriptive information about the percentage of non-proficient students in the 16 DSSF districts compared to the rest of the state. We will also present an analysis of the extent to which students move into proficiency or drop below proficiency from 2005 to 2006. Tracing the movement of students into and out of proficiency draws attention to the fact that nearly every year more students fall out of proficiency than achieve it, which suggests that resources to support students at risk of dropping below proficiency should be considered in improvement planning. This should have important implications for changing the focus of the program.

Session Title: LGBT TIG Business Meeting and Think Tank: Lesbian, Gay, Bisexual, Transgender and Intersex Issues and Queer Theory in Evaluation: Planning a Proposal to New Directions for Evaluation
Business Meeting Session 400 to be held in Royale Board Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
TIG Leader(s):
Denice Cassaro,  Cornell University,  dac11@cornell.edu
Les Burleson,  Syracuse University,  wlburles@syr.edu
Steve Fifield,  University of Delaware,  fifield@udel.edu
Kari Greene,  Oregon Public Health Division,  kari.greene@state.or.us
Presenter(s):
Denice Cassaro,  Cornell University,  dac11@cornell.edu
Steve Fifield,  University of Delaware,  fifield@udel.edu
Abstract: In this Think Tank participants will work together to outline a proposal for a theme issue of New Directions for Evaluation (NDE) on LGBTI issues and queer theory in evaluation. Planning for an NDE proposal is at an early stage. This session is an opportunity for evaluators interested in LGBTI/queer perspectives to contribute to the much needed sharing of emerging issues and concerns with the broader evaluation community. The session will include an overview of the nature of NDE theme issues; work in small groups to identify key issues, theories, methods, and studies that illustrate LGBTI/queer perspectives in evaluation; and a discussion of how to combine these insights in an NDE proposal. Participants are welcome whether they are interested in writing for the theme issue or want to join a discussion of current developments in LGBTI/queer perspectives in evaluation.

Session Title: Small Wins are Winsome: Aggregating Learning From Small Evaluations Into Systems Change
Multipaper Session 401 to be held in Royale Conference Foyer on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Paul Florin,  University of Rhode Island,  pflorin@mail.uri.edu
Discussant(s):
John Stevenson,  University of Rhode Island,  jsteve@uri.edu
Abstract: What consequences can small and focused evaluation studies produce within discrete elements of a state substance abuse prevention system? Do consequences from each evaluation study stand alone or can they together aggregate and amplify into broader systems change? Each paper in this multi-paper session describes a small and focused evaluation whose results were fed back to stakeholders in a state prevention system. Each paper describes how results produced initial "deviations" from normal operations at a particular ecological level (e.g. program, community or state level). The discussant, drawing on more than twenty years of evaluation practice within the state system under consideration, reflects on both the potential and limits of aggregating evaluations across projects over time to produce "tipping points" for systems change. The conclusions from the papers and the discussant's reflections can inform the practice (and perhaps the patience) of those who seek to use evaluation results to change systems.
Insights Into Implementation of Evidence-based Programs: Lessons Learned Through Focus Groups
Thomas Sawyer,  University of Rhode Island,  tsaw5413@postoffice.uri.edu
Evidence-based programs for substance abuse prevention present special opportunities and challenges. In our role as the statewide evaluation team for a three-year evidence-based demonstration project in Rhode Island, we held monthly meetings with staff liaisons from each of 8 local sites. Valuable learning took place at these meetings that led us to conduct three specific focus groups to gain further insight into start-up and implementation issues with respect to delivering such programs. This paper reports on these one-hour focus groups with agency staff and directors at the final liaison meeting of the second year of the project. The results from the focus groups were fed back to state decision makers who were continuing to promote local implementation of evidence-based programs and were disseminated to local agencies. The 'small win' in this paper was change in implementation practice at the program level.
Lessons Learned from the Evaluation of Environmental Strategies in Community Interventions
Jessica Nargiso,  University of Rhode Island,  jnargiso@mail.uri.edu
Historically, substance abuse prevention efforts have targeted individual factors. More recently, the importance of addressing the social and environmental conditions that facilitate substance use has been recognized. The use of environmental strategies, such as media campaigns, policy change, and enforcement enhancement, introduces new challenges for evaluating their effectiveness within communities. As part of the evaluation of the State Incentive Grant (SIG) in Rhode Island, a monthly interview is conducted with a representative from each SIG community to monitor actions taken and outcomes produced by environmental strategies. This paper will focus on what was learned from the evaluation, including methodological challenges and the importance of training and technical assistance. The 'small win' in this paper is how these lessons informed changes implemented at the state level, delivered through a training and technical assistance system serving fourteen funded community coalitions.
Lessons Learned from the Evaluation of Prevention Training
Crystelle Egan,  University of Rhode Island,  crystelleann@yahoo.com
Formal training is an important method of disseminating evidence-based practices to practitioners. However, research indicates that trainees often fail to transfer learned material to their jobs. In order to understand how trainees applied principles from a 21-hour course in the 5-step Getting Prevention Results (GPR) prevention planning and implementation framework, a precursor to CSAP's Strategic Prevention Framework, a qualitative evaluation was conducted with trainees one year post-training. Factors facilitating and impeding the application of GPR principles were explored. Key findings of the study will be addressed in this paper. Results from the study were reported to the developers of a newly-established statewide training and technical assistance system. The 'small win' in this paper was the incorporation of recommendations for enhancing training transfer into statewide prevention training.

Session Title: Lessons Learned Through Building Capacity in Collaborative Evaluation in the Field of Education
Demonstration Session 402 to be held in Hanover Suite B on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Ann Brackett,  Learning Innovations at WestEd,  abracke@wested.org
Nancy Hurley,  Learning Innovations at WestEd,  nhurley@wested.org
Sarah Guckenburg,  Learning Innovations at WestEd,  sgucken@wested.org
Abstract: WestEd will present outcomes, lessons learned, and impact resulting from facilitating and supporting clients in using collaborative evaluation in a variety of contexts. Clients have attended a Collaborative Evaluation Institute facilitated by the presenters and have used the data-gathering methods and strategies learned to evaluate programs within their own school or district. Teams from varied contexts, including special education and the use of technology in schools, have gained buy-in and built support for these undertakings by conducting information sessions for school boards, school and district staff, and community members. They have developed questionnaires, conducted interviews and focus groups, carried out classroom observations, analyzed data, and presented findings to stakeholders. Many report good success and a few surprises, but most importantly, they have stimulated the development of a culture of collaborative evaluation within their schools and districts. Our demonstration session will include specific examples of outcomes and impact statements from various team members, as well as time for questions and discussion with attendees.

Session Title: Measurement to Improve Precision and Validity of Evaluation Outcomes
Multipaper Session 403 to be held in Baltimore Theater on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Ya-Fen Chan,  Chestnut Health Systems,  ychan@chestnut.org
Discussant(s):
Barth Riley,  University of Illinois, Chicago,  bbriley@chestnut.org
Abstract: This panel presents a variety of applications of measurement, including classical test theory and the Rasch measurement model, a type of item response theory that provides linear, interval measures of psychological constructs. The end product will be measures that are more reliable, more valid, and more convenient to use.
Measurement Equivalence: Validity Implications for Program Evaluation
Susan Hutchinson,  University of Northern Colorado,  susan.hutchinson@unco.edu
Understanding differences between groups is a fundamental aim of most evaluation research. Such differences might be based on extant demographic characteristics, program participation status, or exposure to some type of intervention. However, between-group comparisons are only valid to the extent that the underlying constructs on which the groups are being compared demonstrate measurement equivalence. Without evidence of equivalent measurement, a researcher cannot unambiguously determine whether differences are truly due to the group characteristics of interest or are merely an artifact of differential measurement across groups. While measurement equivalence has received considerable attention across many disciplines, it has been largely ignored within the context of evaluation research. Therefore, the purpose of this presentation is to define measurement equivalence from the perspective of validity generalization, describe analytical methods for assessing measurement equivalence, and discuss implications for evaluation research.
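One common way to formalize measurement equivalence, offered here only as an illustrative sketch and not necessarily the specific method the presenter covers, is the multiple-group confirmatory factor analysis hierarchy. The measurement model for group g is

x_g = \tau_g + \Lambda_g \xi_g + \delta_g, \quad g = 1, \dots, G,

and equivalence is tested by imposing increasingly strict cross-group constraints: configural invariance (the same pattern of free and fixed loadings in every \Lambda_g), metric invariance (\Lambda_1 = \Lambda_2 = \cdots = \Lambda_G), and scalar invariance (additionally \tau_1 = \tau_2 = \cdots = \tau_G). Comparisons of group means on the latent construct \xi are defensible only when at least scalar invariance holds.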
Assessing Outcomes Across Time: Testing Measurement Assumptions
Ann Doucette,  George Washington University,  doucette@gwu.edu
There are several assumptions in longitudinal repeated-measures designs. We presume that the measures we use are invariant across time, tapping the same latent construct at each assessment, and that the measures are continuous. Some of the constructs assessed may be highly skewed; such would be the case in severe depression and suicidality, where the probability of positive responses would likely be low. This presentation builds on secondary analyses of longitudinal data from a behavioral healthcare outcomes study and illustrates the use and advantages of a multilevel Rasch model in which items are nested within persons and persons are grouped within therapists. In addition, the multilevel Rasch model allows us to calibrate item change on a true interval basis, an assumption of Likert-type scaling that is seldom realized. Results indicate that a change score of 10 points may mean markedly different things depending on where on the construct continuum the change occurs.
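For readers unfamiliar with the model family, a minimal sketch (not necessarily the study's exact parameterization): the dichotomous Rasch model specifies

P(X_{pi} = 1 \mid \theta_p, \delta_i) = \frac{\exp(\theta_p - \delta_i)}{1 + \exp(\theta_p - \delta_i)},

where \theta_p is person p's location and \delta_i is item i's difficulty, both on a common logit (interval) scale. In a multilevel formulation like the one described above, responses are nested within persons and persons within therapists, for example \theta_{p(j)} = \gamma_0 + u_j + r_{p(j)}, with u_j a therapist-level random effect and r_{p(j)} a person-level deviation. Because the scale is interval in logits rather than in raw-score points, the same 10-point raw change can correspond to different amounts of latent change at different points on the continuum.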
I Think the People Changed, or Was It the Test?
Kendon Conrad,  University of Illinois, Chicago,  kjconrad@uic.edu
Barth Riley,  University of Illinois, Chicago,  bbriley@chestnut.org
Ya-Fen Chan,  Chestnut Health Systems,  ychan@chestnut.org
Michael Dennis,  Chestnut Health Systems,  mdennis@chestnut.org
If the items on a measure change in difficulty from pretest to posttest, it is problematic to infer that the people changed. That is, we could be observing item change instead of person change. This presentation will examine whether there is change in item calibrations on the 16-item Substance Problem Scale of the Global Appraisal of Individual Need (Dennis et al., 2003) over five time points, i.e., baseline and four three-month posttests. Once the magnitude of item change has been presented using Rasch differential item functioning analysis in Winsteps software (Linacre, 2006), the measures will be adjusted to correct for change over time using Facets software (Linacre, 2006). Statistical significance and clinical effect size criteria will be used to estimate the magnitude of differences between adjusted and unadjusted posttests. The implications for improved outcome evaluations will be discussed.
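As a rough sketch of the logic (an illustration, not the paper's exact computations), a Winsteps-style test of item drift compares an item's calibration at two time points,

d_i = \delta_i^{(post)} - \delta_i^{(pre)}, \qquad t_i = \frac{d_i}{\sqrt{SE(\delta_i^{(post)})^2 + SE(\delta_i^{(pre)})^2}},

and items whose contrast d_i is both statistically significant and larger than a pre-specified logit threshold become candidates for the Facets-based adjustment described above, so that apparent person change is not an artifact of item drift.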

Session Title: Patient Preferences for Treatment: Correlates and Impact
Multipaper Session 404 to be held in International Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Souraya Sidani,  Ryerson University,  s.sidani@utoronto.ca
Abstract: Patient preferences for treatment refer to positive or negative attitudes toward particular treatments. Preferences present potential threats to validity in intervention evaluation research. Evidence suggests they influence enrollment in a study, participation in and adherence to treatments under evaluation, attrition, and outcome achievement. This panel focuses on an explanation of the mechanisms underlying the impact of patient preferences for treatment on validity and a synthesis of empirical evidence supporting their impact, including the results of a methodological study.
Preferences for Treatment: Methodological Effects
Souraya Sidani,  Ryerson University,  s.sidani@utoronto.ca
The first paper offers a detailed review of the mechanisms through which preferences for treatment influence internal and external validity. Results of various studies that examined the impact of preferences on enrollment, attrition, compliance with treatment, and outcome achievement, including a methodological study conducted by the presenter, will be synthesized.
Correlates of Preferences
Joyal Miranda,  University of Toronto,  joyal.miranda@utoronto.ca
This paper identifies socio-demographic and clinical factors that are associated with an explicit expression of a treatment preference. The clinical factors examined relate to the type, severity, and duration of the presenting problem (i.e., insomnia). The treatment options included two theoretically sound behavioral interventions for the management of insomnia.
Clinical and Methodological Consequences of Preferences
David Streiner,  University of Toronto,  dstreiner@klaru-baycrest.on.ca
This final paper will conclude the panel with an open discussion of the clinical and methodological consequences of preferences for treatment.

Session Title: Evaluating Student Learning Outcomes
Multipaper Session 405 to be held in Chesapeake Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Darryl Jinkerson,  Abilene Christian University,  darryl.jinkerson@coba.acu.edu
Discussant(s):
Stanley Varnhagen,  University of Alberta,  stanley.varnhagen@ualberta.ca
Accreditation-Mandated Focus on Learning Outcomes: A Case Study
Presenter(s):
Larry Seawright,  Brigham Young University,  larrys@byu.edu
Joseph Peabody,  Brigham Young University,  peabody@byu.edu
Abstract: Higher education accrediting agencies require that academic programs formulate, clearly state, and publicize appropriate program learning outcome statements. Learning outcomes need to be stated in terms of what students will learn, become, and be able to do upon completing the program, i.e. a learning-centered rather than a teaching-centered focus. This paper describes the development of program learning outcome statements at Brigham Young University in Provo, Utah. We discuss the methods used to develop program learning outcome statements, implications for faculty and administrators, and proposed methodologies for continued monitoring and curriculum improvement. We will also evaluate the effectiveness of the methods used, and the nature and level of response among the faculty, including rationalizing analogies employed by both faculty and administrators. Observations of faculty resistance are contextualized with reports of efforts made to reduce or overcome the resistance.
Using a Rubric to Evaluate Student Learning and to Increase Faculty Involvement in Curriculum Planning
Presenter(s):
Katrina Miller-Stevens,  University of Colorado, Denver,  katrina.miller-stevens@cudenver.edu
Jody Fitzpatrick,  University of Colorado, Denver,  jody.fitzpatrick@cudenver.edu
Abstract: In response to external demands for measures of program outcomes, the School of Public Affairs at the University of Colorado-Denver (UCD) began developing and using a rubric to assess the performance of graduate students in a capstone class. Other outcome measures included surveys of students and alumni; however, this paper will focus on the development of the rubric, its results, and its use. The paper will describe how developing the rubric was used to involve faculty in dialogues concerning program goals and objectives and to review courses for the means of achieving those outcomes. Results from the rubric will be presented, as well as a review of course content in reference to goals and objectives. The paper will illustrate how an initial external demand was used to change the dialogue and culture within the school and to move toward a formative review of the course curriculum.
Learning-centered Evaluation of Teaching
Presenter(s):
Trav Johnson,  Brigham Young University,  trav_johnson@byu.edu
Abstract: Over the past decade, institutions of higher education have placed increased emphasis on promoting student learning. This emphasis has influenced thinking about teaching, course design, and faculty development, but it has had little effect on the way teaching is evaluated. If institutions are serious about promoting student learning, they should align their evaluation practices with their desired teaching outcomes. Evaluating teaching based on student learning can be successful if evaluators focus on answering learning-centered evaluation questions rather than on “measuring” learning (which has been the focus of failed attempts to evaluate teaching based on student learning). In learning-centered evaluation of teaching, evaluators seek answers to questions in four areas: value of learning goals, effectiveness of learning activities, alignment and accuracy of learning assessments, and achievement of learning outcomes. The primary sources for answering these questions are students, instructors, peers, and administrators. Various methods are used to collect data from these sources.

Session Title: Exploring Innovation and Process in Arts Evaluation
Multipaper Session 406 to be held in Versailles Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Kathlyn Steedly,  The Academy for Educational Development,  ksteedly@smtp.aed.org
Evaluating AND Encouraging Innovation, Creativity, and Adaptability: Lessons From a Theatre Company
Presenter(s):
Tamara Walser,  University of North Carolina, Wilmington,  walsert@uncw.edu
Keith Bridges,  Charter Theatre,  kbridges@chartertheatre.org
Kate Mattingly,  Windwalker Corporation,  kate.mattingly@windwalker.com
Abstract: The purpose of this presentation is to share an evaluation process that informs development and improvement while encouraging innovation, creativity, and adaptability, and to propose applications of this process to educational evaluation. Charter Theatre is a small professional theatre in Washington, DC. Its mission is to develop and produce new plays: Charter seeks out new plays, works with the playwrights to clarify their aesthetic intention, develops each play into the strongest script it can be, and then produces those plays. The question of "how do you evaluate a play without sucking all the life from it?" is one all evaluators face. This presentation will describe Charter Theatre's evaluation process and how it can be applied in educational settings where innovation, creativity, and adaptability are too often sacrificed in the name of evaluation and accountability. This presentation reinforces the conference theme, evaluation and learning, as it focuses on evaluation for learning, discovery, and growth.
Evaluating Arts Exhibitions: A Constructivist Insight
Presenter(s):
Annabel Jackson,  Annabel Jackson Associates,  ajataja@aol.com
Abstract: In 2006-07 Annabel Jackson Associates carried out three evaluations of arts exhibitions as part of a process of piloting new methods of social and economic impact analysis for the Arts and Humanities Research Council. These evaluations used a constructivist methodology, in particular Personal Construct Theory and the Repertory Grid technique, to identify conscious and subconscious outcomes from the artists and curators. The outcomes were translated into a logic model, which suggested lines of argument around the value of the arts as a way of thinking and around their contribution to innovation and social capital. The logic model was used to formulate precise outcome questions, which were applied in visitor surveys. The results make a strong case for the three exhibitions evaluated and suggest a way forward in defining precise outcomes for hard-to-measure items. Equally important to the evaluator, the work was well received by the artists and curators involved.
What Makes Opera Thrive: Learning From Evaluation in the Performing Arts
Presenter(s):
Paul Lorton Jr,  University of San Francisco,  lorton@usfca.edu
Abstract: For not-for-profit cultural enterprises to survive, they need to learn what makes them worthy of their community's support, since not-for-profit enterprises must secure contributions in addition to the revenues they generate from their services. This discussion will examine the information contained in some 500 IRS Form 990 (Return of Organization Exempt From Income Tax) reports from organizations that classify their activity as "Opera" and will present a matrix of goals, achievements, and income mixes that builds on earlier efforts to define success for opera companies. By systematically exploring what the criteria are, how activity is measured against those criteria, and what effect evaluation has on the organization, we expect to help those who want opera to continue to be performed learn how a consequential process of evaluation can support that goal.
