Evaluation 2008


Session Title: Starting and Succeeding as an Independent Evaluation Consultant
Panel Session 592 to be held in Capitol Ballroom Section 1 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Jennifer E Williams,  JE Williams and Associates LLC,  jew722@zoomtown.com
Amy Germuth,  Compass Consulting Group LLC,  agermuth@mindspring.com
Discussant(s):
Michael Hendricks,  Independent Consultant,  mikehendri@aol.com
Abstract: Independent consultants will share their professional insights on starting and maintaining an independent evaluation consulting business. Panelists will describe ways of building and maintaining client relationships and share their expertise related to initial business set-up and lessons they have learned. Discussions will include the pros and cons of having an independent consulting business, the various types of business structures, methods of contracting and fee setting, and the personal decisions that bear on running your own business. They will also examine some consequences of evaluation in the context of conducting independent consulting in diverse settings. The session will include ample time for audience members to pose specific questions to the panelists.
Moving (and Shaking): From Employee to Consultant
Jennifer E Williams,  JE Williams and Associates LLC,  jew722@zoomtown.com
Dr. Jennifer E. Williams is President and Lead Consultant of J. E. Williams and Associates, an adjunct professor, licensed counselor, and Independent Consultant. She has extensive experience conducting education, social and market research and program evaluation. She will share her experience of moving from being an employee to a consultant and the impact it has had on her both personally and professionally.
Staging a One-Woman Show
Kathleen Haynie,  Kathleen Haynie Consulting,  kchaynie@stanfordalumni.org
Dr. Haynie, Director of Kathleen Haynie Consulting, has been an evaluation consultant since 2002. Her current projects span the field of science education: early childhood, K-12, learning, teaching, and assessment. She will discuss the "growing pains" of a developing business as a sole proprietor - bringing in projects; balancing workloads and priorities; hiring staff; budgeting; communicating with universities, school districts, and corporations; developing new business under time constraints.
Reflections from 30 Years of Evaluation Experience
Mary Ann Scheirer,  Scheirer Consulting,  maryann@scheirerconsulting.com
Dr. Mary Ann Scheirer has been an evaluator for three decades, working in a variety of settings including higher education, government agencies, large consulting firms, and now, independent consulting. Her presentation will focus on how and why she moved into independent consulting and lessons learned from this move. She will provide a contrasting perspective, as her move came after many years of service in multiple organizations.
Traveling and Working: International Evaluation Consulting - One Woman's Perspective
Tristi Nichols,  Manitou Inc,  tnichols@manitouinc.com
Dr. Tristi Nichols is a program evaluator and owner of a sole proprietorship consulting business. Her work focuses primarily on international issues, which provides a unique lens through which to view independent consulting. Her reflections about consulting, international travel, the types of decisions she makes, and their impacts on her professionally and personally as a wife and mother will be of interest to novice, veteran, or aspiring independent consultants.

Session Title: Using Logic Models to Support Organizational Alignment, Evaluation, and Learning: One Organization's Journey Toward a Culture of Evaluation
Panel Session 593 to be held in Capitol Ballroom Section 2 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the College Access Programs TIG
Chair(s):
Michelle Jay,  University of South Carolina,  jaym@gwm.sc.edu
Keren Zuniga McDowell,  Citizen Schools,  kerenzuniga@citizenschools.org
Discussant(s):
Michelle Jay,  University of South Carolina,  jaym@gwm.sc.edu
Abstract: This panel will discuss how one multi-program, multi-site organization used logic models to emerge from an organizational culture where programs operated in isolation, without cross-program outcomes measurement or sharing of data. The central theme of the panel will revolve around the development and use of a series of logic models, which were created collaboratively with organization stakeholders to ensure comprehensiveness and depth of understanding. Levels of success and lessons learned will be reviewed in three categories: strategic alignment of programs' measurement and outcomes; practical implementation of an effective and efficient evaluation policy; and, organizational growth and capacity building resulting from evaluation findings. Examples will be provided as to how logic models were used to align program theory and practice, inform evaluation design, and define program impact.
Aligning Program Practice and Outcomes Across Multiple College Access Programs
Julie Crump,  The Education Resources Institute,  crump@teri.org
The use of logic models to align program theory, practice, and outcomes will be described. In the last program year (2007-08), TERI delivered eight separate programs and services and piloted the delivery of three new programs. Prior to 2007-08, programs operated in isolation and had little communication with each other. As the organization adopted a new strategic plan and implemented a culture of evaluation, cross-program communication and measurement became integral. Logic models were used to define the organization's theoretical framework and to align existing programs' activities and outcomes with the organization's theory of practice, as well as with one another. Once programs were aligned, the gaps in outcomes measurement became clearly visible, indicating where program activities were targeted but not measured, making impact impossible to define.
Logic Models as a Tool for Establishing Efficient and Effective Evaluation Policy
Keren Zuniga McDowell,  Citizen Schools,  kerenzuniga@citizenschools.org
This presentation will focus on how logic models were used to drive the development of an organization-wide evaluation policy and plan, and will discuss challenges that were encountered. Logic models were used to define the causal links between program inputs, activities, outcomes, and impact, and the role of assumptions and external factors in program delivery within the framework of the organization's theory of practice. Key challenges to developing a culture of evaluation for the first time within a multi-program, multi-site organization included defining program outcomes, measuring short- and long-term program impact, building an internal evaluation infrastructure, engaging all stakeholders, and securing staff buy-in, all with limited resources and capacity.
Creating a Culture of Learning Through Evaluation to Inform Systemic Change
Adrian Haugabrook,  The Education Resources Institute,  haugabrook@teri.org
This presentation will discuss how TERI used logic models to drive organizational change through data-driven reflection and learning, as well as to advocate for the widespread development and integration of a culture of evaluation. Organizational emphasis was placed on the cycle of theory-practice-evaluation and the importance of using evaluation to inform program decision making and ongoing improvement of delivery. TERI found that logic models provided staff with a shared understanding of the organization's new program delivery model and were a helpful tool in defining cross-program accountability through outcomes measurement. The presentation will also describe how logic models were used to communicate organizational goals internally as well as externally with funding agencies, community partners, and service recipients.

Session Title: Assessment and Evaluation in Higher Education: Administrative and Policy Perspectives
Multipaper Session 594 to be held in Capitol Ballroom Section 3 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Jeanne Hubelbank,  Independent Consultant,  jhubel@evalconsult.com
Discussant(s):
Beverly Parsons,  InSites,  beverlyaparsons@aol.com
The Glass is Half Full and Half Empty: The Effect of No Child Left Behind Evaluation Policies on a College/Public Middle School Physics and Engineering Partnership Program
Presenter(s):
Jeanne Hubelbank,  Worcester Polytechnic Institute,  jhubel@evalconsult.com
Abstract: While there are issues with No Child Left Behind’s (NCLB) policy emphasis on "evidence-based” evaluations, the policy is a reality that evaluators must address. Despite our initial concerns, during an evaluation of a Title II, Part B Mathematics and Science Partnership (MSP) program, we found that NCLB’s evaluation policies were both a help and a hindrance to the practices of the evaluation and the program. We discuss how five components of the legislation affected the program and its evaluation. These components are: emphasis on “developing evidence-based outcomes,” the pre- and post-testing requirement, annual reporting requirements, use of state tests to assess student learning, and liaison with a state external evaluator. NCLB grant policy guided our decisions and actions as we interwove it with our evaluation views (based on the Program Evaluation Standards) to plan and implement the program and its evaluation. We discuss implications for our program, higher education, and other programs.
Assessment and the Program Evaluation Standards
Presenter(s):
Rick Axelson,  University of Iowa,  rick-axelson@uiowa.edu
Arend Flick,  Riverside Community College,  arend.flick@rcc.edu
Abstract: It has often been argued that there are important distinctions between assessment and evaluation practices. Given the conceptual, methodological, and political challenges that have hampered assessment in higher education, it is understandable that it is the differences between assessment and evaluation that have received the most attention. Yet it is increasingly evident that there are also important similarities between them. As assessment efforts mature, practitioners often encounter many of the issues faced by evaluators (i.e., utility, feasibility, propriety, accuracy). Drawing upon the evaluation literature can provide valuable insight into these challenges. In particular, we believe that the Program Evaluation Standards (http://www.wmich.edu/evalctr/jc/) offer a helpful framework for addressing many of the thorny assessment issues encountered on campuses. In this session we will outline how some of the most commonly used arguments against assessment can be effectively addressed by the practices outlined in the Standards.
Evaluation in Russian Higher Education Based on the New Version of the State Educational Standards
Presenter(s):
Victor Zvonnikov,  State University of Management,  zvonnikov@mail.ru
Marina Chelyshkova,  State University of Management,  mchelyshkova@mail.ru
Abstract: Russia’s signing of the Bologna Declaration has served as the precondition for introducing two-level degree preparation in Russian higher education institutions. Instead of specialists with five years of training, these institutions will prepare bachelors and masters over four and six years, respectively. These changes require significant reworking of content, of the State educational standards, and of the assessment system for certifying graduates. In this report, new approaches to evaluating the knowledge and competences of higher education graduates are presented. The evaluation is based on the new version of the State educational standards and employs a competence model of graduate training, measurement theory, and a variety of assessment procedures and instruments. The authors suggest a competence model of graduate training in Russian higher education, a structure of competences, and approaches to constructing multiple measures for assessment, combining evaluation data from multiple sources when making decisions about the quality of graduates’ achievements.
Changing Evaluation Policy and Practice: Exploring Evaluation's Potential Role in Facilitating Accreditation Within a Canadian University
Presenter(s):
Stanley Varnhagen,  University of Alberta,  stanley.varnhagen@ualberta.ca
Brad Arkison,  University of Alberta,  brad.arkison@ualberta.ca
Jason Daniels,  University of Alberta,  jason.daniels@ualberta.ca
Abstract: Traditionally, evaluation has been narrowly defined and has typically occurred either in a constrained, primarily summative environment or in addressing specific, mandated evaluative requirements. Additionally, existing post-secondary evaluations are seldom proactive with well-defined criteria. Relatively new requirements around accreditation may require these traditional approaches to change, and this will require a shift in existing evaluation policies and practice. The capacity needed to address the required changes cannot be adequately addressed within the current internal Faculty structure, yet the process cannot be completely external to the Faculty. More systemic changes are required that will take time, require appropriate support, and will continually evolve. In addition, the process needs to recognize and adapt to specific discipline requirements. Done properly, the evaluation process can better facilitate improvement in post-secondary education and allow a more proactive approach that could be helpful in a number of ways, including facilitating accreditation.

Session Title: Relief in Sight: A Systems-Thinking Application of Self Determination Theory-based Logic Models to Modify the Effects of High Stakes Testing
Demonstration Session 595 to be held in Capitol Ballroom Section 4 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Presenter(s):
Deborah Wasserman,  The Ohio State University,  wasserman.12@osu.edu
Abstract: Whether in education, health care, mental health, or any other human service system, high-stakes testing is a double-edged sword. As evidenced by No Child Left Behind, testing for accountability purposes can both improve and diminish program quality. This demonstration presents the use of Self-Determination Theory-based logic models as an approach that reconciles accountability with quality improvement. Based on a systems-thinking approach, these models create a means for holding human service systems responsible both for outcomes and for the well-being of the individuals the outcomes affect. Data from the evaluation of a comprehensive out-of-school program exemplify how the data can be collected and illustrate the utility of the results. In addition to the theoretical explanation and exemplar, participants will be provided with tools for constructing models specific to their own evaluations, guidance on selecting and using measurement instruments, and a methodology for analysis.

Session Title: Conversation Hour With the 2008 AEA Award Winners
Panel Session 596 to be held in Capitol Ballroom Section 5 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the AEA Conference Committee
Chair(s):
James Altschuld,  The Ohio State University,  altschuld.1@osu.edu
Lois-ellin Datta,  Datta Analysis,  datta@ilhawaii.net
Abstract: This is a unique session begun at AEA 2007. It provides an opportunity for AEA members to meet and interact with the 2008 national award winners. What insights and perceptions about the field of evaluation have they gained from their work in it? Attendees at the session will be able to discuss with the awardees the factors (mentorships, learnings, special projects, etc.) in their careers that made a major impression on them and enhanced their efforts and productivity. Such discussion should be illuminating and informative for all members of the Association.
The 2008 Alva and Gunnar Myrdal Government Award
Stephanie Shipman,  United States Government Accountability Office,  shipmans@gao.gov
-
The 2008 Paul F. Lazarsfeld Award
J Bradley Cousins,  University of Ottawa,  bcousins@uottawa.ca
-
The 2008 Marcia Guttentag Award
Chris LS Coryn,  Western Michigan University,  chris.coryn@wmich.edu
Kelly Hannum,  Center for Creative Leadership,  hannumk@ccl.org
-
The 2008 Outstanding Publication Award – Getting to Outcomes
Matthew Chinman,  RAND Corporation,  chinman@rand.org
-

Session Title: Multicultural Issues in Public Health Evaluation
Multipaper Session 597 to be held in Capitol Ballroom Section 6 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Tamara Bertrand Jones,  Florida State University,  tbertrand@fsu.edu
Substance Abuse Treatment and HIV Prevention: The Need for Evaluation by Culturally Competent Practitioners
Presenter(s):
Yarneccia Hamilton,  Clark Atlanta University,  yhamilton97@aol.com
Abstract: There is an inherent need for the evaluation of HIV (Human Immunodeficiency Virus) prevention interventions that function within substance abuse treatment centers. Heterosexual transmission has been determined to be the leading cause of HIV infection among African American women in the United States. Many of these women engage in risky sexual behaviors for the purposes of acquiring and using illegal substances. Practitioners within substance abuse treatment centers are charged with educating clients (former substance abusers) on HIV prevention; however, little is known regarding the evaluation of their practice with this vulnerable population, or about the evaluation policies that exist to assist in this process (Hall, Amodeo, Shaffer, & Vander Bilt, 2000). This paper seeks to explore the practitioner’s role in HIV prevention education within substance abuse treatment facilities, address prevention policy, and identify additional evaluative tools to be used in evaluating practitioners’ practice and developing standard operating procedures/policies.
Managed Care and Public Mental Health Services: Implications for Culturally Competent Evaluation Practice
Presenter(s):
Aisha Williams,  Clark Atlanta University,  aishad@comcast.net
Abstract: Due to the implementation of managed care and the privatization of public mental health services, evaluation practice in the field of social work has increased. However, this new environment of management, policy, and accountability has created some unique barriers, especially for people of color. This paper seeks to explore the impact of managed care and privatization on mental health service provision for people of color, the unique barriers or considerations it presents for evaluators who evaluate the effectiveness of those services, and how evaluators can overcome those barriers and increase the cultural competence of their evaluation strategies and methodologies. This paper contributes to the field of evaluation by examining how policy can impact the development of a unique evaluation approach that ensures respect and competence for vulnerable populations.
Exploring the Impact of Colorado House Bill 1123 on the Hispanic Population's Willingness to Engage in Evaluation Practice
Presenter(s):
Deborah W Trujillo,  Research Evaluation Associates for Latinos,  dr.trujillo@real-consulting.org
Theresa Rosner Salazar,  Research Evaluation Associates for Latinos,  dr.salazar@real-consulting.org
Victoria Watson,  Colorado State University at Pueblo,  vicky@real-consulting.org
Abstract: Since the passage of Colorado House Bill 1123, non-profits throughout the state have been struggling to understand which clients they can serve and which ones they have to turn away. Essentially, the bill states that government-issued identification is needed to apply for or receive any services that are supported by the state. This has had a detrimental impact on non-profits' practices and policies. Because of this bill, many stories are emerging of what many would consider discriminatory practices. Many Hispanics, documented and undocumented, feel "fear" about seeking services, and those who do receive services are less likely to provide any additional data for evaluation purposes. This presentation will explore the policy's impact on evaluating programs and initiatives targeting communities of color in the state of Colorado.

Session Title: Service-Learning and Civic Engagement: Framing the Evaluation Issues
Panel Session 598 to be held in Capitol Ballroom Section 7 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Annalisa Raymer,  University of Alaska,  afalr@uaa.alaska.edu
Abstract: Service-learning is a concept that involves engaging in community service and learning subjects and dispositions (attitudes, values, etc.) related to being a citizen in a democratic society. A construct that is sometimes viewed as 'an amorphous concept that defies rigid definitions and universal understanding' (Shumer, 1993), service-learning is often defined by its context. Differing contexts create havoc for evaluators because they must continuously negotiate goals, purposes, processes, and outcomes. The goal of this panel is to frame diverse issues involved in conducting evaluations of service-learning and civic engagement. Among the challenges to be discussed are: 1) delineating both the terms and actions of the programs; 2) issues of fit and effectiveness - how program activities lead to measures of effectiveness and quality; and 3) considerations in selecting evaluation approaches that match social and civic outcomes, from participatory approaches to case studies and individual systems of assessment.
Matching the Methods to the Intent: Considerations in Framing Evaluation of Civic Engagement and Service-Learning
Anne Hewitt,  Seton Hall University,  hewittan@shu.edu
Dr. Anne M. Hewitt is the Director of the Seton Center for Community Health, and she specializes in evaluation of non-profit agencies. She recently completed an evaluation of a $2 million grant focusing on service learning in the university. Dr. Hewitt is also the CEO/Founder of Mountainside Associates, a consulting firm founded in 1996.
Defining the Evaluand and Dealing With Complexity: Challenges in Service-Learning Evaluation
Robert Shumer,  University of Minnesota,  drrdsminn@msn.com
Robert Shumer, Ph.D, is the founder and former Director of the National Service-Learning Clearinghouse at the University of Minnesota. He has been involved in service-learning and civic engagement for almost 40 years and has conducted more than 25 studies of programs dealing with service-learning, civic engagement, character education, and state and national service. He is considered one of the pioneers of the service-learning movement.
Constructs and Measures Employed in Service-Learning Evaluation Today
Bradley Smith,  University of South Carolina,  drbradleyhsmith@gmail.com
Dr. Brad Smith has been teaching service-learning courses at the University of South Carolina since 2001 and is the Chair of the Provost's Task Force on Service Learning, which is tasked with promoting service-learning scholarship at USC. Through these activities, Dr. Smith has seen several evaluations of service-learning classes and is leading an evaluation review study on measurement and designs for evaluating service-learning classes. His presentation in the panel will focus on constructs to consider when evaluating service learning and extant measures for these constructs.
Assessing the Impact of Service Learning Through the Lens of Community-Based Research
Naomi Penney,  University of Notre Dame,  naomi5645@yahoo.com
Naomi Penney, PhD, has worked at the local, state, and federal levels in public health. She has worked as an evaluation consultant to the Agency for Toxic Substances & Disease Registry, the Global AIDS Program and for several non-profit organizations over the past five years. Two years ago she began working with the Center for Social Concerns at University of Notre Dame to build the community-based research aspect of their service learning activities. She will speak on using a stepped approach to identifying community impact through qualitative interviews of both faculty and community partners and identifying how each group is envisioning what 'community impact' should look like.
Surveying the Landscape of Understandings: What Do the Terms "Civic Engagement" and "Service Learning" Connote?
Annalisa Raymer,  University of Alaska,  afalr@uaa.alaska.edu
Annalisa Raymer, PhD, serves as Faculty Advisor for the academic program in Civic Engagement at the University of Alaska, Anchorage, where she is also active in efforts to advance Community Engaged Scholarship. Her teaching and research focus on building democratic participation in public life. Annalisa's background includes work in community planning, civic leadership development, and action research, and she is particularly interested in participatory placemaking of public space. In this session she will present a spectrum of conceptual understandings of the key terms, civic engagement and service-learning.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Can Second Life be a Useful Evaluative Tool in Real Life?
Roundtable Presentation 599 to be held in the Limestone Boardroom on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Stephen Hulme,  Brigham Young University,  stephen_hulme@yahoo.com
Tonya Tripp,  Brigham Young University,  tonya.tripp@byu.edu
Abstract: Second Life, a popular Multi-User Virtual Environment, provides many technological capabilities that were never possible before. News stations (CNN), professional organizations (AECT, among others), educators, businesses (Wells Fargo), and vendors (Lexus) have recognized the benefits of this tool, but evaluators have yet to jump on board. The capabilities of Second Life should not be overlooked by evaluators; there are many tools that facilitate new and exciting evaluations and increase flexibility and capability in our current evaluations. These capabilities include synchronous discussion from anywhere in the world and the option to capture (video record) conversations, meetings, presentations, focus groups, and more, which will enable evaluators to do things they’ve never done before. In addition to connecting with their current audience, evaluators will be able to reach an entirely different demographic as well. This roundtable discussion will explore the pros and cons of using Second Life as an evaluative tool.
Roundtable Rotation II: New Tools for the Trade: The Role of Interactive Technology in Training the Next Generation of Evaluators
Roundtable Presentation 599 to be held in the Limestone Boardroom on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
SaraJoy Pond,  Brigham Young University,  sarajoypond@gmail.com
David Williams,  Brigham Young University,  dwilliams@byu.edu
Abstract: What roles do simulations, expert systems, video analysis tools and other forms of interactive technology play in the training of new evaluators? What role could they play? How can we integrate real-world experience into the predominant 1-semester or 1-year course that comprises all the training most new evaluators get? What solutions have been contributed in this area? Where do we go from here? This roundtable will feature a presentation of a new evaluation tool, an exploration of its features and the results of pilot testing, and an open discussion about possible implications and future directions for technology in training new evaluators.

Roundtable: What Works, Effective Recidivism Reduction and Risk-Focused Prevention Programs: A Compendium of Evidence-Based Options for Preventing New and Persistent Criminal Behavior
Roundtable Presentation 600 to be held in the Sandstone Boardroom on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Crime and Justice TIG
Presenter(s):
Roger Przybylski,  RKC Group,  rogerkp@comcast.net
Abstract: This roundtable is based on the presenter’s 2008 publication titled What Works, Effective Recidivism Reduction and Risk-Focused Prevention Programs: A Compendium of Evidence-Based Options for Preventing New and Persistent Criminal Behavior. Based on a comprehensive and systematic review of the literature, the publication discusses the impact of incarceration on crime, what works to reduce recidivism, what works to prevent the onset of delinquent and criminal behavior, and key issues concerning effective program implementation. Methods employed, key findings, and lessons learned from the research will be described during the presentation.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Program Evaluation and Public School Districts: Facing the Challenges
Roundtable Presentation 601 to be held in the Marble Boardroom on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Chandra Johnson,  Clayton County Public Schools,  cfjohnson@clayton.k12.ga.us
Qiana Cutts,  Clayton County Public Schools,  qmcutts@clayton.k12.ga.us
Joe Nail,  Clayton County Public Schools,  jnail@clayton.k12.ga.us
Stephanie Beane,  Clayton County Public Schools,  snbeane@clayton.k12.ga.us
Abstract: In the age of accountability supported by No Child Left Behind (2001) mandates, school districts across the United States are relying more and more on the evaluation of teaching and learning, academic programs, best practices, and the like. As such, school districts’ research and accountability departments are faced with an urgent need to engage in constant program evaluation. Some school districts’ desire to have microwave program evaluations detracts from research and accountability departments’ opportunities to provide “methodically sound” evaluations that “produce credible, comprehensive, [and] context-sensitive” results. Oftentimes, research and accountability specialists are transformed from evaluators to evaluation teachers while working with district personnel who possess a limited understanding of evaluation standards and procedures. In our system, we have helped to enhance evaluation knowledge among our stakeholders while increasing our own expertise. This presentation will expound on the issues and challenges around conducting evaluation in a large urban school district and outline measures that were taken to build stakeholders’ program evaluation competency.
Roundtable Rotation II: Evaluation of the Planning and Implementation Efforts for Year One Of an Urban High School Reform Project
Roundtable Presentation 601 to be held in the Marble Boardroom on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Sharon Ross,  Founder's Trust,  sross@founderstrust.org
Gibbs Kanyongo,  Duquesne University,  kanyongog@duq.edu
Rodney Hopson,  Duquesne University,  hopson@duq.edu
Jessica Adams,  Duquesne University,  adams385@duq.edu
Carol Brooks,  Founder's Trust,  cxbrooks@founderstrust.org
Elizabeth Maurhoff,  Founder's Trust,  emaurhoff@founderstrust.org
Abstract: A school district in the northeast United States is in the first year of a multifaceted reform plan for achieving high school excellence. This evaluation focuses on the planning and initial implementation of three of the reform efforts in the district’s high schools, which include a program for students falling behind in their reading level, improved career and technical education programming, and a program to assist students through the critical transition that occurs in the 9th grade. The evaluation is unique in two ways: its use of a culturally responsive approach to ground the project in the context of the city in which the district is located and its use of a more democratic approach as a way to ensure that the voices of those being impacted by reform are heard and incorporated into decisions the district makes as a result of this evaluation.

Session Title: Evaluation Capacity Building: Tools Emerging From Practice
Multipaper Session 602 to be held in Centennial Section A on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Government Evaluation TIG
Chair(s):
Maria Jimenez,  University of Illinois Urbana-Champaign,  mjimene2@uiuc.edu
Discussant(s):
Jennifer Martineau,  Center for Creative Leadership,  martineauj@ccl.org
Building Culturally Competent Evaluation Capacity in California's Tobacco Control Programs
Presenter(s):
Jeanette Treiber,  University of California Davis,  jtreiber@ucdavis.edu
Robin Kipke,  University of California Davis,  rakipke@ucdavis.edu
Abstract: The California Department of Public Health focuses its resources for tobacco control work on local smoke-free policy adoptions, for instance in multi-unit housing, outdoor areas, events, etc. Local programs use process and outcome evaluation methods to inform local campaigns and measure success. One of the greatest challenges facing tobacco control program evaluation in California is the diversity of the state's population that renders one-size-fits-all evaluation approaches ineffective. Therefore, the UC Davis Tobacco Control Evaluation Center, which serves 100 local California tobacco control programs, has been developing tools for culturally competent evaluation that help these programs gain access to culturally specific groups, develop data collection instruments, and analyze results. This paper presents the strategies used in strengthening local organizations’ evaluation capacity. The newly developed series of tools will be of use to local tobacco control programs as well as other programs providing social and health promotion services for diverse populations nationwide.
Feasibility of Obtaining Outcome Data From Informal Science Education Projects
Presenter(s):
Gary Silverstein,  Westat,  silverg1@westat.com
John Wells,  Westat,  johnwells@westat.com
Abstract: The National Science Foundation’s Informal Science Education (ISE) program supports projects designed to increase public interest in, understanding of, and engagement with science, technology, engineering, and mathematics. This session will examine the process by which the ISE program has shifted its emphasis from documenting outputs to measuring outcomes. Of particular interest will be the opportunities for obtaining outcome-oriented results that program officers can use to identify promising practices. We will also focus on the challenges that the initial cohort of respondents encountered in specifying and measuring progress toward audience outcomes, including difficulty (1) articulating valid and measurable outcomes that occur after exposure to an ISE event, (2) documenting project outcomes within the grant period, and (3) developing an effective and rigorous evaluation strategy. The presentation will also describe the range of technical assistance provided during data collection to help projects devise and measure valid and measurable ISE outcomes.
Improved Evaluation through Enhanced Policies and Better Capacity Building
Presenter(s):
Andy Fourney,  Network for a Healthy California,  andy.fourney@cdph.ca.gov
Barbara Mknelly,  Network for a Healthy California,  barbara.mknelly@cdph.ca.gov
Sharon Sugerman,  Network for a Healthy California,  sharon.sugerman@cdph.ca.gov
Abstract: The Network for a Healthy California (Network) contracts with agencies and institutions (contractors) throughout California to provide nutrition education to Food Stamp-eligible populations. Contractors participate in an evaluation governed by policies that were created to standardize methods, increase rigor, and maximize intervention impact. These policies, while largely successful, have limitations. For example, contractors feel limited by the requirement that they must use a validated survey to assess change. To address this, capacity-building strategies were implemented to help contractors match nutrition education activities with determinants of behavior. This fit was used to accurately match interventions with surveys and to refine interventions. Strategies also prepared contractors to report qualitative data that capture successes not measured by the surveys. Discordance between policies and field capacity can lead to an incomplete picture of program impact and inaccurate interpretation of results. The implication for evaluation is that capacity building and policy making are iterative.

Session Title: Longitudinal and Growth Curve Analysis
Multipaper Session 603 to be held in Centennial Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Patrick McKnight,  George Mason University,  pmcknigh@gmu.edu
Discussant(s):
Frederick Newman,  Florida International University,  newmanf@fiu.edu
Evaluating Mental Health Recovery: A Latent Growth Curve Modeling Approach
Presenter(s):
Kathryn DeRoche,  Mental Health Center of Denver,  kathryn.deroche@mhcd.org
Antonio Olmos-Gallo,  Mental Health Center of Denver,  antonio.olmos@mhcd.org
Christopher McKinney,  Mental Health Center of Denver,  christopher.mckinney@mhcd.org
Abstract: The field of adult mental health has been evolving over the last two decades to focus on consumer-centered mental health recovery. The current study evaluated change across time on two measures of mental health recovery through the use of a multivariate latent growth curve model. The presentation will discuss the influence of moderators of recovery, including the level of services being received, characteristics of staff members that promote recovery, and the consumers’ level of daily functioning. The clinical implications regarding the initial level of recovery, the rate of change in recovery across time, and the potential moderators of change for community-based mental health centers and their consumers will be discussed. In addition, the benefits of applying latent growth curve modeling techniques for evaluating change in the social and behavioral science disciplines will be highlighted.
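As a rough illustration of the growth-modeling idea behind this kind of study (individual trajectories of a recovery score, with a moderator of the rate of change), the sketch below uses a simplified two-stage approach on synthetic data; it is not the presenters' multivariate latent growth curve model, and all variable names and values are hypothetical.

```python
# Simplified two-stage growth sketch (synthetic data): per-person trajectories of a
# recovery score across four waves, then a check of whether a hypothetical moderator
# (service level) relates to the rate of change.
import numpy as np

rng = np.random.default_rng(0)
n_consumers, n_waves = 200, 4
time = np.arange(n_waves)                               # assessment waves 0..3
service_level = rng.integers(0, 2, n_consumers)         # hypothetical moderator: 0 = low, 1 = high

# Synthetic scores: higher service level produces a somewhat steeper trajectory.
intercepts = rng.normal(50, 5, n_consumers)
slopes = rng.normal(2 + 1.5 * service_level, 1)
scores = intercepts[:, None] + slopes[:, None] * time + rng.normal(0, 3, (n_consumers, n_waves))

# Stage 1: ordinary least squares growth parameters for each consumer.
X = np.column_stack([np.ones(n_waves), time])
betas = np.linalg.lstsq(X, scores.T, rcond=None)[0]     # row 0 = intercepts, row 1 = slopes
est_slopes = betas[1]

# Stage 2: compare estimated rates of change across levels of the moderator.
print(f"mean slope, low service level:  {est_slopes[service_level == 0].mean():.2f}")
print(f"mean slope, high service level: {est_slopes[service_level == 1].mean():.2f}")
```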
A Longitudinal Examination of the Academic Year and Summer Learning Rates of Full and Half-Day Kindergartners
Presenter(s):
Keith Zvoch,  University of Oregon,  kzvoch@uoregon.edu
Joseph Stevens,  University of Oregon,  stevensj@uoregon.edu
Abstract: Literacy data collected over the course of two academic years were used to estimate the rate at which full- and half-day kindergartners acquired literacy skills during kindergarten, first grade, and the intervening summer. Application of piecewise growth models to the time series data obtained on students from a large southwestern school district revealed that economically disadvantaged full-day kindergartners gained literacy skills at a faster rate than their more economically advantaged and initially higher-scoring half-day peers during the kindergarten year. However, over the summer between kindergarten and first grade, the literacy performance of full-day students dropped while their half-day peers maintained the literacy gains acquired during kindergarten. Full- and half-day alumni growth rates then remained equivalent over the first-grade school year. Implications for evaluating the short- and long-term efficacy of school-based initiatives like full-day kindergarten and, more generally, the effectiveness of schools and schooling are discussed.
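For readers unfamiliar with piecewise growth coding, the minimal sketch below illustrates the general idea on synthetic data: each phase (kindergarten, summer, first grade) gets its own time clock, so its coefficient estimates that phase's learning rate. The mixed-model call, column names, and data are illustrative assumptions, not the presenters' actual analysis.

```python
# Illustrative piecewise growth coding (synthetic data; hypothetical column names).
# Each phase clock advances only during its phase, so its coefficient is that phase's learning rate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300                                            # students, 4 assessments each
student = np.repeat(np.arange(n), 4)
full_day = np.repeat(rng.integers(0, 2, n), 4)     # 1 = full-day kindergarten
ability = np.repeat(rng.normal(0, 3, n), 4)        # student-level random intercept

kg_time     = np.tile([0, 9, 9, 9], n)             # months of kindergarten elapsed
summer_time = np.tile([0, 0, 3, 3], n)             # months of summer elapsed
g1_time     = np.tile([0, 0, 0, 9], n)             # months of first grade elapsed

score = (40 + ability
         + 2.0 * kg_time + 0.8 * full_day * kg_time          # faster kindergarten growth if full-day
         - 0.5 * summer_time - 0.6 * full_day * summer_time  # summer loss, larger if full-day
         + 1.8 * g1_time
         + rng.normal(0, 4, student.size))

df = pd.DataFrame(dict(student=student, score=score, full_day=full_day,
                       kg_time=kg_time, summer_time=summer_time, g1_time=g1_time))

model = smf.mixedlm("score ~ full_day + kg_time + summer_time + g1_time"
                    " + full_day:kg_time + full_day:summer_time + full_day:g1_time",
                    data=df, groups=df["student"])           # random intercepts for students
print(model.fit().summary())
```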
Multilevel Longitudinal Analysis of Teacher Effectiveness and Reading Fluency in Native American Students
Presenter(s):
Heather Chapman,  EndVision Research and Evaluation,  hjchapman3@gmail.com
Abstract: All students need to learn to read in order to be successful in school and in life in general. Unfortunately, many students do not learn to read at grade level by the time they finish high school. In the Native American population, the number of students reading at grade level has been reported to be as low as 26%. Many different interventions have been used to increase achievement, but the analyses used to determine the success of these interventions have often not been adequate. Often these methods have been cross-sectional in nature and have failed to account for the clustering of students within classrooms and teachers within schools. The proposed paper aims to investigate the relationship between reading ability and several other factors using more advanced multilevel longitudinal methods. These methods have the potential to decrease the bias introduced into many traditional analyses, which leads to increased accuracy of results.

Session Title: Promoting Policy-Relevant Impact Evaluation for Enhanced Development Effectiveness
Panel Session 604 to be held in Centennial Section C on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Presidential Strand and the International and Cross-cultural Evaluation TIG
Chair(s):
Jim Rugh,  Independent Consultant,  jimrugh@mindspring.com
Abstract: The results agenda adopted by many development agencies has driven a desire for stronger evidence to be provided by impact evaluation. At the same time, there have been calls from some quarters for impact evaluation to become more rigorous. Various agencies have been involved in promoting different initiatives to expand coverage by quality impact evaluations, but have been aware of issues regarding both methodological debates and questions of ownership. The presenters in this session provide differing perspectives on the development impact evaluation debate: that of a bilateral agency, that of a developing country evaluator, and that of an insider in the new initiatives.
A Bilateral Perspective
Nick York,  Department for International Development,  n-york@dfid.gov.uk
The United Kingdom Department for International Development (DFID) has strongly aligned itself with the Millennium Development Goals and the associated results agenda. Like other UK government departments, DFID has a Public Service Agreement with the Treasury which sets outcome targets to be achieved in the main recipients of UK development aid. But this approach raises the question of whether changes in outcomes can be attributed to UK development assistance, hence an increased interest in impact evaluation. DFID financed a partnership with the World Bank's Independent Evaluation Group, which not only provided an entry into impact evaluation debates but also created the platform from which NONIE was launched. DFID is also a supporter of 3ie. The presentation will comment on the evolving international architecture for impact evaluation from the point of view of a bilateral agency, and on the challenges for evaluation posed by the changing aid environment.
A Developing Country Perspective
Zenda Ofir,  African Evaluation Association,  zenda@evalnet.co.za
There has been a proliferation of interest in impact evaluation in recent years. But the resulting initiatives started out as Northern-driven, with some agencies promoting an approach solely dependent on Randomized Control Trials (RCTs). However, there has been an opening up of these initiatives with expanding membership of NONIE and signs that 3ie is seeking to promote a Southern-led and implemented impact evaluation program. As a result developing country evaluators have shifted their view of at least some of these initiatives from one of skepticism to cautious engagement. However, debates remain. This presentation lays out developments to date from a developing country perspective and lays out the issues which still need to be addressed.
An Insider's View
Howard White,  International Institute for Impact Evaluation,  hwhite@3ieimpact.org
The development of NONIE and 3ie has taken place over strongly contested territory, from the meaning of impact, through methodological debates, to questions of ownership and due process. Moreover, the apparent proliferation of initiatives seems to run counter to donor claims to be moving toward harmonization. The presenter has been close to the development of these initiatives, first as a prime mover in NONIE and now as Executive Director of 3ie. At the same time, he participated in debates in the World Bank in which IEG was fighting a rearguard action to protect policy relevance in impact evaluation design, which was in danger of being overlooked in the clamor for technical rigor. This presentation will lay out the choices that have been made as the initiatives have developed.

Session Title: Research on Evaluation Methods
Multipaper Session 605 to be held in Centennial Section F on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
John LaVelle,  Claremont Graduate University,  john.lavelle@cgu.edu
Methodological Challenges of Collecting Evaluation Data From Sexual Assault Survivors: A Comparison of Three Methods
Presenter(s):
Rebecca Campbell,  Michigan State University,  rmc@msu.edu
Adrienne Adams,  Michigan State University,  adamsadr@msu.edu
Debra Patterson,  Michigan State University,  patte251@msu.edu
Abstract: This project integrated elements of responsive evaluation and participatory evaluation to compare three evaluation data collection methods for use with a hard-to-find (HTF), traumatized, vulnerable population: rape victims seeking post-assault medical forensic care. The first method involved on-site, in-person data collection immediately post-services; the second, telephone follow-up assessments one week post-services; and the third, private, self-administered surveys completed immediately post-services. There were significant differences in response rates across methods: 88% in-person, 17% telephone, and 41% self-administered. Across all phases, clients gave positive feedback about the services they received and about all three methods of data collection. Follow-up analyses suggested that non-responders did not differ with respect to client characteristics, assault characteristics, or nursing care provided. These findings suggest that evaluations with HTF service clients may need to be integrated into on-site services because other methods may not yield sufficient response rates.
Audit Report Styles: Management versus Auditor Perspectives
Presenter(s):
Joyce Keller,  St Edward's University,  joycek@stedwards.edu
Abstract: This study tests the impact of an audit/evaluation report (summary only) written in the two styles reflected in the professional standards promulgated by the Institute of Internal Auditors (IIA), the American Evaluation Association (AEA), and the General Accounting Office (GAO). The report written in the AEA style will provide a balance of strengths and weaknesses, while the report written in the GAO/IIA style will place emphasis on findings. Both will include conclusions and recommendations. Approximately thirty managers and thirty auditors will read the report summaries and answer follow-up questions. Half of each group will receive the AEA-style report first and the GAO/IIA-style report second; the other half of each group will receive the reports in the converse order. Follow-up questions will address the balance in the report, the clarity of findings, the strength of the findings, the receptivity of the reader to the report, and other aspects.
Reporting Statistical Practices in Evaluation: Implications of Effect Sizes and Confidence Intervals in the Interpretation of Results
Presenter(s):
Melinda Hess,  University of South Florida,  mhess@tempest.coedu.usf.edu
John Ferron,  University of South Florida,  ferron@tempest.coedu.usf.edu
Jennie Farmer,  University of South Florida,  farmer@coedu.usf.edu
Jeffrey Kromrey,  University of South Florida,  kromrey@tempest.coedu.usf.edu
Aarti Bellara,  University of South Florida,  bellara@coedu.usf.edu
Abstract: As the trend for accountability continues to increase in many fields (e.g., education), the need for quality evaluation efforts is becoming increasingly prevalent. However, regardless of how well an evaluation may have been conducted, failure to adequately convey all aspects of the evaluation, including methods and findings, may result in inadequate, possibly even incorrect, reporting of conclusions and implications. This research examines how studies published in evaluation journals communicate findings of traditional statistical analyses (e.g., ANOVA, chi-square) and the degree to which the reported statistics adequately and accurately support the results and associated conclusions. The study examines how inclusion of other statistics (e.g., effect sizes, confidence intervals) in addition to typical p-values may impact results and conclusions. The findings drawn from this research are anticipated to help bridge the gap between theoretical concepts and applied practices of statistical methods and reporting, thus enhancing the utility and reliability of evaluation studies.
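To make the reporting issue concrete, here is a minimal illustrative sketch (synthetic data, not drawn from the journals reviewed) of reporting an effect size and its confidence interval alongside the usual p-value; the normal-approximation interval for Cohen's d shown here is one common choice among several.

```python
# Illustrative reporting of an effect size with a confidence interval alongside the p-value.
# Data are synthetic; the normal-approximation interval for Cohen's d is one common choice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(0.5, 1.0, 60)
control = rng.normal(0.0, 1.0, 60)

t_stat, p_value = stats.ttest_ind(treatment, control)

n1, n2 = treatment.size, control.size
pooled_var = ((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
d = (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))   # approximate SE of Cohen's d
ci_low, ci_high = d - 1.96 * se_d, d + 1.96 * se_d

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"Cohen's d = {d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```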
Pragmatic and Dialectic Mixed Method Strategies: An Empirical Comparison
Presenter(s):
Anne Betzner,  Professional Data Analysts Inc,  abetzner@pdastats.com
Abstract: This study empirically compares the pragmatic and dialectic mixed method strategies to assist practitioners in designing mixed method studies and to contribute to theory. Two mixed method evaluations were conducted to understand the impact of smoke-free regulations on participants in stop smoking programs. The pragmatic study was conducted to obtain a broader understanding of regulation impact, and included focus groups and a telephone survey. The dialectic study sought to evoke paradox in findings and generate new insights by mixing the telephone survey described above with phenomenological interviews. The methods were integrated at sampling, analysis and interpretation. Substantive findings from the single methods are compared for convergence, divergence and uniqueness, and findings of the two mixed method approaches are compared similarly. Findings are presented with reflections on the implementation process of the two strategies and the costs of the single methods in terms of billable researcher hours and participant time.

Session Title: Extension Education Evaluators Adapt “A Checklist for Building Organizational Evaluation Capacity” to Extension Contexts
Panel Session 606 to be held in Centennial Section G on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Heather Boyd,  Virginia Polytechnic Institute and State University,  hboyd@vt.edu
Discussant(s):
Michael Lambur,  Virginia Polytechnic Institute and State University,  lamburmt@vt.edu
Abstract: Extension education organizations in the past few years have made a commitment to support evaluation capacity building (ECB) for organizational, workforce, and program improvement. These organizations have provided budgets and administrative support to the ECB enterprise, as well as hired full- and part-time evaluators and evaluation capacity builders. The Extension system is a dynamic laboratory for evaluation capacity building for several reasons, including the pressures on it to show public value for the tax monies that support it. Panelists for this presentation take elements of 'A Checklist for Building Organizational Evaluation Capacity' by King and Volkov (2007) and apply and/or adapt the items in the checklist to their Extension-based organizational realities.
Internal Organizational Context and Purposeful Socialization
Mary Arnold,  Oregon State University,  mary.arnold@oregonstate.edu
Nancy Ellen Kiernan,  Pennsylvania State University,  nekiernan@psu.edu
The success of ECB efforts is greatly affected by the culture of the organization. Two important strategies for ECB success are creating a positive, evaluation-friendly organizational context and developing and maintaining purposeful socialization into the organization's evaluation process. Panel presenters will share experiences and strategies for creating a positive ECB organizational environment, including ways to increase positive attitudes toward evaluation while minimizing the negative influences. The presenters will also explore how ongoing and persistent socialization into the evaluation process can help Extension administrators and educators support the organization's evaluation efforts. Such socialization requires exposure to the scientific criteria for evaluation and to the value placed on the need to conduct evaluation; both must become organizational values. Other important aspects of evaluation socialization include helping Extension educators to realize the benefits of evaluation for themselves (promotion and program improvement) and for program stakeholders.
External Environment and Peer Learning Structures
Ellen Taylor-Powell,  University of Wisconsin Extension,  ellen.taylor-powell@ces.uwex.edu
Demand for evaluation often starts with an external accountability mandate. This can actually provide an important launching pad for nurturing the internal demand that sustains evaluation as an organizational function. Our external influences exist at multiple levels, across all sectors: the 1993 GPRA mandate; county-level performance-based budgeting and management; non-profit sector demand for outcome reporting (the influence of United Way, the Kellogg Foundation, and other grant-giving agencies); federal funding requirements that require evaluation; tenure and promotion requirements; and professional expectations to use evidence-based practice. ECB practitioners need to turn these requirements and expectations into opportunities and positive energy, not let accountability/reporting negativity prevail: e.g., use external influences to build knowledge, understanding, and skills; create policies and structures that will sustain evaluation; and engage administration and leadership and build the champion pool. For peer learning structures, we will review items in the ECB checklist and suggest additions and examples relevant to the Extension context.
Expand Access to Evaluation Resources and Secure Support
Mary Marczak,  University of Minnesota,  marcz001@umn.edu
Conducting quality evaluations can be fairly resource-intensive. It takes time, money, human resources, the necessary expertise, a general sense of goodwill from participants, etc. Sometimes, the resources it takes to conduct sound evaluations can be perceived as “taking resources away from direct programming with participants.” Thus, any discussion of resources in terms of ECB and Extension must be twofold. First, we have to be transparent about the resources needed to conduct quality evaluations, as well as how to adequately infuse resources to carry them out. Just as important, however, is an explicit discussion about how sound evaluations and developed evaluation resources and expertise can enhance Extension’s ability to acquire additional resources. This presentation will discuss these issues using the example of one state’s Extension that has succeeded in infusing resources into the system for evaluation, thus increasing its chances of acquiring additional resources both for evaluation and programming.
Reinforce Infrastructure to Support the Evaluation Process and Communication Systems
Nancy Ellen Kiernan,  Pennsylvania State University,  nekiernan@psu.edu
The specific components of the evaluation process and communication systems are disparate, which makes it a challenge to sustain support for all of them. Also disparate are the audiences to receive this support within Extension: local and statewide administrators, faculty, and field educators. Infrastructure created to achieve these objectives with these audiences must create an ongoing, persistent process of 1) reinforcing expectations among Extension administrators, faculty, and educators that evaluation will be done, and done at a certain scientific level, while at the same time 2) producing evaluation models that show how to integrate each of the components in an evaluation. Several types of infrastructure used in one state will be presented. An evaluation of one type of infrastructure will demonstrate the degree to which the disparate components of evaluation can be communicated in Extension and how the three audiences believe they were impacted.

Session Title: The Core Concepts of Applied Cost-Effectiveness and Cost-Benefit Analysis
Skill-Building Workshop 607 to be held in Centennial Section H on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Presenter(s):
Patricia Herman,  University of Arizona,  pherman@email.arizona.edu
Brian Yates,  American University,  brian.yates@mac.com
Abstract: To engage decision-makers who are charged with doing more for clients and taxpayers with dwindling private and public resources, evaluators increasingly need to measure and improve not just effectiveness but also cost-effectiveness. Because cost-effectiveness analysis (CEA) must start from the determination of effectiveness, an efficient approach is for evaluators to add measures of costs to their planned studies, thus allowing CEA (and, if effects are monetizable, cost-benefit analysis, or CBA) to be performed. This workshop gives evaluators both conceptual foundations for the proper application of cost-effectiveness and cost-benefit analysis and concrete tools for cost and benefit assessment. Core concepts taught through hands-on examples include the appropriate counterfactual, the perspective of the analysis and its implications, and the identification and measurement of the appropriate costs, effectiveness, and benefits so that the cost-effectiveness and cost-benefit of alternative programs can be compared and optimized. Specific assessment tools are referenced as well.
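As a purely hypothetical numeric illustration of two core concepts the workshop covers (incremental cost-effectiveness against a comparison program, and net benefit when effects are monetized), consider the sketch below; all costs, effects, and the dollar value per unit of effect are invented for illustration.

```python
# Hypothetical two-program comparison: incremental cost-effectiveness ratio (ICER)
# versus a comparison condition, and net benefit when effects are monetized.
programs = {
    # name: (cost per participant in dollars, effect per participant in outcome units)
    "usual care":  (400.0, 1.2),
    "new program": (950.0, 1.9),
}

(c0, e0) = programs["usual care"]
(c1, e1) = programs["new program"]

# Extra cost per extra unit of effect, relative to the counterfactual (usual care).
icer = (c1 - c0) / (e1 - e0)
print(f"ICER: ${icer:,.0f} per additional unit of effect")

# If a unit of effect can be valued in dollars (hypothetical figure), compare net benefits.
value_per_unit = 1200.0
for name, (cost, effect) in programs.items():
    print(f"{name}: net benefit = ${effect * value_per_unit - cost:,.0f} per participant")
```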

Session Title: Measuring Use and Influence in Large Scale Evaluations
Multipaper Session 608 to be held in Mineral Hall Section A on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Susan Tucker,  E & D Associates LLC,  sutucker@sutucker.cnc.net
Retaining Relevance in Evaluation: Evaluation for Learning or Liability?
Presenter(s):
Laurie Moore,  Mid-continent Research for Education and Learning,  lmoore@mcrel.org
Sheila A Arens,  Mid-Continent Research for Education and Learning,  sarens@mcrel.org
Abstract: Evaluation for accountability has its place in practice. However, educational evaluation for accountability narrows the rich variety of definitions of evaluation utilized by practitioners to one which speaks only to evaluands’ success in meeting evaluation outcomes and their worthiness of continued or expanded funding. In this paper, we argue that this narrowing precludes other important outcomes of evaluation, particularly those related to purposes such as guiding and improving educational decision-making, program planning, and policy-making, or enhancing reasoning abilities. We propose seeking a balance between evaluation for accountability and evaluation for its other important purposes that ultimately leads to the betterment of social conditions – a balance that benefits funding agencies, the organizations they support, and the beneficiaries of work done by these organizations.
Identifying Factors Associated With Local Use of Large-Scale Evaluations: A Case Study
Presenter(s):
Tania Rempert,  University of Illinois Urbana-Champaign,  trempert@uiuc.edu
Abstract: The primary issues examined in this instrumental case study are: (1) How do local school-level programs use large-scale evaluation processes, information, and findings? and (2) What aspects of Evaluation R were particularly useful? Evaluation R is best studied as an instrumental case study because it is unique in that it is a large-scale evaluation that purposefully aimed to impact local program implementation while at the same time aligning its methods, strategies, and tools with the federal requirements for evaluation. Several of the methods, strategies, and tools included within Evaluation R are innovative examples of how large-scale evaluations can be useful at the local level. This case study was designed to allow stakeholders at the federal, state, district, and school levels of Evaluation R to voice their intended use of evaluation as well as to collect the perspectives of local program implementers regarding the usefulness of Evaluation R.
The Development and Validation of Evaluation Use Scales in Large Multi-Site Program Evaluations
Presenter(s):
Kelli Johnson,  University of Minnesota,  johns760@umn.edu
Abstract: Despite widespread agreement in the field regarding the importance of evaluation use and influence, no validated measure has been identified to date. The purpose of this research is to validate a measure of evaluation use and influence in large multi-site program evaluations. The paper describes the development of scales to measure evaluation use from data obtained in an online survey of evaluators and Principal Investigators of four National Science Foundation (NSF) programs. Validity is demonstrated using both theoretical and empirical evidence. This study provides insight in two areas. First, a valid measure of use and influence will benefit the practice of evaluation by identifying factors critical to evaluation use; and second, this study will contribute to research on evaluation use by providing an effective tool for measuring the use and influence of multi-site evaluations.

Session Title: Quasi-Experimental Research Designs
Skill-Building Workshop 609 to be held in Mineral Hall Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Jason Siegel,  Claremont Graduate University,  jason.siegel@cgu.edu
Eusebio Alvaro,  Claremont Graduate University,  eusebio.alvaro@cgu.edu
Abstract: Quasi-experimental designs allow for assessment when an experimental design is not possible. Our 90-minute session will cover 10 different quasi-experimental designs, including Regression-Discontinuity Analyses, Counterbalanced Designs, and Multiple Time-Series Designs. After the quasi-experimental designs are introduced, participants will be given various situations and asked to configure the best possible quasi-experimental design for each.

Session Title: Bringing the Wisdom of Elders to Indigenous Evaluation
Think Tank Session 610 to be held in Mineral Hall Section C on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Presenter(s):
Joan LaFrance,  Mekinak Consulting,  joanlafrance1@msn.com
Rosemary Christensen,  University of Wisconsin Green Bay,  christer@uwgb.edu
Abstract: This Think Tank's goal is to explore and define what it means to fully engage Elders in program evaluation by engaging with Elders at the annual AEA conference in a relaxed discussion format. Three Elders who are recognized cultural experts will join evaluators interested in learning how to incorporate Elder wisdom and knowledge in evaluations conducted in Indigenous communities. Participants will join in a circle to share experiences and explore ideas that can be tested in various evaluation settings. We hope to stimulate increased partnerships with Elders and to more fully learn from their wisdom while engaging them in our work.

Session Title: Measurement Strategies and Evaluation Approaches in Substance Abuse and Mental Health
Multipaper Session 611 to be held in Mineral Hall Section D on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Roger Boothroyd,  University of South Florida,  boothroy@fmhi.usf.edu
A Preliminary Study of Population-Adjusted Effectiveness of Substance Abuse Prevention Programming: Towards Making Institute of Medicine Program Types Comparable
Presenter(s):
Steve Shamblen,  Pacific Institute for Research and Evaluation,  sshamblen@pire.org
James Derzon,  Battelle,  derzonj@battelle.org
Abstract: The present study applies a public health perspective to comparisons of substance abuse prevention programs. This perspective (from the Institute of Medicine, IOM) distinguishes between programs based on who is targeted: the population (universal), those at risk (selective), or persons exhibiting early stages of problem behavior (indicated). Prior comparisons have found that effectiveness and positive cost-benefits accrue to selective and indicated programs, but these studies have failed to make these program types comparable by examining the impact of these programs on the larger population. Such an approximation that makes these programs comparable is offered. A meta-analysis was performed on 43 studies (25 programs) to demonstrate this approach and to examine program comparability. The average adjusted effect sizes for selective and indicated programs were reduced by approximately half. Benefits accrued to universal programs for reducing tobacco and marijuana use (low base rates of frequent use) and to selective/indicated programs for reducing alcohol use.
A Participatory Approach to Developing Quality of Care Indicators for the Evaluation of Children’s Mental Health Services
Presenter(s):
Amy Vargo,  University of South Florida,  avargo@fmhi.usf.edu
Patty Sharrock,  University of South Florida,  psharrock@fmhi.usf.edu
Abstract: The evolution of a methodological approach and the most salient findings are presented from a mixed-method, three-year study funded by Florida’s Agency for Health Care Administration (AHCA). The evaluation focused on the quality of Medicaid-funded mental health services provided to children with or at risk of serious emotional disturbance in Florida. Participants will learn how caregiver and provider input regarding the importance of quality of care indicators and their measurement was meaningfully incorporated into the study methodology. A Quality of Care Framework will be discussed, which includes domains of access to services, appropriateness of services, consumer engagement in services and service planning, and outcomes. This framework emerged from the first year’s study and was utilized to evaluate the quality of care in the two subsequent years. Data collection methods consisted of semi-structured interviews with caregivers and service providers during the first two years and a mail survey during year three.
The Psychometric Properties of the Simple Screening Instrument for Substance Abuse
Presenter(s):
Roger Boothroyd,  University of South Florida,  boothroy@fmhi.usf.edu
Mary Armstrong,  University of South Florida,  armstron@fmhi.usf.edu
Abstract: The Simple Screening Instrument for Substance Abuse (SSI-SA) was developed by a consensus panel (Winters, Zenilman, et al., 1994). It is a self-report screening measure containing 16 yes/no items. Because only 14 items are used when scoring, scores range from 0 to 14. A score of 4 or more has been established as the cut-off point warranting further, more comprehensive assessment. Although several studies have examined the SSI-SA’s use and psychometric properties (e.g., Peters, 2000; Peters, et al., 2004), these studies have primarily focused on correctional populations. This paper will summarize the SSI-SA’s psychometric properties using data obtained from approximately 10,000 Medicaid-enrolled adults who participated in various studies conducted over the past 10 years. The SSI-SA’s factor structure, estimates of internal consistency and test-retest reliability, and evidence assessing the measure’s discriminant and convergent validity will be presented.
Evaluation of Measurement Invariance of the Global Appraisal of Individual Needs Scales among Diverse Groups of Youth with Substance Use Disorders
Presenter(s):
Mesfin S Mulatu,  MayaTech Corporation,  mmulatu@mayatech.com
Kimberly Jeffries Leonard,  MayaTech Corporation,  kjleonard@mayatech.com
Dionne C Godette,  University of Georgia,  dgodette@uga.edu
Darren Fulmore,  MayaTech Corporation,  dfulmore@mayatech.com
Abstract: The Global Appraisal of Individual Needs (GAIN) is a standardized biopsychosocial tool widely used in substance abuse treatment (SAT) settings for diagnostic assessment and outcome evaluation. Although its reliability, factorial structure, and validity have been supported by earlier studies, the degree to which its measurement properties are equivalent across racial/ethnic groups has not been adequately studied. We examined measurement invariance of GAIN’s General Individual Severity Scale among 8499 youth entering SAT (21% Black, 23% Hispanic and 56% White), using multi-group confirmatory factor analyses. Results provided evidence for configural and metric invariance of the scale among the three groups (CFI >.95; RMSEA and SRMR <.06). Evidence for scalar and strict factorial invariance was modest (CFI =.94; RMSEA =.06–.07; SRMR =.08–.09). It is concluded that GAIN’s factor structure and the meaning of factor scores are similar across groups. Efforts at reducing group-specific systematic biases may further improve GAIN’s measurement properties.

Session Title: Cross-Culturally Competent Evaluation - Similar Vision But Different Lenses?
Think Tank Session 612 to be held in Mineral Hall Section E on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Nancy Csuti,  The Colorado Trust,  nancy@coloradotrust.org
Discussant(s):
Kien Lee,  Association for the Study and Development of Community,  kien@capablecommunity.com
LaKeesha Woods,  Association for the Study and Development of Community,  lwoods@capablecommunity.com
Abstract: The report Case Studies of Cross-Culturally Competent Evaluations is designed to present the reader with real-life situations that involve evaluation in a diverse world. It is a series of situations that evaluators, funders, and community members have faced that challenged their notion of cross-culturally competent evaluation. The case studies are presented and responses to questions posed to various individuals follow. The participants in the think tank will receive a draft of this document and hear a short presentation of key themes. Then, breaking up into several small groups, ideally according to profession, each group will discuss the following questions: 1. What is the role of cross-cultural competency in acceptable standards for an evaluation? 2. Who is responsible for assuring evaluations are culturally appropriate? 3. Should funders be expected to pay more for cross-culturally appropriate evaluations? The different insights and opinions held by the different groups will be shared in the reconvening.

Session Title: Incorporating Qualitative Inquiry into Complex, Multi-Site Evaluation
Multipaper Session 613 to be held in Mineral Hall Section F on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Qualitative Methods TIG
Chair(s):
Janet Usinger,  University of Nevada Reno,  usingerj@unr.edu
Discussant(s):
Janet Usinger,  University of Nevada Reno,  usingerj@unr.edu
The Case Study Experience: Sharing Methodological Approaches and Lessons Learned from a Large-Scale, Longitudinal Evaluation
Presenter(s):
Jennifer Boehm,  Centers for Disease Control and Prevention,  jboehm@cdc.gov
Judith Priessle,  University of Georgia,  jude@uga.edu
Amy DeGroff,  Centers for Disease Control and Prevention,  adegroff@cdc.gov
Rebecca Glover-Kudon,  University of Georgia,  rebglover@yahoo.com
Abstract: The Centers for Disease Control and Prevention is funding a three-year, five-site, colorectal cancer screening demonstration program to provide screening to low-income, uninsured populations. As part of a multiple methods evaluation effort, a longitudinal case study including all five sites is underway to assess program implementation and document lessons learned and challenges faced by the demonstration sites. Case study methods include semi-structured interviews, focus groups, document review, and participant observations. Two waves of data collection have been conducted and include over 120 individual interviews. Given the complexity of the case study design, this paper addresses the methodological approach and related challenges. Specifically, the paper addresses the team-based approach, protocol development, data collection, and analysis. Attention is given to the longitudinal nature of the study, data management, underlying tensions between in-case and cross-case analysis, ethical challenges, and the relationship between the case study team and participants.
Children Home Alone: Modeling Parental Decisions and Associated Factors in Botswana, Mexico, and Vietnam
Presenter(s):
Monica Ruiz-Casares,  McGill University,  monica.ruizcasares@mail.mcgill.ca
Abstract: In the absence of information on children home alone in non-industrialized countries, this paper uses descriptive statistics, content analysis, and ethnographic decision modeling to examine different child-care arrangements utilized by families in Botswana, Mexico, and Vietnam. The Global Working Families Project interviewed 537 working caregivers attending government health clinics. Poverty, social integration, cultural norms, and child development frame parents’ decisions about care. In one-half of the families in Botswana, over one-third of the families in Mexico, and one-fifth of the families in Vietnam, children are left home alone on a regular or occasional basis. Moreover, fifty-two percent of families leaving children home alone relied on paid or unpaid child help with childcare. Although rarely a preferred choice, these arrangements were seen by parents as having both benefits and risks. Insufficient societal support for working families results in unsafe childcare arrangements and limited parental involvement in children’s education and health care.
The Legacy for Children™ Study Experience: Promising Approaches in Comprehensive, In-Depth, Mixed-Methods Process Evaluation
Presenter(s):
Jenifer Fraser,  RTI International,  jgf@rti.org
Camille Smith,  RTI International,  cas0@cdc.gov
Ina Wallace,  RTI International,  wallace@rti.org
Angelika Claussen,  Centers for Disease Control and Prevention,  bhv6@cdc.gov
Terri Spinney,  RTI International,  tspinney@rti.org
Andrea Reubens,  RTI International,  areubens@rti.org
Ruth Perou,  Centers for Disease Control and Prevention,  rperou@cdc.gov
Abstract: A comprehensive, mixed-methods process evaluation is being conducted for the Legacy for Children™ study, a CDC-funded multi-site, longitudinal RCT intervention aimed at promoting adaptive parenting among low-income mothers and their children. Participants were randomized to either the intervention (n = 369) or control (n = 246) condition. The Legacy process evaluation focuses on participants in the experimental arm of the study. This paper will provide an overview of the approach and methods employed in the process evaluation, including participant satisfaction surveys, annual focus groups, and case study interviews. The Legacy process evaluation includes an extensive ethnography component examining the delivery of the intervention. We will describe how this large ethnographic data set will be used for fine-grained analysis of factors affecting program implementation and describe our approach for triangulating across the complex array of longitudinal process data. Challenges and lessons learned will be shared, as well as recommendations for the field.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Measuring Capacity Building Through Evaluation: Impacting a Foundation’s Decision-Making
Roundtable Presentation 615 to be held in the Slate Room on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Nakia James,  Momentum Consulting & Evaluation LLC,  momentumevaluations@yahoo.com
Michelle Bakerson,  Momentum Consulting & Evaluation LLC,  momentumevaluations@yahoo.com
Abstract: The W.K. Kellogg Foundation, one of the most recognized and prestigious foundations in the world, continues to empower organizations to successfully realize and achieve their missions. Though each organization offers diverse opportunities to a variety of demographics, each has been supported with both grant funds and professional development resources provided by the Foundation. Subsequently, the W.K. Kellogg Foundation contracted Momentum Evaluations, L.L.P to conduct a formative evaluation of sixteen organizations, with a focus on their Capacity Building efforts. The purpose was 1) to determine the extent to which the identified primary organizations are achieving capacity building, 2) to obtain a clear description of these activities, 3) to document the progress they are making towards their capacity building efforts, and 4) to identify their use of additional sources of support. This evaluation was designed to be a tool for facilitating grantee improvement and future decision-making. Accordingly, Collaborative Evaluation and Decision-and-Accountability approaches were selected.
Roundtable Rotation II: The Role of Foundation Trustees and Emerging Evaluation Practices
Roundtable Presentation 615 to be held in the Slate Room on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Samantha Nobles-Block,  FSG Social Impact Advisors,  samantha.nobles-block@fsg-impact.org
Abstract: What does “evaluation” mean to foundation trustees? How does the trustee perspective shape the way a foundation uses evaluation? This session will explore FSG’s recent dialogue with trustees to understand how they think about evaluation today, uncover what they think is missing from current approaches, and gauge reactions to emerging practices. FSG will share excerpts from materials targeted to foundation trustees that discuss the range of evaluation approaches available in language adapted to their concerns. We hope that these materials also provide insight to evaluators and foundation staff that will enable them to better understand the evaluative needs and expectations of foundation boards. Roundtable participants will be asked to talk about the board-level dynamic they have observed as evaluators or as foundation staff and reflect on the role of trustees in guiding the use of evaluation.

Session Title: Flexibility and Creativity in Evaluation Methods: This Wasn’t in the Textbook!
Multipaper Session 616 to be held in the Agate Room Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Joel Nadler,  Southern Illinois University at Carbondale,  jnadler@siu.edu
A University Public Service Survey: The (de)Evolution of a Project
Presenter(s):
Joel Nadler,  Southern Illinois University at Carbondale,  jnadler@siu.edu
Meghan Lowery,  Southern Illinois University at Carbondale,  meghanlowery@gmail.com
Nicholas G Hoffman,  Southern Illinois University at Carbondale,  nghoff@siu.edu
Gargi Bhattacharya,  Southern Illinois University at Carbondale,  gargi@siu.edu
Abstract: Applied Research Consultants (ARC), a graduate student-run consulting firm, was contacted by the Office of the President at a Midwestern university to conduct a system-wide public service census of all students, faculty, and staff across multiple campuses. Project goals were to collect data from every individual affiliated with the university, to assess personal hours of community service by sub-group, and to assess personal dollar amounts donated by those sub-groups. The objective was to provide an accurate, quantifiable picture of the university’s impact on the surrounding community for the purpose of lobbying. The public service project went from being a small, simple quantitative survey using a census to a complex qualitative survey sent only to upper administration. The focus of the presentation is on the evolution of a project once the initial idea meets with political reality. The danger of politically-driven alterations reducing a project’s viable use will also be addressed.
Digging Up Dirt: Addressing Accuracy and Feasibility Standards and Accountability in Grant Projects Two Decades Past
Presenter(s):
Tanya C Franke,  Oklahoma State University,  tanya.franke@okstate.edu
K Jill Rucker,  Oklahoma State University,  jill.rucker@okstate.edu
Sheyenne Krysher,  Oklahoma State University,  sheyenne.krysher@okstate.edu
Kathleen D Kelsey,  Oklahoma State University,  kathleen.kelsey@okstate.edu
Abstract: This evaluation digs up to 25 years into the past of projects funded by a state agency to unveil project impact on beneficiaries, their families, and service providers and to determine the sustainability of projects post-funding. In order to evaluate the accountability of the organizations that executed the projects, addressing the accuracy and feasibility standards was critical. Adventurous challenges included locating contacts who implemented the projects. Through content analysis and investigation of projects, the graduate student evaluation team and their mentor were successful in identifying and documenting valid contacts familiar with the projects implemented by their organizations. Evaluators used initial phone interviews to establish contacts and followed up with a mailed survey. This paper explores how the graduate student evaluation team and mentor addressed accuracy and feasibility standards in order to save over 50% of the initial cost of the evaluation of grant projects from nearly two decades past.

Session Title: Advancing Evaluation in Organizational Settings
Multipaper Session 617 to be held in the Agate Room Section C on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Business and Industry TIG
Chair(s):
Otto Gustafson,  Western Michigan University,  ottonuke@yahoo.com
Abstract: Evaluation is a regular activity in organizations, yet few managers or business professionals refer to their work as evaluation. The application of serious evaluation in business and industry has traditionally focused on personnel-related aspects of organizations, such as training and human resource development. In this session, the presenters explore applications of evaluation in business beyond these areas. The first paper considers the characteristics of evaluative organizations, those which have integrated both evaluation thinking and practice into their culture. It will also discuss what makes organizations of this type 'something more' than learning organizations. The second paper introduces a criteria-of-merit checklist for evaluating organizational effectiveness. The tool is designed for use by professional evaluators and management practitioners to assess the overall effectiveness of an organization. The final paper examines how process improvements can be evaluated by organizations.
The Evaluative Organization: Something More than Learning
Amy Gullickson,  Western Michigan University,  amy.m.gullickson@wmich.edu
Evaluations performed in an organizational setting tend to focus on specific internal areas such as process improvement, quality control, or employee performance. Some organizations, however, have integrated evaluation into their culture in such a way that assessing merit, worth, and/or significance is an integral part of every employee's daily work. This presentation will outline the characteristics of these 'evaluative' organizations, which are something more than learning organizations. Discussion will include the barriers and enablers to developing an evaluative culture, evaluation anxiety, and the cross-disciplinary nature of this kind of culture.
Evaluating Organizational Effectiveness: A New Perspective
Wes Martz,  Kadant Inc,  wes.martz@gmail.com
The current practices of evaluating organizational effectiveness are lacking on a number of fronts - not the least being the struggle to explain the construct either theoretically or empirically. Numerous suggestions have been made to improve the assessment of organizational effectiveness. However, issues abound related to criterion instability, conflicting criteria, official goals versus operative goals, weighting criteria of merit, suboptimization, boundary specifications, narrowly defined value premises, inclusion of multiple stakeholders, ethical considerations, and the struggle to synthesize evaluative findings into an overall conclusion. This presentation will explain the failure to develop satisfactory approaches to evaluate organizational effectiveness, propose a checklist for practicing evaluators and managers to utilize when evaluating organizational effectiveness, and illustrate the practical application of the organizational effectiveness evaluation checklist.
Evaluating Process Improvement: An Organizational Scorecard Approach
Otto Gustafson,  Western Michigan University,  ottonuke@yahoo.com
Continuous improvement programs are designed to help organizations maximize their effectiveness by engaging employees at all levels to improve their daily processes through waste elimination and innovation. But how can organizations understand whether and to what extent continuous improvement is occurring? One method used to gauge business unit performance in the area of continuous improvement is to evaluate and score process improvements, quantify results and compare against set goals. This paper examines how one Fortune 500 company evaluates and drives process improvements in the context of the nuclear power industry. Inherent programmatic strengths and weaknesses will be discussed. In addition, recommendations to strengthen and expand process evaluation to other organizational contexts will be forwarded.

Session Title: Intermeshing Cogs at Work: Experiences and Lessons Learned From State and Local Educational Program Evaluations
Panel Session 618 to be held in the Granite Room Section A on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Kathleen Toms,  Research Works Inc,  katytoms@researchworks.org
Abstract: This panel consists of members of Research Works, Inc., an independent research and evaluation company which consistently evaluates programs at both the State and local levels. We, as principal investigators, research associates, and research assistants contributing to multiple evaluations at both levels simultaneously, have been struck by the lack of coordination between these two levels of evaluation. This panel will discuss whether these levels should be collaborating with and informing each other. Is the State evaluation merely a meta-evaluation of the local studies? Should local evaluators be collecting the data that State evaluators need even if it means their implementation evaluations cannot be completed? We propose some ways to facilitate a more coordinated approach and will ask the audience for their experiences in navigating this situation from either or both perspectives, and to answer the question: Is there an ideal interaction across system levels of evaluation?
A State Evaluator's Effect on Local Program Evaluations
Elizabeth Whipple,  Research Works Inc,  ewhipple@researchworks.org
This panelist will discuss the issues facing a State evaluation as it attempts to utilize local evaluation efforts. The particular perspective presented here is from the project director of the State evaluation of the 21st Century Community Learning Center Program in New York. What is the best way to collect data relevant to the State evaluation's purpose from local evaluators that does not compromise the efforts of the local evaluator, does not cause time-consuming busywork for the local evaluator, and that could also prove to be useful to the purposes of the local evaluation? We propose that it is imperative to listen to local evaluators but also to hold local evaluators to high standards, thus using the role of the State evaluator to positively influence evaluation capacity building at both state and local levels.
The Tree, the Forest and the New Evaluator: Experiences and Lessons Learned from Working on State and Local Evaluations from the Perspective of a New Evaluator
Josh De La Rosa,  Research Works Inc,  jdelarosa@researchworks.org
The role of a local evaluator differs from yet should complement the duties of a state evaluator, and vice versa. Each level of the program delivery system seems to have different objectives for the intervention of interest, often with multiple sub-levels. However, understanding the nuances in the different stakeholders’ objectives is a challenging task. One junction point is an understanding of the purpose of the evaluation as it relates to the purpose of the intervention as it is defined at each level of the system. This panelist will put forth the experience of a new evaluator/graduate student contributing to local evaluations of five Math Science Partnership Programs while supporting the 21st Century Community Learning Centers statewide evaluation. The presentation will focus on the presenter’s difficulty in switching lenses from project to project. The presentation will also share lessons learned from working at both the micro and macro levels.
The Increased Importance of Local Evaluators on Direct Federally Funded Educational Grants
Mathew Loatman,  Research Works Inc,  mloatman@researchworks.org
This panelist will focus on the relationship between the local evaluator and the federal evaluation of the Carol White Physical Education Program. The panelist will discuss how the relationship was navigated between the client and the Federal Government. The panelist has spent a considerable amount of time coordinating the data collected to satisfy GPRA reporting requirements with that collected for the local evaluation. The aim of coordinating these activities has been to make data collection easier and more meaningful for the client. This has been fairly easy since there is no state-level evaluation activity that would make the overall evaluation more complicated. Also, the evaluator on this project monitored professional development targeted at developing the ability of all teachers to support the achievement of the project results with all students, resulting in changes in teacher practice.
Shelf Art 101: Do Differences in Evaluation Requirements Affect the End Use of the Evaluation?
Carolynn Woiler,  Research Works Inc,  cwoiler@researchworks.org
The Government Performance and Results Act (GPRA) of 1993, although increasing the need for evaluators, changed the dynamic and context of evaluations. As a result, this new evaluator has found that self-mandated evaluations tend to be more collaborative than federally mandated evaluations, which, in the experience of the evaluator, tend to be less utilized by the stakeholders. This panelist will focus on the new evaluator's perspective on a federally mandated evaluation and compare it to a local evaluation where no evaluation was required but one was requested by the grantee. The perspective presented is from the local evaluator of a federally funded Teaching American History grant compared to a locally funded after-school program. Data collected by the local evaluator of the federally funded program is often not used by the grantee to inform midcourse corrections of the program. On the other hand, in the case of the locally funded program, which does not have an evaluation mandate, the evaluation has a much more significant effect on the program.
Different Perspectives of the Mountain: Understanding the Relationships Between State and Local Evaluators From the View of a State and Local Evaluator
Jeff Wasbes,  Research Works Inc,  jwasbes@researchworks.org
This panelist will focus on the new evaluator's perspective on the relationship between two levels of evaluative activities: state and local. The panelist's perspective stems from serving as researcher on a state-level evaluation of the 21st Century Community Learning Center Program for New York, as well as researcher on local evaluations of several Mathematics Science Partnership Grant (Title II Part B) initiatives. It seems logical to assume that data collected at the local level should help to inform information being collected for the State-level evaluation, and vice versa. In the case of the MSP Projects, clear communication channels have not been established that allow for this exchange, with State evaluators presenting their role as technical assistance to the under-qualified local evaluators. Further, collected data has not been systematically calibrated to purposefully inform both levels of evaluation, so we do not know if it would work. For these reasons, understanding the role in each evaluation, as well as switching between them, has proven difficult.

Session Title: Translational Health Research: Implications for Evaluation Theory From a Practice Imperative
Panel Session 619 to be held in the Granite Room Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Health Evaluation TIG
Chair(s):
James Dearing,  Kaiser Permanente,  james.w.dearing@kp.org
Abstract: In theorizing about the evaluation of social and health programs, evaluation theorists have tended to focus on issues of internal validity (effect of program components on outcomes) more so than on external validity (replication of program effects across sites) or program diffusion (broad spread of a program across many practice sites). Current Federal emphasis on late-stage translation of research results to affect behavior in practice settings is an opportunity for evaluation researchers to prioritize the study of program external validity as advocated by evaluation theorists (Cronbach, Cook, and Shadish) and program diffusion as epitomized by Everett Rogers. Here, we focus on challenges of an external validity or diffusion design perspective, key variables of interest to evaluators and their clients, and introduce tools that can be used to formatively assess program potential for external validity and diffusion. We provide examples of translational research from the nation's largest nonprofit integrated healthcare system.
New Foci for Evaluation Theory and Practice: External Validity and Diffusion Variables and Measures
James Dearing,  Kaiser Permanente,  james.w.dearing@kp.org
The translation of evidence-based practices and programs from research-based interventions to implementation as practice improvements in organizations has been long-sought but rarely studied by evaluators. Typically, research interventions are evaluated for internal validity - the extent to which they achieve intended effects - but not external validity or diffusion. And beyond research interventions, what practitioners do often bears little resemblance to what was validated by researchers, even when practitioners believe that they have implemented an evidence-based program. This presentation focuses on variables and measures of key concepts that can be operationalized to represent external validity (including setting heterogeneity, fidelity, adaptation, implementation support, sustainability, and institutionalization) and diffusion (including innovation attributes, social influence, reach, rate, and need).
Using Theory to Guide Evaluation: Evaluating a Practice Based Childhood Obesity Program Through a Randomized Control Trial
Jo Ann Shoup,  Kaiser Permanente,  jo.ann.shoup@kp.org
Paul A Estabrooks,  Virginia Polytechnic Institute and State University,  estabrkp@vt.edu
Given the significantly increasing rates of obesity in children, evaluation of existing practice-based child obesity programs and enhanced program innovations is a priority. We evaluated the relative effectiveness of Family Connections (FC) Workbook, FC-Group, and FC Interactive Voice Response (IVR) counseling interventions to support parents of overweight children in changing the home environment. Parent and child (between 8 and 12 years old) dyads (n = 220) were randomly assigned to one of the three FC interventions. Child BMI and other variables were assessed at baseline, 6, and 12 months post randomization. Children whose parents completed at least 6 of the 10 FC-IVR counseling calls decreased BMI z-scores to a significantly greater extent than children in the FC-Workbook or FC-Group conditions at both 6 (p <.05) and 12 months (p <.01). This trial demonstrated that automated telephone counseling can support parents of overweight children in reducing the extent to which their children are overweight.
Using Theory to Design and Evaluate a Practical and Generalizable Smoking Reduction Study
Bridget Gaglio,  Kaiser Permanente,  bridget.gaglio@kp.org
Russ Glasgow,  Kaiser Permanente,  russ@re-aim.net
Our research group has developed a smoking reduction intervention that was designed to be broadly applicable and is integrated into other smoking modification options in a large managed care organization. The focus of the program was on behavioral approaches to reduce the number of cigarettes smoked, not on the use of alternative tobacco products. A social-ecological theoretical approach, including risk perceptions, self-efficacy, problem solving, and environmental support, was used for intervention development. We used the RE-AIM framework to evaluate this intervention. Results will include the reach of this smoking reduction program, offered in conjunction with other smoking services of a large HMO; the effectiveness of the program (short-term and long-term outcomes) relative to an enhanced usual care condition in a practical randomized controlled trial; and the overall implementation of the intervention components. While results of this study are encouraging, additional research is indicated to evaluate public health impact.

Session Title: Preparing Future Evaluators: Approaches, Theories, and Needs
Multipaper Session 620 to be held in the Granite Room Section C on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Rick Axelson,  University of Iowa,  rick-axelson@uiowa.edu
Evaluation as the Core of Teaching Content Knowledge in a Graduate Course
Presenter(s):
Bob Hughes,  Seattle University,  rhughes@seattleu.edu
Abstract: This paper identifies how a project-based, content-focused graduate course meets the criteria for successful adult learning while also deepening students’ knowledge of and experiences with formal evaluation methods. Evaluation, as a learning activity, offers the kinds of project-based and contextual learning experiences that deeply inform students’ learning on multiple levels. The paper provides a case study in which graduate students enrolled in a course titled “Issues in Adult Basic Skills.” Students learned evaluation techniques which they tested and refined through their evaluation of regional professional development practices, and they learned the topic thoroughly by triangulating the findings of their evaluation with what they read in the literature. The paper describes the process that students undertook, the key elements of their findings, and the replicable components of this project for other college-level instructors to use in using evaluation-based projects as the core activity of a content course.
Relationship among Education of Evaluators, Their Practice, and Personal Practice Theory
Presenter(s):
Mijung Yoon,  University of Illinois,  myoon1@uiuc.edu
Abstract: In this paper, I will discuss the relationship among Evaluation Training, Practice, and Theory, focusing on the role of formal education in evaluators’ development of personal practice theory. First, I will review the literature on the training, practice, theory, and/or their relationship with one another. Second, I will describe the findings from an empirical investigation regarding evaluators' personal theory of their practice and their formal education.
Teaching Needs Assessment in a University-Based Evaluation Preparation Program
Presenter(s):
Dorinda Gallant,  The Ohio State University,  gallant.32@osu.edu
Aryn Karpinski,  The Ohio State University,  karpinski.10@osu.edu
Abstract: Teaching evaluation through either professional development training programs or university-based evaluation programs is crucial to preparing evaluators to accurately assess the merit and worth of programs, personnel, or products. However, information on teaching needs assessment as a part of a professional training program or university-based evaluation preparation program is nonexistent in the literature. This study describes a graduate-level needs assessment course taught at a Midwestern university. Data were collected from records maintained by the instructor of the course and through a course evaluation survey administered to students who completed the needs assessment course in the winter quarter of 2008. Information presented in this article will provide useful and practical information to instructors of evaluation that can assist them in preparing students to actively engage in conducting needs assessments.
Evaluation Training Needs: Graduate Student and Faculty/Staff Perspectives
Presenter(s):
Laurie Van Egeren,  Michigan State University,  vanegere@msu.edu
Nicole Greenway,  Michigan State University,  greenw50@msu.edu
Miles McNall,  Michigan State University,  mcnall@msu.edu
Yan Zheng,  Michigan State University,  zhengya1@msu.edu
Abstract: Stevahn, King, Ghere, and Minnema (2005) identified six competency categories for evaluators’ consideration in professional development. Using these competencies as a framework, a survey was implemented to identify training needs to inform the development of a graduate student evaluation training program at a large land-grant university. Online surveys were completed by 574 graduate students and 177 faculty and academic staff from 17 colleges/offices and 33 departments on campus. Results revealed that a fair number of students were participating in evaluation work, usually within the context of faculty projects. However, both students and faculty reported that training opportunities were not adequately available around certain core competencies, particularly professional practice and project management. Respondents were most likely to report that statistical training was available, but one-third to one-half of respondents also reported a need for training opportunities in statistics. The findings provide direction for key target areas when developing university-based evaluation training programs.

Session Title: Improving Evaluation Policy by Focusing State, County, and Community Social Service Providers on Results-Oriented Services
Panel Session 621 to be held in the Quartz Room Section A on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Gordon Hannah,  Finger Lakes Law and Social Policy Center Inc,  gordonjhannah@gmail.com
Abstract: New York State passed a law in 2007 requiring all services provided by state agencies to include outcome or performance provisions. This multi-paper presentation will describe an intervention designed to help nine county social service departments meet the requirements of this new law. The intervention attempted to achieve this goal by promoting the systematic use of evaluation and continuous quality improvement processes to achieve desired outcomes. Such systematic use was encouraged through changes to policies and practices regarding contracting and monitoring third-party providers. The papers in this presentation will (1) describe the evaluation policies in place at both the state and county level prior to the intervention; (2) describe the goals and design of the intervention, and how it played out; (3) describe the evaluation policies that changed as a result of our intervention; and (4) discuss factors that impacted the effectiveness of the intervention.
Child Welfare Evaluation Policy in New York State: The Interplay Between State, County, and Service Provider Rules, Regulations, and Norms
Larry Pasti,  New York State Office of Children and Family Services,  larry.pasti@ocfs.state.ny.us
Good evaluation policy can improve the quality of program evaluations which subsequently can improve the implementation of programs and lead to better outcomes for consumers. Child welfare services seek better outcomes for children and are influenced by evaluation policies stemming from state, county, agency, and service provider organization rules, regulations, and norms. This presentation will describe the evaluation policies present across nine counties in New York State in regards to child welfare services prior to the implementation of a new law requiring all services to include outcome or performance provisions. The interaction between these various organizations in the creation and implementation of these evaluation policies and the impact of these policies on child welfare services will be discussed. Plans to meet the requirements of the new state law will be briefly described and elaborated on in subsequent presentations.
A Design to Influence Evaluation Policy: Goals, Capacity, and a Support System
Marilyn Ray,  Finger Lakes Law and Social Policy Center Inc,  mlr17@cornell.edu
This paper describes how we planned, designed, and implemented Getting To Outcomes (GTO) to assist nine counties in NYS in developing evaluation policies that include accountability for outcomes, a new state policy. The framework developed to guide the project is a systems model which shows the project goals, initial assessments of county capacities, the project design (including tools, training, TA, and quality improvement/quality assurance), and how this process leads to the outcomes achieved by the counties. Prior to the new state policy, county evaluation policies focused on reporting on units of services. By encouraging a more specific and rigorous planning process, GTO enhances policies to evaluate quality and fidelity of implementation more accurately. The state requires that counties focus on outcomes, and GTO helps formulate new county outcome evaluation policy to measure achievements. Our systems model also includes our use of quality improvement/quality assurance to adjust our process to individual county needs while maintaining fidelity to our systems model.
Changes in Evaluation Policy as a Result of a Getting to Outcomes System Intervention with Departments of Social Services
Gordon Hannah,  Finger Lakes Law and Social Policy Center Inc,  gordonjhannah@gmail.com
The New York State Office of Children and Family Services contracted with PDP and the Finger Lakes Law and Social Policy Center to provide training and technical assistance to nine counties in the use of the Getting to Outcomes Results-Based Accountability System in order to help counties change evaluation policies to meet the requirements of a new state law requiring that all services include performance or outcome provisions. This presentation will discuss the changes that occurred in evaluation policies in these counties as a result of this intervention and the implications of these policy changes for evaluation practice both within county agencies and within the service providers with whom they contract. Such policy changes will include changes to contracts, contract review processes, reporting requirements, and program monitoring processes. The implications of these evaluation policy changes for service quality will also be discussed.
Contextual and Organizational Factors Impacting the Success of an Intervention to Enhance Evaluation Policy within Departments of Social Service
Abraham Wandersman,  University of South Carolina,  wandersman@sc.edu
In this presentation we explore how the nine counties in our intervention responded to the project and explore explanations for the varying success of the intervention across the counties using different theories of systems change. We will address such questions as: How did county size, geography, and demographics affect our outcomes? How did agency leadership, resources, and motivation affect our outcomes? How did initial county capacity for results-based accountability affect outcomes? Did the way each county organized itself for the project and collaborated with its providers affect outcomes? What are the pros and cons of staff involvement at different levels of the agency? What process variables are associated with positive outcomes? What components of the intervention appeared most impactful? What did we learn from this project that can inform other projects focused on changing evaluation policy?

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Developing Advocacy Evidence Systems and More Systematic Approaches for Gathering and Sharing Credible Advocacy Evidence: Lessons Learned from International Non-governmental Organizations (NGOs)
Roundtable Presentation 622 to be held in the Quartz Room Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Carlisle Levine,  CARE,  clevine@care.org
Abstract: Operational international non-governmental organizations (NGOs) offer those advocating on behalf of global development issues a unique value: established country presences that provide access to on-the-ground knowledge. A number of international NGOs, some with support from private foundations, are seeking to take greater advantage of this unique value by strengthening their advocacy evidence systems and developing more systematic approaches for gathering and sharing credible advocacy evidence in order to influence policy makers, often within the U.S. government but also at all policy-making levels. These international NGOs have been laying the groundwork for better systems and more systematic approaches for capturing and sharing basic project and program data, staff learning, and harder evidence in order to define policy problems and identify policy solutions. In this roundtable, a subset of these NGOs will share their experiences to date: their definitions of the problem; their responses; and their challenges, lessons learned, and advances.
Roundtable Rotation II: Out of the Frying Pan and Into the Fire: When Evaluators Enter the World of Policy
Roundtable Presentation 622 to be held in the Quartz Room Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Elizabeth Autio,  Northwest Regional Educational Laboratory,  autioe@nwrel.org
Abstract: As evaluators, we pride ourselves on our unbiased, just-the-facts approach to data collection and reporting. However, this can make us feel disconnected from the real world of social programs; how often have you wondered if your carefully-crafted report is actually read, or is it “another one for the shelf”? Yet, sometimes clients ask us to make recommendations from our evaluation data; other times, we might take on projects that explicitly have a policy component. What happens when evaluators enter the world of policy? The opportunity to do so is exciting in its potential impact, but also differs from our traditional role. What key things are different? Do we have the necessary content expertise? When and how should we exercise caution? This roundtable will start with a brief overview, drawing on examples from two recent projects in the Pacific Northwest. It will then open the floor to discussion.

Session Title: National Evaluations of School Wellness Policy and Programs
Multipaper Session 623 to be held in Room 102 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Laura Leviton,  Robert Wood Johnson Foundation,  llevito@rwjf.org
Discussant(s):
Laura Leviton,  Robert Wood Johnson Foundation,  llevito@rwjf.org
Abstract: Childhood obesity has increased rapidly in the past decade and now constitutes a serious epidemic in the US. Both government and the nonprofit sector have developed concerted efforts to address this problem. The Robert Wood Johnson Foundation has funded evaluations of these efforts. This session will describe three evaluations: the USDA's school wellness policy requirements; the efforts of the Clinton Foundation/American Heart Association to improve the school environment; and Arkansas Act 1220 of 2003, an ambitious and comprehensive effort to prevent childhood obesity through the schools. Discussion will focus on comparing evaluations for government versus nonprofit efforts through the schools.
Evaluating School District Wellness Policies: Methodological Challenges and Results
Jamie Chriqui,  University of Illinois Chicago,  jchriqui@uic.edu
Frank J Chaloupka,  University of Illinois Chicago,  fjc@uic.edu
Anna Sandoval,  University of Illinois Chicago,  asando1@uic.edu
In response to growing concerns about childhood overweight and obesity, Congress passed a law (P.L. 108-265) in 2004 requiring local education agencies participating in the National School Lunch Program to adopt and implement a wellness policy by no later than the first school day following June 30, 2006. The federal mandate included goals related to: (1) nutrition education, (2) physical activity, (3) reimbursable school meals, (4) nutrition guidelines for all competitive foods sold/served, and (5) implementation and evaluation. This presentation will review methodological challenges associated with collecting and evaluating a nationally representative sample (n=579) of wellness policies. Policies have been obtained via Web research with telephone follow-up from 504 districts (87%) and confirmed to not exist in 29 districts (5%). District-level factors (e.g., SES, race/ethnicity) associated with response status and response method (Web vs. telephone) will be described. Strategies for evaluating the variability in wellness policy content will be presented.
Assessing the Impact of the Healthy Schools Program: Preliminary Findings
Dennis Deck,  RMC Research Corporation,  ddeck@rmccorp.com
Audrey Block,  RMC Research Corporation,  ablock@rmccorp.com
The Healthy Schools Program is run by the Clinton Foundation and American Heart Association and funded by the Robert Wood Johnson Foundation. It helps schools improve access to healthier foods and increase physical activity opportunities. Schools receive onsite technical assistance and can access an online tool that helps them identify their status as a healthy school and develop a customized action plan. The goal of the evaluation, being conducted by RMC Research Corporation, is to help the Alliance and its partners understand how to better support schools with the implementation and maintenance of the intended policy and program changes and how changes might affect behaviors related to childhood obesity. This presentation will review baseline data that characterize the current state of schools' policies and action plans concerning nutrition and physical activity; students' current nutrition and physical activity behaviors; and students' Body Mass Indices.
The Impact of Arkansas Act 1220 of 2003: Findings to Date From a Comprehensive Evaluation
Martha Phillips,  University of Arkansas,  martha.phillips@arkansas.gov
James Raczynski,  University of Arkansas for Medical Sciences,  jmr@uams.edu
Arkansas Act 1220 of 2003 was among the first and most comprehensive legislative initiatives designed to address the growing rate of childhood obesity in the state. The Act included limited mandates but established mechanisms at the state and local levels to promote, if not ensure, changes in school environments to support healthy nutrition and physical activity choices by students. A comprehensive evaluation of the impact of the Act, grounded in behavior change theory and overseen by a multi-disciplinary research team, is being completed with funding provided by the Robert Wood Johnson Foundation. This presentation will provide a brief history of the Act, an overview of the evaluation and its conceptual framework, and a review of findings to date, including a discussion of school environmental and policy changes, changes in family and adolescent behaviors, and findings from the monitoring of potential unintended consequences (e.g., unhealthy diet behaviors, weight-based teasing).
Evaluation Challenges in Working with Foundation-Sponsored Grant Programs Versus Federally-Sponsored Grant Programs
Audrey Block,  RMC Research Corporation,  ablock@rmccorp.com
Dennis Deck,  RMC Research Corporation,  ddeck@rmccorp.com
RMC Research Corporation is the evaluator for the Healthy Schools Program, a school-based obesity prevention program that helps schools improve access to healthier foods and increase physical activity opportunities for students and staff. Schools may receive onsite technical assistance from a relationship manager and access the Healthy School Builder, an online tool that helps them identify their status as a healthy school and develop a customized action plan. The Robert Wood Johnson Foundation is the primary sponsor of the program and the evaluation. This presentation will discuss the evaluation challenges in working with schools to collect data without the normal compliance or accountability criteria that are present in federally-sponsored grant programs. These include lack of meaningful funding available to schools, low program buy-in (possibly related to lack of funding), confusion between the Healthy Schools program and similar and competing initiatives, and lack of understanding about what program participation entails.

Session Title: The Randomized Controlled Trial in Your Evaluation Toolkit: A Candid Discussion
Panel Session 624 to be held in Room 104 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Jennifer Hamilton,  Westat,  jenniferhamilton@westat.com
Abstract: The randomized controlled trial (RCT) is widely considered the optimal study design for minimizing bias and providing the most accurate estimate of the impact of a given intervention or program. However, the design and implementation of an RCT present a unique set of challenges. In fact, without proper attention, a researcher may unintentionally limit the study's internal validity (the extent to which differences between the treatment and control groups are real rather than a product of bias) or its external validity (generalizability to a wider population). Therefore, this panel is intended to raise awareness of these issues and to provide a frank discussion of possible solutions.
The Pros and Cons of a Cluster Randomized Trial
Jennifer Hamilton,  Westat,  jenniferhamilton@westat.com
This presentation will use an evaluation of Newark, New Jersey's Striving Readers program, which assesses the impact of Scholastic's Read 180 curriculum, to illustrate the trade-offs between randomizing at the group level and randomizing at the level of the individual. We will share our research design along with the benefits of group-level randomization, such as lower cost, avoidance of many control group contamination issues, and ease of implementation and subgroup analysis. However, in a cluster randomized trial the number of randomized entities is much smaller than when individuals are randomized, which reduces the power to detect treatment effects. The presentation will therefore discuss several options for increasing power in a cluster randomized trial, such as increasing the number of individuals within groups and the number of groups, adding covariates to the model, and stratifying groups prior to randomization; the sketch below illustrates the first of these trade-offs.
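To make that power trade-off concrete, the following minimal sketch (not drawn from the presentation; the intraclass correlation and sample sizes are hypothetical) uses the standard design-effect formula to compare adding students per school with adding schools.

# Illustrative sketch: why adding clusters helps more than adding individuals
# in a cluster randomized trial. Uses the standard design effect
# DEFF = 1 + (m - 1) * ICC, where m = individuals per cluster and
# ICC = intraclass correlation (hypothetical value below).

def effective_sample_size(n_clusters: int, cluster_size: int, icc: float) -> float:
    """Total sample size divided by the design effect."""
    deff = 1.0 + (cluster_size - 1) * icc
    return n_clusters * cluster_size / deff

icc = 0.15  # hypothetical ICC for a school-based reading outcome

base = effective_sample_size(n_clusters=20, cluster_size=50, icc=icc)
more_students = effective_sample_size(n_clusters=20, cluster_size=100, icc=icc)
more_schools = effective_sample_size(n_clusters=40, cluster_size=50, icc=icc)

print(f"base: {base:.0f}, 2x students: {more_students:.0f}, 2x schools: {more_schools:.0f}")
# With a nonzero ICC, doubling the number of clusters roughly doubles the
# effective sample size, while doubling students per cluster yields a much
# smaller gain - which is why adding covariates or stratifying before
# randomization are attractive alternatives when more clusters are unavailable.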
Student Attrition Over Time in a National Evaluation of Head Start Services
Janet Friedman,  Westat,  janetfriedman@westat.com
This presentation will use the first comprehensive impact study of Head Start to illustrate how to reduce another threat to the internal validity of an RCT: participant attrition over time. The National Head Start Impact Study (HSIS) quantifies the impact of Head Start on 3- and 4-year-old children across child cognitive, social-emotional, and health domains, as well as on parenting outcomes. Children were randomly assigned either to a Head Start group that had access to Head Start programs or to a non-Head Start group that could enroll in available community non-Head Start programs selected by their parents. Presently, the children are being followed through third grade. This paper presents successful approaches to implementing a nationally representative RCT and provides practical advice for tracking participants over time so as to minimize attrition, as well as strategies for minimizing control group contamination.
Assuring Fidelity of the Treatment and the Generalizability of Findings in a Study of Mental Health Treatment
Susan Azrin,  Westat,  susanazrin@westat.com
The Mental Health Treatment Study, sponsored by the Social Security Administration, is the largest study to evaluate federal policy around adults with psychiatric disabilities trying to return to work. This evaluation will be used to illustrate issues in measuring the implementation of the treatment as well as the generalizability of findings. In this RCT intent-to-treat design, 2,000 unemployed adults receiving disability benefits for a psychiatric disability were randomized to the treatment condition: an intervention including evidence-based supported employment and mental health services, medication management, and supplemental health insurance. The presentation will focus on how this study incorporates unique quality management features at multiple levels to achieve and maintain high implementation fidelity. Fidelity here refers to the degree to which the intervention implementation is faithful to the practice model that research has shown to be effective. Achieving both high fidelity and generalizability has proven challenging.
Confronting Threats to Validity in a Study of Alcohol Risk Prevention in a National Fraternity
Scott Crosse,  Westat,  scottcrosse@westat.com
This randomized trial of alcohol risk reduction interventions will be used to describe efforts to compensate for cluster-level attrition early in the study. Ninety-eight chapters of a national college fraternity were randomly assigned to receive a standard-practices intervention (a waiting-list control condition), a standard intervention (3-hour alcohol risk reduction training), or an enhanced intervention (the baseline training plus two 90-minute risk reduction booster sessions). During the several months between randomly assigning chapters to conditions and the start of baseline data collection, the study team learned that five chapters were closing or disaffiliating from the national fraternity. Although the attrition occurred before chapters were notified about the study, it still raised concerns about internal validity. The proposed paper will discuss options for dealing with this scenario, the option chosen by the study team, and its impact on group equivalence at baseline.

Session Title: Systems Thinking for Curriculum Evaluation
Skill-Building Workshop 625 to be held in Room 106 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Systems in Evaluation TIG and the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Glenda Shoop,  Pennsylvania State University,  gshoop@psu.edu
Janice Noga,  Pathfinder Evaluation and Consulting,  jan.noga@stanfordalumni.org
Margaret Hargreaves,  Abt Associates Inc,  meg_hargreaves@abtassoc.com
Abstract: An educational program, and the curriculum at its center, is not self-contained. In actuality, educational programs are integrated socio-technical systems that interact with the larger social, political, and organizational environment. If the function of curriculum evaluation is to support decisions about improvement as well as effectiveness, systems thinking can provide the broader perspective needed to understand the quality of what is going on. In this workshop, participants will learn how to apply systems analysis principles to the evaluation of educational curricula. The workshop will present two models for systems thinking currently used by the presenters to evaluate educational programs. Through a series of hands-on exercises, participants will be encouraged to draw on their own experience in curriculum evaluation as they are guided through the processes of design, data collection, and analysis that underlie each model.

Session Title: Social Impact of the Arts
Multipaper Session 626 to be held in Room 108 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Ching Ching Yap,  University of South Carolina,  ccyap@gwm.sc.edu
Discussant(s):
Kathlyn Steedly,  Academy for Educational Development,  ksteedly@gmail.com
Communicating Through the Arts: An Evaluative Journey of Self-Discovery And Career Development Among Young Artists With Disabilities
Presenter(s):
Heike Boeltzig,  University of Massachusetts Boston,  heike.boeltzig@umb.edu
Rooshey Hasnain,  University of Illinois Chicago,  roosheyh@uic.edu
Jennifer Sullivan Sulewski,  University of Massachusetts Boston,  jennifer.sulewski@umb.edu
Abstract: Sharing a voice through the arts is a critical mechanism for self-discovery and expression. This paper presentation will provide an evaluative summary of how an arts program impacted the lives of 47 young award finalists with disabilities who entered career pathways or higher educational opportunities in the arts and who consider their art to be “work in progress.” The program evaluation employed three data collection methods: a review of relevant documents, a survey of young artists, and in-depth case studies of five of the 47 artists. Textual and visual data will be used to illustrate how meaningful the award application process was for the finalists' self-discovery as emerging artists. The evaluators will also reflect on their experiences in using a multi-method design to evaluate this program, including the challenge of surveying a group of talented artists with a wide variety of disabilities.
Beyond Maslow: A Theory of the Social Impact of the Arts
Presenter(s):
Annabel Jackson,  Annabel Jackson Associates,  ajataja@aol.com
Abstract: The arts have accumulated a vast amount of qualitative and case-study-based evidence of social impact, often rather disparagingly described as ‘anecdotal’. One frequent criticism of this evidence base is that it lacks theoretical underpinnings. Following a literature review of some 300 publications on the social impact of the arts, the author suggests a tentative theory of the social impact of the arts that combines Scientific Realism with Basic Psychological Needs Theory. Scientific Realism adds four welcome layers of complexity to the evaluation of the arts: understanding of program elements, contexts, mechanisms, and motivations. Basic Psychological Needs Theory moves away from the Maslovian values that have inadvertently stranded the arts at the top of the hierarchy of human needs, where they are easily dismissed as a luxury.
Formative and Summative Evaluation of Initiatives to Foster Character Development and Student Learning
Presenter(s):
Melinda Mollette,  Pioneer Regional Educational Service Agency,  melindamollette@yahoo.com
Richard Benjamin,  Pioneer Regional Educational Service Agency,  rbenjamin@pioneerresa.org
Abstract: This paper provides interim results from the first two years of a four-year grant project called "School Transformation: Character through the Arts," being implemented at three schools in Georgia. The Bernstein "Artful Learning" Model, along with other research-based instructional strategies, is used to improve student achievement and problem-solving skills. The project is funded by the U.S. Department of Education as part of the Arts in Education Model Development and Dissemination program. Summative evaluation components include the Georgia Criterion-Referenced Competency Test (CRCT), the Georgia Writing Assessment, student/parent surveys measuring school climate as it pertains to character as well as student behavior and engagement, and a staff survey measuring adherence to the Character Education Partnership's "11 Principles of Character" quality standards. Formative evaluation components include teacher observation instruments, rubrics to assess student work and the development of arts-based instructional units, performance assessments, and teacher portfolios, all of which foster a climate of continuous improvement within the project schools.

Session Title: Case Studies in Evaluation: United States Federal Agencies - Part 2
Multipaper Session 627 to be held in Room 110 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Samuel Held,  Oak Ridge Institute for Science and Education,  sam.held@orau.org
Discussant(s):
Susan Berkowitz,  Westat,  susanberkowitz@westat.com
Implementing Mandated Evaluation Research: Case Studies of Federally-Mandated Evaluation Projects
Presenter(s):
David Laverny-Rafter,  Minnesota State University at Mankato,  lavernyrafter@earthlink.net
Abstract: The importance of evaluation in analyzing the impacts of light rail transit (LRT) systems has been reinforced by the U.S. Federal Transit Administration's (FTA) recent issuance of its Final Rule on Major Capital Investment Projects (2006). This rule requires that project sponsors who obtain Full Funding Grant Agreements for “New Starts” projects (e.g., LRT) submit a complete plan for collecting and analyzing information to identify the impacts of their projects and the accuracy of their forecasts. The FTA has provided a template for implementing these “before and after” evaluation studies, which calls upon local sponsors to assemble information in the following areas: transit service levels; capital, operation, and maintenance costs; ridership patterns generated during planning and project development; and ridership patterns prior to, and shortly after, implementation and operation of the project. This paper will present case studies of three FTA-mandated before-and-after evaluation studies conducted by local transit authorities (Minneapolis-St. Paul, MN; Portland, OR; and San Diego, CA) and will compare and contrast their purpose, research design, data-gathering methodology, and utilization of findings. The conclusion will identify lessons learned from these cases in relation to evaluation theory and the practice of mandated evaluation studies.
Enhancing Peer Review at the National Institutes of Health
Presenter(s):
Andrea Kopstein,  National Institutes of Health,  kopsteina@csr.nih.gov
Abstract: The Center for Scientific Review (CSR) at the National Institutes of Health (NIH) receives nearly 80,000 research applications a year and recruits over 18,000 external experts to review its portion in study sections. For nearly 60 years, the peer review system has enabled NIH to fund cutting-edge research. The expanding breadth, complexity, and interdisciplinary nature of modern research, as well as increases in the number of new research applications, create challenges for the NIH system used to support biomedical and behavioral research. In 2007 and 2008, NIH has been engaged in a peer review self-study to identify the most significant challenges to this system and to propose solutions to enhance peer review in the most transformative manner. Each recommendation implemented is evaluated to ensure NIH maintains the core values of peer review: scientific competence, fairness, timeliness, and integrity. This paper will present some of the evaluations related to implemented peer review enhancements.

Session Title: Peer Review: From Evaluating Science to Evaluating Science Policy
Panel Session 628 to be held in Room 112 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Isabelle Collins,  Technopolis Ltd,  isabelle.collins@technopolis-group.com
Abstract: Peer review is one of the main elements of the evaluator's toolkit when looking at the evaluation of science and technology. However, as the focus of evaluation has shifted to evaluating the policies and programs behind the research, the use of peer review has evolved with it. New forms and new uses are emerging, some of which stretch the principles beyond their original intentions, and take the ideas into areas beyond the field of RTD. This panel looks at some of these developments and their implications in the field of science, science policy and the wider policy arena.
Papers, Projects, Programs and Portfolios: Peer Review as a Public Health Research Evaluation Tool
Robin Wagner,  Centers for Disease Control and Prevention,  riw8@cdc.gov
Trevor Woollery,  Centers for Disease Control and Prevention,  twoollery@cdc.gov
Robert Spengler,  Centers for Disease Control and Prevention,  rspengler@cdc.gov
Jerald O'Hara,  Centers for Disease Control and Prevention,  johara@cdc.gov
Juliana Cyril,  Centers for Disease Control and Prevention,  jcyril@cdc.gov
John Arujo,  Centers for Disease Control and Prevention,  jarujo@cdc.gov
The independent peer review process for scholarly publications is well known, but this method is also being used as an evaluation tool for research grants and projects at multiple levels, from individual projects to programs to large portfolios composed of many programs. Peer review can be conducted for a variety of purposes, including assessment of scientific merit, mission relevance, and potential or actual health impact. The format of peer review is tied to its intended purpose. Reviews may be prospective or retrospective, and may include stakeholders or members of the public. The strengths and weaknesses of each peer review approach will be compared, with an emphasis on identifying the types of reviews most suitable for evaluating research outputs, outcomes, and impacts. Finally, since peer review rests on qualitative assessment, alternative approaches to research evaluation based on less subjective methods will be discussed.
Peer Review and the Open Method of Co-Ordination: Reviewing National Research and Development Policy Mixes
Patries Boekholt,  Technopolis Ltd,  patries.boekholt@technopolis-group.com
The Open Method of Coordination (OMC) is a voluntary process of mutual learning among European Member States. It was first introduced at the Lisbon Council (2000) as a means of spreading good practice and helping Member States develop their own policies. The EU Member States, represented by CREST (Committee for Scientific and Technical Research), decided to enhance mutual learning by using this OMC process. In 2006 CREST launched reviews of national R&D Policy Mixes: the full portfolio of policy instruments and strategies used in a particular country. The country reviews were conducted as peer reviews, the peers being fellow R&D policy makers from other EU countries. The presentation will discuss this peer review process as applied in nine European countries over a period of two years. The pros and cons of this review method, the roles of the peers and of those who volunteered to be reviewed, and the impacts will be addressed.
Peer Review as a Policy Learning Tool
Isabelle Collins,  Technopolis Ltd,  isabelle.collins@technopolis-group.com
Rebecca Allinson,  Technopolis Ltd,  rebecca.allinson@technopolis-group.com
Erik Arnold,  Technopolis Ltd,  erik.arnold@technopolis-group.com
Barbara Good,  Technopolis Ltd,  barbara.good@technopolis-group.com
Increasing use is being made of peer review in policy fields related to the science system: in research policy, innovation policy, and higher education policy - not simply as a quality measure but as a mechanism for mutual learning. Examples include the OECD's reviews of national innovation systems, the so-called Policy Mix Peer Reviews organized by the European Commission in the context of the 'Open Method of Coordination' of research and innovation policy, and peer reviews of higher education policies and practices. This paper discusses the strengths and weaknesses of the approach as a policy learning tool and examines the extent to which peer review is transferable to more general policy contexts unrelated to the science system. Specific questions include the place of such expert opinion in broader stakeholder analysis, tensions between expert opinion and policy objectives, and tensions between independence and engagement.

Session Title: How to Create Objective Evaluation Criteria for Complex Processes and Outcomes
Demonstration Session 629 to be held in Room 103 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Knut M Wittkowski,  Rockefeller University,  kmw@rockefeller.edu
Tingting Song,  Rockefeller University,  tsong01@rockefeller.edu
Abstract: This demonstration extends commonly used ranking and scoring instruments for univariate data (Mann-Whitney 1947) and censored data (Gehan 1965) to multivariate data, with applications in the evaluation of complex processes and outcomes. To reach a broad audience, many examples evaluate athletes and sports teams; other examples address medical problems such as adverse experiences, side effects, and quality of life. The demonstration consists of three parts. The first part discusses the history of u-scores (Arbuthnot 1692) and extends u-scores to multivariate data using a simple representation of partial orderings (Deuchler 1914). The second part demonstrates how information about relationships between variables can be incorporated through (a) transforming data, (b) special partial orderings, and (c) combining partial orderings. The third part discusses computational and statistical aspects of non-parametric 'factor analysis'. Demonstrations will include spreadsheets (available from http://muStat.rockefeller.edu), the package 'muStat' (available from http://cran.r-project.org and http://csan.insightful.com/), and Web services available from http://muStat.rockefeller.edu.
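As background for the partial-ordering idea, the following minimal sketch (not the muStat package itself; the data and the componentwise ordering rule are illustrative assumptions) computes u-scores for multivariate observations by counting, for each observation, how many others it dominates minus how many dominate it.

# Illustrative sketch: multivariate u-scores under a componentwise partial
# ordering. An observation "dominates" another if it is at least as large on
# every variable and strictly larger on at least one; ties and incomparable
# pairs contribute zero to the score.
import numpy as np

def u_scores(X: np.ndarray) -> np.ndarray:
    """X: (n observations) x (p variables). Returns a vector of u-scores."""
    n = X.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(X[i] >= X[j]) and np.any(X[i] > X[j]):
                scores[i] += 1          # i dominates j
            elif np.all(X[j] >= X[i]) and np.any(X[j] > X[i]):
                scores[i] -= 1          # j dominates i
    return scores

# Hypothetical example: three quality-of-life variables for five subjects.
X = np.array([[3, 2, 5],
              [1, 1, 2],
              [4, 4, 6],
              [2, 3, 1],
              [3, 2, 5]])
print(u_scores(X))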

Session Title: Evaluation Dashboards: Practical Solutions for Reporting Results
Demonstration Session 630 to be held in Room 105 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Veena Pankaj,  Innovation Network Inc,  vpankaj@innonet.org
Ehren Reed,  Innovation Network Inc,  ereed@innonet.org
Abstract: A driver would be at a loss without the valuable information imparted by the car's dashboard. Many nonprofit managers, lacking easy access to information about their organization's performance, find themselves feeling like dashboard-less drivers. Performance dashboards, popularized by knowledge managers and CIOs, are a natural ally for evaluators and provide a quick and efficient way for managers to gauge the performance of a specific program or an entire organization. This session will walk through the process of planning for and developing a performance dashboard and will spotlight two different dashboards created for nonprofit organizations.

Session Title: Policy and Practice Issues for Evaluators, Project Directors and the Community: Lessons Learned From the Intersection of Local and National Multi-site Evaluations
Panel Session 631 to be held in Room 107 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Sandra Ortega,  National Data Evaluation Center,  ortegas@ndec.us
Abstract: The panel focuses on how lessons learned from multi-level evaluations can impact policy and practice. Panel members have worked on numerous national projects as local evaluators, national evaluators, or local project directors. The panel will identify the main challenges that evaluators face during multi-level projects and propose solutions for overcoming them. They will also review strategies that have not worked for them in the past and examine why they believe these were unsuccessful. The panel will discuss the unique challenges presented by multi-level and multi-site evaluation projects, the importance of collaborative work among the multiple levels of stakeholders, how to make national data useful for local communities, how local evaluators can build effective relationships with local community members, how national stakeholders can facilitate the work of local evaluators, and whether some evaluation practice models are more fitting than others for national demonstration projects.
The Relationship Between Project Directors and Evaluators: Evidence from a National Demonstration Project
Rusti Berent,  University of Rochester,  rberent@childrensinstitute.net
Bill Goddard,  Beta Social Research,  wegoddard2000@yahoo.com
Antoine Beauchemin,  Kent State University,  abeauche@kent.edu
This paper presents the results of a study about the relationship between project directors and evaluators with the ultimate aim of improving collaboration and communication between the two. Data from evaluators and project director colleagues engaged in a large, national demonstration project were collected using an on-line survey adapted from The Readiness for Organizational Learning and Evaluation Instrument (Preskill and Torres 2000) and a follow-up telephone interview. The results highlight the tension that exists between project staff and evaluators. For example, evaluators were more likely than project directors to be optimistic about evaluation results having a positive impact on the program. Project directors were more likely than evaluators to minimize the influence of the evaluation on their own work. The panel will discuss these tensions and offer lessons learned from the respondents on maximizing collaboration between evaluation teams and stakeholders.
Policy and Practice Issues of Multi-site Evaluations: Lessons Learned From a Local Evaluator's Perspective
Joy Kaufman,  Yale University,  joy.kaufman@yale.edu
Cindy Crusto,  Yale University,  cindy.crusto@yale.edu
Dr. Kaufman will describe her experience working as a local evaluator within a national demonstration site and her experience as lead evaluator on multi-site evaluations. Her comments will focus on the structures, processes, and technical assistance that are needed from the lead evaluation team and on how local site evaluators can use the unique strengths of their site to gain buy-in for the evaluation. Dr. Crusto will discuss the role of the local evaluation team in carrying out the evaluation plans of national initiatives at the local level. The local evaluator is charged with managing and balancing the evaluation requirements of national initiatives against the capacity of the local community to implement those requirements. The local evaluator is also in a unique position to facilitate reciprocal knowledge transfer between local communities and federal initiatives and to build evaluation capacity at the local and national levels.
Policy and Practice Issues of Multi-Site Evaluations: Lessons Learned From a Local Project Director's Perspective
Judith Simpson,  Techno-Communications Corporation,  tccorp@tampabay.rr.com
Ms. Simpson will describe her experiences and the lessons learned from working on multi-site projects from a local project director's perspective. Her insights will ground recommendations for aligning evaluation goals with local program goals and requirements. She will provide participants with a candid discussion of methodological concerns for cross-site designs that influence learning communities, as well as strategies for gaining community approval.
Policy and Practice Issues of Multi-site Evaluations: Lessons Learned From a National Evaluator's Perspective
Brigitte Manteuffel,  Macro International Inc,  brigitte.a.manteuffel@macrointernational.com
Brigitte Manteuffel will contribute her experience implementing the national evaluation protocol for the Comprehensive Community Mental Health Services for Children and Their Families program in over 100 communities across all 50 states, Guam, Puerto Rico, and tribal entities. Implementing this national evaluation requires partnership and collaboration with communities. How concepts of participatory and empowerment evaluation can be applied to a national evaluation will be addressed.

Session Title: Training-the-Trainer: Building Evaluation Capacity at the United States Environmental Protection Agency
Demonstration Session 632 to be held in Room 109 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Yvonne Watson,  United States Environmental Protection Agency,  watson.yvonne@epa.gov
Terell Lesane,  United States Environmental Protection Agency,  lasane.terell@epa.gov
Abstract: In response to increasing demands for government accountability and the need to promote program improvement and organizational learning, the U.S. Environmental Protection Agency (EPA)'s Evaluation Support Division designed a series of Train-the-Trainer courses - Logic Modeling, Performance Measurement, Program Evaluation, and a Performance Management Primer for Managers - that center on equipping Agency staff to deliver training and technical assistance to others. Course materials include training slides, exercises, case studies, and a 'script' complete with key talking points to aid the trainer with course delivery. This demonstration will walk conference participants through the Train-the-Trainer materials, highlighting aspects of the training that were successful or unsuccessful in EPA's organizational context. We will discuss how these and other training efforts have influenced the Agency's evaluation culture, helped develop a common program evaluation language, and shaped perceptions regarding evaluation.

Session Title: From Planning to Use: Methodological Considerations in Evaluating School-Based Programs
Multipaper Session 633 to be held in Room 111 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Loria Brown,  Jackson State University,  loria.c.brown@jsums.edu
Fidelity of the Test Development Process Within a Program Evaluation
Presenter(s):
Teresa Brumfield,  University of North Carolina at Greensboro,  tebrumfi@uncg.edu
Abstract: This presentation aims to inform evaluators of pitfalls and problems that may be encountered when tests are created within a program evaluation. A case study was used to examine the test development process (planned versus actual) as it took place within a national math-science program evaluation. Qualitative data sources included contractual documents, personal communications, and interviews with project personnel. Pattern matching was used to investigate the factors that affected the test development process within the project evaluation. Findings from this study confirmed that constructing psychometrically sound tests within an evaluation is neither routine nor unproblematic and that sufficient time and resources to construct such measures properly are seldom provided. Based on the results, it is recommended that stakeholders (i.e., project directors and evaluators) be familiar with the steps and standards used to develop psychometrically sound tests and that all stakeholders be identified and included in the project evaluation process.
Evaluating the Effectiveness of Computer Aided Instruction of English Language Learners Using an Experimental Design
Presenter(s):
Joyce Serido,  University of Arizona,  jserido@email.arizona.edu
Mari Wilhelm,  University of Arizona,  wilhelmm@ag.arizona.edu
Abstract: No Child Left Behind (NCLB, 2002) emphasizes high-stakes testing and adequate yearly progress toward specified reading achievement for all learners. Yet a burgeoning English Language Learner (ELL) student population is disproportionately “left behind” (Menken, 2007). There is increased urgency to identify and apply effective intervention strategies to address the reading achievement of this population; however, the empirical base of well-controlled intervention studies with ELL student populations is limited (Vaughn et al., 2006). Research on non-ELL learners finds that structured interventions based on both direct instruction and mastery learning are an effective approach. In this session, we will present findings from an experimentally designed intervention study of the reading progress of ELL students in Grades 1 – 5.
The Importance of Multi-dimensional Baseline Measurements to Assessment of Integrated Character Education Models
Presenter(s):
Michael Corrigan,  Marshall University,  corrigan@marshall.edu
Doug Grove,  Vanguard University,  doug@ciassociates.net
Philip Vincent,  Character Development Group Inc,  pfvccvmkv@aol.com
Paul Chapman,  West Virginia University,  paul.chapman@mail.wvu.edu
Richard Walls,  West Virginia University,  richard.walls@mail.wvu.edu
Abstract: Currently, the U.S. Department of Education’s Office of Safe and Drug-Free Schools Partnerships in Character Education Program funds approximately fifty experimental (or quasi-experimental) efforts investigating the effects of character education on academic achievement and other education-related variables. This study highlights the importance of multi-dimensional baseline measurements in the assessment of one such grant, awarded to an Appalachian-region state education agency to study the effect of integrating character education models into rural schools. Participants were recruited from eight rural schools selected through a matched random sampling technique; four were randomly assigned to be control schools, and four were randomly assigned to develop and implement an intervention process model rich in character education. The student participants recruited at the middle/high school level consist of 151 males (42%) and 199 females (55%) for a combined N=366; the participants recruited at the elementary level consist of 61 males (52%) and 56 females (48%) for a combined N=124. This study investigates how the character education process was defined using a multi-dimensional approach. Baseline MANOVAs identified significant differences between the control and experimental schools. Post hoc analyses suggest that as students' self-reported levels of character, educational attitudes, and views of school climate increase, academic achievement is theoretically expected to increase as well.
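For readers unfamiliar with the technique, the following minimal sketch (hypothetical data and variable names, not the study's actual measures) shows how a baseline MANOVA can compare control and intervention schools on several self-report outcomes jointly, using statsmodels in Python.

# Illustrative sketch: baseline MANOVA on simulated, hypothetical data.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "condition": rng.choice(["control", "intervention"], size=n),
    "character": rng.normal(3.5, 0.6, size=n),   # self-reported character
    "attitudes": rng.normal(3.2, 0.7, size=n),   # educational attitudes
    "climate":   rng.normal(3.0, 0.8, size=n),   # perceived school climate
})

# Wilks' lambda and related statistics test whether the group centroids
# differ across all outcomes jointly at baseline.
fit = MANOVA.from_formula("character + attitudes + climate ~ condition", data=df)
print(fit.mv_test())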
Designing an Evaluation Plan for a One-to-One Laptop Initiative
Presenter(s):
Lori Holcomb,  North Carolina State University,  lori_holcomb@ncsu.edu
Jenifer Corn,  North Carolina State University,  jocorn@ncsu.edu
Jason Osborne,  North Carolina State University,  jason_osborne@ncsu.edu
Elizabeth Halstead,  North Carolina State University,  elizabeth_halstead@ncsu.edu
Sherry Booth,  North Carolina State University,  sebooth@ncsu.edu
Abstract: This paper highlights the evaluation framework and policies for evaluating a one-to-one laptop program across the state of North Carolina. This three-year evaluation project examines the overall process of implementing and utilizing a one-to-one laptop model at the high school level. The paper will provide an overview and discussion of the evaluation policies, procedures, and methodologies used to evaluate the implementation of the program. Specifically, the evaluation plan for the one-to-one initiative will be examined and discussed in detail to provide insight and direction for developing and implementing such an evaluation plan.

Session Title: Models and Frameworks of Evaluation and Meta-Evaluation
Multipaper Session 634 to be held in Room 113 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Rebecca Eddy,  Claremont Graduate University,  rebecca.eddy@cgu.edu
An Emergent Theory of Systems Change and Documenting that Systems Change with the Three I Model
Presenter(s):
Dianna Newman,  University at Albany - State University of New York,  dnewman@uamail.albany.edu
Anna Lobosco,  New York State Developmental Disabilities Planning Council,  alobosco@ddpc.state.ny.us
Abstract: A model for evaluating systems change has emerged that includes the Initiation, Implementation, and Impact of systems change efforts. The Three I Model was developed for use in a cross-site evaluation in the addictions field; it has had more than seven years of use in that original setting and has been replicated in human services, education, and health programs. Currently, there are more than 30 examples of individual program use and five meta-evaluations available for analysis. This work has led to the identification of common patterns and variables that are indicators of successful change and that support a replicable model for documenting change to program and organizational systems. The purpose of this paper is to present the model as it has emerged, to discuss the major areas in which change should be present, to summarize the key cycles of change that evaluators should document to meet funder needs, and to provide a theoretical basis for the evaluation of systemic change efforts.
Metaevaluation: Prescription and Practice
Presenter(s):
Lori Wingate,  Western Michigan University,  lori.wingate@wmich.edu
Arlen Gullickson,  Western Michigan University,  arlen.gullickson@wmich.edu
Abstract: “For all the attention, interest, and advocacy, actual examples of metaevaluation are sparse” (Henry & Mark, 2003). To address this gap in the literature, this presentation will describe multiple metaevaluations of an evaluation of a National Science Foundation program. Over the course of eight years, these included four independent, external metaevaluations (conducted at the request of the lead evaluator), in addition to ongoing, formative metaevaluation by an advisory committee. Their foci included the overall evaluation and some of its component parts. Prescriptions for metaevaluation, such as those put forth by Stufflebeam (2000, 2001, 2007), Scriven (2007), and the Joint Committee on Standards for Educational Evaluation (1994), will be discussed in relation to the criteria, methods, findings, and utility of these real-world metaevaluation examples.
Evaluation Routines, Roles, and Responsibilities: A Practitioner’s Perspective of the Evaluation Process
Presenter(s):
Gary Skolits,  University of Tennessee,  gskolits@utk.edu
Jennifer Morrow,  University of Tennessee,  jamorrow@utk.edu
Abstract: The purpose of this presentation is to describe a re-conceptualization of the evaluation process and to offer a more complete and realistic model that depicts the broader evaluator roles and responsibilities occurring throughout the stages of a typical evaluation. This re-conceptualized model offers a unique conceptual framework that encompasses and highlights the many evaluator roles established by a typical evaluation as well as the associated evaluator competencies that are required. In this model, an evaluation is divided into three phases (pre-, during, and post-evaluation). The sequence of key evaluation events is reflected in nine processes distributed across the three phases, plus one additional cross-cutting process applicable to all phases. We will describe these ten routines and their associated roles in detail during the presentation and present an example of how the model can be applied to an evaluation project.
