
Session Title: Evaluation as a Learning Tool: Maximizing Outcomes Using Strategic Formative Evaluation
Panel Session 659 to be held in Liberty Ballroom Section A on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Linda Thurston,  Kansas State University,  lpt@ksu.edu
Discussant(s):
Jan Middendorf,  Kansas State University,  jmiddend@ksu.edu
Abstract: A vital aspect of evaluation work within higher education is assisting academic programs and externally funded projects in developing successful programs and/or continuously improving program outcomes. The Office of Educational Innovation and Evaluation at Kansas State University uses formative evaluation to provide feedback to program personnel as they focus on program development and improvement. To provide the most focused and useful information, we follow four basic strategic formative evaluation practices: understanding clients' long-term expected outcomes; understanding clients' intended use of the data we collect; asking the right evaluation questions; and reporting our findings in a usable form. This panel will present case studies that illustrate these strategic formative evaluation practices across several projects. A discussant will make recommendations for future research and practice.
Impact of Formative Evaluation on Service Learning Projects to Restore Water Quality in Kansas
Christa Smith,  Kansas State University,  christa2@ksu.edu
Bill Hargrove,  Kansas State University,  bhargrove@ksu.edu
Christopher Lavergne,  WaterLINK,  lavergne@ksu.edu
The goal of WaterLINK is to engage Kansas' colleges with local communities as partners in water quality restoration and protection through service learning, with the ultimate goal of improving water quality in high-priority watersheds. The client's primary goal was to implement 20 to 30 service-learning projects and maintain or increase that number annually. Establishing the WaterLINK project on campuses and in communities proved more challenging than expected. Formative evaluation strategies that identified these challenges and provided feedback on successes helped advance the client's goal. Evaluators focused on the project's impact on participants by administering Web-based surveys to students and community partners, interviewing faculty, and conducting site visits. Providing formative reports with recommendations for project improvement to WaterLINK stakeholders helped project directors improve and sustain their projects.
Informing the Development of Graduate Coursework Through Formative Evaluation
Jennifer McGee,  Kansas State University,  jemcgee@ksu.edu
Amy Conner,  Kansas State University,  amcabe@ksu.edu
Marsha Dickson,  University of Delaware,  quattro@oet.udel.edu
Concern about socially responsible business practices has increased, in part due to media attention to worst practices such as sweatshop conditions in the apparel industry. Apparel businesses and their relevant stakeholders need a shared framework upon which to base socially responsible solutions. The Social Responsibility in Textile, Apparel, and Footwear Industry project contracted us to help develop a conceptual framework and relevant definitions as the foundation for Internet-based graduate courses providing competencies related to socially responsible textile, apparel, and footwear industry supply chains. Multiple formative evaluation activities shaped this first phase of the project, including: (1) industry expert interviews to produce the definitions and framework; and (2) on-line reviews of course syllabi by industry experts to evaluate the appropriateness of course objectives. This paper will discuss the multi-layered approach we designed to provide current, relevant information as the foundation for graduate preparation of personnel for the industry.
The Evolution of Formative Evaluation for a Statewide Multi-year Initiative
Cindy Shuman,  Kansas State University,  cshuman@ksu.edu
Jan Middendorf,  Kansas State University,  jmiddend@ksu.edu
Cindi Dunn,  Kansas State University,  ckdunn@ksu.edu
As the evaluation unit for Kan-ed, Kansas' statewide broadband initiative, for the last four years, OEIE has provided formative evaluation feedback to the network. Given the scope of the initiative, the evaluation uses a variety of methodologies, both quantitative and qualitative, to document and report results. Over time, as implementation has moved into different phases, our evaluation procedures have had to evolve to remain responsive to the needs of the director, staff, and stakeholders, including the state legislature. This case study will focus on how the evaluation team has continually assessed how the client will use the data. The presentation will also discuss the system OEIE has developed to report timely, relevant, and useful information that identifies effective strategies and addresses challenges during implementation of the initiative.
Learning From Evaluation: Capacity Building in a Geoscience Education Project
Sheryl Hodge,  Kansas State University,  shodge@ksu.edu
Iris M Totten,  Kansas State University,  itotten@ksu.edu
Clients who were leaders of a National Science Foundation-funded geology education project used OEIE services to evaluate changes in students' knowledge, skills, and attitudes that resulted from a geoscience digital tutorial. Although the formative evaluation provided valuable information for the development of this innovative curriculum project, the most significant learning outcome was the considerable capacity building that emerged throughout the collaborative process. The evaluation team listened to the PIs explain their methodology and expected outcomes and melded this information with what was presented in the original funded proposal, thereby operationalizing the 'what' the PIs hoped to achieve in valid, reliable, and measurable terms. This relationship not only fostered improvements in data collection tools and their administration, but also provided comprehensible strategic formative feedback from complex analyses that presented new and different paths for the PIs to consider in their subsequent research endeavors.

Session Title: Theories of Evaluation TIG Business Meeting and Presentation: Evaluation Theory: Consolidate it, Nurture it, Learn it, and Teach it. But How?
Business Meeting Session 660 to be held in Liberty Ballroom Section B on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Theories of Evaluation TIG
TIG Leader(s):
Bernadette Campbell,  Carleton University,  bernadette_campbell@carleton.ca
Presenter(s):
Bernadette Campbell,  Carleton University,  bernadette_campbell@carleton.ca
Marvin Alkin,  University of California, Los Angeles,  alkin@gseis.ucla.edu
Discussant(s):
Melvin Mark,  Pennsylvania State University,  m5m@psu.edu
William Shadish,  University of California, Merced,  wshadish@ucmerced.edu
Abstract: Much discussion in recent (and not so recent) years has centered on what is wrong with or missing from evaluation theory. Ten years ago, in his AEA presidential address, Will Shadish declared that “evaluation theory is who we are” and encouraged us to “consolidate it, nurture it, learn it, and teach it.” Perhaps easier said than done. Problems with evaluation theory debated recently include (but are not limited to): (a) the predominance of prescriptive vs. descriptive theory, (b) the lack of contingency theories for practice, (c) an unhealthy (and perhaps uncritical) focus on specific “brand-named” evaluation approaches in contrast to a focus on better understanding the important issues facing the field, and (d) wide variability in the profession in formal and informal training in evaluation theory. A convincing case has been made for the importance of developing better evaluation theory, and for recognizing the centrality of theory to our field. What we need now are specific ideas about how precisely to consolidate, nurture, learn, and teach evaluation theory. In this panel discussion, we ask a group of prominent evaluation theorists to begin laying some of the more specific groundwork for carrying out Shadish's inspirational charge. Following a brief presentation outlining the central issues with respect to evaluation theory development, discussants will share their thoughts about what it will take to advance evaluation theory along any or all of the lines suggested by Shadish. What are some of the specific barriers that we are facing? And what are some suggestions for beginning to overcome these barriers?

Session Title: Telling Your Program's Story: How to Collect, Create, and Deliver an Effective Success Story
Skill-Building Workshop 661 to be held in Mencken Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Rene Lavinghouze,  Centers for Disease Control and Prevention,  shl3@cdc.gov
Ann Price,  Community Evaluation Solutions Inc,  aprice@communityevaluationsolutions.com
Abstract: Prevention programs are often unable to demonstrate outcomes and impacts for several years. Therefore, communicating success during program development and implementation is important for building program momentum and sustainability. Using a workbook developed by the Centers for Disease Control and Prevention's Division of Oral Health entitled Impact and Value: Telling Your Program's Story, this session will focus on 1) using success stories throughout the program's life cycle and 2) using success stories to identify themes and promising practices across multiple sites/programs. This practical, hands-on presentation will define success stories, discuss types of success stories, and describe methods for systematically collecting and using success stories to promote your public health program and inform policy decisions. Discussion will include use of the workbook and lessons learned in conducting a stakeholder forum for collecting success stories. Attendees will create a 10-second sound-bite story and begin to draft a success story.

Session Title: Where Evaluation and Learning Technology Innovations Meet
Multipaper Session 662 to be held in Edgar Allan Poe Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Tamara J Barbosa,  PhD's Consulting,  dr.barbosa@phdsconsulting.com
Cross-cultural Evaluations: Building Bridges With Technology
Presenter(s):
Yehuda Peled,  Western Galilee College,  yhdpld@012.net.il
Gloria Dunnivan,  Kent State University,  gdunniva@kent.edu
Abstract: The overarching goal of the Building Bridges Project is to provide meaningful instructional uses of computer and web-based technology that directly impact student achievement in science and to build cultural understanding through the process. The lessons have an inquiry focus, and teams of students design a product or project with a biological focus. The Building Bridges Partners project presents an opportunity for students and educators to gain first-hand experience with instructional technology in a global and international environment and is a model for effective community collaboration among a wide range of partners. American and Israeli teachers and students use technology to share their work and to conduct comparative studies. Electronic bulletin boards, blogs, web pages, video conferencing, and e-mail with attachments are examples of the technologies that will be used for communication between partners. The presentation will discuss the findings from the evaluation.
Digital Travels: User-focused Evaluation of Distance Education in Informal Learning Environments
Presenter(s):
Tamara J Barbosa,  PhD's Consulting,  dr.barbosa@phdsconsulting.com
Abstract: Evaluation research on the development of interactive teleconferences in contextual environments, such as a zoo, is scarce and virtually non-existent at the elementary school level. The Columbus Zoo, a “live” science museum, provides an informal learning environment to ground distance-learning field experiences. The Columbus Zoo Distance-Learning Interactive Field Experience (CZD-LIFE) program provides a curriculum that incorporates a variety of activities for students at a distant site using two-way audio/video interactive technologies. The majority of research on interactive distance-learning has focused on implementation of specific courses in universities, the business world, and the military. The goal of this paper is to present the results of the CZD-LIFE user-focused evaluation project. The goal of the project was to design a new informal education program model, to offer educational and scientific resources to teachers and students, and to enhance standards-based math and science instruction in Ohio.
Evaluating Emerging Mobile and Web-based Technologies in Education: A Quality Assurance Process
Presenter(s):
Nancy Gadzuk,  Wexford,  ngadzuk@wexford.org
Sheila Cassidy,  Wexford,  scassidy@wexford.org
Abstract: Emerging mobile and web-based technologies provide exciting new affordances and opportunities for learning and student assessment in education. However, the development of games and game-like educational materials using emerging technologies presents new challenges for evaluation. We will provide an overview of the iterative Quality Assurance process that we have refined over several years of development, review, and testing of these innovative technology-based products. We will describe the review/evaluation steps that we use, and where these steps are most effective in the design/development and field- and pilot-testing process. We will discuss the salient characteristics and indicators of emerging mobile technology products that we identify in the Quality Assurance process. We will address some of the evaluation challenges we have faced in dealing with ever-changing technology capabilities, and our approaches to meeting these challenges.

Session Title: Collaborative Evaluations: Successes, Challenges, and Lessons Learned
Multipaper Session 663 to be held in Carroll Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Nakia James,  Western Michigan University,  nakiasjames@sbcglobal.net
Learning From Stakeholders: Using A Collaborative Evaluation Approach With Classroom Teachers to Investigate Cross-site Outcomes of a Screen Education Intervention
Presenter(s):
Karyl Askew,  University of North Carolina, Chapel Hill,  karyls@email.unc.edu
Rita O'Sullivan,  University of North Carolina, Chapel Hill,  ritao@email.unc.edu
Abstract: This session presents the methodology of a collaborative evaluation commissioned by the American Film Institute (AFI) Screen Education Center, demonstrating how flexibility in outcome assessments for multi-site programs can yield important findings. AFI seeks to transform American education by supporting educators as they incorporate videography as a medium for instruction. In its fifth year of operation, the organization molded its Screen Education training specifically for educators working with English Language Learners (ELL). The evaluation was conducted to determine the extent to which the AFI Screen Education program was meeting the goals of 1) supporting ELL teachers' integration of videography and 2) enhancing ELL students' literacy skills. Teachers from 13 elementary, middle, and high schools across North Carolina participated in this study. Evaluators will share overall evaluation findings and specifically demonstrate how collaborative design and implementation of custom site evaluation plans and performance measures improved the ability to report cross-site outcomes.
Collaborative Evaluation of Superintendents' Attitudes Toward Leadership: A Qualitative Perspective
Presenter(s):
Rigoberto Rincones-Gomez,  MDC Inc,  rincones@mail.com
Liliana Rodriguez-Campos,  University of South Florida,  lrodriguez@coedu.usf.edu
Abstract: This formative evaluation was designed and implemented using the Model for Collaborative Evaluations (MCE). The MCE establishes priorities in order to achieve a supportive evaluation environment (Rodriguez-Campos, 2005). The model transformed this evaluation into a joint-responsibility process, allowing a more holistic understanding of the multiple leadership perceptions among school district superintendents collaborating in an educational initiative. Interviews, think-alouds, and journals were used to capture and better understand superintendents' approaches to leadership. Among the main evaluation findings: First, the novice superintendents tended to use only elements from the democratic perspective, whereas the experienced superintendent group used a combination of elements from the formal or structural, democratic, and political leadership perspectives. Second, all participating superintendents were found to share a set of common practices and approaches toward leadership perspectives.
Using a Collaborative Approach in Evaluating the Impacts of the Sustainable Agriculture Research and Education (SARE) Professional Development Program (PDP) State Allocations
Presenter(s):
John O'Sullivan,  North Carolina A & T State University,  johno@ncat.edu
Rita O'Sullivan,  University of North Carolina, Chapel Hill,  ritao@unc.edu
Abstract: Collaborative evaluation relies on the ability of key stakeholders to communicate in a common language about evaluation. When federal programs try to align their evaluation needs with regional, state, and grantee perspectives, the task of communicating with stakeholders to clarify the evaluation can become very challenging. The National Sustainable Agriculture Research and Education Professional Development Program (SARE PDP), which provides approximately $3 million annually to train Extension and other agricultural educators in the principles and practices of sustainable agriculture, requested assistance with developing tools to help its four regional program offices (North East, Southern, North Central, and Western Regional SARE offices) better report program impacts. Preliminary discussions that the external evaluators conducted at the various levels of the SARE program revealed many differences in stakeholders' evaluation capacity. The evaluators used the “Model for Collaborative Evaluations” to bridge the communication gaps across the various stakeholders and move the evaluation forward. This paper presentation describes this process.
Lessons Learned, Wisdom Gained: The Collaborative Evaluation of A College Access Initiative Comes Full Circle
Presenter(s):
Michelle Jay,  University of South Carolina,  mjay@sc.edu
Karyl Askew,  University of North Carolina,  karyls@email.unc.edu
Matthew McBee,  University of North Carolina,  mtm@northcarolina.edu
Rita O'Sullivan,  University of North Carolina, Chapel Hill,  ritao@unc.edu
Abstract: This paper, which focuses on collaborative strategies used in the evaluation of GEAR UP North Carolina - a college-access initiative - illustrates the key successes and challenges that arose during the evaluation (which took place during the last two years of the program's original six-year grant) and highlights the ways in which lessons learned from that evaluation have been incorporated into the collaborative strategies currently used to evaluate the program's second grant. The four critical elements unique to collaborative evaluations - collaborative planning, evaluation technical assistance, evaluation capacity building, and evaluation fairs - and their application in two successive GEAR UP evaluations are addressed. Further, examples of collaborative evaluation elements, including school district evaluation plans, data collection strategies, and evaluation reporting formats, are used to demonstrate collaborative evaluation praxis. The authors affirm the importance of the developmental process as a critical aspect of collaborative evaluations for both client and evaluator.
Creating Observational Tools for the District Standards Support Review: Focusing a Formative Evaluation With a Collaborative Approach
Presenter(s):
Ranjana Damle,  Albuquerque Public Schools,  damle@aps.edu
Abstract: An urban school district made a transition toward a standards-based education system that incorporated standards-based curriculum and assessment. The district developed many strategies to achieve successful standards implementation in classrooms and informed parents and the community about the policy of standards-based education for educational success. The district then contemplated an objective appraisal of standards implementation in its 130-odd schools. As the district research/evaluation team took charge of designing the evaluation, the district's 'site visits' premise evolved into a full evaluation with defined goals, data collection and sampling strategies, and data analysis and reporting procedures. A critical component of the evaluation design was the development of observation tools. The evaluators collaborated with instructional leaders with expertise in standards-based education. The tools articulated the key components of the standards-based system efficiently, facilitating data collection on standards implementation.

Session Title: Making Sense of Mobility: Household Survey Data From Comprehensive Community Initiatives, Implications for Evaluation and Theory
Panel Session 664 to be held in Pratt Room, Section A on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Cindy Guy,  Annie E Casey Foundation,  cguy@aecf.org
Discussant(s):
Claudia Coulton,  Case Western Reserve University,  claudia.coulton@case.edu
Abstract: Comprehensive community initiatives (CCIs) seek to build resident capacity, raise and direct resources, and enhance services and supports to improve the wellbeing of children and families living in distressed communities. Recently analyzed household survey data from formative and impact studies of CCIs point to high levels of resident mobility in participant communities, posing challenges to the evaluation and theory of these initiatives. In a panel chaired by Cindy Guy of the Annie E. Casey Foundation, representatives of two national evaluation teams - the Urban Institute team conducting data analysis for Casey's Making Connections Initiative, and the NYU/Wagner School evaluators of the Robert Wood Johnson Foundation's Urban Health Initiative (UHI) - are joined by the policy advisor at one of the UHI sites and staff of the Center for Urban Poverty and Community Development at Case Western Reserve to discuss these challenges and the implications for evaluation and theory building moving forward.
Family Mobility and Neighborhood Change: Implications for Evaluation and Design From the Making Connections Initiative
Marge Turner,  Urban Institute,  maturner@ui.urban.org
Marge Turner, Director of the Urban Institute's Metropolitan Housing and Communities Policy Center, is the lead analyst on a ten-site data collection effort that includes a longitudinal family panel and a neighborhood cross-sectional survey designed to provide planning, management, and evaluation data for the Annie E. Casey Foundation's Making Connections initiative, a neighborhood-focused family-strengthening initiative. In a recent analysis of data from neighborhoods in Denver, Des Moines, Indianapolis, San Antonio, and White Center, WA, Ms. Turner found that more than half of all families with children had moved in the first three years of the initiative, many of them leaving the neighborhood. In this presentation she will discuss: factors affecting family mobility and residential stability; the different ways in which neighborhoods affect family economic success (as launching pads to success, as poverty traps, etc.); and the conceptual challenges to evaluation and initiative design that such mobility presents.
Accounting for Mobility in a Multi-site, Multi-method Evaluation of Comprehensive Community Change
Beth Weitzman,  New York University,  beth.weitzman@nyu.edu
Charles Brecher,  New York University,  charles.brecher@nyu.edu
Tod Mijanovich,  New York University,  tm11@nyu.edu
Diana Silver,  New York University,  diana.silver@nyu.edu
The principal investigator of the Robert Wood Johnson Foundation's Urban Health Initiative, Dr. Weitzman and her colleagues at NYU's Wagner School are completing an impact analysis of this five-city initiative, which sought to measurably improve health and safety outcomes for children and youth in Baltimore, Detroit, Oakland, Philadelphia, and Richmond from 1996 to 2006. Using a Theory of Change approach to evaluation coupled with a quasi-experimental comparison group design, and employing city fixed effects in their final analysis, Dr. Weitzman and her team have sought to capture initiative impacts even amidst the complexity and fluidity of distressed urban communities. They will discuss how they have addressed issues of mobility in their impact analyses, and how the technical and conceptual ways in which these issues are addressed complicate their findings.
Patterns of Residential Longevity in Baltimore: Implications for Initiative Theory, Design and Evaluation
Martha Holleman,  The Safe and Sound Campaign,  mholleman@safeandsound.org
Martha Holleman served as Policy Advisor for the Baltimore Safe and Sound Campaign - the Baltimore site of the Robert Wood Johnson Foundation Urban Health Initiative (UHI) - from 1996 to 2006 and is currently a WT Grant Foundation Distinguished Fellow, working with the national evaluation team for the UHI at NYU/Wagner. She has been using survey data collected by the UHI evaluation to better understand patterns of residential mobility and longevity in Baltimore and the implications of these patterns for future work aimed at improving the well-being of children in her city. She brings a practitioner's voice to the panel and will discuss the implications of emerging findings on residential mobility for initiative theory, design, and evaluation.

Session Title: Success Measures: Learning From Community Development Results Through Participation, Common Tools, Shared Data
Panel Session 665 to be held in Pratt Room, Section B on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Maggie Grieve,  NeighborWorks America,  mgrieve@nw.org
Discussant(s):
Dawn Hanson Smart,  Clegg & Associates,  dsmart@cleggassociates.com
Nancy Kopf,  NeighborWorks America,  nkopf@nw.org
Abstract: Success Measures is a participatory evaluation approach based on a comprehensive set of outcome indicators and offered through a package of evaluation services and web-based technology. It was developed by community-based practitioners, funders and evaluators to ensure relevance across a broad spectrum of organization sizes, locations, cultures and programs within the community development field. This panel brings together three nonprofits that have used Success Measures to evaluate results and learn from their work while integrating ongoing evaluation into their programs. Panelists are from a California-based nonprofit serving farm worker housing needs, a Mississippi Delta community development corporation and a multi-service community development organization in Philadelphia. An intermediary funder, NeighborWorks™ America, will highlight strategies to build grantee evaluation capacity across a broad member network and move to greater accountability and shared learning. Serving as a discussant, a Success Measures evaluation trainer will reflect on the different learning experiences shared by the panelists.
Cabrillo Economic Development Corporation: Using Success Measures to Measure Affordable Multi-family Housing Results for Individuals and Communities
Jill Fioravanti,  Cabrillo Economic Development Corporation,  jillbfioravanti@gmail.com
Cabrillo Economic Development Corporation is the leading affordable housing developer in Ventura County, California. Founded to serve farm workers, Cabrillo has built more than 1,000 units of affordable for-sale and multi-family rental housing, manages 440 affordable rental units, and has counseled more than 1,800 households preparing to purchase a home. In addition, Cabrillo has helped 275 families into homeownership through education, counseling, and lending services. The organization has a comprehensive multi-level evaluation framework and recently added Success Measures to enhance its efforts to learn from longer-term outcomes. Cabrillo examined the impacts of its multi-family rental properties and used the information gained on personal and community-level change for a variety of purposes. Jill Fioravanti, the organization's Special Projects Manager and an evaluation consultant, led the Success Measures evaluation and can speak to both the benefits and challenges involved in implementation. The organization's use of Success Measures was supported by NeighborWorks America.
Quitman County Development Organization: Measuring Resident Satisfaction, Security and Stability in the Mississippi Delta
Lela Keys,  Quitman County Development Corporation,  lbkeys2@bellsouth.net
Quitman County Development Organization (QCDO) is a comprehensive community development organization using an empowerment approach in its work with the predominately African American residents of three Mississippi Delta counties. The organization's programs include affordable housing, job creation, day care, education, micro-enterprise development, and a credit union. Lela Keys, a professional in the health care industry and a member of QCDO's board, is leading their Success Measures evaluation. As a life-long resident of the region, Ms. Keys will share both her personal perspective on the value of a participatory evaluation approach for residents who are traditionally disempowered and her considerable professional experience in using evaluation to make strategic decisions. Ms. Keys will discuss QCDO's evaluation of its work to stabilize the community through new construction as well as owner-occupied rehabilitation. The organization's participation in Success Measures is supported by one of its long-time funders, the F.B. Heron Foundation.
Hispanic Association of Contractors and Enterprises: Measuring Improved Quality of Life Through Success Measures Tools
Maria Gonzalez,  Hispanic Association of Contractors and Enterprises,  mgonzalez@hacecdc.org
The Hispanic Association of Contractors and Enterprises (HACE), a community development corporation based in Philadelphia, PA, offers a wide range of housing, economic development, empowerment, and related services. As part of a neighborhood revitalization initiative funded by the Wachovia Regional Foundation, HACE is using Success Measures to understand how its work contributes to an improved quality of life for neighborhood residents. Maria Gonzalez, Vice President of HACE, is leading the evaluation effort. She will share her organization's experience conducting primary data collection to understand long-term changes in quality of life. Ms. Gonzalez will discuss some of the challenges and benefits of gathering data in a Latino community. She will also share her experience integrating Success Measures into a large organization with multiple types of services. Finally, she will highlight the organization's key learnings and how they plan to use the information over time.
Building Capacity to Measure Community Level Outcomes in the NeighborWorks Network
Brooke Finn,  NeighborWorks America,  bfinn@nw.org
NeighborWorks™ America is a national nonprofit organization created by Congress in 1978 to provide financial support, technical assistance, and training to community-based revitalization efforts. The NeighborWorks Network of over 240 organizations in 50 states serves nearly 4,500 urban, rural, and suburban communities. Since 2005, NeighborWorks America has supported 42 organizations in its Network to use Success Measures. Brooke Finn, Deputy Director of National Initiatives and Applied Research, has overseen this initiative and will share the key considerations made to ensure that the training and technical assistance offered met the needs of both larger, more sophisticated organizations and smaller organizations new to evaluation. She will also share key thinking about the most important decisions a similar funder or intermediary organization might face when investing in building the capacity of supported organizations to measure outcomes. Finally, she will share preliminary learnings from implementation to date.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Developing a Conceptual Framework for Evaluating Policy Change
Roundtable Presentation 666 to be held in Douglas Boardroom on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
Susan Ladd,  Centers for Disease Control and Prevention,  sladd@cdc.gov
Jan Jernigan,  Centers for Disease Control and Prevention,  jjernigan1@cdc.gov
Alice Ammerman,  University of North Carolina, Chapel Hill,  alice_ammerman@unc.edu
Semra Aytur,  University of North Carolina,  aytur@email.unc.edu
Beverly Garcia,  University of North Carolina,  beverly_garcia@unc.edu
Amy Paxton,  University of North Carolina,  apaxton@email.unc.edu
Abstract: Reducing the population burden of heart disease and stroke requires multi-level policies that address political, environmental, institutional, organizational, and social systems. Few models exist to guide evaluation of policy efforts. The Centers for Disease Control and Prevention (CDC), Division for Heart Disease and Stroke Prevention, and the University of North Carolina (UNC) collaborated to develop a framework for evaluating policy change interventions. An expert panel composed of CDC and other nationally recognized evaluators was engaged to examine existing models, identify gaps and barriers, and develop the framework. The session will present the framework and describe the development process and anticipated next steps. This roundtable offers an opportunity for discussion and input on the framework for evaluating policy change as well as its extension to system change.
Roundtable Rotation II: Development of an Outcome Monitoring System for Mental Health Programs in a Large Regional Health Authority
Roundtable Presentation 666 to be held in Douglas Boardroom on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
Colleen Lucas,  Calgary Health Region,  colleen.lucas@calgaryhealthregion.ca
Lindsay Guyn,  Calgary Health Region,  lindsay.guyn@calgaryhealthregion.ca
Abstract: As the primary provider of health care for over a million people, the Calgary Health Region needs an efficient method for routinely assessing the performance of the 148 mental health programs for which it is responsible. This presentation describes a pilot study of five programs which determined the feasibility of implementing a region-wide outcome monitoring system. Several measurement instruments, including the Behavior and Symptom Identification Scale 24, the Multnomah Community Ability Scale, and the Outcome Questionnaire-45, were administered at both admission and discharge; over 3500 outcome measures were completed from March 2005 to December 2006. The pilot provided an opportunity to assess the efficacy of the various psychometric instruments for different client populations and clinical settings. The pilot study also provided valuable logistical learnings, which were instrumental in the ongoing development of a practical outcome monitoring process for mental health programs in this large, diverse health organization.

Session Title: Building Capacity for Evaluation: A Tale of Four National Youth Development Organizations
Panel Session 667 to be held in Hopkins Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Suzanne Le Menestrel,  United States Department of Agriculture,  slemenestrel@csrees.usda.gov
Karen Heller Key,  National Human Services Assembly,  kkey@nassembly.org
Discussant(s):
Hallie Preskill,  Claremont Graduate University,  hallie.preskill@cgu.edu
Abstract: This panel session features presentations from researchers representing four diverse national youth development organizations that are members of the National Collaboration for Youth, a coalition of the National Human Services Assembly member organizations that have a significant interest in youth development. Members of the National Collaboration for Youth include more than fifty national, non-profit, youth development organizations. The presenters are currently engaged in evaluation capacity-building efforts that are focused on the following major themes: (1) Measurement and instrumentation; (2) Training; (3) Obtaining buy-in and participation; and (4) Applying evaluation results to youth development practice. The presenters will share specific strategies that their respective organizations are developing to address one or more of these themes. Throughout the panel, the presenters will describe ways in which participants can apply these strategies in their own work.
Evaluating for Impact in the 4-H Youth Development Program
Suzanne Le Menestrel,  United States Department of Agriculture,  slemenestrel@csrees.usda.gov
Mary Arnold,  Oregon State University,  mary.arnold@oregonstate.edu
Dr. Suzanne Le Menestrel is a National Program Leader for Youth Development Research at National 4-H Headquarters, U.S. Department of Agriculture. She is an experienced presenter with expertise in child and adolescent development, program evaluation, and out-of-school time programs. Dr. Le Menestrel is co-chair of the 'Evaluating for Impact' National 4-H Learning Priority team. Dr. Mary Arnold is a 4-H youth development evaluation specialist at Oregon State University with a special focus on teaching program evaluation. She is co-chair of the 'Evaluating for Impact' team and the 2007 chair of the Extension Education (EEE) TIG of the AEA. She has presented on the topic of 4-H program evaluation at several recent national meetings of the AEA. In 2004, she was awarded the national Excellence in Evaluation Training award by the EEE-TIG. She has published numerous articles on the evaluation of 4-H programs.
Developing the Big Brothers Big Sisters of America Impact Survey
Keoki Hansen,  Big Brothers Big Sisters of America,  keoki.hansen@bbbs.org
Keoki Hansen, Director, Research and Evaluation, Big Brothers Big Sisters of America (BBBSA), has been with BBBSA for almost 8 years. Over the past four years, Keoki has been the BBBSA Project Director for the two-phase BBBSA School-Based Mentoring Research Project, which includes an evaluation of effective practices and an impact study utilizing randomized treatment/control groups. She is an experienced presenter and, through her work at BBBSA, understands researchers' need to create reliable and valid measures and practitioners' need for a simple measure that is easy to use and communicates change in areas of relevance to their communities. Before joining BBBSA, Ms. Hansen taught Research Methods and Statistics in Boston and worked as a consultant for the Army and the Center for Cognitive Neuroscience at the University of Pennsylvania.
Building Capacity for Evaluation - The Girls, Inc. Approach
Heather Johnston Nicholson,  Girls Incorporated,  hjnicholson@girls-inc.org
PeiYao Chen,  Girls Incorporated,  pychen@girls-inc.org
Dr. Heather Johnston Nicholson, the Director of Research at Girls Incorporated, has been with Girls Inc. for 25 years, directing research and evaluation and contributing to program development and advocacy, all with a multicultural focus on girls and young women ages 6 to 18. Dr. PeiYao Chen is the Research Analyst at Girls Inc. Their presentation focuses on strategies and issues involved in developing a national evaluation system to support and coordinate affiliates' efforts to document changes in the knowledge, skills, and attitudes of participants in specific Girls Inc. identity programs. The presenters will discuss the collaborative process through which a national organization and its affiliates work together to create developmentally appropriate, girl-friendly outcome measurement tools that streamline data collection and analysis for affiliates, provide concrete evidence of program outcomes to local audiences, and support improved program delivery.
Prove It: Evaluation Tools To Measure Youth Development Outcomes
Barry Garst,  American Camp Association,  bgarst@acacamps.org
M Deborah Bialeschki,  American Camp Association,  moon@email.unc.edu
Dr. Barry Garst is an educator, researcher, presenter, and facilitator. Currently the Director of Research Application with the American Camp Association (ACA), Dr. Garst is a former Assistant Professor and Extension Specialist in youth development at Virginia Tech. Dr. Garst's background includes more than 10 years of programming experience as a municipal outdoor day camp manager, a wilderness mental health counselor, and a residential 4-H camp and conference center program director and staff development trainer. M. Deborah Bialeschki, Ph.D. is a Senior Researcher at ACA. She is also a Professor Emeritus from the University of North Carolina-Chapel Hill after 20 years of faculty service. She has co-authored a book on evaluation and conducted numerous evaluation projects. This presentation focuses on a national evaluation process used by administrators with staff to focus on goals and desired youth outcomes. They will also discuss specific outcome tools developed and psychometrically tested by ACA.

Session Title: Peer Reviews for Independent Consultants: New Peer Reviewer Orientation
Skill-Building Workshop 668 to be held in Peale Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Independent Consulting TIG
Presenter(s):
Sally Bond,  The Program Evaluation Group,  usbond@mindspring.com
Marilyn Ray,  Finger Lakes Law and Social Policy Center Inc,  mlr17@cornell.edu
Abstract: At AEA 2003, the Independent Consulting TIG embarked on the professional development of members through a Peer Reviews process providing collegial feedback on evaluation reports. The IC TIG appointed Co-Coordinators to develop and recommend guidelines, a framework, and a rubric for conducting Peer Reviews within the membership of the Independent Consulting TIG. At AEA 2004, the process, framework, and rubric the Co-Coordinators had developed were presented and revised during a think tank. Volunteer Peer Reviewers were recruited and oriented to the Peer Review process and rubric. This update and orientation process was repeated at AEA 2005 and AEA 2006. In 2007, we propose to present a skill-building workshop during which we will provide an update on the Peer Review project, offer a forum for volunteer reviewers to share their experiences, and orient new reviewers.

Session Title: Lessons Learned: Wrapping up our Evaluation of an Advocacy Campaign
Demonstration Session 669 to be held in Adams Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Ehren Reed,  Innovation Network Inc,  ereed@innonet.org
Presenter(s):
Jennifer Bagnell Stuart,  Innovation Network Inc,  jabstuart@innonet.org
Abstract: Given the nonprofit sector's current focus on results and accountability and the innate challenges to evaluating advocacy efforts and policy initiatives, there recently has been a groundswell of research around advocacy evaluation. The evaluation of advocacy and public policy initiatives involves a number of inherent challenges: outcomes may be far beyond the scope of any single organization or program and contextual factors beyond the organization's control can leave it short of its desired outcome despite brilliant strategies and flawless execution. This demonstration will spotlight the strategy Innovation Network used to evaluate one campaign to enact US federal policy change. Innovation Network will discuss its methodology, the challenges inherent in evaluating this type of campaign, and share the lessons we have learned.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Using a Shared On-line Database to Address Multi-partner Project Management and Evaluation Issues
Roundtable Presentation 670 to be held in Jefferson Room on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
Randy Ellsworth,  Wichita State University,  randy.ellsworth@wichita.edu
Larry Gwaltney,  Allied Educational Research and Development Services,  tgwaltney@cox.net
Patrick Hutchison,  Wichita State University,  patrick.hutchison@wichita.edu
Abstract: This paper describes an evaluation project involving a partnership among four agencies (a county health agency, a Parents-as-Teachers program, a private non-profit wellness center, and an urban school district pre-school) designed to provide seamless services to high-need families to ensure children ages 0-5 reach kindergarten with the skills necessary for success in school. Since none of the partners were housed together or shared a common database, evaluators worked with the partners to develop a common database, accessible by all, to enter and track services provided to families served by the program. Issues encountered included (a) involving agency attorneys to develop legal procedures enabling agencies to “share” information, (b) creating a common, secure, Internet-accessible, live database for all agencies to use, and (c) developing a monitoring process so that changes made to children's records would be immediately flagged to alert the other agencies.
Roundtable Rotation II: Instructionally Linked Versus Norm Referenced Assessments to Determine Impact Within an Even Start Program Evaluation
Roundtable Presentation 670 to be held in Jefferson Room on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
Zandra Gratz,  Kean University,  zgratz@aol.com
Abstract: This paper describes the evaluation of a school-based Even Start family literacy program that has been in operation for three years. Youngsters were tested using traditional norm-referenced assessments to generate a normative control expectation. In addition, instructionally linked school-based assessments were accessed to examine change over time in participants. Inferences from each of these paradigms were compared to each other as well as to regular classroom teachers' and parents' appraisals of youngsters' progress. The study found credible evidence that alternate designs, including those relying on data typically maintained by schools, provide sufficient information to support causal inferences.

Session Title: Conducting Multi-method Evaluations
Multipaper Session 672 to be held in D'Alesandro Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Linda Morell,  University of California, Berkeley,  lindamorell@earthlink.net
The Health-Disease Process: Social Representations of Rural Workers Through Q Methodology
Presenter(s):
Virginia Gravina,  Universidad de la Republica Uruguay,  virginia@fagro.edu.uy
Pedro de Hegedüs,  Universidad de la Republica Uruguay,  phegedus@adinet.com.uy
Carolina Tonini,  Universidad Federal de Santa Maria,  carolinatonini@yahoo.com.br
Abstract: This study was conducted to identify the social representations of rural workers in relation to the way they perceive the health-disease process. Q methodology was used to fulfill this objective; this methodology uniquely combines qualitative techniques to generate information with quantitative techniques (factor analysis) to analyze it. Three factors emerged from the work, labeled: i) health and integral prevention, ii) health system, and iii) health-work relation. The conclusions of the research are: i) people's different contexts can affect their health or disease conditions, ii) scientific knowledge is drawn on when people need advice from professionals, and iii) indigenous knowledge is a valuable first source of advice on good health practices.
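For readers unfamiliar with the quantitative side of Q methodology, the following is a minimal, hypothetical sketch (invented data, not the authors' analysis) of its distinctive step: respondents' Q-sorts are correlated person by person, and that correlation matrix is factor-analyzed, so the resulting factors group respondents who share a viewpoint.

```python
# Hypothetical sketch of the quantitative step in Q methodology.
# Q factor-analyzes correlations among PERSONS (their Q-sorts), not items.
import numpy as np

rng = np.random.default_rng(0)
n_statements, n_respondents = 30, 12
# Each column is one respondent's Q-sort of 30 health/disease statements
# (random placeholders here; real sorts would be forced-distribution ranks).
sorts = rng.normal(size=(n_statements, n_respondents))

# Person-by-person correlation matrix (12 x 12).
r = np.corrcoef(sorts, rowvar=False)

# Principal-components extraction: eigendecomposition of the correlation matrix.
eigvals, eigvecs = np.linalg.eigh(r)
order = np.argsort(eigvals)[::-1]  # sort factors by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Unrotated loadings on the first three factors: each row is a respondent,
# and respondents loading on the same factor share a social representation.
loadings = eigvecs[:, :3] * np.sqrt(eigvals[:3])
print(np.round(loadings, 2))
```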
Relationships Matter: Using Social Network Analysis to Evaluate Social Capital in the Kenyan Dairy Sector
Presenter(s):
Karabi Acharya,  Academy for Educational Development,  kacharya@aed.org
Charles Wambugu,  World Agroforestry Centre,  c.wambugu@cgiar.org
Esther Karanja,  World Agroforestry Centre,  e.karanja@cgiar.org
Hellen Arimi,  World Agroforestry Centre,  harimi@cgiar.org
Bette Booth,  Academy for Educational Development,  bbooth@aed.org
Shera Bender,  Independent Consultant,  smbender_2000@yahoo.com
Abstract: This paper presents the results of an evaluation of a project that worked to build social capital among organizations working in the Kenyan dairy sector using the SCALE™ approach. SCALE™ is a systems-wide social change framework, participatory management process, and set of tools that interweaves governance, economic, environmental and social interests in a way that manages and conserves resources while also creating new economic opportunities. The evaluation design pioneered the use of systems theory and social network analysis for program evaluation with data collected at two points in time. The evaluation emphasized the importance of understanding where organizations sit within the whole system, what role they play, and how they are connected to other organizations, using standard social network indicators of cohesion, centrality, density and others. This paper will discuss the range of collaborative actions resulting from strengthening of the network of organizations working with small-scale dairy farmers in Kenya.
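As a rough illustration of the indicators named in this abstract, the sketch below (hypothetical organizations and ties, not the evaluation's data) computes network density and two centrality measures with the networkx library; comparing the same indicators across two graphs, one per data collection wave, would mirror the two-points-in-time design described above.

```python
# Hypothetical sketch: standard social network indicators of the kind used
# in the evaluation (density, centrality). All organization names are invented.
import networkx as nx

# Illustrative ties observed between organizations in a dairy-sector network.
ties = [
    ("coop_A", "processor_X"), ("coop_A", "ngo_B"),
    ("ngo_B", "extension_C"), ("extension_C", "coop_D"),
    ("coop_D", "processor_X"), ("ngo_B", "processor_X"),
]
G = nx.Graph(ties)

print("density:", round(nx.density(G), 3))            # share of possible ties present
print("degree centrality:", nx.degree_centrality(G))  # how connected each org is
print("betweenness:", nx.betweenness_centrality(G))   # who brokers between others
```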
Validity Evidence Presented Through a Mixed Model Conceptual Framework
Presenter(s):
Linda Morell,  University of California, Berkeley,  lindamorell@earthlink.net
Abstract: This paper provides an example of how a mixed model conceptual framework can be used in assessment and how the framework illuminates the complementary nature of a validity investigation. Researchers in psychology and education have been exploring the intersections among aspects of validity, educational measurement, and cognitive theory. This study used a national validity project to investigate how respondents can contribute to the validation process in ways other than providing traditional “subject” information. A mixed model conceptual framework guided the study. Validity evidence was collected through a variety of methods, including traditional paper-and-pencil tests, surveys, think-alouds, and exit interviews with fifth- and sixth-grade students, as well as interviews with teachers and science experts.
Data Preparation, Analysis, and Reporting System Evaluation For a School System
Presenter(s):
David MacQuarrie,  Western Michigan University,  dmacquarrie@sbcglobal.net
Abstract: Data have become much more important in making quality educational decisions within the last decade. To make sound decisions from quantitative data, the data must be collected, cleaned, analyzed, and reported in a manner that clearly guides readers and decision makers in interpretation. An evaluation of a secondary school's quantitative data system included a qualitative focus group process made up of internal experts. First, two protocols were produced from best practices and processes of quantitative research and evaluation. Second, three qualitative instruments were created and aligned with the two protocols to enable data capture. Third, the experts were led through a process that compared the current data system to the two protocols in a backwards analytical process. The evaluation was summarized based on the protocol sections and included recommendations for improvements in equipment, software, and processes.

Session Title: Applications of Multilevel Longitudinal Analysis
Multipaper Session 673 to be held in Calhoun Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Fred Newman,  Florida International University,  newmanf@fiu.edu
Evaluation of the National Examination's Impact on the Quality of Learning in Russian Schools
Presenter(s):
Zvonnikov Victor,  State University of Management,  zvonnikov@mail.ru
Marina Chelyshkova,  State University of Management,  mchelyshkova@mail.ru
Abstract: This year, the law establishing the National Examination was enacted in Russia. Despite the law and six years of experimentation, the National Examination has numerous opponents who fear its negative impact on the traditions of the Russian education system. In this paper we present the results of a research program focused on evaluating the National Examination's impact on the quality of learning. The research proceeded in several directions: analyzing changes in quality through comparative measurement of achievement using National Examination results; developing scaling methods and test designs for comparing interval estimates of students; creating methods that support correct interpretation of achievement scores for educational management; and applying Hierarchical Linear Models to predict changes in the quality of learning.
Multi-level Longitudinal Analysis as a Method for Evaluating Reading First
Presenter(s):
Bruce Randel,  Mid-continent Research for Education and Learning,  brandel@mcrel.org
Abstract: This study uses longitudinal growth modeling to examine changes in reading proficiency for students in schools participating in Reading First. Data were available from two mid-western states; each state included approximately 15 schools and approximately 500 students. All students were administered the state-wide test of reading comprehension at three time points, one year apart. Scores from both state reading tests are on a vertical scale but analyses were run separately by state because the tests do not share the same measurement scale. Analyses were conducted to model the growth in reading comprehension during and after participation in Reading First programs. Each analysis estimated individual growth trajectories at the student level and also estimated the unique contribution of student demographic characteristics and school characteristics in explaining growth.
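Since the abstract does not specify software, the following is only a hedged sketch of what such a growth model can look like in code, with simulated data and invented variable names: test scores nested within students, a random intercept and slope per student for individual trajectories, and a demographic characteristic entered as a fixed-effect predictor of growth.

```python
# Minimal sketch of a two-level longitudinal growth model (simulated data;
# not the study's actual specification or software).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for sid in range(200):                        # 200 students
    frl = int(rng.integers(0, 2))             # hypothetical demographic indicator
    intercept = 300 + rng.normal(0, 15)       # student-specific starting score
    slope = 20 - 5 * frl + rng.normal(0, 4)   # student-specific annual growth
    for year in range(3):                     # three annual test administrations
        rows.append(dict(student_id=sid, frl=frl, year=year,
                         score=intercept + slope * year + rng.normal(0, 8)))
df = pd.DataFrame(rows)

# Random intercept and slope per student give each student an individual
# trajectory; the year:frl interaction tests whether the demographic
# characteristic explains differences in growth.
model = smf.mixedlm("score ~ year + frl + year:frl", data=df,
                    groups=df["student_id"], re_formula="~year")
print(model.fit().summary())
```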
The Application of Multi-level Modeling in the Evaluation of After-school Programs: Linking Academic Success to Attendance
Presenter(s):
Jeremy Lingle,  Georgia State University,  jlingle1@gsu.edu
Carolyn Furlow,  Georgia State University,  cfurlow@gsu.edu
Sheryl Gowen,  Georgia State University,  sgowen@gsu.edu
Syreeta Skelton,  Georgia State University,  snskelton@gsu.edu
Abstract: Multilevel modeling provides evaluators with a powerful tool to isolate the individual-level factors that may contribute to program effectiveness, as well as to identify the impact of program-level factors and the interaction of variables across levels. These models also allow for evaluation of the effects of social programs, which are often limited to quasi-experimental designs. This presentation arises from a state-wide evaluation of federally funded after-school programs. The purpose of this presentation is two-fold: (1) to enumerate the challenges posed by the use of state standardized assessment scores and (2) to discuss the hierarchical linear models (HLM) that we used to analyze these data. Findings from our analyses support the conclusion that attendance in after-school programs has positive effects on certain, but not all, academic outcomes.
Comparing Urban and Suburban Schools: An Investigation of the Intervention Effects of Reading Recovery With Multi-level Growth Modeling
Presenter(s):
Jing Zhu,  The Ohio State University,  zhu.119@osu.edu
Francisco Gómez-Bellengé,  Reading Recovery National Data Evaluation Center,  gomez-bellenge.1@osu.edu
Abstract: It is usually difficult to evaluate the effects of educational interventions. Nonrandom assignment of participants to different groups and various confounding factors are concerns for this kind of investigation, and multilevel models are believed to be effective in dealing with these issues. Because of No Child Left Behind legislation, a crucial question for many interventions is whether the program works equally well for different populations. In this study, multilevel growth modeling is applied to the national longitudinal assessment data from Reading Recovery (RR) for the 2005-2006 academic year. A particular interest for the evaluation is comparing the intervention effects of RR on reading achievement in urban schools with those in suburban schools. Multilevel models will estimate the trajectories of student reading performance measured by the six tasks of the Observation Survey (OS). The estimates of the average difference in mean OS scores between urban and suburban schools and the corresponding effect size will be reported.

Session Title: International and Cross-Cultural TIG Business Meeting
Business Meeting Session 674 to be held in McKeldon Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
TIG Leader(s):
Thomaz Chianca,  Western Michigan University,  thomaz.chianca@wmich.edu
Gwen M Willems,  University of Minnesota,  wille002@umn.edu
Nino Saakashvili,  Horizonti Foundation,  nino.adm@horizonti.org

Session Title: Evaluating Outcomes for Young Children With Disabilities: Issues at the National, State, and Local Levels
Panel Session 675 to be held in Preston Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Kathy Hebbeler,  SRI International,  kathleen.hebbeler@sri.com
Abstract: In 2005, the U.S. Department of Education required states to submit outcomes data on all children birth through 5 years of age receiving services through IDEA. Responding to pressure from OMB, the Department specified what the states are to report but did not specify how they were to collect the information. The presenters, staff from a Center funded by the Department of Education to assist states in implementing an early childhood outcome measurement system, will describe activities undertaken, as well as issues and challenges at the national, state, and local levels that have emerged as states set about collecting these data. Contrasting state approaches will be described, including approaches that incorporate child outcomes data into a broader system of ongoing evaluation. The papers will address the intended and unintended consequences thus far, both positive and negative, of instituting national outcomes measurement for young children with disabilities.
The Federal Need for Outcome Data on Young Children With Disabilities
Kathy Hebbeler,  SRI International,  kathleen.hebbeler@sri.com
This paper will describe the background for the federal reporting requirement for outcomes data on children birth through age 5 years served in programs for young children with disabilities. The numerous challenges to measuring outcomes for this population will be explained to help the audience understand why this is so difficult and why so little progress was made in this area for so long. The paper will summarize the recommendations from stakeholders on what outcomes should be addressed for children and families and summarize what states are required to report. The rationale along with the strengths and limitations of the current requirements will be addressed. The presenter is the Director of the Early Childhood Outcomes Center and has researched issues related to measuring outcomes for young children in large scale data collections for over 25 years.
State Approaches to Collecting and Using Data on Child Outcomes
Lynne Kahn,  University of North Carolina,  lynne_kahn@unc.edu
This paper will summarize how state early intervention programs (birth to age 3) and preschool special education programs (3 to 5 years) are collecting data on child outcomes. Based on information reported in the State Performance Plans and Annual Performance Reports and other data collected from states, we know that the majority of state agencies for both programs have opted to use a process developed by the ECO Center to summarize data on young children from a variety of sources. Contrasting state approaches will be described along with examples of the kinds of questions states plan to address and how they plan to use these data for program improvement. The presenter is the director of the Technical Assistance component of the Early Childhood Outcomes Center and has provided technical assistance to state agencies around evaluation for over 25 years through the National Early Childhood Technical Assistance Center.
Value at the Grassroots Level: Implications of Child Outcomes Data for Teachers, Providers, and Local Administrators
Christina Kasprzak,  University of North Carolina, Chapel Hill,  christina_kasprzak@unc.edu
This paper will describe how some states have developed systems that provide meaningful information for decision-making at the local level, in addition to providing data for the state and federal government. Examples of what early childhood teachers and early interventionists are learning because of the national push for outcomes data, and how this is changing their practice, will be presented. Examples of some of the implementation challenges being encountered at the local level will also be described. The ECO Center has trained hundreds of providers around the country in how to collect child outcomes data, and this paper will share what has been learned about the kinds of support teachers and other local staff need to collect valid outcomes data and use them for decision-making. The presenter, a former Family Service Coordinator, trains providers in child outcomes data collection and also provides evaluation-related technical assistance to state agencies.

Session Title: Deliverables as a Tool to Promote and Support Organizational Learning: Client-centered Strategies for Data Collection and Reporting
Panel Session 676 to be held in Schaefer Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Debbie Zorn,  University of Cincinnati,  debbie.zorn@uc.edu
Abstract: Every program evaluation is expected to have some kind of deliverable. Yet, why write a technical report that ends up on someone's bookshelf rather than being used to make a meaningful contribution to organizational learning and program improvement? How do we as evaluators meet the accountability and reporting needs of our clients while also ensuring that information provided is usable and appropriate for its intended audience? This panel will discuss participatory, collaborative approaches to the planning and design of project deliverables used by the University of Cincinnati Evaluation Services Center (UCESC) that take into account clients' needs for information, accountability, learning, and dissemination. The panelists will share the processes they used in negotiating a design for deliverables that met the unique program context and constraints of the five different projects represented by this group and describe how these approaches contributed to program and organizational learning.
Old Habits Die Hard: Introducing New Approaches to an Established Client
Imelda Castañeda-Emenaker,  University of Cincinnati,  castania@ucmail.uc.edu
Imelda Castañeda-Emenaker will discuss the benefits and challenges of having an established relationship with a client. While the established relationship brings continued opportunities for evaluation work, it can also be complicated by old habits and expectations for how evaluation data are collected and reported. Clients get used to a certain way of doing things and bring these habits of mind to successive projects. Attempts to introduce new approaches are often met with resistance. One such example is the use of capacity-building as a focus for data collection and reporting. Imelda Castañeda-Emenaker will explain how the idea of capacity-building as a focus for data collection and reporting was introduced for use in a statewide, multi-site educational intervention project. She will describe how evaluators worked with project staff to embed evaluation activities into their daily operations and move to a reporting process that emphasized continuous improvement rather than summative review.
A Complex Balancing Act: Reporting Across Multiple Years, Sites, and Program Models for Statewide Professional Development in Literacy Instruction
Janice Noga,  Pathfinder Evaluation and Consulting,  jan.noga@stanfordalumni.org
Jan Noga will address the challenges of meeting the diverse information needs of Ohio's State Institutes for Reading Instruction (SIRI). To improve the quality of classroom reading instruction, the Ohio Department of Education (ODE) developed SIRI to provide professional development in reading instruction for classroom teachers. Since its inception in 1999, SIRI has served an estimated 45,000 teachers. Ms. Noga will describe how evaluators worked with ODE staff to design an integrated approach to data collection and reporting that used a cyclical feedback system to continually inform process while also assessing success in attaining expected outcomes. She will describe how a flexible, evolving design, incorporating formative review and frequent reporting, allowed evaluators to assess the need for and effectiveness of mid-course corrections as the SIRI design evolved, as well as to provide a subsequent summative assessment of the effectiveness and impact of SIRI overall.
Using Professional Development Standards as a Foundation for Program Evaluation and Program Improvement
Stacey Farber,  University of Cincinnati,  stacey.farber@cchmc.org
Stacey Farber will discuss how national standards for quality professional development and theory on the relationship between teacher training and student learning were used as a foundation for evaluation and continuous improvement of the Ohio Writing Institute Network for Success (OhioWINS), a state-supported, multi-site, professional development program for K-12 teachers of writing. She will describe how the National Staff Development Council (NSDC) Standards for Staff Development (2001) were used as a framework to evaluate the quality of OhioWINS and to provide research-based recommendations for program improvement to policy makers and program implementers. She will also illustrate how the Guskey and Sparks model of the relationship between professional development and student learning was adapted into a program and evaluation logic model. This model was then used to better conceptualize the goals of the program and enhance the design of the evaluation itself for future years.
Community Based Weed and Seed Projects: Using Progress Reports to Promote Continuous Improvement and Improve Project Sustainability
Nancy Rogers,  University of Cincinnati,  rogersne@ucmail.uc.edu
Nancy Rogers will discuss the value of evaluation progress reports as a useful tool for improving data collection activities, discussing continuous improvement processes, and guiding strategic planning discussions when working with loosely organized community members and organizations that volunteer their time and resources to the Weed and Seed project. She will explain how working collaboratively to complete the evaluation progress report reveals gaps in program planning and challenges with data collection. These gaps and challenges are examined and improvements are planned. She will explain how regular reference to these reports at quarterly meetings contributes to committee focus on goals and increased interest in data collection activities for demonstrating changes in the community. Finally, she will describe how the Weed and Seed Steering Committee has benefited from discussions that result from regular review of evaluation progress reports and has consequently focused on developing resources for program sustainability as a project goal.
Building the Educational Community into a Multi-Methods Evaluation of the Cincinnati Art Museum's School Program
Jan Matulis,  University of Cincinnati,  matulij@ucmail.uc.edu
Jan Matulis will discuss the importance of building the local educational community into a multi-methods evaluation of the Cincinnati Art Museum's school program through the Success Project. She will also discuss how educational community members were involved in the planning and implementation of this evaluation, leading to a system of deliverables focused on the Art Museum's standards-based school programs. The benefits and challenges of involving a wide range of educational community members and data collection methods in this evaluation of an informal education provider will also be discussed. As a result of these efforts, an evaluation framework has been developed that monitors the Art Museum's progress in providing programs that meet the standards-based curriculum needs of schools in the region and in increasing awareness of, and participation in, the Museum's school programs.

Session Title: Living and Learning Evaluation: Teaching Evaluation Through Visual, Narrative and Performative Practice
Skill-Building Workshop 677 to be held in Calvert Ballroom Salon B on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
A Rae Clementz,  University of Illinois at Urbana-Champaign,  clementz@uiuc.edu
April Munson,  University of Illinois at Urbana-Champaign,  amunson2@uiuc.edu
Abstract: Attendees will leave this skill-building workshop able to teach various aspects of evaluation through unconventional forms of representation and exploration, such as puzzle maps, concept maps, metaphor, poetics, game design, and role play. The session lets attendees experience alternative visual and performative conceptualizations of the field of evaluation, understandings of theory, and implementation of methods. As the field strives to attract members of various disciplines, this approach recognizes that those members bring varied expertise and learning styles. Attendees will have the opportunity to investigate visual, narrative, and performative representations, and to work toward creating their own, best suited to their own understanding and teaching style.

Session Title: Evaluation in Federal Agencies: What Shapes It, and How Could the American Evaluation Association be Part of the "What"?
Panel Session 678 to be held in Calvert Ballroom Salon C on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Michael Morris,  University of New Haven,  mmorris@newhaven.edu
Discussant(s):
Debra Rog,  Westat,  debrarog@westat.com
Abstract: The Forum will explore how state-of-the-art knowledge and expertise in evaluation can be more effectively linked to the formulation of evaluation policy at the federal level. Panelists from three different federal agencies will address the following questions: (1) How is evaluation policy established in their agency? (2) What types of evaluation-related input would their agency welcome from a professional organization such as the American Evaluation Association? (3) What are the means through which AEA could provide such input? Against this background, panelists will also discuss the following: To what extent will the 2008 Presidential election and its aftermath present opportunities for the professional evaluation community to play a greater role in the formulation of evaluation policy? What factors are likely to facilitate or hinder this influence? When a professional organization endeavors to elevate its public profile at the federal level, what cautionary tales should it be mindful of?
Overview
Wendell Primus,  United States House of Representatives,  wendell.primus@mail.house.gov
Wendell Primus will provide an overview of key issues in evaluation at the federal level, setting the stage for the panelists' presentations on specific agencies. Dr. Primus is especially well-suited to this task. He is currently Health Policy Advisor to the Speaker of the House of Representatives, and his previous positions include Minority Staff Director of the Joint Economic Committee for the U.S. Congress, Director of Income Security for the Center on Budget and Policy Priorities, Deputy Assistant Secretary for Human Services Policy at the Department of Health and Human Services, and staff director for the Subcommittee on Human Resources of the House Ways and Means Committee. Dr. Primus received his Ph.D. in economics from Iowa State University.
Evaluation at the Centers for Disease Control and Prevention
Thomas Chapel,  Centers for Disease Control and Prevention,  tchapel@cdc.gov
Thomas Chapel will discuss the process of strategic planning and evaluation design at the Centers for Disease Control and Prevention, where he serves as Senior Evaluation Scientist in the Office of Workforce and Career Development. An author of a number of articles and chapters on evaluation, he has an MA in public policy and an MBA, both from the University of Minnesota.
Evaluation at the National Institute of Justice
Patrick Clark,  National Institute of Justice,  patrick.clark@usdoj.gov
Patrick Clark will focus on evaluation at the National Institute of Justice, where he is Acting Chief of the Evaluation Research Division. He has 30 years of experience in evaluation research in the criminal and juvenile justice systems, and has a Ph.D. in psychology from Michigan State University.
Evaluation at the National Science Foundation
Bernice Anderson,  National Science Foundation,  banderso@nsf.gov
Bernice Anderson will examine the approach to evaluation taken in the Directorate for Education and Human Resources at the National Science Foundation (NSF), where she serves as Senior Advisor for Evaluation. Her publications include Breaking the Barriers: Helping Female and Minority Students Succeed in Mathematics and Science. Dr. Anderson received her doctorate in education from Rutgers University.

Session Title: Evaluation Within Partnerships: Working With Community Groups
Multipaper Session 679 to be held in Calvert Ballroom Salon E on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Mary T Crave,  University of Wisconsin,  crave@conted.uwex.edu
“Catch 'Em Being Good” Cooperative Extension Service Teams Up with Schools to Promote and Evaluate the School Wide Positive Behavior Support Program
Presenter(s):
Kerri Wade,  West Virginia University,  kerri.wade@mail.wvu.edu
Allison Nichols,  West Virginia University,  ahnichols@mail.wvu.edu
Abstract: In collaboration with the West Virginia Board of Education, the West Virginia Extension Service developed an evaluation model for measuring the success of the School Wide Positive Behavior Support Program (SWPBS) in elementary schools throughout West Virginia. In this model, the county Extension agent serves in two supportive roles for the core team (made up of school administrators, teachers, and support staff): coach and evaluator. Other Extension educators support the county agent by providing technical assistance. The results of the program evaluation showed decreases in the number of disciplinary referrals over four years at Grandview Elementary School in Charleston, WV, while the collaborative evaluation process produced a successful SWPBS model that has been institutionalized throughout West Virginia's educational system. This presentation will illustrate an example of a state university, through its Cooperative Extension Service, effecting systemic change in a state institution through a systematic evaluation effort.
An Innovative Approach for Building Evaluation Capacity of Grassroots Level Financial Educators Including Extension Agents
Presenter(s):
Koralalage Jayaratne,  North Carolina State University,  jay_jayaratne@ncsu.edu
Angela Lyons,  University of Illinois at Urbana-Champaign,  anglyons@uiuc.edu
Lance Palmer,  University of Georgia,  lpalmer@uga.edu
Abstract: A study revealed that grassroots-level financial educators, including the Extension educators who deliver financial education programs, lack adequate skills and tools to evaluate their educational programs. The National Endowment for Financial Education supported a project to address this need nationally. Under this project, an online evaluation resource kit was developed to help financial educators. The resource kit has two major components. The first is an online database for designing customized evaluation tools, offering a variety of evaluation options. The second is a guiding manual that helps users understand basic evaluation concepts and provides instructions for using the evaluation database. The purpose of this paper is to discuss how this online resource kit can be used to build the evaluation capacity of Extension educators. The project contributes to Extension evaluation practice by providing an innovative evaluation resource kit to Extension educators.
A Recipe for Understanding Food Safety: Using a Concept-oriented Theoretical Frame for Eliciting Adult Food Service Employees' Prior Knowledge
Presenter(s):
Jason Ellis,  University of Nebraska, Lincoln,  jellis2@unl.edu
Abstract: The Extension system in many states is responsible for training foodservice employees to handle food properly, with the intention of reducing the likelihood of foodborne illness. Despite training, foodservice employees, including workers in schools, hospitals, and restaurants, consistently fail to comply adequately with core tenets of safe food handling. This evaluative study combined two novel educational theories to produce a compelling portrait of previously unknown educational needs and opportunities among adult foodservice workers by investigating the first phase of curriculum design: elaboration of prior knowledge. Developing an evaluative study on a theoretical foundation that incorporates multiple disciplines can yield fresh insights into a problem, in this case an understanding of the poor uptake of conventional training that has eluded other researchers. These results illuminate the necessity of including participants' prior knowledge in needs assessments so that program developers consciously consider how best to teach the learner, not just present the content.
Evaluating Oregon's Food Stamp Nutrition Education Program: Issues in Capacity Building and Compliance
Presenter(s):
Marc Braverman,  Oregon State University,  marc.braverman@oregonstate.edu
Lauren Tobey,  Oregon State University,  lauren.tobey@oregonstate.edu
Carolyn Raab,  Oregon State University,  raabc@oregonstate.edu
Jill Murray,  Oregon State University,  jill.murray@oregonstate.edu
Sally Bowman,  Oregon State University,  bowmans@oregonstate.edu
Abstract: This paper will discuss challenges and approaches used by Oregon State University's Extension Family and Community Development Program in evaluating Oregon's Food Stamp Nutrition Education program. Funded by USDA, FSNE is a major program within Oregon, serving 19,800 people in 2005 through schools, housing complexes, food pantries, and other settings. Evaluation information required and funded by USDA emphasizes program outputs; outcome evaluation is encouraged but little specific guidance has been provided. Consequently, state-coordinated attempts at outcome evaluation have been somewhat uneven, and local program information needs have been largely overlooked. The most successful instance of outcome evaluation was an approach developed and coordinated at the state level for sessions delivered at school sites. This paper will describe Oregon Extension's experience in evaluating FSNE, and analyze directions for future improvement. It will examine how evaluations can satisfy information needs of federal funders, state audiences (legislators, administrators, etc.), and local program staff.

Session Title: Evaluations of Reading and Literacy Programs
Multipaper Session 680 to be held in Fairmont Suite on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Edith Stevens,  Macro International Inc,  edith.s.stevens@orcmacro.com
Comparing Self-report Logs with Classroom Observation of Reading Instruction
Presenter(s):
David Quinn,  Chicago Public Schools,  dwquinn@cps.k12.il.us
Kelci Price,  Chicago Public Schools,  kprice1@cps.k12.il.us
Annette Marek,  Chicago Public Schools,  annettemarek@gmail.com
Alvin Quinones,  Chicago Public Schools,  agquinon@uchicago.edu
Mangi Arugam,  Chicago Public Schools,  marugam@cps.k12.il.us
Abstract: The purpose of this evaluation was to assess the implementation and use of a district-created guidebook of reading instruction practices. Classroom observations were conducted over three months in kindergarten through twelfth-grade classrooms. A total of 70 classrooms were observed; classrooms were observed twice, for a total of 138 observations. Each class was observed for an entire class period, approximately 45 minutes to one hour. A second sample of teachers was recruited from elementary and high schools across the district to complete self-report reading instruction logs for their classes. Each teacher was asked to complete up to sixty logs, rotating through a sample of students in their classes. The teacher logs focused on the same reading instruction topics as the observation protocol. Results from the observations were compared with findings from the self-report logs, and similarities and differences in findings were noted.
Criteria, Interferences, and Flexibility: Issues From a School District Evaluation
Presenter(s):
Linda Mabry,  Washington State University, Vancouver,  mabryl@vancouver.wsu.edu
Abstract: In this paper, I propose to report on a four-year evaluation of a project to improve student literacy in four large high schools in one school district. With federal funding, the district implemented a Smaller Learning Communities (SLC) program involving new or revised reading curricula and professional development in literacy instruction both for teachers of literacy/English classes and for teachers of non-literacy-related classes. The schools also implemented grade-level advisories intended as supportive communities to which students would belong throughout their high school careers; peer mentoring of freshmen by upperclassmen to ease the transition to high school; and portfolios for showcasing students' work, sharing it, along with student plans, with parents, and facilitating a productive transition to postsecondary life. State test scores showed improved literacy outcomes. Communities developed in reading classes for struggling and advanced students, but not for students working at grade level, and not in most advisories, especially those for upperclassmen.
Measuring the Fidelity of Literacy Programs: No Shortcuts
Presenter(s):
Nancy Carrillo,  Albuquerque Public Schools,  carrillo_n@aps.edu
Abstract: A large urban school district requested an evaluation of the many reading programs implemented in its elementary schools. Previous research using school-level data had suggested that the type of reading program did not influence assessment outcomes, but those results were not well accepted, and a more thorough evaluation was requested. Stakeholders did not see the evaluators as unbiased, and some believed that program fidelity was a key factor that had not been considered. I convened a committee of stakeholders with disparate opinions to assist in designing the main data collection tool. This evaluation changed the unit of analysis from school to student and included measures of program fidelity. Results are quite similar to those found in the past: there are few differences between programs when student and school measures are included, nor were fidelity measures found to affect outcomes.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: An Evaluation of Ten Years of Progress in an Autistic Impaired Preschool Program
Roundtable Presentation 681 to be held in Federal Hill Suite on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
Carmen Jonaitis,  Western Michigan University,  cjonaiti@kresanet.org
Jinhai Zhang,  Western Michigan University,  jinhaizhang@hotmail.com
Abstract: A university professor who was instrumental in designing the on-site practicum at a school for children with disabilities requested this evaluation. The purpose of this study was to determine the efficacy of an autistic impaired preschool program in teaching preschool children with developmental disabilities the skills needed to succeed in a less restrictive learning environment; for some children this means participation in kindergarten, for others participation in a less restrictive special education program. This evaluation analyzed the number of skills children achieve after entering the program and the percentage of kindergarten readiness skills achieved before they leave it. Parent satisfaction with the program was also evaluated. The intent of this evaluation was to determine which areas of the program can be improved to increase student success. The audience included the practicum coordinator who assists in overseeing the practicum, the graduate assistants responsible for training and supervising the classroom tutors, twelve program staff, and the school psychologist who has participated in program design and implementation. Additional stakeholders include parents, local kindergarten teachers, practicum students, and school administrators.
Roundtable Rotation II: Conducting Successful Field Research in School-based Settings
Roundtable Presentation 681 to be held in Federal Hill Suite on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
David Dobrowski,  First 5 Monterey County,  david@first5monterey.org
Raul Martinez,  Harder & Company Community Research,  rmartinez@harderco.com
Abstract: In 2006, First 5 Monterey County worked with 25 schools, representing 11 districts, to implement the Kindergarten Readiness Assessment, a study designed to provide a snapshot of kindergarteners' readiness to begin school. The assessment used four tools based on the National Education Goals Panel's definition of school readiness to gather information about incoming kindergarteners. While First 5 had previously sponsored the Kindergarten Readiness Assessment, it did so with far fewer schools. This roundtable will explore the processes used to successfully collect 1,525 child surveys, 1,485 family surveys, 1,203 matched child and family surveys, and 74 kindergarten teacher surveys. Specifically, we will describe strategies undertaken to achieve high response rates, obtain consent and buy-in from schools and districts, and ensure quality data collection. We will also identify challenges encountered during field operations, offer tips to facilitate the successful implementation of assessments in school-based settings, and invite audience discussion and feedback.

Session Title: Issues in Doing Randomized Trials in Educational Evaluation
Multipaper Session 682 to be held in Royale Board Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Burke Johnson,  University of South Alabama,  bjohnson@usouthal.edu
Conducting a Randomized Control Trial in Middle Schools: Challenges and Solutions
Presenter(s):
Kelly Feighan,  Research for Better Schools,  feighan@rbs.org
Jill Feldman,  Research for Better Schools,  feldman@rbs.org
Abstract: With funding awarded by the US Department of Education through the Striving Readers initiative, eight Memphis middle schools are participating in a four-year randomized control trial (RCT) of an intervention that targets struggling readers (defined as students two or more grade levels behind in reading). Researchers randomly selected 480 eligible students to participate in the intervention, and are comparing their progress with a control group of students who instead receive conventional English Language Arts or reading instruction. Researchers reflect on the successes and challenges of conducting the RCT, as they work with a strong district team to ensure adherence to the originally proposed design while balancing the need to accommodate local issues and maintain schools' support of the study, which is critical to its success.
Obtaining Buy-In to Conduct Randomized Controlled Trials in Schools: Lessons Learned From the Communities in Schools (CIS) National Evaluation
Presenter(s):
Heather Clawson,  Caliber an ICF International Company,  hclawson@icfi.com
Eric Metcalf,  Communities in Schools, Central Texas,  emetcalf@cisaustin.org
Mike Massey,  Communities in Schools, Charlotte-Mecklenburg,  mmassey@cischarlotte.org
Susan Siegel,  Communities in Schools,  siegels@cisnet.org
Abstract: Communities In Schools, Inc. (CIS) is a nationwide initiative to connect community resources with schools to help at-risk students successfully learn, stay in school, and prepare for life. CIS is currently in the midst of a comprehensive, rigorous three-year national evaluation, culminating in a randomized controlled trial (RCT) to ascertain program effectiveness. While randomized controlled trials are widely considered to be the “gold standard” in research, it is generally difficult to obtain buy-in from program, school, and district staff to conduct them. In this presentation, we will draw from our experience working with Austin, TX and Charlotte, NC public schools to provide evaluators with strategies for enlisting cooperation in the conduct of highly rigorous evaluations. We will also present a step-by-step plan to implement an RCT within a school. This presentation will include the unique insights of both evaluators and “front-line” program staff.
The Consequences of No Child Left Behind: Challenges to Achieving the "Gold Standard" in a Large Urban School District
Presenter(s):
Cheri Hodson,  Los Angeles Unified School District,  cheri.hodson@lausd.net
Regino Chavez,  Los Angeles Unified School District,  regino.chavez@lausd.net
Abstract: No Child Left Behind (NCLB) sets forth rigorous requirements to ensure that research is scientifically based. Randomized assignment to at least two conditions is an essential component of the “gold standard” of research. However, how does this actually play out in practice? What influences whether this type of study can actually be carried out within large urban school districts? This paper relates the experience of three research evaluations undertaken in the spirit of NCLB within a large school district in California.

Session Title: Recovery/Resilience, Trajectories, Co-occurring Disorders, and Real Time Program Evaluation
Multipaper Session 683 to be held in Royale Conference Foyer on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Garrett E Moran,  Westat,  garrettMoran@westat.com
The Process of Mental Health Recovery and Resiliency in Children and Adolescents
Presenter(s):
Erica Gosselin,  Mental Health Center of Denver,  erica.gosselin@mhcd.org
Riley Rhodes,  Mental Health Center of Denver,  riley.rhodes@mhcd.org
Kate DeRoche,  Mental Health Center of Denver,  kathryn.deroche@mhcd.org
Antonio Olmos,  Mental Health Center of Denver,  antonio.olmos@mhcd.org
Abstract: Using constructivist grounded theory, researchers at the Mental Health Center of Denver investigated the process of mental health recovery and resiliency in children and adolescents. Data were gathered through interviews with children and adolescents currently receiving mental health services, their parents or guardians, their teachers, and the clinicians providing those services; all interviews were fully transcribed, and interviewers wrote analytic memos. The four types of grounded theory coding described by Charmaz (2006) were followed to create a preliminary theory of the process of mental health recovery and resiliency in children and adolescents. Future research will continue the qualitative investigation as well as the development of one or more quantitative measures of mental health recovery and resiliency.
Co-occurring Disorders: Should We Have Different Outcome Measures?
Presenter(s):
Minakshi Tikoo,  Department of Mental Health and Addiction Services,  minakshi.tikoo@po.state.ct.us
Abstract: This paper raises the question of why performance outcome measures for people with co-occurring disorders should be different. Providing services to and recognizing the needs of people with co-occurring mental health and substance use disorders has been central to the current administration, resulting in policies and special initiatives for this population and a push to collect outcome measures. This paper addresses the need for the field to 1) pause and think about why we want to use outcome measures, 2) limit the number of measures to what the system intends to monitor, 3) understand the “value” added by collecting new and different measures, and 4) understand the cost implications of asking states to modify their data systems to track additional consumer data.
Real Time Evaluation of a Wraparound Program
Presenter(s):
Brian Pagkos,  University at Buffalo,  pagkos@buffalo.edu
Heidi Milch,  Gateway-Longview Inc,  hmilch@gateway-longview.org
Mansoor Kazi,  University at Buffalo,  mkazi@buffalo.edu
Abstract: The presenters will discuss the application of a feasible, innovative methodology to evaluate a wraparound program that provides individualized services to youth with serious emotional disturbances. Through the use of repeated measures, collection of contextual information, single-system design, and binary logistic regression, a greater understanding of program outcomes is achieved. The results of the evaluation will be presented, along with an account of the collaboration between a university research center and a not-for-profit social service agency that forms the core of the evaluation.
Tools for a Mixed Method Approach to Understanding Trajectories of Youth Movement in Out-of-home Care Settings
Presenter(s):
Keren Vergon,  University of South Florida,  vergon@fmhi.usf.edu
Norin Dollard,  University of South Florida,  dollard@fmhi.usf.edu
Ren Chen,  University of South Florida,  rchen@fmhi.usf.edu
Mary Armstrong,  University of South Florida,  armstron@fmhi.usf.edu
Abstract: This paper presents the use of Markov modeling, agent-based modeling, and ethnographic interviewing techniques to understand youth movement into, within, and out of Medicaid-funded out-of-home mental health treatment settings in Florida. Mental health, child welfare, and justice system administrative data sets were used to model movements for youth (N=1,919). Findings showed that Inpatient, Therapeutic Group Care, and Therapeutic Foster Care are relatively stable placements, with 95% of youth found in the same type of location seven days after the first observation; two-thirds of youth leave these placements for less restrictive treatments. Several unexpected youth movements were observed: youth moving directly from the community into Inpatient care without mental health services; youth cycling between justice locations and Inpatient care; and youth leaving Inpatient care and moving directly into the community without mental health services. The discussion includes how these methodological techniques can complement each other and next steps for future research.
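As a hypothetical illustration of the Markov modeling component, the sketch below estimates a one-step transition matrix from placement sequences by counting transitions and row-normalizing; the data layout and category names are assumptions, not the authors' coding.

    import pandas as pd

    # Hypothetical data: one row per youth per observation, ordered in time,
    # with a placement category such as Inpatient, TGC, TFC, or Community.
    df = pd.read_csv("placements.csv").sort_values(["youth_id", "obs_date"])

    # Pair each placement with the next one observed for the same youth.
    df["next_placement"] = df.groupby("youth_id")["placement"].shift(-1)
    pairs = df.dropna(subset=["next_placement"])

    # Row-normalized counts estimate one-step transition probabilities;
    # high diagonal entries indicate relatively stable placements.
    counts = pd.crosstab(pairs["placement"], pairs["next_placement"])
    transition_matrix = counts.div(counts.sum(axis=1), axis=0)
    print(transition_matrix.round(2))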

Session Title: Diverse Approaches to Evaluative Inquiry in Higher Education
Multipaper Session 684 to be held in Hanover Suite B on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Erin Burr,  University of Tennessee,  eburr@utk.edu
Discussant(s):
Summers Kalishman,  University of New Mexico,  skalish@salud.unm.edu
Revisiting Alternative Methods for Validating Course Placement Criteria
Presenter(s):
Howard Mzumara,  Indiana University Purdue University Indianapolis,  hmzumara@iupui.edu
Abstract: A major component for assessing the effectiveness of a placement testing program involves providing appropriate validity evidence for using placement tests in facilitating course placement decisions and academic advising of undergraduate students. This session will provide evaluators with a second look at addressing alternative methods for validating course placement criteria, which involve use of decision theory and logistic regression approaches in providing validity evidence for assessing the utility and appropriateness of placement cutoff scores. The presentation will include an interactive discussion on assessment issues based on the annual validation of ACT's COMPASS Mathematics Placement Test scores for predicting student success in college-level mathematics courses at a large Midwestern university.
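As an illustration of the logistic regression approach to setting placement cutoffs, the sketch below models the probability of course success from a placement score and solves for the score at which that probability reaches a chosen threshold; the file and variable names and the 0.5 threshold are assumptions for illustration only.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical data: placement score plus success (1 = C or better).
    df = pd.read_csv("placement.csv")

    X = sm.add_constant(df["compass_score"])
    fit = sm.Logit(df["success"], X).fit()
    b0, b1 = fit.params

    # Invert logit(p) = b0 + b1*x for the score where P(success) = p_star;
    # at p_star = 0.5 this reduces to -b0/b1.
    p_star = 0.5
    cutoff = (np.log(p_star / (1 - p_star)) - b0) / b1
    print(f"Estimated cutoff score for P(success) = {p_star}: {cutoff:.1f}")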
Improving Course and Faculty Evaluations With a Multi-method Approach
Presenter(s):
Meghan Kennedy,  Neumont University,  meghan.kennedy@neumont.edu
Jake Walkenhorst,  Neumont University,  jake.walkenhorst@neumont.edu
Abstract: Teaching effectiveness is a goal of most instructors but a difficult construct to define clearly. In the past, student ratings have been the primary measure of teaching effectiveness in higher education. This information often comes too late and focuses on teaching shortcomings without providing relevant suggestions and teaching alternatives. Data from student surveys may be valid, but the extensibility and utility of this information are minimal. This has led to the need for a new method of faculty and course evaluation. In an effort to create an effective evaluation methodology, a system was developed that (1) gathers multiple forms of information, (2) uses specific and adaptable questions, and (3) is formative and extensible. The new system transforms a potentially high-stakes evaluation yielding low-value information into a useful evaluation that improves both the student and faculty experience of evaluation.
Collecting Longitudinal Evaluation Data in a College Setting: Strategies for Managing Mountains of Data
Presenter(s):
Jennifer Morrow,  University of Tennessee,  jamorrow@utk.edu
Erin Burr,  University of Tennessee,  eburr@utk.edu
Marcia Cianfrani,  Old Dominion University,  mcian002@odu.edu
Susanne Kaesbauer,  Old Dominion University,  sk's001@odu.edu
Margot Ackermann,  Old Dominion University,  margot.ackermann@gmail.com
Abstract: This presentation will describe various strategies an evaluator can use to manage the large amounts of data associated with longitudinal evaluation in a college setting. The strategies will be presented in the context of Project Writing, a ten-week study conducted by the lead presenter and funded by the Department of Education, whose goal is to examine expressive writing and behavioral monitoring as means of reducing and preventing high-risk drinking in first-year students. We used a variety of methods to manage the enormous amount of data generated by the evaluation of this project. A description of the evaluation strategies, suggestions for other researchers, and a discussion of what worked and what did not will be presented.
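One generic strategy for this kind of multi-wave data management, offered here as a sketch rather than the presenters' actual workflow, is to stack each weekly survey file into a single long-format table keyed by participant and week; file and column names are hypothetical.

    import pandas as pd

    # Hypothetical: one survey file per week of the ten-week study,
    # each containing participant_id plus that week's item responses.
    waves = []
    for week in range(1, 11):
        wave = pd.read_csv(f"survey_week{week:02d}.csv")
        wave["week"] = week
        waves.append(wave)

    # A single long-format table simplifies cleaning, attrition checks,
    # and later longitudinal modeling.
    long_df = pd.concat(waves, ignore_index=True)
    print(long_df.groupby("week")["participant_id"].nunique())  # retention by wave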
Practice-based Inquiry Models for Evaluation and Assessment in Community Colleges
Presenter(s):
William Rickards,  Alverno College,  william.rickards@alverno.edu
Abstract: In higher education, community and technical colleges play a significant role through their attention to teaching and learning, their work with a diversity of needs, and, for many students, their provision of a critical foundation for transfer to advanced coursework and degrees. A number of strategies for addressing student learning outcomes have developed that build on learning research and assessment practices, and these need to be explored for application across colleges and universities. In implementation, how do student learning outcomes affect curriculum practices? How do assessments become the means for further investigation and program development? In this context, inquiry becomes a critical element in designing innovations as well as in evaluating them and understanding their impact and applications. This study reports on a survey of inquiry designs used in 24 community and technical colleges, identifying some basic structures and elements.

Session Title: Learning From Evaluation in Service of Social Justice: Who learns? What is Learned? And Why Does it Matter?
Panel Session 685 to be held in Baltimore Theater on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Presidential Strand
Chair(s):
Sharon Brisolara,  Evaluation Solutions,  evaluationsolutions@hughes.net
Discussant(s):
Saumitra SenGupta,  APS Healthcare,  ssengupta@apshealthcare.com
Abstract: This panel focuses on the character and critical importance of learning that takes place in social-justice oriented evaluation. Two conceptual papers address philosophical and political perspectives on why and how evaluation practice committed to advancing social justice presents meaningful and important opportunities for learning and what the character of that learning is. Presenters address ways of attending to social justice, how attending to these issues shapes the role of the evaluator, and what implications attending to social justice has for the profession, the communities we serve, and the larger society. Two practice-oriented papers will address the significant learning that has taken place within evaluations attending to social justice concerns. Practitioners representing diverse cultural and political contexts who use identity based and other evaluation models address what attending to social justice looks like within an evaluation and offer examples of learning that can occur in this genre of evaluation practice.
Transformative Evaluation in Service of Social Justice
Donna Mertens,  Gallaudet University,  donna.mertens@gallaudet.edu
Raychelle Harris,  Gallaudet University,  raychelle.harris@gallaudet.edu
Heidi Holmes,  Gallaudet University,  heidi.holmes@gallaudet.edu
The transformative paradigm is a metaphysical framework that places social justice and the advancement of human rights at the forefront in evaluation work. This presentation will focus on basic belief systems and theoretical frames that have emerged from the evaluation community through their engagement with culturally complex communities who have been pushed to the margins throughout history. The transformative paradigm is commensurate with feminist, critical race, queer, and disability rights theories, and offers ways to explore opportunities for evaluators, members of complex cultural groups, and society writ large to learn how to better serve social justice and human rights agendas. Implications from theory will be explored in terms of the role of the evaluator and strategies for community engagement.
Contextualizing Social Justice in Evaluation
Jennifer Greene,  University of Illinois at Urbana-Champaign,  jcgreene@uiuc.edu
Jori Hall,  University of Illinois at Urbana-Champaign,  jorihall@uiuc.edu
Evaluation in service of social justice offers multiple and diverse opportunities for learning and for action - both focused on the legitimization of the perspectives, experiences, and value stances of those with the least power and authority in a given evaluation context. The particular character of injustice and inequity in a given context - historically and in present times - importantly defines the possibilities of evaluation to authentically engage with issues of justice. To vividly illustrate both the potential and the contextual boundaries of an evaluation approach committed to social justice, this presentation will use the medium of performance. Ongoing fieldwork of an evaluation that is particularly attentive to understanding the experiences and values held by those most often marginalized within a particular setting will provide the contexts for this performative presentation.
Who is There? Wading Through Labels to Reach Meaning
Denice Cassaro,  Cornell University,  dac11@cornell.edu
Through the illustration of an evaluation spanning a five-year period, I will demonstrate how social justice and advocacy were incorporated through a focus on identities and their intersections in the evaluation process. I will examine ways of incorporating into the evaluation process an educational component that attempts to broaden understandings of identities (beyond binaries like female/male, black/white, gay/heterosexual), to illustrate the significance of the intersections of identities, and to shed light on how systemic oppression is perpetuated. The hope is to further efforts toward social change and justice, with evaluation practice serving as a powerful medium in that process. Similarities and differences in approaches based on various identities and their intersections will also be explored as they relate to advocacy within the evaluation process.
From Social Justice to Better Evaluations
Katrina Bledsoe,  The College of New Jersey,  katrina.bledsoe@gmail.com
Focusing on social justice is useful in fostering a deeper understanding of the evaluation context and in promoting the utilization of evaluation findings. This presentation focuses on how the inclusion of a social justice perspective can lead to increased accuracy in program development and evaluation, examining the author's work with the Trenton Obesity Prevention Study as an example of the usefulness of a social justice position.

Session Title: Measuring Fidelity and Assessing Impact of Service Interventions in Ohio's Title IV-E Waiver Evaluation
Multipaper Session 686 to be held in International Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Madeleine Kimmich,  Human Services Research Institute,  kimmich@hsri.org
Discussant(s):
Andrea Sedlak,  Westat,  andreasedlak@westat.com
Abstract: Federal waivers to Title IV-E of the Social Security Act enable state child welfare programs to redirect federal funds from foster care to alternative services for children suffering abuse or neglect. Ohio's Title IV-E waiver demonstration project operates in thirteen of Ohio's county-administered public child welfare agencies. The county agencies are experimenting with three promising interventions: family team meetings, supervised visitation, and supports to kinship caregivers. Key to evaluating the impact of targeted services on child outcomes is assessing whether the services as implemented conform to the original model. If fidelity varies across or even within sites, can one expect a measurable outcome effect? What can be learned from varied applications of a single model intervention? Three papers discuss fidelity assessment in a multi-year evaluation of Ohio's Title IV-E waiver demonstration. We describe fidelity measures and offer initial findings, highlighting challenges and limitations to fidelity assessment.
Measuring the Fidelity of Protect Ohio Family Team Meetings
Madeleine Kimmich,  Human Services Research Institute,  kimmich@hsri.org
Amy Stuczynski,  Human Services Research Institute,  astuczynski@hsri.org
The Family Team Meeting model is generally seen as a 'best practice'. Regular meetings, facilitated by a trained professional and bringing together family, friends, service providers, and advocates, can lead to creative and effective solutions to case challenges, ultimately reducing the need for foster care placement and improving permanency outcomes. This paper describes the model adopted by 13 demonstration sites, defines the fidelity measures used, presents fidelity findings, and discusses evaluation challenges. Fidelity is measured using case-level and county-level variables. Key issues encountered include how rigorously to define the model when making judgments about fidelity, how to choose measures that balance the need for specific data against the burden of collecting them, and how best to provide fidelity information to practitioners so as to ensure that there is a model to evaluate.
Supervised Visitation as a Model Intervention
Adrienne Zell,  Human Services Research Institute,  azell@hsri.org
Julie Murphy,  Human Services Research Institute,  murphy@hsri.org
One service delivery model selected by Ohio counties participating in the Title IV-E Waiver is Supervised Visitation, an enhanced visitation program for children in out-of-home care and their parents. This visitation model provides increased consistency and structure, and is expected to improve parent-child interactions and maximize the chance for reunification. Five programmatic elements define this particular model. Challenges involved in determining model fidelity include: providing clear definitions of the model components, uncovering factors which influence how the model is implemented, and differentiating among counties with consistently high fidelity. As required by law, all child welfare agencies offer supervised visitation of some sort; therefore a unique challenge to fidelity evaluation of this intervention is determining how the model fidelity of study counties compares to non-intervention counties with similar elements in place. Along with this discussion, we also pose the methodological question of examining fidelity-dosage at an individual client level.
Supporting Kinship Caregivers
Julie Murphy,  Human Services Research Institute,  murphy@hsri.org
Madeleine Kimmich,  Human Services Research Institute,  kimmich@hsri.org
Six Ohio counties have focused on identifying and supporting kinship caregivers more consistently. They believe that placing children with relatives or friends is less disruptive than formal foster care and ultimately decreases the time children spend in paid placements. Following the kinship model is expected to lead to increased use of kinship settings and more support to these placements (i.e. offering a variety of services and/or subsidies, having designated kinship staff). Measuring adherence to the model is difficult: not all children placed with kin are identifiable in existing data systems, services provided to kinship caregivers are poorly documented, and the Waiver model is not a truly unique approach -- it simply enhances what other counties were already doing to support kin. The paper describes how we have addressed these issues and how, over time, we have adjusted the evaluation plan.

Session Title: Using Systems Tools to Understand Multi-site Program Evaluation
Skill-Building Workshop 687 to be held in Chesapeake Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Molly Engle,  Oregon State University,  molly.engle@oregonstate.edu
Andrea Hegedus,  Centers for Disease Control and Prevention,  ahegedus@cdc.gov
Abstract: Evaluators working on complex multi-site programs must be conscious of systems characteristics, and systems tools can aid the evaluator in evaluating the program effectively. Connecting multi-site programs with overall program objectives can be accomplished with quick diagramming tools showing function, feedback loops, and leverage points for priority decisions. Designed for evaluators responsible for evaluating large multi-site programs, or for evaluators within a specific program of a larger multi-site effort, this workshop will have participants, individually or in small groups, draw a program system and consider its value to the program's goals and objectives. Drawings will be discussed, the method assessed, and insights summarized. The workshop will ask, "What did you learn and how do you intend to use this skill?" along with "What was the value of this experience to you?" This skill-building workshop integrates the sciences of intentional learning, behavioral change, systems thinking and practice, and assessment as functional systems of evaluation and accountability.

Session Title: Challenges and Opportunities in Evaluating Publicly-Funded Programs
Multipaper Session 688 to be held in Versailles Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Rakesh Mohan,  Idaho State Legislature,  rmohan@ope.idaho.gov
The New Federalism and the Paradox of Evaluating State Grant Programs
Presenter(s):
Eileen Poe-Yamagata,  IMPAQ International LLC,  epyamagata@impaqint.com
Abstract: In March 2005, the U.S. Department of Labor initiated the Reemployment and Eligibility Assessment (REA) program, a state grant program designed to reduce Unemployment Insurance (UI) claim durations and erroneous payments to UI beneficiaries. Characteristic of the New Federalism approach to program development and implementation, state grantees were given much flexibility in their program and evaluation designs while incorporating a few designated program components. While these approaches may make more efficient use of federal funding, such programs are, at their philosophical core, difficult to evaluate, especially in light of the increased emphasis on evidence-based programming. The New Federalism approach may result in programs implemented differently across states, with multiple or differing goals and no unified program design. This paper will describe the challenges and solutions encountered in helping states develop rigorous state-centric evaluation designs and the approaches used to help acquire the necessary data to measure program impacts.
Creating an Integrated Data System Across Publicly-funded Agencies in San Francisco
Presenter(s):
Deborah Sherwood,  San Francisco Department of Public Health,  deborah.sherwood@sfdph.org
Abstract: Program evaluation in government agencies is often hampered by the lack of systems for sharing data across agencies that serve the same clients. This presentation will describe the creation of an integrated data warehouse containing comprehensive data from four child-serving systems in San Francisco: children's mental health, child welfare, juvenile probation, and the school district. The data warehouse, which is updated quarterly, can be used by frontline staff to view individual client histories across systems to improve care coordination, by managers to produce aggregate reports and geo-maps for program planning, and by evaluators to examine service utilization and outcomes for multi-system clients. We will describe the steps involved in obtaining intra-departmental cooperation for data-sharing, establishing data security and quality control measures, and matching youth served in multiple systems. In addition, we will discuss the evaluation challenges and opportunities afforded by access to life-time histories of service utilization across systems.
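A minimal sketch of the cross-system matching step described above, using a simple deterministic match on normalized name and date of birth; a production warehouse would likely use more robust probabilistic linkage, and all file names and fields here are hypothetical.

    import pandas as pd

    def normalize(df):
        # Normalize identifying fields so that trivial formatting
        # differences (case, whitespace) do not block a match.
        df = df.copy()
        df["last_name"] = df["last_name"].str.strip().str.upper()
        df["first_name"] = df["first_name"].str.strip().str.upper()
        df["dob"] = pd.to_datetime(df["dob"])
        return df

    mental_health = normalize(pd.read_csv("mh_clients.csv"))
    child_welfare = normalize(pd.read_csv("cw_clients.csv"))

    # Deterministic join: same first name, last name, and date of birth.
    matched = mental_health.merge(
        child_welfare, on=["first_name", "last_name", "dob"],
        suffixes=("_mh", "_cw"),
    )
    print(f"Youth appearing in both systems: {len(matched)}")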
