Session Title: Evaluation as a Learning Tool: Maximizing Outcomes Using Strategic Formative Evaluation
Panel Session 659 to be held in Liberty Ballroom Section A on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Linda Thurston, Kansas State University, lpt@ksu.edu
Discussant(s):
Jan Middendorf, Kansas State University, jmiddend@ksu.edu
Abstract: A vital aspect of evaluation work within higher education is assisting academic programs and externally funded projects in developing successful programs and/or continuously improving program outcomes. The Office of Educational Innovation and Evaluation at Kansas State University uses formative evaluation to provide feedback to program personnel as they focus on program development and improvement. To provide the most focused and useful information, we employ strategic formative evaluation built on four basic practices: understanding clients' long-term expected outcomes; understanding clients' intended use of the data we collect; asking the right evaluation questions; and reporting our findings in a usable form. This panel will present case studies illustrating these strategic formative evaluation practices across several projects. A discussant will make recommendations for future research and practice.
Session Title: Theories of Evaluation TIG Business Meeting and Presentation: Evaluation Theory: Consolidate it, Nurture it, Learn it, and Teach it. But How?
Business Meeting Session 660 to be held in Liberty Ballroom Section B on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Theories of Evaluation TIG
TIG Leader(s):
Bernadette Campbell, Carleton University, bernadette_campbell@carleton.ca
Presenter(s):
Bernadette Campbell, Carleton University, bernadette_campbell@carleton.ca
Marvin Alkin, University of California, Los Angeles, alkin@gseis.ucla.edu
Discussant(s):
Melvin Mark, Pennsylvania State University, m5m@psu.edu
William Shadish, University of California, Merced, wshadish@ucmerced.edu
Abstract: Much discussion in recent (and not so recent) years has centered on what is wrong with or missing from evaluation theory. Ten years ago, in his AEA presidential address, Will Shadish declared that “evaluation theory is who we are” and encouraged us to “consolidate it, nurture it, learn it, and teach it.” Perhaps easier said than done. Problems with evaluation theory debated recently include (but are not limited to) (a) the predominance of prescriptive over descriptive theory, (b) the lack of contingency theories for practice, (c) an unhealthy (and perhaps uncritical) focus on specific “brand-named” evaluation approaches rather than on better understanding the important issues facing the field, and (d) wide variability across the profession in formal and informal training in evaluation theory. A convincing case has been made for the importance of developing better evaluation theory and for recognizing the centrality of theory to our field. What we need now are specific ideas about precisely how to consolidate, nurture, learn, and teach evaluation theory. In this panel discussion, we ask a group of prominent evaluation theorists to begin laying some of the more specific groundwork for carrying out Shadish's inspirational charge. Following a brief presentation outlining the central issues in evaluation theory development, discussants will share their thoughts on what it will take to advance evaluation theory along any or all of the lines Shadish suggested. What are some of the specific barriers we face? And what are some suggestions for beginning to overcome them?
Session Title: Telling Your Program's Story: How to Collect, Create, and Deliver an Effective Success Story
Skill-Building Workshop 661 to be held in Mencken Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Rene Lavinghouze, Centers for Disease Control and Prevention, shl3@cdc.gov
Ann Price, Community Evaluation Solutions Inc, aprice@communityevaluationsolutions.com
Abstract: Prevention programs are often unable to demonstrate outcomes and impacts for several years. Communicating success during program development and implementation is therefore important for building program momentum and sustainability. Using a workbook developed by the Centers for Disease Control and Prevention's Division of Oral Health entitled Impact and Value: Telling Your Program's Story, this session will focus on 1) using success stories throughout the program's life cycle and 2) using success stories to identify themes and promising practices across multiple sites and programs. This practical, hands-on presentation will define success stories, discuss types of success stories, and describe methods for systematically collecting and using success stories to promote your public health program and inform policy decisions. Discussion will include use of the workbook and lessons learned in conducting a stakeholder forum to collect success stories. Attendees will create a 10-second sound-bite story and begin drafting a success story.
Session Title: Where Evaluation and Learning Technology Innovations Meet
Multipaper Session 662 to be held in Edgar Allen Poe Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Tamara J Barbosa, PhD's Consulting, dr.barbosa@phdsconsulting.com
Session Title: Collaborative Evaluations: Successes, Challenges, and Lessons Learned
Multipaper Session 663 to be held in Carroll Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Nakia James, Western Michigan University, nakiasjames@sbcglobal.net
Session Title: Making Sense of Mobility: Household Survey Data From Comprehensive Community Initiatives, Implications for Evaluation and Theory
Panel Session 664 to be held in Pratt Room, Section A on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Cindy Guy, Annie E Casey Foundation, cguy@aecf.org
Discussant(s):
Claudia Coulton, Case Western Reserve University, claudia.coulton@case.edu
Abstract: Comprehensive community initiatives (CCIs) seek to build resident capacity, raise and direct resources, and enhance services and supports to improve the well-being of children and families living in distressed communities. Recently analyzed household survey data from formative and impact studies of CCIs point to high levels of resident mobility in participant communities, posing challenges to the evaluation and theory of these initiatives. In a panel chaired by Cindy Guy of the Annie E. Casey Foundation, representatives of two national evaluation teams - the Urban Institute team conducting data analysis for Casey's Making Connections Initiative and the NYU/Wagner School evaluators of the Robert Wood Johnson Foundation's Urban Health Initiative (UHI) - are joined by the policy advisor at one of the UHI sites and staff of the Center for Urban Poverty and Community Development at Case Western Reserve University to discuss these challenges and their implications for evaluation and theory building moving forward.
Session Title: Success Measures: Learning From Community Development Results Through Participation, Common Tools, Shared Data
Panel Session 665 to be held in Pratt Room, Section B on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Maggie Grieve, NeighborWorks America, mgrieve@nw.org
Discussant(s):
Dawn Hanson Smart, Clegg & Associates, dsmart@cleggassociates.com
Nancy Kopf, NeighborWorks America, nkopf@nw.org
Abstract: Success Measures is a participatory evaluation approach based on a comprehensive set of outcome indicators and offered through a package of evaluation services and web-based technology. It was developed by community-based practitioners, funders, and evaluators to ensure relevance across a broad spectrum of organization sizes, locations, cultures, and programs within the community development field. This panel brings together three nonprofits that have used Success Measures to evaluate results and learn from their work while integrating ongoing evaluation into their programs. Panelists are from a California-based nonprofit serving farm worker housing needs, a Mississippi Delta community development corporation, and a multi-service community development organization in Philadelphia. An intermediary funder, NeighborWorks America, will highlight strategies to build grantee evaluation capacity across a broad member network and move toward greater accountability and shared learning. Serving as discussant, a Success Measures evaluation trainer will reflect on the different learning experiences shared by the panelists.
Roundtable Rotation I: Developing a Conceptual Framework for Evaluating Policy Change
Roundtable Presentation 666 to be held in Douglas Boardroom on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
Susan Ladd, Centers for Disease Control and Prevention, sladd@cdc.gov
Jan Jernigan, Centers for Disease Control and Prevention, jjernigan1@cdc.gov
Alice Ammerman, University of North Carolina, Chapel Hill, alice_ammerman@unc.edu
Semra Aytur, University of North Carolina, aytur@email.unc.edu
Beverly Garcia, University of North Carolina, beverly_garcia@unc.edu
Amy Paxton, University of North Carolina, apaxton@email.unc.edu
Abstract: Reducing the population burden of heart disease and stroke requires multi-level policies that address political, environmental, institutional, organizational, and social systems. Few models exist to guide evaluation of policy efforts. The Centers for Disease Control and Prevention (CDC) Division for Heart Disease and Stroke Prevention and the University of North Carolina (UNC) collaborated to develop a framework for evaluating policy change interventions. An expert panel composed of CDC and other nationally recognized evaluators was engaged to examine existing models, identify gaps and barriers, and develop the framework. The session will present the framework, describe the development process, and outline anticipated next steps. This roundtable offers an opportunity for discussion and input on the framework for evaluating policy change as well as its extension to system change.
Roundtable Rotation II: Development of an Outcome Monitoring System for Mental Health Programs in a Large Regional Health Authority
Roundtable Presentation 666 to be held in Douglas Boardroom on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
Colleen Lucas, Calgary Health Region, colleen.lucas@calgaryhealthregion.ca
Lindsay Guyn, Calgary Health Region, lindsay.guyn@calgaryhealthregion.ca
Abstract: As the primary provider of health care for over a million people, the Calgary Health Region needs an efficient method for routinely assessing the performance of the 148 mental health programs for which it is responsible. This presentation describes a pilot study of five programs that assessed the feasibility of implementing a region-wide outcome monitoring system. Several measurement instruments, including the Behavior and Symptom Identification Scale 24, the Multnomah Community Ability Scale, and the Outcome Questionnaire-45, were administered at both admission and discharge; over 3,500 outcome measures were completed from March 2005 to December 2006. The pilot provided an opportunity to assess the efficacy of the various psychometric instruments for different client populations and clinical settings. It also yielded valuable logistical lessons, which have been instrumental in the ongoing development of a practical outcome monitoring process for mental health programs in this large, diverse health organization.
Session Title: Building Capacity for Evaluation: A Tale of Four National Youth Development Organizations
Panel Session 667 to be held in Hopkins Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Suzanne Le Menestrel, United States Department of Agriculture, slemenestrel@csrees.usda.gov
Karen Heller Key, National Human Services Assembly, kkey@nassembly.org
Discussant(s):
Hallie Preskill, Claremont Graduate University, hallie.preskill@cgu.edu
Abstract: This panel session features presentations from researchers representing four diverse national youth development organizations, all members of the National Collaboration for Youth, a coalition of National Human Services Assembly member organizations with a significant interest in youth development. The National Collaboration for Youth includes more than fifty national nonprofit youth development organizations. The presenters are currently engaged in evaluation capacity-building efforts focused on four major themes: (1) measurement and instrumentation; (2) training; (3) obtaining buy-in and participation; and (4) applying evaluation results to youth development practice. The presenters will share specific strategies that their respective organizations are developing to address one or more of these themes. Throughout the panel, the presenters will describe ways in which participants can apply these strategies in their own work.
Session Title: Peer Reviews for Independent Consultants: New Peer Reviewer Orientation
Skill-Building Workshop 668 to be held in Peale Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Independent Consulting TIG
Presenter(s):
Sally Bond, The Program Evaluation Group, usbond@mindspring.com
Marilyn Ray, Finger Lakes Law and Social Policy Center Inc, mlr17@cornell.edu
Abstract: At AEA 2003, the Independent Consulting TIG embarked on the professional development of its members through a Peer Review process that provides collegial feedback on evaluation reports. The IC TIG appointed co-coordinators to develop and recommend guidelines, a framework, and a rubric for conducting Peer Reviews within the TIG's membership. At AEA 2004, the process, framework, and rubric the co-coordinators had developed were presented and revised during a think tank. Volunteer Peer Reviewers were recruited and oriented to the Peer Review process and rubric. This update and orientation process was repeated at AEA 2005 and AEA 2006. In 2007, we propose a skill-building workshop during which we will provide an update on the Peer Review project, offer a forum for volunteer reviewers to share their experiences, and orient new reviewers.
Session Title: Lessons Learned: Wrapping up our Evaluation of an Advocacy Campaign
Demonstration Session 669 to be held in Adams Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Ehren Reed, Innovation Network Inc, ereed@innonet.org
Presenter(s):
Jennifer Bagnell Stuart, Innovation Network Inc, jabstuart@innonet.org
Abstract: Given the nonprofit sector's current focus on results and accountability, and the innate challenges of evaluating advocacy efforts and policy initiatives, there has recently been a groundswell of research on advocacy evaluation. The evaluation of advocacy and public policy initiatives involves a number of inherent challenges: outcomes may be far beyond the scope of any single organization or program, and contextual factors beyond the organization's control can leave it short of its desired outcome despite brilliant strategies and flawless execution. This demonstration will spotlight the strategy Innovation Network used to evaluate one campaign to enact US federal policy change. Innovation Network will describe its methodology, discuss the challenges inherent in evaluating this type of campaign, and share the lessons learned.
Roundtable Rotation I: Using a Shared On-line Database to Address Multi-partner Project Management and Evaluation Issues
Roundtable Presentation 670 to be held in Jefferson Room on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
Randy Ellsworth, Wichita State University, randy.ellsworth@wichita.edu
Larry Gwaltney, Allied Educational Research and Development Services, tgwaltney@cox.net
Patrick Hutchison, Wichita State University, patrick.hutchison@wichita.edu
Abstract: This paper describes an evaluation project involving a partnership among four agencies (a county health agency, a Parents-as-Teachers program, a private nonprofit wellness center, and an urban school district preschool) designed to provide seamless services to high-need families so that children ages 0-5 reach kindergarten with the skills necessary for success in school. Because the partners were neither housed together nor shared a common database, the evaluators worked with them to develop a shared database, accessible by all, for entering and tracking services provided to families in the program. Issues encountered included (a) involving agency attorneys to develop legal procedures enabling agencies to “share” information, (b) creating a common, secure, Internet-accessible, live database for all agencies to use, and (c) developing a monitoring process so that changes made to children's records would be immediately flagged to alert the other agencies.
Roundtable Rotation II: Instructionally Linked Versus Norm Referenced Assessments to Determine Impact Within an Even Start Program Evaluation
Roundtable Presentation 670 to be held in Jefferson Room on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
Zandra Gratz, Kean University, zgratz@aol.com
Abstract: This paper describes the evaluation of a school-based Even Start family literacy program that has been in operation for three years. Youngsters were tested using traditional norm-referenced assessments to generate a normative control expectation. In addition, instructionally linked school-based assessments were accessed to examine change over time in participants. Inferences from each of these paradigms were compared with each other as well as with regular classroom teachers' and parents' appraisals of youngsters' progress. The study found credible evidence that alternative designs, including those relying on data typically maintained by schools, provide sufficient information to support causal inferences.
Session Title: Conducting Multi-method Evaluations
Multipaper Session 672 to be held in D'Alesandro Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Linda Morell, University of California, Berkeley, lindamorell@earthlink.net
Session Title: Applications of Multilevel Longitudinal Analysis
Multipaper Session 673 to be held in Calhoun Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Fred Newman, Florida International University, newmanf@fiu.edu
Session Title: International and Cross-Cultural TIG Business Meeting
Business Meeting Session 674 to be held in McKeldon Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
TIG Leader(s):
Thomaz Chianca, Western Michigan University, thomaz.chianca@wmich.edu
Gwen M Willems, University of Minnesota, wille002@umn.edu
Nino Saakashvili, Horizonti Foundation, nino.adm@horizonti.org
Session Title: Evaluating Outcomes for Young Children With Disabilities: Issues at the National, State, and Local Levels
Panel Session 675 to be held in Preston Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Kathy Hebbeler, SRI International, kathleen.hebbeler@sri.com
Abstract: In 2005, the U.S. Department of Education required states to submit outcomes data on all children from birth through 5 years of age receiving services through IDEA. Responding to pressure from OMB, the Department specified what states are to report but not how they are to collect the information. The presenters, staff of a center funded by the Department of Education to assist states in implementing an early childhood outcome measurement system, will describe activities undertaken, as well as issues and challenges at the national, state, and local levels that have emerged as states set about collecting these data. Contrasting state approaches will be described, including approaches that incorporate child outcomes data into a broader system of ongoing evaluation. The papers will address the intended and unintended consequences thus far, both positive and negative, of instituting national outcomes measurement for young children with disabilities.
Session Title: Deliverables as a Tool to Promote and Support Organizational Learning: Client-centered Strategies for Data Collection and Reporting
Panel Session 676 to be held in Schaefer Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Debbie Zorn, University of Cincinnati, debbie.zorn@uc.edu
Abstract: Every program evaluation is expected to have some kind of deliverable. Yet why write a technical report that ends up on someone's bookshelf rather than being used to make a meaningful contribution to organizational learning and program improvement? How do we as evaluators meet the accountability and reporting needs of our clients while also ensuring that the information provided is usable and appropriate for its intended audience? This panel will discuss participatory, collaborative approaches to the planning and design of project deliverables used by the University of Cincinnati Evaluation Services Center (UCESC) that take into account clients' needs for information, accountability, learning, and dissemination. The panelists will share the processes they used in negotiating a design for deliverables that met the unique program contexts and constraints of the five different projects represented by this group and describe how these approaches contributed to program and organizational learning.
Session Title: Living and Learning Evaluation: Teaching Evaluation Through Visual, Narrative and Performative Practice
Skill-Building Workshop 677 to be held in Calvert Ballroom Salon B on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
A Rae Clementz, University of Illinois at Urbana-Champaign, clementz@uiuc.edu
April Munson, University of Illinois at Urbana-Champaign, amunson2@uiuc.edu
Abstract: Attendees will take from this skill-building workshop the ability to teach various aspects of evaluation through unconventional forms of representation and exploration, such as puzzle maps, concept maps, metaphor, poetics, game design, and role play. The session allows attendees to experience alternative visual and performative conceptualizations of the field of evaluation, understandings of theory, and implementations of methods. As the field strives to attract members of various disciplines, this approach recognizes that those members will bring varied expertise and learning styles. Attendees will have the opportunity to investigate visual, narrative, and performative representations and work toward creating their own, best suited to their own understanding and teaching style.
Session Title: Evaluation in Federal Agencies: What Shapes It, and How Could the American Evaluation Association be Part of the "What"?
Panel Session 678 to be held in Calvert Ballroom Salon C on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Michael Morris, University of New Haven, mmorris@newhaven.edu
Discussant(s):
Debra Rog, Westat, debrarog@westat.com
Abstract: This forum will explore how state-of-the-art knowledge and expertise in evaluation can be more effectively linked to the formulation of evaluation policy at the federal level. Panelists from three federal agencies will address the following questions: (1) How is evaluation policy established in their agency? (2) What types of evaluation-related input would their agency welcome from a professional organization such as the American Evaluation Association? (3) Through what means could AEA provide such input? Against this background, panelists will also discuss the following: To what extent will the 2008 Presidential election and its aftermath present opportunities for the professional evaluation community to play a greater role in the formulation of evaluation policy? What factors are likely to facilitate or hinder this influence? And when a professional organization endeavors to elevate its public profile at the federal level, what cautionary tales should it keep in mind?
Session Title: Evaluation Within Partnerships: Working With Community Groups
Multipaper Session 679 to be held in Calvert Ballroom Salon E on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Mary T Crave, University of Wisconsin, crave@conted.uwex.edu
Session Title: Evaluations of Reading and Literacy Programs
Multipaper Session 680 to be held in Fairmont Suite on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Edith Stevens, Macro International Inc, edith.s.stevens@orcmacro.com
Roundtable Rotation I: An Evaluation of Ten Years of Progress in an Autistic Impaired Preschool Program
Roundtable Presentation 681 to be held in Federal Hill Suite on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
Carmen Jonaitis, Western Michigan University, cjonaiti@kresanet.org
Jinhai Zhang, Western Michigan University, jinhaizhang@hotmail.com
Abstract: A university professor who was instrumental in designing the on-site practicum at a school for children with disabilities requested this evaluation. The purpose of the study was to determine the efficacy of an autistic impaired preschool program in teaching preschool children with developmental disabilities the skills needed to succeed in a less restrictive learning environment; for some children this will mean participation in kindergarten, for others participation in a less restrictive special education program. The evaluation analyzed the number of skills children achieve after entering the program and the percentage of kindergarten readiness skills they achieve before leaving it. Parent satisfaction with the program was also evaluated. The intent of the evaluation was to determine which areas of the program can be improved to increase student success. The audience included the practicum coordinator who assists in overseeing the practicum, the graduate assistants responsible for training and supervising the classroom tutors, twelve program staff, and the school psychologist who participated in program design and implementation. Additional stakeholders include parents, local kindergarten teachers, practicum students, and school administrators.
Roundtable Rotation II: Conducting Successful Field Research in School-based Settings
Roundtable Presentation 681 to be held in Federal Hill Suite on Friday, November 9, 4:30 PM to 6:00 PM
Presenter(s):
David Dobrowski, First 5 Monterey County, david@first5monterey.org
Raul Martinez, Harder & Company Community Research, rmartinez@harderco.com
Abstract: In 2006, First 5 Monterey County worked with 25 schools, representing 11 districts, to implement the Kindergarten Readiness Assessment, a study designed to provide a snapshot of kindergarteners' readiness to begin school. The assessment used four tools based on the National Education Goals Panel's definition of school readiness to gather information about incoming kindergarteners. While First 5 had previously sponsored the Kindergarten Readiness Assessment, it did so with far fewer schools. This roundtable will explore the processes used to successfully collect 1,525 child surveys, 1,485 family surveys, 1,203 matched child and family surveys, and 74 kindergarten teacher surveys. Specifically, we will describe strategies undertaken to achieve high response rates, obtain consent and buy-in from schools and districts, and ensure quality data collection. We will also identify challenges encountered during field operations, offer tips to facilitate the successful implementation of assessments in school-based settings, and invite audience discussion and feedback.
Session Title: Issues in Doing Randomized Trials in Educational Evaluation
Multipaper Session 682 to be held in Royale Board Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Burke Johnson, University of South Alabama, bjohnson@usouthal.edu
Session Title: Recovery/Resilience, Trajectories, Co-occurring Disorders, and Real Time Program Evaluation
Multipaper Session 683 to be held in Royale Conference Foyer on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Garrett E Moran, Westat, garrettMoran@westat.com
Session Title: Diverse Approaches to Evaluative Inquiry in Higher Education
Multipaper Session 684 to be held in Hanover Suite B on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Erin Burr, University of Tennessee, eburr@utk.edu
Discussant(s):
Summers Kalishman, University of New Mexico, skalish@salud.unm.edu
Session Title: Learning From Evaluation in Service of Social Justice: Who learns? What is Learned? And Why Does it Matter?
Panel Session 685 to be held in Baltimore Theater on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Presidential Strand
Chair(s):
Sharon Brisolara, Evaluation Solutions, evaluationsolutions@hughes.net
Discussant(s):
Saumitra SenGupta, APS Healthcare, ssengupta@apshealthcare.com
Abstract: This panel focuses on the character and critical importance of the learning that takes place in social justice-oriented evaluation. Two conceptual papers address philosophical and political perspectives on why and how evaluation practice committed to advancing social justice presents meaningful and important opportunities for learning, and what the character of that learning is. Presenters address ways of attending to social justice, how attending to these issues shapes the role of the evaluator, and what implications attending to social justice has for the profession, the communities we serve, and the larger society. Two practice-oriented papers address the significant learning that has taken place within evaluations attending to social justice concerns. Practitioners representing diverse cultural and political contexts who use identity-based and other evaluation models describe what attending to social justice looks like within an evaluation and offer examples of the learning that can occur in this genre of evaluation practice.
Session Title: Measuring Fidelity and Assessing Impact of Service Interventions in Ohio's Title IV-E Waiver Evaluation
Multipaper Session 686 to be held in International Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Madeleine Kimmich, Human Services Research Institute, kimmich@hsri.org
Discussant(s):
Andrea Sedlak, Westat, andreasedlak@westat.com
Abstract: Federal waivers to Title IV-E of the Social Security Act enable state child welfare programs to redirect federal funds from foster care to alternative services for children suffering abuse or neglect. Ohio's Title IV-E waiver demonstration project operates in thirteen of Ohio's county-administered public child welfare agencies. The county agencies are experimenting with three promising interventions: family team meetings, supervised visitation, and supports to kinship caregivers. Key to evaluating the impact of targeted services on child outcomes is assessing whether the services as implemented conform to the original model. If fidelity varies across or even within sites, can one expect a measurable outcome effect? What can be learned from varied applications of a single model intervention? Three papers discuss fidelity assessment in a multi-year evaluation of Ohio's Title IV-E waiver demonstration. We describe fidelity measures and offer initial findings, highlighting challenges and limitations to fidelity assessment.
Measuring the Fidelity of Protect Ohio Family Team Meetings
Madeleine Kimmich, Human Services Research Institute, kimmich@hsri.org
Amy Stuczynski, Human Services Research Institute, astuczynski@hsri.org
The Family Team Meetings model is generally seen as a 'best practice': regular meetings, facilitated by a trained professional and bringing together family, friends, service providers, and advocates, can lead to creative and effective solutions to case challenges, ultimately reducing the need for foster care placement and improving permanency outcomes. This paper describes the model adopted by the 13 demonstration sites, defines the fidelity measures used, presents fidelity findings, and discusses evaluation challenges. Fidelity is measured using case-level and county-level variables. Key issues encountered include: how rigorously to define the model when making judgments about fidelity; how to choose measures that balance the need for specific data against the need to ensure that data are not too onerous to collect; and how best to provide fidelity information to practitioners so as to ensure that there is a model to evaluate.
Supervised Visitation as a Model Intervention
Adrienne Zell, Human Services Research Institute, azell@hsri.org
Julie Murphy, Human Services Research Institute, murphy@hsri.org
One service delivery model selected by Ohio counties participating in the Title IV-E Waiver is Supervised Visitation, an enhanced visitation program for children in out-of-home care and their parents. This visitation model provides increased consistency and structure and is expected to improve parent-child interactions and maximize the chance for reunification. Five programmatic elements define this particular model. Challenges involved in determining model fidelity include providing clear definitions of the model components, uncovering factors that influence how the model is implemented, and differentiating among counties with consistently high fidelity. As required by law, all child welfare agencies offer supervised visitation of some sort; a unique challenge for fidelity evaluation of this intervention is therefore determining how the model fidelity of study counties compares with that of non-intervention counties that have similar elements in place. Along with this discussion, we also pose the methodological question of examining fidelity-dosage at the individual client level.
Supporting Kinship Caregivers
Julie Murphy, Human Services Research Institute, murphy@hsri.org
Madeleine Kimmich, Human Services Research Institute, kimmich@hsri.org
Six Ohio counties have focused on identifying and supporting kinship caregivers more consistently. They believe that placing children with relatives or friends is less disruptive than formal foster care and ultimately decreases the time children spend in paid placements. Following the kinship model is expected to lead to increased use of kinship settings and more support for these placements (i.e., offering a variety of services and/or subsidies and having designated kinship staff). Measuring adherence to the model is difficult: not all children placed with kin are identifiable in existing data systems, services provided to kinship caregivers are poorly documented, and the Waiver model is not a truly unique approach; it simply enhances what other counties were already doing to support kin. The paper describes how we have addressed these issues and how, over time, we have adjusted the evaluation plan.
Session Title: Using Systems Tools to Understand Multi-site Program Evaluation
Skill-Building Workshop 687 to be held in Chesapeake Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Molly Engle, Oregon State University, molly.engle@oregonstate.edu
Andrea Hegedus, Centers for Disease Control and Prevention, ahegedus@cdc.gov
Abstract: Evaluators working on complex multi-site programs must be conscious of systems characteristics, and systems tools can help them evaluate such programs effectively. Connecting multi-site programs with overall program objectives can be accomplished with quick diagramming tools showing function, feedback loops, and leverage points for priority decisions. Designed for evaluators responsible for evaluating large multi-site programs, or for evaluators within a specific program of a larger multi-site effort, this workshop will have participants, individually or in small groups, draw a program system and consider its value to the program's goals and objectives. Drawings will be discussed, the method assessed, and insights summarized. The workshop will close by asking, "What did you learn, and how do you intend to use this skill?" and "What was the value of this experience to you?" This skill-building workshop integrates the sciences of intentional learning, behavioral change, systems thinking and practice, and assessment as functional systems of evaluation and accountability.
Session Title: Challenges and Opportunities in Evaluating Publicly-Funded Programs
Multipaper Session 688 to be held in Versailles Room on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov