Session Title: Strategies to Improve Quality of Mixed Methods Evaluations
Multipaper Session 551 to be held in Avalon A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Mixed Methods Evaluation TIG
Chair(s):
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Session Title: Strengthening Value Through Evaluation Capacity Building
Panel Session 552 to be held in Avalon B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Karen Debrot, Centers for Disease Control and Prevention, kdebrot@cdc.gov
Discussant(s):
Karen Debrot, Centers for Disease Control and Prevention, kdebrot@cdc.gov
Abstract: The Centers for Disease Control and Prevention (CDC) awards a large proportion of its budget to state and local agencies and non-governmental organizations to accomplish its mission of promoting health. Evaluation of CDC-funded activities is an important part of demonstrating accountability for taxpayer dollars. However, funded partners vary in their capacities to evaluate their programs. To address this challenge, CDC has developed different systems to build funded partners' evaluation capacities to help improve program planning, monitor progress, improve implementation, and document outcomes. The presentations in this panel will describe three CDC systems that target evaluation capacity building to increase outcome-level evaluation, to measure and track skills and competencies, and to coordinate evaluation across multiple skill levels. These systems demonstrate standardized, coordinated efforts to build skills that support the evaluation practice of funded partners. In this way, CDC ensures that stakeholders benefit from well-implemented programs, and the value of evaluation increases among partners.
Session Title: Using Artistic Strategies in Collecting, Analyzing and Representing Evaluations
Skill-Building Workshop 553 to be held in California A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Data Visualization and Reporting TIG
Presenter(s):
Michelle Searle, Queen's University, michellesearle@yahoo.com
Lynda Weaver, Bruyere Continuing Care, lweaver@bruyere.org
Abstract: The field of evaluation is methodologically responsive and now features a spectrum of quantitative, qualitative and mixed method approaches (e.g., Greene, 1999; Greene & Caracelli, 1997; House, 1993; McClintock, 2004). In many ways, evaluators have been leaders in methodological innovation, continually reshaping themselves to deal with complex questions. Given this willingness to consider a variety of questions and to be methodologically flexible, it is time to explore the value that arts-informed approaches to evaluation offer alongside currently accepted orientations to evaluation. This skill-building workshop, grounded in theory on the arts in research and evaluation, provides hands-on exploration of arts techniques within evaluation practice. The workshop uses a simulated learning activity that unfolds over three phases to involve participants in arts-informed data collection activities, ways of analyzing art generated in an evaluation, and forms for representing data. No art experience is necessary, only a willingness to create and share ideas.
Session Title: Tiered Evaluation: Local Evaluators Operating Within the Context of a Cross-site Evaluation
Panel Session 554 to be held in California B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Patricia Campie, National Center for Juvenile Justice, campie@ncjj.org
Abstract: This session will explore the strategic and collaborative approach to tiered evaluation taken by the National Child Welfare Resource Centers (NRC) Evaluators Workgroup. Each local evaluator is responsible for evaluating one of the eleven National Child Welfare Resource Centers in the training and technical assistance network coordinated by the Children's Bureau and for participating in the national cross-site evaluation. This workgroup of local evaluators has addressed many strategic and methodological questions, including: How should the workgroup be organized? How can we best cooperate to minimize shared client burden? How can we capture intermediate outcomes when evaluating technical assistance? How do we adapt to changing initiatives? Which data collection methods should be shared or can be unique across NRC evaluation plans? How can our work best contribute to the cross-site evaluation? This session will discuss these questions and explore the role of local evaluators working in conjunction with a cross-site evaluation team.
Session Title: Living Into Developmental Evaluation: Reflections on Changing Practice
Panel Session 555 to be held in California C on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Hallie Preskill, FSG Social Impact Consultants, hallie.preskill@fsg.org
Discussant(s):
Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Abstract: Within the last few years, an increasing number of evaluators, nonprofits, and funders have been experimenting with more dynamic, responsive, and emergent forms of evaluation. While some have called these new approaches "adaptive" or "real-time," perhaps the most well known is Developmental Evaluation, conceptualized and written about by Michael Q. Patton. This approach, which is particularly well suited for evaluating social innovations (programs, initiatives, or strategies that are not well tested and whose outcomes cannot always be pre-planned or predicted), requires a fundamentally different role and set of competencies for evaluators. In this session, we will explore: a) what it means to engage in a Developmental Evaluation, b) how this approach is similar to and/or different from formative and summative evaluation, and c) the challenges and opportunities for Developmental Evaluation in the field. Both the evaluators' and the clients' perspectives will be presented.
Session Title: What Counts in Social Services Evaluation: Values and Valuing in Evaluation Practice
Panel Session 556 to be held in Pacific A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Human Services Evaluation TIG, the Social Work TIG, and the Presidential Strand
Chair(s):
Tracy Wharton, University of Michigan, trcharisse@gmail.com
Discussant(s):
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
Abstract: This panel session responds to the conference theme by exploring the role of values and valuing in social services evaluation. The panelists, accomplished evaluators representing different areas in social services evaluation and different regions of the United States, will present reflections on their professional experiences with values and valuing. Specific topics include values assumed in particular evaluation approaches and methods, valuing the interests of different and often conflicting stakeholders, value conflicts inherent to evaluation practice, and the appropriate use of data to meet both professional and ethical standards. Taken together, these reflections identify and examine significant issues in the field of social services evaluation. The discussant will draw on his broad experience in evaluation to place these issues in the larger context of evaluation theory and practice and to offer specific implications for values and valuing.
Session Title: Extending the Value of Extension Work: Publishing Evaluation-Related Journal Articles
Panel Session 557 to be held in Pacific B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Allison Nichols, West Virginia University Extension, ahnichols@mail.wvu.edu
Abstract: Most of us would agree that it is important to publish evaluation-related articles in professional journals because doing so spreads the news about our good work, informs new programming, and establishes us as academic professionals. Many of us, however, are intimidated by the process of writing and submitting an article for publication. Some might even feel that journal editors will be less likely to react favorably to an article about an evaluation process than to an article based on a research project. Each panel member has experience writing and publishing articles in peer-reviewed journals, both on the craft of evaluation and on individual evaluation projects. They will share their experiences and make recommendations about how best to get evaluation work in print.
Session Title: Master Teacher Series: Writing Effective Items for Survey Research and Evaluation Studies
Skill-Building Workshop 558 to be held in Pacific C on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Jason Siegel, Claremont Graduate University, jason.siegel@cgu.edu
Eusebio Alvaro, Claremont Graduate University, eusebio.alvaro@cgu.edu
Abstract: The focus of this hands-on workshop is to teach attendees how to write effective items for collecting survey research data. Bad items are easy to write; writing good items is more challenging than most people realize. Writing effective survey items requires a complete understanding of the impact that item wording can have on a research effort. Only through adequate training can good survey items be discriminated from bad ones. This 90-minute workshop focuses specifically on Dillman's (2007) principles of question writing. After a brief lecture, attendees will be asked to use their newly gained knowledge to critique items from selected national surveys.
Roundtable Rotation I: Beyond Satisfaction: Revisiting the Use of Satisfaction Surveys in a Collaborative Evaluation
Roundtable Presentation 559 to be held in Conference Room 1 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Connie Walker, University of South Florida, cwalkerpr@yahoo.com
Michael Berson, University of South Florida, berson@usf.edu
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Abstract: Collaboration is the ability to actively work with others in a mutually beneficial relationship in order to achieve a shared vision not likely to occur otherwise. The level of collaboration varies for each evaluation, depending on the situation within the evaluation. Satisfaction is the fulfillment of a need or want. For the purposes of this evaluation, satisfaction was defined as how participants felt about their experiences with the training program. A satisfaction survey was used within a collaborative effort to capture participants' opinions. This roundtable will examine the contribution and the role of the collaboration members throughout the administration of a survey when time and resources are limited. Specifically, it will address the importance of collecting satisfaction measures, problems in measuring satisfaction, and solutions for handling these situations.
Roundtable Rotation II: Long Distance Evaluations Using a Collaborative Approach
Roundtable Presentation 559 to be held in Conference Room 1 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Connie Walker, University of South Florida, cwalkerpr@yahoo.com
Abstract: Long distance evaluations are those in which the client is located far from the evaluator; the client could be in a different state or even in a different country. In these cases, the use of a collaborative approach is central to supporting an adequate evaluation process. Collaboration is the ability to actively work with others in a mutually beneficial relationship in order to achieve a shared vision. The level of collaboration varies for each evaluation, depending on the situation within the evaluation. The collaborative relationship between the evaluators and stakeholders is a key component in achieving the goals and objectives of long distance evaluations. The purpose of this presentation is to discuss what needs to be taken into consideration to reduce difficulties and make these evaluations doable. The presentation is based on the evaluator's firsthand experience successfully conducting evaluations at a distance by incorporating a collaborative approach.
Roundtable Rotation I: We Actually Did It, and You Can Too: Creating a Culture of Learning and Evaluation in a Multi-service Nonprofit
Roundtable Presentation 560 to be held in Conference Room 12 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Isaac Castillo, Latin American Youth Center, isaac@layc-dc.org
Leah Galvin, Omni Institute, lgalvin@omni.org
Ann Emery, Latin American Youth Center, anne@layc-dc.org
Abstract: Creating a culture in which nonprofit staff actually use outcomes measurement and evaluation techniques on a regular basis is extremely challenging. The Latin American Youth Center (LAYC), a multi-service nonprofit in Washington, DC, is an example of a nonprofit that has achieved this culture change. This roundtable will allow LAYC's Learning and Evaluation Division, the internal program evaluation group at LAYC, to share some of the lessons learned, challenges encountered, and techniques used during this multi-year process. We will discuss the initial steps we took to convince staff to embrace evaluation concepts, the growth of evaluation capacity within the organization, the maintenance of the culture over time, and unexpected challenges.
Roundtable Rotation II: Evaluating the Development of Community in Communities of Practice
Roundtable Presentation 560 to be held in Conference Room 12 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Ruth Mohr, Mohr Program Evaluation & Planning LLC, rmohr@pasty.com
Abstract: There is growing interest in many sectors in using a community of practice approach to improve how work around a shared concern is done. Etienne Wenger, co-originator of the term, defines these communities as 'groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.' Such communities can benefit from attention to factors that can affect collective learning over time. This roundtable will explore potential criteria for assessing development of the community element of this approach, with the aim of improving the community's ability to support learning. Discussion starting points will be: member participation (e.g., self-management of knowledge needs, agreement on style of working together, learning orientation, and concern about quality of relationships), leadership (e.g., making working together as a community possible), and tools/processes that support the work (e.g., for communication, relationship building, and task completion).
Session Title: Evaluating Development Interventions in Conflict and in Food Security
Multipaper Session 561 to be held in Conference Room 13 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Gwen Willems, Willems and Associates, wille002@umn.edu
Session Title: The Healthy Relationships Approach in Intimate Partner Violence (IPV) Prevention: Turning Practice Into Evidence
Multipaper Session 562 to be held in Conference Room 14 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Cathleen Crain, LTG Associates, partners@ltgassociates.com
Discussant(s):
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Abstract: This panel, based on the Robert Wood Johnson Foundation Strengthening What Works: Preventing Intimate Partner Violence in Immigrant and Refugee Communities (SWW) initiative, explores the evaluation of eight intimate partner violence (IPV) prevention projects in immigrant and refugee communities. The first presentation argues that this initiative represents an opportunity to build community-based collaborations in which the relationship between evidence-based practice and practice-based evidence becomes complementary. We propose to do this by means of "Learning Collaboratives" in which grantee organizations work together to formulate questions, test models, analyze data, and potentially reshape the field of IPV prevention. The second presentation explores SWW grantee efforts to build and evaluate healthy relationship education for refugee and immigrant populations. We describe the trajectories that brought the grantees to healthy relationship education and the associated evaluation challenges. The third presentation explores the hypothesis that healthy relationship education is a primary prevention response that will address some forms of violence but not all.
Closing the Research Gap in IPV Prevention: Turning Practice Into Evidence Using Community-based Learning Collaboratives
Alberto Bouroncle, LTG Associates, abouroncle@ltgassociates.com
To expand knowledge of effective IPV prevention, methods should be developed to evaluate community-level interventions that have not been tested systematically, shifting the paradigm from evidence-based practice to practice-based evidence. Practitioners at the community level, organized around Learning Collaboratives, will be able to work together and collect data to generate research questions that may influence research agendas and policy making. RWJF's Strengthening What Works (SWW) initiative will provide a testing ground for the development of Learning Collaboratives addressing the prevention of IPV in immigrant and refugee communities. The wide range of SWW prevention strategies suggests that grantees will lead Learning Collaboratives focusing on issues such as the role of culture and language, working with youth using healthy relationship models, working with men using a popular education approach, or working with the LGBTQ community on issues of IPV prevention.
Healthy Relationship Curricula for Immigrants and Refugees: Practice and Evidence
Greta Uehling, LTG Associates, guehling@ltgassociates.com
All of the RWJF Strengthening What Works grantees are currently creating, revising, and implementing curricula that include material on healthy relationships. Many of them have taken this approach as a way of bypassing the stigma associated with IPV and offering project participants knowledge and skills that contribute to the primary prevention of IPV. Evaluating healthy relationship curricula for refugees and immigrants presents a number of challenges. First, how do we account for the immense effect that different facilitators bring to curriculum implementation? Second, what evaluation tools can best overcome formidable barriers of language and literacy to bring promising practice to evidence? Finally, grantees have adapted mainstream material to fit specific target populations. How do we evaluate the extent of that fit considering that the target is continually shifting, and will these curricula be effective with other populations?
What Does Healthy Relationship Education Prevent? Prevention and Typologies of Intimate Partner Violence
Carter Roeber, LTG Associates, croeber@ltgassociates.com
Formative evaluation often requires evaluators to raise questions and conduct new research that looks at an issue from a new perspective. In order to prevent IPV, grantees in Strengthening What Works (SWW) have developed practical responses for teaching people about healthy relationships. In order to help SWW grantees build their own evaluation capacity and to evaluate their work, LTG is exploring the connections between healthy relationships and IPV prevention more thoroughly. One key issue is whether curricula and training in healthy relationships can prevent the serious kinds of IPV that require the most attention and services, or whether they will have a more general effect on the quality of life for couples and communities. We hypothesize that healthy relationship curricula can be effective primary prevention, but that they will not prevent more devastating coercive controlling relationships.
Session Title: Pass the Aspirin: When Projects Become Headaches
Panel Session 563 to be held in Avila A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Evaluation Use TIG
Chair(s):
Mary Anne Sydlik, Western Michigan University, mary.sydlik@wmich.edu
Abstract: Science and Mathematics Program Improvement (SAMPI) at Western Michigan University currently has 25 evaluation projects, seven projects out for review, and six in the early stages of development with potential clients. Members of the SAMPI evaluation team will address challenges that can arise 1) during the pre-submission proposal/project development phase; 2) while trying to coordinate evaluation and project activities with another organization; and 3) when clients' expectations change mid-course in ways that exceed the evaluation budget and the evaluator's time and energy, and cost overruns threaten to shut down the evaluation before it can be completed.
Session Title: Values and Valuing: The Core of Professional Practice
Demonstration Session 564 to be held in Avila B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Martin Reynolds, Open University, m.d.reynolds@open.ac.uk
Abstract: Evaluation is regarded by many practitioners as a profession. But does it deserve that status? What is at the core of professional practice, and how well do evaluators walk the distance from being a tradesperson doing a client's bidding (serving 'wants') to being a professional with wider, more responsible concerns (identifying and iterating on 'needs')? The presenters argue that how evaluators handle the implicit and explicit values underpinning the evaluation task is at the core of professional practice. This session demonstrates how core ideas drawn from the field of systems thinking in practice provide a means of exploring key value judgments made in an evaluation and the consequent boundaries of professional evaluation practice.
Roundtable Rotation I: Increasing the Cultural Relevance of Evaluation in Informal Settings
Roundtable Presentation 565 to be held in Balboa A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Jill Stein, Institute for Learning Innovation, stein@ilinet.org
Shelly Valdez, Native Pathways, shilaguna@aol.com
Joe Heimlich, Ohio State University, heimlich@ilinet.org
Abstract: This roundtable discussion will build upon the authors' work in evaluating informal learning settings as experienced by 'underrepresented' or 'minority' groups, and will explore the role of cultural or community-based values in shaping evaluation practice, thereby rendering evaluations focused on these groups more meaningful and valid. In order for evaluation to be most useful and relevant, particularly within communities outside the mainstream culture that has so far had the most influence on the evaluation field, evaluators need to find ways to ensure that evaluative frameworks, measures of success, methodologies, data collection tools, and analysis approaches have 'ecological validity' within the contexts and communities in which they are working. The authors will briefly present recent evaluation work that highlights these areas and then will facilitate a discussion focused on how evaluators can refine our practice to better reflect diverse cultural contexts, especially those that are different from our own.
Roundtable Rotation II: Linking Developmental Evaluation and Social Justice
Roundtable Presentation 565 to be held in Balboa A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Anna Madison, University of Massachusetts, Boston, anna.madison@umb.edu
Abstract: Social justice is cited as the mission of many human services organizations serving society's most oppressed populations. Yet few can explain how their programs advance the mission of social justice. To the contrary, in most cases these programs address the symptoms of social injustice rather than the causal conditions that created the need for perpetual support to maintain daily living. This roundtable raises questions regarding partnerships between evaluators and community-based human services organizations to align program design with social justice goals. Drawing on Michael Patton's developmental evaluation premise that evaluators' involvement in program design and development can contribute to improving the effectiveness of programs, the roundtable focuses on clarifying the alignment between human service programming and the creation of a more just and democratic society. Ideally, participants in this roundtable will form a network to test ideas, leading to evaluation theory development and advancing movement toward more effective evaluation use.
Session Title: Yes, Money Matters! A Conversation With the Stakeholders and Evaluators of Winning Play$: A Financial Literacy Program for High School Students
Panel Session 566 to be held in Balboa C on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Pamela Frazier-Anderson, Frazier-Anderson Research & Evaluation, pfa@frazier-anderson.com
Abstract: Stacey Tisdale, an author and on-air financial journalist, created Winning Play$ in partnership with All Stars Helping Kids, the foundation of National Football League Hall of Famer Ronnie Lott. Winning Play$ is a financial literacy program for high school students. Ms. Tisdale, the program's developer, will describe the program and outline the rationale for requiring the evaluation of a financial literacy pilot program for high school students living in underserved communities. The panel will also discuss what funders in the private sector value as they increasingly rely on evaluation in their decision-making process for allocating funds to non-profit organizations such as Winning Play$. Finally, the evaluators of the Winning Play$ program will discuss the evaluation methods used to address not only the primary stakeholders' needs, but also the needs and values of the other stakeholder groups in the Winning Play$ financial literacy program and the larger community.
Session Title: A Novel Approach to Monitoring and Evaluation of Community-based Disaster Risk Reduction Programs: A Collaboration Between the American Red Cross and Johns Hopkins University
Panel Session 568 to be held in Capistrano B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Dale Hill, American Red Cross, hilldal@usa.redcross.org
Abstract: The American Red Cross' Community Based Disaster Risk Reduction (CBDRR) programs aim to reduce the number of deaths, injuries, and socio-economic impacts from disasters by building safer, more resilient communities. These programs help build the skills of communities to identify risk and take action to prepare for, respond to, and mitigate potential disasters. The American Red Cross and the Johns Hopkins Bloomberg School of Public Health have collaborated to develop a new evaluation approach for CBDRR programs. The approach is designed to measure the five domains of resilience that correspond to the Hyogo Framework: Governance, Risk Knowledge, Public Awareness, Risk Reduction and Preparedness. By measuring resilience, the evaluation approach aims to assess changes in risk mitigation, preparedness, and response capacity as a result of Red Cross activities. This session will present the components of the evaluation approach and then explore the challenges and rewards of its application in Haiti and Asia.
Session Title: Evaluating the Impact of Programs Serving Youth
Multipaper Session 569 to be held in Carmel on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Melissa Rivera, National Center for Prevention and Research Solutions, mrivera@ncprs.org
Session Title: Building Community Collaborative and Evaluator Evaluation Capacity to Measure Community and Systems Change
Panel Session 570 to be held in Coronado on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the AEA Conference Committee
Chair(s):
Evelyn Yang, Community Anti-Drug Coalitions of America, eyang@cadca.org
Abstract: Evaluation of community collaborative efforts is evolving. Previously, evaluation focused primarily on process (e.g., membership satisfaction and collaborative functioning) and distal outcomes (e.g., behavior change). However, there is a growing understanding of the process by which coalitions contribute to distal outcomes, one that now includes creating systems and community changes. Many collaboratives are struggling to measure systems/community changes related to long-term health outcomes and to answer the ultimate question of what value the collaborative provides in addressing community concerns. While this approach potentially has great value, both collaboratives and evaluators lack the tools, resources and knowledge to incorporate this additional level of data and tracking into their existing evaluation efforts. This session will provide three examples of efforts to build both collaborative and evaluator capacity to measure and evaluate community/systems change initiatives. National and local evaluation perspectives will be presented, and there will be time for discussion of challenges and potential solutions to these obstacles.
Session Title: Quality Improvement in Health Care: Training Measurement and Reporting
Multipaper Session 571 to be held in El Capitan A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Wendy Yen, College of Physicians and Surgeons of Ontario, wyen@cpso.on.ca
Session Title: Reaping Randomized Results on the Ranch: Rigorous Impact Evaluation Designs and Preliminary Results From Agricultural Interventions in Three Millennium Challenge Corporation (MCC) Compact Countries
Multipaper Session 572 to be held in El Capitan B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Marc Shapiro, Millennium Challenge Corporation, shapiromd@mcc.gov
Abstract: The Millennium Challenge Corporation (MCC) is committed to conducting rigorous independent impact evaluations of its programs as an integral part of its focus on results. MCC expects that the results of its impact evaluations will help guide future investment decisions and contribute to a broader understanding in the field of development effectiveness. MCC's impact evaluations involve a variety of methods chosen as most appropriate to the context. This panel provides a brief overview of evaluation at MCC to frame the overall context. Next, the panel provides examples of evaluations being conducted in the agricultural sector across three MCA compact countries. The presenters will discuss the context, evaluation design, challenges involved in implementing these evaluations, and preliminary results. Although MCC projects are designed to be context-specific rather than to best allow cross-project/country comparisons, the panel will discuss lessons learned across and within countries from an evaluation design perspective.
Impact Evaluation at MCC: An Overview
Marc Shapiro, Millennium Challenge Corporation, shapiromd@mcc.gov
Mawadda Damon, NORC, damon-mawadda@norc.org
MCC was established in January 2004 with the objectives of promoting economic growth and reducing poverty by learning about, documenting and using approaches that work. MCC plans to complete 35 rigorous impact evaluations of international development projects over the next two years, and the rate of project evaluations likely will double in the following years. The results of these emerging evaluations are intended to shape the selection and design of future projects. This short presentation will provide a brief overview of impact evaluation at MCC.
Thinking Small: Impact of Business Services on the Economic Wellbeing of Small Farmers in Nicaragua
Anne Pizer, Millennium Challenge Corporation, pizerar@mcc.gov
The Rural Business Development (RBD) project aims to increase profits and wages in farms and non-farm businesses by providing technical and financial assistance to small and medium farms and agribusinesses to help them transition to higher profit activities. The impact evaluation will estimate the change in beneficiary household income and other welfare measures attributed to the project. The evaluation relies on the randomized sequencing (pipeline design) of 80 to 100 communities, with half selected randomly to receive services early and half later. Interim impact evaluation results (baseline and mid-term) found that the average increase in RBD household incomes is small (2 percent above the change in household income for those not yet receiving treatment) and not statistically different from zero. The very small magnitude of change in incomes may reflect the limited amount of time between the provision of services and the measurement of change.
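The pipeline design described above lends itself to a simple difference-in-differences estimate: the change in income among early-treatment households minus the change among households in communities not yet receiving services. A minimal sketch of that computation follows; the data, column names, and values are hypothetical, not drawn from the RBD evaluation:

```python
import pandas as pd

# Hypothetical household panel: income at baseline and mid-term, with
# early = 1 for communities randomized to receive RBD services first.
df = pd.DataFrame({
    "household": [1, 1, 2, 2, 3, 3, 4, 4],
    "round":     ["base", "mid"] * 4,
    "early":     [1, 1, 1, 1, 0, 0, 0, 0],
    "income":    [100.0, 104.0, 90.0, 93.0, 95.0, 97.0, 110.0, 112.0],
})

# Mean income by treatment arm and survey round.
means = df.groupby(["early", "round"])["income"].mean().unstack()

# Difference-in-differences: change for early communities minus change
# for communities not yet receiving treatment.
did = (means.loc[1, "mid"] - means.loc[1, "base"]) - \
      (means.loc[0, "mid"] - means.loc[0, "base"])
print(f"Difference-in-differences estimate: {did:.2f}")
```

In practice, standard errors for such an estimate would be clustered at the community level, the unit of randomization.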
Sowing the Seeds for Impact Evaluation Success: Problems and Preliminary Results From a Georgian Agricultural Project
Marc Shapiro, Millennium Challenge Corporation, shapiromd@mcc.gov
The Agribusiness Development Activity in the Republic of Georgia awards grants to small farmers, farm service centers that serve communities, and value-adding enterprises. The impact evaluation examines the project's effects on income and job creation for farmers through a pipeline experimental design used across nine rounds of grantees. Those selected in the first random drawing received grants immediately, while others receive grants later. Farm service center and value-adding enterprise grantees are being evaluated by matching grant recipients to similar enterprises in the comparison group using propensity score matching. Data collection involves augmenting the Georgian Department of Statistics' household survey and using local contractors to collect household-level information from direct and indirect beneficiaries. The confounds of military conflict and the financial crisis, as well as project delays, have required adjustments. Preliminary double difference results for the first seven rounds of grantees will be discussed.
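Propensity score matching of the kind described pairs each grantee enterprise with the most similar comparison enterprise before outcomes are compared. A hedged sketch using scikit-learn; the covariates, outcomes, and one-to-one matching rule are illustrative assumptions, not the actual Georgia specification:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical enterprise data: treated = 1 for grant recipients.
n = 200
df = pd.DataFrame({
    "treated":   rng.integers(0, 2, n),
    "employees": rng.poisson(10, n),
    "revenue":   rng.normal(50, 10, n),
    "outcome":   rng.normal(5, 2, n),
})

# 1. Estimate propensity scores from observed covariates.
X = df[["employees", "revenue"]]
df["pscore"] = LogisticRegression().fit(X, df["treated"]).predict_proba(X)[:, 1]

# 2. Match each treated enterprise to its nearest comparison enterprise
#    on the propensity score (one-to-one nearest neighbor).
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
_, idx = NearestNeighbors(n_neighbors=1).fit(
    control[["pscore"]]).kneighbors(treated[["pscore"]])

# 3. Average treatment effect on the treated: mean outcome gap over pairs.
att = (treated["outcome"].to_numpy()
       - control.iloc[idx.ravel()]["outcome"].to_numpy()).mean()
print(f"Matched estimate of the effect on grantees: {att:.2f}")
```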
Has Beans - Moving from Rice and Beans to Radishes and Bell Peppers: The Impact of High-Value Horticulture Training on Farmer Income in Honduras
Algerlynn Gill, Millennium Challenge Corporation, gillaa@mcc.gov
Varuni Dayaratna, NORC, dayaratna-varuni@norc.org
George Caldwell, NORC, jcaldwell9@yahoo.com
The MCC-funded Farmer Training and Development Activity in Honduras provided technical assistance to farmers to transition from subsistence crops to high-value horticultural crops for domestic sale and international export. The impact evaluation assesses the training's effects on household income and production levels, comparing farmers and communities who received training with those who did not. Double-difference estimates will be formulated using two approaches, one involving comparison of a randomly selected treatment and control group of communities and one using a model-based method, due to implementation challenges that necessitated adjustments to the evaluation plan. Results from the evaluation and how the methodology evolved to meet changing conditions on the ground will be discussed. Differences between "monitoring data" collected by the implementer and evaluation data collected through independent surveys also will be presented, to demonstrate how impact evaluations with counterfactuals can correct initial over-estimations of results.
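Where implementation changes rule out a pure experimental comparison, a model-based double difference is often estimated by regression, with the treatment-by-round interaction serving as the impact estimate. An illustrative sketch using statsmodels; the variables and the simulated effect are hypothetical, not the Honduras specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical farmer panel: treatment arm, survey round, and a covariate.
n = 400
df = pd.DataFrame({
    "treated":  rng.integers(0, 2, n),
    "post":     rng.integers(0, 2, n),
    "farmsize": rng.normal(2.0, 0.5, n),
})
df["income"] = (10 + 2 * df["treated"] + 1 * df["post"]
                + 1.5 * df["treated"] * df["post"]   # simulated true effect
                + 3 * df["farmsize"] + rng.normal(0, 1, n))

# Double-difference regression: the coefficient on treated:post is the
# model-based impact estimate, adjusted here for farm size.
model = smf.ols("income ~ treated * post + farmsize", data=df).fit()
print(model.params["treated:post"])
```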
Session Title: Data Across the Miles
Multipaper Session 574 to be held in Huntington A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Stephanie Beane, Southeast AIDS Training and Education Center, sbeane@emory.edu
Discussant(s):
Margaret Lubke, Utah State University, mlubke@ksar.usu.edu
Session Title: Value in Government Evaluation: Multiple Perspectives
Multipaper Session 575 to be held in Huntington B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Government Evaluation TIG
Chair(s):
David Bernstein, Westat, davidbernstein@westat.com
Session Title: Utilizing Conceptual Frameworks to Define and Evaluate Public Health Infrastructure
Panel Session 576 to be held in Huntington C on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Cassandra Martin Frazier, Centers for Disease Control and Prevention, bkx9@cdc.gov
Discussant(s):
Maryann Scheirer, Scheirer Consulting, maryann@scheirerconsulting.com
Abstract: Building the infrastructure of state and local public health systems is vital to promoting the health of the nation. Evaluation can be used as a tool to better understand the multifaceted and broad nature of infrastructure and its role in influencing systems change. Yet it is challenging to systematically evaluate national-level infrastructure development initiatives. This session explores the use of theory, consensus-building and practice to develop conceptual frameworks that guide efforts to evaluate infrastructure programs. In this panel, presenters from three programs at the Centers for Disease Control and Prevention will discuss the development of their respective conceptual frameworks, the creation of standardized evaluation tools, the application of the conceptual framework to evaluate infrastructure, and the use of evaluation results to revise the framework.
Session Title: Moving "The Movement" Forward: Evaluation as a Critical Tool in Furthering the Work of Lesbian, Gay, Bisexual and Transgender Communities and Programs
Think Tank Session 577 to be held in La Jolla on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Presenter(s):
Joseph Kosciw, Gay, Lesbian & Straight Education Network, jkosciw@glsen.org
Discussant(s):
Emily Greytak, Gay, Lesbian & Straight Education Network, egreytak@glsen.org
Elizabeth Diaz, Gay, Lesbian & Straight Education Network, ediaz@glsen.org
Abstract: Partnerships between community programs and researchers can be critical for understanding efforts to address health and social problems within specific communities and can result in more effective programs and better evaluation of effects. For the lesbian, gay, bisexual and transgender (LGBT) community/communities, evaluation is often not integrated into the development and delivery of programs seeking to improve life experiences for community members. Yet within the national LGBT movement, there has been interest in promoting coordination and collaboration among organizations in order to maximize use of resources and to ensure integrated (not competing) programming. The purpose of this Think Tank is to capitalize on the experiences and knowledge of expert evaluators interested in and working on LGBT programmatic issues in order to strategize and plan effective interventions to advocate for and promote evaluation of efforts by LGBT organizations, as well as the inclusion of LGBT issues/identities in non-LGBT-specific evaluations.
Session Title: From Theory to Practice: Potential Avenues via Evaluation Capacity Building and Research on Evaluation
Panel Session 578 to be held in Laguna A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Leslie Fierro, Claremont Graduate University, Leslie.Fierro@cgu.edu
Discussant(s):
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Abstract: Evaluation scholars have repeatedly called for research on evaluation, in particular applied research. Since evaluation is largely a professional discipline, it is important for research on evaluation to move into the applied realm by generating information and developing methodologies that can be widely adopted by practitioners. Panelists in this session will demonstrate how the processes and products of evaluation research and practice can inform each other. The first set of presenters will focus on evaluation capacity building (ECB), describing a model generated from the existing empirical and theoretical knowledge base about ECB and organizational learning and connecting it to practical ECB approaches. The second set of presenters will turn the focus toward the use of evaluation techniques for explicating evaluation and program theory, discussing how tools used in researching evaluation can translate into improved evaluation practice and potentially accelerate the sharing of important insights between evaluation scholars and practitioners.
Session Title: International & Cross-cultural Partnerships: Understanding How Partners Develop Relationships of Mutual Support and Value
Panel Session 579 to be held in Laguna B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Carol Fendt, University of Illinois at Chicago, crfendt@hotmail.com
Discussant(s):
Cindy Shuman, Kansas State University, cshuman@ksu.edu
Abstract: In developing international partnerships geared towards providing "support" to one of the partners, how do the partners develop relationships of mutual support and value? In other words, how do true partnerships develop? How do partner leaders monitor and assess the value they place on partners? What structures can be developed to support the expressed values of the partnership? What role does the evaluator play in ensuring that the expressed values of each partner are included in project development and implementation? This panel will utilize a case study approach to explore the development of five different partnerships, how they expressed the values of each partner, and how evaluation was instrumental in ensuring that those values were reflected. Using an appreciative inquiry approach, project participants reveal how partnerships involved international/cross-cultural stakeholders in the development of the project's vision and partnership goals, and in the process of monitoring, evaluating, and adapting these partnerships to the changing needs of the partners.
Roundtable Rotation I: Assessing Board Performance: Dysfunctional and Effective Boards
Roundtable Presentation 580 to be held in Lido A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG and the Business and Industry TIG
Presenter(s):
Zita Unger, Independent Consultant, zitau@bigpond.com
Abstract: Evaluation of board performance has increased considerably in recent years. More rigorous forms of accountability and compliance are standard for public company boards and increasingly commonplace for boards in the public, private, non-profit and for-profit sectors. What matters most is what goes on inside the boardroom rather than compliance around board structures and systems. What really counts are the values, skills, attitudes and behaviors: the inner workings of boards and how decisions are made. No amount of compliance will overcome the flaws of a fundamentally dysfunctional board, flowing from inadequate director expertise, an excessively dominant chairman or CEO, or a factional board. The roundtable will explore questions about performance assessment in the context of board effectiveness: What are the values of effective boards? Can evaluation enhance and build on these values? Is the purpose of performance assessment to do so? What are optimal conformance and performance measures?
Roundtable Rotation II: Planning and Strategy: Making Plans That Support Strategic Behavior in Emergent Environments
Roundtable Presentation 580 to be held in Lido A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG and the Business and Industry TIG
Presenter(s):
Dana H Taplin, ActKnowledge Inc, dtaplin@actknowledge.org
Jill K Wohlford, Lumina Foundation for Education, jwohlfor@luminafoundation.org
Patricia Patrizi, Public Private Ventures, patti@patriziassociates.com
Catherine Borgman-Arboleda, Independent Evaluation Consultant, cborgman.arboleda@gmail.com
Abstract: This roundtable session, following from the Winter 2010 issue of New Directions for Evaluation, "Evaluating Strategy," addresses the relationship between strategy and planning methods such as theories of change and logic models. Strategic work takes unexpected turns as it progresses: evaluating the work against fixed plans may produce unfairly negative judgments. Often, too, the planning models serve as plans but are not operationalized effectively to support and inform internal learning and evaluation going forward. In our own work using theory of change as a planning tool, the detailed outcomes framework of causal pathways can seem too much like a blueprint, as if practitioners should know all the steps in advance. Are logic models, theories of change, and other forms of planning inimical to truly strategic behavior? At what point does attention to planning begin to undermine strategy? How do we do planning that supports strategy and learning from strategic choices?
Session Title: Approaches to Assuring Evaluation Use: Valuing Stakeholders, Context, and Program Priorities in Cancer Control
Panel Session 581 to be held in Lido C on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Angela Moore, Centers for Disease Control and Prevention, cyq6@cdc.gov
Discussant(s):
Kimberly Leeks, Centers for Disease Control and Prevention, kfj1@cdc.gov
Abstract: The National Comprehensive Cancer Control Program (NCCCP) provides support for states, tribes, and Pacific Island Jurisdictions to sustain partnerships and implement cancer control plans with goals, objectives, and strategies that span the entire cancer control continuum from prevention to survivorship. In 2010, the Centers for Disease Control and Prevention (CDC) released six new priority areas for the NCCCP. The priorities, which grew out of long-standing focus areas of the national program, target high-impact, common, and cross-cutting elements among programs, emphasize measurable outcomes, and reflect the cancer control continuum. To ensure that evaluations are used to improve public health practice, an approach was taken that includes the following guiding principles: a commitment to obtain stakeholder input, the recognition that the NCCCP is evolving and adapting to new priority areas, and the commitment of CDC to increase evaluation capacity among grantees so that they can demonstrate program effectiveness.
Session Title: New Approaches to Assessing National Institutes of Health (NIH) Research Programs
Multipaper Session 582 to be held in Malibu on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Robin Wagner, National Institutes of Health, wagnerr2@mail.nih.gov
Abstract: We present four papers on new approaches and tools that are being developed to inform the evaluation of research programs sponsored by the U.S. National Institutes of Health (NIH), which invests over $30 billion per year in biomedical research. The first paper considers how the traditional method of using expert opinion to assess research program performance has been implemented and can be enhanced. The second and third papers employ different text mining and visualization tools to characterize research portfolios, glean insights into the science supported, and facilitate the management of research programs. The fourth paper uses network analysis to evaluate and compare researcher collaborations in two distinct epidemiological cohort studies. While the examples presented in this session focus on NIH, the methods demonstrated can be extended to other organizations and countries seeking to better understand and inform strategies for managing their research programs.
Expert Opinion as a Performance Measure in R&D Evaluation
Kevin Wright, National Institutes of Health, wrightk@mail.nih.gov
Expert opinion continues to be the gold standard when assessing the performance, impact, and importance of R&D programs. This presentation will explore the use of expert opinion as the primary performance measure in evaluations of biomedical research programs. Questions that will be addressed include: 1) Is expert opinion really a performance measure? 2) In what circumstances is expert opinion used as the primary performance measure in evaluations of biomedical R&D programs? 3) What are the strengths and limitations of expert opinion? 4) What are various approaches to using expert opinion? 5) What are some good practices that might be considered when planning an evaluation using expert opinion? This presentation will be useful to evaluators interested in using expert opinion to evaluate R&D programs.
Text Mining for Visualization of Temporal Trends in NIH-Funded Research
L Samantha Ryan, National Institutes of Health, lindsey.ryan@nih.gov
Carl W McCabe, National Institutes of Health, carl.mccabe@nih.gov
Allan J Medwick, National Institutes of Health, allan.medwick@nih.gov
Using text mining methods, we will analyze and visualize topical changes in the abstracts of extramural research grants funded by the National Institutes of Health (NIH) over the past decade. The project will use publicly available data provided by the NIH (from RePORT), free and open-source analysis software, and a freely available visualization environment produced by Google (Motion Charts). Our methods will allow the user to interactively explore the first appearance and subsequent increase (or decrease) of substantive keywords in NIH abstracts over a temporal span and to see changes over time in animation. The corpus of abstracts may be subdivided into categories (e.g., fiscal year) in order for the user to explore and compare patterns and changes in NIH research funding.
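As a rough illustration of this kind of pipeline, the sketch below counts keyword frequencies by fiscal year with scikit-learn; the abstracts and years are invented stand-ins for the RePORT data, and the resulting table is the sort of input a Motion Charts-style animation would display:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical stand-ins for NIH RePORT grant abstracts and fiscal years.
abstracts = pd.DataFrame({
    "fy":   [2003, 2003, 2008, 2008, 2010],
    "text": [
        "gene expression in cardiac tissue",
        "protein folding pathways",
        "genome-wide association study of diabetes",
        "epigenetic regulation of gene expression",
        "induced pluripotent stem cells for cardiac repair",
    ],
})

# Count substantive keywords per abstract, dropping common stop words.
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(abstracts["text"])

# Aggregate term frequencies by fiscal year to expose temporal trends.
freq = pd.DataFrame(counts.toarray(), columns=vec.get_feature_names_out())
trend = freq.groupby(abstracts["fy"]).sum()
print(trend[["gene", "cardiac"]])
```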
Assessing Grant Portfolios Using Text-Mining and Visualization Methods
Elizabeth Ruben, National Institutes of Health, elizabeth.ruben@nih.gov
Kristianna Pettibone, National Institutes of Health, kristianna.pettibone@nih.gov
Jerry Phelps, National Institutes of Health, phelps@niehs.nih.gov
Christina Drew, National Institutes of Health, drewc@niehs.nih.gov
Granting agencies have an ongoing need for tools to assure that their portfolio of grants is current, mission-focused, and of high quality. Therefore we are exploring the novel use of a text-mining data visualization tool, OmniViz, to examine patterns in the science distribution of our grants, analyze assignment of project officers, and identify gaps and emerging areas of research. We explore the effect of various options and choices, such as source data, number of clusters, or clustering method. We show examples of our data plots and describe how this could be used to think about the portfolios in new ways and inform our science management. Finally, we discuss the challenges and opportunities of these approaches. This presentation will be useful to evaluators interested in learning how to use visualization tools for data analysis and in understanding how the findings can be applied to science management.
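OmniViz is a proprietary tool, but the underlying portfolio-clustering workflow can be approximated with open-source libraries. A minimal sketch with hypothetical abstracts and an arbitrary cluster count; the choice of vectorizer, clustering method, and number of clusters corresponds to the options the paper describes exploring:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical grant abstracts standing in for a portfolio.
abstracts = [
    "arsenic exposure and lung disease in rural cohorts",
    "air pollution effects on childhood asthma",
    "endocrine disruptors and reproductive outcomes",
    "particulate matter and cardiovascular risk",
    "phthalate exposure during pregnancy",
]

# Represent each abstract as a TF-IDF vector, then cluster.
X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Grants sharing a cluster label suggest a common science theme; small or
# fast-growing clusters can flag gaps and emerging areas of research.
for label, text in zip(km.labels_, abstracts):
    print(label, text)
```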
Network Analysis of Collaboration Among National Heart, Lung and Blood Institute (NHLBI) Funded Researchers
Carl W McCabe, National Institutes of Health, carl.mccabe@nih.gov
Mona Puggal, National Institutes of Health, mona.pandey@nih.gov
Lindsay Pool, National Institutes of Health
Rediet Berhane, National Institutes of Health
Richard Fabsitz, National Institutes of Health, richard.fabsitz@nih.gov
Robin Wagner, National Institutes of Health, robin.wagner@nih.gov
We use freely available, open-source analytical tools to explore co-authorship networks involving researchers funded by the NIH's National Heart, Lung and Blood Institute (NHLBI). Underlying our analysis is an interest in the forms of collaboration that exist among researchers in two distinct cohort studies: The Cardiovascular Health Study and The Strong Heart Study. We use co-authorship as a proxy for collaboration, and we produce statistics and visualizations to help dissect the properties of these networks. To add further analytical dimension to the analysis, we examine aspects of network structure in relation to characteristics of the researchers and publications (e.g., institutional affiliation or publication title). Our presentation will use a step-by-step discussion of this project to illustrate some of the computational analysis tools and techniques that may be used to explore the concept of collaboration among a body of researchers.
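Co-authorship networks of this kind are commonly built with open-source tools such as networkx. In the sketch below, the papers and author names are hypothetical: each pair of co-authors is linked by an edge weighted by their number of shared publications, and simple statistics summarize the network's structure:

```python
import itertools
import networkx as nx

# Hypothetical publications, each listing its authors.
papers = [
    ["Smith", "Chen", "Garcia"],
    ["Chen", "Garcia"],
    ["Smith", "Okafor"],
    ["Garcia", "Okafor", "Chen"],
]

# Build a weighted co-authorship graph: an edge links every pair of
# authors on the same paper, weighted by how often they co-publish.
G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(sorted(authors), 2):
        w = G.edges[a, b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

# Summary statistics help dissect the collaboration structure.
print("density:", nx.density(G))
print("degree centrality:", nx.degree_centrality(G))
```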
Session Title: Evaluation Lessons From Work Among First Nations, Aboriginal and Metis Peoples in Canada
Multipaper Session 583 to be held in Manhattan on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Indigenous Peoples in Evaluation TIG
Chair(s):
Joan LaFrance, Mekinak Consulting, lafrancejl@gmail.com
Discussant(s):
Joan LaFrance, Mekinak Consulting, lafrancejl@gmail.com
Session Title: Methodological and Theoretical Considerations in Evaluation
Multipaper Session 584 to be held in Monterey on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Krista Schumacher, Oklahoma State University, krista.schumacher@okstate.edu
Session Title: Appreciating Complexity in Qualitative Design and Analysis
Multipaper Session 585 to be held in Oceanside on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Qualitative Methods TIG
Chair(s):
Janet Usinger, University of Nevada Reno, usingerj@unr.edu
Discussant(s):
Janet Usinger, University of Nevada Reno, usingerj@unr.edu
|
| Session Title: Climate Change Education Projects: Advancing the Dialogue Through Effective Use of Evaluation Strategies |
| Multipaper Session 587 to be held in Palos Verdes A on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Environmental Program Evaluation TIG |
| Chair(s): |
| Beverly Farr, MPR Associates, bfarr@mprinc.com |
| Abstract: The papers in this session illuminate how the evaluation process can enhance the goals of a set of projects--in this case, Climate Change Education projects supported by NSF, NASA, and NOAA--that are designed to advance the discourse on climate change, uncover effective communication strategies, and translate research into classroom instruction. Two of the papers focus on intervening variables and mitigating factors that evaluators need to tease out to contextualize implementation and explain impact. One paper focuses on the evaluation of research experiences for teachers that can translate into classroom practice, and one addresses the development of common indicators of impact across a range of projects focused on common goals. The theme that runs through this set of papers is the process of communicating values and translating them into effective practices and evidence of success. |
| Value of Global Climate Change Research Experiences on Classroom Practice |
| Lori Reinsvold, University of Northern Colorado, lori.reinsvold@unco.edu |
| Little is known about how teachers' research experiences change secondary science classroom practices. The literature indicates that teachers value research experiences, but it is unclear how those experiences influence their teaching practices and their students' learning. Beyond the support the project itself provides, evaluators must also consider the mandates imposed by the schools to which teachers return if they are to understand fully how teachers make the transition from laboratory to classroom. The evaluation of the National Center for Atmospheric Research: Research Experience Institute, a global climate change program for secondary science teachers, will be used to explore the indicators that most influence teacher practice. |
| Emotions, Politics and Climate Change |
| John Fraser, Institute for Learning Innovation, fraser@ilinet.org |
| The vast majority of climate change programs targeted toward changing attitudes focus on the public. Yet those charged with climate change education are also deeply aware of the negative impacts climate change will have on the world and on their own health and well-being. This Cassandra experience is exacting a toll on educators that may limit their ability to succeed. This paper addresses how social discourses surrounding climate change harm educators and how educators may unknowingly undermine their own work. The presenter will offer examples of the environmental movement's dominant persuasion techniques, results of a study on the emotional experiences of conservationists, and recent results from funded climate change programs in order to identify a potential mitigating factor that may serve as a useful predictor in process and summative evaluation. |
| The Right Half of Your Logic Model: How Values Affect the Middle Ground Between Measurable Outcomes and Long-term Goals |
| Ardice Hartry, University California, Berkeley, hartry@berkeley.edu |
| In evaluation, we often do not fully understand the relationship between short-term effects (what we can expect to accomplish over the duration of a project) and long-term outcomes (the overall goals of a project), yet we base our entire evaluation on the strength of this relationship. For instance, if we assume that changes in students' attitudes toward science lead to increased achievement in science, then we feel we need only measure changes in attitudes. These assumptions are often based on underlying and unacknowledged values and perspectives rather than on research and evidence. This presentation explicates the problem using the example of an evaluation of a Global Climate Change Education program. At recent national meetings, multiple evaluators raised the issue of relying on these assumptions and their underlying values; this presentation describes both the pitfalls of accepting these assumptions uncritically and potential solutions based on the current literature. |
| Use of Common Measures Across Diverse Climate Change Education Projects: How do you Show Collective Value? |
| Beverly Farr, MPR Associates, bfarr@mprinc.com |
| The projects in the CCEP Grants Program across NSF, NASA, and NOAA share two goals: 1) Workforce Development: preparing a new generation of climate scientists, engineers, and technicians equipped to provide innovative and creative approaches to understanding global climate change and to mitigating its impact; and 2) Public Literacy: preparing U.S. citizens to understand climate change and its implications. The projects vary, however, in the levels they address--from public agencies to research organizations to universities to public schools--and in the strategies they use to achieve their objectives. The activities of the projects cannot always be directly linked to the ultimate goals, and intervening outcomes need to be examined to assess the overall impact of the projects. As funders, NSF, NASA, and NOAA want to know what the projects collectively contribute to the accomplishment of the ultimate goals, and the evaluators want to collaborate by establishing common indicators. |
| Session Title: Teaching Program Evaluation for Public Managers |
| Panel Session 589 to be held in Redondo on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Teaching of Evaluation TIG |
| Chair(s): |
| Bonnie Stabile, George Mason University, bstabile@gmu.edu |
| Abstract: This panel addresses both the importance and the peculiarities of teaching evaluation in public affairs graduate programs, including master's programs in public administration and public policy, and considers best practices. The surge of interest in program evaluation is perhaps nowhere more consequential than in the public sector, where policymakers at all levels of government strive to ensure that public programs are effective in an era when both budgets and political discourse are strained. Those who teach public managers to be effective evaluators of government programs have an important role to play. Despite the importance of their task, they may have only one semester, or less, to instill in their students both an appreciation for evaluation and the ability to tackle its multiplicity of methodologies with some competence. |
| |||
| |||
|
| Session Title: Making Our Way Forward: Creating an Evaluation Design for an Indigenous College |
| Think Tank Session 590 to be held in Salinas on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Assessment in Higher Education TIG |
| Presenter(s): |
| Maenette Benham, University of Hawaii, Manoa, mbenham@hawaii.edu |
| Discussant(s): |
| Antoinette Konia Freitas, University of Hawaii, Manoa, antoinet@hawaii.edu |
| Marlene P Lowe, University of Hawaii, Manoa, mplowe@hawaii.edu |
| Brandi Jean Nalani Balutski, University of Hawaii, Manoa, balutski@hawaii.edu |
| Maya Saffery, University of Hawaii, Manoa, msaffery@hawaii.edu |
| Abstract: How does an indigenous college within a Research I university approach program evaluation? There are both philosophical and practical challenges inherent in the program evaluation process due to differing purposes, perspectives regarding mastery, and eclectic methods of data collection, analysis, and presentation. The challenge for a college's evaluation team is to create an evaluation design that meets both institutional and cultural aims by "re"languaging the process of evaluation and assessment: a design that decolonizes both the processes and the perspectives, reveals a commitment to cultural values as well as an understanding of and respect for academic values, and builds bridges that support the work and the vitality of the indigenous programs. How this can be done without creating an overly complex evaluation design is the focus of this think tank's discussion. |
| Session Title: Review of Literature Evaluative Steps: A Meta-Framework for Conducting Comprehensive and Rigorous Literature Reviews in Program Evaluation |
| Demonstration Session 591 to be held in San Clemente on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the AEA Conference Committee |
| Presenter(s): |
| Rebecca Frels, Lamar University, rebecca.frels@gmail.com |
| Abstract: Conducting the literature review represents the most important step of the research process in evaluation studies because it is the most effective way to become familiar with previous findings and research methodology and to remain cognizant of previous and existing programs. In our demonstration, we (a) identify myths associated with conducting literature reviews, (b) provide a new and comprehensive definition of the literature review, (c) provide reasons for conducting comprehensive literature reviews, (d) identify the roles of literature reviewers, and (e) introduce the seven Review of Literature Evaluative Steps (ROLES) for program evaluations. Participants will experience ways to document the information search; explore beliefs, values, and valuing; select and deselect literature according to a validation framework; extend the literature review; store literature; and analyze literature using several quantitative and qualitative analysis techniques involving quantitative (e.g., Excel) and qualitative (e.g., QDA Miner [Provalis Research, 2009]) computer software programs. |
| Session Title: Nurturing a Learning Culture: Two Key Tensions |
| Think Tank Session 592 to be held in San Simeon A on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Presenter(s): |
| David Scheie, Touchstone Center for Collaborative Inquiry, dscheie@touchstone-center.com |
| Discussant(s): |
| Scott Hebert, Sustained Impact, shebert@sustainedimpact.com |
| Jessica Shao, James Irvine Foundation, jshaosan80@yahoo.com |
| Ross Velure Roholt, University of Minnesota, rossvr@umn.edu |
| Abstract: This session explores barriers to learning within projects and organizations, and strategies to enlarge space for learning. It will probe the tension between meticulous data collection and reporting, on the one hand, and thoughtful meaning-making, on the other; and that between painful "learning experiences" and the possibility that learning can be delightful and satisfying. Facilitators will draw on experiences particularly with youth development and community change projects, to sketch these two tensions and identify ways to foster a learning climate amid these challenges. Small groups will engage in dialogue using these prompts: How can we integrate reflective practice with solid data collection? How can we use available data to maximize learning - even when the data aren't pristine? What are the barriers to a learning culture, and how can they be lowered? How can a safe environment for learning from mistakes be established? What does a lively learning culture look like? |
| Session Title: Innovative Techniques for Data Collection and Management in Educational Evaluation |
| Multipaper Session 593 to be held in San Simeon B on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| James P Van Haneghan, University of South Alabama, jvanhane@usouthal.edu |
| Discussant(s): |
| Tiffany Berry, Claremont Graduate University, tiffany.berry@cgu.edu |
|
| Roundtable Rotation I: State-Level Evaluation of a Migrant Education Program |
| Roundtable Presentation 594 to be held in Santa Barbara on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Human Services Evaluation TIG |
| Presenter(s): |
| Alberto Heredia, WestEd, aheredi@wested.org |
| Abstract: This presentation focuses on the challenges encountered in, and preliminary findings from, an evaluation of a Migrant Education Program State Service Delivery Plan (SSDP) in a large western state. Federal and state statutes require that an evaluation of the SSDP determine its effectiveness in meeting performance targets and its fidelity of implementation. Our evaluation approach will assess the effect of the SSDP on student achievement and functioning and identify the practices used to achieve that effect. Formatively, the evaluation documents regional implementation of the programs and services in the SSDP annually, collecting information and evidence on progress toward implementation according to SSDP guidelines. Student outcome data delineated by SSDP performance indicators will be analyzed annually to gauge progress toward SSDP performance targets. Summatively, the evaluation will assess the impact of the SSDP on academic and non-academic outcomes of migrant children. |
| Roundtable Rotation II: Gathering Information From Valued Stakeholders: Exploring Effective Ways to Capture Homeless Parents' Perceptions of School-Based Program Implementation |
| Roundtable Presentation 594 to be held in Santa Barbara on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Human Services Evaluation TIG |
| Presenter(s): |
| Erika Taylor, Prince George's County Public Schools, etaylorcre@aol.com |
| Kola Sunmonu, Prince George's County Public Schools, kolawole.sunmonu@pgcps.org |
| Abstract: In Maryland, school districts are required to coordinate with local social service agencies to provide services for homeless students. Prince George's County Public Schools also conducts a mandatory annual evaluation of its Homeless Education Program (HEP). One stakeholder group actively sought out for information on HEP service delivery is the caregivers of homeless students. Traditionally, self-administered surveys have been used to collect data from parents and caregivers. However, response rates have been very low, primarily because high residential mobility makes these parents hard to reach. As the HEP is designed to address the needs of homeless students and their families, the input of parents is vital to the ongoing development and administration of the program. The purpose of this roundtable is to gain feedback from colleagues about the methods currently used to administer the parent surveys and to explore new strategies that might increase participation. |
| Session Title: Quantifying Threats to Validity |
| Panel Session 595 to be held in Santa Monica on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Chair(s): |
| Patrick McKnight, George Mason University, pem725@gmail.com |
| Abstract: Evaluators often face threats to validity, whether from relatively weak non-randomized designs or from strong designs that deteriorate through failed randomization, unexpected environmental changes, or other problems. Unfortunately, we do not know how large an effect these threats impose on our findings. The purpose of this panel is to discuss several studies aimed at estimating the largest possible effect of several threats to validity, including selection bias, testing effects, and statistical artifacts. By estimating the largest effects these threats can produce, we can all prioritize our efforts to protect against the largest and most likely threats to validity. |
| |||
| |||
|
| Session Title: Faults Everywhere: An Introduction to Fault Tree Analysis (FTA) |
| Skill-Building Workshop 596 to be held in Sunset on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Needs Assessment TIG |
| Presenter(s): |
| James W Altschuld, The Ohio State University, maltschuld1@columbus.rr.com |
| Hsin-Ling Hung, University of North Dakota, sonya.hung@und.edu |
| Yi-Fang Lee, National Chi Nan University, ivanalee@ncnu.edu.tw |
| Abstract: Most evaluators have heard of Fault Tree Analysis (FTA) but may have little familiarity with what it is and its key features. FTA is used in needs assessment in two ways. The first, after needs have been prioritized, is to help pinpoint their causes in order to design solution strategies with a greater likelihood of success. The focus is on paths of failure and on critical path elements to be eliminated or reduced in their causal power. The second is when the question is asked, "How might an already structured solution strategy fail?" The workshop begins with an overview of needs assessment that highlights where FTA and causal analysis fit in and are used. Causal techniques will be surveyed, followed by an emphasis on FTA, a hands-on exercise, and participant discussion and comment. |
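As a taste of FTA's quantitative side, the sketch below combines hypothetical basic-event probabilities through OR and AND gates to estimate the probability of a top event, here a solution strategy failing. The events, numbers, and tree structure are invented for illustration, and the basic events are assumed independent.

```python
# Hedged sketch of fault tree arithmetic under independence assumptions.

def or_gate(*ps):
    """Fails if ANY input fails: 1 minus the product of survival probs."""
    survive = 1.0
    for p in ps:
        survive *= (1.0 - p)
    return 1.0 - survive

def and_gate(*ps):
    """Fails only if ALL inputs fail."""
    fail = 1.0
    for p in ps:
        fail *= p
    return fail

# Hypothetical basic events for a training-program solution strategy.
no_staff_buyin = 0.10
weak_materials = 0.05
funding_cut = 0.02
backup_funding_fails = 0.30

# Funding fails only if the cut happens AND the backup also falls through.
funding_failure = and_gate(funding_cut, backup_funding_fails)
top_event = or_gate(no_staff_buyin, weak_materials, funding_failure)
print(f"P(strategy fails) = {top_event:.3f}")  # 0.150 with these inputs
```

Reading the tree bottom-up shows which paths of failure dominate; in this invented example, staff buy-in is the critical element to address first, matching the abstract's emphasis on reducing the causal power of critical path elements.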
| Session Title: Exploring New Roles and Responsibilities in Educational Evaluation |
| Multipaper Session 597 to be held in Ventura on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Catherine Nelson, Independent Consultant, catawsumb@yahoo.com |
| Discussant(s): |
| Tom McKlin, The Findings Group, tom@thefindingsgroup.com |
|