2011

Session Title: Strategies to Improve Quality of Mixed Methods Evaluations
Multipaper Session 551 to be held in Avalon A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Mixed Methods Evaluation TIG
Chair(s):
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Qualitative Comparative Analysis as an Evaluation Tool
Presenter(s):
Emmeline Chuang, University of South Florida, echuang@mail.sdsu.edu
Jennifer Craft Morgan, University of North Carolina, Chapel Hill, craft@email.unc.edu
Abstract: Evaluating new programs, partnerships, and/or collaboratives can be challenging, particularly when the number of cases is too small to be analyzed quantitatively but too large for comparative case study analysis. This paper introduces qualitative comparative analysis (QCA) as a technique for systematically assessing cross-case commonalities and differences in moderately sized samples. The utility of QCA is illustrated using data from the national, mixed-methods evaluation of seventeen frontline health workforce development programs implemented by diverse partnerships of health care employers, educational institutions, and other organizations, including workforce intermediaries. Two examples are provided: In the first, QCA is applied to mixed methods data to identify the effect of employers' use of high performance work practices on worker-level outcomes. In the second, QCA is applied to qualitative data to determine how partnership composition influenced programmatic outcomes.
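For readers unfamiliar with QCA's truth-table logic, the minimal sketch below (Python with pandas) shows how cases coded on binary conditions can be grouped into configurations and checked for consistency with an outcome. The condition names, codings, and cases are hypothetical illustrations, not the evaluation's actual data.

```python
# Minimal QCA-style truth table on hypothetical data (not the study's data).
import pandas as pd

# Each row is a case (e.g., a workforce development partnership);
# 1 = condition present, 0 = absent; "outcome" is the result of interest.
cases = pd.DataFrame(
    {
        "employer_led":     [1, 1, 0, 1, 0, 0, 1],
        "has_intermediary": [1, 0, 1, 1, 0, 1, 0],
        "hpwp_adopted":     [1, 1, 0, 1, 1, 0, 0],
        "outcome":          [1, 1, 0, 1, 1, 0, 0],
    },
    index=[f"case_{i}" for i in range(1, 8)],
)

conditions = ["employer_led", "has_intermediary", "hpwp_adopted"]

# Group cases into configurations of conditions and compute how consistently
# each configuration is associated with the positive outcome.
truth_table = (
    cases.groupby(conditions)["outcome"]
    .agg(n_cases="size", consistency="mean")
    .reset_index()
    .sort_values("consistency", ascending=False)
)
print(truth_table)
```

Configurations with high consistency (close to 1.0) are candidates for the cross-case commonalities that QCA then minimizes into simpler solution terms; a full analysis would typically use dedicated QCA software rather than this hand-rolled table.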
Content Validity Using Mixed Methods Approach: Its Application and Development Through the Use of a Table of Specifications Methodology
Presenter(s):
Isadore Newman, Florida International University, newmani@fiu.edu
Janine Lim, Berrien RESA, janine@janinelim.com
Fernanda Pineda, Florida International University, mapineda@fiu.edu
Abstract: There is a paucity of literature on content validity (logical validity) and therefore a need to improve procedures for estimating its trustworthiness. This article presents four unique examples of the interpretation and application of tables of specifications (ToS) for estimating content validity. To yield a good content validity estimate, the ToS must also have estimates of reliability; the procedures presented, including Lawshe's (1975) Content Validity Ratio and Content Validity Index and expert agreement estimation procedures, would enhance both. The development and the logic of the ToS require presenting evidence for the transparency and trustworthiness of the validity estimates by maintaining an audit trail as well as through triangulation, expert debriefing, and peer review. An argument is presented that content validity requires a mixed methods approach, since the data are developed through both qualitative and quantitative methods that inform each other. This process is iterative and provides feedback on the effectiveness of the ToS through consensus.
Balancing Rigor and Relevance in Educational Program Evaluation
Presenter(s):
Rebecca Zulli, University of North Carolina, Chapel Hill, rzulli@unc.edu
Adrienne Smith, University of North Carolina, Chapel Hill, adrsmith@email.unc.edu
Gary Henry, University of North Carolina, Chapel Hill, gthenry@unc.edu
David Kershaw, Slippery Rock University, dckersh@email.unc.edu
Abstract: Over the past decade there has been great debate amongst evaluation professionals regarding experimental designs and their appropriateness for educational settings. We designed our evaluation of a three-year pilot program implemented in five rural North Carolina school districts to address our own gold standard: maximizing both rigor and relevance. The program was designed to improve recruitment and retention through performance incentives, to improve skills and practice via professional development, and to offer quality afterschool programming. This evaluation necessitated a design that examined program theory, scrutinized implementation, provided timely formative information, and, ultimately, provided summative information regarding program impact on student outcomes. This paper presents how the completed evaluation met the requirements of both rigor and relevance by incorporating 1) logic modeling; 2) qualitative methods including interviews, focus groups and observations; 3) descriptive analyses of survey and participation data; and 4) rigorous analytic strategies including propensity-score matching and value-added models.
Evaluation of the New York City Health Bucks Farmers' Market Incentive Program: Demonstrating the Value of Stakeholder Input for Evaluation Design, Implementation, and Dissemination
Presenter(s):
Yvonne Abel, Abt Associates Inc, yvonne_abel@abtassoc.com
Jessica Levin, Abt Associates Inc, jessica_levin@abtassoc.com
Leah Staub-DeLong, Abt Associates Inc, leah_staub-delong@abtassoc.com
Sabrina Baronberg, NYC Department of Health and Mental Hygiene, sbaronbe@health.nyc.gov
Lauren Olsho, Abt Associates Inc, lauren_olsho@abtassoc.com
Deborah Walker, Abt Associates Inc, debbie_walker@abtassoc.com
Jan Jernigan, Centers for Disease Control and Prevention, ddq8@cdc.gov
Gayle Payne, Centers for Disease Control and Prevention, hfn5@cdc.gov
Cheryl Austin, Abt Associates Inc, cheryl_austin@abtassoc.com
Cristina Booker, Abt Associates Inc, cristina_booker@abtassoc.com
Jacey Greece, Abt Associates Inc, jacey_greece@abtassoc.com
Erin Lee, Abt Associates Inc, erin_lee@abtassoc.com
Abstract: The effectiveness of an evaluator can be greatly influenced by the values or principles guiding an evaluation. This presentation highlights the value placed on stakeholder input as an essential component of conducting a CDC-funded evaluation of the New York City Health Bucks program, an initiative designed to increase access to and purchase of fresh fruits and vegetables in three underserved NYC neighborhoods. During both the formative and implementation phases of the evaluation, input from key stakeholders was used to refine and enhance data collection methods. In the dissemination phase, lessons learned throughout the evaluation were combined with findings from key informant interviews conducted specifically to inform the design of an online evaluation toolkit targeting broad stakeholder needs. This session demonstrates the value of collecting stakeholder input for implementing a mixed methods evaluation and showcases the format and content of the toolkit for providing meaningful information back to stakeholders.
Using Relational Databases for Earlier Data Integration in Mixed-methods Approaches
Presenter(s):
Natalie Cook, Cornell University, nec32@cornell.edu
Claire Hebbard, Cornell University, cer17@cornell.edu
William Trochim, Cornell University, wmt1@cornell.edu
Abstract: This paper discusses the challenges of data management and analysis in a mixed-methods research project. The focus of the paper is the use of a single MS Access database to allow for both integrated data management and efficient integrated analysis. Modern evaluation teams face a growing number of challenges, and the technology available to address those challenges is often only marginally sufficient to manage the complexity it creates, especially when quantitative and qualitative data are integrated in the analysis. Communication is always critical and, as in this case, is even more challenging when team members are geographically dispersed.
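As an illustration of the kind of integration the paper describes, the sketch below uses Python's built-in sqlite3 module as a stand-in for a single MS Access database; the table names, fields, and sample values are hypothetical, not drawn from the project.

```python
# Sketch: quantitative scores and qualitative codes in one relational database,
# so integrated analysis can start with a join rather than exporting and
# re-merging separate files. Tables and values are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE participants (id INTEGER PRIMARY KEY, site TEXT);
CREATE TABLE survey_scores (participant_id INTEGER, scale TEXT, score REAL,
    FOREIGN KEY (participant_id) REFERENCES participants(id));
CREATE TABLE interview_codes (participant_id INTEGER, code TEXT, excerpt TEXT,
    FOREIGN KEY (participant_id) REFERENCES participants(id));
""")

cur.executemany("INSERT INTO participants VALUES (?, ?)",
                [(1, "Site A"), (2, "Site B")])
cur.executemany("INSERT INTO survey_scores VALUES (?, ?, ?)",
                [(1, "satisfaction", 4.2), (2, "satisfaction", 3.1)])
cur.executemany("INSERT INTO interview_codes VALUES (?, ?, ?)",
                [(1, "barriers", "Scheduling was the main obstacle."),
                 (2, "barriers", "Funding delays slowed the rollout.")])

# A single join brings both data types together for integrated analysis.
for row in cur.execute("""
    SELECT p.site, s.score, i.code, i.excerpt
    FROM participants p
    JOIN survey_scores s ON s.participant_id = p.id
    JOIN interview_codes i ON i.participant_id = p.id
"""):
    print(row)

conn.close()
```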

Session Title: Strengthening Value Through Evaluation Capacity Building
Panel Session 552 to be held in Avalon B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Karen Debrot, Centers for Disease Control and Prevention, kdebrot@cdc.gov
Discussant(s):
Karen Debrot, Centers for Disease Control and Prevention, kdebrot@cdc.gov
Abstract: The Centers for Disease Control and Prevention (CDC) awards a large proportion of its budget to state and local agencies and non-governmental organizations to accomplish its mission of promoting health. Evaluation of CDC-funded activities is an important part of demonstrating accountability for taxpayer dollars. However, funded partners vary in their capacities to evaluate their programs. To address this challenge, CDC has developed different systems to build funded partners' evaluation capacities to help improve program planning, monitor progress, improve implementation, and document outcomes. The presentations in this panel will describe three CDC systems that target evaluation capacity building to increase outcome-level evaluation, to measure and track skills and competencies, and to coordinate evaluation across multiple skill levels. These systems demonstrate standardized, coordinated efforts to build skills that support the evaluation practice of funded partners. In this way, CDC ensures that stakeholders benefit from well-implemented programs and that the value of evaluation increases among partners.
Using Evaluation Technical Assistance to Enhance State Oral Health
Cassandra Martin, Centers for Disease Control and Prevention, cmfrazier@cdc.gov
Victoria M Beltran, Centers for Disease Control and Prevention, vbeltran@cdc.gov
Evaluation capacity building (ECB) at the state level is an important process for strengthening evaluation efforts. CDC's Division of Oral Health (DOH) has prioritized ECB through a proactive technical assistance (TA) plan to ensure that state oral health infrastructure programs are evaluated effectively. An integral part of ECB is helping state evaluators achieve a foundational set of competencies. Using data from the DOH Evaluation Self-Assessment Tool, the DOH TA plan focuses on moving evaluators along a competency continuum adapted from competencies from AEA, CDC, and the Joint Committee on Standards for Educational Evaluation. With the use of the DOH Evaluation TA toolkit, evaluators build evaluation skills, thereby strengthening state evaluation practice so that it becomes a valuable asset to all state oral health programs. This session will focus on the development of the core competencies, measurement to track progress through the competencies, and future use of ECB to enhance evaluation TA.
Guiding Asthma Partners in Learning and Growing Through Evaluation
Robin Shrestha-Kuwahara, Centers for Disease Control and Prevention, rbk5@cdc.gov
Amanda Savage Brown, Centers for Disease Control and Prevention, abrown2@cdc.gov
Sarah Gill, Centers for Disease Control and Prevention, sgill@cdc.gov
Paul Garbe, Centers for Disease Control and Prevention, plg2@cdc.gov
The National Asthma Control Program is undertaking a collaborative and comprehensive process to build evaluation capacity with its funded state grantee partners. Since evaluation experience among the state asthma programs varied widely, a flexible system needed to be established. To support state efforts in carrying out this new evaluation vision, a team of Evaluation Technical Advisors (ETAs) was established to provide direct technical assistance. The presenter will share how guidance materials, including Learning and Growing through Evaluation, were developed to assure that the ETAs provided coordinated assistance in developing strategic plans and implementing evaluation activities across state programs. The presenter will discuss the evolving role of the ETAs in providing direct support to states and how Learning and Growing is used to ensure a programmatically-sound, data-driven approach to evaluation. Complementary strategies for providing technical assistance including conference calls, webinars, listserv postings, and site visits will also be described.
Moving Beyond Process: Helping School Health Partners Evaluate Outcomes
Elana Morris, Centers for Disease Control and Prevention, efm9@cdc.gov
The Division of Adolescent and School Health (DASH) has developed a comprehensive evaluation capacity building (ECB) system to address the needs of its funded partners. This system is based on a continuum of needs related to evaluation expectations that vary in complexity. During the past five years, DASH focused on building evaluation skills related to program planning and collecting and using process data. As partners refined their basic evaluation skills, DASH identified the need to provide more advanced support to partners ready to conduct outcome evaluation. The presenter will describe how DASH conducted a needs assessment to build upon its ECB system, how partners were selected for the first year of TA focusing on outcome evaluation, and the types of TA provided. The presenter will share the benefits of TA including increasing individual skill building and group learning, and improving partners' attitudes towards the value of evaluation in their work.

Session Title: Using Artistic Strategies in Collecting, Analyzing and Representing Evaluations
Skill-Building Workshop 553 to be held in California A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Data Visualization and Reporting TIG
Presenter(s):
Michelle Searle, Queen's University, michellesearle@yahoo.com
Lynda Weaver, Bruyere Continuing Care, lweaver@bruyere.org
Abstract: The field of evaluation is methodologically responsive and now features a spectrum of quantitative, qualitative, and mixed method approaches (e.g., Greene, 1999; Greene & Caracelli, 1997; House, 1993; McClintock, 2004). In many ways, evaluators have been leaders in methodological innovation, continually reshaping their practice to deal with complex questions. Given this willingness to consider a variety of questions and to be methodologically flexible, it is time to explore the value that arts-informed approaches to evaluation can offer alongside currently accepted orientations of evaluation. This skill-building workshop, grounded in theory on the arts in research and evaluation, provides hands-on exploration of arts techniques within evaluation practice. The workshop uses a simulated learning activity that unfolds over three phases to involve participants in arts-informed data collection activities, ways of analyzing art generated in an evaluation, and forms for representing data. No art experience is necessary, only a willingness to create and share ideas.

Session Title: Tiered Evaluation: Local Evaluators Operating Within the Context of a Cross-site Evaluation
Panel Session 554 to be held in California B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Patricia Campie, National Center for Juvenile Justice, campie@ncjj.org
Abstract: This session will explore the strategic and collaborative approach to tiered evaluation of the National Child Welfare Resource Centers (NRC) Evaluators Workgroup. Each evaluator in the workgroup is responsible for evaluating one of the eleven National Child Welfare Resource Centers in the training and technical assistance network coordinated by the Children's Bureau, and for participating in the national cross-site evaluation. This workgroup of local evaluators has addressed many strategic and methodological questions, including: How should the workgroup be organized? How can we best cooperate to minimize shared client burden? How can we capture intermediate outcomes when evaluating technical assistance? How do we adapt to changing initiatives? Which data collection methods should be shared, and which can be unique, across NRC evaluation plans? How can our work best contribute to the cross-site evaluation? This session will discuss these questions and explore the role of local evaluators working in conjunction with a cross-site evaluation team.
Creating a Successful Structure for Collaboration Among Local Evaluators
Anne George, National Center for Juvenile Justice, george@ncjj.org
The NRC Evaluators Workgroup is a collaborative group of evaluators of the Training and Technical Assistance Network coordinated by the Children's Bureau. This presentation discusses the context and structure of the NRC Evaluators Workgroup and how it operates within the context of the cross-site evaluation while balancing the needs of each specific NRC evaluation. The NRC Workgroup has facilitated discussions on strategies for data collection among shared clients, differences in evaluation design among local evaluators, and strategies for capturing short-term and intermediate outcomes of technical assistance. Increased collaboration, facilitated by the Children's Bureau, has created a community of evaluators that is better able to evaluate the many levels of the Training and Technical Assistance Network and to provide information important to providers and customers. Practical examples of how the local evaluators have successfully collaborated through technology and strategic face-to-face meetings, and lessons learned about multi-level evaluation, will be provided.
Evaluation of Innovative and Dynamic Practice and Processes Through Collaboration
Sarah-Jane Dodd, City University of New York, sdodd@hunter.cuny.edu
This presentation discusses the challenges of evaluating practice as it evolves through innovations and offers suggestions for achieving successful outcomes within the context of cross-site evaluation collaboration. The presentation discusses the experiences of a group of local evaluators from the National Resource Centers (NRCs) coordinated by the Children's Bureau, who also work collaboratively to support the cross-site evaluation. Three significant innovations have been introduced to the NRCs' practice model (Implementation Science, Adaptive Leadership, and Business Process Mapping). While these practice innovations add a layer of complexity to the evaluation process, the practical and intellectual support created by the cross-site evaluation team provides an invaluable opportunity for learning and collaboration. The presentation demonstrates a suggested "best practice" for planfully and inclusively introducing innovations, allowing administrators, practitioners, and evaluators to develop and deepen their understanding together so that meaningful evaluation can continue to document outcomes and assist in practice improvement even as innovations occur.
Lessons Learned Through Collaboration to Develop Common and Unique Measures
Brad Richardson, University of Iowa, brad-richardson@uiowa.edu
The strategy of combining local evaluation with national (across-project) evaluation is becoming more common. Working within the context of collaboration among projects and with a national cross-site evaluation poses challenges for identifying and selecting common and unique measures. Strategies that have been employed effectively will be discussed, using website tracking and satisfaction as an example of how local and national evaluation collaboration can successfully measure results. Though such an evaluation may seem simple to conduct, multiple interests, uses, technologies, and perspectives add a variety of issues and a level of complexity. Improving standards of quality in evaluation theory, methods, and practice will be addressed through lessons learned from the experience of the child welfare National Resource Centers, which will be described to assist those who may undertake similar work. The presenter, Dr. Brad Richardson, has directed numerous projects involving local and national evaluation components.

Session Title: Living Into Developmental Evaluation: Reflections on Changing Practice
Panel Session 555 to be held in California C on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Hallie Preskill, FSG Social Impact Consultants, hallie.preskill@fsg.org
Discussant(s):
Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Abstract: Within the last few years, an increasing number of evaluators, nonprofits, and funders have been experimenting with more dynamic, responsive, and emergent forms of evaluation. While some have called these new approaches "adaptive" or "real-time," perhaps the most well-known is Developmental Evaluation, conceptualized and written about by Michael Q. Patton. This approach, which is particularly well suited for evaluating social innovations, or programs/initiatives/strategies that are not well-tested and where the outcomes cannot always be pre-planned or predicted, constitutes a fundamentally different role and set of competencies for evaluators. In this session, we will explore: a) what it means to engage in a Developmental Evaluation, b) how this approach is similar and/or different from formative and summative evaluations, and c) the challenges and opportunities for Developmental Evaluation in the field. Both the evaluators' and the clients' perspectives will be presented.
Strategy, Stamina, and Synergy: Essential Elements for Engaging in Developmental Evaluations
Hallie Preskill, FSG Social Impact Consultants, hallie.preskill@fsg.org
For the last two years, FSG has been working with the John S. & James L. Knight Foundation on a developmental evaluation of its Community Information Challenge project, a 5-year, $24m innovative initiative designed to foster local news and information and civic engagement through digital media. In this session, we will describe how the evaluation's questions, design and data collection methods, and reporting strategies have evolved over time, and how the evaluation has been responsive to emerging client interests and information needs. The presentation will include challenges of doing this kind of evaluation and opportunities for the field. In addition, we will reflect on how this developmental evaluation has been creating actionable knowledge for both foundation staff and external audiences through the dissemination of multiple evaluation products. We will also provide examples of how the evaluation findings have been used to inform and refine the foundation's strategy.
Building Developmental Evaluation Competencies: Challenges in Adaptation and Learning
Mayur Patel, Knight Foundation, patel@knightfoundation.org
In 2008, we launched the Knight Community Information Challenge to encourage community and place-based foundations to get involved in supporting news and information as a core community issue. Over the past two years, we have partnered with FSG to design an ongoing multi-year evaluation of the initiative to understand the extent to which foundations are increasingly engaging in activities that support community information and to track changes in the extent to which communities are better informed and engaged. The evaluation has involved different data collection methodologies and research outputs for multiple audiences, including our program teams, board, grantees and foundation peers. The presentation will highlight the organizational challenges involved in managing a developmental evaluation process, the internal capacities required to participate in ongoing adaptive learning, and what it takes, both on the evaluator and client side, for an external partner to be integrated into a foundation's internal work.
Using a Developmental Evaluation Approach to Create an Evaluation Partnership for the Healthy Weight Collaborative
Margaret Hargreaves, Mathematica Policy Research, mhargreaves@mathematica-mpr.com
Amanda Cash, United States Department of Health and Human Services, acash@hrsa.gov
Over the last year, Mathematica Policy Research has been working with the Health Resources and Services Administration (HRSA) and the National Initiative for Children's Healthcare Quality (NICHQ) to evaluate the Healthy Weight Collaborative (HWC), an innovative quality improvement (QI) initiative designed to spread clinical and community-based interventions that prevent and treat obesity among children and families. The HWC is adapting the Institute for Healthcare Improvement's (IHI) Breakthrough Series healthcare QI model for use in community-based learning collaboratives. Mathematica is using a developmental evaluation approach to work closely with HRSA and NICHQ evaluation and program staff to provide ongoing evaluation support as the IHI QI model is adapted for this community-based setting. In this session, Mathematica and HRSA staff discuss the challenges and opportunities of this fascinating and rapidly evolving project and evaluation. We will provide examples of how the evaluation's interim products have been used in the adaptation process.

Session Title: What Counts in Social Services Evaluation: Values and Valuing in Evaluation Practice
Panel Session 556 to be held in Pacific A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Human Services Evaluation TIG, the Social Work TIG, and the Presidential Strand
Chair(s):
Tracy Wharton, University of Michigan, trcharisse@gmail.com
Discussant(s):
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
Abstract: This panel session responds to the conference theme by exploring the role of values and valuing in social services evaluation. The panelists, accomplished evaluators representing different areas in social services evaluation and different regions of the United States, will present reflections on their professional experiences with values and valuing. Specific topics include values assumed in particular evaluation approaches and methods, valuing the interests of different and often conflicting stakeholders, value conflicts inherent to evaluation practice, and the appropriate use of data to meet both professional and ethical standards. Taken together, these reflections identify and examine significant issues in the field of social services evaluation. The discussant will draw on his broad experience in evaluation to place these issues into the larger context of evaluation theory and practice, and offer specific implications on values and valuing.
Evaluation Itself is a Value: Combining Effectiveness Research and Epidemiology in a Naturalistic Realist Evaluation
Mansoor Kazi, State University of New York, Buffalo, mkazi@buffalo.edu
It is unethical to provide a social service without evaluating its effectiveness, and yet agencies collect enormous amounts of data that they do not use for evaluation. These data can be de-identified and used in naturalistic, unobtrusive evaluations of 100% of cases, conducted at regular intervals in real time, thereby integrating evaluation into practice and enabling practice decisions to be informed by evidence. This is part of the realist evaluation paradigm, which examines patterns in the data among demographics, interventions, and outcomes to investigate what works, for whom, and in what contexts. The anonymity of service users is protected, and at the same time there is greater accountability from the agency. The value of evaluation lies in helping to develop more effective social services and providing evidence of their effectiveness on demand. This evaluation is done in partnership with stakeholders (e.g., analyzing their own data with them), enhancing the valuing of evaluation in society.
Value Conflicts Within Social Services: Framing the Questions for Quality, Value, and Importance
Louis Thiessen Love, Uhlich Children's Advantage Network, lovel@ucanchicago.org
The context of providing and evaluating social service programs is complex, involving a set of stakeholders with differing and potentially conflicting values. As evaluation is most often used to determine the "quality, value, or importance" of a social service program, it is appropriate to ask, "Whose values are informing these judgments?" Yet the very act of asking "whose values are in use?" is itself rooted in a value framework of equity and justice. House and Howe (1998, in AJE) propose a "deliberative democratic approach" and argue that evaluators should be "advocates for democracy and the public interest... an egalitarian conception of justice" (pp. 233-236). Using this value framework, value conflicts and how they affect evaluation priorities, implementation, and use are explored. This discussion will consider systems evaluation factors that address approaches to evaluating how well programs address value conflicts in delivering various social services.
Power and Values: The Role of Client Voice in the Evaluation of Human Services
Rob Fischer, Case Western Reserve University, fischer@case.edu
The evaluation of programs in the social services arena is unique with regard to the dynamic between clients and service providers. Often service recipients come from the most disenfranchised sub-populations within society and are at high levels of need and risk when they come into contact with services. The combination of these factors suggests that they are at particular risk for further disempowerment by the evaluation process. To guard against this, evaluation professionals need to take special care in crafting evaluation approaches to ensure that the client perspective is well represented. This presentation will illustrate methods and pitfalls relating to the inclusion of client perspectives in program evaluation. The presentation will propose tiers of client participation in evaluation based on the program context, client characteristics, and funder approach to evaluation, as a mechanism for ensuring maximum client representation.
Values and Politics in Evaluation: When Only Positive Results Will Do
Todd Franke, University of California, Los Angeles, tfranke@ucla.edu
Evaluation has been talked about as a rational enterprise designed to say something about programs, policies, or both. When done appropriately, the evaluation conclusions inform "decision-makers," enabling them to make informed and hopefully wise choices. However, in nearly all cases, evaluation takes place in a political context. The political context may range from very insular and local to much more regional or national, but it is almost omnipresent. Political considerations often intrude, and, whether explicitly recognized or not, values play a role throughout the process. This presentation will focus on how values (e.g., personal, instructional, political) influence the design, selection, and use/dissemination of results from the standpoint of decision-makers, who often reside much closer to the political arena.
A Conflict of Values? Social Services Evaluation in For-profit Organizations
Katrina Bledsoe, Education Development Center Inc, katrina.bledsoe@gmail.com
It is rare that one mentions social services and for-profit organizations in the same breath; social service provision and the evaluation of those services are usually associated with non-profit status. Yet many for-profit research and evaluation organizations typically conduct work on non-profit and altruistic agendas. Given the divergence between the two, how do for-profit organizations reconcile the values of a profit-driven business model with those of a non-profit agenda of altruism and selflessness? The presenter discusses the tug among business values (e.g., profit making, profit sharing), social services values (e.g., funneling funding into service provision), and clients who do not operate under or understand the business model. How do for-profit organizations and non-profit clients coalesce on a similar agenda? What process is used to agree on common values that suit both agendas? The presenter refers to her experiences working in for-profit organizations juxtaposed against conducting federally-funded social services evaluations.

Session Title: Extending the Value of Extension Work: Publishing Evaluation-Related Journal Articles
Panel Session 557 to be held in Pacific B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Allison Nichols, West Virginia University Extension, ahnichols@mail.wvu.edu
Abstract: Most of us would agree that it is important to publish evaluation-related articles in professional journals because doing so spreads the news about our good work, informs new programming, and establishes us as academic professionals. Many of us, however, are intimidated by the process of writing and submitting an article for publication. Some might even feel that journal editors will be less likely to react favorably to an article about an evaluation process than to an article based on a research project. Each panel member has had experience writing and publishing articles in peer-reviewed journals, both on the craft of evaluation and on individual evaluation projects. They will share their experiences and make recommendations about how best to get evaluation work into print.
Publishing Articles About the Craft of Evaluation
Marc Braverman, Oregon State University, marc.braverman@oregonstate.edu
Marc Braverman and three colleagues edited the Winter 2008 volume of New Directions for Evaluation, on evaluation in Extension, and, with Mary Arnold, he wrote an article in that issue on making decisions about evaluation rigor. He also published an article with Molly Engle in June 2009 in the Journal of Extension on theory and rigor in Extension program evaluation. In 2004, he edited a book entitled Foundations and Evaluation (Jossey-Bass). He will describe characteristics of the Extension system that make it a valuable backdrop for exploring evaluation theory and practice. He will also suggest that manuscript submissions will meet with greater success if they pay attention to the larger framework in which the Extension evaluation takes place, that is, the value the evaluation study may have for a wide range of journal readers.
Publishing Articles About Agriculture Program Evaluation
Rajinder Peshin, Sher-e-Kashmir University, rpeshin@rediffmail.com
In 2009, Rajinder Peshin and colleagues Jayaratne and Gurdeep Singh published an article entitled "Evaluation research: Methodologies for evaluation of Integrated Pest Management programs" in Integrated Pest Management: Dissemination and Impact (Springer, 2009). Also in 2009, he published an article entitled "Evaluation of the benefits of an insecticide resistance management programme in Punjab in India" in the International Journal of Pest Management, Vol. 55, No. 3, 2009, 207-220. In his presentation, he will emphasize that program evaluation is an important field in agricultural development because evaluation studies provide biological scientists with empirical feedback. However, most agricultural development programs lack evaluation mechanisms or are not based on robust evaluation designs. Dr. Peshin's experience with evaluation journals has not always been positive. He has found that evaluation journals are reluctant to publish articles related to applied program evaluation.
Publishing Articles About Quality of Service and Customer Satisfaction in Extension
Glenn D Israel, University of Florida, gdisrael@ufl.edu
In 2010, Glenn Israel published an article entitled "The influence of type of contact with extension on client satisfaction" in the Journal of Extension, 48(1). In 2009, he published articles entitled "The influence of agent/client homophily on adult perceptions about extension's quality of service" in the Journal of Southern Agricultural Education Research, 59(1), 70-80, and "Diverse market segments and customer satisfaction: Does extension serve all clients well?" in the Journal of International Agricultural and Extension Education, 16(1). He will discuss how studies of service quality and customer satisfaction can make an important contribution to the knowledge base for educational and service-based organizations. He will describe how such studies can be framed for extension-specific audiences to demonstrate methodological approaches and use of findings, as well as for application in the broader context, both geographically and topically. Israel will illustrate strategies for successfully publishing studies in journals internal and external to extension.
Publishing Articles About Family, Health, and Youth Development Evaluation
Allison Nichols, West Virginia University, ahnichols@mail.wvu.edu
Allison Nichols has published articles from her evaluation work, including a 2010 article entitled "Acceptance of body size and shape: The impact of health at every size programs" in The Forum for Family and Consumer Science Issues; "Examining youth camping outcomes across multiple states: The national 4-H camping research consortium," accepted for publication in 2011 in the Journal of Youth Development; and two 2009 articles, "Creating the eXtension Family Caregiving Community of Practice" in the Journal of Extension and "Parenting education programs: Recruiting and retaining low-income parents and family caregivers" in The Forum for Family and Consumer Issues. She will talk about discovering interesting findings from evaluation studies that inform understanding, theory, and practice in related fields. She will discuss how evaluation reports are not always publishable as is, but a piece of information that relates to other studies or theories might make all the difference in getting an article accepted.

Session Title: Master Teacher Series: Writing Effective Items for Survey Research and Evaluation Studies
Skill-Building Workshop 558 to be held in Pacific C on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Jason Siegel, Claremont Graduate University, jason.siegel@cgu.edu
Eusebio Alvaro, Claremont Graduate University, eusebio.alvaro@cgu.edu
Abstract: The focus of this hands-on workshop is to instruct attendees in how to write effective items for collecting survey research data. Bad items are easy to write; writing good items is more challenging than most people realize. Writing effective survey items requires a complete understanding of the impact that item wording can have on a research effort. Only through adequate training can good survey items be distinguished from bad ones. This 90-minute workshop focuses specifically on Dillman's (2007) principles of question writing. After a brief lecture, attendees will be asked to use their newly gained knowledge to critique items from selected national surveys.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Beyond Satisfaction: Revisiting the Use of Satisfaction Surveys in a Collaborative Evaluation
Roundtable Presentation 559 to be held in Conference Room 1 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Connie Walker, University of South Florida, cwalkerpr@yahoo.com
Michael Berson, University of South Florida, berson@usf.edu
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Abstract: Collaboration is the ability to actively work with others in a mutually beneficial relationship in order to achieve a shared vision not likely to occur otherwise. The level of collaboration varies for each evaluation and depends on the situation within the evaluation. Satisfaction is the fulfillment of a need or want; for the purposes of this evaluation, satisfaction was defined as how participants felt about their experiences with the training program. A satisfaction survey was used within a collaborative effort to capture participants' opinions. This roundtable will examine the contribution and role of the collaboration members throughout the administration of a survey when time and resources are limited. Specifically, it will address the importance of collecting satisfaction measures, problems in measuring satisfaction, and solutions for handling these situations.
Roundtable Rotation II: Long Distance Evaluations Using a Collaborative Approach
Roundtable Presentation 559 to be held in Conference Room 1 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Connie Walker, University of South Florida, cwalkerpr@yahoo.com
Abstract: Long-distance evaluations are those in which the client is located far away from the evaluator. The client could be in a different state or even in a different country. In these cases, the use of a collaborative approach is central to supporting an adequate evaluation process. Collaboration is the ability to actively work with others in a mutually beneficial relationship in order to achieve a shared vision. The level of collaboration varies for each evaluation and depends on the situation within the evaluation. The collaborative relationship between the evaluators and stakeholders is a key component in achieving the goals and objectives of long-distance evaluations. The purpose of this presentation is to discuss what needs to be taken into consideration to reduce difficulties and make these evaluations doable. The presentation is based on the evaluator's firsthand experience successfully conducting evaluations at a distance by incorporating a collaborative approach.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: We Actually Did It, and You Can Too: Creating a Culture of Learning and Evaluation in a Multi-service Nonprofit
Roundtable Presentation 560 to be held in Conference Room 12 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Isaac Castillo, Latin American Youth Center, isaac@layc-dc.org
Leah Galvin, Omni Institute, lgalvin@omni.org
Ann Emery, Latin American Youth Center, anne@layc-dc.org
Abstract: Creating a culture where nonprofit staff actually utilizes outcomes measurement and evaluation techniques on a regular basis is extremely challenging. The Latin American Youth Center (LAYC), a multi-service nonprofit in Washington, DC, is an example of an organization that has achieved this culture change. This roundtable will allow LAYC's Learning and Evaluation Division, the internal program evaluation group at LAYC, to share some of the lessons learned, challenges encountered, and techniques used during this multi-year process. We will discuss the initial steps we took to convince staff to embrace evaluation concepts, the growth of evaluation capacity within the organization, maintenance of the culture over time, and unexpected challenges.
Roundtable Rotation II: Evaluating the Development of Community in Communities of Practice
Roundtable Presentation 560 to be held in Conference Room 12 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Ruth Mohr, Mohr Program Evaluation & Planning LLC, rmohr@pasty.com
Abstract: There is growing interest in many sectors to use a community of practice approach for improving how work around a shared concern is done. Etienne Wenger, co-originator of the term, defines these communities as 'groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.' Such communities can benefit from attention to factors that can affect collective learning over time. This roundtable will explore potential criteria for assessing development of the community element of this approach for the purpose of improving the community's ability to support learning. Discussion starting points will be: member participation (e.g., self-management of knowledge needs, agreement on style of working together, learning orientation, and concern about quality of relationships), leadership (e.g., making working together as a community possible), and tools/processes that support the work (e.g., for communication, relationship building, and task completion).

Session Title: Evaluating Development Interventions in Conflict and in Food Security
Multipaper Session 561 to be held in Conference Room 13 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Gwen Willems, Willems and Associates, wille002@umn.edu
Evaluation of Mine Risk Education (MRE) in Nepal
Presenter(s):
Prabin Chitrakar, Ban Landmines Campaign Nepal, prabinc@gmail.com
Abstract: This paper reports on an internal evaluation of the Mine Risk Education (MRE) programs conducted by Ban Landmines Campaign Nepal (NCBL). A decade-long conflict in Nepal, which started in 1996, officially ended in November 2006 when the Maoists and the Nepal government signed the Comprehensive Peace Agreement. However, the conflict left Nepal with the problem of landmines and explosive remnants of war (ERW). Moreover, the greater threat was the large number of improvised explosive devices (IEDs) used by the Maoists. The unexploded devices left a continuing threat of civilian deaths and severe injuries. NCBL began conducting MRE programs in 2003 to minimize the risk of mines and IEDs to civilians. By the end of 2010, it had expanded MRE programs to 46 of Nepal's 75 districts. In 2011, NCBL conducted an internal evaluation to determine the effectiveness and impact of its MRE programs.
Evaluating Development Interventions Using Regression Discontinuities
Presenter(s):
Ron Bose, Milliman Consulting, ron.bose@rediffmail.com
Abstract: Decision-makers in developing nations are under increasing pressure to provide credible evidence on the effectiveness of publicly funded social assistance programs. Since access to a range of social services and benefits is usually need-based, a plausible evaluation strategy is the regression discontinuity design (RDD). In this paper we provide an overview of the RDD approach to policy evaluation, discussing its particular strengths as well as highlighting practical issues related to the operationalization of RDD. Of salience for public policy, the paper clarifies a particular strength of this study design that has hitherto been underappreciated in the literature: its unique ability to legitimate the rigorous use of both quantitative and qualitative methods of impact assessment to inform, and be informed by, the theory and practice of policy implementation in developing country settings. Finally, in light of the very rapid burgeoning of the RDD literature, we provide an "RDD Cheatsheet" that gives an easy-to-follow checklist for designing, conducting, and reporting evaluations based on this design.
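To make the basic RDD logic concrete, here is a minimal sharp-RDD sketch on simulated data in Python with statsmodels; the cutoff, bandwidth, effect size, and variable names are illustrative assumptions, not part of the paper or its checklist.

```python
# Minimal sharp regression discontinuity sketch on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
need_score = rng.uniform(0, 100, n)            # assignment variable (e.g., a need index)
cutoff = 50.0
treated = (need_score >= cutoff).astype(float)  # need-based eligibility rule
outcome = (
    0.03 * need_score          # smooth underlying relationship with need
    + 2.0 * treated            # true jump (treatment effect) at the cutoff
    + rng.normal(0, 1, n)
)

# Local linear regression within a bandwidth around the cutoff, allowing
# separate slopes on each side; the coefficient on `treated` estimates the jump.
bandwidth = 10.0
near = np.abs(need_score - cutoff) <= bandwidth
centered = need_score[near] - cutoff
X = np.column_stack([treated[near], centered, treated[near] * centered])
X = sm.add_constant(X)
model = sm.OLS(outcome[near], X).fit(cov_type="HC1")
print("Estimated effect at cutoff:", round(model.params[1], 3))
```

In practice, bandwidth choice, checks for manipulation of the assignment variable, and covariate balance tests are all part of a credible RDD, which is exactly the kind of guidance a checklist like the paper's cheatsheet would cover.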
Thematic Review of World Vision International's Food Security Programming in Africa
Presenter(s):
Apollo Nkwake, World Vision, nkwake@yahoo.com
Nathan Morrow, Tulane University, nmorrow@tulane.edu
Abstract: World Vision has a large Food Security programming portfolio in Africa. The organization is well known for its food-oriented programming and is the largest partner of the United Nations World Food Programme. WVUS is also a full-service food aid provider for bilateral food aid grants from the Office of Food for Peace and USDA. Beyond food aid, World Vision also has a large portfolio of nutrition and agriculture programs in Africa. The conceptual framework for this thematic review adopted two of World Vision's organizational goals expressed as Child Well-being Outcomes: 'Children are well nourished' and 'Parents and caregivers provide well for their children'. The thematic review's focus on how community-level programming translates to specific improvements in children's well-being showed mixed results. The review indicates that sector silos may contribute to limited observable improvements in children's well-being from a proportion of World Vision's investment in Food Security programming.

Session Title: The Healthy Relationships Approach in Intimate Partner Violence (IPV) Prevention: Turning Practice Into Evidence
Multipaper Session 562 to be held in Conference Room 14 on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Cathleen Crain, LTG Associates, partners@ltgassociates.com
Discussant(s):
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Abstract: This panel, based on the Robert Wood Johnson Foundation Strengthening What Works: Preventing Intimate Partner Violence in Immigrant and Refugee Communities (SWW) initiative, explores the evaluation of eight intimate partner violence (IPV) prevention projects in immigrant and refugee communities. The first presentation argues that this initiative represents an opportunity to build community-based collaborations in which the relationship between evidence-based practice and practice-based evidence becomes complementary. We propose to do this by means of "Learning Collaboratives" in which grantee organizations work together to formulate questions, test models, analyze data, and potentially reshape the field of IPV prevention. The second presentation explores SWW grantee efforts to build and evaluate healthy relationship education for refugee and immigrant populations. We describe the trajectories that brought the grantees to healthy relationship education and the associated evaluation challenges. The third presentation explores the hypothesis that healthy relationship education is a primary prevention response that will address some forms of violence but not all.
Closing the Research Gap in IPV Prevention: Turning Practice Into Evidence Using Community-based Learning Collaboratives
Alberto Bouroncle, LTG Associates, abouroncle@ltgassociates.com
To expand the knowledge of effective IPV prevention, methods should be developed to evaluate community level interventions that have not been tested systematically, shifting the paradigm from evidence-based practice to practice-based evidence. Practitioners at the community level, organized around Learning Collaboratives, will be able to work together and collect data to generate research questions that may influence research agendas and policy making. The RWJF's Strengthening What Works (SWW) will provide a testing ground for the development of Learning Collaboratives addressing the prevention of IPV in immigrant and refugee communities. The wide range of SWW prevention strategies suggests that grantees will be leading these Learning Collaboratives focusing on issues such as the role of culture and language, working with youth using healthy relationship models, working with men using a popular education approach, or working with the LGBTQ community in issues of IPV prevention.
Healthy Relationship Curricula for Immigrants and Refugees: Practice and Evidence
Greta Uehling, LTG Associates, guehling@ltgassociates.com
All of the RWJF Strengthening What Works grantees are currently creating, revising, and implementing curricula that include material on healthy relationships. Many of them have taken this approach as a way of bypassing the stigma associated with IPV and offering project participants knowledge and skills that contribute to the primary prevention of IPV. Evaluating healthy relationship curricula for refugees and immigrants presents a number of challenges. First, how do we account for the immense effect that different facilitators bring to curriculum implementation? Second, what evaluation tools can best overcome formidable barriers of language and literacy to bring promising practice to evidence? Finally, grantees have adapted mainstream material to fit specific target populations. How do we evaluate the extent of that fit considering that the target is continually shifting, and will these curricula be effective with other populations?
What Does Healthy Relationship Education Prevent? Prevention and Typologies of Intimate Partner Violence
Carter Roeber, LTG Associates, croeber@ltgassociates.com
Formative evaluation often requires evaluators to raise questions and conduct new research that looks at an issue from a new perspective. In order to prevent IPV, grantees in Strengthening What Works (SWW) have developed practical responses for teaching people about healthy relationships. To assist SWW grantees in building their own evaluation capacity and to evaluate their work, LTG is exploring the connections between healthy relationships and IPV prevention more thoroughly. One key issue to explore is whether curricula and training in healthy relationships can prevent the serious kinds of IPV that require the most attention and services, or whether they will have a more general effect on the quality of life for couples and communities. We hypothesize that healthy relationship curricula can be effective primary prevention, but they will not prevent more devastating coercive controlling relationships.

Session Title: Pass the Aspirin: When Projects Become Headaches
Panel Session 563 to be held in Avila A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Evaluation Use TIG
Chair(s):
Mary Anne Sydlik, Western Michigan University, mary.sydlik@wmich.edu
Abstract: Science and Mathematics Program Improvement (SAMPI) at Western Michigan University currently has 25 evaluation projects, seven projects out for review, and six in the early stages of development with potential clients. Members of the SAMPI evaluation team will address challenges that can arise 1) during the pre-submission proposal/project development phase; 2) while trying to coordinate evaluation and project activities with another organization; and 3) when clients' expectations change mid-course in ways that exceed the evaluation budget and the evaluator's time and energy, and cost overruns threaten to shut down the evaluation before it can be completed.
So They Want You to Serve as Evaluators on Their Project?
Mary Anne Sydlik, Western Michigan University, mary.sydlik@wmich.edu
The pre-proposal phase of an evaluation brings to mind the five stages of grief. Denial: Misgivings about the PI, budgetary limitations, and hollow-sounding promises of a quick turnaround on goals and a time frame are ignored as you hear yourself agreeing to collaborate. Anger: You want us to do all of that with this budget? Where are those goals and the time frame? Bargaining: Evaluating the project, which sounds quite interesting, might be doable if the PI is willing to increase the budget slightly. Maybe daily calls to the PI will dislodge a rough draft of the goals and a wild guess at the timeline in time for a firm budget and evaluation plan to be devised. Depression: The proposal will never get submitted. Acceptance: We agreed to serve as evaluators, so we go along with an unrealistic budget and write a quality evaluation plan at the last minute.
So the Partnerships Have Gone Awry?
Kristin Everett, Western Michigan University, kristin.everett@wmich.edu
This presentation will explore the headaches that occur between program partners and the evaluator's role in fixing broken partnerships. Developing and implementing a program usually requires different groups. Although different organizations and agencies bring expertise to a program, complications can occur with so many viewpoints working on the same project. Issues of power, division of labor, differing measures of success, and lack of communication are some of the problems that a project can find itself in when bringing people together. Sometimes an evaluator finds a project in the midst of a floundering or broken partnership. The evaluator may be required to mediate problems between partners, offer suggestions to improve the partnership, and examine the effectiveness of the partnership. The evaluator's role expands from exploring the effects of the program to also evaluating the effectiveness of the partnership. These issues will be explored during this presentation.
So Their Expectations Exceed Your Budget and Resources?
Robert Ruhf, Western Michigan University, robert.ruhf@wmich.edu
So you have agreed to serve as evaluators even though there is an unrealistic budget. What do you do now that expectations are exceeding your resources? This presentation will focus on a specific example that SAMPI dealt with. Three things happened with this project that caused SAMPI to exceed its budget: PIs asked us to collect pre-program baseline data (even though this was not in the original proposal), they asked us to add several questions to a pre/post survey that provided interesting information to the PIs but were not useful within the context of the evaluation (which made the surveys exceedingly long, creating an increase in printing and data entry costs), and they allowed twice the number of participants into the program than originally proposed. The presenter will lead attendees in a discussion involving the following two questions: How would you address these issues? How did SAMPI address these issues?

Session Title: Values and Valuing: The Core of Professional Practice
Demonstration Session 564 to be held in Avila B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Martin Reynolds, Open University, m.d.reynolds@open.ac.uk
Abstract: Evaluation is regarded by many practitioners as a profession. But does it deserve that status? What is at the core of professional practice, and how well do evaluators walk the distance from being a tradesperson doing a client's bidding (serving 'wants') to being a professional with wider, more responsible concerns (identifying and iterating on 'needs')? The presenters argue that how evaluators handle the implicit and explicit values underpinning the evaluation task is at the core of professional practice. This session demonstrates how core ideas drawn from the field of systems thinking in practice provide a means of exploring key value judgments made in an evaluation and the consequent boundaries of professional evaluation practice.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Increasing the Cultural Relevance of Evaluation in Informal Settings
Roundtable Presentation 565 to be held in Balboa A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Jill Stein, Institute for Learning Innovation, stein@ilinet.org
Shelly Valdez, Native Pathways, shilaguna@aol.com
Joe Heimlich, Ohio State University, heimlich@ilinet.org
Abstract: This roundtable discussion will build upon the authors' work in evaluating informal learning settings as experienced by 'underrepresented' or 'minority' groups, and will explore the role of cultural or community-based values in shaping evaluation practice and thereby rendering evaluations focused on these groups more meaningful and valid. In order for evaluation to be most useful and relevant - particularly within communities outside the mainstream culture that has so far had the most influence on the evaluation field - evaluators need to find ways to ensure that evaluative frameworks, measures of success, methodologies, data collection tools, and analysis approaches have 'ecological validity' within the contexts and communities in which they are working. The authors will briefly present on recent evaluation work that highlights these areas and then will facilitate a discussion focused on how evaluators can refine our practice to better reflect diverse cultural contexts, especially those that are different from our own.
Roundtable Rotation II: Linking Developmental Evaluation and Social Justice
Roundtable Presentation 565 to be held in Balboa A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Anna Madison, University of Massachusetts, Boston, anna.madison@umb.edu
Abstract: Social justice is cited as the mission of many human services organizations serving society's most oppressed populations. Yet few can explain how their programs advance the mission of social justice. To the contrary, in most cases these programs address the symptoms of social injustice rather than the causal conditions that created the need for perpetual support to maintain daily living. This roundtable raises questions regarding partnerships between evaluators and community-based human services organizations to align program design with social justice goals. Drawing from Michael Patton's developmental evaluation premise that evaluators' involvement in program design and development could contribute to improving the effectiveness of programs, the roundtable focuses on clarifying the alignment between human service programming and creating a more just and democratic society. Hopefully, participants in this roundtable will form a network to test ideas leading to evaluation theory development and to advance movement toward more effective evaluation use.

Session Title: Yes, Money Matters! A Conversation With the Stakeholders and Evaluators of Winning Play$: A Financial Literacy Program for High School Students
Panel Session 566 to be held in Balboa C on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Pamela Frazier-Anderson, Frazier-Anderson Research & Evaluation, pfa@frazier-anderson.com
Abstract: Stacey Tisdale, author and on-air financial journalist, in partnership with National Football League Hall of Famer Ronnie Lott's foundation All Stars Helping Kids, created Winning Play$, a financial literacy program for high school students. Ms. Tisdale, the program's developer, will describe the program and outline the rationale for requiring the evaluation of a financial literacy pilot program for high school students living in underserved communities. The panel will also discuss what private-sector funders value as they increasingly rely on evaluation in deciding how to allocate funds to non-profit organizations such as Winning Play$. Finally, the evaluators of the Winning Play$ program will discuss the evaluation methods used to address not only the primary stakeholders' needs, but also the needs and values of the stakeholder groups in the Winning Play$ financial literacy program and the larger community.
Winning Play$: Money Skills for Life
Stacey Tisdale, On-Air Financial Journalist, stis1@aol.com
After completing a six-year study on financial behavior, financial journalist Stacey Tisdale found that a combination of social messages from advertisers and the media, as well as race and gender stereotypes, drives financial behavior. She used this information to create the Winning Play$ financial literacy program for high school students. This presentation will first provide a brief overview of the program. It will then discuss the need and rationale for an evaluation component, as the pilot program serves students from diverse racial/ethnic backgrounds and in underserved communities. Finally, this presentation will address the importance of understanding what partners in the private sector value pertaining to the evaluation of financial programs such as Winning Play$.
Counting the Costs: The Value of Using Culturally Responsive Evaluation for a Financial Literacy Program in Underserved Communities
Pamela Frazier-Anderson, Frazier-Anderson Research & Evaluation, pfa@frazier-anderson.com
Dominica McBride, The HELP Institute Inc, dmcbride@thehelpinstitute.org
Khawla Obeidat, University of Colorado, Denver, khawla.obeidat@ucdenver.edu
One's culture and context are inextricably linked to the psychology of personal finance. In other words, culture must be considered when examining financial decision making and the effectiveness of financial literacy programs for people of minority or subjugated cultures and people in high-poverty areas. The evaluators will discuss how Culturally Responsive Evaluation methods were used to evaluate the Winning Play$ financial literacy pilot program and discuss future directions for the evaluation of financial literacy projects using Culturally Responsive Evaluation.
Stages of Change: How the Transtheoretical Model Shapes Outcome Evaluation
Sara Johnson, Pro-Change Behavior Systems Inc, sjohnson@prochange.com
Traditional approaches to outcome evaluation have relied almost exclusively on an all-or-none conceptualization of whether a particular behavior has been achieved, ignoring the continuum of behavior change. The Transtheoretical Model of Behavior Change (TTM) re-conceptualizes behavior change as a process that unfolds over time in a series of stages. The evaluation design for the Winning Play$ program will illustrate how applying this model to outcome evaluation offers the opportunity to demonstrate the impact of programs on the entire target audience, rather than just those who have adopted the behavior change. Recommendations for evaluating stage progress and other key constructs of the TTM will be outlined to assist others in applying the TTM to their own evaluation planning.
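To make the contrast concrete, here is a minimal sketch, assuming hypothetical pre/post stage data (the column names, stage labels, and participant records are illustrative and are not drawn from the Winning Play$ evaluation), of how an all-or-none adoption rate differs from a stage-progression rate:

    import pandas as pd

    # Ordered TTM stages (illustrative labels)
    stages = ["precontemplation", "contemplation", "preparation", "action", "maintenance"]
    order = {s: i for i, s in enumerate(stages)}

    # Hypothetical pre/post stage assessments for five participants
    df = pd.DataFrame({
        "pre_stage":  ["precontemplation", "contemplation", "preparation", "contemplation", "action"],
        "post_stage": ["contemplation", "preparation", "action", "contemplation", "maintenance"],
    })
    df["pre_num"] = df["pre_stage"].map(order)
    df["post_num"] = df["post_stage"].map(order)

    # All-or-none view: only participants who reached action or maintenance "count"
    adopted = (df["post_num"] >= order["action"]).mean()

    # TTM view: any forward stage movement is measurable progress
    progressed = (df["post_num"] > df["pre_num"]).mean()

    print(f"Adopted the behavior: {adopted:.0%}; progressed at least one stage: {progressed:.0%}")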

Session Title: A Novel Approach to Monitoring and Evaluation of Community-based Disaster Risk Reduction Programs: A Collaboration Between the American Red Cross and Johns Hopkins University
Panel Session 568 to be held in Capistrano B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Dale Hill, American Red Cross, hilldal@usa.redcross.org
Abstract: The American Red Cross' Community Based Disaster Risk Reduction (CBDRR) programs aim to reduce the number of deaths, injuries, and socio-economic impacts from disasters by building safer, more resilient communities. These programs help build the skills of communities to identify risk and take action to prepare for, respond to, and mitigate potential disasters. The American Red Cross and Johns Hopkins Bloomberg School of Public Health have collaborated to develop a new evaluation approach for CBDRR programs. The approach is designed to measure the five domains of resilience that correspond to the Hyogo Framework: Governance, Risk Knowledge, Public Awareness, Risk Reduction, and Preparedness. By measuring resilience, the evaluation approach aims to assess changes in risk mitigation, preparedness, and response capacity as a result of Red Cross activities. This session will present the components of the evaluation approach and then explore the challenges and rewards of its application in Haiti and Asia.
Bridging Concepts and Realities on the Ground: Measuring Progress on Disaster Risk Reduction for Both Technical and Community Audiences
Shannon Doocy, Johns Hopkins Bloomberg School of Public Health, sdoocy@jhsph.edu
Dr. Shannon Doocy, Associate Professor at the Johns Hopkins Center for Refugee and Disaster Response (Bloomberg School of Public Health), brought both practical field experience and academic discipline to this new approach to evaluating community based disaster risk reduction (CBDRR) programs. She led the development of the conceptual basis of the approach; the translation of the Hyogo Framework resiliency domains into measurable indicators; and the piloting of the approach in Central America. Dr. Doocy will give an overview of the approach, covering its conceptual underpinnings, its application to CBDRR activities and outcomes, the tools and data collection activities, and the challenges of measuring resiliency and scoring communities on a resiliency scale. Following the presentations on the American Red Cross' expanded application of the approach in other countries, she may also address lessons learned from its application and the next steps in further developing the approach for wider application.
The Challenges of Applying This Novel Evaluation Approach to Programs in a Camp-based, Post-disaster Setting in Haiti
Gregg Friedman, American Red Cross, friedmang@usa.redcross.org
The American Red Cross implements a Community-based Disaster Risk Reduction (CBDRR) program in nearly 100 camps for persons displaced by the January 2010 earthquake. Evaluation activities using the new approach were recently launched for this CBDRR program, a first for the American Red Cross in an urban, post-disaster, displaced-person camp setting. Gregg Friedman, Monitoring and Evaluation Advisor for the American Red Cross, has been working with the Haiti program officers on their baseline survey and evaluation plans in many sectors, and has experienced firsthand the challenges of surveying the affected populations in the overcrowded camps. He is leading the effort to apply this CBDRR framework in Haiti and will present lessons learned thus far.
Across the Miles and Across the Sectors: Lessons Learned From Applications in Asia for Other Countries and Sectors
Dale Hill, American Red Cross, hilldal@usa.redcross.org
Soon after her arrival as Senior Monitoring and Evaluation Advisor at the Red Cross, Dale Hill participated in the review of proposals that culminated in this novel approach to evaluating disaster risk reduction projects. In this panel, she will present the plans and lessons learned thus far in expanding its application to selected Asian projects, which will cover one or more of the following: a) a recently launched project in Viet Nam for first responder training, early warning systems, emergency response posts, and community contingency funds; and b) in Indonesia, projects for branches of the National Red Crescent Society in hazard-prone areas, to fill gaps identified in needs assessments conducted after the earthquake in West Sumatra and the tsunami in Aceh. Ms. Hill will also reflect on some of the lessons from application of this framework that may apply to assessments, project design, and evaluation plans in other sectors.

Session Title: Evaluating the Impact of Programs Serving Youth
Multipaper Session 569 to be held in Carmel on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Melissa Rivera,  National Center for Prevention and Research Solutions, mrivera@ncprs.org
Through Their Eyes: An Evaluation of a School Based Mental Health Program
Presenter(s):
Marion Platt, Loyola University, Chicago, marionplatt1@aol.com
David Ensminger, Loyola University, Chicago, densmin@luc.edu
Abstract: This evaluation employed a Responsive Evaluation (Stake, 2004) approach in order to determine the program's outcomes, as well as the perceived values of the stakeholders (i.e., students, parents, and staff) most intimately involved with a school based mental health (SBMH) program. The results of the evaluation not only provided insights into the program's outcomes and value, but also helped to define the processes by which the program achieved these outcomes. Additionally, the evaluation allowed for the construction of a logic model that represented the stakeholders' perceptions of how the program operated to address the needs of stakeholders, and matched with a humanistic theory of the program. This evaluation approach differed from the more traditional experimental approaches used in science-to-service studies of SBMH research and evaluation. By using a case study method, stakeholders' views of the program became the foundation for determining the program's impact and value. Stake, R.E. (2004). Standards-based and responsive evaluation. Thousand Oaks, CA: Sage Publications.
An Outcome Evaluation of a Family Drug Court Model Aimed at Improving Child Well-Being and Permanency Outcomes for Children and Families Affected by Methamphetamine or Other Substance Abuse
Presenter(s):
Sonja Rizzolo, University of Northern Colorado, sonja.rizzolo@unco.edu
Helen Holmquist-Johnson, Colorado State University, helen.holmquist-johnson@colostate.edu
Abstract: The purpose of this program evaluation was to determine the effectiveness of integrated substance abuse, mental health, and community services to children and families in two neighboring Colorado counties. The Regional Meth Partnership, funded by the federal Children's Bureau, provided intensive services through a family treatment drug court model. The study explored outcome differences between families participating in the family treatment drug court and families who did not participate in this model. Program goals included increasing the safety, well-being, and permanency of at-risk children by providing a continuum of integrated services to those children, their parents and caregivers, and their families' support systems. Variables included demographics, child welfare, treatment, and family outcomes. Results of outcome data will be presented, including comparisons on the occurrence of child maltreatment, average length of stay in foster care, re-entries into foster care, timeliness of reunification, access to treatment, and retention in substance abuse treatment.

Session Title: Building Community Collaborative and Evaluator Evaluation Capacity to Measure Community and Systems Change
Panel Session 570 to be held in Coronado on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the AEA Conference Committee
Chair(s):
Evelyn Yang, Community Anti-Drug Coalitions of America, eyang@cadca.org
Abstract: Evaluation of community collaborative efforts is evolving. Previously, evaluation primarily focused on process (e.g., membership satisfaction and collaborative functioning) and distal outcomes (e.g., behavior change). However, there is a growing understanding that the process by which coalitions contribute to distal outcomes includes creating systems and community changes. Many collaboratives are struggling to measure systems/community changes related to long-term health outcomes and to answer the ultimate question of what value the collaborative provides in addressing community concerns. While this approach potentially has great value, both collaboratives and evaluators lack the tools, resources, and knowledge to incorporate this additional level of data and tracking into their existing evaluation efforts. This session will provide three examples of efforts to build both collaborative and evaluator capacity to measure and evaluate community/systems change initiatives. National and local evaluation perspectives will be presented, and there will be time for discussion of challenges and potential solutions to these obstacles.
National Efforts to Build Coalition and Evaluator Capacity to Track Community and Systems Change
Evelyn Yang, Community Anti-Drug Coalitions of America, eyang@cadca.org
This presentation will describe how Community Anti-Drug Coalitions of America (CADCA) and its National Coalition Institute (Institute) train and support both coalitions and coalition evaluators in comprehensive community coalition efforts. CADCA's Institute provides training and technical assistance to coalitions across America to improve their effectiveness at reducing substance abuse rates within their communities. This presentation will describe efforts to build capacity to track community/systems changes among evaluators and coalitions. The Institute will discuss the theoretical framework of the training it uses to build coalitions as community/systems change agents and a new initiative to create a community of practice to support coalition evaluators in their efforts to measure these important intermediate outcomes as part of a quality coalition evaluation. The intent of this presentation is to show how building both evaluator and coalition capacity is necessary to accurately understand the conditions under which coalitions can change distal public health problems.
Tools and Technology to Help Communities Track Community and Systems Change
Paul Evensen, Community Systems Group Inc, pevensen@communitysystemsgroup.com
This presentation will focus on the lessons learned by Community Systems Group (CSG), an evaluation consulting organization that works with a variety of community health collaboratives across the country. CSG uses a research-based evaluation model that emphasizes the importance of tracking intermediate outcomes of coalition efforts, including community and systems changes. The presenter will describe how tools and technology can be used to support local collaborative evaluation efforts. These include forms, diagnostic tools, and a local, customized, online database for all evaluation-related data, including process, intermediate, and distal outcome data, so that communities can analyze their contribution to improved community health.
Local Evaluation Consulting: What Does it Take to Change Evaluator and Community Collaborative Evaluation Practices
Ann Price, Community Evaluation Solutions Inc, aprice@communityevaluationsolutions.com
Community collaboratives and the evaluators whom they hire face many challenges in evaluating comprehensive community change efforts. Evaluating community collaboratives is much broader and more complex than single-focus program evaluation. Evaluation of collaboratives entails tracking a wide variety of data (coalition process and long-term outcomes), using data for ongoing decision making, and requires both qualitative and quantitative analysis skills. Furthermore, advising and managing the local evaluation while remaining mindful of national funders' goals is a challenge. The increasing pressure to track intermediate and long-term outcomes adds to the local evaluator's already full plate. Realistically, how do evaluators feasibly manage competing pressures within the limited budget available for local evaluations? This presentation will provide a local evaluation consultant's perspective on the challenges she has faced in incorporating evaluation best practices into her work, and the implications that the growing emphasis on systems evaluation and tracking community change has for her work.

Session Title: Quality Improvement in Health Care: Training Measurement and Reporting
Multipaper Session 571 to be held in El Capitan A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Wendy Yen,  College of Physicians and Surgeons of Ontario, wyen@cpso.on.ca
Designing Monitoring and Evaluation Systems That Make Good Use of Minimum Packages as Indicators of Quality Health Programming
Presenter(s):
Ian Membe, Centers for Disease Control and Prevention, membei@zm.cdc.gov
Ian Milimo, Pepfar Coordination Office, Zambia, milimoi@state.gov
Lungowe Mwenda, Centers for Disease Control and Prevention, mwapelal@zm.cdc.gov
Abstract: Minimum packages are not new. They have been used in various contexts as indices to ensure that programs provide services that can be legitimately counted as support or beneficiary 'reach'. In global HIV/AIDS programs they have provided a yardstick by which programs can be evaluated as successful. There is a concern, however, about how these measures relate to a program's overall monitoring and evaluation (M&E) system and its indicators. Do we include them as separate measures or as part of the M&E system? What about partners who cannot provide the whole package? Monitoring and evaluation systems need to be set up on a needs basis so that they respond to the specific felt needs of target beneficiaries. Evaluations then respond by measuring the program's response to those needs as well as the causal linkage between met needs and program impacts. The minimum package then becomes the recipient's minimum package.
Health Care Public Reporting: A High Stakes Evaluative Tool
Presenter(s):
Sasigant O'Neil, Mathematica Policy Research, so'neil@mathematica-mpr.com
John Schurrer, Mathematica Policy Research, jschurrer@mathematica-mpr.com
Christy Olenik, National Quality Forum, colenik@qualityforum.org
Abstract: Public reporting of health care quality performance measures has become a high-stakes game. However, the diversity in purposes, audiences, and data sources among public reporting initiatives can make it difficult to identify opportunities for coordination in pursuit of a national agenda for assessing, evaluating, and promoting health care quality improvement. To help identify such opportunities, we conducted an environmental scan of public reporting initiatives and their measures. Initiative characteristics included audience; geographic level; report dates; payer type; sponsor; organization type; and when public reporting began. Measures were mapped to a framework of national priorities and goals, as well as other conceptual areas of importance, such as cost and health condition. Measure characteristics such as data source, endorsement by the National Quality Forum, target population, and unit of analysis were also collected. A group of national leaders used the scan results to begin identifying a community dashboard of standardized measures.
Using a Program Evaluation Approach to Ensure Excellence in Physician Practice Assessments
Presenter(s):
Wendy Yen, College of Physicians and Surgeons of Ontario, wyen@cpso.on.ca
Rhoda Reardon, College of Physicians and Surgeons of Ontario, rreardon@cpso.on.ca
Bill McCauley, College of Physicians and Surgeons of Ontario, bmccauley@cpso.on.ca
Dan Faulkner, College of Physicians and Surgeons of Ontario, dfaulkner@cpso.on.ca
Wade Hillier, College of Physicians and Surgeons of Ontario, whillier@cpso.on.ca
Abstract: A key function of medical regulatory authorities is to ensure public/patient safety through development and implementation of programs to assess physician performance and competence. We describe the development of a conceptual model for physician assessment which represents a shift from a singular focus on 'valid and reliable' assessment tools to a framework that places equal importance on all 'components' of an assessment program. That is, while tool development is undoubtedly a key component in quality assessments, the new model places equal emphasis on other components of the assessment process that may also affect outcomes (e.g. assessor training, use of assessment reports by decision bodies). The movement from a measurement model to a program evaluation model represents a paradigmatic shift from a positivistic framework to one that recognizes the inherent complexities in health science research and illustrates how transparency, utility and mixed-methods approaches can also be used to achieve desired outcomes.
Evaluating the Quality of Quality Improvement Training in Healthcare
Presenter(s):
Daniel McLinden, Cincinnati Children's Hospital Medical Center, daniel.mclinden@cchmc.org
Stacey Farber, Cincinnati Children's Hospital Medical Center, stacey.farber@cchmc.org
Martin Charns, Boston University, mcharns@bu.edu
Carol VanDeusen, United States Department of Veterans Affairs, carol.vandeusenlukas@va.gov
Abstract: Quality Improvement (QI) in healthcare is an increasingly important approach to improving health outcomes, system performance, and safety for patients. Effectively implementing QI methods requires knowledge of methods for the design and execution of QI projects. Given that this capability is not yet widespread in healthcare, training programs have emerged to develop these skills in the healthcare workforce. In spite of the growth of training programs, limited evidence exists about the merit and worth of these programs. We report here on a multi-year, multi-method evaluation of a QI training program at a large Midwestern academic medical center. Our methodology will demonstrate an approach to organizing a large-scale training evaluation. Our results will provide the best available evidence for features of the intervention, outcomes, and the contextual features that enhance or limit efficacy.

Session Title: Reaping Randomized Results on the Ranch: Rigorous Impact Evaluation Designs and Preliminary Results From Agricultural Interventions in Three Millennium Challenge Corporation (MCC) Compact Countries
Multipaper Session 572 to be held in El Capitan B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Marc Shapiro, Millennium Challenge Corporation, shapiromd@mcc.gov
Abstract: The Millennium Challenge Corporation (MCC) is committed to conducting rigorous independent impact evaluations of its programs as an integral part of its focus on results. MCC expects that the results of its impact evaluations will help guide future investment decisions and contribute to a broader understanding in the field of development effectiveness. MCC's impact evaluations involve a variety of methods chosen as most appropriate to the context. This panel provides a brief overview of evaluation at MCC generally to frame the overall context. Next, the panel provides examples of evaluations being conducted in the agricultural sector across three MCA compact countries. The presenters will discuss the context, evaluation design, challenges involved in implementing these evaluations, and preliminary results. Although projects for MCC are designed to be context specific rather than to best allow cross-project/country comparisons, the panel will discuss lessons learned across and within countries from an evaluation design perspective.
Impact Evaluation at MCC: An Overview
Marc Shapiro, Millennium Challenge Corporation, shapiromd@mcc.gov
Mawadda Damon, NORC, damon-mawadda@norc.org
MCC was established in January 2004 with the objectives of promoting economic growth and reducing poverty by learning about, documenting and using approaches that work. MCC plans to complete 35 rigorous impact evaluations of international development projects over the next two years, and the rate of project evaluations likely will double in the following years. The results of these emerging evaluations are intended to shape the selection and design of future projects. This short presentation will provide a brief overview of impact evaluation at MCC.
Thinking Small: Impact of Business Services on the Economic Wellbeing of Small Farmers in Nicaragua
Anne Pizer, Millennium Challenge Corporation, pizerar@mcc.gov
The Rural Business Development (RBD) project aims to increase profits and wages in farms and non-farm businesses by providing technical and financial assistance to small and medium farms and agribusinesses to help them transition to higher profit activities. The impact evaluation will estimate the change in beneficiary household income and other welfare measures attributed to the project. The evaluation relies on the randomized sequencing (pipeline design) of 80 to 100 communities, with half selected randomly to receive services early and half later. Interim impact evaluation results (baseline and mid-term) found that the average increase in RBD household incomes is small (2 percent above the change in household income for those not yet receiving treatment) and not statistically different from zero. The very small magnitude of change in incomes may reflect the limited amount of time between the provision of services and the measurement of change.
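As an illustration of the interim comparison that a randomized pipeline design supports, here is a minimal difference-in-differences sketch using simulated data and the statsmodels package; the variable names, sample sizes, and effect sizes are invented for the example and are not taken from the RBD evaluation:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 400  # simulated households, half in early-treatment communities

    # Two observations per household: baseline (post=0) and mid-term (post=1)
    df = pd.DataFrame({
        "household": np.repeat(np.arange(n), 2),
        "early": np.repeat(rng.integers(0, 2, n), 2),  # 1 = randomized to receive services early
        "post": np.tile([0, 1], n),
    })
    df["income"] = (
        1000 + 50 * df["post"] + 20 * df["early"]
        + 25 * df["early"] * df["post"]                # the early-treatment effect we hope to detect
        + rng.normal(0, 200, len(df))
    )

    # Difference-in-differences: the interaction term estimates the early-treatment impact
    fit = smf.ols("income ~ early * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["household"]})
    print(fit.params["early:post"], fit.pvalues["early:post"])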
Sowing the Seeds for Impact Evaluation Success: Problems and Preliminary Results from a Georgian Agricultural Project
Marc Shapiro, Millennium Challenge Corporation, shapiromd@mcc.gov
The Agribusiness Development Activity in the Republic of Georgia awards grants to small farmers, farm service centers that serve communities, and value-adding enterprises. The impact evaluation examines the project's effects on income and job creation for farmers through a pipeline experimental design used across nine rounds of grantees. Those selected in the first random drawing received grants immediately, while others receive grants later. Farm service center and value-adding enterprise grantees are being evaluated by matching grant recipients to similar enterprises in the comparison group using propensity score matching. Data collection involves augmenting the Georgian Department of Statistics' household survey and using local contractors to collect household level information from direct and indirect beneficiaries. The confounds of military conflict and the financial crisis as well as project delays have required adjustments. Preliminary double difference results for the first seven rounds of grantees will be discussed.
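For readers unfamiliar with propensity score matching, the following is a minimal sketch using scikit-learn and simulated enterprise data; the covariates, outcome, and one-to-one matching choice are illustrative assumptions, not the actual design of the Georgia evaluation:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(1)
    n = 300

    # Simulated enterprises: grantees tend to be larger and longer established
    df = pd.DataFrame({
        "employees": rng.normal(20, 5, n),
        "years_operating": rng.normal(8, 3, n),
    })
    logit = -4 + 0.12 * df["employees"] + 0.10 * df["years_operating"]
    df["grantee"] = rng.random(n) < 1 / (1 + np.exp(-logit))
    df["income"] = 500 + 10 * df["employees"] + 40 * df["grantee"] + rng.normal(0, 50, n)

    # Step 1: estimate propensity scores from observed covariates
    X = df[["employees", "years_operating"]]
    df["pscore"] = LogisticRegression().fit(X, df["grantee"]).predict_proba(X)[:, 1]

    # Step 2: match each grantee to the nearest comparison enterprise on the propensity score
    treated = df[df["grantee"]]
    control = df[~df["grantee"]]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_control = control.iloc[idx.ravel()]

    # Step 3: compare outcomes across the matched groups
    print("Matched difference in income:",
          treated["income"].mean() - matched_control["income"].mean())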
Has Beans - Moving from Rice and Beans to Radishes and Bell Peppers: The Impact of High-Value Horticulture Training on Farmer Income in Honduras
Algerlynn Gill, Millennium Challenge Corporation, gillaa@mcc.gov
Varuni Dayaratna, NORC, dayaratna-varuni@norc.org
George Caldwell, NORC, jcaldwell9@yahoo.com
The MCC-funded Farmer Training and Development Activity in Honduras provided technical assistance to farmers to transition from subsistence crops to high-value horticultural crops for domestic sale and international export. The impact evaluation assesses the training's effects on household income and production levels, comparing farmers and communities who received training and those who did not. Double-difference estimates will be formulated using two approaches, one involving comparison of a randomly selected treatment and control group of communities and one using a model-based method, due to implementation challenges that necessitated adjustments to the evaluation plan. Results from the evaluation and how the methodology evolved to meet changing conditions on the ground will be discussed. Differences in "monitoring data" collected by the implementer and evaluation data collected through independent surveys also will be presented, to demonstrate how impact evaluations with counterfactuals can correct initial over-estimations of results.

Session Title: Data Across the Miles
Multipaper Session 574 to be held in Huntington A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Stephanie Beane,  Southeast AIDS Training and Education Center, sbeane@emory.edu
Discussant(s):
Margaret Lubke,  Utah State University, mlubke@ksar.usu.edu
Tracking Past Program Participants Online Versus Cyber-Stalking: The Ethical Fine Line
Presenter(s):
Samuel Held, Oak Ridge Institute for Science and Education, sam.held@orau.org
Mubina Schroeder-Khan, New York University, mubina.khan@orau.org
Abstract: Many programs define their success with short-term and long-term goals. In order to measure effects over time, or how long the impacts of a program experience endure beyond the treatment, evaluators must track program participants. The advent of the internet and social networks has made locating participants easier. But is it ethical to have people join a social network group in order to track them, even if joining is voluntary for the participant? How can you find participants long after treatment has ended and maintain contact? The authors will share their exploration of the ethical issues within the AEA Guiding Principles, the Program Evaluation Standards, and research ethics guidelines. We will share techniques implemented to find and form communities for program alumni for a federal workforce development program.
Strategies to Minimize the Effects of Reporting Errors Through Strategic Web-Survey Design in Program Evaluation
Presenter(s):
Martha Klemm, University of Massachusetts, Boston, martha.klemm@umb.edu
Kelly Haines, University of Massachusetts, Boston, kelly.haines@umb.edu
Abstract: Use of web-based surveys in evaluation is one strategy to improve the reliability and validity of evaluation data. This presentation addresses common problems in the survey-response process in program evaluation and describes three strategies to minimize the effects of reporting errors through strategic web-survey design. The strategies were used in surveys to collect data to evaluate the effectiveness of a training intervention targeted at community college faculty. Presenters will conclude with recommendations for implementing these strategies in evaluations.
Student and Teacher Use of Educational Technologies in High School Settings
Presenter(s):
Kimberley Miller, Texas A & M University, millerkim@svusd.org
Theresa Murphrey, Texas A&M University, t-murphrey@tamu.edu
Abstract: Given that computers and related technologies are updated, changed, and enhanced on a continuous basis, it is imperative that education remain up-to-date in utilizing these technologies to enhance instruction. This study examined teachers' and students' use of computers, the Internet, and related technologies in the agriscience classroom in Southern California. Data were collected from 80 agriscience teachers and 915 agriscience students during June 2010. Results revealed that although teachers are utilizing computers for general purposes, such as handouts, tests, and quizzes, teachers are not utilizing computers for more creative applications. Findings revealed that while students are prepared to use computers and the Internet for school work, teachers are not requiring such use. Data also revealed that students perceive computers to be a useful tool for school and personal reasons and hold the Internet in high regard. They see the Internet as having a positive impact on society.

Session Title: Value in Government Evaluation: Multiple Perspectives
Multipaper Session 575 to be held in Huntington B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Government Evaluation TIG
Chair(s):
David Bernstein,  Westat, davidbernstein@westat.com
Evaluation of Evaluators Who Rate Proposals
Presenter(s):
Randall Schumacker, University of Alabama, rschumacker@ua.edu
Abstract: Federal funding agencies receive thousands of grant proposals each year and distribute millions of dollars for research annually. Each agency solicits professional reviewers across numerous academic disciplines to review the proposals it receives. A fundamental requirement should be that the review process is auditable and conducted in a fair and objective manner by peer review. This paper demonstrates a methodology to achieve accountability in the peer review process of grant proposals. Rasch many-faceted analysis adjusts the rating scores to yield a fair average that removes the effects of reviewer leniency or severity. This rating adjustment is based on a single administration of proposals for review, does not require that all reviewers rate all proposals, and permits a comparison of summative raw score rankings to a fair average. A heuristic example demonstrates the Rasch many-faceted methodology with a group of proposal reviewers.
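The full Rasch many-facet model requires specialized software, but the flavor of a leniency/severity adjustment can be sketched with a simplified additive fixed-effects model (an approximation for illustration, not the Rasch method itself), using simulated ratings and the statsmodels package; all names and values below are hypothetical:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)

    # Hypothetical incomplete design: each proposal is scored by 3 of 5 reviewers,
    # and reviewers differ in leniency/severity
    reviewers = list("ABCDE")
    leniency = {"A": 1.0, "B": 0.5, "C": 0.0, "D": -0.5, "E": -1.0}
    rows = []
    for p in range(1, 21):                                       # 20 proposals
        for r in rng.choice(reviewers, size=3, replace=False):
            rows.append({"proposal": p, "reviewer": r,
                         "score": p * 0.2 + leniency[r] + rng.normal(0, 0.3)})
    ratings = pd.DataFrame(rows)

    # Raw average ignores which reviewers happened to score each proposal
    raw = ratings.groupby("proposal")["score"].mean()

    # Adjusted average: proposal effects estimated net of reviewer leniency/severity
    fit = smf.ols("score ~ C(proposal) + C(reviewer)", data=ratings).fit()
    grid = ratings.assign(reviewer="C")          # predict as if one reference reviewer scored all
    adjusted = fit.predict(grid).groupby(ratings["proposal"]).mean()

    print(pd.DataFrame({"raw": raw, "adjusted": adjusted}).head())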
Using Interim Findings of a Multi-year National Evaluation to Inform Program Guidance
Presenter(s):
Eileen Chappelle, Centers for Disease Control and Prevention, echappelle@cdc.gov
Lazette Lawton, Centers for Disease Control and Prevention, llawton@cdc.gov
Diane Dunet, Centers for Disease Control and Prevention, ddunet@cdc.gov
Abstract: Waiting until the end of a multi-year evaluation to consider evaluation findings represents missed opportunities to use interim evaluation data for potential program improvements. The Centers for Disease Control and Prevention's Division for Heart Disease and Stroke Prevention is sponsoring a multi-year evaluation to assess the outcomes of the National Heart Disease and Stroke Prevention Program. In this project, CDC evaluators are working in close collaboration with CDC program staff to periodically review interim evaluation findings with the intent of improving CDC's guidance and technical assistance provided to funded programs. Interim findings have also been shared with funded programs to facilitate reflection on where resources and activities are being directed. We will demonstrate how interim evaluation findings can be used to improve programs and support funded programs' ability to reach intended goals.
The V in VFM: Value, Values and Assessing Value for Money
Presenter(s):
Jeremy Lonsdale, National Audit Office, United Kingdom, jeremy.lonsdale@nao.gsi.gov.uk
Abstract: Value for money audit, a variant of performance audit, is a significant evaluative activity in a number of countries, as examined in a recent book, 'Performance audit: contributing to accountability in democratic government' (Lonsdale et al., 2011). It has a statutory role in assessing the economy, efficiency, and effectiveness with which governments use public resources. The UK National Audit Office has recently given increased attention to how it assesses and communicates whether value for money has been secured on particular programmes and projects. This is of particular interest at a time when major public spending cuts are being introduced and there is concern that public value will be lost. This paper examines what the NAO means by 'value for money', and what its approach and philosophy say about what the organisation considers to be important values in the delivery of public services.
We Have a Performance Measurement Framework...So Where's the Data? Let Sleeping Dogs Lie or Just Make the Case for Qualitative Methodologies!
Presenter(s):
Sandra L Bozzo, Ontario Government, sandra.bozzo@ontario.ca
Abstract: This paper examines the challenges and explores viable options for addressing evident data gaps in performance measurement for Aboriginal initiatives in an Ontario government context. Faced with a small number of reliable quantitative data sources, limited willingness to collect data, and programmatic/administrative data lacking population identifiers, government is left with few options on the performance measurement front. The internal challenges are compounded by external community realities of self-determination and the need for self-governance. There is a perceived tension between quantitative and qualitative methodologies that would appear to be best left unresolved in government. While qualitative methods are often a hard sell in government, performance measurement in an Aboriginal context inevitably necessitates approaches that are consistent with Aboriginal approaches to data collection and traditional ways of knowing.

Session Title: Utilizing Conceptual Frameworks to Define and Evaluate Public Health Infrastructure
Panel Session 576 to be held in Huntington C on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Cassandra Martin Frazier, Centers for Disease Control and Prevention, bkx9@cdc.gov
Discussant(s):
Maryann Scheirer, Scheirer Consulting, maryann@scheirerconsulting.com
Abstract: Building the infrastructure of state and local public health systems is vital to promoting the health of the nation. Evaluation can be used as a tool to better understand the multifaceted and broad nature of infrastructure and its role in influencing systems change. Yet it is challenging to systematically evaluate national-level infrastructure development initiatives. This session explores the use of theory, consensus-building, and practice to develop conceptual frameworks that guide efforts to evaluate infrastructure programs. In this panel, presenters from three programs at the Centers for Disease Control and Prevention will discuss the development of their respective conceptual frameworks, the creation of standardized evaluation tools, the application of the conceptual frameworks to evaluate infrastructure, and the use of evaluation results to revise the frameworks.
Evaluating Environmental Health Systems as They Relate to Food and Water Safety Programs: Developing a Framework and Instrument
Vernon Garalde, Centers for Disease Control and Prevention, ivg1@cdc.gov
Kristin Delea, Centers for Disease Control and Prevention, gqi7@cdc.gov
Denita Williams, Centers for Disease Control and Prevention, uzk4@cdc.gov
Since the Institute of Medicine Report in 1988, public health has been described as "in disarray." Environmental public health departments providing food and water safety services need to be evaluated to ensure adequate and quality services to the jurisdictions they serve. This presentation will discuss the utilization of a conceptual framework to guide the evaluations of environmental public health systems as they relate to food and water safety. We will be discussing the need for measuring environmental public health infrastructure, associated challenges, development of a framework, its application, and development of the subsequent instrument. In addition, we will briefly discuss how the evaluation will be implemented and used. This evaluation seeks to address how environmental public health infrastructure affects environmental health specialists' abilities to provide services and how those services impact community health.
Evaluating Organizational and State Capacity to Support Violence Prevention Initiatives
Kimberley Freire, Centers for Disease Control and Prevention, hbx8@cdc.gov
Sally Thigpen, Centers for Disease Control and Prevention, sti9@cdc.gov
As evidence emerges on effective violence prevention strategies, the Division of Violence Prevention (DVP) has increased its focus on building state and local infrastructure to support and deliver such strategies. DVP has used the Interactive Systems Framework (ISF) to conceptualize and distinguish general and prevention-specific capacities for programs aimed at building and evaluating prevention system infrastructure. This presentation will describe the ISF Framework and focus on its support system, which defines the infrastructure or functional system needed to deliver and disseminate violence prevention strategies. In addition, we will discuss DELTA PREP, a national program that defines and measures part of this infrastructure as organizational and state (i.e., system) capacity. The program's evaluation is designed to link organizational capacity improvements to increased state capacity to support violence prevention within and between the ISF's three systems.
Using Evaluation to Understand Infrastructure Development in State Oral Health Programs
Cassandra Martin Frazier, Centers for Disease Control and Prevention, bkx9@cdc.gov
Kisha-Ann Williams, Centers for Disease Control and Prevention, evi9@cdc.gov
In 2003, the CDC Division of Oral Health (DOH) developed a performance measurement-based conceptual framework to plan, implement and evaluate the development of state oral health program infrastructure. This framework was used in evaluation to track and monitor process-level data that proved useful in providing guidance in developing infrastructure. During program implementation, stakeholder interests shifted from process evaluation to outcome evaluation for a more robust understanding of infrastructure development and its effects. As a result, DOH adjusted the evaluation of the infrastructure program and utilized the results to build a more complex, outcome-oriented conceptual framework for infrastructure development. This presentation will discuss how stakeholder values influenced the scope and design of the infrastructure evaluation, how the evaluation was used to enhance the infrastructure conceptual framework, and how the revised conceptual framework will add value to and shape future evaluation efforts.

Session Title: Moving "The Movement" Forward: Evaluation as a Critical Tool in Furthering the Work of Lesbian, Gay, Bisexual and Transgender Community and Programs
Think Tank Session 577 to be held in La Jolla on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Presenter(s):
Joseph Kosciw, Gay, Lesbian & Straight Education Network, jkosciw@glsen.org
Discussant(s):
Emily Greytak, Gay, Lesbian & Straight Education Network, egreytak@glsen.org
Elizabeth Diaz, Gay, Lesbian & Straight Education Network, ediaz@glsen.org
Abstract: Partnerships between community programs and researchers can be critical to understanding efforts to address health and social problems within specific communities and can result in more effective programs and better evaluation of effects. For the lesbian, gay, bisexual and transgender (LGBT) community/communities, evaluation is often not integrated into the development and delivery of programs seeking to improve life experiences for community members. Yet within the national LGBT movement, there has been interest in promoting coordination and collaboration among organizations in order to maximize use of resources and to ensure integrated (not competing) programming. The purpose of this Think Tank is to capitalize on the experiences and knowledge of expert evaluators interested in and working on LGBT programmatic issues in order to strategize and plan effective interventions to advocate for and promote evaluation of efforts by LGBT organizations as well as the inclusion of LGBT issues/identities in non-LGBT-specific evaluations.

Session Title: From Theory to Practice: Potential Avenues via Evaluation Capacity Building and Research on Evaluation
Panel Session 578 to be held in Laguna A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Leslie Fierro, Claremont Graduate University, Leslie.Fierro@cgu.edu
Discussant(s):
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Abstract: Evaluation scholars have repeatedly called for research on evaluation, in particular applied research. Since evaluation is largely a professional discipline, it is important for research on evaluation to move into the applied realm by generating information and developing methodologies that can be widely adopted by practitioners. Panelists in this session will demonstrate how the processes and products of evaluation research and practice can inform each other. The first set of presenters will focus on evaluation capacity building (ECB), describing a model generated from the existing empirical and theoretical knowledge base about ECB and organizational learning and connecting it to practical ECB approaches. The second set of presenters will turn the focus toward the use of evaluation techniques for explicating evaluation and program theory, discussing how tools used in researching evaluation can translate into improved evaluation practice and potentially accelerate the sharing of important insights between evaluation scholars and practitioners.
Making Sense of "Capacity" in the Evaluation Process by Leveraging Existing Theories and Frameworks
Leslie Fierro, Claremont Graduate University, Leslie.Fierro@cgu.edu
Evaluation capacity has become a topic of great interest among members of the professional evaluation community. Multiple frameworks depicting the components of evaluation capacity within organizations have been published, as have multiple examples of "interventions" designed to build evaluation capacity. What is missing from the existing literature is a clear connection between these elements of capacity and how they contribute to supporting continuous evaluative inquiry processes within organizations. The current presentation will use an existing model of organizational learning to connect evaluation capacity to doing evaluation, collectively understanding evaluative findings, and acting on these findings. Implications for using this model to stimulate plans for evaluation capacity building efforts will be explored.
A Framework for Understanding Contextual Motivating Factors of Evaluation Capacity Building
Anne Vo, University of California, Los Angeles, annevo@ucla.edu
Evaluation capacity building (ECB), as an activity and area of research, has continued to pique the interest of evaluation practitioners, scholars, and more recently policy-makers. This is evidenced by the policy initiatives and federal calls for research on evaluation practice and methodology that have been enacted under the Obama Administration. However, the ECB literature suggests that there are discordant views regarding what "evaluation capacity building" means and where, when, how, why, and with whom it occurs. To begin addressing these issues, a deeper understanding of the contextual conditions that enable ECB is needed. A framework for understanding potential motivating factors that lead up to and foster ECB will be discussed in this presentation. This discussion will be grounded in examples of ECB efforts that have been taking place within a coalition of university-based educational outreach programs. Directions for future research on ECB will also be suggested.
Mixed Model Theory Development: Building a Theory-informed and Practice-informed Model of Evaluation
Michael Harnar, Claremont Graduate University, michaelharnar@gmail.com
Understanding the application of evaluation approaches helps inform our understanding of evaluation theories. Exploring this interaction of practice and theory is at the heart of research on evaluation. Because evaluations are much like the programs we evaluate, in that our activities are expected to lead to outcomes, using a program theory-driven evaluation technique to model evaluation practice should increase our understanding of our approaches. This presentation extends previous evaluation theory modeling by describing a process that engages evaluators in modeling their practice with theoretically derived variables. In this method, evaluators model their preferred practice in online modeling software, and the resulting models are combined to create one representative model that evaluators review and comment on, improving the model's reflection of practice. The final product is a theory- and practice-informed picture that can then be analyzed and tested for comprehensiveness and consistency in practice.
Testing Program Theory Using Structural Equation Modeling
Patricia Quinones, University of California, Los Angeles, pquinones@ucla.edu
Anne Vo, University of California, Los Angeles, annevo@ucla.edu
Developing methods to empirically test a program's underlying theory (that is, the linkages that connect program inputs and activities to outputs and outcomes on a logic model) has presented enduringly interesting challenges for evaluators. However, due to issues related to limited resources, measurement, and lack of quality data, a program tends to be evaluated primarily in terms of its component parts. As a result, only singular linkages are tested. While examining a program and its theory in full may be challenging, we maintain that holistic evaluation remains a worthwhile and feasible endeavor. In this presentation, we propose a potential framework for using structural equation modeling (SEM) to evaluate a program and its theory as a whole. We address ways in which one can transform a program theory into structural and measurement models. We also discuss practical and methodological issues to consider as one engages in this process.
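As a minimal sketch of this kind of whole-model test, assuming the semopy package and invented variable and indicator names (a generic illustration of fitting a small program theory jointly, not the presenters' model):

    import numpy as np
    import pandas as pd
    import semopy

    rng = np.random.default_rng(3)
    n = 500

    # Simulated data loosely mirroring a logic model: dosage -> engagement -> outcomes
    dosage = rng.normal(0, 1, n)
    engagement = 0.6 * dosage + rng.normal(0, 1, n)
    outcome = 0.5 * engagement + rng.normal(0, 1, n)
    data = pd.DataFrame({
        "dosage": dosage,
        "e1": engagement + rng.normal(0, 0.5, n),  # indicators of the latent "Engagement"
        "e2": engagement + rng.normal(0, 0.5, n),
        "o1": outcome + rng.normal(0, 0.5, n),     # indicators of the latent "Outcomes"
        "o2": outcome + rng.normal(0, 0.5, n),
    })

    # Measurement model (latent constructs) plus structural model (program-theory linkages)
    desc = """
    Engagement =~ e1 + e2
    Outcomes =~ o1 + o2
    Engagement ~ dosage
    Outcomes ~ Engagement
    """
    model = semopy.Model(desc)
    model.fit(data)
    print(model.inspect())   # estimates for every linkage, fitted jointly rather than one at a time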

Session Title: International & Cross-cultural Partnerships: Understanding how Partners Develop Relationships of Mutual Support and Value
Panel Session 579 to be held in Laguna B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Carol Fendt, University of Illinois at Chicago, crfendt@hotmail.com
Discussant(s):
Cindy Shuman, Kansas State University, cshuman@ksu.edu
Abstract: In developing international partnerships geared towards providing "support" to one of the partners, how do the partners develop relationships of mutual support and value? In other words, how do true partnerships develop? How do partner leaders monitor and assess the value they place on partners? What structures can be developed to support the expressed values of the partnership? What role does the evaluator play in ensuring that the expressed values of each partner are included in project development and implementation? This panel will utilize a case study approach to explore the development of five different partnerships, how they expressed the values of each partner, and how evaluation was instrumental in ensuring that the expressed values of the partners were addressed. Using an appreciative inquiry approach, project participants reveal how partnerships involved international/cross-cultural stakeholders in the development of the project's vision and partnership goals, and the process of monitoring, evaluating, and adapting these partnerships to the changing needs of the partners.
Valuing our Partners: Lessons Learned Through Building Relationships
Jan Middendorf, Kansas State University, jmiddend@ksu.edu
This session will share lessons learned from an impact evaluation conducted on a non-profit organization in Kenya. Comfort the Children (CTC) International helps Kenyans build and manage the sustainable infrastructure necessary to meet the challenges of everyday life. CTC's initiatives, focused on educational, environmental, economic, health, and community development, directly impact the community as a whole. CTC's programs are delivered through local relationships, providing the empowerment necessary to make those changes lasting while ensuring that the organization's resources, including donations of time and money from contributors and volunteers, have a maximum positive effect. The evaluator will share her insights on how the project develops various relationships and partnerships to further its mission. The presenter will also discuss how the evaluation valued individual characteristics while attempting to assess the overall impact of the whole program.
Developing and Valuing Partners Every Step of the Way
Desmond Odugu, Lake Forest College, odugudes@gmail.com
Esther Hicks, Archdiocese of Chicago, ehicks@archchicago.org
The partnership between the Archdiocese of Chicago Catholic Schools and the Diocese of Nsukka, Nigeria, illustrates the development of a cross-cultural and international partnership. In 2007, the U.S. project manager organized a Committee of Interests, gathering organization leaders to begin researching the possibilities and challenges facing the Diocese of Nsukka in rebuilding its school system after the Biafra conflict in the late 1960s. As a result, committee members and partners in the Diocese of Nsukka began a process of identifying specific problems, trained volunteers in Nsukka to assess school buildings, developed guidelines for building new schools, and conducted a Future Search Conference to craft the 2020 Vision goals currently driving the work of the partnership. The goals address issues of educational improvement, health impacts on children, infrastructure, and technology. The presenter will share his insights about the values of cross-cultural partnerships engaging in a regional plan for systemic change.
Lessons Learned From Three Case Studies in Cross-cultural International Partnerships
Carol Fendt, University of Illinois at Chicago, crfendt@hotmail.com
This presentation is a closer look at three distinct projects: Schools for the Children of the World, Mission Honduras International/Liberia Mission, and the GEANCO Foundation. Each of these three cross-cultural and international projects has been working in the fields of education and/or health care and focuses on building local capacity. Schools for the Children of the World is an NGO working in Honduras that provides educational opportunity through quality school facilities in underdeveloped countries. Mission Honduras International is a US-based lay nonprofit which currently both funds and operates Liberia Mission. The project hires 4-5 international staff to serve as director and program coordinator/missioners for Liberia Mission, although the long-run focus is to build local capacity and fully turn operations over to the local Liberian staff who currently manage most of the programs. The GEANCO Foundation was established in 2005 with the goal of developing a world-class hospital in Anambra, Nigeria.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Assessing Board Performance: Dysfunctional and Effective Boards
Roundtable Presentation 580 to be held in Lido A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG and the Business and Industry TIG
Presenter(s):
Zita Unger, Independent Consultant, zitau@bigpond.com
Abstract: Evaluation of board performance has increased considerably in recent years. More rigorous forms of accountability and compliance are standard for public company boards and increasingly commonplace for boards in the public, private, non-profit and for-profit sectors. What matters most is what goes on inside the boardroom rather than compliance around board structures and systems. What really counts are the values, skills, attitudes and behaviors: the inner workings of boards and how decisions are made. No amount of compliance will overcome the flaws of a fundamentally dysfunctional board, whether these flow from inadequate expertise of directors, an excessively dominant Chairman or CEO, or a factional board. The roundtable will explore questions about performance assessment in the context of board effectiveness: What are the values of effective boards? Can evaluation enhance and build on these values? Is that the purpose of performance assessment? What are optimal conformance and performance measures?
Roundtable Rotation II: Planning and Strategy: Making Plans That Support Strategic Behavior in Emergent Environments
Roundtable Presentation 580 to be held in Lido A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG and the Business and Industry TIG
Presenter(s):
Dana H Taplin, ActKnowledge Inc, dtaplin@actknowledge.org
Jill K Wohlford, Lumina Foundation for Education, jwohlfor@luminafoundation.org
Patricia Patrizi, Public Private Ventures, patti@patriziassociates.com
Catherine Borgman-Arboleda, Independent Evaluation Consultant, cborgman.arboleda@gmail.com
Abstract: This roundtable session, following from the Winter 2010 issue of New Directions for Evaluation, "Evaluating Strategy," addresses the relationship between strategy and planning methods such as theories of change and logic models. Strategic work takes unexpected turns as it progresses: evaluating the work against fixed plans may produce unfairly negative judgments. Often, too, the planning models serve as plans but are not operationalized effectively to support and inform internal learning and evaluation going forward. In our own work using theory of change as a planning tool, the detailed outcomes framework of causal pathways can seem too much like a blueprint, as if practitioners should know all the steps in advance. Are logic models, theories of change, and other forms of planning inimical to truly strategic behavior? At what point does attention to planning begin to undermine strategy? How do we do planning that supports strategy and learning from strategic choices?

Session Title: Approaches to Assuring Evaluation Use: Valuing Stakeholders, Context, and Program Priorities in Cancer Control
Panel Session 581 to be held in Lido C on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Angela Moore, Centers for Disease Control and Prevention, cyq6@cdc.gov
Discussant(s):
Kimberly Leeks, Centers for Disease Control and Prevention, kfj1@cdc.gov
Abstract: The National Comprehensive Cancer Control Program (NCCCP) provides support for states, tribes, and Pacific Island Jurisdictions to sustain partnerships and implement cancer control plans including goals, objectives, and strategies that span the entire cancer control continuum from prevention to survivorship. In 2010, the Centers for Disease Control and Prevention (CDC) released six new priority areas for the NCCCP. The priorities focus on high-impact, common, and cross-cutting elements among programs, emphasize measurable outcomes, reflect the cancer control continuum, and grew out of long-standing focus areas of the national program. In order to ensure that evaluations are used to improve public health practice, an approach was taken that includes the following guiding principles: a commitment to obtain stakeholder input, the recognition that the NCCCP is evolving and adapting to new priority areas, and the commitment of CDC to increase evaluation capacity among grantees to ensure the ability to demonstrate program effectiveness.
Valuing and Understanding Context: What an Environmental Scan Tells Us About Comprehensive Cancer Control Programs
Behnoosh Momin, Centers for Disease Control and Prevention, fqv6@cdc.gov
Mary Kay Solera, Centers for Disease Control and Prevention, zmt7@cdc.gov
Stephanie Lung, Case Western University, 
Cindy Soloe, RTI International, csoloe@rti.org
CDC will conduct an environmental scan to inform the development of performance measures and an evaluation plan based on the NCCCP priorities. Questions to be answered by the environmental scan are: 1) What are CCC programs currently doing with respect to the NCCCP priorities? 2) What is the current state of the science with respect to performance measurement for chronic disease programs? A review of end-of-year reports, action plans, and funding opportunity announcements will be conducted. This document review will allow for the creation of Evaluation Planning Matrices and will begin the process of populating the matrices with data. Consultations with experts in performance measures as well as key informant interviews will also be conducted. Results from this environmental scan will provide additional perspective regarding the extent to which programs are equipped to implement the priorities and will inform the development of an evaluation plan and performance measures for the NCCCP.
Development of an Evaluation Plan to Evaluate Grantee Attainment of Selected Activities of Comprehensive Cancer Control (CCC) Priorities
Angela Moore, Centers for Disease Control and Prevention, cyq6@cdc.gov
Behnoosh Momin, Centers for Disease Control and Prevention, fqv6@cdc.gov
Chris Stockmyer, Centers for Disease Control and Prevention, zll6@cdc.gov
Julie Townsend, Centers for Disease Control and Prevention, jtownsend@cdc.gov
LaShawn Curtis, RTI International, lcurtis@rti.org
CDC has begun to develop an evaluation plan that will assess the extent to which recipients of the NCCCP are appropriately and effectively implementing CCC activities. This evaluation will also address the extent to which the Comprehensive Cancer Control Branch's (CCCB) technical assistance, resources, and funding have sufficiently bolstered these efforts. The development of the evaluation plan follows methodology of the CDC Framework for Program Evaluation which was designed to improve and account for public health actions by involving procedures that are useful, feasible, ethical, and accurate. The CCCB's utilization of this framework has resulted in an evaluation plan that, once implemented, will assist both programs and CCCB in articulating key processes and outcomes related to the implementation of cancer control.
The Evolution of Performance Measures for the National Comprehensive Cancer Control Program
Julie Townsend, Centers for Disease Control and Prevention, jtownsend@cdc.gov
Chris Stockmyer, Centers for Disease Control and Prevention, zll6@cdc.gov
Susan Derrick, Centers for Disease Control and Prevention, srd3@cdc.gov
Angela Moore, Centers for Disease Control and Prevention, cyq6@cdc.gov
Behnoosh Momin, Centers for Disease Control and Prevention, fqv6@cdc.gov
The National Comprehensive Cancer Control Program performance measurement system must evolve to reflect new priority areas to ensure accountability, measure outcomes, and facilitate quality improvement. In 2010, CDC refined program priorities for the long-standing NCCCP. The performance measurement system needs to systematically capture grantee performance on existing recipient activities as well as these priority areas. CCCB evaluators initiated a process to map priority areas to domains that accurately reflect common recipient activities. The domains identified relate to partnerships; data and surveillance; evidence-based interventions; technical assistance and training; policy, system, and environmental changes; communication; and evaluation. Another key activity in refining this system is an environmental scan that assesses performance measures used by other chronic disease programs. These activities will enable CCCB to monitor implementation of the priorities and will provide data for linking NCCCP efforts with cancer control outcomes.

Session Title: New Approaches to Assessing National Institutes of Health (NIH) Research Programs
Multipaper Session 582 to be held in Malibu on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Robin Wagner, National Institutes of Health, wagnerr2@mail.nih.gov
Abstract: We present four papers on new approaches and tools that are being developed to inform the evaluation of research programs sponsored by the U.S. National Institutes of Health (NIH), which invests over $30 billion per year in biomedical research. The first paper considers how the traditional method of using expert opinion to assess research program performance has been implemented and can be enhanced. The second and third papers employ different text mining and visualization tools to characterize research portfolios, glean insights into the science supported, and facilitate the management of research programs. The fourth paper uses network analysis to evaluate and compare researcher collaborations in two distinct epidemiological cohort studies. While the examples presented in this session focus on NIH, the methods demonstrated can be extended to other organizations and countries seeking to better understand and inform strategies for managing their research programs.
Expert Opinion as a Performance Measure in R&D Evaluation
Kevin Wright, National Institutes of Health, wrightk@mail.nih.gov
Expert opinion continues to be the gold standard when assessing the performance, impact, and importance of R&D programs. This presentation will explore the use of expert opinion as the primary performance measure in evaluations of biomedical research programs. Questions that will be addressed include: 1) Is expert opinion really a performance measure?; 2) In what circumstances is expert opinion used as the primary performance measure in evaluations of biomedical R&D programs?; 3) What are strengths and limitations of expert opinion?; 4) What are various approaches to using expert opinion?; and 5) What are some good practices that might be considered when planning an evaluation using expert opinion? This presentation will be useful to evaluators interested in using expert opinion to evaluate R&D programs.
Text Mining for Visualization of Temporal Trends in NIH-Funded Research
L Samantha Ryan, National Institutes of Health, lindsey.ryan@nih.gov
Carl W McCabe, National Institutes of Health, carl.mccabe@nih.gov
Allan J Medwick, National Institutes of Health, allan.medwick@nih.gov
Using text mining methods, we will analyze and visualize topical changes in the abstracts of extramural research grants funded by the National Institutes of Health (NIH) over the past decade. The project will use publicly available data provided by the NIH (from RePORT), free and open-source analysis software, and a freely available visualization environment produced by Google (Motion Charts). Our methods will allow the user to interactively explore the first appearance and subsequent increase (or decrease) of substantive keywords in NIH abstracts and to watch those changes over time in animation. The corpus of abstracts may be subdivided into categories (e.g., fiscal year) so that the user can explore and compare patterns and changes in NIH research funding.
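As a rough illustration of the keyword-trend tabulation described above, the sketch below counts keyword frequencies in grant abstracts by fiscal year. The record format, stopword list, and sample abstracts are hypothetical; the actual RePORT export format and the Motion Charts animation step are not shown.

```python
# Sketch: count substantive keyword mentions in grant abstracts by fiscal year.
# The (fiscal_year, abstract_text) record format and sample data are hypothetical.
from collections import Counter, defaultdict
import re

STOPWORDS = {"the", "and", "of", "in", "to", "a", "for", "with", "on", "is"}

def keyword_counts_by_year(records):
    """Return {fiscal_year: Counter(keyword -> frequency)}."""
    counts = defaultdict(Counter)
    for year, abstract in records:
        tokens = re.findall(r"[a-z]+", abstract.lower())
        counts[year].update(t for t in tokens if t not in STOPWORDS and len(t) > 3)
    return counts

if __name__ == "__main__":
    sample = [
        (2002, "Study of gene expression in cardiovascular disease."),
        (2010, "Genome-wide association study of cardiovascular outcomes."),
    ]
    by_year = keyword_counts_by_year(sample)
    for year in sorted(by_year):
        print(year, by_year[year].most_common(3))
```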
Assessing Grant Portfolios Using Text-Mining and Visualization Methods
Elizabeth Ruben, National Institutes of Health, elizabeth.ruben@nih.gov
Kristianna Pettibone, National Institutes of Health, kristianna.pettibone@nih.gov
Jerry Phelps, National Institutes of Health, phelps@niehs.nih.gov
Christina Drew, National Institutes of Health, drewc@niehs.nih.gov
Granting agencies have an ongoing need for tools to assure that their portfolio of grants is current, mission-focused, and of high quality. Therefore we are exploring the novel use of a text-mining data visualization tool, OmniViz, to examine patterns in the science distribution of our grants, analyze assignment of project officers, and identify gaps and emerging areas of research. We explore the effect of various options and choices, such as source data, number of clusters, or clustering method. We show examples of our data plots and describe how this could be used to think about the portfolios in new ways and inform our science management. Finally, we discuss the challenges and opportunities of these approaches. This presentation will be useful to evaluators interested in learning how to use visualization tools for data analysis and in understanding how the findings can be applied to science management.
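OmniViz itself is proprietary, so the sketch below shows an open-source analogue of the clustering step described above: grouping grant abstracts by topic using TF-IDF vectors and k-means (here via scikit-learn, an assumed stand-in, not the authors' tool). The sample abstracts and the choice of two clusters are illustrative only.

```python
# Sketch: cluster grant abstracts by topic, an open-source analogue to the
# text-mining/clustering workflow described above. Data and cluster count
# are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "air pollution exposure and asthma in children",
    "particulate matter and respiratory outcomes",
    "arsenic in drinking water and cancer risk",
    "heavy metal exposure and tumor incidence",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)          # documents -> TF-IDF vectors
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for label, text in zip(model.labels_, abstracts):
    print(label, text)
```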
Network Analysis of Collaboration Among National Heart, Lung and Blood Institute (NHLBI) Funded Researchers
Carl W McCabe, National Institutes of Health, carl.mccabe@nih.gov
Mona Puggal, National Institutes of Health, mona.pandey@nih.gov
Lindsay Pool, National Institutes of Health, 
Rediet Berhane, National Institutes of Health, 
Richard Fabsitz, National Institutes of Health, richard.fabsitz@nih.gov
Robin Wagner, National Institutes of Health, robin.wagner@nih.gov
We use freely available, open-source analytical tools to explore co-authorship networks involving researchers funded by the NIH's National Heart, Lung, and Blood Institute (NHLBI). Underlying our analysis is an interest in the forms of collaboration that exist among researchers in two distinct cohort studies: the Cardiovascular Health Study and the Strong Heart Study. We use co-authorship as a proxy for collaboration, and we produce statistics and visualizations to help dissect the properties of these networks. To add a further dimension to the analysis, we examine aspects of network structure in relation to characteristics of the researchers and publications (e.g., institutional affiliation or publication title). Our presentation will use a step-by-step discussion of this project to illustrate some of the computational analysis tools and techniques that may be used to explore the concept of collaboration among a body of researchers.
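A minimal sketch of the co-authorship approach described above, using the open-source networkx library (one plausible choice, not necessarily the authors' tool): each pair of co-authors on a paper becomes a weighted edge, and simple network statistics are reported. The author lists are hypothetical.

```python
# Sketch: build a co-authorship network from publication author lists and
# report simple network statistics. Author lists are invented examples.
import itertools
import networkx as nx

publications = [
    ["Smith A", "Jones B", "Lee C"],
    ["Jones B", "Lee C"],
    ["Smith A", "Garcia D"],
]

G = nx.Graph()
for authors in publications:
    # Every pair of co-authors on a paper gets an edge (co-authorship as a
    # proxy for collaboration); repeated pairs increase the edge weight.
    for a, b in itertools.combinations(sorted(authors), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

print("authors:", G.number_of_nodes(), "ties:", G.number_of_edges())
print("degree centrality:", nx.degree_centrality(G))
```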

Session Title: Evaluation Lessons From Work Among First Nations, Aboriginal and Metis Peoples in Canada
Multipaper Session 583 to be held in Manhattan on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Indigenous Peoples in Evaluation
Chair(s):
Joan LaFrance,  Mekinak Consulting, lafrancejl@gmail.com
Discussant(s):
Joan LaFrance,  Mekinak Consulting, lafrancejl@gmail.com
The First Nations Health Society ActNow Initiative: The Value of Engaging Communities
Presenter(s):
Kylee Swift, Reciprocal Consulting, kylee.swift@hotmail.com
Kim van der Woerd, Reciprocal Consulting, kvanderwoerd@gmail.com
Michelle Degroot, First Nations Health Society, mdegroot@fnhc.ca
Abstract: When considering a Social Determinants of Health model, there are several gaps in the health of Aboriginal people as compared to non-Aboriginal people in British Columbia (BC). The First Nations Health Society (FNHS), funded by the Aboriginal ActNow, endeavored to fill some of those gaps through the development and implementation of health promotion programs and products. The FNHS engaged in a number of activities in order to encourage healthy lifestyle changes and increase the capacity of communities to affect health outcomes. This presentation reviews a comprehensive process evaluation of the FNHS Aboriginal ActNow programs. Findings suggest that greater value can be found in community paced, holistic and culturally appropriate programs in Aboriginal communities. Relationship building and trust are key community values that have a great impact on the development, implementation and overall usefulness of an evaluation.
Case Studies Within First Nations and Canadian Policy
Presenter(s):
Andrea L K Johnston, Johnston Research Inc, andrea@johnstonresearch.ca
Abstract: We have completed more case studies over the past two years than in the previous 13 years of evaluation work combined. Canadian policy and decision makers currently have a strong appetite for case studies. We interviewed our government clients, asking how the case study approach works for them and what policy issues it has influenced. We then interviewed the First Nations we visited as part of these case study contracts, asking what has worked well and not so well for them. Rolling this data up allowed us to improve our approaches to case studies; we present some of our interview findings as well as our "best practices" for performing case studies within, not without, First Nations. Finally, we explore the impacts case studies have on Canadian policy.
Have Government Programs Been Successful in Transferring the Benefits of Postsecondary Educational Attainment to the First Nation People?
Presenter(s):
Paul J Madgett, University of Ottawa, paul.madgett@gmail.com
Andrew Wall, University of Rochester, afwall@warner.rochester.edu
Abstract: This paper aims at identifying whether the benefits of postsecondary educational attainment for Canadian First Nation people have resulted in an improvement in their social, financial, and cultural well-being. The authors will evaluate whether greater emphasis should be placed on higher education accessibility programs by assessing whether significant differences exist between individuals who have attended postsecondary training programs, community colleges, and universities and the rest of the population. The authors will use advanced statistical methods to compare these groupings of individuals on variables dealing with health, income, views on education, financial planning, and self-efficacy. Overall, the authors are attempting to determine whether the government has created an environment, through its policies and programs, that allows the benefits of higher education to be diffused to this unique minority group in their respective geographical locations.

Session Title: Methodological and Theoretical Considerations in Evaluation
Multipaper Session 584 to be held in Monterey on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Krista Schumacher,  Oklahoma State University, krista.schumacher@okstate.edu
Negotiating Program Evaluation in Collaborative Environments: Use of Best Practices & American Evaluation Association's Guiding Principles
Presenter(s):
Tosha Cantrell-Bruce, University of Illinois, Springfield, tcantrel@uis.edu
Abstract: Evaluators are increasingly finding themselves working in collaborative arrangements when evaluating programs. Further compounding this changing environment are the divergent priorities of stakeholders. That was the situation for this new evaluator who was asked to evaluate a locally-funded technology program. This program aimed to place 100 computers into the homes of low-income children and assess whether the computers influenced recipients' academic achievement. A local nonprofit, funder, three schools, and the evaluator were involved in the program implementation and evaluation. This case confirmed many of the 'real world' evaluation techniques suggested by other evaluators. These included negotiating realistic outcomes, expressing limitations to methodologies, identifying the priorities of the stakeholders and negotiating ownership of the data. The evaluator also relied heavily on many of the AEA guiding principles to navigate this evaluation. Of particular interest were the principles on systematic inquiry, integrity and honesty, and respect for people.
When Values Collide: A New Evaluator's Experience Managing Program Resistance to Evaluation
Presenter(s):
Krista Schumacher, Oklahoma State University, krista.schumacher@okstate.edu
Abstract: This paper describes the experience of an external evaluator contracted to evaluate a multi-state federally funded program. The project's goal is to create new technology programs at approximately 60 institutions of higher education throughout several states. Main activities include the creation of new courses and faculty professional development. While the goal of evaluation initially seemed straightforward, it became clear that the project administration viewed evaluation unfavorably, which significantly compromised evaluation efforts. Despite requests of a national review committee for a strong evaluation and the efforts of the evaluator to respond to this request, the project director attempted to steer evaluation away from real assessment to a simple counting of institutions, faculty, courses, and students. This experience, which placed the evaluator in a position of compromising her values and ethics, offers important lessons for external evaluators in clearly establishing the values and expectations of project directors before entering into a contract.
Valuing an Iterative Process in Evaluation
Presenter(s):
Joanna Doran, University of California, Berkeley, joannad@berkeley.edu
Chris Lee, University of California, Berkeley, clee07@berkeley.edu
Abstract: Viewing an evaluation measure as a static entity may prohibit an evaluator from capturing useful information that may emerge out of an iterative assessment process. This paper describes one example of incorporating an iterative process to help inform revision and provide clarity to a multi-site evaluation of curriculum infusion. A social work education program was implemented across twenty different professional social work schools, and an evaluation plan for assessing program implementation and impact was created. An initial survey was developed as a way to assess the degree to which learning competencies were being infused across school curricula. Results of this initial survey proved difficult to analyze and challenging to synthesize for reports. Accordingly, this experience led to the development of a new tool to assess the infusion of competencies. This paper describes the evolution of this evaluative tool and the iterative process taken to arrive there.

Session Title: Appreciating Complexity in Qualitative Design and Analysis
Multipaper Session 585 to be held in Oceanside on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Qualitative Methods TIG
Chair(s):
Janet Usinger,  University of Nevada Reno, usingerj@unr.edu
Discussant(s):
Janet Usinger,  University of Nevada Reno, usingerj@unr.edu
Expanding the Evaluation Toolkit: Using Qualitative Comparative Analysis in Cross-Site Evaluation
Presenter(s):
Heather Kane, RTI International, hkane@rti.org
Megan Lewis, RTI International, melewis@rti.org
Pamela Williams, RTI International, ppiehota@rti.org
Leila C Kahwati, National Center for Health Promotion and Disease Prevention, leila.kahwati@va.gov
Abstract: Evaluators face two challenges when conducting cross-site analyses of larger units such as organizations, communities, or systems: 1) having too few cases; and 2) capturing multiple configurations indicating successful program effects, typically referred to as equifinality in organizational theory. To address these issues, we will describe how qualitative comparative analysis (QCA) bridges qualitative and quantitative analyses by combining in-depth knowledge gained from case studies with principles derived from Boolean algebra. This method can be used in evaluations containing at least 10-12 cases, where a small sample size would preclude using other techniques. It also allows for multiple configurations of variables to account for program success, and thus may be a better reflection of complex implementation processes. We will describe an example of how it has been successfully used in a cross-site evaluation of the Veterans Health Administration's MOVE! Weight Management Program for Veterans.
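As a rough sketch of the truth-table step at the heart of crisp-set QCA, the snippet below groups hypothetical cases by their combination of binary conditions and reports how many cases share each combination and how consistently the outcome appears. Condition names and data are invented; dedicated QCA software would also carry out the Boolean minimization of this table.

```python
# Sketch: a crisp-set QCA-style truth table. Conditions, sites, and outcomes
# are hypothetical; Boolean minimization of the table is not shown.
import pandas as pd

cases = pd.DataFrame(
    {
        "strong_leadership": [1, 1, 0, 1, 0],
        "dedicated_staff":   [1, 0, 1, 1, 0],
        "success":           [1, 0, 1, 1, 0],
    },
    index=["site_A", "site_B", "site_C", "site_D", "site_E"],
)

truth_table = (
    cases.groupby(["strong_leadership", "dedicated_staff"])["success"]
    .agg(n_cases="count", consistency="mean")  # consistency: share of cases showing the outcome
    .reset_index()
)
print(truth_table)
```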
The Complexity of Practice: Participant Observation and Values-Engagement in a Responsive Evaluation of a Professional Development School Partnership
Presenter(s):
Melissa Freeman, University of Georgia, freeman9@uga.edu
Jori Hall, University of Georgia, jorihall@uga.edu
Abstract: All social and professional practices are historically situated, evolving forms of acting and interacting. Evaluation, as a practice, is shaped by and shapes the practice evaluated. Our paper contributes to responsive and values-engaged evaluation approaches by reflecting on the space where these two practices intersect. The evaluative task was to document the nature of a partnership between a university and school district and how that partnership was being carried out in the form of a professional development school. Although other methods were used, we focus on the role participant observation, as an interactive and responsive form of engagement, played in the evaluation. Through two lenses: observing the partners and observing ourselves, we critically reflect on our decision-making processes, assessing their accomplishments and shortcomings. We conclude by considering how we might further our engagement as values-engaged evaluators in this context in ways that support the development of both practices.
Navigating Personal Values and Beliefs in the Analysis of Qualitative Data
Presenter(s):
Janet Usinger, University of Nevada, Reno, usingerj@unr.edu
Stephanie Woolf, University of Nevada, Reno, swoolf@unr.edu
Jafeth Sanchez, Washoe County School District, jsanchez@washoe.k12.nv.us
Abstract: Qualitative evaluators frequently argue how critical it is to represent the voice of the participant. Yet analysis of qualitative data is inherently a constructivist process. As evaluators interpret data collected through interviews, focus groups, observations, and field notes, how do they place their own values in abeyance? How do they not fall into the trap of analyzing data through their own personal perspectives? How do they avoid perpetuating stereotypes through superficial analysis? This paper presents an evaluator's reflective process in the analysis of qualitative data gathered longitudinally for six years to explore the process that 60 adolescents undertook as they socially constructed their career aspirations and the role that education played in that process. The study was a component of the evaluation of a State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) project.

Session Title: Climate Change Education Projects: Advancing the Dialogue Through Effective use of Evaluation Strategies
Multipaper Session 587 to be held in Palos Verdes A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Beverly Farr, MPR Associates, bfarr@mprinc.com
Abstract: The papers in this session illuminate how the evaluation process can enhance the goals of a set of projects (in this case, Climate Change Education projects supported by NSF, NASA, and NOAA) that are designed to advance the discourse on climate change, uncover effective communication strategies, and translate research into classroom instruction. Two of the papers focus on intervening variables and mitigating factors that evaluators need to tease out to contextualize implementation and explain impact. One paper focuses on the evaluation of research experiences for teachers that can translate into classroom practice, and finally, one paper addresses the issue of developing common indicators of impact across a range of projects focused on common goals. The theme that runs through this set of papers is the process of communicating values and translating values into effective practices and evidence of success.
Value of Global Climate Change Research Experiences on Classroom Practice
Lori Reinsvold, University of Northern Colorado, lori.reinsvold@unco.edu
Little is known about how teachers' research experiences change secondary science classroom practices. Literature indicates that teachers value research experiences, but it is unclear how this influences their teaching practices and the learning of their students. Besides the support provided by the project itself, evaluators must also consider the mandates imposed by the school to which the teachers will return if they are going to truly understand how teachers make the transition from laboratories to classrooms. The evaluation of the National Center for Atmospheric Research: Research Experience Institute, a global climate change program for secondary science teachers, will be used to explore the indicators that most influence teacher practice.
Emotions, Politics and Climate Change
John Fraser, Institute for Learning Innovation, fraser@ilinet.org
The vast majority of climate change programs targeted toward changing public attitudes focus on the public. Yet those charged with climate change education are also deeply aware of the negative impacts climate change will have on the world and on their own health and well-being. This Cassandra experience is exacting a toll on educators that may limit their ability to succeed. This paper will address how social discourses surrounding climate change harm educators and how educators may unknowingly undermine their own work. The presenter will offer examples of the environmental movement's dominant persuasion techniques, results of a study on the emotional experiences of conservationists, and recent results from funded climate change programs in order to identify a potential mitigating factor that may serve as a useful predictor in process and summative evaluation.
The Right Half of Your Logic Model: How Values Affect the Middle Ground Between Measurable Outcomes and Long-term Goals
Ardice Hartry, University California, Berkeley, hartry@berkeley.edu
In evaluation, we often do not fully understand the relationship between short-term effects (what we can expect to accomplish over the duration of a project) and long-term outcomes (the overall goals of a project), yet we base our entire evaluation on the rigor of this relationship. For instance, if we assume that changes in students' attitudes towards science lead to increased achievement in science, then we feel we need only measure changes in attitudes. These assumptions are often based upon underlying and unacknowledged values and perspectives, rather than on research and evidence. This presentation explicates the problem using the example of an evaluation of a Global Climate Change Education program. At recent national meetings, multiple evaluators raised the issue of relying on these assumptions and their underlying values; this presentation sets out to describe both the pitfalls associated with blind acceptance of these assumptions and potential solutions based on current literature.
Use of Common Measures Across Diverse Climate Change Education Projects: How do you Show Collective Value?
Beverly Farr, MPR Associates, bfarr@mprinc.com
The projects included in the CCEP Grants Program across NSF, NASA, and NOAA all share two goals: 1) Workforce Development: preparing a new generation of climate scientists, engineers, and technicians equipped to provide innovative and creative approaches to understanding global climate change and to mitigate its impact; and 2) Public Literacy: preparing U.S. citizens to understand climate change and its implications. They vary, however, in the levels they address, from public agencies to research organizations to universities to public schools, and in the strategies they use to achieve their objectives. The activities of the projects cannot always be directly linked to the ultimate goals, however, and intervening outcomes need to be examined to assess the impact of the projects overall. As funders, NSF, NASA, and NOAA want to know what the projects together contribute to the accomplishment of these ultimate goals, and the evaluators want to collaborate by establishing common indicators.

Session Title: Teaching Program Evaluation for Public Managers
Panel Session 589 to be held in Redondo on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Bonnie Stabile, George Mason University, bstabile@gmu.edu
Abstract: This proposed panel is intended to address both the importance and the peculiarities of teaching evaluation in public affairs graduate programs, including Masters Programs in Public Administration and Public Policy, and to consider best practices. The growing surge of interest in program evaluation is perhaps nowhere more important than in the public sector, where policymakers at all levels of government strive to ensure that public programs are effective in an era when both budgets and political discourse are strained. Those who teach public managers to be effective evaluators of government program efforts have an important role to play. Despite the importance of their task, they may have only one semester, or less, to instill in their students both an appreciation for evaluation and an ability to tackle its multiplicity of methodologies with some competence.
Teaching Analytical Skills: Management, Measurement, Evaluation
Maria Aristigueta, University of Delaware, aristigueta@oet.udel.edu
Maria Aristigueta is Director of the School of Public Policy and Administration, Professor, and Senior Policy Fellow at the University of Delaware. She is co-editor of the International Handbook of Practice-Based Performance Management (2008) and has written widely on performance measurement and management.
Clinical or Course-based Approaches to Teaching Evaluation
Heather Campbell, Claremont Graduate University, heather.campbell@cgu.edu
Heather Campbell is Associate Professor at Claremont Graduate University, School of Politics and Economics, Department of Politics and Policy. She has contributed importantly to the field of public affairs in her role as Editor in Chief of the Journal of Public Affairs Education, and as author of articles on many facets of policy analysis and program evaluation. In addition, she has taught program evaluation to current and future public managers using both a clinical approach (with real evaluation projects for real clients) and a purely course-based approach and will present tradeoffs.
Educating Evaluators in the Public Sphere
Kathryn Newcomer, George Washington University, kathryn.newcomer@gmail.com
Kathryn Newcomer is Director of the Trachtenberg School of Public Policy and Public Administration at the George Washington University in Washington, DC where she teaches program evaluation, performance measurement and policy analysis. She also conducts research and training for federal and local agencies on performance measurement, and has published several books, including Improving Government Performance (1989) and The Handbook of Practical Program Evaluation (3rd edition 2010).

Session Title: Making Our Way Forward: Creating an Evaluation Design for an Indigenous College
Think Tank Session 590 to be held in Salinas on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Maenette Benham, University of Hawaii, Manoa, mbenham@hawaii.edu
Discussant(s):
Antoinette Konia Freitas, University of Hawaii, Manoa, antoinet@hawaii.edu
Marlene P Lowe, University of Hawaii, Manoa, mplowe@hawaii.edu
Brandi Jean Nalani Balutski, University of Hawaii, Manoa, balutski@hawaii.edu
Maya Saffery, University of Hawaii, Manoa, msaffery@hawaii.edu
Abstract: How does an indigenous college within a Research I university approach program evaluation? There are both philosophical and practical challenges inherent in the program evaluation process due to differing purposes, perspectives regarding mastery, and eclectic methods of data collection, analysis, and presentation. The challenge for the college's evaluation team is to create an evaluation design that meets both institutional and cultural aims by "re"languaging the process of evaluation and assessment in a way that decolonizes both the processes and the perspectives, reveals a commitment to cultural values as well as an understanding of and respect for academic values, and builds bridges that support the work and the vitality of the indigenous programs. How this is done without creating an overly complex evaluation design is the focus of the think tank's discussion.

Session Title: Review of Literature Evaluative Steps: A Meta-Framework for Conducting Comprehensive and Rigorous Literature Reviews in Program Evaluation
Demonstration Session 591 to be held in San Clemente on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the AEA Conference Committee
Presenter(s):
Rebecca Frels, Lamar University, rebecca.frels@gmail.com
Abstract: Conducting the literature review represents the most important step of the research process in evaluation studies because it is the most effective way of becoming familiar with previous findings and research methodology, as well as being cognizant of previous and existing programs. In our demonstration, we (a) identify myths associated with conducting literature reviews, (b) provide a new and comprehensive definition of the literature review, (c) provide reasons for conducting comprehensive literature reviews, (d) identify the roles of literature reviewers, and (e) introduce the seven Review of Literature Evaluative Steps (ROLES) for program evaluations. Participants will experience ways to document the information search; explore beliefs, values, and valuing; select and deselect literature according to a validation framework; extend the literature review; store literature; and analyze literature using several quantitative and qualitative analysis techniques involving quantitative (e.g., Excel) and qualitative (e.g., QDA Miner [Provalis Research, 2009]) computer software programs.

Session Title: Nurturing a Learning Culture: Two Key Tensions
Think Tank Session 592 to be held in San Simeon A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
David Scheie, Touchstone Center for Collaborative Inquiry, dscheie@touchstone-center.com
Discussant(s):
Scott Hebert, Sustained Impact, shebert@sustainedimpact.com
Jessica Shao, James Irvine Foundation, jshaosan80@yahoo.com
Ross Velure Roholt, University of Minnesota, rossvr@umn.edu
Abstract: This session explores barriers to learning within projects and organizations, and strategies to enlarge space for learning. It will probe the tension between meticulous data collection and reporting, on the one hand, and thoughtful meaning-making, on the other; and that between painful "learning experiences" and the possibility that learning can be delightful and satisfying. Facilitators will draw on experiences particularly with youth development and community change projects, to sketch these two tensions and identify ways to foster a learning climate amid these challenges. Small groups will engage in dialogue using these prompts: How can we integrate reflective practice with solid data collection? How can we use available data to maximize learning - even when the data aren't pristine? What are the barriers to a learning culture, and how can they be lowered? How can a safe environment for learning from mistakes be established? What does a lively learning culture look like?

Session Title: Innovative Techniques for Data Collection and Management in Educational Evaluation
Multipaper Session 593 to be held in San Simeon B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
James P Van Haneghan,  University of South Alabama, jvanhane@usouthal.edu
Discussant(s):
Tiffany Berry,  Claremont Graduate University, tiffany.berry@cgu.edu
Increasing the Quality of Student Voice: The Role of Personal Response Devices, or Clickers in the Evaluation of Secondary School Reform
Presenter(s):
Maribel Harder, University of Miami, mgharder@miami.edu
Ann G Bessell, University of Miami, agbessell@miami.edu
Sabrina Sembiante, University of Miami, s.sembiante@umiami.edu
Ileana Altamirano-Cardenas, University of Miami, i.altamirano@umiami.edu
Abstract: A broader vision for secondary school reform that promotes leadership and career-based skills in addition to the traditional academic, classroom-based approach has been advocated by the Harvard Graduate School of Education's Pathways to Prosperity Project (2011). The University of Miami Education Evaluation Team (UMEET) has served as the third-party evaluator of such a reform effort, evaluating the implementation of career-based academies, or smaller learning communities (SLCs), for the past seven years and providing formative and summative assessment to a multi-cohort, multi-site group of 28 large urban high schools. This year, audience response clickers were implemented in order to enhance efficient and effective data collection during our participatory 'Data-in-a-Day' school visits. This session presents information gleaned from using personal response devices, or clickers, with high school students, and discusses the benefits of, and the challenges overcome in, using this innovative technology for evaluation purposes.
An Evaluation of the Use of a Database Management Software for Improved Student Performance
Presenter(s):
Susan Skidmore, Sam Houston State University, skidmore@shsu.edu
Abstract: Almost half of the school districts in Texas (n=441) have implemented the Database Management Assessment and Curriculum (DMAC) software to optimize teachers' ability to tailor curriculum to meet students' needs. However, before DMAC's utility can be assessed, an understanding of the extent to which teachers actually use the software has to be determined. The present evaluation study assesses teachers' facility with and affinity for DMAC in a rural east Texas school district. All high school campus faculty, administrators, and counselors were surveyed. In addition, semi-structured interviews were conducted with teachers from each of the five departments and with the campus testing coordinator. Results indicate that teachers are frustrated with the development of the curriculum-based assessments (CBAs) and require more training on how to best interpret the data provided. Suggestions for improving the use of database management software are offered.
Use of Incentives to Increase Participation
Presenter(s):
Susan Saka, University of Hawaii, Manoa, ssaka@hawaii.edu
Abstract: A high participation rate is vital to obtaining reliable and valid information. With the increased academic requirements of NCLB, educators are reluctant to take on anything that takes away from academic time and adds burden to teachers and students. The use of incentives helped to increase participation in school-level health surveys. Various kinds of incentives, including gift cards, items with healthy slogans, and refreshments, were used to 'reward' teachers who attended a training session, and healthy gifts (e.g., pedometers and lunch bags) and gift cards were offered as incentives to teachers and school-level coordinators for permission form return. Food items and drawings were used for students. Proper planning is paramount, including knowing the rules and identifying potential problems. An examination of participation rates over more than 10 years and interviews with project-related personnel regarding the effectiveness of the incentives will be discussed.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: State-Level Evaluation of a Migrant Education Program
Roundtable Presentation 594 to be held in Santa Barbara on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Alberto Heredia, WestEd, aheredi@wested.org
Abstract: This presentation will focus on challenges to and preliminary findings of an evaluation of a Migrant Education Program State Service Delivery Plan (SSDP) in a large western state. Federal and state statutes require that an evaluation of the SSDP determine its effectiveness in meeting performance targets and its fidelity of implementation. Our evaluation approach will assess the effect of the SSDP on student achievement and functioning and identify the practices used to achieve the effect. Formatively, the evaluation documents regional implementation of programs and services in the SSDP annually. Information and evidence on progress toward implementation of programs and services according to SSDP guidelines will be collected. Student outcome data as delineated by SSDP performance indicators will be analyzed annually to gauge progress toward SSDP performance targets. Summatively, an assessment of the impact of the SSDP on academic and non-academic outcomes of migrant children will be provided.
Roundtable Rotation II: Gathering Information From Valued Stakeholders: Exploring Effective Ways to Capture Homeless Parents' Perceptions of School-Based Program Implementation
Roundtable Presentation 594 to be held in Santa Barbara on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Erika Taylor, Prince George's County Public Schools, etaylorcre@aol.com
Kola Sunmonu, Prince George's County Public Schools, kolawole.sunmonu@pgcps.org
Abstract: In Maryland, school districts are required to coordinate with local social service agencies to provide services for homeless students. Prince George's County Public Schools also conducts a mandatory annual evaluation of its Homeless Education Program (HEP). One stakeholder group actively sought out to provide information on HEP service delivery is caregivers of homeless students. Traditionally, self-administered surveys have been used to collect data from parents and caregivers. However, the response rates have been very low, primarily due to high residential mobility among parents, making them 'hard to reach'. As the HEP is designed to address the needs of homeless students and their families, the input of parents is vital to the ongoing development and administration of the program. The purpose of the proposed roundtable is to gain feedback from colleagues about the methods currently used to administer the parent surveys, and to explore new strategies that might increase participation.

Session Title: Quantifying Threats to Validity
Panel Session 595 to be held in Santa Monica on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Patrick McKnight, George Mason University, pem725@gmail.com
Abstract: Evaluators often face various threats to validity due to relatively weak non-randomized designs or strong designs that deteriorate due to failed randomization, unexpected environmental changes, or other problems. Unfortunately, we do not know how large of an effect these threats impose upon our findings. The purpose of this panel is to discuss several studies aimed at estimating the largest effect possible for several threats to validity including selection bias, testing effects, and statistical artifacts. By estimating the largest effects possible from these threats, we might all be able to prioritize our efforts in order to protect against the largest and most likely threats to validity.
Appreciating Threats to Validity Using Experimental Designs and Simulation to Estimate Effect Sizes
Patrick McKnight, George Mason University, pem725@gmail.com
Several prominent evaluators have argued for the importance of estimating effects for various threats. Despite these efforts, very little work has been done to help us all appreciate the magnitude and probability of threats to validity. The first presentation in this panel outlines the problems with threats to validity and how not knowing the effects of these threats leads us all to routine practices that may not be justifiable.
Estimating Testing Effects Via A Web-based Intervention
Simone Erchov, George Mason University, sfranz1@gmu.edu
Repeated testing often causes concern because respondents typically adopt response tendencies that do not adequately reflect the property we wish to measure. Unfortunately, we know little about how large an effect may come from repeated testing. A simple weight loss program where participants repeatedly reported their weight along with several self-report items helped us estimate the effects of repeated testing. This presentation provides some insights into how repeated testing leads to effects similar to interventions. Simply put, when people are measured repeatedly over time they change, and that change was attributable solely to the measurement, since there was no intervention. Thus, repeated measurement served as an intervention. The implications of these findings will be discussed in detail.
Maximizing Selection Bias Effects: An Experimental and Simulation Study
Julius Najab, George Mason University, jnajab@gmu.edu
Selection bias stands as one of the most likely and troublesome threats to validity. Failed randomization, non-randomized studies, or small sample sizes often lead to non-equivalent groups - a situation Campbell and Stanley originally referred to as selection bias. To estimate the maximal effect possible from selection bias, we conducted several experiments where participants were purposely assigned to different groups based upon certain pre-treatment variables. The more relevant the selection variable, the larger the selection bias effect - just as we might expect. What was not expected was the magnitude of the effect from this threat. Overall, selection bias effects can be much larger than we ever expect and probably could account for all the effects observed in many evaluations. The implications for these findings will be discussed in detail.
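A minimal simulation in the spirit of the study described above (the design and parameters here are invented, not the authors'): groups are formed by splitting on a pre-treatment variable correlated with the outcome, and a sizable standardized mean difference appears even though no treatment is ever applied.

```python
# Sketch: simulate how selecting groups on a pre-treatment variable that is
# correlated with the outcome produces a spurious "treatment effect" with no
# treatment at all. The sample size and correlation are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
pre = rng.normal(size=n)                     # pre-treatment variable
outcome = 0.6 * pre + rng.normal(size=n)     # outcome related to it; no treatment anywhere

group = pre > 0                              # "selection" on the pre-treatment variable
diff = outcome[group].mean() - outcome[~group].mean()
d = diff / outcome.std(ddof=1)               # standardized mean difference (Cohen's d)
print(f"spurious effect size d = {d:.2f}")
```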

Session Title: Faults Everywhere: An Introduction to Fault Tree Analysis (FTA)
Skill-Building Workshop 596 to be held in Sunset on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Needs Assessment TIG
Presenter(s):
James W Altschuld, The Ohio State University, maltschuld1@columbus.rr.com
Hsin-Ling Hung, University of North Dakota, sonya.hung@und.edu
Yi-Fang Lee, National Chi Nan University, ivanalee@ncnu.edu.tw
Abstract: Most evaluators know of Fault Tree Analysis (FTA) but may not have much familiarity with what it is and its key features. FTA is used in needs assessment in two ways. The first, after needs have been prioritized, is to help to pinpoint what causes them in order to design solution strategies with greater likelihood of success. The focus is on paths of failure and critical path elements to be eliminated or reduced in their causal power. The second is when the question is asked "How might an already structured solution strategy fail?" The workshop begins with an overview of needs assessment which highlights where FTA and causal analysis fit in and are used. Causal techniques will be overviewed followed by an emphasis on FTA, a hands-on exercise, and participant discussion/comment.
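As a small illustration of the arithmetic behind a fault tree, the sketch below propagates basic-event probabilities through AND and OR gates under an independence assumption. The example tree and probabilities are invented, not taken from the workshop materials.

```python
# Sketch: propagate basic-event probabilities through a small fault tree,
# assuming independent events. The tree and probabilities are illustrative.
def and_gate(*probs):
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    p_none = 1.0
    for x in probs:
        p_none *= (1.0 - x)
    return 1.0 - p_none

# Top event: a solution strategy fails if (no funding AND no backup plan)
# OR (staff turnover AND no training pipeline).
p_fail = or_gate(and_gate(0.30, 0.50), and_gate(0.20, 0.40))
print(f"P(top event) = {p_fail:.3f}")
```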

Session Title: Exploring New Roles and Responsibilities in Educational Evaluation
Multipaper Session 597 to be held in Ventura on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Catherine Nelson,  Independent Consultant, catawsumb@yahoo.com
Discussant(s):
Tom McKlin,  The Findings Group, tom@thefindingsgroup.com
The Three Rs of Existing Evaluation Data: Revisit for Relevance, Reuse, and Refine
Presenter(s):
Kate Clavijo, Pearson, kateclavijo@gmail.com
Kelli Millwood, Pearson, kelli.millwood@pearson.com
Abstract: Existing evaluation data is data collected or recorded at an earlier time, often for an entirely different purpose than the research at hand. This paper will describe how to save resources by using existing data to inform decisions. Examples of collecting and then revisiting, reusing, and refining existing evaluation data will be described. The first example describes how we identify school district and community needs prior to implementing professional development. The second example compares evaluation outcomes for face-to-face and online professional development models. A third example illustrates how existing evaluation data answers questions about the impact professional development has on novice and experienced educators. While there is nothing new about using secondary data in evaluation, the examples set forth in this presentation will hopefully encourage participants to rethink and revitalize how they approach the use of secondary data.
Evaluation and Educational Innovation: Coping and Growth
Presenter(s):
Talbot Bielefeldt, International Society for Technology in Education, talbot@iste.org
Brandon Olszewski, International Society for Technology in Education, brandon@iste.org
Abstract: A university worked with rural schools to implement a locally-developed integrated science curriculum under a U.S. Math Science Partnership grant. Evaluators found that standard assessments were not sensitive to non-standard interventions. The educators' innovations led to transformation of an evaluation department's skills, and positioned the organization to address new challenges. Lacking relevant science assessments, evaluators and program staff created new evaluation tools to accompany the curriculum. This presentation documents trials, errors, and eventual success as evaluators changed their role and adopted a new suite of assessment skills. Presenters will discuss the logistics of bringing new skills (in this case, Item Response Theory) in-house versus outsourcing. Professional development, time, and technology are all factors in this transition. Presenters will also discuss the implications of these changes for an organization that wants to participate in K-12 educational evaluations in the United States and other countries.
What is a 'Good' Program? A Comprehensive Meta-Evaluation of the Project Lead The Way (PLTW) Program
Presenter(s):
Melissa Chapman Haynes, Professional Data Analysts Inc, melissa.chapman.haynes@gmail.com
David Rethwisch, University of Iowa, drethwis@engineering.uiowa.edu
Abstract: What makes a program 'good?' This could mean many different things to many different people. Good in the sense that we should continue to fund it? Good in the sense that expected outcomes were obtained? And so it goes. There were numerous conversations at AEA 2010 about how evaluators (and program leaders) are often restricted by funding sources as far as what they are able to examine. Unfortunately, it is not typical for evaluators to have access to prior evaluations of programs, unless they have personally worked on those evaluations. The focus of this work is a meta-evaluation of Project Lead The Way, a secondary engineering program with fast growth but equivocal results about program outcomes. Drawing upon a small but growing group of PLTW researchers and evaluators, technical reports and other unpublished work was obtained through personal request, and the Program Evaluation Standards were used to meta-evaluate PLTW.
But What Does This Tell Me? Teachers Drawing Meaning From Data Displays
Presenter(s):
Kristina Ayers Paul, University of South Carolina, paulka@mailbox.sc.edu
Ashlee Lewis, University of South Carolina, lewisaa2@mailbox.sc.edu
Min Zhu, University of South Carolina, zhum@mailbox.sc.edu
Xiaofang Zhang, University of South Carolina, zhang29@mailbox.sc.edu
Abstract: Educators are continually asked to use data to inform instructional decision-making, yet few educators have the technical training needed to interpret complex data sets. Evaluators providing data to non-technical audiences should be mindful of the need to present data in ways that are understandable, interpretable, and comparable (May, 2004). Researchers from the South Carolina Arts Assessment Program (SCAAP) are exploring new methods of sharing program assessment data with teachers, principals, and regional coordinators in ways that will be meaningful to these groups. In this paper presentation, we will share the results of a study examining educators' reactions to various data display formats. The study uses a think aloud protocol to examine teachers' thinking as they attempt to draw meaning from assessment data presented through an assortment of data displays.
