| Session Title: In the Multi-level Systemic Evaluation Universe, Whose Responsibility is Quality? A Discussion Among Respected Colleagues |
| Think Tank Session 582 to be held in Lone Star A on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Presidential Strand and the Cluster, Multi-site and Multi-level Evaluation TIG |
| Presenter(s): |
| Kathleen Toms, Research Works Inc, katytoms@researchworks.org |
| Discussant(s): |
| Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca |
| Michael Morris, University of New Haven, mmorris@newhaven.edu |
| Linda E Lee, Proactive Information Services Inc, linda@proactive.mb.ca |
| Josue De La Rosa, Research Works Inc, jdelarosa@researchworks.org |
| Abstract: As funder requirements filter through the levels of their particular systems, evaluators attached to each of those levels can find themselves caught in a quality conundrum. This think tank will consider and discuss the issue, including an exploration of possible solutions, for example, whether mandatory Standards and Principles could help to achieve systemic evaluation quality. This session brings together a group of respected colleagues to frame and facilitate a discussion of this emerging challenge to evaluation quality: Sandra Mathison, widely published and referenced author and professor known for her strong social responsibility stance regarding evaluation theory and practice; Michael Morris, respected author and professor known for his work in evaluation ethics and the AEA Guiding Principles; Linda E. Lee, past National President of the Canadian Evaluation Society and practicing evaluator in Canada and abroad; and Josue De La Rosa, a new evaluator. |
| Session Title: Fitting the Key to the Lock: Matching Systems Methods To Evaluation Questions |
| Skill-Building Workshop 583 to be held in Lone Star B on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Systems in Evaluation TIG |
| Presenter(s): |
| Bob Williams, Independent Consultant, bobwill@actrix.co.nz |
| Abstract: A quality evaluation not only has to use a particular method, technique, or approach well; it has to choose it appropriately. Despite increasing interest in systems approaches, and examples of the use of specific methods, there has been no guidance on which systems method would be suitable in which evaluation situation. Consequently, for many evaluators the leap between thinking systemically and using the most appropriate systems methods in their evaluation is a bit of a gamble. Out of the many hundreds available, which ones can be useful in what circumstances; which systems keys fit which evaluation locks? One solution lies in evaluation questions: some systems methods address particular evaluation questions very well. Drawing on the just-published book, Systems Concepts in Action: A Practitioner's Toolkit, this session will demonstrate how evaluators can match specific systems methods to their evaluation questions. |
| Session Title: Youth Participatory Evaluation: Where Are We and Where Do We Go From Here? |
| Think Tank Session 584 to be held in Lone Star C on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Presenter(s): |
| Kim Sabo Flores, Evaluation Access and ActKnowledge, kimsabo@aol.com |
| Discussant(s): |
| Jane Powers, Cornell University, jlp5@cornell.edu |
| Katie Richards-Schuster, University of Michigan, kers@umich.edu |
| Abstract: Over the past decade, the practice of Youth Participatory Evaluation has grown significantly, evidenced by increased articles, books, and social networks focusing attention on the subject, along with curricula, toolkits, and web trainings. However, even with all of these new developments, questions about youth participation remain: Where is the field now and where are we going? With the proliferation of articles and materials, have we learned anything new? What types of impacts are being documented (e.g., on youth, adults, programs, organizations, and the field of evaluation)? What work needs to be done to continue supporting meaningful, quality youth participatory evaluation efforts? This think tank session will engage audience members in addressing these questions through small and large discussion groups. The goals of this think tank are to facilitate peer learning, gather key information about the current state of the field, and create actionable steps to further advance the work. |
| Session Title: Helping Nonprofit Agencies Move From Measuring Outcomes to Managing Them: A Budding Success Story From the United Way of Greater Houston |
| Panel Session 585 to be held in Lone Star D on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Non-profit and Foundations Evaluation TIG |
| Chair(s): |
| Michael Hendricks, Independent Consultant, mikehendri@aol.com |
| Abstract: Like the rest of the nonprofit world, the United Way of Greater Houston (UWGH) and its partner agencies realize it’s no longer sufficient to measure only program inputs, activities, and outputs. Therefore, for the past 10 years UWGH has helped its partner agencies to also measure outcomes. But while agencies dutifully complied, and while some benefits resulted, it was unclear whether this outcome measurement substantially improved services. Re-evaluating the situation, UWGH realized it needed to step back and ask itself “WHY do we want agencies to measure outcomes, and HOW can we make it more useful for them?” As a result, UWGH began a conscious shift away from the research-focused activity of mere outcomes measurement to the improvement-focused activity of outcomes management, and the benefits are beginning to become obvious. This session describes the change, from both the UWGH and agency perspectives, and offers recommendations for other funders and agencies. |
| Session Title: Enhancing the Quality of Evaluations by Rational Planning |
| Panel Session 586 to be held in Lone Star E on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Chair(s): |
| Frederic Malter, University of Arizona, fmalter@email.arizona.edu |
| Abstract: Evaluation reports are often disappointing, and a frequent reason is insufficient thought given to the effort in the first place. A rule: the implementation of an evaluation never improves once the process has begun. The quality of evaluations can be improved by initial attention to the theory and models underlying the evaluation, without which the evaluation is likely to lack focus and veer off target. Evaluation designs are often poorly specified and less rigorous than they could and should be, sometimes because options are not fully considered. Measurement issues are critical and require attention, but they frequently are resolved in arbitrary ways. Finally, plans for analysis of data should be developed in concert with those for design, methods, and measurement, but in many cases data analyses represent a sort of Procrustean bed for whatever resulted from previous efforts, no matter how flawed. These problems are discussed in relation to specific examples. |
| Session Title: AEA and Public Engagement: How Can the American Evaluation Association’s Role in Public Engagement Contribute to Evaluation Quality? |
| Think Tank Session 587 to be held in Lone Star F on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the AEA Conference Committee |
| Presenter(s): |
| Thomaz Chianca, COMEA Evaluation Ltd, thomaz.chianca@gmail.com |
| Discussant(s): |
| Nicki King, University of California, Davis, njking@ucdavis.edu |
| Jim Rugh, Independent Consultant, jimrugh@mindspring.com |
| Abstract: AEA’s shift in governance to a policy-based model has changed the structure and focus of all of the Association’s committees. The newly-formed Public Engagement Team (PET) is the amalgamation of AEA’s former Public Affairs and International Committees. We define “public engagement” (PE) as the coordinated set of activities AEA conducts outside our own association in order to (a) present to, (b) learn from, and (c) collaborate with other relevant organizations and individuals, both within the US and internationally. This think tank session will emphasize both small-group discussions and cross-group sharing on a variety of subjects related to PE: collaboration, learning from others, and presenting concerns about evaluation policy and practice. A specific issue we will use as an example for discerning how AEA can and should be involved in PE will be addressing the implications of the “NONIE Impact Evaluation Guidance” for how international development programs should be evaluated. |
| Roundtable Rotation I: Managing Evaluation: Continuing the Conversation |
| Roundtable Presentation 588 to be held in MISSION A on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Evaluation Managers and Supervisors TIG |
| Presenter(s): |
| Donald Compton, Centers for Disease Control and Prevention, dcompton@cdc.gov |
| Michael Baizerman, University of Minnesota, mbaizerm@umn.edu |
| Michael Schooley, Centers for Disease Control and Prevention, mschooley@cdc.gov |
| Abstract: This roundtable continues the conversation on how to recognize, assess, train, and evaluate professionals to manage evaluation studies, evaluators, and an evaluation unit. The goal is to foster greater recognition of managing evaluation within the profession, to gain legitimacy for the practice, and to begin education and training in doing this work at an expert level. The discussion will be grounded in, but not limited to, our recent New Directions for Evaluation issue, Managing Program Evaluation: Towards Explicating a Professional Practice. Therein, we collected and analyzed case examples of expert managing, assessed the literature, and proposed foci for study as well as education and training curricula and pedagogy. We will continue touching on these topics and will distribute handouts on them. The roundtable is an alternate format for us to keep attention on this vital evaluation practice, while collecting data from participants useful for critiquing our analyses and proposals, and for recruiting others who are managing or would like managing to be their evaluation career. |
| Roundtable Rotation II: Directors of Research Ensuring Quality in Practice |
| Roundtable Presentation 588 to be held in MISSION A on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Evaluation Managers and Supervisors TIG |
| Presenter(s): |
| Colleen Manning, Goodman Research Group Inc, manning@grginc.com |
| Abstract: Attend this roundtable to learn what other directors of research are doing to ensure evaluation quality, from the theoretical to the practical. Find out how your practices are similar to and differ from those of your fellow directors. Hopefully, you will leave the discussion with some new contacts and new strategies for your work! For the purposes of this session, we are defining a “director” as someone who provides leadership and oversight to others who carry out evaluations. The facilitator is a director of research at a small research and evaluation firm. |
| Session Title: Evaluation Management Policies: Examining Requirements of Quality Evaluation |
| Multipaper Session 589 to be held in MISSION B on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the |
| Chair(s): |
| Lisa Rajigah, International Initiative for Impact Evaluation (3ie), lrajigah@3ieimpact.org |
| Discussant(s): |
| Leslie Fierro, SciMetrika, let6@cdc.gov |
| Gary Miron, Western Michigan University, gary.miron@wmich.edu |
| Session Title: Integrating Management Consulting Competencies into the Evaluation Process |
| Panel Session 590 to be held in BOWIE A on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Independent Consulting TIG |
| Chair(s): |
| Pamela Davidson, University of California, Los Angeles, davidson@ucla.edu |
| Abstract: Our review of the literature reveals that, for a variety of reasons, the field has paid little attention to the consultative aspects of the work conducted by evaluators. This session will deal with this important but neglected component of the role and activities engaged in by most professional evaluators. Its primary objectives are to share a set of concepts and practices essential for consulting with organizations and managers and to integrate consulting activities and competencies into the role of the professional evaluator. More specifically, our focus will be on (1) the consulting aspects of the evaluator’s role, (2) the nature of and approaches to consultation, (3) the management consulting concepts and methods most relevant for evaluators, (4) the consultative process, including its inherent issues and challenges, (5) the management of organizational change interventions, and (6) identifying and integrating consulting competencies into the role of professional evaluators. |
| Session Title: The Evolution and Revolution of Culturally Responsive Evaluation |
| Panel Session 591 to be held in BOWIE B on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Multiethnic Issues in Evaluation TIG |
| Chair(s): |
| Fiona Cram, Katoa Ltd, katoaltd@gmail.com |
| Discussant(s): |
| Karen E Kirkhart, Syracuse University, kirkhart@syr.edu |
| Nan Wehipeihana, Research Evaluation Consultancy Limited, nanw@clear.net.nz |
| Abstract: This panel brings together a group of evaluators actively engaged in utilizing Culturally Responsive Evaluation (CRE) strategies with cultural subgroups in the United States (U.S.), as well as with members of indigenous communities in the U.S. and New Zealand. The panel will discuss the progression of CRE as a revolutionary concept in response to traditional evaluation methods and then highlight the progression of CRE in evaluative discourse over the last decade as a vehicle for the enhancement of evaluation quality. Discussion will center on international, indigenous, and comparative perspectives when working in communities of color and will finally examine evaluation tools designed to make effective use of the cultural context of the evaluand. |
| Session Title: Health Matters: Evaluating Advocacy and Policy Change |
| Multipaper Session 592 to be held in BOWIE C on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Advocacy and Policy Change TIG and the Health Evaluation TIG |
| Chair(s): |
| Astrid Hendricks, California Endowment, ahendricks@calendow.org |
| Roundtable Rotation I: The Essentials of a Quality Evaluation Capstone Project, Practicum or Internship: Students’ Perspectives |
| Roundtable Presentation 593 to be held in GOLIAD on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Teaching of Evaluation TIG |
| Presenter(s): |
| Aubrey W Perry, Portland State University, aubrey.perry@gmail.com |
| Robert Tornberg, University of Minnesota, tornb012@umn.edu |
| Veronica Smith, data2insight, veronicasmith@data2insight.com |
| Abstract: Undergraduate and graduate students’ capstone projects, practicums, and internships offer valuable experience to soon-to-be evaluators. This real-world exposure helps bridge the gap between academia and the workplace. Three graduate students recap their experiences serving in evaluation settings to fulfill graduate requirements. The presenters will compare and contrast their diverse experiences, methods, and challenges while serving in different settings, including a small non-profit organization aimed at employing people with disabilities, a public radio station, and a museum of natural history. The students will discuss what knowledge and skills they developed as a result of their experience. The presenters will also offer tips on entering into an evaluation experience to fulfill education requirements and on how to maximize the quality of your project experience. Finally, they will open the floor to discussion for session attendees to ask questions and describe their own educational evaluation experiences. |
| Roundtable Rotation II: An Evaluation Seminar: How Our Students Gain Practical Experience in Evaluation and Research Methodology |
| Roundtable Presentation 593 to be held in GOLIAD on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Teaching of Evaluation TIG |
| Presenter(s): |
| Jennifer Morrow, University of Tennessee, Knoxville, jamorrow@utk.edu |
| Gary Skolits, University of Tennessee, Knoxville, gskolits@utk.edu |
| Thelma Woodard, University of Tennessee, Knoxville, twoodar2@utk.edu |
| Susanne Kaesbauer, University of Tennessee, Knoxville, skaesbau@utk.edu |
| Abstract: In this roundtable we will discuss a three-semester, three-credit seminar that we created for students in our graduate program in Evaluation, Statistics, and Measurement at our university. This seminar was designed to enhance our students’ training in evaluation and research methodology. We will discuss our reasons for creating the seminar and its design, and two of our graduate students will discuss their experiences in the seminar. Lastly, we will spend most of the time leading a discussion with audience members on strategies for offering these evaluation and research experiences at their institutions. |
| Roundtable Rotation I: Conceptualizing the Quality of Assessment |
| Roundtable Presentation 594 to be held in SAN JACINTO on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Assessment in Higher Education TIG |
| Presenter(s): |
| Steve Culver, Virginia Tech, sculver@vt.edu |
| Ray Van Dyke, Virginia Tech, rvandyke@vt.edu |
| Abstract: After the Spellings Commission’s report (2006) on the lack of accountability mechanisms in higher education, student outcomes assessment processes have become a part of the landscape of every college and university in the United States. Regional accrediting bodies, such as the Southern Association of Colleges and Schools (SACS), and discipline-specific accrediting bodies, such as ABET and AACSB, now evaluate programs and institutions based on the quality of their assessment process and the results of those processes. However, judging the quality of those assessments has become problematic. Is it more important to demonstrate a continuous improvement cycle or is it more important to demonstrate reflective practice not yet supported by empirical evidence? How much should context play into this evaluation and what should be the pertinent contextual factors? This session will explore questions (and some answers) about building the appropriate elements of such an evaluation. |
| Roundtable Rotation II: Perspectives on Collaboration in Practice: Expanding the Role and Culture of Assessment in Academic Affairs and Student Services in Higher Education |
| Roundtable Presentation 594 to be held in SAN JACINTO on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Assessment in Higher Education TIG |
| Presenter(s): |
| Ge Chen, University of Texas, Austin, gechen@austin.utexas.edu |
| Glen E Baumgart, University of Texas, Austin, gbaumgart@austin.utexas.edu |
| R Joseph Rodriguez, University of Texas, Austin, joseph.rodriguez@austin.utexas.edu |
| Abstract: Assessment requires reflection on our practices, which directly influence student learning outcomes and institutional effectiveness. This roundtable session will focus on the assessment and collaborative practices of a large division at The University of Texas at Austin that includes academic affairs, student services, and community outreach, demonstrating the progress and growth achieved through strategic and assessment planning. Presenters will provide perspectives and reflection from three points of view and practice: divisional leadership, assessment consultant, and department-level practitioner. Discussion will center on the importance of language, culture, leadership, outside review, and practice in creating and sustaining quality assessment in a larger organization. Participants will leave the session with tools and information for applying the lessons learned to assessment practices in their own large organizations. |
| Session Title: Social Network Analysis Across Disciplines and Purposes |
| Multipaper Session 595 to be held in TRAVIS A on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the |
| Chair(s): |
| Maryann Durland, Durland Consulting, mdurland@durlandconsulting.com |
| Discussant(s): |
| Maryann Durland, Durland Consulting, mdurland@durlandconsulting.com |
| Session Title: Multimedia Advances in Evaluation: The Use of Skype, Elluminate, and Virtual World Technologies in Conducting Focus Group Interviews |
| Demonstration Session 596 to be held in TRAVIS B on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Integrating Technology Into Evaluation TIG |
| Presenter(s): |
| Corliss Brown, University of North Carolina at Chapel-Hill, ccbrown@email.unc.edu |
| Lauren Kendall, University of North Carolina at Chapel-Hill, lkendall@email.unc.edu |
| Taniya Reaves, University of North Carolina at Chapel-Hill, treaves@email.unc.edu |
| Jessica Milton, University of North Carolina at Chapel Hill, jrmilton@email.unc.edu |
| Johnavae Campbell, University of North Carolina at Chapel-Hill, johnavae@email.unc.edu |
| Abstract: Technological advances have facilitated data collection for evaluation research by expanding accessibility for both researchers and participants. However, email does not allow the real-time participation necessary for interviews and focus groups. More recent web-based media allow several methods of communication, including audio or video teleconferencing, instant messaging, and document sharing, with some software allowing integrated use of all four methods to accommodate the range of accessibility for participants. Benefits of online focus groups include lower recruitment costs, increased engagement, and an absence of logistic and travel expenses. This demonstration examines the features of three applications that support facilitated data collection: Skype, Elluminate, and online virtual worlds. Limitations include increased training and orientation, discourse analysis, identity of participants, and possible technical difficulties. Integrated properly, these applications could increase the quality of data collection while decreasing collection time and costs. |
| Session Title: Real World Applications of System Concepts in Evaluation |
| Multipaper Session 597 to be held in TRAVIS C on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Systems in Evaluation TIG |
| Chair(s): |
| Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@pathfinderevaluation.com |
| Discussant(s): |
| Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@pathfinderevaluation.com |
| Session Title: Implication of Evaluation Approaches on Arts Organization Policies |
| Multipaper Session 598 to be held in TRAVIS D on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Evaluating the Arts and Culture TIG |
| Chair(s): |
| Ching Ching Yap, Savannah College of Art and Design, cyap@scad.edu |
| Discussant(s): |
| Don Glass, VSA, dlglass@vsarts.org |
| Session Title: Interesting Evaluations in Social Services and Welfare |
| Multipaper Session 599 to be held in INDEPENDENCE on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Human Services Evaluation TIG |
| Chair(s): |
| Vajeera Dorabawila, New York State Office of Children and Family Services, vajeera.dorabawila@ocfs.state.ny.us |
| Session Title: Structural Equation Modeling Solutions for Evaluators |
| Multipaper Session 600 to be held in PRESIDIO A on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Chair(s): |
| Frederick Newman, Florida International University, newmanf@fiu.edu |
| Session Title: Improving the Quality of Analysis, Interpretation, and Reporting of Program Outcomes Through a Measurement, Evaluation, and Statistics Training Course |
| Demonstration Session 601 to be held in PRESIDIO B on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG |
| Presenter(s): |
| Yvonne Watson, United States Environmental Protection Agency, watson.yvonne@epa.gov |
| Tracy Dyke-Redmond, Industrial Economics Inc, tdr@indecon.com |
| Terell Lasane, United States Environmental Protection Agency, lasane.terell@epa.gov |
| Abstract: The U.S. Environmental Protection Agency (EPA)'s Evaluation Support Division has designed a new training course to help program staff use statistically valid approaches to demonstrate program outcomes. The course responds to critiques from OMB and others regarding the unintentional but inappropriate use of non-representative program data to draw conclusions about program outcomes. The new course, Using Statistical Approaches to Support Performance Measurement and Evaluation, is designed to introduce Agency staff with little to no knowledge of program evaluation and inferential statistics to basic concepts and techniques. Course participants will learn how performance measurement, program evaluation and inferential statistics can be combined to strengthen the quality of their analysis, interpretation, and reporting of program outcomes. This demonstration will walk conference participants through the course materials, highlighting aspects of the training that were successful and unsuccessful in EPA's organizational context. |
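As a hypothetical illustration of the distinction this abstract draws (invented numbers and names, not material from the EPA course), a minimal Python sketch of why an estimate from a representative sample differs from a summary of non-representative program data:

# Illustrative sketch only: invented data, not EPA course material.
# A simple random sample supports inference about all participants;
# a self-selected subset does not, no matter how large it is.
import math
import random
import statistics

random.seed(42)

# Hypothetical outcome scores for all 5,000 program participants.
population = [random.gauss(70, 12) for _ in range(5000)]

# Representative: simple random sample of 200, with an approximate 95% CI.
sample = random.sample(population, 200)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))
print(f"Sample estimate: {mean:.1f} "
      f"(95% CI {mean - 1.96 * sem:.1f} to {mean + 1.96 * sem:.1f})")

# Non-representative: suppose only the 1,000 highest scorers reported data.
volunteers = sorted(population, reverse=True)[:1000]
print(f"Self-selected reporters' mean: {statistics.mean(volunteers):.1f} "
      f"(true population mean: {statistics.mean(population):.1f})")

The second estimate is badly biased despite resting on five times as many cases, which is the critique of non-representative program data the course responds to.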
| Session Title: Assessing Implementation Fidelity of Substance Abuse Prevention Environmental Change Strategies: Lessons Learned From the Substance Abuse and Mental Health Services Administration (SAMHSA), Center for Substance Abuse Prevention (CSAP), Strategic Prevention Framework State Incentive Grant (SPF-SIG), and National Cross-site Evaluation |
| Panel Session 602 to be held in PRESIDIO C on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG |
| Chair(s): |
| Kristi Pettibone, The MayaTech Corporation, kpettibone@mayatech.com |
| Abstract: States and jurisdictions, funded through the Strategic Prevention Framework State Incentive Grant (SPF SIG) initiative, were required to fund communities to implement a range of substance abuse prevention interventions, including interventions designed to change the environment in which substance abuse occurs. The SPF SIG cross-site evaluation team developed implementation fidelity (IF) measures for environmental change strategies to assist grantees in implementing these strategies and evaluating their effectiveness. Three presentations comprise this panel. The first presentation describes the need for, and development of, measures for assessing IF of environmental change strategies, the data collection process, and the characteristics of data submitted by SPF SIG grantees. The second presentation discusses the preliminary assessment of the connection between the IF data and the outcomes or accomplishments associated with environmental change strategies. The third presentation describes a grantee’s adaptation of the assessment process to accommodate state-level and community-level conditions. |
| Roundtable Rotation I: Innovations in Youth Empowerment Evaluation |
| Roundtable Presentation 603 to be held in BONHAM A on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Presenter(s): |
| Kimberly Kay Lopez, Independent Consultant, kimklopez@hotmail.com |
| Abstract: This discussion is based on the youth-focused Empowerment Evaluation (Fetterman, 2001) of community-based youth prevention programs. The researcher developed an innovative approach by using photography and journal writing within the Empowerment Evaluation methodology. The researcher also modified the scoring scale used in Empowerment Evaluation to a letter-grade system. These innovations resonated with the youth and allowed the researcher not only to collect but also to validate data across several modalities. They have since been applied to several youth program evaluations with great success. Youth-focused participatory research is relevant to evaluation research because the expertise of the youth is valued in an equitable research partnership. Fetterman, D. M. (2001). Foundations of Empowerment Evaluation. Thousand Oaks: Sage Publications. |
| Roundtable Rotation II: Slipping and Sliding Like a Weasel on the Run: Empowerment Evaluation and the Hawthorne Effect |
| Roundtable Presentation 603 to be held in BONHAM A on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Presenter(s): |
| Michael Matteson, University of Wollongong, cenetista3637@hotmail.com |
| Abstract: In the course of evaluating the empowerment aspect of an Empowerment Evaluation, I was faced with the argument that any effect of the evaluation on the evaluation team participants was in fact a Hawthorne or placebo effect. On this view, any effect would be the result of the evaluation team changing their actions to please me as participant observer and evaluator, not a result of process use of the Empowerment Evaluation experience as such. I thought this could be seen as just a part of the Empowerment Evaluation process itself, but there was a problem with gaining post-evaluation data on evaluation effects by participant observation after the evaluation had concluded. This roundtable will look at the issues involved and ways of overcoming them, the role of the evaluator in Empowerment Evaluation, and the relevance of different interpretations of “empowerment” in Empowerment Evaluation to what is done and what can be regarded as success. |
| Session Title: Improving Evaluation in the Real World of Nonprofits and Foundations |
| Panel Session 604 to be held in BONHAM B on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Non-profit and Foundations Evaluation TIG |
| Chair(s): |
| Pamela Imm, University of South Carolina, pamimm@windstream.net |
| Discussant(s): |
| Abraham Wandersman, University of South Carolina, wandersman@sc.edu |
| Abstract: This session will focus on how the Health Foundation of Central Massachusetts (THFCM) integrates the work of evaluation and advocacy into its grantmaking. The panel members will present the structure of the Foundation’s grantmaking, how evaluators are recruited to work with grantees, steps taken to ensure evaluation strategies are incorporated into the work of the grantees, and the benefits that evaluators perceive in working with the Foundation. The panel will describe the accountability model used by the Foundation as well as how evaluators are encouraged to facilitate the work with the grantees. The evaluators’ perspective will also be presented, including the advantages of being involved in a small community of learners through monthly conference calls, professional development workshops, and informal and formal networking. By utilizing high-quality evaluators, the Foundation promotes its grantmaking principles of evidence-based practice, continuous quality improvement, and systems change through policy and advocacy. |
| Session Title: Teacher Effectiveness and Teacher Quality: What's the Difference? |
| Think Tank Session 605 to be held in BONHAM C on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Presenter(s): |
| Nathan Balasubramanian, Centennial Board of Cooperative Educational Services, nathanbala@gmail.com |
| Discussant(s): |
| Nathan Balasubramanian, Centennial Board of Cooperative Educational Services, nathanbala@gmail.com |
| Joy Perry, Fort Morgan School District, jperry@morgan.k12.co.us |
| Roxie Bracken, Keenesburg School District, roxiebracken@re3j.com |
| Abstract: How might we evaluate teacher performance? What is the difference between highly effective teachers and highly qualified teachers? Participants in this think tank will delve into how these two constructs might impact student performance in our elementary and secondary schools. The three breakout group leaders, with over 80 years of combined expertise among them, will facilitate this think tank informed by their recent collaborative research, “Leveraging Innovation: Teacher Effectiveness Initiative,” funded by a 2010 NCLB Recruitment and Retention Grant from the Office of Federal Programs Administration at the Colorado Department of Education. Following an initial orientation, three small-group discussions, and a check-and-connect upshot to recap our 90-minute lively dialogue, participants will leave with a deep understanding of the distinction. |
| Session Title: Fidelity of Program Implementation in Educational Evaluations |
| Multipaper Session 606 to be held in BONHAM D on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Javan Ridge, Colorado Springs School District 11, ridgejb@d11.org |
| Discussant(s): |
| Stacey Merola, ICF International, smerola@icfi.com |
| Session Title: Key Issues in Evaluating Industrial and Commercial Energy Efficiency Programs and Technologies |
| Multipaper Session 607 to be held in BONHAM E on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Environmental Program Evaluation TIG and the Business and Industry TIG |
| Chair(s): |
| Mary Sutter, Opinion Dynamics Corporation, msutter@opiniondynamics.com |
| Session Title: Report on a Test of a General Method for Quick Evaluation of Medical Research by Morbidity |
| Multipaper Session 609 to be held in Texas D on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Research, Technology, and Development Evaluation TIG and the Health Evaluation TIG |
| Chair(s): |
| Jerald Hage, University of Maryland, hage@socy.umd.edu |
| Session Title: Taking a Good, Long Look In the Mirror: How Can We Hold Ourselves Accountable for Quality Recommendations? |
| Think Tank Session 610 to be held in Texas E on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Evaluation Use TIG and the Organizational Learning and Evaluation Capacity Building TIG |
| Presenter(s): |
| Jennifer Iriti, University of Pittsburgh, iriti@pitt.edu |
| Discussant(s): |
| Jennifer Iriti, University of Pittsburgh, iriti@pitt.edu |
| Kari Nelsestuen, Education Northwest, kari.nelsestuen@educationnorthwest.org |
| Abstract: This Think Tank session begins a conversation within the field about building an empirically-based understanding of recommendations generated from evaluations—whether and how clients respond to them and if so, what impact they ultimately have. Although the making of recommendations is common practice for a majority of evaluators, the systematic study of their use and impact has lagged. The session is geared toward both evaluation practitioners and researchers who have interest in advancing the field toward evidence-based practice. After a brief framing by the facilitators, participants will break into small groups to consider the following questions: 1) What do and don’t we know about the various ways clients respond to recommendations from evaluators and how could we track these responses systematically? 2) What do and don’t we know about the impact of evaluation recommendations and how could we assess impact of those that are implemented by clients? |
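One concrete answer to the session's first question (how responses to recommendations could be tracked systematically) is a simple per-recommendation record. The Python sketch below is purely hypothetical: the field names, response categories, and example record are ours, not the presenters'.

# Hypothetical sketch of a recommendation-tracking record.
# All names and categories are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RecommendationRecord:
    evaluation_id: str                      # which evaluation produced it
    text: str                               # the recommendation as delivered
    client_response: Optional[str] = None   # e.g., "adopted", "adapted", "rejected"
    response_date: Optional[str] = None     # ISO date of the client's decision
    impact_notes: List[str] = field(default_factory=list)  # follow-up observations

# Example: logging one recommendation and following its fate over time.
rec = RecommendationRecord(
    evaluation_id="2010-014",
    text="Add a mid-year data review with program staff.",
)
rec.client_response = "adopted"
rec.response_date = "2011-03-15"
rec.impact_notes.append("Staff reported earlier course corrections in year two.")
print(rec)

Keeping one such record per recommendation, across many evaluations, would yield exactly the kind of systematic response and impact data the think tank calls for.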
| Session Title: Snapshots of Exemplary Evaluations |
| Panel Session 611 to be held in Texas F on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Research on Evaluation TIG |
| Chair(s): |
| Paul Brandon, University of Hawaii, brandon@hawaii.edu |
| Discussant(s): |
| Paul Brandon, University of Hawaii, brandon@hawaii.edu |
| Abstract: Existing research on the nature of exemplary evaluation practice is limited. In spite of awards being given for outstanding work, there has been little study of what makes an evaluation exemplary and how to produce excellent work consistently. The four presentations in this panel examine national, regional, and local studies for the purposes of (a) describing the characteristics of exemplary work; (b) identifying the factors, conditions, events, and actions that contribute to exemplary work; and (c) considering the impediments to improved practice. Influences such as effective evaluation designs, strong stakeholder relationships, and evaluator flexibility in dealing with changing contextual factors are shown as key in promoting exemplary practice. Implications for an improved theory of evaluation practice, as well as improvements in evaluation practice, are also considered. |
| Session Title: Extension Educators and Evaluation |
| Multipaper Session 612 to be held in CROCKETT A on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Extension Education Evaluation TIG |
| Chair(s): |
| Karen Ballard, University of Arkansas, kballard@uaex.edu |
| Session Title: Evaluating Effectiveness in Recruitment, Mentoring, and Human Resources' Functions |
| Multipaper Session 613 to be held in CROCKETT B on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Business and Industry TIG |
| Chair(s): |
| Ray Haynes, Indiana University, rkhaynes@indiana.edu |
| Session Title: Impact of Social and Economic Development Interventions: Presentation of Synthetic Reviews of Education, Early Childhood Development and Agricultural Extension Programs in Developing Countries |
| Multipaper Session 614 to be held in CROCKETT C on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Marie Gaarder, International Initiative for Impact Evaluation (3ie), mgaarder@3ieimpact.org |
| Session Title: Perspectives on Conducting Quality Evaluations at Various Levels of Government |
| Panel Session 615 to be held in CROCKETT D on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Government Evaluation TIG |
| Chair(s): |
| Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov |
| Discussant(s): |
| Kathryn Newcomer, George Washington University, newcomer@gwu.edu |
| Abstract: Focusing on the role of evaluation in government, this session will highlight differences among various levels of government (county, city, state, federal, and international) in how quality in evaluation is addressed. The three standards offered by House (1980) – truth (validity), beauty (credibility), and justice (fairness) – will be used as one framework to illustrate differences among the levels of government. The presentations will highlight not only the strategies that differ among the levels, but also the ways in which the three standards vary in importance and weight. Barriers to achieving the standards and the unique challenges at each level of government will be described. Recommendations for evaluators working at each level will be offered. |
| Session Title: Reaching for the Pot of Gold: Tested Techniques for Enhancing Evaluation Quality With Trustworthiness and Authenticity |
| Multipaper Session 616 to be held in SEGUIN B on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Qualitative Methods TIG |
| Chair(s): |
| Eric Barela, Partners in School Innovation, ebarela@partnersinschools.org |
| Discussant(s): |
| Eric Barela, Partners in School Innovation, ebarela@partnersinschools.org |
| Session Title: Building Capacity to Monitor and Evaluate Development Policies, Programs, and Projects: Everyone Wants to do It, but How Should It Be Done to Ensure High Quality? |
| Panel Session 617 to be held in REPUBLIC A on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Linda Morra Imas, World Bank, lmorra@worldbank.org |
| Abstract: Developing country governments around the world are planning and trying to implement results-based monitoring and evaluation (M&E) systems. The developed world has recognized the capacity gaps, and the Paris Declaration called for donors to increase technical cooperation and build capacity. The 2008 Accra Agenda for Action reinforced the need to improve partner countries’ capacities to monitor and evaluate the performance of public programs. However, despite some effort, a late 2008 report shows that capacities are still weak in many governments: M&E implementation lags planning, and quality is generally low. Many are now engaged in renewed efforts to build M&E capacity in the development context. But what works? What capacity-building efforts are needed to yield good quality M&E? This panel explores four types of M&E capacity-building efforts, experience with them based on actual cases, their advantages and disadvantages, and factors important for their transfer to behavior change and quality M&E. |
| Session Title: Examining Heart Disease and Stroke: Sharing Lessons to Improve Evaluation Quality |
| Multipaper Session 618 to be held in REPUBLIC B on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Health Evaluation TIG |
| Chair(s): |
| Cindy Wong, Brandeis University, cindyjwong@gmail.com |
| Session Title: Preliminary Results of Prevention Capacity Building in Science-based Teen Pregnancy, HIV, and STI (Sexually Transmitted Infections) Prevention |
| Multipaper Session 619 to be held in REPUBLIC C on Friday, Nov 12, 1:40 PM to 3:10 PM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Chair(s): |
| Duane House, Centers for Disease Control and Prevention, lhouse1@cdc.gov |
| Discussant(s): |
| Thomas Chapel, Centers for Disease Control and Prevention, tchapel@cdc.gov |