Evaluation 2009

Roundtable: Oil Dependence of the United States on Supplier Nations
Roundtable Presentation 744 to be held in the Boardroom on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Larissa Smirnova, Duquesne University, smirnova.lora@gmail.com
Abstract: The attached proposal is a policy recommendation drafted for the Department of Energy (DOE) Office of Science. Its aim is, first and foremost, the development of alternative and renewable energy sources and, second, by extension, the reduction of US dependence on oil, foreign or otherwise. To these ends, the recommendation calls for the establishment of a new federal energy research agency within the DOE, tasked specifically with the goals and objectives noted above. The recommendation, and each course of action (COA) examined, is grounded in empirical evidence through a multi-attribute analysis (MAA). As this past summer so vividly illustrated, when commodity futures speculation and supply shortages drove crude oil prices up some 150 percent, both business (airlines, automakers) and American consumers require a price-stable and environmentally sound energy source; speculation accounts for nearly 60 percent of the recent increase in oil prices, and dollar depreciation for almost 40 percent. The recommendation proceeds in two steps. First, it defines the problem of oil dependence. This problem definition rests on tangible evidence, drawn from analytical reviews and reports of the US Department of Energy and independent analysts, that documents US dependence on supplier nations in the Middle East. Second, it recommends the necessary external and internal adjustment policies for the US, with the feasibility and sustainability of each COA tested through the multi-attribute analysis. This type of analysis was chosen because of the practical context of the COAs: it yields precise estimates of costs and benefits, and the strategic-management character of the COAs is best judged against criteria of cost, politics, ease of implementation, and environmental impact. Following the logic of multi-attribute analysis, the selected criteria translate the intangible ideas and values of the policy decision process into tangible data, clarifying the mechanism of policy implementation and exposing the possible weaknesses and strengths of each COA. Each criterion carries a weight: the most important criteria receive larger multipliers, and the resulting table of weighted scores is intended to demonstrate the practical capability of the recommendation. Finally, each possible scenario is assessed against the weighted criteria, and the advantages and disadvantages of the implied policy activities are articulated precisely. The advantages rest on efficiency, affordable cost, and social fairness; the disadvantages are formulated in terms of the possible failures of and obstacles to implementation. The status quo of US oil dependence represents the scenario of the contemporary energy policy agenda, and allowance is made for external and internal factors that cannot be predicted today.
The validity and reliability of the multi-attribute analysis can be tested by examining previous implementations of similar policies in other states. Qualitative research of secondary sources serves as a supplementary technical tool of this policy recommendation.
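A minimal sketch, in Python, of the weighted multi-attribute scoring the abstract describes; the criteria weights, the two courses of action, and all scores below are hypothetical illustrations, not figures from the proposal:

```python
# Hypothetical weighted multi-attribute analysis (MAA) table.
CRITERIA_WEIGHTS = {                 # larger weight = criterion "multiplied" more
    "cost": 3,
    "politics": 2,
    "ease_of_implementation": 2,
    "environmental_impact": 3,
}

COA_SCORES = {                       # each course of action rated 1-5 per criterion
    "status_quo": {"cost": 4, "politics": 5,
                   "ease_of_implementation": 5, "environmental_impact": 1},
    "new_doe_research_agency": {"cost": 2, "politics": 3,
                                "ease_of_implementation": 2, "environmental_impact": 5},
}

def weighted_total(scores):
    """Sum each criterion score multiplied by its weight."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for coa, scores in COA_SCORES.items():
    print(f"{coa}: weighted total = {weighted_total(scores)}")
```

The weighting step is what the abstract calls "multiplying" the most valuable criteria; under these assumptions, the scenario with the highest weighted total is the preferred COA.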

Session Title: Meet the Editor: The American Journal of Evaluation
Expert Lecture Session 745 to be held in Panzacola Section F1 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the AEA Conference Committee
Presenter(s):
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
Robin Miller, Michigan State University, mill1493@msu.edu
Abstract: This is an opportunity to meet the outgoing and incoming editors of the American Journal of Evaluation (AJE) and discuss with them your views on the content and value of the journal to the AEA membership. The session will also include key strategies for getting published in AJE for those considering submitting a manuscript for peer review.

Session Title: Measuring Quality of Life of Adults With Developmental Disabilities as a Quality Improvement Mechanism
Multipaper Session 746 to be held in Panzacola Section F2 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Dale Howard, Howard Research & Management Consulting Inc, dale@howardresearch.com
Abstract: This session focuses on operationalizing the construct "quality of life" to measure the impact of services and supports provided to adults with developmental disabilities. It focuses first on quality of life as a key component of a "balanced scorecard" approach as a way to move forward in evaluating the services and supports provided. Second, the session focuses on the use of quality of life as part of a quality improvement process, including the importance of considering local context when moving forward in measurement. The key elements of measuring quality of life (e.g., instrument design, administration, and analysis) and the importance of positioning results in the local context in order to derive value for local agencies and systems are also discussed. Discussion throughout this session is situated within the context of implementation of a pilot project to measure quality of life of adults with developmental disabilities in a region within Alberta, Canada.
Quality of Life as a Performance Measure in Providing Supports to Adults With Developmental Disabilities
Dale Howard, Howard Research & Management Consulting Inc, dale@howardresearch.com
Sean McDermott, Government of Alberta, sean.mcdermott@gov.ab.ca
Large amounts of government funds are directed toward providing services and supports for adults with developmental disabilities. However, there is often limited measurement of the degree to which such services and supports are effective in achieving better outcomes for clients. The culture of the sector, as well as traditional models of service provision, has limited the extent to which measurement is both possible and considered important. However, as a key component of a "balanced scorecard", the construct "quality of life" provides a way to move forward in evaluating the impact of services and supports provided to clients. This paper discusses the implementation of a pilot to measure quality of life of adults with developmental disabilities in a region within Alberta, Canada, and the importance of situating results within the local context of the service delivery system in order to derive meaning for local agencies and systems.
Considerations and Processes in Measuring Quality of Life of Adults With Developmental Disabilities
Teresa Bladon, Howard Research & Management Consulting Inc, teresa@howardresearch.com
Jillian Carson, Government of Alberta, jillian.carson@gov.ab.ca
Measurement of quality of life provides a method to gauge the degree to which services and supports provided to adults with developmental disabilities improve individuals' lives along a number of dimensions related to personal well-being. Over the past decade this measure has been employed in a variety of international contexts. However, when "quality of life" is considered as a quality improvement measure, the context in which it will be employed must also be considered. It is not sufficient to simply take an instrument developed elsewhere and administer it in any local context. This paper discusses the key elements of measuring quality of life of adults with developmental disabilities and the importance of considering local context in instrument design, survey administration, and data analysis. Failure to consider this context can limit the validity of the tool, the effectiveness of the data collection methods employed, and the usefulness of the results obtained.

Session Title: Using Multivariate Regression for Program Evaluation and Marketing
Expert Lecture Session 747 to be held in Panzacola Section F3 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Health Evaluation TIG
Presenter(s):
Julia Joh Elligers, National Association of County and City Health Officials, jjoh@naccho.org
Abstract: This session will explain how survey data informs program evaluation and marketing activities at the National Association of County and City Health Officials (NACCHO). Julia Joh Elligers, NACCHO Senior Analyst and doctoral student at the University of Maryland College Park, will describe how she has used multivariate regression to analyze NACCHO Profile survey data. The session will explain how statistical modeling has helped evaluate the effectiveness of NACCHO's Mobilizing for Action through Planning and Partnerships (MAPP) and National Public Health Performance Standards (NPHPS) projects over time. The presentation will also describe how multivariate regression has been used to identify factors that predict MAPP and NPHPS use and how that information has informed marketing activities. Strengths and limitations of this approach to program evaluation will be discussed. Suggested audiences include representatives from associations, non-profit organizations, and individuals conducting evaluations that focus on institutional characteristics that predict product or service use.
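As a rough sketch of the kind of analysis described, the following fits a logistic regression predicting program use from institutional characteristics. The variable names and simulated data are invented for illustration; they are not the actual NACCHO Profile variables or models:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "uses_mapp": rng.integers(0, 2, n),            # outcome: uses the program (0/1)
    "population_served": rng.lognormal(11, 1, n),  # jurisdiction size
    "has_local_board": rng.integers(0, 2, n),      # governance characteristic
    "ftes": rng.lognormal(3, 1, n),                # staffing level
})

# Multivariate model: which institutional characteristics predict use?
model = smf.logit(
    "uses_mapp ~ np.log(population_served) + has_local_board + np.log(ftes)",
    data=df,
).fit()
print(np.exp(model.params))   # odds ratios for each predictor
```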

Session Title: Finding Promising Programs and Practices: The Systematic Screening and Review Method
Panel Session 748 to be held in Panzacola Section F4 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Discussant(s):
Wendy Yallowitz, Robert Wood Johnson Foundation, wyallow@rwjf.org
Abstract: The Systematic Screening and Review Method (New Directions for Evaluation, in press) aims to systematize the search for effective innovations so as to reduce uncertainty about those that are worth evaluating. The method involves nomination, expert panel review, and evaluability assessment. The method has now been used in several initiatives that are presented in this panel. The Centers for Disease Control and Prevention has identified 48 childhood obesity prevention approaches that are worth evaluating. The Robert Wood Johnson Foundation is using this method to identify innovations that are worth evaluating in two areas: nursing education and intimate partner violence with immigrant populations.
The Systematic Screening and Review Method to Identify Programs and Policies on Childhood Obesity Prevention
Nicola Dawkins, ICF Macro, nicola.u.dawkins@orcmacro.com
Seraphine Pitt Barnes, Centers for Disease Control and Prevention, spittbarnes@cdc.gov
Holly Wethington, Centers for Disease Control and Prevention, hwethington@cdc.gov
Diane Dunet, Centers for Disease Control and Prevention, ddunet@cdc.gov
David Cotton, ICF Macro, david.a.cotton@macrointernational.com
Leah Robin, Centers for Disease Control and Prevention, ler7@cdc.gov
Jo Anne Grunbaum, Centers for Disease Control and Prevention, jgrunbaum@cdc.gov
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Laura Kettel Khan, Centers for Disease Control and Prevention, ldk7@cdc.gov
This project is a collaboration of the Robert Wood Johnson Foundation, the Centers for Disease Control and Prevention (CDC), the CDC Foundation, and Macro International Inc. to identify promising programs and policies to prevent childhood obesity. In the past two years this process received over 450 nominations and identified 48 policies and programs ready for more rigorous evaluation. The process begins with a national scan of programs and policies. An expert panel reviews program and policy documentation and selects those that warrant further investigation. The selection process is guided by assessment of plausibility, feasibility, innovativeness, and potential for impact. Each selected program or policy undergoes evaluability assessment to determine whether a rigorous evaluation is feasible and merited. The project team and the expert panel review the findings of the evaluability assessments to determine the degree of promise and readiness for rigorous evaluation and synthesize the results and recommendations.
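A sketch, under stated assumptions, of the expert-panel screening step: ratings on the four criteria named above are averaged to decide which nominations advance to evaluability assessment. The nominations, ratings, and cutoff are hypothetical:

```python
CRITERIA = ("plausibility", "feasibility", "innovativeness", "potential_impact")
THRESHOLD = 3.5     # hypothetical cutoff for advancing to evaluability assessment

panel_ratings = {   # nomination -> per-criterion ratings, one per panelist (1-5)
    "school_salad_bars":  {"plausibility": [4, 5, 4], "feasibility": [4, 4, 5],
                           "innovativeness": [3, 4, 3], "potential_impact": [4, 5, 4]},
    "soda_tax_ordinance": {"plausibility": [5, 4, 4], "feasibility": [2, 3, 2],
                           "innovativeness": [4, 4, 5], "potential_impact": [5, 4, 5]},
}

def mean(xs):
    return sum(xs) / len(xs)

for nomination, ratings in panel_ratings.items():
    overall = mean([mean(ratings[c]) for c in CRITERIA])
    verdict = "advance to evaluability assessment" if overall >= THRESHOLD else "hold"
    print(f"{nomination}: mean rating {overall:.2f} -> {verdict}")
```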
The Systematic Screening and Review Method Applied to Intimate Partner Violence and Nursing Education
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Nathaniel Tashima, LTG Associates Inc, partners@ltgassociates.com
Mariana Sachse, Robert Wood Johnson Foundation, 
Nancy Fishman, Robert Wood Johnson Foundation, nfishma@rwjf.org
Michael Yedidia, Rutgers University, myedidia@ifh.rutgers.edu
This presentation will describe how the Systematic Screening and Review Method has been adapted for use in two Robert Wood Johnson Foundation initiatives, innovations in addressing Intimate Partner Violence in immigrant populations, and innovations in nursing education. For the intimate partner violence study, ethnography was a useful addition to the overall method. In the case of nursing education, evaluability assessment was conducted through a distributed network with email and telephone guidance from a central coordinating center.

Session Title: The Office of Management and Budget's New Policy on Increased Emphasis on Program Evaluation: An Open Discussion
Think Tank Session 751 to be held in Panzacola Section H1 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Presidential Strand
Chair(s):
Debra Rog, Westat, debrarog@westat.com
Presenter(s):
George F Grob, Center for Public Program Evaluation, georgefgrob@cs.com
Discussant(s):
Patrick Grasso, World Bank, pgrasso@worldbank.org
Abstract: On October 7, 2009, OMB Director Peter Orszag issued a memorandum to the Heads of Federal Departments and Agencies on Increased Emphasis on Program Evaluation. (http://www.whitehouse.gov/omb/assets/memoranda_2010/m10-01.pdf) It focuses on impact evaluation, with initial application to social, educational, economic, and similar programs whose expenditures are aimed at improving life outcomes for individuals. It promotes rigorous, independent evaluations to be used as a key resource in determining whether government programs are achieving their objectives at the lowest possible cost. Because of the importance of this new policy, this session will provide an opportunity for open discussion among all interested AEA members.

Session Title: Psychometric Analysis of Student Performance Comparisons in Evaluation Using Item Response Theory: An Illustration With National Longitudinal Study Data
Demonstration Session 752 to be held in Panzacola Section H2 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Raymond Hart, Georgia State University, rhart@gsu.edu
Abstract: Measuring change in variables is common in social science research. Experimental and quasi-experimental studies often employ pretest-posttest or longitudinal time-series designs to compare effects on dependent variables. A number of statistical methods are available for measuring change, including: 1) Analysis of Variance (ANOVA) on gain scores, 2) Analysis of Covariance (ANCOVA), 3) ANOVA on residual scores, and 4) repeated measures ANOVA. These methods typically depend on raw score differences and statistical comparisons of sample-based performance on standardized tests, and such traditional sample comparisons capitalize on measurement error and sampling error. The purpose of this demonstration is to illustrate the practical application of a method that uses item parameter estimates from Item Response Theory (IRT) assessments to measure student or group progress longitudinally without the need for comparison groups. Evaluators can use this research to compare student performance and growth across time and across subgroups of students to population performance.
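To make the mechanism concrete, here is a minimal sketch assuming a two-parameter logistic (2PL) IRT model with item parameters already calibrated to a common scale. Because ability is estimated on that scale rather than from raw scores, growth can be read off directly without a comparison group. All parameters and responses are invented:

```python
import numpy as np

# Item discriminations (a) and difficulties (b), assumed pre-calibrated.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])

def p_correct(theta, a, b):
    """2PL item response function: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_theta(responses, a, b):
    """Maximum-likelihood ability estimate by grid search."""
    grid = np.linspace(-4, 4, 801)
    p = p_correct(grid[:, None], a, b)                       # (grid, items)
    loglik = (responses * np.log(p)
              + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

fall   = np.array([1, 1, 0, 0, 0])   # same student, two occasions
spring = np.array([1, 1, 1, 1, 0])
growth = estimate_theta(spring, a, b) - estimate_theta(fall, a, b)
print(f"growth on the theta scale: {growth:.2f}")
```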

Session Title: Evaluating Educational Innovation in the Midst of Outcomes Measurement
Multipaper Session 754 to be held in Panzacola Section H4 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Leanne Kallemeyn, Loyola University Chicago, lkallemeyn@luc.edu
Abstract: Like government programs, foundations are emphasizing outcomes measurement. With the implementation of state-level assessment systems and the importance of assessment and evaluation in No Child Left Behind (NCLB), educational evaluation and assessment emphasize monitoring, validating, verifying, and tracking. The expectation is improvement, with minimal tolerance of failure. Such approaches to evaluation seem to leave limited room for risk-taking. At the same time, compared to government, foundations have greater autonomy for innovation, experimentation, and creativity. Drawing from the experiences of two external evaluation projects of innovative educational programs funded by small family foundations and implemented in urban settings, the purpose of this session is to consider the lessons learned in negotiating an emphasis on evaluating and supporting innovation in complex programs while also addressing interests in outcomes measurement.
An Evaluation of a Professional Learning Community Among Elementary and High Schools in a Large, Urban School District
Leanne Kallemeyn, Loyola University Chicago, lkallemeyn@luc.edu
Peter Mich, McDougal Family Foundation, pmich.mff@ameritech.net
Donna Ogle, National-Louis University, dogle@nl.edu
Katherine McKnight, National-Louis University, katherine.mcknight@nl.edu
McDougal Family Foundation (MFF), a small family foundation under the leadership of Peter Mich, in collaboration with Donna Ogle and colleagues at National-Louis University (NLU), endeavored to implement a pilot program, the Transitional Adolescent Literacy Leadership (TALL) project, which involved five Chicago Public Schools (CPS). The goals of the three-year program were to develop a learning community among elementary schools and a high school serving predominantly Latino/a students, in order to support students during the transition to high school through academic literacy and through students' own social and cultural context. The evaluation involved a three-year case study that utilized program theory and multiple methods. Using evaluation reports, interviews with key stakeholder groups (school principals, the foundation, university facilitators, and program implementers), and the evaluator's self-reflections, I will discuss the successes and challenges of evaluating the TALL program within the context of relevant CPS policies.
Assessing Preparation and Perseverance: An Evaluation of an Urban Charter School Alumni's Post-Secondary Experiences
Ruanda Garth McCullough, Loyola University Chicago, rmccul1@luc.edu
Asma Ali, University of Illinois at Chicago, asmamali@yahoo.com
Raquel Farmer-Hinton, University of Wisconsin Milwaukee, rfarhin@uwm.edu
Rolanda West, Loyola University, rwest@luc.edu
Another family foundation is sponsoring an external evaluation of an urban charter high school. The purpose of the evaluation is to document and analyze the factors that contribute to the academic persistence and resilience of its graduates. The majority of the students perform an average of 3-4 years below grade level in reading and mathematics. Evaluating the school's efforts to prepare its struggling students for college in this high-stakes testing era requires incorporating methods and instruments that acknowledge the risks and creativity involved in this uncharted educational territory. This multi-method, longitudinal evaluation seeks to determine which factors supported under-served African American students' perseverance in their postsecondary endeavors. Analysis of survey and focus group data from alumni representing six graduating classes, counselors, teachers, and administrators reveals the complexities of applying a traditional model of college "success" to evaluate this innovative endeavor.

Session Title: Context Matters: Conducting Evaluations in Federal, State, and Local Government Funding Source Environments
Multipaper Session 755 to be held in Sebastian Section I1 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Katrina Bledsoe, Walter R McDonald and Associates Inc, kbledsoe@wrma.com
Discussant(s):
Kimberly Wells, United States Office of Personnel Management, kimberly.wells@opm.gov
Abstract: The variance in the contexts of evaluation projects demands the ability to understand the environment in which the evaluation is performed. For instance, projects performed under federal funding differ greatly in context from those funded by state and local governments. Each has a general culture that must be considered, including the goals of the evaluation, political agendas, key players, contracts, and methodological barriers such as participant recruitment. This presentation highlights the evaluation contexts of two common types of projects: one at the federal government level and one at the state government level. Anticipated audience discussion will focus on navigating these contexts to address the needs of clients and of the evaluation.
Evaluation in State and Local Government Contexts: Considerations, Challenges and Successes
Cindy Crusto, Yale University, cindy.crusto@yale.edu
Meghan Finley, The Consultation Center, 
This paper presents considerations, challenges, and successes of evaluation in a state/local government agency. Information is based on the evaluation of a large children's mental health cooperative agreement between the funder (SAMHSA) and a state partner. The project integrates three state systems to meet the needs of children with social, emotional, and/or behavioral health care challenges and their families. Considerations of context and evaluation include the utility of a high-level champion in the project/organization who believes in accountability and continuous quality improvement (CQI) processes, the need to develop relationships at multiple levels of state agencies and across multiple state agencies, and the need to maintain the integrity of the evaluation through multiple state-level transitions (e.g., governance bodies, economic difficulties). Challenges experienced include a history of poor coordination and data sharing among agencies and significant system-level changes. Successes include the identification of key staff in each collaborating state agency who are committed to the evaluation/accountability process.
Evaluation in Federal Government Contexts
Carolyn Lichtenstein, Walter R McDonald and Associates Inc, clichtenstein@wrma.com
The presentation focuses on the facilitators and challenges for conducting evaluation work within the context of federal government contracting. The presenter discusses issues such as accountability, logistics of working within Federal contract guidelines, scope of the evaluation and evaluation data collection, data use, and the numerous dissemination requirements, both technical and scholarly. Facilitators of conducting Federal evaluation work include the emphasis on data-driven performance monitoring, having one major stakeholder (generally), and relatively little Federal staff turnover. In contrast, there are many challenges that must be overcome or incorporated into the evaluation; for example, specific performance monitoring needs sometimes restrict the evaluation design and type of data requested by the client, and approval must be obtained for the burden any proposed data collection efforts place on the public. Examples will be provided from several Federal-level evaluations conducted by the author.

Session Title: Complexity and Clustering
Multipaper Session 756 to be held in Sebastian Section I2 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Rene Lavinghouze, Centers for Disease Control and Prevention, rlavinghouze@cdc.gov
Exploratory Cluster Evaluation of Variability and Commonality of the Implementation and Impact of Ohio Mathematics and Science Partnership Projects
Presenter(s):
Lucy Seabrook, Seabrook Evaluation + Consulting LLC, lucy@seabrookevaluation.com
Hsin-Ling Hung, University of Cincinnati, hunghg@ucmail.uc.edu
Sarah Woodruff, Miami University of Ohio, woodrusb@muohio.edu
Debbie Zorn, University of Cincinnati, zorndl@ucmail.uc.edu
Mary Marx, University of Cincinnati, mary.marx@uc.edu
Abstract: USDOE Mathematics and Science Partnership funds support 15 three-year partnerships between Ohio high-need schools/districts and faculty in institutions of higher education. Although projects share the common goals of increasing teacher content knowledge, improving teaching practices, and improving student performance, considerable variability in program design and delivery exists across projects. To identify characteristics of effective professional development for teachers of mathematics and science, an exploratory cluster evaluation approach was used to assign these projects to different modalities. Analyses were based on data obtained from a self-reported program characteristics survey developed by the evaluation team to assess variability across projects regarding (a) partnerships, (b) strategies to target participants, (c) school/district leadership involvement, (d) curriculum content, (e) delivery of professional development, and (f) local evaluation activities and findings. This presentation will describe the cluster approach and implications of findings. Applications of this approach to evaluations of similar programs will also be discussed.
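A hedged sketch of the exploratory clustering step, assuming project-level scores on the six surveyed dimensions are standardized and grouped with k-means; the data and the choice of k = 3 are illustrative, not the study's actual analysis:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

dimensions = ["partnerships", "targeting", "leadership_involvement",
              "curriculum_content", "pd_delivery", "local_evaluation"]

rng = np.random.default_rng(1)
X = rng.normal(3.0, 1.0, size=(15, len(dimensions)))   # 15 projects x 6 dimensions

X_std = StandardScaler().fit_transform(X)              # put dimensions on one scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
for project, modality in enumerate(labels, start=1):
    print(f"project {project:2d} -> modality {modality}")
```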
Complex Adaptive Systems: Evaluation as Dynamic Human and Information Systems in a Formative, Collaborative, Statewide, Multi-site Context
Presenter(s):
A Rae Clementz, University of Illinois at Urbana Champaign, clementz@illinois.edu
Abstract: This presentation uses the lens of complexity theory and complex adaptive systems to analyze and critique a recent evaluation of a statewide, multi-site, induction and mentoring pilot program. Each site presented unique programmatic and contextual challenges and features, while the larger political and collaborative organizational contexts requested data that would lead to convergence and cross-site comparisons. This session will offer our understanding of the ways in which the individual actors and dynamic, often overlapping groups, including the evaluation team, organized and operated in order to achieve both singular and collective goals in a complex, adaptive environment. Similarly, we will look at the ways in which the information systems created by and for these different individuals, groups, and goals affected the human and group dynamics, as well as the progress of the evaluation.

Session Title: Management and Analysis of National Multisite Program Evaluation Data: Center for Substance Abuse Prevention's Data Analysis Coordination and Consolidation Center
Multipaper Session 757 to be held in Sebastian Section I3 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Beverlie Fallik, United States Department of Health and Human Services, beverlie.fallik@samhsa.hhs.gov
Discussant(s):
Beverlie Fallik, United States Department of Health and Human Services, beverlie.fallik@samhsa.hhs.gov
Abstract: Multisite and cross-site program evaluation poses specific data management challenges. This session will describe the data assessment, validation, cleaning, management, and analysis processes at the Center for Substance Abuse Prevention's Data Analysis Coordination and Consolidation Center (DACCC). After a brief introduction to the DACCC and its activities, two papers will be presented to demonstrate the two major functions of the Center: data management and data analysis. The first paper will be presented by the DACCC's Data Management Team Lead and will demonstrate the Center's data quality assurance procedures, covering topics such as procedures for quality assessment, standard data cleaning rules, and statistics on frequently encountered threats to data quality. The second paper will be delivered by the DACCC's Data Analysis Team Lead and will demonstrate how the cleaned data are analyzed by presenting the results of an analysis of program outcomes within the context of site-specific factors.
Data Quality Assessment and Data Management Practices: An Example From the Center for Substance Abuse Prevention's Program Evaluation Data
P Allison Minugh, Datacorp, aminugh@mjdatacorp.com
Nicolletta A Lomuto, Datacorp, nlomuto@mjdatacorp.com
Susan L Janke, Datacorp, sjanke@mjdatacorp.com
Meeting performance goals in the context of "real world" program evaluation is a critical evaluation task. In order to demonstrate whether programs are effective, data must be trustworthy. This presentation focuses on common data quality issues involved in evaluating the Center for Substance Abuse Prevention's grant programs. Focusing on data quality assessment procedures that are used by CSAP's Data Analysis Coordination and Consolidation Center (DACCC), the presentation describes common data quality threats, the DACCC's procedures for evaluating and compiling information on data quality, the feedback loop established between DACCC and CSAP's grantees to improve data integrity, and the Center's data cleaning and management processes designed to respond to these assessments and to dialog with grantees. Data quality statistics will be presented for a variety of CSAP's prevention programs and the impact of data quality on key outcome and mediating variables will be discussed.
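A minimal sketch of the kind of standard cleaning rules and quality statistics described, assuming hypothetical field names and valid ranges (the DACCC's actual rules are not reproduced here):

```python
import pandas as pd

VALID_RANGES = {"age": (12, 25), "days_used_30": (0, 30)}   # hypothetical rules

df = pd.DataFrame({
    "age": [15, 17, None, 142, 19],          # 142 is an impossible value
    "days_used_30": [0, 31, 3, None, 7],     # 31 exceeds the 30-day window
})

report = {}
for col, (lo, hi) in VALID_RANGES.items():
    missing = df[col].isna().mean()
    out_of_range = (~df[col].between(lo, hi) & df[col].notna()).mean()
    report[col] = {"pct_missing": round(100 * missing, 1),
                   "pct_out_of_range": round(100 * out_of_range, 1)}
    # cleaning rule: set impossible values to missing rather than guessing
    df.loc[~df[col].between(lo, hi), col] = None

print(report)   # the statistics fed back to grantees
```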
The Impact of Program Dosage and Intervention Strategy on Program Outcomes: An Analysis of Data Submitted to the Center for Substance Abuse Prevention
Nilufer Isvan, Human Services Research Institute, nisvan@hsri.org
Lavonia Smith LeBeau, Human Services Research Institute, llebeau@hsri.org
This presentation demonstrates the data analysis activities of the Center for Substance Abuse Prevention's Data Analysis Coordination and Consolidation Center, assessing individual-level baseline and exit data within the context of specific program characteristics. The focus will be on the role of program dosage, service delivery format, and intervention type in evaluating program outcomes for individual participants. Preliminary findings based on approximately 8,000 program participants from multiple sites suggest that program dosage affects outcomes only when considered in the context of service type, delivery format (group vs. individual), and the specific combination of intervention strategies implemented at the grantee site. Additional results from a detailed multivariate analysis will be presented, further investigating the interaction of these site-specific contextual factors with participant characteristics in predicting participant- and site-level program outcomes.
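A sketch of how such a dosage-by-format interaction might be specified, using a statsmodels formula on simulated data; the variable names and the simulated effect are illustrative only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({
    "dosage_hours": rng.uniform(1, 40, n),
    "fmt": rng.choice(["group", "individual"], n),   # delivery format
})
# Simulate an outcome where dosage helps only in the individual format.
df["outcome"] = (0.05 * df["dosage_hours"] * (df["fmt"] == "individual")
                 + rng.normal(0, 1, n))

# The dosage_hours:fmt term tests whether the dosage effect depends on format.
model = smf.ols("outcome ~ dosage_hours * fmt", data=df).fit()
print(model.summary())
```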

Session Title: Metaevaluation and the Program Evaluation Standards
Panel Session 758 to be held in Sebastian Section I4 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Chris L S Coryn, Western Michigan University, chris.coryn@wmich.edu
Discussant(s):
Leslie J Cooksy, University of Delaware, ljcooksy@udel.edu
Abstract: This panel session presents the results of two projects focused on the use of The Program Evaluation Standards (Joint Committee, 1994) for metaevaluation. A systematic content analysis of the standards text revealed numerous overlap and dependency relationships between standards, with implications for how the standards could be differentially weighted in a condensed and efficient instrument to facilitate metaevaluation. Results from an interrater reliability study of thirty individuals with a wide range of evaluation expertise, who used the standards to assess ten evaluations, shed light on the consistency with which the Standards are used to reach judgments concerning evaluation quality. Both studies provide insights for the use and further development of the Standards and suggest ways in which their use can be most effectively supported and advocated.
The Program Evaluation Standards Applied for Metaevaluation Purposes: Investigating Interrater Consistency and Implications for Practice
Lori Wingate, Western Michigan University, lori.wingate@wmich.edu
Professional evaluation rests on the premise that its procedures and results are systematic and objective. The Program Evaluation Standards (Joint Committee, 1994) have been a major contribution toward making evaluation practice more systematic. However, two important underlying, untested assumptions are embodied within the standards: (1) adherence to the standards will produce higher-quality evaluations, which reflects the standards' "guiding" function; and (2) different individuals using the standards as criteria of merit would reach comparable judgments about the quality of a given evaluation, which reflects the standards' "assessing" function. Results of research undertaken to investigate the legitimacy of the latter assumption are presented. The purpose of the study was to assess interrater reliability, as measured by coefficients of agreement, among a group of thirty evaluators who were charged with the task of assessing the quality of ten evaluations using the Program Evaluation Standards as the criteria.
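One standard coefficient of agreement for this design (many raters, a common set of subjects) is Fleiss' kappa. The sketch below computes it on random stand-in ratings, not the study's data, so the resulting kappa hovers near zero:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(3)
# rows = 10 evaluations (subjects), columns = 30 raters,
# cell values = rating category (e.g., 0 = poor ... 3 = excellent)
ratings = rng.integers(0, 4, size=(10, 30))

table, _categories = aggregate_raters(ratings)   # subjects x categories counts
print(fleiss_kappa(table, method="fleiss"))
```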
Documenting Dependency Relationships Between the Standards to Facilitate Metaevaluation
Carl Westine, Western Michigan University, carl.d.westine@wmich.edu
The thirty standards set forth by the Joint Committee on Standards for Educational Evaluation in The Program Evaluation Standards (PES) (Joint Committee, 1994) form the basis for a checklist to be used for metaevaluation (Stufflebeam, 1999). However, identifying overlap between the standards should simplify the metaevaluation process. Through a systematic content analysis, we learn what the PES reveals about the overlapping nature of the standards. Most standards explicitly reference up to ten other standards in the textual overview, guidelines, and common errors sections of the PES. Further references between standards that are not explicitly stated are also documented. Incongruence in references between standards implies that a dependency relationship exists. Moreover, the PES functional table of contents outlines further dependency relationships between the standards. Documenting well-defined dependency relationships has implications for how the standards could be differentially weighted in a condensed and efficient instrument to facilitate metaevaluation.
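An illustrative sketch of turning documented cross-references into differential weights: a standard that other standards reference more often carries more weight in a condensed checklist. The reference lists below are hypothetical, not the actual PES content analysis:

```python
from collections import Counter

references = {   # standard -> standards its text refers to (hypothetical)
    "U1": ["U2", "A1"],
    "U2": ["U1"],
    "A1": ["U1", "U2", "A2"],
    "A2": ["A1"],
}

in_degree = Counter(ref for refs in references.values() for ref in refs)
total = sum(in_degree.values())
weights = {std: in_degree[std] / total for std in references}
print(weights)   # U1, U2, and A1 each outweigh A2 here
```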

Session Title: Integrating Monitoring and Evaluation (M&E) and Learning Into an International Organization
Panel Session 760 to be held in Sebastian Section L1 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Colleen Duggan, International Development Research Centre, cduggan@idrc.ca
Abstract: This panel will discuss the strategy, steps, lessons, and challenges involved in building M&E capacity in the International Centre for Transitional Justice, a young and rapidly growing international non-profit organization. As the organization's first effort to grow its M&E capacity, the work involved building a culture of evaluation and injecting results-based management principles into the organization, alongside the more common technical elements of M&E. The discussion will focus particularly on the opportunities and challenges posed by the unique 'context' of an international organization, such as: how does one create an agency-wide organizational culture and a learning organization with offices in multiple countries staffed by multiple nationalities? How do power dynamics (e.g., north-south) factor into establishing quality M&E systems in an agency? Three different perspectives will be given on this effort: from inside the ICTJ, from the consultant, and from a donor providing an overarching commentary.
The View From Inside: Leading an Agency's First Initiative to Integrate Monitoring and Evaluation (M&E)
Paige Arthur, International Center for Transitional Justice, parthur@ictj.org
This paper will briefly review the history of the International Centre for Transitional Justice, the type of work associated with transitional justice, and the contextual characteristics in which this work generally takes place. It will then outline the successes and failures of the organization's initial efforts to integrate M&E, which took place prior to hiring the M&E consultant. After setting the stage, the paper takes the perspective of the staff person responsible for leading the M&E effort. It will look at the steps taken to shift the organizational culture towards a culture of evaluation and results-based management, two tasks which were seen as central to successfully integrating the technical M&E work into the programmatic work of the Centre. It will also touch on the internal dynamics of having the initiative led by a headquarters-based Research Unit, resource issues, and working with a consultant.
The Monitoring and Evaluation (M&E) Consultant Perspective on Integrating M&E Into a Previously M&E-Free Organization
Cheyanne Scharbatke-Church, The Fletcher School of Law and Diplomacy, cheyanne.church@tufts.edu
This paper will take the perspective of an external consultant brought in on a short-term contract to assist the International Centre for Transitional Justice in its first efforts to build M&E capacity. It will have three sections. First, the successes and challenges in building technical capacity and developing relevant policies will be explored and the associated lessons learned articulated. Second, beyond the technical, the paper will examine the power dynamics of being a consultant attempting to create internal changes within an organization, and the effect of those dynamics on the overarching process. Finally, the paper will discuss the challenges of the context and their effect on the strategy, M&E tools, and learning initiatives.

Session Title: Synergistic Technologies: Building Supports for Both Evaluation and Project Management in Education
Demonstration Session 761 to be held in Sebastian Section L2 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Carlos Romero, Apex Education, romero@apexeducation.org
Michelle Bloodworth, Apex Education, michelle@apexeducation.org
Abstract: If evaluation technologies are designed and built to simultaneously support project management activities, the two endeavors can be synergistic, improving both the ease with which evaluation data are collected and their subsequent quality, while at the same time supporting project management. The presenters will describe how to plan for and develop such synergistic technologies. They will also demonstrate a web-based data management system developed by their evaluation firm, which serves as a technology platform offering a variety of tools customized for different evaluation projects and clients. The technologies to be discussed and demonstrated include: a coaching/professional development log, a customizable website builder, a blog, business plan templates, online implementation checklists, an electronic platform for developing and revising school improvement plans, an assessment and survey tool, a platform for state standards and an associated lesson bank, a curriculum platform, event and training registration, an internal e-mail and notification system, and customizable, automated reports with informative charts.

Session Title: Building Capacity and Models Through an Examination of Contextual Influences on Workforce Educational Outcomes
Panel Session 763 to be held in Sebastian Section L4 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Julianne Manchester, Case Western Reserve University, julianne.manchester@case.edu
Abstract: The panelists will examine the usefulness of incorporating contextual factors into capacity- and model-building. Discussion examples come primarily from the health professions in a workforce education perspective; however, the experience should also interest interdisciplinary programmers and evaluators. The session begins, in the first presentation, by examining the relational, organizational, educational, and evaluation factors that influence trainee outcomes. This knowledge benefits evaluators in strengthening their clients' evaluation capacities through the identification of barriers to programming and evaluation (ill-planned stakeholder recruitment, weak access to sites for data collection). Understanding these facets allows for the creation of a model that tests the influence of these contextual factors on programmatic pathways. A workforce educational model with multiple pathways interacting with the four context domains will be proposed.
Relying Upon Contextual Factors to Build Capacity in Workforce Education
Rob Fischer, Case Western Reserve University, fischer@case.edu
The first presentation will examine contextual factors that influence workforce education programming, using examples from the health professions. Understanding the relational, organizational, educational, and evaluation factors that influence trainee outcomes allows evaluators to strengthen plans (methods, instrumentation) and client capacities through the identification of related barriers. When such factors are known, evaluators are better able to build capacity with programmers and practitioners, because barriers to measurement or stakeholder recruitment can be countered. Examples of how to integrate utilization-focused evaluation into capacity building with health professions programmers, mindful of these facets, will be presented. Examples of how knowing the contextual factors of the organizations implementing programs can drive consultation content from evaluator to practitioner will also be shared.
Using Contextual Factors to Illustrate Advantageous Program Pathways in Model-Building
Julianne Manchester, Case Western Reserve University, julianne.manchester@case.edu
Building on this understanding of contextual facets, and using health professions examples (diabetes management, asthma), a workforce education model is proposed for understanding salient training pathways and examining the statistical effects of the contextual factors discussed previously. Which paths are strengthened by the degree or presence of specific relational or organizational factors? A constructivist model for exploring these relationships will be presented, along with hypothetical path diagrams as components of model-building. Building a model through field-based exploration will be discussed. With an exploratory model in hand, implications for data collection, standardized instrumentation, and pooled analysis across sites will be presented. Shaping these processes depends on the contextual factors (relational, organizational, etc.) in which they occur. Implications for workforce education, where it is essential to understand the conditions under which trainees are most likely to use acquired skills, will be discussed.

Session Title: Evaluating Human Resources for Health Systems Strengthening: Experiences From the United States Agency for International Development's (USAID) Capacity Project
Multipaper Session 764 to be held in Suwannee 11 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Laura Gibney, IntraHealth International, lgibney@intrahealth.org
Abstract: Increasing access to basic health care, and responding to critical needs like HIV/AIDS, malaria, family planning, and maternal health, rely on mobilizing health care leaders and workers where they are most needed. Yet shortages and poor distribution of health workers pose serious problems in many developing countries. The Capacity Project is a global USAID-funded initiative to help countries build and sustain the health workforce in low-resource settings. The Project is assisting 16 countries to strengthen human resources for health (HRH). Faced with the challenging objective of developing technical assistance in a relatively new area, and doing so at a health systems level, the Project has developed evaluation strategies that respond to the important influence of context that characterizes human resource management. We will share our lessons learned with regard to the meaning of context within human resources for health, as well as recommendations for future HRH evaluation.
The Role of Context in Evaluating Human Resources for Health Systems Strengthening
Daniel de Vries, IntraHealth International, ddevries@intrahealth.org
Linda Fogarty, Jhpiego, lfogarty@intrahealth.org
Erik Reavely, Independent Consultant, ereavely@nc.rr.com
Elizabeth Bunch, IntraHealth International, ebunch@intrahealth.org
The aim of HRH systems strengthening is to build capacity among HR leaders and practitioners to develop and implement strategies to achieve an effective and sustainable health workforce. Lessons learned from the Capacity Project suggest a central role of context in evaluating HRH intervention outcomes. First, because the technical work aims to change national-level HRH systems, measuring evidence of this change is challenging because it takes time, only indirectly affects health outcomes and depends heavily on a country's historical and cultural context and starting point. Second, the nature of HRH capacity-building requires a participatory approach wherein unforeseen directions of the intervention are reflected in evaluating success. Third, balancing global evaluation needs with local priorities proved challenging, particularly in the context of insufficient technical evaluation skills in field countries. This paper will introduce the HRH context and review how an engaged evaluation practice has dealt with these challenges.
Indicators for Evaluating Human Resources for Health Capacity-Building
Linda Fogarty, Jhpiego, lfogarty@intrahealth.org
Daniel de Vries, IntraHealth International, ddevries@intrahealth.org
Erik Reavely, Independent Consultant, ereavely@nc.rr.com
Elizabeth Bunch, IntraHealth International, ebunch@intrahealth.org
Evaluating the effects of health capacity-building interventions has been called more of an art than a science. The Capacity Project, working to build the capacity of human resources for health (HRH) systems in low-resource settings, developed and tested indicators and approaches to monitor and evaluate interventions to plan, develop and support the health workforce. Indicators measure qualitative change in national and sub-national HRH systems and are flexible enough to account for the evolving country context, corresponding technical needs and responding interventions. However, they lack rigor and standardization. Interviews from 30 HRH technical and program experts were analyzed to inform evaluation indicator refinement to better capture country context factors and strengthen HRH evaluation approaches. Results advocate for an even broader set of HRH-related indicators, requiring the same flexibility, but with well-defined milestones, reflecting country-specific needs and changes. Recommendations for strengthened HRH evaluation methods and measures will be discussed.

Session Title: Evaluating the Effectiveness of Using Online Social Networking Tools to Deliver Family Life Education Programs
Demonstration Session 765 to be held in Suwannee 12 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Shelly Mahon, University of Wisconsin Madison, mdmahon@wisc.edu
Abstract: The primary purpose of this session is to investigate unique considerations in evaluating family life education programs delivered through online social networking tools. The use of social networking as a medium for communication and networking continues to increase. However, little is known about how effective these tools can be in promoting learning and facilitating behavior change. Following a brief review of the current literature, the presenter will introduce and discuss a variety of considerations for evaluators. How is online learning expected to occur within a social networking tool? What should evaluators look for? What are the best strategies for collecting data? What other unique aspects should we, as evaluators, consider when evaluating online program delivery? What challenges do evaluators face when using social networking tools to deliver educational programs? Examples will be discussed using a program for nonresidential fathers, delivered through a private social networking tool, etendi BRIDGE.

Session Title: Recognizing Subtexts Within the Context of Evaluation
Think Tank Session 766 to be held in Suwannee 13 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Ann Davis, Northwest Regional Educational Laboratory, davisa@nwrel.org
Discussant(s):
Phyllis Campbell Ault, Northwest Regional Educational Laboratory, aultp@nwrel.org
Ann Davis, Northwest Regional Educational Laboratory, davisa@nwrel.org
Abstract: Randomization is touted as an exemplary design by researchers, yet even in randomized studies, subtexts within a study's context should be considered. Using the context of an early childhood Home Educator Study, this Think Tank focuses on the key question: in what ways do subtexts influence the context of evaluation? Subtext issues will be discussed within a case study with an experimental design including 145 families, all low-income and predominantly Spanish-speaking. Families recruited from rural Washington State were randomly assigned to "treatment" or comparison groups. The study hopes to ascertain the effect on kindergarten readiness for children of families who received sustained home visits, initiated when the children were two years old. Each small group in the Think Tank will explore one of three subtext themes: (1) the importance of hiring community members as Home Educators, (2) early childhood education means early parent/family education, and (3) the use of appropriate, culturally responsive measures.

Session Title: Making Specific, Measurable, Attainable, Relevant, and Timed (SMART) Objectives SMARTer: How to Avoid Common Pitfalls in Their Design and Interpretation
Demonstration Session 767 to be held in Suwannee 14 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Joel Philp, The Evaluation Group, joel@evaluationgroup.com
Abstract: In a perfect world, an evaluator will use SMART objectives (Specific, Measurable, Attainable, Relevant, and Timed) to gauge progress in program implementation, outcomes, or both. But even with SMART objectives, interpretations and assumptions underlie decisions on how the objective is reported, and unless these are clarified there will be considerable confusion when it comes time to report progress. This skill-building workshop will teach participants how to make SMART objectives even SMARTer by anticipating some common pitfalls associated with their design and reporting. Practical, concrete examples will be presented, computed, and discussed.
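One such pitfall, made concrete: an objective to "increase the proficiency rate by 20%" reads differently as a relative increase versus percentage points, and the two readings can disagree on whether the objective was met. The numbers below are invented:

```python
baseline, observed, target = 40.0, 52.0, 20.0   # % proficient; target is "+20%"

relative_target = baseline * (1 + target / 100)   # 48.0: relative reading
point_target = baseline + target                  # 60.0: percentage-point reading

print(f"relative reading: target {relative_target:.1f}, met = {observed >= relative_target}")
print(f"point reading:    target {point_target:.1f}, met = {observed >= point_target}")
```

Stating the computation rule in the objective itself removes the ambiguity before reporting time.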

Session Title: Considering Professional Practice: Evaluator Roles and Issues of Professionalization
Multipaper Session 768 to be held in Suwannee 15 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the AEA Conference Committee
Chair(s):
Kristianna Pettibone, MayaTech Corporation, kpettibone@mayatech.com
To be or Not to be a Profession: Pros, Cons and Challenges for the Evaluation Community
Presenter(s):
Steve Jacob, Laval University, steve.jacob@pol.ulaval.ca
Abstract: Debates on the professionalization of evaluation resurface regularly. The evaluation literature contains varied points of view for and against consolidating quality-control mechanisms. This presentation examines the aims pursued by the promoters of the professionalization of evaluation (e.g., institutionalization, quality improvement) and the challenges they face. It will then present the mechanisms envisioned by the Quebecois Society of Program Evaluation (SQEP, Canada) to address these points, including the drafting of an evaluation charter, membership in a professional order, and evaluator certification. The presentation is based on a documentary review as well as an analysis of semi-structured interviews conducted with members and former members of the SQEP and its administrative council. The results presented will help inform debates on the professionalization of evaluative practice, debates which arise in most contexts where evaluation has reached a certain maturity.
The Road Less Traveled: Integrating Internal and External Evaluation
Presenter(s):
Sheila Arens, Mid-continent Research for Education and Learning, sarens@mcrel.org
Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org
Abstract: Deciding whether a project is best served by an internal evaluation perspective or an external evaluation perspective is a choice many organizations face. The merits of internal and external evaluation are many and have been broadly discussed. Internal and external evaluation efforts do not have to be at odds with one another; indeed, each brings a valued, and ideally diverse, perspective to an understanding of program processes, functions, and outcomes. Rarely, however, do programs have the luxury of being able to fund both an internal and an external evaluator. In this paper, we describe our experiences acting as external evaluators side-by-side with internal evaluators. Presenters will discuss both the advantages and the challenges of this approach to evaluation and offer suggestions for ensuring the success of such an arrangement for the involved evaluators and the client.

Session Title: The Examination of Gender in the Context of National and International Program Evaluation: Part II
Multipaper Session 769 to be held in Suwannee 16 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Feminist Issues in Evaluation TIG and the International and Cross-cultural Evaluation TIG
Chair(s):
Kathryn Bowen, Centerstone Research Institute, kathryn.bowen@centerstoneresearch.org
Discussant(s):
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Abstract: Many programs influence the status of women and the cultural, economic, and political relationships between men and women and among household members. At the crossroads of cultural competence and social justice await both opportunities and challenges for the contextually responsive evaluator. This crossroads also has important implications for practitioners who explicitly deal with the cultural dimensions implicit in international and gender issues. There is an implicit assumption of the commensurability of cultural competence and social justice for evaluators who actively engage in dealing with societal power differentials. The purpose of the session is to present different perspectives from the US and other countries on how gender affects the context of the programs evaluators work with (such as education, health care, and poverty alleviation), the methods they have used for assessing gender, and some of the challenges in convincing clients that gender issues matter.
Cultural Competence and Social Justice
Saumitra Sengupta, APS Healthcare Inc, ssengupta@apshealthcare.com
At the crossroads of cultural competence and social justice await both opportunities and challenges for the contextually responsive evaluator. This crossroads also has important implications for practitioners who explicitly deal with the cultural dimensions implicit in international and gender issues. This presentation will build on current news items to analyze and start a discourse on 'sensitive' areas that 'political correctness' often dissuades the practitioner from dealing with head on. There is an implicit assumption of the commensurability of cultural competence and social justice for evaluators who actively engage in dealing with societal power differentials. In some cases that assumption may hold true; in others, the evaluator will have to examine the value proposition in more depth, and through more dialogic means, to find common ground for change. This paper proposes such closer examination through examples from current situations.
Gender and Healthcare: Why Australian Men Can't Get Vasectomies
Denise Seigart, Mansfield University, dseigart@mansfield.edu
This paper explores the challenges of incorporating gender analysis into the evaluation of health programs in the US, Australia, and Canada. While conducting case studies of school-based health care in these countries, it became apparent that inequities in the provision of health care exist and are often related to gender inequities. Racism, sexism, and classism, rooted in religious, economic, and cultural influences, were all noted, and all play a part in the quality and accessibility of health care in these countries. Examples of gender inequities in access to health care include the disproportionate influence religious organizations have on the provision of health care, the impact that tying health care to employment has on women and children, and the valuing (or devaluing) of women's work with regard to the provision of health care for children in schools. An analysis of the results from a feminist perspective will be presented.

Session Title: Evaluating the Hispanic Serving Institutions (HSI) Education Grants Program
Panel Session 770 to be held in Suwannee 17 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Henry Doan, United States Department of Agriculture, hdoan@csrees.usda.gov
Abstract: This federally funded education grants program serves Hispanic-serving institutions with the following specific targets: faculty preparation and teaching enhancements; instruction delivery systems; scientific instrumentation for teaching; student experiential learning; student recruitment and retention; and curriculum design and materials development. The program promotes and strengthens the ability of America's HSIs to develop and conduct educational initiatives that attract outstanding students and produce graduates capable of enhancing the nation's scientific and professional workforce in the food and agricultural sciences.
The Hispanic-Serving Institutions Education Grants Program Over the Years
Irma Lawrence, United States Department of Agriculture, ilawrence@csrees.usda.gov
This federally funded education grants program serves Hispanic-serving institutions with the following specific targets: faculty preparation and teaching enhancements; instruction delivery systems; scientific instrumentation for teaching; student experiential learning; student recruitment and retention; and curriculum design and materials development. The program promotes and strengthens the ability of America's HSIs to develop and conduct educational initiatives that attract outstanding students and produce graduates capable of enhancing the nation's scientific and professional workforce in the food and agricultural sciences.
Involving Partners in Evaluating the HSI Education Grants Program
Henry Doan, United States Department of Agriculture, hdoan@csrees.usda.gov
Evaluating the program began with the development of logic models for each individual grant, with input and participation from all HSI program consortium members or HSI grant recipients, the national program leader (NPL) responsible for managing the program, and evaluation leaders from the granting agency. Reports on progress and/or pitfalls of the program are developed for each step of the logic model throughout the grant's life, submitted to the NPL for review, comments, and remedial action, if necessary, and assessed by evaluation leaders. This way, any adjustments, revisions, or corrections can be made in a timely manner to ensure program outcomes.

Session Title: Measurement Issues: Standards for Concept Mapping and Optimization of Item Response Theory
Multipaper Session 771 to be held in Suwannee 18 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the AEA Conference Committee
Chair(s):
Ann Doucette, George Washington University, doucette@gwu.edu
The Optimization of Item Response Theory Model Type for Evaluation Practice in the Context of Different Item Formats
Presenter(s):
Marina Chelyshkova, State University of Management, mchelyshkova@mail.ru
Victor Zvonnikov, State University of Management, zvonnikov@mail.ru
Abstract: In this paper we present the results of research carried out within the project 'Comparative Efficiency of Parametric and Nonparametric Item Response Theory Models for Combining Different Item Formats in Tests'. The goal of the project was to choose optimal models for scoring examination data when integrating the quantitative and qualitative scores obtained from varying formats of test items. The models were compared on two basic criteria: high objectivity of measurement and high comparability of graduates' scores across different test variants. Test information functions served as the basis for comparing scores across models. The analysis showed that the Monotone Homogeneity Model of nonparametric Item Response Theory is best for scaling. Based on this work, theoretical and technological requirements for developing measurements in evaluation are formulated.
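To illustrate the comparison criterion described above, the following minimal sketch computes and compares test information functions under an assumed parametric two-parameter logistic (2PL) model; the item parameters, variant composition, and helper names are hypothetical for exposition and are not the authors' implementation.

    import numpy as np

    def item_information_2pl(theta, a, b):
        # Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P),
        # where P = 1 / (1 + exp(-a * (theta - b))).
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a ** 2 * p * (1.0 - p)

    def test_information(theta, items):
        # Test information is the sum of the item information functions.
        return sum(item_information_2pl(theta, a, b) for a, b in items)

    theta = np.linspace(-3, 3, 61)  # ability grid
    variant_a = test_information(theta, [(1.2, -1.0), (0.8, 0.0), (1.5, 1.0)])
    variant_b = test_information(theta, [(1.0, -0.5), (1.0, 0.5), (1.0, 1.5)])

    # Comparable score precision across test variants shows up as similar
    # information curves; here we compare peak information and its location.
    for name, info in [("A", variant_a), ("B", variant_b)]:
        print(name, round(float(info.max()), 2), float(theta[info.argmax()]))

Nonparametric models such as the Monotone Homogeneity Model the authors favor do not yield closed-form information functions like the 2PL; the sketch only shows the general mechanics of comparing variants on measurement precision.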
Establishing Standards in Concept Mapping: A Meta-review and Analysis
Presenter(s):
Scott Rosas, Concept Systems Inc, srosas@conceptsystems.com
Abstract: The use of concept mapping for planning, evaluation, and research has expanded considerably in recent years. Concept mapping, an applied multivariate methodology that integrates familiar qualitative group processes with multivariate statistical analyses, encompasses six distinct phases: preparation, generation, organization, representation, interpretation, and utilization. Given the emergent interest in, and the variety of applications of, the concept mapping methodology in research and evaluation, it is vital to define rigorous and feasible standards of quality across all phases of the process. As an initial step in the development of benchmarks, a systematic meta-review of 30 concept mapping projects was conducted, and the data generated within each phase of the process were analyzed. The findings serve as a preliminary set of standards against which results from future concept mapping studies can be measured or judged. The implications of these findings are discussed relative to methodological expectations for concept mapping and mixed-method approaches.
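The statistical core of the representation phase can be made concrete with a minimal sketch, assuming the common combination of a sorting-derived co-occurrence matrix, nonmetric multidimensional scaling, and hierarchical clustering; the sorting data and parameter choices below are hypothetical rather than drawn from the 30 reviewed projects.

    import numpy as np
    from sklearn.manifold import MDS
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical sorting data: sorts[p][i] is the pile that participant p
    # placed statement i into during the organization phase.
    sorts = np.array([
        [0, 0, 1, 1, 2, 2],
        [0, 1, 1, 1, 2, 2],
        [0, 0, 0, 1, 2, 2],
    ])

    # Similarity: fraction of participants who sorted statements i and j together.
    sim = (sorts[:, :, None] == sorts[:, None, :]).mean(axis=0)
    dist = 1.0 - sim

    # Representation phase: nonmetric MDS places statements on a 2-D map,
    # then hierarchical clustering groups nearby statements into clusters.
    coords = MDS(n_components=2, metric=False, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)
    clusters = fcluster(linkage(coords, method="ward"), t=2, criterion="maxclust")
    print(coords.round(2))
    print(clusters)

Quality benchmarks of the kind the paper proposes would attach to quantities visible in such an analysis, for example the MDS stress value or the interpretability of the cluster solution.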

Roundtable: Fathers Can Talk About Kids Too: An Evaluation of Fathers' Involvement in a Parent Training Program
Roundtable Presentation 772 to be held in Suwannee 19 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Independent Consulting TIG
Presenter(s):
Abraham Salinas, University of South Florida, asalinas@health.usf.edu
JC Smith, University of South Florida, jcsmith6@mail.usf.edu
Abstract: This roundtable session will focus on a qualitative evaluation using focus group interviews with male caregivers who participated in the Helping Our Toddlers, Developing Our Children's Skills (HOT DOCS) program. The session will consider the emerging themes identified through the focus groups that reflect the fathers' experiences and motivation to participate in a parent training program. We conducted three focus groups composed of male caregivers, including biological and adoptive fathers. A set of open-ended questions was developed to assess reasons for attending HOT DOCS, experiences in the training, and perceived facilitators of and barriers to participation, as well as perceived strengths and weaknesses of the program. Implications for program improvement are identified to make it appealing to prospective male caregivers. This roundtable aims particularly to demonstrate the strengths of qualitative methods in the field of evaluation and lessons learned from the field.

Roundtable: Getting Your Findings in Use: Making Evaluation Results Relevant, Accessible and Understandable
Roundtable Presentation 774 to be held in Suwannee 21 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Evaluation Use TIG
Presenter(s):
Anita Drever, Wyoming Survey and Analysis Center, adrever@uwyo.edu
Laura Feldman, University of Wyoming, lfeldman@uwyo.edu
Abstract: Evaluators often put great effort into writing accurate and comprehensive reports that end up gathering dust on the shelves of the stakeholders they were intended to serve. Even when program managers or other interested parties read the report, findings that could have improved policy and programming get overlooked or misinterpreted. This roundtable will be a 'show-and-tell' forum for evaluators to discuss strategies they have used to make results relevant to practitioners' contexts. We will discuss examples of successful report formats, effective use of the internet to disseminate results, strategies to communicate complex statistical or theoretical concepts to lay audiences, and the dilemmas evaluators face when forced to choose between precision and comprehension by a lay audience. We would like to encourage roundtable participants to bring examples of their work, if they are available.

Session Title: Ethical and Inclusive Excellence Imperatives in a Globalizing World: An Integral Evaluator-Self Model
Expert Lecture Session 775 to be held in Wekiwa 3 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Hazel L Symonette, University of Wisconsin Madison, hsymonette@odos.wisc.edu
Abstract: This session introduces my Integral Evaluator-Self Quadrant Model as a holistic self-assessment framework for ethical praxis and inclusive excellence. It provides a comprehensive resource for enhancing multilateral self-awareness, in the context of other aspects of the evaluator role, by explicitly representing the intersection of two dimensions: (individual vs. collective vantage points) X (interior vs. exterior environments). The model offers a framework of sensitizing concepts and questions for mindfully scanning, tracking, and monitoring *WHO* factors, notably the human systems dynamics vis-à-vis relevant diversity divides. As we move among the situational and relational contexts of our work, the model offers heads-up alerts for checking with ourselves on what the context is calling for from us. This facilitates a more mindful assessment of one's forcefield of preparedness and readiness for the sociocultural context as well as for the tasks embodied in the evaluation questions and agenda.

Session Title: Measuring and Assessing Costs: Why All the Resistance?
Think Tank Session 776 to be held in Wekiwa 4 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Presenter(s):
Brian Yates, American University, brian.yates@mac.com
Discussant(s):
Sarah Hornack, American University, sarah.hornack@gmail.com
Jose Hermida, American University, hermidaj@gmail.com
Jennifer Cintron, American University, cintron.jenny@gmail.com
Abstract: In the field of program evaluation, those suggesting that cost analysis should invariably be included in the evaluation process are often met with opposition. This may be due to the misconception that cost analysis is strictly about monetary resources, when in fact all resource types are typically considered (temporal, material, spatial, transportation, communication, financing). Additionally, in this economic climate, it is understandable that there is some anxiety surrounding the idea that one's program may not be deemed cost-beneficial and may lose precarious funding. On the other hand, because of the current financial situation and stimulus package objectives, it may be to a program's monetary advantage to demonstrate the cost utility of its design. The primary aim of this Think Tank is to incite discussion surrounding the resistance to measuring costs and to brainstorm with fellow evaluators ways of making this type of assessment less taboo.

Session Title: Measuring Additionality of Government Research: Views From New Zealand and Taiwan
Multipaper Session 778 to be held in Wekiwa 6 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Isabelle Collins, Technopolis, isabelle.collins@technopolis-group.com
Adding Additionality to Program Evaluation
Presenter(s):
Chin-Wen Chang, Science & Technology Policy Research and Information Center, cwc1104@hotmail.com
Che-Hao Liu, The Legislative Yuan Republic of China, haogo0904@msn.com
Abstract: The purpose of this paper is to provide a conceptual framework for the evaluation of research, development, and innovation funding programs. The article focuses on connecting evaluation theory with current practice in various countries by reviewing evaluation approaches for public R&D and innovation funding in the literature on additionality. The paper discusses the concept of additionality, its definition, and the scope of its measurement. By reviewing theoretical and empirical studies, it examines the concept of additionality and the effects of funded projects through cross-country comparison. The findings reaffirm the importance of additionality, sharpen its implications for evaluation practice, and benefit participants in government-sponsored programs. In addition, suggestions are provided based on the analysis of the results.
Evaluating the Additionality of Economic Development Policies in New Zealand
Presenter(s):
David Bartle, Ministry of Economic Development, david.bartle@med.govt.nz
Cavan O'Connor-Close, Ministry of Economic Development, cavan.o'connor-close
Abstract: Evaluation of government policies should test both for attributable impacts and for the additionality of those impacts over what would otherwise have occurred. Often additionality is considered only descriptively, and consequently the real outcomes are insufficiently analysed. We report the results of developing and testing a new, more quantitative, econometrics-based evaluation approach to these issues. Establishing a database of performance information on all active firms enabled close matching of assisted firms against unassisted firms. Using econometric analysis, we examined the attributable effects expected from policy. The results appear robust both statistically and against qualitative studies that examine contextual issues in more depth. However, challenges remain in comparing these results with other evaluations in similar areas of policy. Drawing on evidence from regulatory impact analysis and cost-benefit analysis, the paper makes suggestions for overcoming these challenges so that the opportunities can be fully exploited.
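As an illustration of the matching step described above, the following sketch applies propensity-score matching to synthetic firm data; the covariates, effect size, and one-to-one nearest-neighbour rule are assumptions for exposition, not the Ministry's actual model or data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic firm-level data (hypothetical): three covariates, an
    # assistance flag, and an outcome with a known treatment effect of 0.4.
    rng = np.random.default_rng(0)
    n = 500
    X = rng.normal(size=(n, 3))
    p_assist = 1.0 / (1.0 + np.exp(-(X @ np.array([0.5, -0.3, 0.2]))))
    treated = rng.binomial(1, p_assist)
    outcome = X @ np.array([1.0, 0.5, -0.2]) + 0.4 * treated + rng.normal(size=n)

    # 1. Estimate propensity scores: P(assisted | firm characteristics).
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    # 2. Match each assisted firm to the unassisted firm with the closest score.
    t_idx = np.flatnonzero(treated == 1)
    c_idx = np.flatnonzero(treated == 0)
    matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

    # 3. Additionality estimate: average treatment effect on the treated,
    #    i.e. the mean outcome gap between assisted firms and their matches.
    att = (outcome[t_idx] - outcome[matches]).mean()
    print(f"Estimated ATT: {att:.3f} (true effect 0.4)")

Matching on propensity scores is one standard way to approximate the counterfactual "what would otherwise have occurred"; the estimated gap over matched pairs is the additionality attributable to assistance under the usual selection-on-observables assumption.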

Session Title: Corporate Sustainability: A Systems Approach to Evaluating the Triple Bottom Line
Panel Session 779 to be held in Wekiwa 7 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Business and Industry TIG
Chair(s):
Mallary Tytel, Healthy Workplaces, mtytel@healthyworkplaces.com
Discussant(s):
Mallary Tytel, Healthy Workplaces, mtytel@healthyworkplaces.com
Abstract: The field of human systems dynamics represents new methods and opportunities for mapping and evaluating complex change. One such opportunity within business and industry is Corporate Sustainability (CS). CS is an approach that creates long-term shareholder value, improves performance, decreases waste and manages risk. Referred to as the Triple Bottom Line, CS considers the interrelationship of economic, environmental and social issues. As triple bottom line (TBL) reporting emerges as a critical part of an organization's persona and value, we need a clear and consistent understanding of the process of TBL, organizational learning, capacity and relationship building, and the overall impact on sustainability. The presenters will examine the dynamic connections between the three bottom lines, offer insights gained and discuss the challenges, opportunities and implications for business and industry for integrating expanded human systems dynamics thinking and approaches into the evaluation of TBL.
System Networks as Part of the Triple Bottom Line
Enrico Wensing, Ecosphere Net, ejwensing@ecosphere.net
An understanding of the connections between the big picture and the little picture of Triple Bottom Line system networks and human systems dynamics is necessary to generate a sustainable future. This presentation provides a broad overview. For example, cross-cultural supply chains represent both local networks and, ultimately, multinational global networks engaged in the TBL. How relationships along these various networks evolve is of key importance. One critical tool, the Global Sustainability Inventory (GSI), is introduced as a measure we are developing to help calibrate the individual human input toward human systems dynamics that are aligned with CS and the needed transition to global sustainable development.
The Potential of Human Systems Dynamics in Corporate Sustainability
Mallary Tytel, Healthy Workplaces, mtytel@healthyworkplaces.com
Three organizations, representing diverse business sectors, implemented corporate sustainability programs as part of broad culture change initiatives. For each there was a focus on internal objectives (e.g. using recycled materials) and external objectives (e.g. producing "green" products to sell). Summative and formative evaluations were put in place to track (1) corporate-wide understanding of sustainability; (2) definitions and indicators of success; (3) major obstacles; and (4) the effect of having an identified champion for the effort. To investigate the role of systems factors in shaping the outcomes of the project, I used an adaptation of the Eoyang CDE Model (Container, Difference, and Exchange) from the Human Systems Dynamics Institute to identify new patterns of learning, interactions and relationships within the organizations.

Session Title: Using Community-Based Participatory Evaluation to Reduce Opioid Overdoses
Multipaper Session 780 to be held in Wekiwa 8 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Leslie Aldrich, Massachusetts General Hospital, laldrich@partners.org
Abstract: Community-Based Participatory Evaluation (CBPE) can be a useful tool to help coalitions respond to a community issue. It places evaluation and stakeholders at the center of a cycle that includes Assessment, Capacity/Partnership Building, Planning, and Implementation. Recently, the Massachusetts Department of Public Health distributed State Incentive Grants to local communities to reduce opioid overdoses. Two of these communities applied CBPE to implement the required Substance Abuse and Mental Health Services Administration Strategic Prevention Framework (SPF). CBPE facilitated the process of conducting a full assessment of the problem and identifying best-practice interventions for each community. Because CBPE mirrors the SPF cycle and is dynamic enough to account for the context of politics, population, and situation, incorporating the method was easy and advantageous. Although both communities used this approach, the data collection methods, the results of the assessment, and the strategies identified to address opioid overdose were unique to each.
Identifying Strategies to Reduce Opioid Overdoses in a Boston Neighborhood
Danelle Marable, Massachusetts General Hospital, dmarable@partners.org
Beth Rosenstein, Massachusetts General Hospital, brosenshein@partners.org
Jennifer Kelly, Boys & Girls Club of Boston, jkelly14@partners.org
Toni Weintraub, Independent Consultant, tabramsweintraub@partners.org
Susan Crowley, Independent Consultant, susan_crowley@ksg07.harvard.edu
This presentation will focus on the Charlestown Substance Abuse Coalition (CSAC) and how it used CBPE during its assessment to identify strategies to reduce opioid overdoses in the specific community the coalition serves. Because CBPE relies heavily on the stakeholders at the table, CSAC was able to attract new stakeholders to the coalition, including substance abuse providers and those in recovery from opioid addiction. Through this process, CSAC reviewed public data and conducted interviews and focus groups in order to listen to, as well as obtain buy-in from, the community. Because of the continuous feedback loop with community stakeholders, CSAC was able to pinpoint several strategies directed at reducing opioid overdoses that were supported by the community and to engage stakeholders in the implementation phase.
A Case Study of Assessment and Planning Process to Address Opioid Overdose in Revere, Massachusetts
Erica Clarke, Massachusetts General Hospital, esclarke@partners.org
Leslie Aldrich, Massachusetts General Hospital, laldrich@partners.org
Susan Crowley, Independent Consultant, susan_crowley@ksg07.harvard.edu
A case study of the assessment and planning process conducted using Community-Based Participatory Evaluation (CBPE) in Revere, Massachusetts will be discussed. The CBPE approach was applied to this work through establishing an expert panel from the community to conduct a thorough assessment and develop and identify strategies to reduce opioid overdose in the community. The expert panel participated by overseeing and making recommendations regarding the data collection process, reviewing and interpreting local data related to opioid overdose, identifying data gaps, and based on the results of the assessment, identifying strategies most appropriate for the community. This case study will highlight the benefits of the CBPE approach in this particular community.

Session Title: Incorporating Multiple Perspectives in Environmental Program Evaluation
Multipaper Session 781 to be held in Wekiwa 9 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Annelise Carleton-Hug, Trillium Associates, annelise@trilliumassociates.com
A National Park and Affiliated 'Think Tank' Critically Reflect on Their First Decade: Client and Evaluator Perspectives on the Lessons Learned
Presenter(s):
Jennifer Jewiss, University of Vermont, jennifer.jewiss@uvm.edu
Daniel Laven, National Park Service, daniel_laven@nps.gov
Nora Mitchell, National Park Service, nora_mitchell@nps.gov
Rolf Diamant, National Park Service, rolf_diamant@nps.gov
Christina Marts, National Park Service, christina_marts@nps.gov
Abstract: Weiss defines the unit of analysis as 'the entity about which data are collected, analyzed, and conclusions drawn.' In contrast to most evaluations that are conducted at the program level, this study examined the work of two affiliated organizations within the U.S. National Park Service (NPS): Marsh-Billings-Rockefeller National Historical Park and the Conservation Study Institute co-located in Woodstock, Vermont. The evaluation elicited critical reflections from stakeholders on the successes and challenges of the Park's and Institute's major undertakings. Interviewees also considered strategic directions for the organizations' next decade given the NPS context and broader trends in conservation. The most valuable findings included insights about the evolving context in which these two entities operate and stakeholders' articulation of the role that future Park and Institute programming might play in advancing collaborative conservation. The session will feature client and evaluator perspectives on the organizational learning prompted by a study of this scope.
Evaluation in Informal Learning Environments: Gaining Meaningful Data From a Zoo-based Teen Leadership Program
Presenter(s):
Melinda Hess, University of South Florida, mhess@tempest.coedu.usf.edu
Mary Corinne DeGood, Lowry Park Zoo, mc.degood@lowryparkzoo.com
Abstract: This study addresses the evaluation of an intensive, 9-month teen leadership initiative conducted at a medium-sized zoo in a large urban area. It provides an overview of the framework, methods, and instrumentation used to evaluate the Environmental Conservation and Community Outreach (ECCO) initiative. Data were gathered from multiple sources, including participants, parents/guardians, and program administrators, to ensure that perceptions from multiple perspectives were reflected. The data were both quantitative and qualitative in nature and were gathered throughout the duration of the program to inform both the formative and the summative aspects of the evaluation. Examples will be provided of how data from the initial cohort of participants were used to guide changes for the second cohort in 2009. Lessons learned regarding the evaluative process will be discussed, including which elements were most beneficial and what changes in the evaluative process were made for the second cohort.

Session Title: Creating Ethical Case Studies in Program Evaluation
Demonstration Session 782 to be held in Wekiwa 10 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
Scott Grubbs, Valdosta State University, stgrubbs@valdosta.edu
Abstract: There is little argument that program evaluators must adhere to a high standard of ethical practice in the execution of their professional responsibilities. Therefore, it is important to provide students of program evaluation with not only a strong ethical foundation, but also with multiple opportunities to practice dealing with the various ethical dilemmas that may emerge during the course of program evaluation. Given the diversity of ethical situations and their attendant underlying complexities, instructors of program evaluation may find presenting ethical dilemmas in authentic contexts to be somewhat challenging (Newman and Brown, 1995). Case studies, however, may provide instructors and their students with the flexibility required to present realistic ethical dilemmas across a variety of professional contexts. This presentation will address how to create effective ethical case studies and provide session attendees with a sample case study for adaptation and use with their students.
