|
Session Title: AEA International Committee (IC) and the International Organization for Cooperation in Evaluation (IOCE) Forum on Promoting Partnerships With International Evaluation Groups
|
|
Think Tank Session 295 to be held in Capitol Ballroom Section 2 on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s):
|
| Jim Rugh,
Independent Consultant,
jimrugh@mindspring.com
|
| Discussant(s):
|
| Tom Grayson,
University of Illinois Urbana-Champaign,
tgrayson@uiuc.edu
|
| Michael Hendricks,
Independent Consultant,
mikehendri@aol.com
|
| Nino Saakashvili,
Horizonti Foundation,
ninosaak@yahoo.com
|
| Donna Podems,
Macro International,
donna.R.Podems@macrointernational.com
|
| Bob Williams,
Independent Consultant,
bobwill@actrix.co.nz
|
| Abstract:
Part of the AEA vision states that 'We support developing relationships and collaborations with evaluation communities around the globe to understand international evaluation issues.' How should we choose which groups to partner with, and what forms could such partnerships take? During this session the AEA International Committee will share its proposals for an AEA Partnership Policy, then solicit ideas from participants on these as well as other ways of fostering partnerships with associations of professional evaluators around the world. In break-out groups we will brainstorm criteria for selecting partners as well as ideas for activities that would be mutually beneficial to AEA and partner groups. We will also consider which sub-groups within AEA, such as local affiliates, might be interested in forming partnerships with international groups. That is, we would be promoting multiple forms of partnership, not just one predetermined, AEA-level institutional relationship with a limited number of partners.
|
|
Session Title: Cost-Benefit Evidence and Systems Modeling Approaches From the Medical Center Training, Psych-Social Rehabilitation, and Permanent Supportive Housing Sectors
|
|
Multipaper Session 296 to be held in Capitol Ballroom Section 3 on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
|
| Chair(s): |
| Ronald Visscher,
Western Michigan University,
ronald.s.visscher@wmich.edu
|
|
Evaluating Opportunities to Optimize Learning and Economic Impact: Applying System Dynamics to Model Training Deployment in a Medical Center
|
| Presenter(s):
|
| Daniel McLinden,
Cincinnati Children's Hospital Medical Center,
daniel.mclinden@cchmc.org
|
| Rebecca Phillips,
Cincinnati Children’s Hospital Medical Center,
rebecca.phillips@cchmc.org
|
| Adam Helbig,
Cincinnati Children’s Hospital Medical Center,
adam.helbig@cchmc.org
|
| Abstract:
Systems modeling provides the evaluator with a method to empirically explore the potential impact of an intervention and, in particular, the economics of that intervention's features and benefits. Assumptions that support the intervention can be tested, and the varying perspectives of multiple stakeholders can be systematically evaluated, both to quantify the effects of their unique perspectives and to promote consensus building and dialogue among stakeholders that is grounded in empirical evidence. This session will review work done in a medical center to model the economics of required training programs for new employees. The specific aim was to identify the optimal timing for compliance-based training, using economic considerations and this model as the basis for beginning discussions of policy changes within the organization. Additionally, modeling the tradeoffs between optimizing economic variables and optimizing non-economic variables, such as learning, will be discussed.
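
For readers unfamiliar with this kind of modeling, the sketch below illustrates, in Python, the sort of trade-off such a model can quantify: more frequent compliance training sessions cost more to deliver but reduce the time new hires spend untrained. All parameter names and values are hypothetical and are not drawn from the presenters' model.

    # Minimal sketch of a training-timing cost trade-off (hypothetical parameters).
    def total_annual_cost(days_between_sessions,
                          hires_per_day=2.0,              # assumed steady inflow of new employees
                          session_cost=1500.0,            # assumed fixed cost per training session
                          risk_cost_per_person_day=25.0,  # assumed cost of each untrained person-day
                          horizon_days=365):
        delivery_cost = (horizon_days / days_between_sessions) * session_cost
        # On average, each hire waits half the interval before the next session.
        untrained_person_days = hires_per_day * horizon_days * days_between_sessions / 2
        return delivery_cost + untrained_person_days * risk_cost_per_person_day

    # Scan candidate cadences (1-90 days) for the economically optimal timing.
    best = min(range(1, 91), key=total_annual_cost)
    print(best, round(total_annual_cost(best), 2))

A full system dynamics treatment would add feedback structure (for example, turnover or learning decay), but the underlying logic of comparing timing policies against an explicit cost structure is of this general form.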
|
|
Cost Benefit Analysis of the Clubhouse Psychosocial Rehabilitation Model
|
| Presenter(s):
|
| Anna Myers,
American University,
am7502a@american.edu
|
| Brian Yates,
American University,
brian.yates@mac.com
|
| Colleen McKay,
University of Massachusetts,
colleen.mckay@umassmed.edu
|
| Abstract:
This research shows that the Clubhouse Model of psychosocial rehabilitation, currently implemented at more than 328 sites in 28 countries, has measurable, quantifiable, and monetizable inputs (costs) and outputs (benefits). In US dollars, we assessed the average costs of providing services to Clubhouse members, and a subset of the average benefits generated by the members: their earnings in competitive employment in the community. With a sample of 220 Clubhouses from 21 countries, we found that Clubhouse members earn, on average, the equivalent of 30% of the Clubhouse budget. We also found that having members provide services to members appears to be more cost-beneficial than having only staff provide services to members. We anticipate that cost savings due to reduced use of health and criminal justice services would increase total Clubhouse benefits, possibly to the point where benefits exceed costs.
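
As a purely illustrative aside, the arithmetic behind a comparison of this kind can be laid out in a few lines; the budget figure below is hypothetical, and only the 30% earnings finding comes from the abstract.

    # Illustrative benefit-cost arithmetic (hypothetical budget; 30% figure from the abstract).
    clubhouse_budget = 1_000_000.0             # assumed annual operating cost (inputs)
    member_earnings = 0.30 * clubhouse_budget  # members' competitive employment earnings
    other_benefits = 0.0                       # avoided health/criminal justice costs, not yet measured

    benefits = member_earnings + other_benefits
    print("net benefit:", benefits - clubhouse_budget)
    print("benefit-cost ratio:", round(benefits / clubhouse_budget, 2))

As the abstract notes, adding avoided service costs to the other_benefits term is what could push the ratio above 1.0.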
|
|
An Evaluability Assessment of a Cost Benefit Study of Permanent Supportive Housing
|
| Presenter(s):
|
| Brian Dates,
Southwest Solutions,
bdates@swsol.org
|
| Abstract:
Since the late 1970s, evaluability assessment has been advanced as an important aspect of the full program evaluation process. In spite of this, the various methodologies available to the evaluator for conducting feasibility studies of a proposed evaluation have remained largely underutilized. At the same time, the application of cost analysis procedures to evaluations, while gaining support as pivotal to a full understanding of program effects, has itself remained infrequently used. This paper examines the process of conducting an evaluability assessment of a cost analysis study of permanent supportive housing. Using the JCSEE Standards and the Guiding Principles to provide direction for the evaluability assessment, the presentation will focus on how evaluation feasibility was appraised and how data sources for cost data were determined and validated.
|
| | |
|
Session Title: Educational Evaluation in an Era of Globalization: New Tensions and Contradictions
|
|
Multipaper Session 297 to be held in Capitol Ballroom Section 4 on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Theories of Evaluation TIG
|
| Chair(s): |
| J Bradley Cousins,
University of Ottawa,
bcousins@uottawa.ca
|
| Abstract:
Nearly all aspects of modern life (e.g., culture, employment) are being changed as society is increasingly globalized by market integration and the emergence of new global media and information technologies that permit the circulation of ideas, resources, and even individuals across boundaries. In this sense, evaluation theories and practices are being 'globalized' as they are influenced by the movement of ideas across national boundaries. Yet the extent to which evaluation theories and practices are changing in response to globalization and other political and social changes has not been sufficiently examined. Using educational evaluation as the backdrop, this session is devoted to considering how these political and social changes create new contradictions and tensions for foundational evaluation issues. Each paper presents a particular take on these contradictions and tensions in relation to fundamental issues, the role of science in evaluation, capacity building and monitoring, and democracy and participation.
|
|
Fundamental Evaluation Issues in a Global Society
|
| Nick L Smith,
Syracuse University,
nlsmith@syr.edu
|
|
Fundamental issues are those underlying problems or choices that continually resurface in different guises throughout evaluation work. What are the proper roles for the evaluator as a professional? What is the nature of acceptable evidence in making evaluative judgments? Such issues are, by their very nature, never finally solved, but only temporarily resolved. Each resolution is highly dependent on a complex of interrelated historical, contextual, and cultural forces.
This paper reviews 10 fundamental issues that will continue to reflect the changing nature of educational evaluation both within the US and globally. Resolutions to these issues are constantly being reshaped by the increasingly diverse, globalized nature of society; the driving forces are briefly reviewed. Although common resolutions might facilitate cross-cultural evaluation collaboration, it is more likely that the evaluation profession will have to learn to flexibly accommodate rapid redefinitions of evaluation theory, methods, and practice as the world becomes increasingly global.
|
|
Science, Evaluation, and Sense-Making
|
| Melvin Mark,
Pennsylvania State University,
m5m@psu.edu
|
|
What is the role of educational evaluation, science, and sense-making in a globalized society, and how can evaluation theory and practice be better suited for this role? I first briefly review the intersection between globalization-related trends and evaluation. I then offer observations on contemporary debates about the role of randomized experiments. The focal point of the paper is a set of recommendations for future work, including: the need to develop procedures for macro-level evaluability assessment, to examine the readiness for evaluation in emerging policy and program areas; the value of developing models for communicating the rationale for choosing one (set of) potential evaluation questions above others; the desirability of focusing on evaluation portfolios, rather than debating a single method or approach; and the importance of evaluators (and others) recognizing that there is no single pathway by which evaluation can serve democracy, which should not be surprising in the face of mixed-model democratic governance.
|
|
Evaluation, Accountability and Performance Measurement in National Education Systems: Trends, Methods, And Issues
|
| Irwin Feller,
Pennsylvania State University,
iqf@ems.psu.edu
|
| Katherine Ryan,
University of Illinois Urbana-Champaign,
k-ryan6@uiuc.edu
|
|
The role of evaluation in relation to accountability is receiving substantial attention, and Ryan (2002, p. 453) has previously called upon the evaluation community to 'play an integral part in the study of, improvement of, and judgments' about the merit and worth of accountability systems. This paper is crafted as both a general and a specific response to this call. It begins with a general overview of international trends toward new, more formal requirements for accountability and performance measurement, with emphasis on dominant 'global' requirements and precepts of the new public management about how public sector activities are to be managed, monitored, and measured. Considering the differentiated mix of interests of public and private institutions, the context of 'local' national systems of education, and the evaluation of national K-12 and higher education systems, the paper concludes by discussing the implications of this analysis for the design and conduct of evaluations and by posing additional policy and research questions.
|
|
Serving the Public Interest through Educational Evaluation: Salvaging Democracy by Rejecting Neo-Liberalism
|
| Sandra Mathison,
University of British Columbia,
sandra.mathison@ubc.ca
|
|
Evaluation's role in the current educational reform movement is connected to both the neo-liberal agenda of free market capitalism and the neo-conservative agenda of Christian values. And, the State plays a significant role in facilitating the implementation of these agendas. These respective economic and moral neo agendas directly affect the purposes of educational evaluation, methodological choices, and conceptions of quality in education. Implicit in the neo agendas is the assumption that a select group of individuals is best positioned to determine what is in the public interest. However, I will argue that this agenda serves particular interests, the private and individual interests of a corporate ruling class, and that the public interest is not well served when evaluation is directed by these neo agendas. The paper also considers countervailing forces in evaluation, ones that support a grassroots, communitarian approach to the public interest. These alternatives and their potential for serving the public interest will be discussed.
|
|
Session Title: Public Issues Forum: Multiple Perspectives on the Politics of Evaluation
|
|
Panel Session 298 to be held in Capitol Ballroom Section 5 on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Presidential Strand
|
| Chair(s): |
| Leslie J Cooksy,
University of Delaware,
ljcooksy@udel.edu
|
| Discussant(s):
|
| William Trochim,
Cornell University,
wmt1@cornell.edu
|
| Abstract:
Evaluation is an inherently political endeavor. This panel considers issues in evaluation politics from different perspectives. The panel starts with a presentation of how evaluation politics are experienced at the local level, where evaluation practice may be driven by directives from state or federal funders. This is followed by a discussion of the interplay of politics and practice in evaluations of federal demonstrations. An outside perspective is provided by the third speaker, a journalist with experience in communicating politically-sensitive evaluation findings to the public. The fourth presentation will shed light on cross-cultural similarities and differences in the ways that politics affects evaluation. Finally, the panel's discussant connects the issues identified by each speaker to the theme of the conference: evaluation policy, focusing in particular on the role of evaluation policy in creating and resolving political challenges in evaluation practice.
|
|
Negotiating Across Cultures: Perspectives on Politics in an International Development Agency
|
| Patrick Grasso,
The World Bank,
pgrasso45@comcast.net
|
|
Patrick Grasso is a senior consultant with the World Bank where, among many other projects, he co-edited the volume, The World Bank Operations Evaluation Department: The First 30 Years. Before joining the World Bank, Dr. Grasso was at the U.S. Government Accountability Office, where he led several high profile studies for Congress. At both GAO and the World Bank, Dr. Grasso has had ample opportunity to see politics manifest itself in all aspects of evaluation. In this presentation, he will focus on his experiences with the World Bank and discuss cross-cultural issues in evaluation, with particular attention to evaluation capacity development. He will share lessons learned about building evaluation capacity and influencing evaluation policy in the context of international politics.
|
|
|
Feeling Free: The Perspective of an Evaluator in an Academic Setting
|
| Leonard Bickman,
Vanderbilt University,
leonard.bickman@vanderbilt.edu
|
|
Professor Leonard Bickman directs the Center for Evaluation and Program Improvement at Vanderbilt Peabody College. He has led several large, federally-funded evaluations, including the evaluation of the Fort Bragg Continuum of Care for Children and Adolescents, which received the American Evaluation Association's Outstanding Evaluation Award. In the academic setting, Professor Bickman and CEPI have independence in the kinds of evaluations they take on, which protects them from certain aspects of evaluation politics. However, they are still influenced by political decisions about what gets evaluated, how much money is dedicated to evaluation, and what kinds of evaluation approaches are preferred. Professor Bickman will draw on his experiences to identify political issues in evaluation practice in the academic environment.
| |
|
In the Public Eye: Evaluation Politics from a Journalist's Perspective
|
| Lesley Dahlkemper,
Schoolhouse Communications,
lesley@schoolhousecom.com
|
|
Lesley Dahlkemper has extensive experience in journalism, public policy, politics, and strategic communications. For over a decade, she was a national award-winning reporter for the National Public Radio (NPR) affiliate in Denver and a regular contributor to NPR, the Osgood Files, and Monitor Radio. Today, she is president of Schoolhouse Communications, a Denver firm specializing in helping education clients communicate effectively about the need for school improvement and system-wide change. Her clients have included the American Association of School Administrators, the Association for Supervision and Curriculum Development, and the National Education Association. Lesley has managed a multi-million-dollar Annenberg grant focused on comprehensive school reform for the Education Commission of the States and served on the board of the national Education Writers Association. Lesley will draw upon her professional experience to explain how evaluation efforts factor into news coverage of political issues, particularly in the area of education reform.
| |
|
Variations in Political Influences: Evaluation Politics from an International Perspective
|
| Peter Dahler-Larsen,
University of Southern Denmark,
pdl@sam.sdu.dk
|
|
Peter Dahler-Larsen is a professor in the Department of Political Science and Public Management at the University of Southern Denmark and past president of the European Evaluation Society. His research interests include cultural and institutional aspects of evaluation and he has published widely on issues of evaluation and public management, evaluation theory, and evaluation and democracy. Together with Jonathan Breul and Richard Boyle, he edited Open to the Public: Evaluation in the Public Sector, exploring the new roles of evaluative information in a public arena characterized by political images and new media. Dr. Dahler-Larsen will draw on his evaluation experiences in Denmark, including a study he conducted of evaluation, power and democracy as a part of the Danish Parliament's inquiry into power and democracy in Denmark, to address issues of politics and policy in evaluation.
| |
|
Session Title: Innovative Approaches to Teaching Evaluation in a University Setting
|
|
Multipaper Session 299 to be held in Capitol Ballroom Section 6 on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Chair(s): |
| SaraJoy Pond,
Brigham Young University,
sarajoypond@gmail.com
|
|
Beyond Problem-Based Experiential Learning: An Applied Research Training Practicum
|
| Presenter(s):
|
| Meghan Lowery,
Southern Illinois University at Carbondale,
meghanlowery@gmail.com
|
| Theresa Castilla,
Southern Illinois University at Carbondale,
castilla@siu.edu
|
| Joel Nadler,
Southern Illinois University at Carbondale,
jnadler@siu.edu
|
| Abstract:
Evaluation practice in applied settings relies on solid training programs, and experiential, hands-on learning is especially important in these settings. Applied research training practicum programs are a unique addition to any graduate-level curriculum, yet very few such programs exist. The Applied Psychology program at Southern Illinois University Carbondale (SIUC) is one of the few programs in the country to offer a student-run practicum program. A literature-based comparison reveals different approaches to and techniques used in evaluation training, such as the case study method, re-enactment of past evaluations, and a problem-based learning approach. These methods will be compared to the training practicum offered by SIUC’s Applied Research Consultants (ARC). ARC provides a full two years of practical, experience-based training. Challenges, necessary support, and the unique learning experiences that ARC offers will be discussed, as well as the benefits of such training when entering the job market.
|
|
Infusing Evaluation Concepts and Skills Across a Professional Curriculum: Evidence-Based Practice as a Theme for a Research Class
|
| Presenter(s):
|
| Kathleen Bolland,
University of Alabama,
kbolland@sw.ua.edu
|
| Abstract:
The push for evidence-based practice (EBP) in many fields has an unintended benefit: it provides a way for additional evaluation content to be incorporated into professional curricula. Although the concept is sometimes controversial, and there is some resistance to putting it into practice, there may be more acceptance of EBP than of previous iterations such as “the scientist-practitioner model.” Acceptance of EBP opens the door for infusion of evaluation content into courses where otherwise it has been absent or tangential. In this presentation, I will discuss (a) how I have designed a research course around the theme of evidence-based practice; (b) ways I have highlighted the relationship of EBP, and hence, evaluation, to other curricular areas such as policy and human behavior in the social environment; and (c) further work we evaluators can do to increase appropriate coverage of evaluation concepts and principles across curricula and in textbooks.
|
|
Consulting Within the University: Varied Roles, Valuable Training
|
| Presenter(s):
|
| Mark Hansen,
University of California Los Angeles,
hansen.mark@gmail.com
|
| Janet Lee,
University of California Los Angeles,
jslee9@ucla.edu
|
| Abstract:
The Social Research Methodology Evaluation Group provides technical assistance and evaluation services to organizations in the community and supports the professional development of novice evaluators. The group is housed within a school of education and is closely linked to a graduate program in research methodology and program evaluation. We are presently working with several university-based outreach programs that seek to improve the college-readiness of youth in the region. Despite a shared purpose, these programs differ in their service approach and the sophistication of their evaluation efforts. Thus, our work has provided an opportunity to assist these programs in a wide variety of capacities: facilitating conversations with stakeholders, developing logic models, focusing evaluation questions, developing measurement tools, enhancing data infrastructure, and building capacity for analysis and communication of findings. This paper will describe these diverse roles and discuss the corresponding value of this setting for training graduate students in evaluation.
|
|
Reflections on Evaluation Training by Apprenticeship: Perspectives of Faculty and Graduates
|
| Presenter(s):
|
| Annelise Carleton-Hug,
Trillium Associates,
annelise@trilliumassociates.com
|
| Joan LaFrance,
Mekinak Consulting,
joanlafrance1@msn.com
|
| Abstract:
To build evaluation capacity, the Center for Learning and Teaching in the West, a consortium of five universities, numerous community and tribal colleges, and public school districts in Montana, Colorado, and Oregon, developed a unique approach for evaluating this NSF-funded program. Under the guidance of experienced evaluators, doctoral students in science and mathematics education participated in an evaluation apprenticeship involving all aspects of program evaluation, including theory development, design of evaluation plans, data collection, analysis, and reporting. The hands-on experience proved to be instructive on many levels and influenced the opinions of campus leadership toward evaluation. The paper presents the perspectives of the university faculty involved as project leaders, as well as reflections of the graduate students who participated in the apprenticeship. The discussion shares the strengths of and challenges in implementing this type of apprenticeship training for future education leaders.
|
| | | |
|
Session Title: Environmental Strategy Implementation Fidelity Assessment: Exploring the Continuum between Standardized Measures and Case-specific Methodology
|
|
Panel Session 300 to be held in Capitol Ballroom Section 7 on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
|
| Chair(s): |
| Ann Landy,
Westat,
annlandy@westat.com
|
| Abstract:
Methods for assessing implementation fidelity of environmental strategies for substance abuse prevention are not fully developed. A major challenge is the tension between the need for consistent measurement across communities and the need for flexible measures that reflect community-specific adaptations and conditions. Panelists, members of a national cross-site evaluation team and state evaluators, will first discuss the development of a standardized tool to assess 21 evidence-based environmental prevention strategies, potential adaptations to account for specific community circumstances, and how fidelity rating data may augment current knowledge about the impact of environmental strategies. They will then discuss challenges in implementing uniform assessment procedures that adequately capture community-specific variation. Panel members will not recommend one method but will provide an opportunity for sharing different perspectives about the need for, challenges with, and experiences or lessons learned in attempting to measure or monitor implementation fidelity of environmental strategies across diverse multi-site 'systems.'
|
|
Developing a Standardized Tool to Assess Implementation Fidelity of Environmental Substance Abuse Strategies
|
| Kristianna Pettibone,
MayaTech Corporation,
kpettibone@mayatech.com
|
|
In developing a method for assessing implementation fidelity of environmental strategies for a cross-site evaluation that included over 400 communities, we had to take a standardized approach to ensure that comparable measures were used across all sites. Development of the tool was based on a literature search to identify evidence-based environmental strategies. Core activities were identified from the literature, and a rating scale was developed to assess levels of fidelity. These measures were shared with external reviewers experienced in implementing environmental strategies, and their feedback was incorporated. Data from the fidelity measures will also be used to assess the relative importance of key implementation activities within an environmental strategy.
|
|
|
Measuring Fidelity to Best Practice Standards Across a Statewide Substance Abuse Prevention System
|
| Beth Welbes,
University of Illinois,
echamb@uiuc.edu
|
|
Since 2002, Illinois has contracted with U of I/CPRD to review, rate, and comment on the extent to which its system of 125 community-based grantees implements prevention programs, policies, and practices in line with clearly communicated standards of practice. Diverse types of environmental programs are implemented across the state, from mentoring to public policy and enforcement efforts. CPRD has created a mechanism to annually rate fidelity to the standards of practice and provide customized feedback to each grantee regarding areas of strength and deficiency. This is one of the primary mechanisms used to monitor individual and system-level performance. Challenges and lessons learned will be shared.
| |
|
The Effects of Multiple Scoring Approaches on Fidelity Measurement of Environmental Strategies
|
| Richard Cervantes,
Behavioral Assessment Inc,
bassessment@aol.com
|
|
Measuring environmental strategies and approaches, as well as outcomes related to these strategies, can be complex and challenging. Variation in specific strategies, activities, and programs further complicates standardized measurement of implementation fidelity. This presentation will highlight psychometric considerations in measuring fidelity across different forms and types of environmental approaches. Further, recommendations related to the new SAMHSA/CSAP Users Guide, Measuring SPF Strategies and Programs, will be made to improve the reliability and validity of these measures.
| |
|
Reconciling Competing Needs: Options for Assessing Environmental Strategy Implementation Fidelity to Address Needs at Federal, State, and Local Levels
|
| Ann Landy,
Westat,
annlandy@westat.com
|
|
One of the primary challenges associated with a standardized approach for assessing implementation fidelity within a diverse, multi-site initiative is facilitating the adaptation of the measures to meet the needs of individual communities. The presentation will discuss the degree to which the application of a standardized approach, using core activity ratings, can accommodate practitioner needs and suggestions for tailoring the process to individual communities while still collecting comparable data across communities.
| |
| In a 90-minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Evaluation Methods and Experiences on Five Indian Reservations With the Federally Recognized Tribal Extension Program in Arizona and New Mexico |
|
Roundtable Presentation 301 to be held in the Limestone Boardroom on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Indigenous Peoples in Evaluation TIG
|
| Presenter(s):
|
| Sabrina Tuttle,
University of Arizona Cooperative Extension,
sabrinat@ag.arizona.edu
|
| Linda Masters,
University of Arizona,
lmasters@ag.arizona.edu
|
| Melvina Adolf,
University of Arizona Cooperative Extension,
madolf@ag.arizona.edu
|
| Gerald Moore,
University of Arizona Cooperative Extension,
gmoore@ag.arizona.edu
|
| Matthew Livingston,
University of Arizona Cooperative Extension,
mateo@ag.arizona.edu
|
| Jeannie Benally,
University of Arizona,
jbenally@ag.arizona.edu
|
| Abstract:
Extension faculty who work with the Federally Recognized Tribal Extension Program on five reservations in Arizona and New Mexico have employed diverse methods to evaluate their programs. They have found that certain types of evaluations work well with their indigenous community members. However, some standard methods used in county extension do not work as well or can be somewhat problematic. This round table discussion will present a summary of the evaluation methods used for diverse projects on each reservation and describe how each evaluation technique functioned across a range of project situations. We will also query participants in the round table discussion about effective evaluation approaches for Native Americans and other indigenous populations that they may have experienced, to aid us in understanding more effective ways to evaluate programs on the five reservations: the Navajo Nation, the San Carlos Apache Tribe, the Colorado River Indian Tribes, the Hualapai Tribe, and the Hopi Tribe.
|
| Roundtable Rotation II:
Indicators of Success in a Native Hawaiian Educational System: Implications for Evaluation Policy and Practice in Indigenous Programs |
|
Roundtable Presentation 301 to be held in the Limestone Boardroom on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Indigenous Peoples in Evaluation TIG
|
| Presenter(s):
|
| Ormond Hammond,
Pacific Resources for Education and Learning,
hammondo@prel.org
|
| Sonja Evensen,
Pacific Resources for Education and Learning,
evensens@prel.org
|
| Abstract:
This presentation will describe the results of a project whose purpose is to identify valid, reliable, and meaningful indicators of success for the Native Hawaiian Education Council (NHEC). The NHEC provides oversight to a system of U.S. federally funded education programs serving Native Hawaiians. The search for indicators has ranged from what the U.S. Department of Education (US ED) needs for Government Performance and Results Act (GPRA) reporting to what Native Hawaiian communities feel are important, though often apparently unmeasurable, outcomes. Following a presentation of the results of the indicators project, a series of targeted questions will be provided to stimulate participant discussion. These will include such questions as, “Can an indicators system include culturally meaningful but extremely subjective outcome measures?” and “Should an indigenous education system accept and make use of non-native outcome indicators?”
|
| In a 90-minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Collaborative Evaluation Policy Development With a State Office of Minority Health |
|
Roundtable Presentation 302 to be held in the Sandstone Boardroom on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Presenter(s):
|
| Betty Yung,
Wright State University,
betty.yung@wright.edu
|
| Peter Leahy,
University of Akron,
leahy@uakron.edu
|
| Robert Fischer,
Case Western Reserve University,
fischer@case.edu
|
| Lucinda Deason,
University of Akron,
deason@uakron.edu
|
| Carla Clasen,
Wright State University,
carla.classen@wright.edu
|
| Manoj Sharma,
University of Cincinnati,
manoj.sharma@uc.edu
|
| Abstract:
This roundtable will describe the collaborative development of evaluation policies by a statewide evaluation panel and the Ohio Commission on Minority Health (OCMH), a state grant-making agency. The policy development effort was designed to standardize and improve the evaluation of the health promotion and disease prevention projects funded by OCMH. The 7-member statewide panel, composed of faculty from four universities and a community agency representative, has worked with OCMH to: select uniform key outcome indicators in their funding priority areas; establish a pool of culturally competent program evaluators approved to provide external evaluation of OCMH projects; produce an Evaluation Guidance manual for OCMH grantees, potential grantees, and evaluators; modify the OCMH Request for Proposals, the grant review process, and grantee start-up meetings to emphasize evaluation expectations; and design a standardized format for reporting of program outcomes.
|
| Roundtable Rotation II:
Discovering Who Is "At The Helm" of Evaluation Practice in Public Health: Implications for the Content and Structure of Public Health Academic Programs |
|
Roundtable Presentation 302 to be held in the Sandstone Boardroom on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Presenter(s):
|
| Leslie Fierro,
Claremont Graduate University,
leslie.fierro@cgu.edu
|
| Abstract:
The public health community has been addressing the topic of public health workforce training for more than two decades. Large attendance by public health practitioners at evaluation trainings and the introduction of new cross-cutting public health competencies (which include evaluation) provide evidence of the need for evaluation training in the public health sector. A potential underlying reason for these needs may lie within the organization of public health disciplinary tracks. Five public health disciplines are currently recognized, not one of which is evaluation. Primary responsibility for conducting evaluation currently resides in the health policy and social and behavioral science disciplines yet, in practice, there appears to be no clear delineation of responsibility for evaluation to either. This roundtable will explore who is “at the helm” of public health evaluation practice, and discuss a variety of different options available for approaching evaluation education within public health academic institutions.
|
| In a 90-minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Translating Cultural Expertise Into Culturally-Specific Data Collection Tools: How to Increase the Evaluation Effectiveness of Community-Based Organizations |
|
Roundtable Presentation 303 to be held in the Marble Boardroom on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Multiethnic Issues in Evaluation TIG
|
| Presenter(s):
|
| Robin Kipke,
University of California Davis,
rakipke@ucdavis.edu
|
| Jeanette Treiber,
University of California Davis,
jtreiber@ucdavis.edu
|
| Abstract:
How do community-based organizations translate their understanding of client populations into culturally competent evaluation tools and practices? Many of these agencies devote most of their efforts and resources toward program outreach and intervention, and fail to translate their field-tested knowledge of how to engage communities into evaluation activities. In this session, we will explore how this gap can be bridged so that cultural expertise is incorporated into relevant evaluation tools and practices. This session will use the case study of People’s CORE, a grassroots organization designed to mobilize and empower Asian/Pacific Islander communities in central Los Angeles, as a starting point to: 1) discuss how community-based organizations can effectively develop culturally-specific data collection instruments, 2) explore what outside evaluators can contribute to building the evaluation capacity of these programs, and 3) share effective strategies for engaging Asian/Pacific islanders in evaluation activities.
|
| Roundtable Rotation II:
Evaluation Assisting Program Planning: A Technique to Choose Among Alternative Plans |
|
Roundtable Presentation 303 to be held in the Marble Boardroom on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Multiethnic Issues in Evaluation TIG
|
| Presenter(s):
|
| Hongyan Cui,
Western Michigan University,
h.cui@wmich.edu
|
| Abstract:
In many cases, program staff must choose among alternative plans to best serve program participants and achieve the best effects. This paper proposes the use of a multicriterion model that covers all major decision steps, from the initial statement of the problem to the choice among alternatives for achieving the desired result. The goals are treated as a function of the decisions to be taken, depending on the circumstances governing the problem. The application of “goal programming” and “weighting factors” methods in program evaluation will be discussed.
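
As a rough illustration of the "weighting factors" idea (the criteria, weights, and scores below are hypothetical, not taken from the paper), a weighted-score comparison of alternative plans can be expressed as:

    # Weighted-score comparison of alternative plans (all values hypothetical).
    criteria_weights = {"reach": 0.4, "expected_effect": 0.4, "cost": 0.2}

    # Scores on a common 0-10 scale; for "cost", higher means cheaper.
    plans = {
        "Plan A": {"reach": 7, "expected_effect": 6, "cost": 8},
        "Plan B": {"reach": 9, "expected_effect": 5, "cost": 4},
        "Plan C": {"reach": 6, "expected_effect": 8, "cost": 7},
    }

    def weighted_score(scores):
        return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

    best_plan = max(plans, key=lambda name: weighted_score(plans[name]))
    print(best_plan, round(weighted_score(plans[best_plan]), 2))

Goal programming extends this by minimizing weighted deviations from explicit target levels for each criterion rather than maximizing a composite score.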
|
|
Session Title: The Essence of Evaluation Capacity Building: Conceptualization, Application, and Measurement
|
|
Multipaper Session 304 to be held in Centennial Section A on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Chair(s): |
| Yolanda Suarez-Balcazar,
University of Illinois Chicago,
ysuarez@uic.edu
|
| Discussant(s): |
| Abraham Wandersman,
University of South Carolina,
wandersman@sc.edu
|
| Abstract:
Evaluation capacity building has become an important process for organizations to respond to demands for accountability from stakeholders. Although there is a large volume of literature on capacity building, only a few studies have experimentally examined the conceptualization and measurement of evaluation capacity. This multipaper panel will address these issues. The first paper, by Maras et al., describes a participatory approach to building evaluation capacity with a school district and addresses implications for informing policies, programs, and practices. The second paper, by Hunter et al., describes a quasi-experimental study of evaluation capacity building in two substance abuse prevention coalitions. The third paper, by Suarez et al., addresses the conceptualization of capacity building with community organizations; the authors propose a model of key factors that affect evaluation at the organizational level. Finally, Taylor-Ritzler and colleagues present the experimental validation of the model. All presenters will discuss implications for research and practice.
|
|
No Data Left Behind: Defining, Building, and Measuring Evaluation Capacity in Schools
|
| Melissa A Maras,
Yale University,
melissa.maras@yale.edu
|
| Paul Flaspohler,
Miami University of Ohio,
flaspopd@muohio.edu
|
|
Schools are increasingly held accountable for providing quality curricula and services that address academic and non-academic barriers to learning. They are collecting rich data that can be used for ongoing school improvement efforts focused on demonstrating outcomes. Unfortunately, schools often lack the capacity to use data to inform and evaluate their policies, programs, and practices. Building evaluation capacity in schools has been identified as one way that schools can meet accountability demands. Evaluation capacity is a complicated construct that has not been clearly defined; however, innovative approaches for building and measuring capacity in schools are emerging. The purpose of this paper is to explore definitions of evaluation capacity and describe a participatory approach to building evaluation capacity. Results from semi-structured participant interviews will be presented to further elucidate dimensions of evaluation capacity. Next steps for research and practice in the area of evaluation capacity in school contexts will be discussed.
|
|
Conducting Evaluation Capacity Building: Lessons Learned from a GTO Demonstration
|
| Sarah B Hunter,
RAND Corporation,
shunter@rand.org
|
| Patricia Ebener,
RAND Corporation,
pateb@rand.org
|
| Matthew Chinman,
RAND Corporation,
chinman@rand.org
|
|
The Getting To Outcomes Demonstration project, a quasi-experimental study of evaluation capacity building, was conducted in two substance abuse prevention community coalitions. Six programs received manuals, training, and technical assistance (TA) over two years. TA was assessed with logs of the mode, amount, and type of TA delivered and with focus groups and interviews with participating program staff. These data indicated the evaluation capacity areas in which the prevention coalitions received the most assistance and the types of TA activities undertaken. Examples of process and outcome evaluation capacity building will be presented. Challenges the coalition staff reported at the end of the demonstration (e.g., how to sustain gains in evaluation capacity) are discussed. The demonstration suggests several lessons learned about ways to improve evaluation capacity (e.g., TA providers ought to motivate, troubleshoot, and provide structure; assess evaluation needs early on; and continuously document progress) and sustain it after TA has ended.
|
|
Evaluation Capacity Building: An Analysis of Individual, Organizational and Contextual variables
|
| Yolanda Suarez-Balcazar,
University of Illinois Chicago,
ysuarez@uic.edu
|
| Tina Taylor Ritzler,
University of Illinois Chicago,
tritzler@uic.edu
|
| Edurne Garcia,
University of Illinois Chicago,
edurne21@yahoo.com
|
|
The purpose of this presentation is to discuss an interactive model of evaluation called The Evaluation Capacity Building Contextual Model. Based on our collective work with diverse CBOs, and on reviews of evaluation, cultural competence, and capacity building literatures, we have identified a number of organizational and individual factors that facilitate optimal evaluation capacity building. These factors can lead to or detract from efforts to institutionalize and mainstream evaluation practices and use evaluation findings within an organization. In addition, we discuss the role of contextual and cultural factors of the organization and the community which can facilitate or impede capacity building for evaluation. Evaluation capacity building is an important process for community organizations experiencing pressure for accountability from various stakeholders. In this presentation we will discuss implications for future research and practice.
|
|
Validation of an Evaluation Capacity Building Conceptual Model
|
| Tina Taylor Ritzler,
University of Illinois Chicago,
tritzler@uic.edu
|
| Yolanda Suarez-Balcazar,
University of Illinois Chicago,
ysuarez@uic.edu
|
| Edurne Garcia,
University of Illinois Chicago,
edurne21@yahoo.com
|
|
Most of the literature on evaluation capacity building has attended to process issues in building evaluation capacity (how to do it). Very little attention has been paid to issues of measurement (how to assess it). In terms of measurement, there are no published examples of validated instruments or systems for assessing evaluation capacity. The few published articles that have addressed measurement have reported only on evaluation products agencies generate (e.g., reports to funders), agencies' satisfaction with training and/or the evaluator's report that capacity was built. In this presentation, we will share the results of a validation study of our multi-factor and multi-method system for measuring evaluation capacity. Specifically, we will present results related to assessing individual, organizational, cultural and contextual factors that serve as a critical infrastructure for evaluation capacity, as well as the evaluation capacity building outcomes of use, mainstreaming and institutionalization of evaluation practices.
|
|
Session Title: Generalizability Theory in Analysis of Survey Data
|
|
Multipaper Session 305 to be held in Centennial Section B on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Lee Sechrest,
University of Arizona,
sechrest@u.arizona.edu
|
| Discussant(s): |
| Patrick McKnight,
George Mason University,
pem725@gmail.com
|
| Abstract:
Generalizability theory, although not yet much used in evaluation, has considerable potential utility. Its use is illustrated by analysis of survey data, for which generalizability analyses can provide insights to guide further analyses directed at answering substantive questions. Generalizability analyses can also help analysts make decisions about approaches that will yield dependable results and help in planning future surveys. Generalizability analysis is an analysis of variance that permits allocation of variance in responses to different sources. A large survey of Americans concerning the making of medical decisions provides the material for four papers. One focuses on generalizability of responses across survey items and the implications of the findings. A second focuses on generalizability across respondents. A third paper will show how generalizability may guide the development of specific further analyses, and a fourth paper will show how the results of generalizability analyses can inform the development of further surveys.
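
For orientation, the standard one-facet case (persons crossed with items, in the notation of Cronbach and colleagues) decomposes an observed item response and its variance as shown below; this is the generic textbook formulation, not necessarily the exact design used in the session's analyses.

    X_{pi} = \mu + \nu_p + \nu_i + \nu_{pi,e},
    \qquad
    \sigma^2_X = \sigma^2_p + \sigma^2_i + \sigma^2_{pi,e},
    \qquad
    E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pi,e}/n'_i}

Here \sigma^2_p, \sigma^2_i, and \sigma^2_{pi,e} are the person, item, and residual (interaction plus error) variance components, and E\rho^2 is the generalizability coefficient for a mean score over n'_i items.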
|
|
Generalizability Theory and Person Variance
|
| Lee Sechrest,
University of Arizona,
sechrest@u.arizona.edu
|
|
Generalizability theory may also be used to study the variance attributable to persons responding to various tasks such as surveys. Variance attributable to persons means that the means of individual persons across a set of items are not generalizable to a larger set. At some level, researchers will almost always want the proportion of variance attributable to persons to shrink to zero, i.e., it will be most useful and meaningful to know that certain subsets of persons are very much alike in their responses. If variance is attributable to persons, further analyses should show what characteristics of persons are associated with differences among them. Main effects of persons may be interesting in their own right, but interactions between persons and other design factors are also usually of interest, as is shown by responses to the national survey.
|
|
Generalizability D Study in Planning Productive Analyses
|
| Mei-Kuang Chen,
University of Arizona,
kuang@u.arizona.edu
|
| Patrick McKnight,
George Mason University,
pem725@gmail.com
|
| Lee Sechrest,
University of Arizona,
sechrest@u.arizona.edu
|
|
In generalizability theory, a D study refers to the determination of the extent to which results for one observation or some subset of observations are generalizable to other similar observations. That determination then makes it possible to estimate how many such observations would be required to achieve a predetermined level of reliability. Data from large-scale studies, e.g., surveys, can often be broken down in many different ways for the purpose of conducting sub-analyses. Such analyses may be disappointing, however, and even misleading, if the number of observations in the subset is not sufficient to produce dependable estimates. Illustrative uses of D studies to estimate the required number of observations for items, topics, and persons in the national data set on medical decision making are instructive. Such analyses can show, among other things, whether optimal use of the available data may make replications possible.
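
A D-study projection of this kind reduces, in the simple persons-by-items case, to a few lines of code; the variance components below are hypothetical stand-ins for estimates that would come from a G-study ANOVA.

    # Projected generalizability coefficient for scores averaged over n_items items
    # (hypothetical variance components; persons-by-items design).
    def projected_g(var_person, var_residual, n_items):
        return var_person / (var_person + var_residual / n_items)

    for n in (1, 5, 10, 20, 40):
        print(n, round(projected_g(0.25, 1.0, n), 3))

The printed values show how quickly (or slowly) dependability rises as more observations are averaged, which is the information needed to judge whether a proposed sub-analysis rests on enough observations.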
|
|
Generalizability Theory D Studies in Planning for Future Research
|
| Sally Olderbak,
University of Arizona,
sallyo@email.arizona.edu
|
| Patrick McKnight,
George Mason University,
pem725@gmail.com
|
| Lee Sechrest,
University of Arizona,
sechrest@u.arizona.edu
|
|
Generalizability D studies can be used in planning future research to maximize statistical power and efficiency while minimizing the data to be collected. D studies can show how many respondents are needed to achieve a specified level of reliability, which could make it possible to reduce response burden by not requiring that every item be asked of every respondent. D studies can also show how many items might be required, or how many topics might need to be addressed, in order to obtain dependable results. Data from the National Survey of Medical Decision Making can be used to determine the specific design characteristics that any follow-on surveys would require in order to obtain dependable data. Follow-on surveys, if required, can then be conducted efficiently so as to conserve resources of all kinds.
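
Turning the same projection around gives a planning rule: given G-study variance components (the values below are hypothetical), solve for the smallest number of items per respondent that reaches a target generalizability coefficient, and design the follow-on survey around that number.

    import math

    # Smallest number of items reaching a target generalizability coefficient
    # (hypothetical variance components; persons-by-items design).
    def items_needed(var_person, var_residual, target=0.80):
        required = target * var_residual / ((1 - target) * var_person)
        # Round before ceiling to guard against floating-point noise.
        return math.ceil(round(required, 6))

    print(items_needed(0.25, 1.0, target=0.80))  # 16 items in this hypothetical case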
|
|
Session Title: Evaluation of Adult Science Media: Retrospect and Prospect
|
|
Panel Session 306 to be held in Centennial Section C on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Distance Ed. & Other Educational Technologies TIG
|
| Chair(s): |
| Saul Rockman,
Rockman et al,
saul@rockman.com
|
| Abstract:
The National Research Council's Board on Science Education is exploring the past, present, and future of informal science education (ISE). As a component of this effort, several of the presenters developed a background paper on Media-based Science Learning in Informal Environments (http://www7.nationalacademies.org/bose/Rockman_et al_Commissioned_Paper.pdf). The commissioned paper reviews evaluations of science television and radio programs and IMAX movies designed for the adult learner and compares the evaluation strategies to those used in studying children's media. It also recommends evaluation strategies that will capture impacts beyond the 'water cooler discussion' stage. More recent efforts by the authors and their colleagues illustrate some of the newer evaluation strategies, at least new to this area of media studies, that provide insights into the study of general-audience, television-based science learning. Strategies include longitudinal studies with repeated measurement, panel studies, and transfer tasks. The findings of these studies will also be presented.
|
|
Retrospective Review of Evaluations of Adult Science Media
|
| Saul Rockman,
Rockman et al,
saul@rockman.com
|
|
Over the past 20 years, NSF, along with corporate and foundation funders, has supported science television and radio programs. National Geographic Specials, NOVA, and programs on the Discovery Channel have informed the general public about issues of historical, current, and future interest in science. Many of these initiatives have had both formative and summative evaluations attached to them, with reports back to the producers and the funding agencies. While formative studies have contributed significantly to improvements in programming, studies of program outcomes and effects have traditionally been ineffective in establishing actionable findings. As the head of an organization that has conducted many of these evaluations, and as the senior author of the NRC paper on media and science learning, I have been concerned that the strategies supported have been wasteful. In contrast to children's media studies, we start with a different structure and a constraint of short-term outcomes.
|
|
|
Informal Science Evaluation Methodologies
|
| Jennifer Borse,
Rockman et al,
jennifer@rockman.com
|
|
The constraints on the methods and procedures for studying informal science media materials have been a function of both the funding patterns and the production cycle of the programs broadcast on radio and television. There are evaluation strategies that can provide a greater understanding of both cumulative impact and immediate outcomes but that have not been part of the standard arsenal of evaluation tools for media researchers. We identify and encourage the application of several evaluation approaches that can further the goals of the funding agencies and provide a greater understanding of impact for program developers. These include both longitudinal strategies and transfer tasks that can capture audience reactions and actions in the real world. We show how these strategies are in the best interests of all interested parties. The presenter, a co-author of the NRC paper, has been an evaluator of numerous NSF-funded informal science education projects.
| |
|
Assessing Knowledge of Exploring Time
|
| Kristin Bass,
Rockman et al,
kristin@rockman.com
|
|
This presentation will describe the design and implementation of an online assessment of Exploring Time, a two-hour prime-time television special that 'reveals the unseen world of natural change and probes the deep mysteries of Time' (Exploring Time, 2008). The assessment included fixed-choice and open-ended tasks that assessed viewers' understanding of the content in the film and their ability to transfer that knowledge to new situations. We will share our process of construct identification, question creation, and instrument review or validation (Wilson, 2005). We'll talk about what worked well, what we wish we had done differently, and what we learned about assessing adult learning in science media. We'll also share the results of our study and discuss what the instrument contributed to our overall evaluation. Our presentation is intended to encourage evaluators and funders to demand and develop instruments that are sensitive to the complexities of adult learning.
| |
|
Survey and Panel Studies of Quest Science Programming
|
| Monnette Fong,
Rockman et al,
monnette@rockman.com
|
|
Quest is a science and nature show based in northern California, funded by NSF and several foundations. San Francisco's KQED disseminates the materials on three platforms: television, radio, and the web. We will describe the surveys and panels used to evaluate the influence of Quest on adult learning and behavior and to acquire data about sustained interest and activities. A combination of surveys and panels is conducted with Bay Area residents to explore their media consumption and behavior associated with science and environmental activities. Each month for four successive months, panelists are asked to visit the website and complete a questionnaire about their science, environment, and nature activities over the past two weeks, and a sample participates in a telephone interview. Through the surveys and this panel, we obtain granular data about the influence of Quest across all of its dissemination strategies. The presenter is the manager of these studies.
| |
|
Session Title: How to Write an Evaluation Plan: A Step-by-Step Guide for New Evaluators
|
|
Demonstration Session 308 to be held in Centennial Section E on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Graduate Student and New Evaluator TIG
|
| Presenter(s): |
| Kai Young,
Centers for Disease Control and Prevention,
deq0@cdc.gov
|
| Maureen Wilce,
Centers for Disease Control and Prevention,
mwilce@cdc.gov
|
| Linda Leary,
Centers for Disease Control and Prevention,
lsl1@cdc.gov
|
| Abstract:
Having a clearly written evaluation plan prior to conducting an evaluation has become standard practice. Clients and funders often ask for a formal evaluation plan before committing to agreements to have evaluations conducted. This session will discuss the importance of having a detailed plan prior to conducting an evaluation, to ensure that the evaluation will meet the standards of a 'good' evaluation and that the time, energy, and resources invested in evaluation will be worthwhile. An evaluation plan template, as well as a step-by-step guide that has been pilot tested and used, will be provided to new evaluators and program staff for working through the development and writing of an evaluation plan. We will also discuss how we developed the guide using the following techniques: breaking down concepts, operationalizing concepts using terms program staff could understand, developing worksheets to assist in organizing information, and providing real-life examples to foster understanding.
|
|
Session Title: Research on Evaluation Influence and System Change Efforts
|
|
Multipaper Session 309 to be held in Centennial Section F on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Research on Evaluation TIG
|
| Chair(s): |
| Chris LS Coryn,
Western Michigan University,
chris.coryn@wmich.edu
|
|
Refining the Assessment: Using the Three I's Model for Documenting Systemic Change
|
| Presenter(s):
|
| Anna Lobosco,
New York State Developmental Disabilities Planning Council,
alobosco@ddpc.state.ny.us
|
| Dianna Newman,
University at Albany - State University of New York,
dnewman@uamail.albany.edu
|
| Abstract:
Development of a model for evaluating programs whose intent is systems change in education and the human services has been guided by a meta-evaluation of over 100 evaluation case studies using the Three Is model. The Three Is refers to the stages of systems change: initiation, implementation, and impact. This paper describes the use of multiple methods and indicators that have worked at each stage and that serve to operationalize systemic change. Many examples show how four factors – program policies and procedures, infrastructure, design and delivery of services, and expected consumer outcomes/experiences – interact with four concomitant factors – program climate/culture, capacity building, sustainability, and leadership – to document systems change.
|
|
An Investigation of the Use of Network Analysis to Assess Evaluation Influence in Large, Multi-site NSF Evaluations
|
| Presenter(s):
|
| Denise L Roseland,
University of Minnesota,
rose0613@umn.edu
|
| Boris Volkov,
University of Minnesota,
volk0057@umn.edu
|
| Jean King,
University of Minnesota,
kingx004@umn.edu
|
| Kelli Johnson,
University of Minnesota,
johns706@umn.edu
|
| Frances Lawrenz,
University of Minnesota,
lawrenz@umn.edu
|
| Stacie A Toal,
University of Minnesota,
toal0002@umn.edu
|
| Lija Greenseid,
University of Minnesota,
gree0573@umn.edu
|
| Abstract:
This paper investigates the challenges and opportunities of using social network analysis to provide one measure of evaluation influence in large, multi-site evaluations. This research is part of a larger project studying the use and influence of evaluations of four NSF-funded programs by examining the long-term influence of the evaluations on project staff, the science, technology, engineering, and mathematics (STEM) community, and the evaluation community.
The paper critiques an innovative way of applying social network analysis to research on evaluation influence. The network analysis was used to track the perceived influence of the four evaluations, their PIs, and related publications in the STEM education field. This research explores the usefulness of network analysis for helping to ascertain the broad influence of evaluations.
|
|
External Evaluation in New Zealand Schooling: Engagement, Influence and Improvement
|
| Presenter(s):
|
| Ro Parsons,
Ministry of Education,
ro.parsons@paradise.net.nz
|
| Abstract:
The New Zealand Education Review Office (ERO) reviews and reports on all New Zealand schools on a regular cycle. ERO’s purpose is to provide external evaluation that contributes to high quality education while maintaining accountability functions.
The research study investigated how ERO's approach assisted two schools to improve and examined the effect of external evaluation over time. The findings show that ERO's approach has a differential influence in each evaluation context. School evaluation history, and the complex interaction between evaluator practice, school conditions, and participants during the evaluation process, influenced how participants responded to the evaluation and how the evaluation assisted a school to improve. A tentative theory of education review is proposed that posits external evaluation as situational and its influence as socially constructed, mediated through a process of engagement between two organizations with a common purpose. This research expands our understanding of the relationship between external evaluation and school improvement.
|
|
Research on Evaluation Practice: Preliminary Findings on Use of Logic Model Approaches in Large Scale Educational Reform Projects
|
| Presenter(s):
|
| Rosalie T Torres,
Torres Consulting Group,
rosalie@torresconsultinggroup.com
|
| Rodney Hopson,
Duquesne University,
hopson@duq.edu
|
| Jill Casey,
Torres Consulting Group,
jill@torresconsultinggroup.com
|
| Abstract:
Within the field of evaluation, several tools for explicating program theory and design are in use, among them logic models, theories of action, theories of change, and tools associated with systems approaches. Virtually all major texts and practical evaluation guides include sections on logic model use. Over the last several decades the topic has been equally visible in journal articles and monographs, as well as in annual conference and professional development sessions of the American Evaluation Association and the American Educational Research Association. This paper will review preliminary findings on the actual use of these tools from a series of National Science Foundation-funded case studies of large-scale, multi-year, multi-partner education reform projects. The paper will present emerging findings on key factors influencing the development, design, content, and use of these tools, as well as the benefits of and challenges to their use. Implications for further research, and for evaluators and program staff currently using or seeking to employ these tools in their practice, will be highlighted.
|
| | | |
|
Session Title: Photography Enhances Evaluation Processes and the Study's Usability
|
|
Panel Session 310 to be held in Centennial Section G on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Chair(s): |
| Nancy Grudens-Schuck,
Iowa State University,
ngs@iastate.edu
|
| Abstract:
Photography was integrated into two federal grant evaluations:
1. A National Science Foundation (NSF) grant tested a field-based, inquiry-focused geoscience course for pre-service teachers. Quantitative pre/post research instruments tested knowledge and skills gained, as well as changes in attitudes and self-efficacy. Qualitative processes explored the impact on students' (a) thinking about the inquiry process and (b) desire, as future teachers, to integrate inquiry-based learning processes into geoscience lessons.
2. Three Federal grants created a 10-year partnership between the University of Nebraska-Lincoln and the Technological University of Tajikistan to develop a Textiles Museum and Entrepreneur Center for Extension Education. Upon completion of the grants, structured personal interviews with faculty in Tajikistan identified the changes, or improvements, in classes/programs/projects/businesses that resulted from the 10-year collaboration with a developing country.
The panel will describe how photography helped the evaluators understand and interpret the results of the quantitative and qualitative instruments, and will offer recommendations for the reporting process.
|
|
Integrating Photography for Process Evaluation and Outcome Documentation
|
| S Kay Rockwell,
University of Nebraska Lincoln,
krockwell1@unl.edu
|
|
Photography was chosen as one method in two different evaluation studies: one designed to meet process evaluation needs and the other designed to meet outcome evaluation needs. This paper will describe why photography was selected, the challenges it presented, the issues that needed to be addressed, the problems that surfaced, how the photos were managed and used in reports, and the expected and unexpected value photography had for the project leaders. The term 'Embedded Reporter' will be contrasted with 'Participant-Observer' for the process evaluation study. The Photo Elicitation process used with small-group interviews in the process evaluation study will also be described, and the advantages and disadvantages of integrating a Photo Elicitation process will be proposed. The audience will be invited to describe how photography either enhanced, or could have enhanced, one of their recent evaluations, and the problems they encountered.
|
|
|
Photo Enhancement of Structured Personal Interviews to Document Project Outcomes
|
| Julie Albrecht,
University of Nebraska Lincoln,
jalbrecht1@unl.edu
|
| Kathleen Prochaska-Cue,
University of Nebraska Lincoln,
kprochaska-cue1@unl.edu
|
|
Photography was used to enhance the evaluation of a ten-year project between the Technological University of Tajikistan and the University of Nebraska-Lincoln (three US State Department grants). Structured personal interviews with a set of open-ended questions were used to ask 16 students, faculty, and administrators for examples of projects that resulted from their training experience at the University of Nebraska-Lincoln. Digital photographs were taken at the time of the formal interviews as well as throughout the activities planned for the site visit. Evaluation goals and the open-ended questions provided guidelines for sorting and summarizing the data and photographs. In the final report, photographs verified the written evaluation of the implementation of the textile museum, the entrepreneur center, classroom use of teaching methods and student involvement, university changes, and community involvement as outcomes for the granting agency. The use of photographs to enhance evaluation was shared via the web for teaching the evaluation process.
| |
|
Enhancing Process Evaluation through an Embedded Reporter Approach Using Photography
|
| Gwen Nugent,
University of Nebraska Lincoln,
gnugent1@unl.edu
|
| Gina Kunz,
University of Nebraska Lincoln,
gkunz2@unl.edu
|
|
This presentation will discuss how photography served as a key evaluation strategy on an NSF Geoscience Education project that also had major quantitative and qualitative research components. The quantitative component examined the impact of a field-based geoscience course for pre-service teachers, looking at the outcome variables of content knowledge, inquiry skills, and attitudes toward science. The qualitative component focused on analysis of student field books, providing a window into students' thinking about the inquiry process and the aspects of the field-based experience that enhanced their development as students and future teachers of geoscience content. Despite the valuable data provided by this mixed-methods approach, it was clear that the numbers (quantitative analysis) and words (qualitative analysis) did not capture the full nature of the students' experience. The photographs provided valuable documentation of students' frustrations, confusion, understandings, and successes across the 14-day field experience.
| |
|
Session Title: Program Theory and Theory Driven Evaluation TIG Business Meeting and Presentation: How Foundations Use Program Theory to Improve Policy
|
|
Business Meeting Session 311 to be held in Centennial Section H on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Program Theory and Theory-driven Evaluation TIG
|
| TIG Leader(s): |
|
John Gargani,
Gargani and Company Inc,
john@gcoinc.com
|
|
Katrina Bledsoe,
Walter R McDonald and Associates Inc,
kbledsoe@wrma.com
|
| Chair(s): |
|
Katrina Bledsoe,
Walter R McDonald and Associates Inc,
kbledsoe@wrma.com
|
| Presenter(s): |
| Charles Gasper,
Missouri Foundation for Health,
cgasper@mffh.org
|
| Huilan Yang,
W K Kellogg Foundation,
hy1@wkkf.org
|
| Discussant(s): |
|
Stewart I Donaldson,
Claremont Graduate University,
stewart.donaldson@cgu.edu
|
|
John Gargani,
Gargani and Company Inc,
john@gcoinc.com
|
| Abstract:
Foundations must evaluate the credibility of new policy initiatives in the absence of data, as well as monitor, modify, and evaluate the initiatives they undertake. Increasingly over the past decade, theory-based techniques, ranging from logic models to systems thinking, have been used to promote these disparate activities. This session will examine how the W. K. Kellogg Foundation and the Missouri Foundation for Health use program theory and other theory-based concepts to promote policymaking. The session will be framed by a presentation describing the ways in which theory-based approaches might be used as policy tools and the benefits they can produce. Representatives from the foundations will provide real-world perspectives on the use of theory-based approaches and their promised benefits. And discussants will provide insights on the proposed and actual uses of theory-based approaches in philanthropy.
|
|
Session Title: Program-Level Evaluation in Medical and Health Science Education: Policies, Practices, and Designs
|
|
Multipaper Session 312 to be held in Mineral Hall Section A on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| Susan Boser,
Indiana University of Pennsylvania,
sboser@iup.edu
|
| Discussant(s): |
| France Gagnon,
University of British Columbia,
fgagnon@medd.med.ubc.ca
|
|
Developing and Evaluating an Innovative Clinical Education Simulation Curriculum for a Nursing Program
|
| Presenter(s):
|
| Mary Piontek,
University of Michigan Ann Arbor,
mpiontek@umich.edu
|
| Abstract:
This paper discusses my collaboration as evaluation consultant with the University of Michigan’s School of Nursing on developing, implementing, and evaluating a curricular revision process for the School’s Initiative for Excellence in Clinical Education, Practice, and Research. The curricular revision includes the development of a series of simulation modules and teaching materials to support deeper integration of clinical skills and knowledge into the BSN degree program. The potential for documenting the curricular development, evaluation, and educational research processes is unique in that an evaluator is involved from day one of the development process. The paper will describe the process of facilitating the curricular revision and how the use of educational and professional standards shaped the process; it will focus on the evaluation design for assessing the quality of the simulation modules and the impact of the curricular revision on student learning and development of clinical skills and knowledge.
|
|
Nursing Curriculum Evaluation Using National League for Nursing Accrediting Commission Standards and Criteria
|
| Presenter(s):
|
| Vicki Schug,
College of St Catherine,
vlschug@stkate.edu
|
| Abstract:
Systematic curricular evaluation is essential to assure integrity of baccalaureate nursing programs. This session will provide participants with information about an innovative approach to curricular evaluation using the accreditation standards and criteria required for nursing programs by the National League for Nursing Accrediting Commission (NLNAC). This holistic approach has provided a more comprehensive perspective for nursing faculty and administrators. A conceptual framework will be presented that illustrates the dynamic relationship of curricular content, context, and conduct. It is suggested that curricular content cannot be isolated and must be examined in light of the milieu or context of curricular delivery as well as the conduct or implementation of curriculum. The NLNAC standards of Mission and Governance (I), Faculty (II), and Students (III) relate to the "context" of curricular enactment. Standard IV, Curriculum and Instruction, pertains to the curricular "content." The "conduct" of curricular delivery is addressed through NLNAC standards of Resources (V), Integrity (VI), and Educational Effectiveness (VII). Results of this “best-practice” curricular evaluation approach will be shared along with recommendations and implications for the future.
|
|
Studying Curriculum Implementation and Development Through Sustained Evaluation Inquiry: Using Focus Groups to Study Innovations in an Undergraduate Nursing Program
|
| Presenter(s):
|
| William Rickards,
Alverno College,
william.rickards@alverno.edu
|
| Abstract:
The undergraduate nursing program at a Midwestern college introduced innovations in its curriculum. After three years, when the first cohort of students was ready to graduate, a program of focus groups was begun that would engage each of the graduating cohorts over the next four semesters. The results of these groups and the subsequent deliberations of the nursing faculty provide a picture of program implementation, described in relation to student learning outcomes, student experience, and faculty practices. The processes for conducting and analyzing the focus groups, as well as the approaches to reporting results to faculty and deliberating on the student experiences, are described in this paper. Across the four groups, students discussed different emphases in their experiences, with implications for how the nursing curriculum was developing during this period. The group data, in conjunction with curriculum and NCLEX performance data, provide insights into how program innovations proceed.
|
|
Challenges Evaluating Comparability: Lessons Learned Evaluating a Fully Distributed Medical Undergraduate Program
|
| Presenter(s):
|
| Chris Lovato,
University of British Columbia,
chris.lovato@ubc.ca
|
| Caroline Murphy,
University of British Columbia,
caroline.murphy@ubc.ca
|
| France Gagnon,
University of British Columbia,
fgagnon@medd.med.ubc.ca
|
| Angela Towle,
University of British Columbia,
atowle@medd.med.ubc.ca
|
| Abstract:
The University of British Columbia (UBC) implemented a fully-distributed undergraduate medical education program at three separate geographic sites in 2004. Comparability of educational experiences is essential to the success of the program, for purposes of accreditation, educational outcomes, and program policy and decision making. Evaluation of comparability is complex and requires more than a simple analysis of test performance across sites. How should comparability be operationalized to evaluate program success? What methods are most appropriate for evaluating comparability? This paper will describe an approach to evaluating comparability of UBC’s medical undergraduate program, including methodological challenges and reflections regarding the operational definition of comparability. Findings regarding students’ educational experiences and how they relate to performance will be presented. We will discuss our perceptions regarding the utility and accuracy of the methods used and provide recommendations regarding further development of this area.
|
|
Evaluating the Graduate Education in Biomedical Sciences Program: A Mixed Method Study
|
| Presenter(s):
|
| Laurie A Clayton,
Higher Education,
lclayton@rochester.rr.com
|
| Abstract:
National initiatives, global competition, and public health needs require scientists who have interdisciplinary training to meet these 21st century challenges. In an effort to offer a broad environment from which to enhance the interdisciplinary training of its doctoral students, the University of Rochester School of Medicine and Dentistry developed the Graduate Education in the Biomedical Sciences Program (GEBS). The purpose of this study was to evaluate the GEBS program and the interests and experiences of its graduate students.
Implementation of the mixed method design was sequential, interpreting data through document analysis followed by a combination of quantitative (survey) and qualitative (focus group and interview) methods.
Evaluation results moderately supported the GEBS program’s theoretical framework, its interdisciplinary orientation, and its objectives. Doctoral students utilize a combination of experiences, decision-making skills, and the GEBS in the selection of their Ph.D. program.
|
| | | | |
|
Session Title: Evaluating Programs for Older Adults: Exploring Challenges and Lessons Learned in this New Frontier
|
|
Panel Session 313 to be held in Mineral Hall Section B on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Human Services Evaluation TIG
and the Alcohol, Drug Abuse, and Mental Health TIG
|
| Chair(s): |
| Kathryn Bowen,
Centerstone Community Mental Health Centers,
kathryn.bowen@centerstone.org
|
| Abstract:
With the aging of the baby boom generation, there is increasing emphasis on programs and policies to meet the needs of this burgeoning population. This session presents evaluations of a variety of programs for older adults in the areas of mental health, health promotion and disease prevention, and home and community-based care. The panelists will present particular challenges in evaluating programs targeted to older adults, and discuss lessons learned in their respective evaluations.
|
|
The Integration of Mental and Physical Health: An Evaluation of Older Adults with Depression Treated in a Primary Care Setting
|
| Jules Marquart,
Centerstone Community Mental Health Centers,
jules.marquart@centerstone.org
|
| Ajanta Roy,
Centerstone Community Mental Health Centers,
ajanta.roy@centerstone.org
|
| Catherine Sewall,
Centerstone Community Mental Health Centers,
catherine.sewall@centerstone.org
|
|
This paper describes the evaluation of an evidence-based model for treating older adults in a primary care practice. The presenter will describe the treatment model, evaluation design and methods, and preliminary findings. The presenter will discuss ways in which the evaluation findings were presented to and utilized by a variety of stakeholder groups, including program clients, and particular challenges in reaching older adults for both treatment and the evaluation data collection.
|
|
|
Evaluation of Older Adult Treatment Services for Co-Occurring Disorders in a Community Mental Health Center
|
| Ajanta Roy,
Centerstone Community Mental Health Centers,
ajanta.roy@centerstone.org
|
| Catherine Sewall,
Centerstone Community Mental Health Centers,
catherine.sewall@centerstone.org
|
| Jules Marquart,
Centerstone Community Mental Health Centers,
jules.marquart@centerstone.org
|
|
Substance abuse and misuse of prescription drugs are areas of concern in the older adult population, and expansion of services in this area is greatly needed. This paper describes the evaluation of an evidence-based model for treating older adults with co-occurring disorders (substance abuse and mental health disorders). The approach uses a cognitive-behavioral and self-management (CB/SM) intervention in a counselor-led group treatment setting to help older adults overcome substance use disorders. The presenter will describe the treatment model, evaluation design, and findings from the evaluation. Assessment tool results will show the effectiveness of the program in reducing substance use and mental health symptoms at follow-up periods. The presenter will also present the challenges in engaging older adults in treatment and recommendations for outreach.
| |
|
Evaluation of a Health Promotion and Disease Prevention Program Among Older Adults in Hawaii
|
| Kathryn Braun,
University of Hawaii,
kbraun@hawaii.edu
|
| Michiyo Tomioka,
University of Hawaii,
mtomioka@hawaii.edu
|
| Noemi Pendleton,
University of Hawaii,
npendleton@hawaii.edu
|
|
Older Americans are disproportionately affected by chronic diseases and conditions, such as arthritis, diabetes and heart disease, as well as by disabilities that result from injuries such as falls. With funding from the US Administration on Aging (AoA), Hawaii is replicating Enhance Fitness (EF) from Seattle and the Chronic Disease Self Management Program (CDSMP) from Stanford to reduce older people's risk of disease, disability and injury. The evaluation measured four areas: 1) if local service providers trained in the curricula master training objectives; 2) if providers replicate the curricula with fidelity; 3) if the seniors improve their strength or self-management skills; and 4) if the service providers increase their capacity to effectively replicate evidence-based programs. Instruments and findings on training, fidelity, client outcomes, and capacity-building outcomes will be presented.
| |
|
Balancing Policy, Planning and Program Quality: Using the Balanced Scorecard in the Utilization of Data from a Survey of Consumers of Community-Based Long Term Care
|
| Melanie Hwalek,
SPEC Associates,
mhwalek@specassociates.org
|
| Kathy Kueppers,
Area Agency on Aging 1-B,
kkueppers@aaa1b.com
|
| Victoria Straub,
SPEC Associates,
vstraub@specassociates.org
|
|
This paper shows how a balanced scorecard approach was used to synthesize results from the Area Agency on Aging 1-B's 2007 telephone survey of 411 participants in its Community Care Management and MI Choice Medicaid Waiver programs. In both programs, care managers provide comprehensive assessments, develop care plans, and monitor community services brokered on behalf of enrolled participants and purchased from a pool of vendors. Survey items were based on tools from the Administration on Aging's Performance Outcome Measurement Project (POMP) and on recommendations of a Consumer Advisory Group. A technique based on the balanced scorecard was used to roll up individual item ratings into five dimensions of quality for care management, and also for the direct care worker. The evaluation results then drill down to the specific items for each quality dimension, providing data for use at the policy level, as well as care manager-specific and vendor-specific results for performance monitoring.
| |
|
Session Title: Evaluating the Impact of Using a Human Rights Framework to Engender Social Change
|
|
Multipaper Session 314 to be held in Mineral Hall Section C on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Chair(s): |
| Taj Carson,
Carson Research Consulting Inc,
taj@carsonresearch.com
|
| Abstract:
Through funding for projects on a range of issues (housing, labor, gender, education, etc.) throughout the country, the Mertz Gilmore Foundation seeks to determine the value and effectiveness of the human rights framework for tackling social, economic, and/or political justice issues inside the U.S. Rather than merely funding good human rights work, the Foundation is funding projects that will provide good information about how the human rights framework works.
In order to learn more about the effectiveness of the human rights framework, the Foundation designed a two-stage evaluation process: (1) at the individual project level, and (2) across the entire program. This session presents findings from three of these projects.
|
|
Hitting One out of the Park: How the United Workers Association Won the Fight for a Living Wage at Camden Yards
|
| Taj Carson,
Carson Research Consulting Inc,
taj@carsonresearch.com
|
|
The goal of the Living Wages at Camden Yards Campaign was to secure above-poverty working conditions for every cleaner at Camden Yards by the end of the 2008 season. The United Workers Association used a human rights framework to mobilize day laborers at Camden Yards and guarantee workers the current Baltimore city 'living wage' of $9.06 an hour.
The evaluation of the Living Wages Campaign is designed to determine how the human rights framework was used by the UWA, what the impact of that framework was, and what impact contextual factors, such as the passage of a living wage bill in Maryland in the preceding year, had on the Campaign's efforts. The evaluation revealed that contextual factors and the direct action of the UWA combined to create pressure on the Maryland Stadium Authority that led to achievement of the living wage for the 2009 season.
|
|
Insight + Influence = Impact: An Evaluation of a United States Human Rights Project
|
| Susan Lloyd,
Lloyd Consulting Inc,
slloyd@lloydconsulting.com
|
|
The evaluation of the From Poverty to Opportunity Campaign is assessing whether, to what extent, and how the use of a human rights framework contributes to a changed public discussion of extreme poverty that is reflected in the policymaking process at the state level and influences the advocacy and communications of activists and legislative leaders. It is collecting information to see whether the application of the framework generates new insight and understanding; influences individual and institutional actions; and contributes, directly or indirectly, to specific outcomes. Thus far, the evidence argues for the efficacy of a human rights framework: the Campaign has resulted in a new and different public dialogue about the right to live free of poverty, mobilized a broader base of constituents than previous anti-poverty efforts, and is on its way to passing legislation introduced in January 2008.
|
|
Using Human Rights to Increase Prisoner's Rights: An Evaluation of the Bridging the Gap Project at Stop Prisoner Rape
|
| Robert Dumond,
Consultant for Improved Human Services LLC,
rwdumond@aol.com
|
|
Evaluation of Stop Prisoner Rape's "Bridging the Gap" project seeks to measure how incorporating international human rights principles into three broad strategies in a prison setting can create increased safety and justice in the institution. Though a correctional setting presents relatively unique challenges in applying traditional evaluation techniques, data are being collected to determine the impact on correctional staff attitudes and subsequent behavior, inmate awareness and choices, and institutional policies and procedures. Preliminary evidence appears promising, with some healthcare and security staff demonstrating an increased awareness and a more humane stance toward inmates. There is also the recognition that the "messenger" imparting the knowledge may be as important as the message in creating change, and that ultimate climate change is significantly affected by senior agency leadership. Despite the challenges, the project may contribute important knowledge to both evaluation practice and social change.
|
|
Session Title: At the Intersection of Policy and Practice in Multisite Evaluation: Perspectives From Three Different Fields
|
|
Panel Session 315 to be held in Mineral Hall Section D on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Chair(s): |
| Michael Bates,
Mosaic Network Inc,
mbates@mosaic-network.com
|
| Discussant(s):
|
| Michael Furlong,
University of California Santa Barbara,
mfurlong@education.ucsb.edu
|
| Abstract:
Complex, multisite community evaluations present a number of unique challenges, including difficulties balancing sometimes competing needs and expectations at global and local levels; collecting information across programs diverse with respect to evaluation capacity, programmatic focus, and geographical location; designing and implementing comprehensive evaluation frameworks; and producing and utilizing meaningful results that inform both policy and practice. In this presentation, we will present practical information about our experiences with multisite evaluations from three different perspectives: program evaluation in large urban public school districts; evaluation of international aid and development programs; and evaluation of early childhood initiatives through government-university partnerships. Panelists will discuss how evaluation policies have influenced evaluation practices, how evaluation results have influenced evaluation policies, and the benefits, challenges, and lessons learned in working directly with policy makers and frontline service providers.
|
|
At the Intersection of Policy and Practice in Multisite Evaluation: Perspective from NYC Outward Bound
|
| Courtenay Carmody,
Carmody Consulting Evaluation Services,
cc@carmodyconsulting.net
|
|
Following seven years as Director of Research and Evaluation for MOUSE, a national, multisite nonprofit creating opportunities for underserved students, Courtenay Carmody launched Carmody Consulting Evaluation Services to provide accountability to funders and constituents in the New York City (NYC) nonprofit sector. Ms. Carmody has worked with NYC Outward Bound (NYCOB) to establish a means for meaningful, efficient reporting of program outcomes to funders while providing a useful tool that principals can use to improve teaching and learning. In this presentation, she will share her experiences working with NYCOB's National Expeditionary Learning model, which improves schools through specific pedagogy, student activities, and professional development. She will also discuss the challenges of working with a large urban school district, her experiences integrating participant-, school-, and indicator-level data from multiple external databases into a single coherent system, and some lessons learned implementing NYCOB's data management system in its first year.
|
|
|
At the Intersection of Policy and Practice in Multisite Evaluation: Perspective from International Aid
|
| Todd Nitkin,
Medical Teams International,
tnitkin@medicalteams.org
|
|
Todd Nitkin is the Washington, DC representative and Monitoring and Evaluation Specialist for Medical Teams International, an international aid organization that provides humanitarian relief and development to people affected by disaster, conflict, and poverty around the world, and chair of the Monitoring and Evaluation working group of the Core Group, a coalition of nongovernmental organizations addressing child survival and maternal health issues worldwide. Dr. Nitkin has helped to adapt and improve the tools and methodologies used for monitoring and evaluation of large multisite international development projects. In this presentation, he will share his experiences working with both funders and direct service providers in the international aid community. He will also discuss the particular challenges of working with the political realities of developing nations, the resulting impact on evaluation policy and practice, and how the focus and scope of multisite evaluation in this field may differ from traditional program evaluation in developed countries.
| |
|
At the Intersection of Policy and Practice in Multisite Evaluation: Perspective from Comprehensive Early Childhood Initiatives
|
| Michael Bates,
Mosaic Network Inc,
mbates@mosaic-network.com
|
|
Michael Bates is the Director of Research and Evaluation for Mosaic Network, Inc. and has participated in the design and implementation of numerous multisite community evaluations, particularly in the field of early childhood services. Prior to joining Mosaic, Dr. Bates was an Assistant Research Psychologist at the University of California, Santa Barbara and the Evaluation Director for First 5 Santa Barbara County for eight years. In this presentation, he will share his experiences in working with several early childhood initiatives in California and North Carolina in various roles, from helping funders develop strategic plans and establish evaluation policies, to disseminating results to maximize impact on evaluation policy and practice. He will also discuss some of the challenges of working in government-university partnerships, and working with diverse service providers with a range of capacity for evaluation.
| |
|
Session Title: Collaboration to Build Evaluation Capacity and Engage Educational Stakeholders
|
|
Multipaper Session 317 to be held in Mineral Hall Section F on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Anane Olatunji,
Fairfax County Public Schools,
aolatunji@fcps.edu
|
|
Reporting Complex Data to Diverse Stakeholders
|
| Presenter(s):
|
| Ann Vilcheck,
Pearson,
ann.vilcheck@pearson.com
|
| Miriam Resendez,
PRES Associates,
mresendez@presassociates.com
|
| Mariam Azin,
PRES Associates,
mazin@presassociates.com
|
| Abstract:
In this session, representatives from a PreK-12 educational textbook publisher and one of its evaluation contractors will share insights into working collaboratively to clearly specify evaluation goals and to report and present evaluation results to multiple stakeholders. Particular attention will be paid to how the collaborative work incorporates (1) multiple stakeholders within the publisher and (2) study designs that meet all stakeholder expectations, as well as (3) the challenges of reporting and interpreting statistical analyses for stakeholders unfamiliar with the analyses being reported.
|
|
The Product of Our Differences: Enhancing Elementary Mathematics Teaching through Collaborative Evaluation Communities
|
| Presenter(s):
|
| Kelli Thomas,
University of Kansas,
kthomas@ku.edu
|
| Douglas Huffman,
University of Kansas,
huffman@ku.edu
|
| Karen Lombardi,
University of Kansas,
karenl@ku.edu
|
| Dana Atwood-Blaine,
University of Kansas,
danab@ku.edu
|
| Abstract:
This paper session will focus on the evaluation activities of the Collaborative Evaluation Communities in Urban Schools project (CEC Project) at the University of Kansas. The project was designed to enhance the evaluation capacity of K-8 schools through collaborative evaluation communities comprised of teachers, instructional coaches, graduate students, and university faculty. The goals of the project were to improve the evaluation capacity of urban schools, develop graduate level educational leaders with the knowledge and skills to evaluate science and mathematics education programs, and develop the evaluation capacity of K-8 teachers. The paper will focus on the processes of using evaluation to develop elementary mathematics instruction in schools using an example of a team involved in a process of modified lesson study.
|
|
Evaluating the Impact of Science Fair Participation on Student Understanding of the Scientific Process Using a Collaborative Evaluation Communities Approach
|
| Presenter(s):
|
| Douglas Huffman,
University of Kansas,
huffman@ku.edu
|
| Kelli Thomas,
University of Kansas,
kthomas@ku.edu
|
| Dana Atwood-Blaine,
University of Kansas,
danaab@ku.edu
|
| Karen Lombardi,
University of Kansas,
karenl@ku.edu
|
| Abstract:
This paper will focus on the results of the Collaborative Evaluation Communities in Urban Schools project (CEC Project) at the University of Kansas. The project was designed to enhance the evaluation capacity of K-8 schools through collaborative evaluation communities comprised of teachers, instructional coaches, graduate students, and university faculty. The goals of the project were to improve the evaluation capacity of urban schools, develop graduate level educational leaders with the knowledge and skills to evaluate science and mathematics education programs, and develop the evaluation capacity of K-8 teachers. The paper will focus on the impact of using evaluation to examine the impact of a science fair on teachers and students. The paper will also address the broader question: In what ways can collaborative evaluation create learning communities of practice?
|
|
Building Evaluation Capacity in K–12 Public Schools: Partnering Practitioners and Evaluators to Raise Local Evidentiary Standards
|
| Presenter(s):
|
| Pamela Paek,
University of Texas at Austin,
pamela.paek@mail.utexas.edu
|
| Abstract:
Given current federal and state accountability requirements, it is critical that our public school systems rigorously evaluate the impact of their work on teaching and learning. The status of educational practitioners’ understanding of evaluation will be explored through an examination of local evaluation methods, theories, policies, and practices in 22 sites that were found in a nationwide search for practices used to close the achievement gap in secondary mathematics. Examples of the various types of evidence that practitioners use to measure the effectiveness of these practices will also be analyzed and discussed. This presentation will focus on the ways the evaluation community can raise evidentiary standards and build capacity for evaluation in districts and schools, with the goal of improving evaluation practices—and ultimately educational outcomes—in K–12 public schools.
|
| | | |
|
Session Title: Engaging Participants in the Evaluation Process: A Participatory Approach
|
|
Multipaper Session 318 to be held in Mineral Hall Section G on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Myriam Baker,
Outcomes Inc,
mbaker@outcomescolorado.com
|
|
Does Power Matter in Participatory Evaluation?
|
| Presenter(s):
|
| Steve Jacob,
University Laval,
steve.jacob@pol.ulaval.ca
|
| Abstract:
Evaluation is presented as a practice aimed at informing decision makers and managers about the effects of public interventions. Traditionally, the focus is placed on the methodological and scientific dimensions of a given evaluation. Without contesting this vision, this perspective overshadows the political dimension and does not allow for a proper grasp of all the issues that drive the evaluation process. The aim of this presentation is to emphasize the political dimension by presenting the role and the influence that the various players involved may have in carrying out a given evaluation. The presentation will focus on stakeholders in order to address the question: “Is evaluation a ‘stakeholder-friendly’ environment?” Answering this question requires an understanding of the extent to which these particular players are included in the evaluation process. It will thus allow for a greater understanding of evaluation as a power game from the standpoint of the stakeholder.
|
|
Non-Traditional Teacher Preparation Program and Non-Traditional Students: Utilizing Participatory Evaluation to Measure Progress
|
| Presenter(s):
|
| Patricia Alvarez McHatton,
University of South Florida,
mchatton@tempest.coedu.usf.edu
|
| Anna Robic,
University of South Florida,
robic@tempest.coedu.usf.edu
|
| Roseanne Vallice,
University of South Florida,
vallice@tempest.coedu.usf.edu
|
| Abstract:
Project PROPEL (PReparing Our Paraprofessionals to teach Exceptional Learners) prepares paraprofessionals to become special education teachers while they are still working as classroom assistants. This session will examine ways to evaluate a non-traditional teacher preparation program utilizing input and ideas from the participants and professionals involved in various aspects of the program. Successes and challenges will be discussed, and suggestions for future evaluation procedures will be made.
|
|
"If You Cannot Bring Good News Then Don't Bring Any": Resentment and Betrayal in Participatory Evaluation Research
|
| Presenter(s):
|
| Michael Matteson,
University of Wollongong,
cenetista3637@hotmail.com
|
| Abstract:
A number of evaluators have commented on being surprised by antagonistic reactions to what were thought to be inclusive and responsive participatory evaluations. This paper arose from negative stakeholder reactions to a participatory evaluation of an Indigenous Issues program for local government children's services staff. I interviewed participants some time later to see why they had reacted with anger to the evaluation results. It seems that proposals for action were taken as de facto criticisms of what was being done, and of those who were doing it. I feel there are aspects of participatory evaluation that can make this kind of reaction more likely than we would expect.
|
|
Stakeholder Selection Criteria and Methods in Participatory Evaluation
|
| Presenter(s):
|
| Randi K Nelson,
University of Minnesota,
nelso326@umn.edu
|
| Abstract:
The presentation reports interim results of ongoing research on stakeholder selection in participatory evaluation. The purpose of the research is to examine one aspect of collaborative or participatory evaluation methodology by describing stakeholder selection practices in practical-participatory, transformative, empowerment, utilization-focused, and culturally-responsive evaluation. The following questions guide the research: 1) What selection criteria or considerations do evaluators use to identify relevant stakeholders? 2) What methods do evaluators use to select stakeholders? 3) What program, context, and evaluation factors influence stakeholder selection? 4) How do evaluators know they selected the right stakeholders? Interim results are based on analysis of individual interviews with practicing evaluators about their stakeholder selection activities. Evaluation practitioners were nominated by evaluation theorists of diverse approaches to participatory evaluation based on the practitioners’ experience in conducting evaluations that apply theories and concepts of collaborative and participatory evaluation.
|
|
Adding Value Through Debrief: Techniques, Purposes, Applications and Contributions
|
| Presenter(s):
|
| Maryann Durland,
Durland Consulting,
mdurland@durlandconsulting.com
|
| Shaunti Knauth,
Durland Consulting,
shaunti_knauth@comcast.net
|
| Abstract:
Debrief is a term often associated with business but used less often in the field of evaluation. Yet a debrief – a careful review upon completion of a project phase – can provide invaluable information and insights to evaluators and clients. The debrief can be used for a variety of purposes, such as capturing and systematizing impressions from a site visit, gathering information on improving evaluation processes, and learning whether client expectations were met. This paper provides a brief overview of debriefing purposes and techniques applicable to evaluation, with specific guidelines for use and examples. We then detail the process and results of a debrief recently carried out during the evaluation of an NSF project.
|
| | | | |
| In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Emergent Thoughts on Using Systems Concepts to Evaluate the Emergent Phenomena of Collaboration Development |
|
Roundtable Presentation 319 to be held in the Slate Room on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Systems in Evaluation TIG
|
| Presenter(s):
|
| Ruth Mohr,
Mohr Program Evaluation and Planning,
rmohr@pasty.com
|
| Abstract:
For organizations facing “wicked” problems, collaboration has frequently become the recommended solution. However, collaboratives are complex, emergent entities that are not easy or quick to develop. As complex, emergent entities, they are also not easy to evaluate in a manner that provides rapid-turnaround information of the kind most useful to those leading, managing, and participating in their development. This roundtable will explore the use of systems thinking to inform the evaluation of collaboration development. The starting point for this discussion will be the presenter’s previous modeling of collaboration development around a healthcare problem, compared and contrasted with modeling that uses the systems concepts of perspectives, inter-relationships, and boundaries. Other discussion points to be addressed include integrating collaboration participants into the “sense-making of the data” process and development of an emergent evaluation approach.
|
| Roundtable Rotation II:
Community Participation in Systems Model Building: An Approach to Developmental Evaluation of a Multi-Level, Multi-Cultural Community Health Intervention |
|
Roundtable Presentation 319 to be held in the Slate Room on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Systems in Evaluation TIG
|
| Presenter(s):
|
| Eve Pinsker,
University of Illinois Chicago,
epinsker@uic.edu
|
| Abstract:
The Centers for Disease Control and Prevention’s current REACH US (Racial and Ethnic Approaches to Community Health) community health initiatives are premised on a socio-ecological theory of change and focus on goals for increasing empowerment and capacity at the multiple levels of local communities, policy, organizations, and coalitions affecting multi-ethnic target populations. The complexity of evaluating these initiatives suggests the appropriateness of a “developmental” approach. Given the mandate to support empowerment and involvement of the target populations in posing solutions to health inequity, it also is clear that development of logic models for each initiative should involve a broad range of stakeholders, including at the grassroots level. In 2007, three REACH initiatives in Chicago focused on diabetes in African-American and Latino populations were funded. The lead evaluator for two of these initiatives chose to use qualitative systems model building, with stakeholder participation, using tools modified from system dynamics and soft systems approaches.
|
|
Session Title: State Policy and Arts Assessment System
|
|
Panel Session 320 to be held in the Agate Room Section B on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Evaluating the Arts and Culture TIG
|
| Chair(s): |
| Ching Ching Yap,
University of South Carolina,
ccyap@gwm.sc.edu
|
| Abstract:
State policies and mandates that dictate arts instructional requirements are the main factors that influence individual states' efforts in creating large-scale arts assessments. In addition to adhering to mandates, each state has to determine its available resources and establish the purpose of its assessment program in order to develop program guidelines. Those guidelines determine the design and implementation of the assessment, including scoring and results dissemination procedures. Because of the differences in state policies and mandates, currently available arts assessment programs vary widely from state to state. The purpose of this panel is to have state arts representatives present and share their efforts in developing arts assessment programs that best fit their states. The discussion of the approaches taken by each state, including successes and challenges in creating the assessments, will provide attendees with additional insights about the developmental process of a state-wide arts assessment program.
|
|
California Arts Assessment Network
|
| Nancy Carr,
California Department of Education,
ncarr@cde.ca.gov
|
|
The California Arts Assessment Network (CAAN) has brought together local educational agencies to develop models of arts assessment that can be used for large-scale administration at the school or at the classroom level.
Currently, the CAAN, http://www.teachingarts.org/CAAN, has developed a selected response item pool, and an online process for examining student work (SWOP) based on state standards. The assessment is not mandatory.
|
|
|
Connecticut's Common Arts Assessment Initiative
|
| Scott Shuler,
Connecticut State Department of Education,
scott.shuler@ct.gov
|
|
The goal of Connecticut's Common Arts Assessment Initiative (in Art and Music) is to develop common tools to measure student learning of the art and music standards at the district and school levels. The pilot and final versions of the assessments will be available to teachers on a voluntary basis in order to (a) monitor and improve student learning in the arts; (b) ensure that all students have the opportunity to learn in the arts; and (c) promote collaboration and exchange of instructional ideas among teachers. In addition, the assessments will be available to districts as tools to monitor and improve student learning in the arts, including the ability to compare learning across schools and districts. The assessment is not mandatory.
| |
|
The Commonwealth Accountability Testing System (CATS) in the Arts
|
| Phillip Shepherd,
Kentucky Department of Education,
pshepher@kde.state.ky.us
|
|
Kentucky has state mandated assessments in the arts for all students in grades 5, 8, and 11. The Commonwealth Accountability Testing System (CATS) includes multiple choice questions and open response questions that require students to apply content knowledge and understanding in greater depth by explaining, using supporting details, and justifying their conclusions.
There are a total of 96 multiple choice items and six open response items spread across six testing forms, so random sampling is employed to arrive at school-level accountability scores. These tests are not designed to measure student-level accountability. The arts and humanities assessment scores for each school are factored into the overall school academic index (assessment score).
| |
|
Rhode Island Arts Proficiency Assessments
|
| Rosemary Burns,
Rhode Island Department of Education,
rosemary.burns@ride.ri.gov
|
|
Arts assessment in Rhode Island is a proficiency-based measure of students' knowledge and skills demonstrated consistently in various settings over time. Proficiency decisions are made locally, within each district. Fine arts educators and professionals in the state also researched and designed guidelines to assist districts as they develop proficiency plans in the arts.
The assessment is not mandatory.
| |
|
South Carolina Arts Assessment Program
|
| Ching Ching Yap,
University of South Carolina,
ccyap@gwm.sc.edu
|
|
The South Carolina Arts Assessment Program (SCAAP) is a state-wide assessment program that currently has six different assessments in various stages of development. All SCAAP assessments include a web-based multiple-choice section and two performance tasks. The multiple-choice items incorporate multimedia interpretive materials such as digital images, digital sound files, and streaming video clips. School-level results are disseminated to teachers and principals to inform instruction based on state standards. The assessment is voluntary for some schools, but mandatory for schools that receive state arts grants.
| |
|
Session Title: Evaluation For Improving Occupational Safety: Examples From a Federally Funded Government-Industry Cooperative Program
|
|
Panel Session 321 to be held in the Agate Room Section C on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Business and Industry TIG
|
| Chair(s): |
| Michael Coplen,
Federal Railroad Administration,
michael.coplen@dot.gov
|
| Abstract:
The Federal Railroad Administration has embarked on an ambitious long-term effort to improve safety and safety culture in the U.S. railroad industry by funding a series of innovative technological, analytical, behavioral, and organizational safety improvement programs. The effort encompasses building internal evaluation capacity at the FRA, funding evaluations of its innovative programs, changing its own internal organizational culture, and integrating evaluation into its major new approach to safety known as the Risk Reduction Program.
|
|
Evaluation and Evaluation Capacity Building at the Federal Railroad Administration: History, Purpose, and Future
|
| Michael Coplen,
Federal Railroad Administration,
michael.coplen@dot.gov
|
|
Evaluation, and the AEA with it, has evolved into a cross-disciplinary field that includes many different applications and disciplines. Relatively little of the evaluation field, however, has been applied to occupational safety. The Federal Railroad Administration's Human Factors R&D Program is making that effort. We have embarked on an ambitious long-term initiative to improve safety and safety culture in the U.S. railroad industry by funding a series of demonstration projects and integrating various evaluation methods into each of the demonstrations. Pilot demonstrations emphasize innovative behavioral and organizational safety improvement methods, incorporating evaluation methods to increase stakeholder involvement and improve utilization, impact, and effectiveness. As part of this effort, we have also been building our own internal evaluation capacity. This presentation will summarize the evaluations being carried out and the efforts made to build capacity.
|
|
|
A Meta-logic Model for Evaluating Safety Programs: Uses and Limitations
|
| Jonathan A Morell,
TechTeam Government Solutions,
jonny.morell@newvectors.net
|
|
Whenever evaluations of related programs are carried out, each program's respective logic model will have similarities and differences with the logic models of the other programs. The similarities can be captured in a meta-logic model which can be used to generate the unique models for each program. Some of the differences in the program-specific models will involve only low-level detail, while others will reflect important idiosyncratic differences among the programs. It is useful to think in terms of a hierarchy of logic models as a way to design evaluations whose findings can be compared, and as a way to understand differences in underlying theory of the different programs. This meta-modeling approach is useful not only as a way of testing program theory, but also of understanding evolutionary, non-model based program change. These principles will be illustrated with examples from a series of evaluations of safety programs in the railroad industry.
| |
|
Implementation Analysis and Safety Impact Assessment
|
| Joyce Ranney,
Volpe National Transportation Systems Center,
joyce.ranney@volpe.dot.gov
|
|
The railroad industry, like other industries, has a bias toward implementing programs and, when they don't deliver the desired results quickly, stopping them and trying something else. This flavor-of-the-month approach to safety improvement ensures minimal sustainable learning about safety risks and prevention. It also encourages skepticism and distrust on the part of labor as well as management. Given this bias and the resulting limitations to learning and improvement, implementation evaluation is one of the most important types of evaluation for the industry to learn about.
The following will be presented: examples of data collection methods and feedback reports from the implementation evaluation efforts; forums that have been used to foster visibility and accountability on the part of the pilot sites; and ground rules that have helped to foster open and useful dialogue about implementation effectiveness.
| |
|
Integrating Evaluation for Organizational Change Into the Federal Railroad Administration's (FRA's) Risk Reduction Program
|
| Miriam Kloeppel,
Federal Railroad Administration,
miriam.kloeppel@dot.gov
|
| Michael Coplen,
Federal Railroad Administration,
michael.coplen@dot.gov
|
|
The relationships between industry management, labor, and the Federal Railroad Administration have evolved over time into a culture that does not foster open disclosure of safety-related information. Unlike regulatory programs, the Risk Reduction Program will engender collaborative development of data collection, analysis, modeling, and identification of corrective actions. This will be a major change for the FRA and for participating railroads.
Development, implementation, and ongoing operation of the Risk Reduction Program will require evaluation on several levels. For example, changes to FRA organization and culture must be evaluated. Also, individual programs created as part of the Risk Reduction Program will generate internal-use data, as well as data to be shared with the FRA; both types must be subject to evaluation. And to promote an effective nationwide program, the FRA will need to perform cross-project evaluations.
| |
|
Session Title: Can Evaluation Policy and Practice Serve Rural Schools?
|
|
Panel Session 322 to be held in the Granite Room Section A on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Louis Cicchinelli,
Mid-Continent Research for Education and Learning,
lcicchinelli@mcrel.org
|
| Abstract:
National education policy is often shaped to suit large schools with specialists on staff. The policy may contain criteria that are particularly difficult for rural schools to attain. In this session, three panelists will discuss issues related to evaluation policy and evaluation practice focused on rural education. The chair/discussant, Lou Cicchinelli, will introduce the topic with a description of the condition of rural education, particularly as it is manifested in the Central Region of the US, including the states of Colorado, Kansas, Missouri, Nebraska, North and South Dakota, and Wyoming. Evaluation practice in rural schools must hold a tension between these criteria and the realities of rural education. Each panelist will discuss an area of educational policy and address two overarching questions: Could educational policy (and therefore its evaluation) be fine-tuned to better serve small rural schools? What guidelines can be offered for evaluation practice in rural settings?
|
|
The Highly Qualified Teacher Policy and its Implications for Rural Education
|
| Andrea Beesley,
Mid-Continent Research for Education and Learning,
abeesley@mcrel.org
|
| Kim Atwill,
Mid-continent Research for Education and Learning,
katwill@mcrel.org
|
| Pamela Blair,
Mid-continent Research for Education and Learning,
pblair@mcrel.org
|
|
At a time when national education policies and goals present challenges to rural schools, evaluators must be sensitive to the particular characteristics of the rural education environment. Andrea Beesley will discuss the policies related to the 'highly qualified teacher' and how they impact rural schools, and will present findings from studies that surveyed preservice rural teacher programs and examined strategies for teacher recruitment and retention in rural schools. Beesley has been working in or studying rural schools for five years, as a National Science Foundation fellow and then as a researcher focused on high-performing high-needs rural schools, alternative schedules, rural school leadership and teacher quality, and rural student characteristics.
|
|
|
All Students Proficient by 2014: Challenges and Opportunities for American Indian Education
|
| Dawn M Mackety,
Mid-continent Research for Education and Learning,
dmackety@mcrel.org
|
|
Despite progress in the quality of American Indian education, American Indian student performance on key education indicators continues to lag behind the performance of white peers and national averages. Mackety will provide an overview of American Indian education policy and its effect on the educational conditions of American Indian students, trends in American Indian academic achievement, and the challenges of meeting the goal of 'all students proficient by 2014.' Opportunities and suggestions for culturally-relevant policies, practices and evaluations to improve American Indian student academic achievement will be presented, including findings from a recent study on parent involvement in the Central Region. Mackety has been involved in American Indian research, evaluation, and programming partnerships for over two decades and is a registered member of the Little Traverse Bay Band of Odawa Indians in Michigan.
| |
|
Not Meeting Adequate Yearly Progress: Policy and Practice in Providing Supplemental Educational Services in Rural Schools
|
| Zoe A Barley,
Mid-Continent Research for Education and Learning,
zbarley@mcrel.org
|
| Sandra Wegner,
Wegner Consulting,
sandrawegner611@hotmail.com
|
|
Title I schools that have not met adequate yearly progress (AYP) for three consecutive years must offer supplemental educational services (SES) to their high-needs students. Central Region rural schools have a lower SES student participation rate than national averages. In a 2007 study with state SES staff, barriers for rural schools were noted. Barley will discuss a 2008 study in the Central Region that examines the implementation of SES policy in rural schools. Comparisons of 2006-2007 rural and non-rural school reports will identify the differential presence of factors believed to support SES participation, and follow-up interviews with nine rural schools in their second year of required implementation will augment the findings. Barley will comment on how educational policy might be better adapted to rural education and suggest guidance for evaluative studies. Barley has studied rural issues for nearly a decade and is active in both NREA and AEA.
| |
|
Session Title: Comparing Mixed-mode Methods for Comprehensive Community Health Surveys
|
|
Panel Session 323 to be held in the Granite Room Section B on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Kendra Bigsby,
Health District of Northern Larimer County,
kbigsby@healthdistrict.org
|
| Abstract:
Ideally, program planners and policy-makers need current, locally collected data to inform decision making and program direction. Survey researchers increasingly face dilemmas of how to gather representative and robust samples of participants at reasonable cost. In the fall of 2007, community health planners from two neighboring Colorado counties (Larimer and Weld) independently conducted large, comprehensive community health surveys of their populations. The separate efforts had common purposes, uses, and even instrumentation. However, distinctly different mixed-mode approaches to population sampling, participant recruitment, and data collection were used. These combinations of traditional telephone random digit dial (RDD), mail using Dillman protocols, and Internet serve as a natural laboratory for better understanding which methods work best under which circumstances. This three-person panel will share lessons learned from Larimer County's experimental split-sample study that compared RDD and mailed methods and from Weld County's strategies for an economical mail design with an Internet option.
|
|
The Larimer County and Weld County 2007 Community Health Surveys
|
| Susan Hewitt,
Health District of Northern Larimer County,
shewitt@healthdistrict.org
|
|
Susan (Sue) Hewitt, M.S., is the coordinator of the Assessment, Research and Program Evaluation Team for the Health District of Northern Larimer County. Sue and her staff of three are responsible for program monitoring and evaluation of Health District programs, projects, and grants; conducting comprehensive triennial community health needs assessments; leading quality improvement efforts; and researching effective approaches to addressing community health needs. Since joining the Health District in 2000, Sue has coordinated three community health surveys (2001, 2004, and 2007). She holds a Master of Science degree in Environmental Health from Colorado State University with an emphasis in occupational epidemiology, health behaviors, and evaluation. Her current professional affiliations include the Colorado Evaluation Network, AEA's Local Affiliates Steering Council, and the Evaluation Managers & Supervisors Topical Interest Group.
|
|
|
Alternatives to RDD Survey Recruitment: Using a Split-Sample Experimental Design for a Triennial Community Health Survey
|
| Burke Grandjean,
Wyoming Survey and Analysis Center,
burke@uwyo.edu
|
|
Burke Grandjean, Ph.D. is Executive Director of the Wyoming Survey & Analysis Center (WYSAC) at the University of Wyoming, where he has been Professor of Statistics and Sociology since 1990. By executive order of Wyoming's governor, WYSAC serves as the state's clearinghouse for evaluation research and policy analysis, with emphasis on issues in public health (including substance abuse), and criminal justice. WYSAC's Survey Research Center has been conducting surveys on these and other topics in Wyoming and throughout the Rocky Mountain region for decades, and has extensive experience in all modes of survey administration. Burke holds both M.A. and Ph.D. degrees in Sociology from the University of Texas at Austin, and is a member of the American Association for Public Opinion Research.
| |
|
Strategies for an Economical Community Health Survey Using Mixed Methods
|
| Cindy Kronauge,
Weld County Department of Public Health and Environment,
ckronauge@co.weld.co.us
|
|
Cindy Kronauge, M.P.H., is the Health Data Analyst at the Weld County Department of Public Health and Environment located in Greeley, Colorado, and coordinated the Weld County 2007 Community Health Survey. Cindy has worked as an independent evaluation consultant and performed research, statistical, and evaluation consulting for health, education, and human services organizations locally and nationally for over ten years. Currently, Ms. Kronauge is working on her Ph.D. in applied research methods and statistics. Cindy has been a member of AEA since 1994 and is on the steering group for the Colorado Evaluation Network.
| |
|
Session Title: Humanitarian Response Index: Accountability, Transparency and Quality
|
|
Panel Session 324 to be held in the Granite Room Section C on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Disaster and Emergency Management Evaluation TIG
|
| Chair(s): |
| Silvia Hidalgo,
Development Assistance Research Associates,
shidalgo@daraint.org
|
| Abstract:
A number of initiatives exist to track the implementation of the Principles of Good Humanitarian Donorship, including the OECD-DAC peer reviews. While this process is encouraging, the lack of comprehensive impact indicators for measuring individual donor performance continues to be identified by the donor community as an outstanding challenge.
DARA has developed an index that measures donor performance in humanitarian aid. Experts behind the index will present the index, its purpose and theory, and how donors reacted to its first publication in November 2007.
|
|
Purposes of the Humanitarian Response Index
|
| Philip Tamminga,
Development Assistance Research Associates,
ptamminga@daraint.org
|
|
Silvia Hidalgo will discuss the purpose of the Humanitarian Response Index. She will go into depth on the benefits of benchmarking as an accountability tool, ways in which others could use benchmarking to increase adherence to voluntary guidelines and principles, and many of the issues involved in launching such an initiative for the first time.
|
|
|
The Five Pillars of the Index (Theoretical Underpinnings)
|
| Laura Altinger,
Development Assistance Research Associates,
laltinger@daraint.org
|
|
Mr. Tamminga will describe the five 'pillars' which were established to group the different Principles of Good Humanitarian Donorship, each containing soft and hard data indicators. There are a total of 57 indicators that constitute the 'pillars', which together form the basis for the Index and its ranking. The five pillars are: 1) responding to humanitarian needs, 2) linking relief and development, 3) working with humanitarian partners, 4) implementing international guiding principles, and 5) promoting learning and accountability.
| |
|
Practice Input to the Index: Field Missions, Questionnaires and Lessons Learned
|
| Nicolai Steen,
Development Assistance Research Associates,
nsteen@daraint.org
|
|
Mr. Steen will discuss the field missions and the reasoning behind the selection of crises (type of crisis, geographic distribution, magnitude of crisis, presence of donors, and amount of funding received). Mr. Steen will also explain the questionnaire used to capture the perceptions of implementing humanitarian organizations, including NGOs, the United Nations system, and the Red Cross and Red Crescent Movement. He will elaborate on lessons learned from the experience.
| |
|
Session Title: Moving From Participatory Evaluation to Participatory Implementation: Strategies to Move Policy Makers to Action
|
|
Demonstration Session 325 to be held in the Quartz Room Section A on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Presenter(s): |
| Stacey Hoffman,
University of Nebraska Public Policy Center,
shoffman@nebraska.edu
|
| Juan Paulo Ramirez,
University of Nebraska Public Policy Center,
jramirez2@nebraska.edu
|
| Kathryn Speck,
University of Nebraska Public Policy Center,
kspeck2@nebraska.edu
|
| Abstract:
Participatory evaluations are collaborations among evaluators and stakeholders. Stakeholder participation helps with the interpretation of data and may generate additional evaluation questions based on the needs of the participating organizations. Through participatory evaluation, stakeholders assist with the development of instruments and protocols, select data collection procedures that maximize the utility of information and minimize the burden of data collection, and help communicate evaluation results. Implementation of program and policy changes based on evaluation results is often left to the stakeholders alone. Participatory implementation extends the concept of participatory evaluation and expands the role of the evaluator as a partner in implementation. This session introduces participants to evaluation as a tool for participatory implementation. It features demonstrations of effective stakeholder engagement strategies, ways to organize and transition evaluation activities from planning to implementation, and reporting strategies that move policy makers to action.
|
| In a 90-minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Q Methodology: Something to Add to the Evaluator Toolbox |
|
Roundtable Presentation 326 to be held in the Quartz Room Section B on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Presenter(s):
|
| Virginia Gravina,
University of the Republic,
virginia@fagro.edu.uy
|
| Abstract:
Q-Methodology harnesses different methods within a single approach to privilege the language and culture of a specific evaluation context.
Q can be defined as an approach to inquiry that privileges the participant's interpretation of ideas over the evaluator's interpretation of the same ideas. It has the potential to cast participants' perspective(s) in contrast with program personnel's perspectives, which represents strategic points of evaluation leverage.
Powerful statistical mechanics operate in the background, largely unnoticed by users, who need not engage with the mathematical structure. Routinely employed in political science, communication, advertising, health science, public policy, and other fields, Q is now drawing attention in development projects and empowerment work, wherever subjectivity is at issue.
This workshop's goal is to guide attendees through the knowledge and application of Q-Methodology, from the selection of a suitable situation to the interpretation of the results, with worked examples to refer to.
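For readers less familiar with the statistics behind Q, the sketch below illustrates the by-person factor analysis that underlies the approach: participants' Q-sorts are correlated with one another and shared viewpoints are extracted as factors. It uses simulated data and hypothetical dimensions, is not material from the workshop, and omits steps (such as factor rotation) that dedicated Q software would normally handle.

```python
# Minimal sketch of the by-person factoring at the heart of Q methodology.
# All data are simulated; participant counts and factor numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

n_statements, n_participants = 40, 12
# Each column is one participant's Q-sort (here random scores, for illustration only).
sorts = rng.normal(size=(n_statements, n_participants))

# 1. By-person correlation matrix: in Q, persons are the "variables".
r = np.corrcoef(sorts, rowvar=False)             # participants x participants

# 2. Principal components of the person correlation matrix.
eigvals, eigvecs = np.linalg.eigh(r)
order = np.argsort(eigvals)[::-1]                # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 3. Unrotated loadings: how strongly each participant defines each factor.
n_factors = 3
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

print("Variance explained:", np.round(eigvals[:n_factors] / n_participants, 2))
print("Participant loadings on factor 1:", np.round(loadings[:, 0], 2))
```

Participants who load strongly on the same factor are interpreted as sharing a viewpoint, and their sorts are typically merged into a composite array for interpretation.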
|
| Roundtable Rotation II:
Multivariate Generalizability and Temperament |
|
Roundtable Presentation 326 to be held in the Quartz Room Section B on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Presenter(s):
|
| M David Miller,
University of Florida,
dmiller@coe.ufl.edu
|
| Youzhen Zuo,
University of Florida,
zuo9@ufl.edu
|
| Abstract:
Reliability needs to be considered at the level of the scores that are being used or interpreted. Profiles are often used to examine individual differences in areas such as personality type or temperament. This study examined the multivariate generalizability of the Student Styles Questionnaire using the scores of 9,168 students from its norming study. Scores are reported for four dimensions: preferences for Extroversion–Introversion, Practical–Imaginative, Thinking–Feeling, and Organized–Flexible styles. Results show the reliabilities of profiles along the four dimensions using multivariate generalizability theory, as well as the univariate estimates for the four dimensions. The implications of using a composite reliability are discussed, with attention to score reporting that generally does not focus on the overall temperament style implied by the profile across the four dimensions. In addition, the paper discusses alternative methods for examining reliability within a multivariate context.
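As background for the reliability estimates discussed above, the following sketch shows the univariate generalizability (G-study) computation for a single dimension on simulated persons-by-items data; the multivariate extension adds covariance components across the four dimensions. Sample sizes and variance values are invented and do not come from the Student Styles Questionnaire norming sample.

```python
# Univariate G-study sketch for a crossed persons x items design (one facet),
# the building block that multivariate generalizability theory extends.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 200, 10

# Simulated item scores: person effect + item effect + residual.
person = rng.normal(0, 1.0, size=(n_persons, 1))
item = rng.normal(0, 0.3, size=(1, n_items))
x = person + item + rng.normal(0, 0.8, size=(n_persons, n_items))

grand = x.mean()
p_means = x.mean(axis=1, keepdims=True)
i_means = x.mean(axis=0, keepdims=True)

ms_p = n_items * ((p_means - grand) ** 2).sum() / (n_persons - 1)
ms_res = ((x - p_means - i_means + grand) ** 2).sum() / ((n_persons - 1) * (n_items - 1))

var_p = (ms_p - ms_res) / n_items   # universe-score (person) variance
var_res = ms_res                    # person-by-item interaction plus error

# Relative generalizability coefficient for a k-item score.
k = n_items
g_coef = var_p / (var_p + var_res / k)
print(f"Estimated person variance: {var_p:.3f}, G coefficient: {g_coef:.3f}")
```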
|
|
Session Title: Tools in Developing Evaluation Design
|
|
Multipaper Session 327 to be held in Room 102 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Jenica Huddleston,
University of California Berkeley,
jenhud@berkeley.edu
|
|
Mapping Information Literacy: Using Concept Mapping to Evaluate Nurses’ Sources of Health Information
|
| Presenter(s):
|
| Louise C Miller,
University of Missouri,
lmiller@missouri.edu
|
| Melissa J Poole,
University of Missouri,
poolem@missouri.edu
|
| Abstract:
Program planning follows a predictable flow from establishing need and assessing the target population to designing the program. Survey methodology is the mainstay for gathering data about need, readiness, and feasibility, using surveys designed by program administrators and grounded in their experiences and factors identified in the literature. An alternative method of evaluation, concept mapping, has a number of advantages over the survey approach. It stimulates the generation of new ideas, helps expert practitioners articulate tacit knowledge grounded in practice, and makes that knowledge visible by displaying relationships among semantic units or concepts. Finally, the process itself engages members of the target population, using their own words and ideas to evaluate program needs. In this paper we report on the lessons learned while using concept mapping to create a conceptual model of information literacy for public health and school nursing practice.
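For orientation, the sketch below shows, in schematic form, the analytic sequence typically used in concept mapping: aggregate participants' pile sorts into a dissimilarity matrix, scale the statements into two dimensions, and cluster the resulting point map. The data are simulated and the cluster count is arbitrary; this is not the authors' analysis.

```python
# Schematic concept-mapping analysis on simulated sorting data.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
n_statements, n_sorters = 30, 15

# Each sorter assigns every statement to one of 5 piles (simulated here).
piles = rng.integers(0, 5, size=(n_sorters, n_statements))

# Co-occurrence: how often each pair of statements landed in the same pile.
co = np.zeros((n_statements, n_statements))
for sort in piles:
    co += (sort[:, None] == sort[None, :]).astype(float)

dissimilarity = 1.0 - co / n_sorters             # 0 = always sorted together

# Two-dimensional point map of the statements.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

# Hierarchical clustering of the map into candidate concept clusters.
clusters = fcluster(linkage(coords, method="ward"), t=5, criterion="maxclust")
print("Cluster assignment of each statement:", clusters)
```

In an actual concept mapping study, participants usually also rate each statement (for example, on importance), and those ratings are overlaid on the cluster map during interpretation.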
|
|
Using Standard Setting Procedures as a Means of Understanding Best Practice in Chronic Care Disease Management Programs
|
| Presenter(s):
|
| Janet Clinton,
University of Auckland,
j.clinton@auckland.ac.nz
|
| Martin Connelly,
University of Auckland,
j.clinton@aucckland.ac.nz
|
| Abstract:
This paper describes how a standard setting procedure was applied to understand the evaluation of best practice in the management of chronic diseases within a number of programs in New Zealand. As part of a large evaluation of chronic care programs, criteria for best practice were developed that relied on both an evidence base and a practice perspective. A standard setting process was used to understand the nature of best practice. Standard setting has long been an evaluation tool, especially for assessing achievement in medical education. More recently it has been used more widely in program evaluation.
During a standard setting workshop, providers identified aspects of best practice from exemplars of chronic care disease management programs. Responses were analyzed using multi-attribute analysis. The weighting of various attributes of chronic care programs allowed for the development of a set of criteria from which to judge best practice. This paper demonstrates that standard setting is a useful procedure for understanding best practice.
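As a simplified illustration of the weighting step described above, the sketch below combines provider-derived attribute weights with program ratings into a composite score and applies a cut score; the attribute names, weights, ratings, and cut score are all hypothetical, not the evaluation's actual criteria.

```python
# Hypothetical multi-attribute scoring of chronic care programs against a
# best-practice standard. Attributes, weights, and the cut score are invented.
import numpy as np

attributes = ["care planning", "self-management support", "follow-up", "team coordination"]
weights = np.array([0.35, 0.30, 0.20, 0.15])     # elicited from providers; sum to 1

# Ratings (1-5) of three hypothetical programs on each attribute.
ratings = np.array([
    [5, 4, 4, 3],   # program A
    [3, 3, 2, 4],   # program B
    [4, 5, 5, 5],   # program C
])

scores = ratings @ weights                       # weighted composite per program
cut_score = 4.0                                  # standard agreed in the workshop
for name, s in zip("ABC", scores):
    flag = "meets" if s >= cut_score else "below"
    print(f"Program {name}: composite {s:.2f} ({flag} the best-practice standard)")
```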
|
|
The ‘Draw the Path’ Technique in Evaluation: Prospective, Mid-Program and End-of-Program Uses
|
| Presenter(s):
|
| Ross Conner,
University of California Irvine,
rfconner@uci.edu
|
| Abstract:
The ‘Draw the Path’ technique was developed and refined during the evaluation of several different health projects, when the need was end-of-program assessment. At its core, the technique involves program participants, both staff and clients, in the collaborative development of a logic model or theory of change, without explicitly naming or using these terms, as well as in the evaluative assessment of the program to that point. Following presentation of the technique at several recent international conferences, the author received positive reports of its successful use in different ways and contexts. The technique was used mid-program to look backward and forward, and it also provided start-of-program prospective judgments and ideas for program and evaluation planning, including use in program staff training. This AEA presentation will describe the technique and give examples of its use, as well as include a discussion of its benefits and limitations.
|
|
Building a ‘World-centric’ Rather than ‘Program-centric’ Logic Model for a National Problem Gambling Strategy: Using Logic Modeling Software
|
| Presenter(s):
|
| Kate Averill,
FutureState,
kaverill@futurestate.co.nz
|
| Paul W Duignan,
Parker Duignan Consulting,
paul@parkerduignan.com
|
| Abstract:
Developing a set of outcomes and indicators for a national problem gambling strategy illustrates the importance of how a logic model (outcomes model) is constructed. Logic models can be drawn taking either a ‘program-centric’ or a ‘world-centric’ approach. A ‘world-centric’ approach first focuses on building a logic model representing the world on which a set of programs is operating, rather than limiting the perspective to that of the programs in question. Once this has been done, the steps and outcomes different programs are attempting to influence can be mapped onto the model, as can indicators at a range of levels. Using logic modeling software, a model was developed incorporating the different layers of outcomes (national, regional, individual, service, public health, research and evaluation). Lessons learnt about the best way of structuring national strategy logic models are discussed.
|
| | | |
|
Session Title: Challenges to Measuring Environmental Program Behaviors, Performances, and Outcomes
|
|
Multipaper Session 329 to be held in Room 106 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Environmental Program Evaluation TIG
|
| Chair(s): |
| Michael Coe,
Northwest Regional Educational Laboratory,
coem@nwrel.org
|
| William Michaud,
SRA International Inc,
bill_michaud@sra.com
|
|
Evaluation of the United States Environmental Protection Agency’s National Environmental Performance Track Program
|
| Presenter(s):
|
| Gabrielle Fekete,
United States Environmental Protection Agency,
fekete.gabrielle@epa.gov
|
| Jerri Dorsey,
United States Environmental Protection Agency,
dorsey.jerri@epa.gov
|
| Katie Martini,
University of Maryland,
kbmartini@gmail.com
|
| Abstract:
The U.S. Environmental Protection Agency (EPA) created the voluntary National Environmental Performance Track Program as a new model for achieving environmental protection goals by fostering innovation and recognizing and encouraging corporate leadership. The EPA's Office of Inspector General (OIG) evaluated the implementation and effectiveness of the Performance Track program.
The OIG found that Performance Track did not have clear plans that connected activities with its goals, and did not have performance measures that show if it achieves innovation or anticipated environmental outcomes. These implementation challenges detracted from EPA’s anticipated results. In assessing members’ leadership using independent criteria, the OIG found that most Performance Track members’ compliance and toxic release records were better than average, but some were not. Although most members showed leadership and environmental progress, the presence of underperforming facilities in this leadership program reduces the integrity and value of the Performance Track brand.
|
|
Uses and Suggested Structuring of an Environmental Behavior Index (EBI) at the National Level
|
| Presenter(s):
|
| Andrea Hramits,
United States Environmental Protection Agency,
hramits.andrea@epa.gov
|
| Katherine Dawes,
United States Environmental Protection Agency,
dawes.katherine@epa.gov
|
| Abstract:
In 2005, Washington State’s King County developed an innovative tool: the Environmental Behavior Index (EBI). The EBI is a compilation of individual environmental behavior data collected by surveying King County residents. The success of the King County EBI as a tool for evaluation and its potential for use on a wider scale is the inspiration for exploring the viability of a nation-wide EBI.
A national EBI would collect data that could be used by many organizations for informing and improving program design, performing program evaluations, marketing programs to the public, tracking trends in environmental behaviors, influencing regulatory decisions, and providing useful data for environmental research. A national EBI would collect information on household or individual environmental behaviors via telephone survey. These data could be collected at the local level and then aggregated at the national level. The data could then be made available for use nationally.
|
|
Using Evaluation to Identify Performance Measures and Their Relevance to Mission-level Program Outcomes
|
| Presenter(s):
|
| William Michaud,
SRA International Inc,
bill_michaud@sra.com
|
| Abstract:
Since the advent of GPRA, there has been an increasing focus on systematic measurement of outcomes that are directly related to a program’s mission. This has been reinforced by the OMB PART process. While laudable, the focus on direct measurement of outcomes that are far-removed from a program’s intervention could lead to inefficient use of performance measurement resources and poorly informed program management decisions. This is particularly true when a program operates in the context of complex environmental or social systems. A more cost-effective and, potentially, valid approach to performance measurement would be to focus resources on easier-to-measure, more immediate outcomes whose relationship to more distant, mission-level outcomes has been quantified using targeted evaluation. The paper will explain this argument, present a framework for optimizing performance measures, and consider the implications of this alternative for setting evaluation priorities.
|
|
Methodological Challenges to the Practice of Environmental Program and Policy Evaluation
|
| Presenter(s):
|
| Matthew Birnbaum,
National Fish and Wildlife Foundation,
matthew.birnbaum@nfwf.org
|
| Per Mickwitz,
Finnish Environmental Institute,
per.mickwitz@ymparisto.fi
|
| Abstract:
During the last decade, environmental policy and program evaluation has taken a huge leap forward. The development has been fostered through the activities of AEA's Environmental Program Evaluation TIG and, in particular, the Environmental Evaluators Network (EEN) Forum since its inception three years ago. However, a summarizing analysis of developments in the field of environmental evaluation is still missing. This presentation will bring together the evaluation experiences of leading international experts on the key challenges related to time horizons, scales, data credibility, and counterfactuals, as part of an NDE volume currently underway that the two of us are co-editing. The discussion is directed to two distinct groups: those who are directly involved with environmental evaluation, and other environmental practitioners who see their careers influenced by the increased demand for evaluation.
|
| | | |
|
Session Title: An AEA Membership Committee Report: Examining AEA's Topical Interest Group (TIG) Structure - What Works, What Changes Are Needed
|
|
Think Tank Session 330 to be held in Room 108 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s):
|
| Saumitra SenGupta,
APS Healthcare Inc,
ssengupta@apshealthcare.com
|
| Discussant(s):
|
| Bianca Montrosse,
University of North Carolina,
bmontros@serve.org
|
| Katye Perry,
Oklahoma State University,
katye.perry@okstate.edu
|
| Abstract:
In 2007-08, the AEA Membership Committee undertook an examination of AEA's current Topical Interest Group (TIG) structure, including leadership, governance, activities, and benefits to TIG members. As part of this effort, the committee conducted a survey of the TIG leadership and also looked at similar structures in other, comparable organizations. This Think Tank session will introduce some of the key findings to participants and begin a dialog to identify common elements that can be incorporated to enhance the existing TIG structure within AEA. This session will encourage participation from all AEA members and especially invites the current TIG leadership to join the discussion.
|
|
Session Title: Risks and Rewards of Conducting Politically Sensitive and Highly Visible Evaluations
|
|
Panel Session 331 to be held in Room 110 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Government Evaluation TIG
|
| Chair(s): |
| Rakesh Mohan,
Idaho Legislature,
rmohan@ope.idaho.gov
|
| Discussant(s):
|
| Rakesh Mohan,
Idaho Legislature,
rmohan@ope.idaho.gov
|
| Abstract:
Politically sensitive and highly visible evaluations are often associated with risks and rewards. High risks do not automatically suggest that such evaluations should be avoided. Evaluation organizations and evaluators can take certain steps to mitigate those risks and increase the likelihood of achieving a higher level of rewards.
This panel will discuss examples of politically sensitive and highly visible evaluations on various subjects, including education, health, and social services. The discussion will focus on the types of risks evaluators face, how those risks affect evaluation policies and practices, how to mitigate those risks in order to conduct successful evaluations, and whether rewards outweigh risks.
|
|
The At-Risk Evaluation Office: Walking a Tightrope in the State Legislature
|
| Nancy Zajano,
Learning Point Associates,
nancy.zajano@learningpt.org
|
|
In conducting evaluations for a state legislature, the risks of being a messenger with unwelcome news are plentiful. There is always someone or some group who views the evaluation findings as detrimental to a particular interest, no matter how sound the methodology. On the other hand, evaluations done at the behest of a state legislature have the potential to directly influence policy. In some instances the report’s recommendations are translated almost verbatim and immediately into a bill directing the future implementation of a state-funded program. Examples of both the risks and rewards from an education oversight office in Ohio will be presented – including the story of the demise of the office itself when the findings from a series of reports on charter schools were considered unwelcome by some officials.
|
|
|
Anticipating and Preparing for Attacks on Your Evaluation Methodology
|
| Tedd McDonald,
Boise State University,
tmcdonal@boisestate.edu
|
|
For a politically sensitive and highly visible evaluation, the likelihood that your methodology will be attacked, or at least questioned, is high. Evaluators and their organizations must anticipate this and plan accordingly – they need to be prepared to answer tough questions about their methods in a public setting. This presentation will discuss a study that evaluated the largest government health and welfare agency in a state. The stakes were high, because the legislators who directed the study questioned the performance of the agency's management. The evaluators anticipated the tough road ahead and planned their study accordingly. The effort spent on planning and scoping the study paid off – the study was a success in terms of having both policy and programmatic impact.
| |
|
Biting the Hand that Feeds You: A Dilemma for Evaluators in the Legislative Environment
|
| Bob Thomas,
King County Auditor's Office,
rthomasconsulting@msn.com
|
|
Evaluations conducted as part of legislative oversight may carry certain risks for the evaluation teams. When the outcomes of an evaluation are politically sensitive, the process and the competence of the evaluation itself can come under intense scrutiny. Although such risks “go with the territory” and usually can be mitigated by developing a sound methodology and following standards, sometimes the political stakes are so high that the normal process breaks down and the risks to the evaluators become severe. This can happen when the legislature itself has contributed to the problems under review. This presentation will draw on examples of evaluations that have faced and overcome such risks, and will identify the strengths of the evaluation process that can make success possible.
| |
|
Session Title: Evaluation Use for Strategic Management in Environmental and Public Health Organizations
|
|
Multipaper Session 332 to be held in Room 112 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
and the Environmental Program Evaluation TIG
|
| Chair(s): |
| Dale Pahl,
United States Environmental Protection Agency,
pahl.dale@epa.gov
|
| Abstract:
Today, federal organizations with responsibility for environmental and health research face unique challenges when evaluating progress and impact. These challenges exist because solutions for many of today's environmental and health problems require knowledge about complex systems and dynamic interactions with human activities; it is difficult to adapt traditional approaches and metrics to evaluate such "systems" programs. The four presentations in this session explore evaluation policy, best practice, innovation, and lessons learned for strategic management and operation (e.g., program effectiveness, efficiency, and contributions to outcomes) of environmental and health programs.
|
|
Performance Measurement for Research Improvement
|
| Phillip Juengst,
United States Environmental Protection Agency,
juengst.phillip@epa.gov
|
|
This presentation focuses on the challenge of developing meaningful performance measures for research. Measuring ultimate outcomes is especially difficult, yet focusing on the output of research publications ignores the relevance and quality of research. To address these challenges, the Environmental Protection Agency (EPA) expanded their program evaluations to provide systematic ratings of performance. These ratings, based on similar methodologies at other agencies, provide a consistent measure of long-term outcomes and a useful tool for tracking the success of program improvement strategies. EPA also developed a balanced suite of measures including bibliometric and decision-document analyses, as well as partner surveys, to measure performance across the logic model framework. Recently, EPA engaged the National Academy of Sciences and other Federal agencies in a broader dialog about how best to measure the efficiency of research, and the outcome of that discussion is leading to new directions for research efficiency and evaluation.
|
|
A Conceptual Model for the Capability, Effectiveness, and Efficiency of Laboratories at the United States Environmental Protection Agency
|
| Andrea Cherepy,
United States Environmental Protection Agency,
cherepy.andrea@epa.gov
|
| Michael Kenyon,
United States Environmental Protection Agency,
kenyon.michael@epa.gov
|
|
This presentation communicates an overview of a life-cycle model for laboratory facilities and describes its relevance for understanding effective and efficient laboratory functions at EPA. During the past twenty years, the growing importance of laboratory facilities in all sectors of the United States coupled with laws enacted by Congress and Executive Orders from the Office of the President have stimulated development of a conceptual model for laboratory facilities and the programs they support. This model is consistent with emerging demands for sustainable laboratory facilities. At EPA, scientific laboratories have a strong role in developing knowledge to inform decisions about environmental problems and to support agency programs. Understanding the relationship between laboratory capabilities, effectiveness, and efficiency is important for the strategic management of laboratories and for management and evaluation of the scientific functions and research programs sustained by laboratories.
|
|
Evaluating a Research Program Through Independent Expert Review
|
| Lorelei Kowlaski,
United States Environmental Protection Agency,
kowalski.lorelei@epa.gov
|
|
This presentation will provide an update on an approach presented to the AEA in 2005 for conducting regular evaluations of the EPA Office of Research and Development's (ORD's) research programs. ORD's federal advisory committee, the Board of Scientific Counselors (BOSC), is an independent expert panel that began conducting retrospective and prospective reviews of ORD's research programs in 2004 to evaluate their quality, relevance, and effectiveness. The intent was to continue these reviews on an approximately 4-5 year review cycle, and for ORD to use the recommendations to help plan, implement, and strengthen its research programs, as well as respond to the increased focus across the government on outcomes and accountability. This presentation will address the lessons learned from implementing the BOSC review process over the past 4 years, and how the process has been adapted to incorporate new concepts, such as systematic ratings of performance and efficiency measures.
|
|
NCI Corporate Evaluation Converging Approaches to Maximize Utility
|
| James Corrigan,
National Institutes of Health,
corrigan@mail.nih.gov
|
| Kevin D Wright,
National Institutes of Health,
wright@mail.nih.gov
|
| Lawrence S Solomon,
National Institutes of Health,
solomonl@mail.nih.gov
|
|
The National Cancer Institute (NCI) has been building a framework to support systematic evaluation and assessment of its programs. This framework has evolved over time and includes evaluation policies and procedures and enhanced evaluation capacity. Evaluation policies and procedures were developed with input from key stakeholders such as NCI leadership, program directors, and evaluation experts to increase evaluation utility. Evaluation tools, resources, and information networks were developed to enhance evaluation capacity and increase the quantity and quality of evaluations. Taken together, these activities represent a multi-pronged corporate evaluation model that has strengthened, and continues to strengthen, the evaluation process and evaluation utilization. This presentation will provide examples of what a corporate evaluation policy might look like, how an evaluation policy can be operationalized, and what can be done to enhance the number and quality of evaluations consistent with a corporate evaluation policy. Corporate evaluation at NCI continues to evolve, with significant opportunities for further development.
|
|
Session Title: Retention in a Longitudinal Outcome Study: Modeling Techniques and Practical Implications
|
|
Multipaper Session 333 to be held in Room 103 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Robert Stephens,
Macro International Inc,
robert.stephens@macrointernational.com
|
| Abstract:
This multipaper session will explore the determinants of retention and patterns of participation in the national evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program. The data were collected as part of the CMHS system of care initiative in communities initially funded between 2002 and 2004 by the Center for Mental Health Services. In the three papers, we use data from the longitudinal outcome component of this evaluation, as well as site level information on communities participating in the evaluation, to investigate the effect of child, caregiver and site level characteristics on retention. Each of the three papers explores a different methodology to model retention. Specifically, we use latent class analysis (LCA), the sequential response model, panel data techniques, and multilevel modeling. The session will have both practical and methodological relevance, as participants will learn about determinants of retention, as well as techniques for studying retention.
|
|
Modeling Retention Over Time
|
| Yisong Geng,
Macro International Inc,
yisong.geng@macrointernational.com
|
| Megan Brooks,
Macro International Inc,
megan.a.m.brooks@macrointernational.com
|
|
In this analysis, two methods are used to model participation in the longitudinal outcome study. In both models, the outcomes are binary indicators of participation at each timeframe. In the first, the outcome of retention is modeled as a series of sequential decisions of whether or not to participate in the next longitudinal outcome study interview. Because an individual does not decide to be 'retained' in a study at one point, but rather makes a series of decisions at each interview of whether or not to continue to participate, this sequential model might capture these dynamics more accurately than other models.
In the second model, panel data analysis techniques are used to focus on the impact of past participation on future participation. While retention is not modeled sequentially here, the decision to participate at each timeframe is considered separately, while taking into consideration participation at each prior timeframe.
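A minimal sketch of the sequential modeling idea, assuming a person-period data layout: each row is one respondent at one follow-up wave, and a logistic regression predicts participation from the wave number, participation at the prior wave, and a covariate. The data, variable names, and effects below are simulated stand-ins, not the national evaluation's data or specification.

```python
# Discrete-time (person-period) logistic model of wave-by-wave participation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, waves = 500, 4

rows = []
for pid in range(n):
    age = rng.normal(35, 8)
    prior = 1
    for w in range(1, waves + 1):
        # Simulated propensity: declines over waves, boosted by prior participation.
        p = 1 / (1 + np.exp(-(1.5 - 0.3 * w + 0.8 * prior + 0.02 * (age - 35))))
        y = rng.binomial(1, p)
        rows.append({"pid": pid, "wave": w, "caregiver_age": age,
                     "prior_participation": prior, "participated": y})
        prior = y

pp = pd.DataFrame(rows)

# Sequential logit: participation at each wave given covariates and the
# immediately preceding participation decision.
model = smf.logit("participated ~ wave + prior_participation + caregiver_age", data=pp)
result = model.fit(disp=False)
print(result.summary())
```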
|
|
A Latent Class Analysis of Patterns of Respondent Participation in a Longitudinal Outcome Study
|
| Ye Xu,
Macro International Inc,
ye.xu@macrointernational.com
|
| Robert Stephens,
Macro International Inc,
robert.stephens@macrointernational.com
|
|
This presentation will explore the patterns of respondents' participation in the national evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program. Through this study we hope to develop a classification system for the longitudinal outcome study participants, who were heterogeneous in their participation in follow-up data collection, and to identify a set of key characteristic variables that predict these patterns of participation in the longitudinal study. We will present the utility of latent class analysis for accomplishing this type of classification. Latent class analysis (LCA) allows one to examine shared characteristics across groups of respondents with different distributions on several indicators at a point in time (Muthén, 2001). For this presentation, retention is defined through a series of dichotomous variables that represent participation at each of the follow-up waves of data collection.
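To illustrate the kind of classification LCA produces, the sketch below fits a two-class latent class model to simulated binary participation indicators with a small EM routine. In practice, dedicated latent variable software would be used; the classes, class sizes, and probabilities here are purely illustrative.

```python
# Toy latent class analysis of binary participation indicators via EM.
import numpy as np

rng = np.random.default_rng(4)

# Simulate two underlying classes ("completers" vs. "early drop-outs"),
# observed through four binary indicators: participated at waves 1-4.
true_profiles = np.array([[0.95, 0.90, 0.85, 0.80],
                          [0.80, 0.40, 0.20, 0.10]])
membership = rng.integers(0, 2, size=1000)
y = rng.binomial(1, true_profiles[membership])

def lca_em(y, n_classes=2, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, k = y.shape
    pi = np.full(n_classes, 1 / n_classes)                  # class sizes
    theta = rng.uniform(0.3, 0.7, size=(n_classes, k))      # item probabilities
    for _ in range(n_iter):
        # E-step: posterior class membership for each response pattern.
        log_lik = (y[:, None, :] * np.log(theta) +
                   (1 - y[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class sizes and conditional participation probabilities.
        pi = post.mean(axis=0)
        theta = (post.T @ y) / post.sum(axis=0)[:, None]
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
    return pi, theta, post

pi, theta, post = lca_em(y)
print("Estimated class sizes:", np.round(pi, 2))
print("Estimated participation probabilities by class:\n", np.round(theta, 2))
```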
|
|
Determinants of Retention in a Longitudinal Study using a Multilevel Modeling Approach
|
| Tesfayi Gebreselassie,
Macro International Inc,
tesfayi.gebreselassi@macrointernational.com
|
|
In any longitudinal study, participant loss during follow-up can potentially bias the results of analysis because of differences between those who drop out and those who continue to participate. In this presentation, we use data from the longitudinal outcome component of the national evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program, which funds the implementation of systems of care through the Center for Mental Health Services, as well as site-level information on communities participating in the evaluation, to investigate child, caregiver, and site-level characteristics that predict retention in the longitudinal outcome study.
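One way to sketch the multilevel idea, assuming Python and statsmodels are acceptable tools: fit a logistic model with a random intercept for each funded community (site) alongside child- and caregiver-level covariates. The covariate names, effect sizes, and data below are simulated for illustration and are not the evaluation's model specification.

```python
# Random-intercept logistic regression for retention, children nested in sites,
# fit with statsmodels' variational Bayes mixed GLM on simulated data.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(5)
n_sites, n_per_site = 30, 60

site = np.repeat(np.arange(n_sites), n_per_site)
site_effect = rng.normal(0, 0.6, size=n_sites)[site]     # site-level random intercepts
child_age = rng.normal(11, 3, size=site.size)
caregiver_strain = rng.normal(0, 1, size=site.size)

logit = 0.8 + site_effect - 0.05 * (child_age - 11) - 0.4 * caregiver_strain
retained = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"retained": retained, "site": site,
                   "child_age": child_age, "caregiver_strain": caregiver_strain})

# Random intercept for site; fixed effects for child- and caregiver-level covariates.
model = BinomialBayesMixedGLM.from_formula(
    "retained ~ child_age + caregiver_strain", {"site": "0 + C(site)"}, df)
result = model.fit_vb()
print(result.summary())
```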
|
|
Session Title: Show Me the Evidence: Evidence-Based Practice in Non-Profits
|
|
Multipaper Session 334 to be held in Room 105 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Jennifer Hamilton,
Westat,
jenniferhamilton@westat.com
|
|
Bridging Research and Practice: Key Characteristics of an Effective Research Collaboration Between a Non-Profit Agency and a University-Based Research Center
|
| Presenter(s):
|
| Kristin Duppong Hurley,
University of Nebraska Lincoln,
kdupponghurley2@unl.edu
|
| Tanya Shaw,
Boys Town,
shawt@boystown.org
|
| Stephanie Ingram,
Boys Town,
ingrams@boystown.org
|
| Annette Griffith,
University of Nebraska Lincoln,
annettekgriffith@hotmail.com
|
| Katy Casey,
University of Nebraska Lincoln,
katyjcasey@yahoo.com
|
| Abstract:
This paper will review the history of an effective research collaboration between a human service provider and a university-based research center, the key factors essential to its early success, obstacles that have been encountered and overcome, and a brief review of the progress made by the collaboration to date. It is expected that the audience will be able to apply the lessons learned to their own efforts of establishing effective research partnerships. It is hoped that the insights provided in this paper will (a) help other non-profit agencies select research collaborators, and (b) assist research organizations in identifying characteristics of service providers that would most likely participate in rigorous evaluation methodologies.
|
|
Overcoming Challenges to Determine Evidence-Based Practice in Community Agencies
|
| Presenter(s):
|
| Jill Bomberger,
University of Nebraska Omaha,
jbomberger@unomaha.edu
|
| Jeanette Harder,
University of Nebraska Omaha,
jharder@unomaha.edu
|
| Abstract:
This paper explores the roles of evaluators of community-based non-profit organizations as agencies continue to face increasing demands to become “evidence-based.” The paper to be presented reviews what evidence-based practice is and is not as it pertains to programs implemented by community agencies. Often exploration of the “evidence-base” for common interventions in child abuse prevention, for example, shows that programs simply cannot be said to be “evidence-based” due to lack of empirical evidence. Community agencies vary widely in their evaluation methods, yet nearly all have room for advancing practices. The presenters will discuss strategies for helping agencies take practical steps to improve data collection, measurement, logic models, and research design using a step-by-step research protocol for helping community agencies.
|
|
Where is Scouting Making a Difference Within the Community? Mapping Diverse Data to Locate and Identify "Hot-Spots" of Community Interaction
|
| Presenter(s):
|
| Didi Fahey,
Denver Area Council Boy Scouts of America,
fahey.13@osu.edu
|
| Abstract:
Determining how organizations interact with the community remains a pivotal point in evaluations. Typically, this interaction is measured by counting the number of public service announcements, customers, or even event attendees. Unfortunately, there are limits to what we can learn by counting the times we speak to the community. Using mapping software, community organizations can generate a number of maps to see precisely where, and in what manner, they interact with the larger community. The Denver Area Boy Scout Council was able to create a unique data set based on a variety of activities that spoke directly to multiple types of community interaction. Mapped activities ranged from fund-raising, to Eagle Scout projects, to meeting locations. Separately, each indicator tells a small story about some specifics of the scouting program. Together, however, a picture of scouting “hot-spots” emerges, enabling the community to see for itself where scouting makes a difference.
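As a toy illustration of the hot-spot idea (not the Council's actual GIS workflow), the sketch below plots the density of simulated activity coordinates; real use would rely on geocoded addresses and mapping software.

```python
# Density ("hot-spot") plot of simulated activity locations.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)

# Simulated longitude/latitude pairs clustered around a few activity centers.
centers = np.array([[-104.99, 39.74], [-105.08, 39.70], [-104.90, 39.80]])
points = np.vstack([c + rng.normal(0, 0.02, size=(200, 2)) for c in centers])

plt.hexbin(points[:, 0], points[:, 1], gridsize=25, cmap="YlOrRd")
plt.colorbar(label="Number of mapped activities")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Illustrative density of scouting activities (simulated)")
plt.show()
```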
|
| | |
|
Session Title: How Evaluation Results are Used by Policy Makers in the US and Internationally
|
|
Multipaper Session 335 to be held in Room 107 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Evaluation Use TIG
and the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Roger Miranda,
United Nations Office on Drugs and Crime,
nica9368@yahoo.com
|
|
A Longitudinal Study of the Uses of Public Performance Reports by Elected Officials
|
| Presenter(s):
|
| James McDavid,
University of Victoria,
jmcdavid@uvic.ca
|
| Irene Huse,
University of Victoria,
ihuse@uvic.ca
|
| Abstract:
The paper summarizes findings from a longitudinal study of the ways elected officials use public performance reports. In 2001, the British Columbia Legislature passed a law that mandates annual performance planning and reporting. The study obtained a baseline from members of the legislature (MLAs) before they saw their first performance report in 2003. The baseline focused on their expected uses of the performance reports. In 2005 and 2007, follow-up surveys were conducted with MLAs to track their actual uses of these reports. The presentation will compare usage levels for 15 different uses of performance information. Comparisons between members of the Executive Council (cabinet ministers) and backbenchers are included, as are comparisons between members of the governing party (Liberals) and the Opposition party (NDP). This study is unique – no other study has obtained this kind of information from the intended principal users of public sector performance reports.
|
|
An Exploratory Study of How Information is Used in the Policy Process by Education Policymaker Staff and Public Administrators
|
| Presenter(s):
|
| Katy Anthes,
The Third Mile Group,
katyanthes@comcast.net
|
| Abstract:
The vast amount of money spent on program and policy evaluation will be wasted unless the findings of evaluations are used to inform program designers, program implementers, and policymakers in their decision making. This study investigates how policy research, analysis, and evaluation are used by other actors, in addition to policymakers, in the education political and policy process. These other actors, particularly policymaker staffs and middle- to high-level administrators in state government, are important brokers of the policy process and often neglected in the literature on research use. Evaluators can learn from the fields of political science and public administration regarding how evaluation overlaps and integrates with policy analysis to inform the policy process. Through the findings of this mixed-method study using survey data and interviews, evaluators will be better able to integrate evaluation research into the political process by understanding the way it is offered into the policy process.
|
|
Evaluation as Transnational Policy: Evaluation Policy in Swedish Compulsory Education - Its International Influences and National Features
|
| Presenter(s):
|
| Christina Segerholm,
MidSweden University,
christina.segerholm@miun.se
|
| Abstract:
The paper describes and analyses national evaluation policy in Swedish compulsory education. It is based on two studies: a textual analysis of national policy documents concerning educational evaluation, and an interview study conducted with a selection of so-called ‘national policy brokers’. What are the characteristics of educational evaluation in Swedish schooling? Which European and international ideas on educational evaluation are disseminated to Swedish national policy? Results show that Swedish national evaluation policy for education consists of a web of interrelated evaluative activities. All activities are aligned to the principle of governing by objectives and results. Trans- and supranational evaluation policies, developed in various groups in the OECD and EU, influence Swedish national policies and are disseminated by an elite of policy actors, not necessarily formal national policy-makers. Educational evaluation policy may be understood as an example of a globalization process with multiscalar characteristics.
|
|
Use of Evaluations in Dutch Urban Policy
|
| Presenter(s):
|
| Ger Arendsen,
Open University of the Netherlands,
ger.arendsen@ou.nl
|
| Abstract:
Research findings on the use of evaluations in Dutch urban policy from the 1990s onward will be presented. This policy area is seen in the Netherlands as a prime example of consecutive governments trying to learn from past experiences through evaluations and failing to do so effectively. National governments are involved in a complex governance structure with regional and local governments, in a social and economic context of conflicting interests. The complexity and richness of the context, and the many attempts at learning from evaluations in a relatively short span of time, make it an interesting case to study using conceptual frames from the evaluation use literature. Within a case study design, the influence of different types of evaluation (such as monitoring and expert judgment) is investigated. The research aims primarily at testing conceptual models fitting in with recent developments in evaluation use theory.
|
| | | |
|
Session Title: Evaluation Capacity Building and the Nonprofit
|
|
Multipaper Session 336 to be held in Room 109 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
and the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Emily Hoole,
Center for Creative Leadership,
hoolee@ccl.org
|
| Discussant(s): |
| Lily Zandniapour,
Innovation Network,
lzandniapour@innonet.org
|
|
Building System-Wide Awareness, Readiness, and Capacity for Program Evaluation: Strategies, Challenges, and Lessons Learned from a National Capacity Building Effort
|
| Presenter(s):
|
| Barbara Bedney,
United Jewish Communities,
barbara.bedney@ujc.org
|
| Abstract:
United Jewish Communities (UJC), the non-profit umbrella organization of the Jewish federation system, represents 155 federations and 400 smaller communities that provide humanitarian assistance through more than 1,300 institutions in North America, Israel, and the world. As part of a broad strategy to promote research and evidence-based practice in the federation system, and to create leadership and a shared learning environment throughout the system, UJC has initiated a multi-level capacity-building effort to increase the ability of federations to engage in program evaluation. This session will provide an overview of the level-of-intervention framework that is guiding our capacity-building efforts, highlight our efforts to promote the use of logic models and systematic, theory-driven evaluation in all these capacity-building activities, address some of the challenges that have emerged during the capacity-building process, and present a series of lessons learned for other organizations engaging in (or thinking of engaging in) similar capacity-building efforts.
|
|
Evaluation Capacity Among Nonprofit Organizations: Is the Glass Half-Empty or Half-Full?
|
| Presenter(s):
|
| Joanne Carman,
University of North Carolina Charlotte,
jgcarman@uncc.edu
|
| Kimberly A Fredericks,
Indiana State University,
kfredericks@isugw.indstate.edu
|
| Abstract:
In order to better understand the evaluation capacity of nonprofit organizations, we gathered data through a mail survey of nonprofit organizations providing human services in the state of Indiana. In this paper, we report the findings of a cluster analysis which suggests that when it comes to evaluation capacity, there are three types of nonprofit organizations: those that are satisfied with their capacity to do evaluation; those that have internal support for evaluation, yet have evaluation design, data collection, and other issues; and those that are struggling across the board with evaluation and report little support for evaluation from funders, the board, management, and staff. The findings from this study not only illustrate the range we see in the evaluation capacity of nonprofit organizations, but also help to identify specific areas where evaluators, funders, and nonprofit managers can help to improve the evaluation capacity of nonprofit organizations.
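For readers who want to see the mechanics, the sketch below runs a k-means cluster analysis (k = 3) on simulated, standardized capacity scores; the scale names and data are hypothetical, and the paper's own analysis may use a different clustering method.

```python
# Schematic cluster analysis of organizational evaluation-capacity scores.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_orgs = 300

df = pd.DataFrame({
    "internal_support": rng.normal(3.5, 0.8, n_orgs),   # 1-5 scale scores (simulated)
    "design_skills": rng.normal(3.0, 0.9, n_orgs),
    "data_collection": rng.normal(3.2, 0.9, n_orgs),
    "funder_support": rng.normal(3.4, 1.0, n_orgs),
})

X = StandardScaler().fit_transform(df)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

df["cluster"] = kmeans.labels_
# Cluster profiles: mean score on each capacity dimension within each cluster.
print(df.groupby("cluster").mean().round(2))
```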
|
|
Facilitating Capacity Building Through Organizational Development Processes: Engaging the Independent Living Movement in Evaluation
|
| Presenter(s):
|
| Tiffeny Jimenez,
Michigan State University,
jimene17@msu.edu
|
| Abstract:
Program evaluation within the Independent Living (IL) movement has recently become a ground-breaking and crucial national endeavor. Although Centers for Independent Living (CILs) are grassroots organizations that often would like to sustain their organizational functioning without the help of government funding, many do receive those funds, and given the political climate, these organizations are being pressed more than ever to demonstrate their effectiveness in improving the lives of people with disabilities. CILs in Michigan have begun to take on this mission of evaluating and demonstrating their worth through Organizational Development (OD), working through a Process Consultation approach in partnership with other supportive agencies. This non-expert model is consistent with the philosophy of IL and has effectively built the organizational capacities of CILs. The discussion of this work has implications for theory and for practical approaches to facilitating OD processes.
|
| | |
|
Session Title: Frameworks for Evaluating Teacher Professional Development: Implementation, Outcomes, and Impact
|
|
Multipaper Session 337 to be held in Room 111 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Sheryl Gowen,
Georgia State University,
sgowen@gsu.edu
|
|
Quantifying the Qualitative: A Text Analysis of Coaching Staff Journals to Boost Understanding of Teacher Professional Development Program Status
|
| Presenter(s):
|
| Keith Murray,
MA Henry Consulting LLC,
keithsmurray@mahenryconsulting.com
|
| Martha Henry,
MA Henry Consulting LLC,
mahenry@mahenryconsulting.com
|
| Abstract:
Quantifying the qualitative presents a frequent challenge to evaluators, given the need to convert raw program materials into supportable process and summative evidence. Evaluations of teacher professional development programs usually focus on quantitative evaluator-provided instruments, such as tests, surveys and observation reports, to understand program effectiveness and progress towards objectives. Evaluation of program-derived products offers a more qualitative source of program information. These texts include teacher products such as portfolios and lesson plans and program staff products such as report narratives and journals. To test qualitative methods, evaluators of an NSF-funded grades 6-8 math-science partnership examined the journals of program staff coaches working with teachers, schools and districts for evidence of partnership dynamics, program challenges, problem solving and teacher engagement. The paper reports results of this analysis and its contribution to the program evaluation, and discusses application of such techniques to extant data sources in K-12 teacher professional development programs.
|
|
Evaluating the Impact of Professional Development on Teachers' Practice
|
| Presenter(s):
|
| Patricia Moore Shaffer,
College of William and Mary,
pmshaf@wm.edu
|
| Jan Rozzelle,
College of William and Mary,
mjrozz@wm.edu
|
| Abstract:
The School-University Research Network (SURN) of the College of William and Mary has provided professional development in research-based content literacy assessment, standards, and strategies to three successive cohorts of middle school core content teachers. The professional development model features a summer workshop for school teams, follow-up workshops during the academic year, school-based classroom observations and coaching, collaborative lesson planning, and peer mentoring. Using a mixed-method evaluation design and varied data collection strategies including classroom observation, interviews, surveys, and document analysis, SURN staff has sought to assess the impact of this program on teachers’ practice and student achievement. During this paper presentation, SURN staff will discuss the evaluation design, data collection and analysis, and significant findings in response to the evaluation’s research questions.
|
|
Using a Multi-Faceted Approach to Evaluating a Statewide Professional Development Program from Conceptualization Through Implementation
|
| Presenter(s):
|
| Jacqueline Stillisano,
Texas A&M University,
jstillisano@tamu.edu
|
| Hersh Waxman,
Texas A&M University,
hwaxman@tamu.edu
|
| Karin Sparks,
Texas A&M University,
karinsparks@tamu.edu
|
| Brooke Kandel-Cisco,
Texas A&M University,
bkandel@tamu.edu
|
| Sue Wedde,
Texas A&M University,
swedde@tamu.edu
|
| Abstract:
This study showcases an evaluation of a statewide professional development project in Texas and provides a model of a broad-based approach to program evaluation. Using the Accountability, Effectiveness, Impact, Organizational Context, and Unanticipated Outcomes (AEIOU) evaluation framework (Simonson, 1997; Sorensen & Sweeney, 1997), the evaluators examined the content, process, and context variables of the curriculum materials developed; the workshops where the curriculum was introduced; and the implementation of the curriculum. Both qualitative and quantitative methods were employed, and multiple sources were used for data collection, including focus groups, questionnaires, surveys, and observations (different observation tools were developed for different contexts). The evaluation has had, and continues to have, many facets over an extended period, from the formative evaluations that contributed to an iterative process in the design and development of the program to the summative evaluations of the materials, the workshops, and the implementation of the curriculum.
|
|
Using Systems Approaches to Evaluate Practice Outcomes of Teacher Professional Development
|
| Presenter(s):
|
| Janice Noga,
Pathfinder Evaluation and Consulting,
jan.noga@stanfordalumni.org
|
| Abstract:
The purpose of this paper presentation is to discuss the use of systems approaches to evaluate transfer of training to classroom practice among teachers participating in a statewide professional development program in literacy instruction. While it is tempting to view changes in classroom practice as a logical outcome of changes in knowledge and skills, the reality is far more complex. In evaluating any professional development effort that seeks to change teachers’ practice, one must keep in mind that classroom practice, while an important influence on student achievement, is but one element of a much more complex system that is the school or district. This presentation will focus on the methodological challenges faced in documenting practice outcomes and describe the development and use of a systems framework to assess the potential for program learning to impact classroom practices of participating teachers.
|
| | | |
|
Session Title: New Tools for Old Methods: Using Technology for Evaluation Methodology and Technical Assistance
|
|
Panel Session 338 to be held in Room 113 in the Convention Center on Thursday, Nov 6, 1:40 PM to 3:10 PM
|
|
Sponsored by the Integrating Technology Into Evaluation TIG
|
| Chair(s): |
| Rashon Lane,
Northrop Grumman Corporation,
rlane@cdc.gov
|
| Abstract:
Advances in technology provide new ways to improve efficiency in conducting traditional evaluation activities. The Centers for Disease Control and Prevention (CDC), Division for Heart Disease and Stroke Prevention (DHDSP), aims to foster a skilled and engaged public health workforce by incorporating technology into evaluation, including collecting evaluation data, conducting focus groups, and structuring a needs assessment. DHDSP also piloted a professional network for evaluators in state health departments who have advanced evaluation skills. However, effective and appropriate use of technology involves more than skill in operating a new tool or software package. Evaluators must also build into their evaluation planning an understanding of how the impact of a new technology may differ from that of well-understood traditional methods. This panel session provides three examples of how technology was integrated into traditional evaluation activities and illustrates the range of skills evaluators need to attain competency in using technology for evaluation.
|
|
Visceral Reactions to Virtual Panels
|
| Susan Ladd,
Centers for Disease Control and Prevention,
sladd@cdc.gov
|
| Joan Ware,
National Association of Chronic Disease Directors,
ware@chronicdisease.org
|
| Lazette Lawton,
Centers for Disease Control and Prevention,
llawton@cdc.gov
|
|
The Centers for Disease Control and Prevention's Division for Heart Disease and Stroke Prevention (DHDSP) often convenes expert panels when developing CDC recommendations, guidance, and products. Experts with the level of knowledge and experience sought by CDC are in high demand, so scheduling in-person meetings is very difficult. To address this issue, DHDSP collaborated with the Cardiovascular Health Council of the National Association of Chronic Disease Directors to convene a focus group using a browser-based software tool that facilitates management of a group process. The interactive tool eliminated participant travel time and provided real-time group interaction for considering options, reaching consensus, and setting priorities. We describe how the tool was used, share feedback from focus group participants, and summarize the benefits and challenges of using the web-based system.
|
|
|
Conducting a Training Needs Assessment Using a Live Web Discussion Board
|
| Rashon Lane,
Northrop Grumman Corporation,
rlane@cdc.gov
|
| Susan Ladd,
Centers for Disease Control and Prevention,
sladd@cdc.gov
|
| Jan Jernigan,
Centers for Disease Control and Prevention,
jjernigan1@cdc.gov
|
| Linda Redman,
Centers for Disease Control and Prevention,
lredman@cdc.gov
|
| Margaret Casey,
National Association of Chronic Disease Directors,
casey@chronicdisease.org
|
|
The Division for Heart Disease and Stroke Prevention (DHDSP) of the Centers for Disease Control and Prevention increases the skills and capacity of the public health workforce in heart disease and stroke prevention by holding an annual skill-building training. To inform the planning of this training, DHDSP used a live web discussion board to conduct a needs assessment. Advantages of this approach were that participants provided real-time responses when probed by facilitators and communicated with one another through threaded discussions, enhancing the level of detail in their responses. The web board responses allowed us to gain a deeper understanding of participant needs so that planning for the training could be sharply focused. This presentation will explain the application of a live discussion board process and illustrate the multiple roles of the evaluator in structuring an online evaluative process.
| |
|
Professional Networking Among Advanced Evaluators: The Pilot of the Advanced Evaluation Network (AEN)
|
| Lisa Levy,
Northrop Grumman Corporation,
llevy@cdc.gov
|
| Michael Schooley,
Centers for Disease Control and Prevention,
mschooley@cdc.gov
|
|
CDC's Division for Heart Disease and Stroke Prevention (DHDSP) strives to foster a skilled and engaged public health workforce. To address the needs of experienced evaluators in state-based chronic disease programs, DHDSP piloted a professional networking group called the Advanced Evaluation Network (AEN). Relying on technology to promote distance-based networking among evaluators, AEN connected six advanced-level state health department program evaluators from across the U.S. for five teleconference calls. AEN tested the concept of a self-directed network with the goals of information sharing and peer-to-peer learning among chronic disease evaluators. This presentation will describe the concept and process of distance-based professional networking and how future applications of technology can be used to meet advanced evaluation needs. Highlights of professional networking among network members, an assessment of the pilot, and its pitfalls will also be discussed.
| |