|
Session Title: The Power of Self in Systems: Organizational Learning From Self-Determination Theory-driven Evaluations
|
|
Demonstration Session 535 to be held in International Ballroom A on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Program Theory and Theory-driven Evaluation TIG
|
| Presenter(s): |
| Deborah Wasserman,
The Ohio State University,
wasserman.12@osu.edu
|
| Abstract:
Self-Determination Theory-driven logic models promise important new avenues for culturally competent, responsive, and ultimately more effective program evaluation methodology. One avenue is the continual evaluative learning these models produce. This demonstration presents the conceptual framework for creating these models for both quality improvement and outcome evaluation. According to Self-Determination Theory (SDT), optimal functioning of human systems (i.e., communities, families, human service programs, etc.) both causes and is caused by a sense that the basic psychological needs for competence, relatedness, and autonomy are satisfied. This notion leads to SDT models that augment more traditional logic models with eight measurable program "pulse points," which produce learning opportunities that enhance durable program outcomes (the eighth pulse point). Case studies of two evaluations (a statewide suicide prevention screening program and a local comprehensive after-school program) will be used to illustrate how SDT-based data generates learning and Rapid Cycle Quality Improvement responses.
|
|
Session Title: Applications of Systems Thinking to Educational Evaluation
|
|
Multipaper Session 537 to be held in International Ballroom C on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Systems in Evaluation TIG
|
| Chair(s): |
| Janice Noga,
Pathfinder Evaluation and Consulting,
jan.noga@stanfordalumni.org
|
|
Schooling as a Complex System: Appropriate Frameworks for Educational Evaluation
|
| Presenter(s):
|
| Tamara Walser,
University of North Carolina, Wilmington,
walsert@uncw.edu
|
| Abstract:
The purpose of this presentation is to propose appropriate frameworks for educational evaluation based on current research in complexity science and its application in education and the social sciences. Complex systems are, by definition, holistic, non-linear, unpredictable, emergent, adaptive, and changing. The brain and the weather are examples of complex systems, as are social phenomena such as learning, teaching, and schooling. Although most would agree that schooling is a complex phenomenon, many of the frameworks used for instruction, student assessment, and educational evaluation do not account for this complexity; they are based on linear, cause-and-effect notions of schooling. Given the need for rigorous evaluations of educational programs and the increasing complexity of schooling, it is important that evaluators use appropriate frameworks so that results are valid and meaningful.
|
|
What Else is Happening With Squishy and Marvin: Combining Program Logic, Appreciative Inquiry, and Complex Adaptive Systems Frameworks in Evaluating a K-12 Science Education Project
|
| Presenter(s):
|
| Lois-ellin Datta,
Datta Analysis,
datta@ilhawaii.net
|
| Abstract:
Squishy and Marvin are two squid, enthusiastically dissected by fifth graders as part of a National Science Foundation project (PRISM) bringing together classroom teachers and graduate science students. The evaluation framework combines (1) program logic in assessing implementation, (2) appreciative inquiry in seeing what is happening in the classrooms, and (3) complex adaptive systems (CAS) to understand what may emerge from PRISM and what else is happening that may affect PRISM, in a contribution analysis. The paper focuses on the methodological nuts and bolts of applying CAS, from explaining CAS to the stakeholders, to training the CAS evaluator, to data analysis. It is a promises-and-pitfalls case example of learning how to understand context and consequences through the framework of CAS.
|
| |
|
Session Title: Costs are All That Matters (With Studies That Prove It): About and Beyond Cost-inclusive Evaluation
|
|
Expert Lecture Session 540 to be held in Liberty Ballroom Section A on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
|
| Chair(s): |
| Brian Yates,
American University,
brian.yates@mac.com
|
| Presenter(s): |
| Brian Yates,
American University,
brian.yates@mac.com
|
| Abstract:
[Presented tongue-in-cheek] Three quantitative studies show that costs are important to evaluate; outcomes are not, actually. Mental health services are found to differ not in effectiveness to any appreciable degree, but to potentially differ in cost by several orders of magnitude. Cost per pound (lost) also was found to differ by one or more orders of magnitude between obesity treatments. And, in a well-funded attempt to prevent substance abuse, the least expensive (if somewhat iatrogenic) component was used most. Programs are offered as entitlements anyway: decision-makers need to know which "standard practice" costs least.
Ways to cope with the superior importance of costs are offered, to aid evaluators with this Zeitgeist. "Cost-inclusive" evaluation is offered as a helpful, if mandatory, reconceptualization. We also might measure monetary and monetizable outcomes of programs, including savings of future expenditures and enhancement of client income, as well as costs. :-)
|
|
Session Title: Empirical Research on Evaluation: Evidence-based Contributions to Evaluation Theory
|
|
Multipaper Session 541 to be held in Liberty Ballroom Section B on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Theories of Evaluation TIG
|
| Chair(s): |
| Janice Fournillier,
Georgia State University,
jfournillier@gsu.edu
|
|
Evaluator Contextual Responsiveness: A Simulation Study
|
| Presenter(s):
|
| Tarek Azzam,
University of California, Los Angeles,
tazzam@ucla.edu
|
| Abstract:
To test how evaluators responded to varying stakeholder interests, a simulation study was conducted in which evaluators had the opportunity to modify their evaluation design in response to differing stakeholder perspectives. The purpose of this study was to test how the political context and evaluator characteristics affected evaluation design decisions. By systematically varying stakeholder opinions of the evaluation, it was possible to examine how evaluators reshaped their designs to fit different political contexts. The study's results revealed that evaluators were more responsive to stakeholders who held more logistical control over the evaluation (i.e., funding, data access). In these conditions evaluators were willing to modify more evaluation design elements than they did for stakeholders with less logistical control over the evaluation. Additionally, findings suggested that evaluators' methodological and utilization preferences strongly influenced their evaluation design decisions.
|
|
What's Hot and What's Not? Sifting Through Six Years and Three Journals Worth of Evaluation Theory and Research
|
| Presenter(s):
|
| Bernadette Campbell,
Carleton University,
bernadette_campbell@carleton.ca
|
| Deborah Reid,
Carleton University,
debbie.reid@sympatico.ca
|
| Abstract:
There has been a push for more empirical research on program evaluation to ground evaluation theory in a base of evidence. With such a broad charge, however, it is difficult to know where precisely to begin. Indeed, evaluation theory covers a lot of territory. In the present study, we use a sample of the evaluation literature as a starting point. We present the results of a systematic review and content analysis of over 500 abstracts, representing 6 years (2000-2006) of published research in three well-known evaluation journals (American Journal of Evaluation, Canadian Journal of Program Evaluation, Evaluation). Among other dimensions, abstracts were coded according to Shadish, Cook & Leviton's (1991) five dimensions of evaluation theory (social programming, valuing, knowledge use, knowledge construction, and evaluation practice). It is hoped that the results of this review will (a) paint a picture of current theoretical debates and discussions in the field, and (b) provide a starting point for establishing priorities for the empirical program of evaluation (Shadish et al., 1991).
|
| |
|
Session Title: Emergency Preparedness Standards of Acceptability for Evaluation
|
|
Multipaper Session 543 to be held in Edgar Allen Poe Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Disaster and Emergency Management Evaluation TIG
|
| Chair(s): |
| Ralph Renger,
University of Arizona,
renger@u.arizona.edu
|
| Discussant(s): |
| Ralph Renger,
University of Arizona,
renger@u.arizona.edu
|
| Abstract:
This session will discuss the importance of using emergency preparedness standards of acceptability in the evaluation of emergency preparedness initiatives (courses, exercise programs, and so forth). The Core Bioterrorism Competencies for all Public Health Workers (Core Competencies), a set of public health preparedness standards, will be used to illustrate how standards can serve as the foundation of course content and evaluation. We will then focus on what evaluators can do in situations where standards of acceptability are revised or new standards are introduced. We will argue for the importance of understanding the relationships among new, revised, and existing standards to determine if, and where, modifications to existing content and evaluation strategies are needed.
|
|
The Importance of Using Emergency Preparedness Standards of Acceptability for Evaluation
|
| Adriana Cimetta,
University of Arizona,
cimetta@email.arizona.edu
|
| Anneke Jansen,
University of Arizona,
annekej@u.arizona.edu
|
| Erin Peacock,
University of Arizona,
epeacock@email.arizona.edu
|
| Kim Fielding,
University of Arizona,
kjf@u.arizona.edu
|
|
This session will focus on the utility of the Core Bioterrorism Competencies for all Public Health Workers (Core Competencies) in evaluating emergency preparedness initiatives (courses, exercise programs and so forth). The Core Competencies are, in essence, standards of acceptability for what public health professionals should know with regard to emergency response. From a planning and evaluation standpoint, the Core Competencies are useful in (1) guiding content development, and (2) focusing the evaluation on appropriate outcomes. The result of using the Core Competencies toward these ends is that the initiatives will have the greatest chance of success. This session will discuss the application of the Core Competencies in the planning and evaluation of emergency preparedness initiatives.
|
|
The Benefits of Understanding the Relationships Between Emergency Preparedness Standards of Acceptability From an Evaluation Standpoint
|
| Anneke Jansen,
University of Arizona,
annekej@u.arizona.edu
|
| Adriana Cimetta,
University of Arizona,
cimetta@email.arizona.edu
|
| Erin Peacock,
University of Arizona,
epeacock@email.arizona.edu
|
| Kim Fielding,
University of Arizona,
kjf@u.arizona.edu
|
|
The Core Bioterrorism Competencies for all Public Health Workers (Core Competencies) can be considered standards of acceptability for evaluating public health preparedness initiatives (courses, exercise programs, and so forth). It is important for the success of an initiative that evaluation and content development be centered on these standards. Frequently, existing emergency preparedness standards are revised and new standards are developed. This session will explore the impact of introducing new standards and/or modifying existing standards on the development and evaluation of preparedness initiatives. We will argue for the importance of understanding the relationships among standards (new, revised, and current) to determine whether it is necessary to modify the existing evaluation. Finally, we will present a methodology for determining the relationship between changing standards.
|
|
Session Title: Strategic Evaluation in a Public Research Institute to Contribute to Innovation
|
|
Multipaper Session 544 to be held in Carroll Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Osamu Nakamura,
National Institute of Advanced Industrial Science and Technology,
osamu.nakamura@aist.go.jp
|
| Discussant(s): |
| Naoto Kobayashi,
National Institute of Advanced Industrial Science and Technology,
naoto.kobayashi@aist.go.jp
|
| Abstract:
To advance the strategic implementation of R&D at the National Institute of Advanced Industrial Science and Technology (AIST) and to translate R&D achievements into outcomes that lead to innovation, two types of strategic evaluation systems have been developed: one for research units and one for research-support and research-administration departments.
For the effective and efficient execution of research, AIST strategically shifted the evaluation of research units from an output perspective to an outcome perspective at the beginning of its second research term.
In parallel, AIST has been developing the evaluation of its research-support and research-administration departments in order to improve service, increase efficiency, and strengthen collaboration with research units in creating industrial innovation.
We continue to evolve our evaluation system so that AIST, guided by clear strategies, can play a key role in the process of innovation for global competitiveness.
|
|
Strategic Evaluation of Research Units Towards Innovation in a Public Research Institute
|
| Osamu Nakamura,
National Institute of Advanced Industrial Science and Technology,
osamu.nakamura@aist.go.jp
|
| Shin Kosaka,
National Institute of Advanced Industrial Science and Technology,
shin.kosaka@aist.go.jp
|
| Michiko Takagi Sawada,
National Institute of Advanced Industrial Science and Technology,
takagi.sawadamichiko@aist.go.jp
|
| Isao Matsunaga,
National Institute of Advanced Industrial Science and Technology,
matsunaga-isao@aist.go.jp
|
| Masao Koyanagi,
National Institute of Advanced Industrial Science and Technology,
m-koyanagi@aist.go.jp
|
| Koichi Mizuno,
National Institute of Advanced Industrial Science and Technology,
kkk-mizuno@aist.go.jp
|
| Naoto Kobayashi,
National Institute of Advanced Industrial Science and Technology,
naoto.kobayashi@aist.go.jp
|
|
The National Institute of Advanced Industrial Science and Technology (AIST), a public research institute under the Ministry of Economy, Trade and Industry, is expected to perform R&D activities that contribute to the social and economic development of the country and improve the welfare of its people.
Evaluation should be designed to promote such activities under an appropriate evaluation policy. To advance the strategic implementation of R&D at AIST, we have been developing a strategic evaluation system based on the following policies:
1) Importance of strategy formulation,
2) Significance of ex-ante evaluation,
3) Evaluation from the viewpoint of outcomes,
4) Reflection of evaluation and the design of new strategy,
5) Strategic evaluation linkage
R&D evaluation should work effectively on all sides: R&D performance, budget allocation, and the recipients of R&D achievements. With strategic evaluation in each sector, R&D activities well suited to producing innovation can be carried out.
|
|
Evaluation System with PDCA Cycle in the Management of National Institute of Advanced Industrial Science and Technology
|
| Tomoko Mano,
National Institute of Advanced Industrial Science and Technology,
mano-tomoko@aist.go.jp
|
| Sunao Kunimatsu,
National Institute of Advanced Industrial Science and Technology,
s.kunimatsu@aist.go.jp
|
| Osamu Nakamura,
National Institute of Advanced Industrial Science and Technology,
osamu.nakamura@aist.go.jp
|
| Yoshikazu Arai,
National Institute of Advanced Industrial Science and Technology,
arai-yoshikazu@aist.go.jp
|
| Hiroshi Sato,
National Institute of Advanced Industrial Science and Technology,
h-sato@aist.go.jp
|
| Shinichi Kikuchi,
National Institute of Advanced Industrial Science and Technology,
s.kikuchi@aist.go.jp
|
| Suzuko Nakatsu,
National Institute of Advanced Industrial Science and Technology,
suzuko-nakatsu@aist.go.jp
|
| Naoto Kobayashi,
National Institute of Advanced Industrial Science and Technology,
naoto.kobayashi@aist.go.jp
|
|
The PDCA cycle provides a framework for improving the management of an organization. We have learned that the cycle is effective in improving the efficiency of the activities of AIST's research-support and research-administration departments. The cycle, as a part of management, and the evaluation system for the 'C' (check) step were introduced in 2005. The cycle should be implemented repeatedly, as quickly as possible, in upward spirals; we call this linkage the evolutionary PDCA (PDCA-E). We have adopted a two-year cycle: one year for evaluation ('C') and the following year for improvement ('APD'). The 2005 evaluation clarified several issues spanning multiple departments, and in 2006 the required actions and the persons responsible for leading the improvements were assigned. We continue to build the cycle so that evaluation results are used for the effective management of AIST.
|
|
Session Title: Introducing SAMMIE - Successful Assessment Methods and Measurement In Evaluation: A Web-based, Self-paced, Evaluation Skill Development Course
|
|
Demonstration Session 545 to be held in Pratt Room, Section A on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Presenter(s): |
| Karen Bruns,
The Ohio State University,
bruns.1@osu.edu
|
| Debby Lewis,
The Ohio State University,
lewis.205@osu.edu
|
| Thomas Archer,
The Ohio State University,
archer.3@osu.edu
|
| Abstract:
Is evaluating the impact of community-based programs new to you? Or have you done your share of evaluating programs but want to refresh your knowledge of specific evaluation techniques? Whatever your level of experience in planning or evaluating programs, SAMMIE can be the web portal that helps expand your evaluation skills. SAMMIE stands for Successful Assessment Methods & Measurement In Evaluation and is a one-stop web site for valuable evaluation resources. Through SAMMIE one can: [1] Access resources on 21 evaluation-related topics; [2] Read the best literature on the web related to program planning and evaluation; [3] Ask an Expert questions about program planning and evaluation; and [4] Develop a personalized program with an evaluation plan. To get started with SAMMIE, go to Sammie.osu.edu and click the "login" link to create a user name and password.
|
|
Session Title: Intelligence Analysis: Maximizing Learning and Decision Making From Evaluations in Public and Private Sector Settings
|
|
Panel Session 546 to be held in Pratt Room, Section B on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Business and Industry TIG
|
| Chair(s): |
| Darryl Lawton,
McManis and Monsalve Associates,
dlawton@mcmanis-monsalve.com
|
| Abstract:
Managers are increasingly experiencing information overload, making effective decision-making more difficult. Evaluations of organizational processes, program outcomes, or business trends may add to this overload unless evaluators present actionable decision opportunities to organizational leaders. While intelligence analysis is often thought to belong strictly within the realms of national security and military operations, it can also be viewed as a disciplined approach providing an effective set of evaluation methods that help evaluators get the right information to the right person at the right time. It also provides a means of overcoming cognitive biases common to analysis when dealing with environments that are dynamic and subject to a variety of influences, particularly competitive business environments. This discussion, including case studies, will give evaluators knowledge of how intelligence analysis can be used both to increase learning from evaluations and to provide focused, actionable results to managers, particularly in the private sector.
|
|
Applying Intelligence Analysis to Private Sector Evaluations
|
| Nancy Potok,
McManis and Monsalve Associates,
npotok@mcmanis-monsalve.com
|
|
Nancy A. Potok has more than 27 years of experience as a Federal executive, non-profit administrator, and consultant. She is a Fellow of the National Academy of Public Administration and received the Arthur S. Flemming Award for her work in evaluating the effects of proposed legislation on the Federal Judiciary. Ms. Potok is a PhD candidate at George Washington University, where her field is program evaluation. She is currently the Chief Operating Officer of McManis & Monsalve Associates, a management consulting firm that is a leader in applying intelligence analysis to evaluations. This presentation will use case studies to demonstrate how intelligence analysis can enhance evaluations, particularly when applied to businesses that are interested in evaluating their (1) ability to achieve strategic business goals; (2) competitive market position; (3) organizational processes and structure; and (4) ability to successfully absorb major changes, such as going through a merger or acquisition.
|
|
|
Intelligence Analysis Techniques and Applications in an Evaluation Environment
|
| Robert Heibel,
Mercyhurst College,
rheibel@mercyhurst.edu
|
|
A twenty-five-year veteran of the FBI, Heibel served as its deputy chief of counter-terrorism. He holds a master's degree from Georgetown University and is currently the Executive Director of the Mercyhurst College Institute for Intelligence Studies and the developer of its unique Research/Intelligence Analyst Program (R/IAP). The award-winning R/IAP was the first four-year college undergraduate program designed to prepare qualified entry-level intelligence analysts for government and the private sector. Heibel also directs the Institute's Center for Information Research, Analysis and Training (CIRAT), an academic pioneer in the application of computerized analytical tools and techniques to open-source information. Heibel has served on the board of directors of several national intelligence associations and is a founder and the vice chairman of the International Association for Intelligence Education. He has received the Society of Competitive Intelligence Professionals' Meritorious Award and a lifetime achievement award for his work in open-source intelligence.
| |
| Roundtable:
No Child Left Behind Act, Logic Models and Instructional Systems Design Models: Action Research in English as a Second Language (ESL) and Music Classrooms: Case Studies in the Making |
|
Roundtable Presentation 547 to be held in Douglas Boardroom on Friday, November 9, 10:20 AM to 11:05 AM
|
| Presenter(s):
|
| Tamara J Barbosa,
PhD's Consulting,
dr.barbosa@phdsconsulting.com
|
| Rodnie Barbosa,
District of Columbia Public Schools,
kalorama.17th@yahoo.com
|
| Mary Jo DePaola,
Orange County Public Schools,
mdepaol@k12.ocps.net
|
| Abstract:
The goal of this roundtable is to discuss and explore the nexus between theoretical logic models, instructional systems design models, and the mandate of the No Child Left Behind Act of 2001 to create “scientifically-based instruction.” One way to implement “scientifically-based instruction” is to create educational programs using instructional systems design models and principles. Systematic instructional design requires the integration of evaluation into every step of the process. Logic models can serve as a way to organize the evaluation process and obtain reliable and valid knowledge about educational activities and programs.
|
|
Session Title: A Collaborative Practice-based Approach to Evaluation Research
|
|
Expert Lecture Session 548 to be held in Hopkins Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Social Work TIG
|
| Chair(s): |
| Carrie Petrucci,
EMT Associates Inc,
cpetrucci@emt.org
|
| Presenter(s): |
| Carrie Petrucci,
EMT Associates Inc,
cpetrucci@emt.org
|
| Abstract:
Putting the goals of practice first has been advocated as a means to produce relevant research that is both rigorous and utilized by practitioners (Epstein, 2001; Gingerich, 1990). Practice-based research (Epstein, 2001) emphasizes putting practice needs ahead of research needs, and ideally, the needs of both are met simultaneously. The availability of versatile computer database technology has created another way that researchers and practitioners can work together. This expert lecture will apply these concepts to evaluation research to outline the key elements of a collaborative practice-based evaluation approach that incorporates academic theory, existing research, practice knowledge, and computer technology. This will be accomplished through four project illustrations: a two-year curriculum development project with a probation department; a self-administered computerized screening tool in a DUI court; an outcomes measurement system for a job preparation program with a community-based agency working with parolees; and an internet-based concept mapping study.
|
|
Session Title: Applicability and Evaluation of Model of Global Baseline Survey Adapted for Use in Bangladesh, Bolivia and Tanzania
|
|
Expert Lecture Session 549 to be held in Peale Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Paul L Johnson,
National Institutes of Health,
pjohnson@mail.nih.gov
|
| Presenter(s): |
| Nalin Johri,
EngenderHealth,
njohri@engenderhealth.org
|
| Hannah Searing,
EngenderHealth,
hsearing@engenderhealth.org
|
| Inés Escandon,
EngenderHealth,
iescandon@engenderhealth.org
|
| Erin Mielke,
EngenderHealth,
emielke@engenderhealth.org
|
| Rosemary Duran,
EngenderHealth,
rduran@engenderhealth.org
|
| Javier Monterrey,
Independent Consultant,
jmonterrey@yahoo.com
|
| Mahboob Alam,
EngenderHealth,
mealam@engenderhealth.org
|
| Grace Lusiola,
EngenderHealth,
glusiola@engenderhealth.org
|
| Abstract:
EngenderHealth's ACQUIRE (Access, Quality, and Use in Reproductive Health) project works with local, national, and international partners to advance and support family planning and reproductive health (FP/RH) services, with a focus on facility-based care. In 2004-05, rigorous baseline evaluations were conducted in three key countries: Bangladesh, Bolivia, and Tanzania. These baselines used instruments previously developed by MEASURE for the Service Provision Assessment and the Quick Investigation of Quality. A process evaluation of these baseline surveys, including a review of reports, phone interviews, and a metaevaluation using the program evaluation standards, is used to draw lessons learnt in adapting and using instruments in diverse contexts. Lessons learnt include the value of local involvement and ownership, clear expectations and coordination, realistic timelines, external technical consultation, and timely development of analysis and tabulation plans. These lessons feed into planning for the endline evaluation scheduled for 2007-08 in these countries.
|
|
Session Title: Assessing Advocacy: Building Evaluation Frameworks and Models That Work
|
|
Multipaper Session 550 to be held in Adams Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Chair(s): |
| Laura Roper,
Brandeis University,
l.roper@rcn.com
|
|
Learning From Campaigning and Advocacy: There's Method in the Madness
|
| Presenter(s):
|
| Laura Roper,
Brandeis University,
l.roper@rcn.com
|
| Abstract:
In the international arena, campaigning and advocacy are becoming an increasingly important aspect of creating an enabling environment for addressing issues of poverty, health, and humanitarian crises. Campaigning and advocacy tend to be non-linear processes driven by influencing opportunities, shaped by contingencies, involving multiple actors, and pursuing a rolling set of goals. Culturally, advocates and campaigners have limited patience with evaluation because they are almost always over-extended, almost exclusively forward-looking, and opportunity-driven. This paper presents a simple tool for helping advocates and campaigners walk through their work, making explicit what is often (but not always) a well-developed implicit theory of change. When applied, it can help advocates and campaigners make more strategic choices in responding to opportunities while keeping their “eyes on the prize” of achieving policy change and implementation that will have a significant impact on people's lives.
|
|
Developing a 'Community of Practice' in Advocacy Evaluation
|
| Presenter(s):
|
| Kristin Kaylor Richardson,
Western Michigan University,
kkayrich@comcast.net
|
| Abstract:
Advocacy and social activism are two related areas of practice that play a significant role within a number of disciplines, including social work, education, community psychology, health care, and public policy. Although there is a long history of evaluating social service programs and policies, evaluating advocacy and activist approaches to creating policy change is a relatively undeveloped field of practice. The purpose of this paper is to address this gap through a critical examination of the unique aspects of advocacy and social activism, offering considerations for developing a comprehensive evaluation approach within this emergent field of practice. Three related topics are explored: (1) reasons why advocacy and activism deserve greater recognition as legitimate areas of focus within the broader area of policy evaluation; (2) a review and critique of historical and contemporary advocacy evaluation models; and (3) recommendations for designing and conducting evaluations of multidisciplinary advocacy and activist efforts, activities, and initiatives.
|
| |
|
Session Title: Studies Dealing With Needs Assessment and Program Development: Focus on Domestic Violence Victims and Children of the Incarcerated
|
|
Multipaper Session 552 to be held in Washington Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Crime and Justice TIG
|
| Chair(s): |
| Roger Przybylski,
RKC Group,
rogerkp@comcast.net
|
|
Program Theory, Development, and External Influences: Assessing a New Permanent Housing Program for Domestic Violence Victims
|
| Presenter(s):
|
| Hilary Botein,
University of Connecticut,
hilary.botein@uconn.edu
|
| Andrea Hetling,
University of Connecticut,
andrea.hetling@uconn.edu
|
| Abstract:
Using a new housing program for domestic violence victims as a case study, this research seeks to understand how external and internal influences affect program development and administrators' abilities to realize their originally intended program theory. The study is the first to evaluate the change theory assumptions behind an untested housing program designed to serve domestic violence victims. We hypothesized that such influences, including funding, legal restrictions, and discrimination, compromise the initial conception of how the program will achieve its intended outcomes. Using Burawoy's extended case study method, we interviewed stakeholders and conducted focus groups with potential clients. Coding uncovered disconnects between program conception and current design and multiple instances where program theory was knowingly and intentionally compromised. Findings reveal how service programs are developed in complex environments with conflicting theories and viewpoints and offer guidelines for assessing program fidelity in formative evaluation settings.
|
|
Lessons Learned and Strategies That Worked From a Study on a Unique and Sensitive Population: Study of Children of Incarcerated Persons
|
| Presenter(s):
|
| Mariah Storey,
University of Wyoming,
riah@uwyo.edu
|
| Mark McNulty,
University of Wyoming,
mmcnulty@uwyo.edu
|
| Trisha Worley,
University of Wyoming,
tworley1@uwyo.edu
|
| Abstract:
Researchers at the Wyoming Survey & Analysis Center (WYSAC) conducted the Study of Children of Incarcerated Persons (SCIP) in an attempt to understand the specific health and well-being needs of Wyoming's children of the incarcerated. The project was funded by the Wyoming Department of Health, Mental Health & Substance Abuse Services Division. Children of the incarcerated are a unique, highly sensitive, highly mobile population that is extremely difficult to measure. Working with a number of state agencies and nonprofit organizations, SCIP employed a comprehensive mixed-method research design. The design included the use of mail surveys, face-to-face structured interviews, and focus groups. After determining the needs of these children, the SCIP team made concrete policy recommendations to policymakers, tailored to assist these children and the households in which they live. Researchers will present on the lessons learned and the successful strategies used in this study.
|
| |
|
Session Title: Using Democratic Evaluation Principles to Foster Citizen Engagement and Strengthen Neighborhoods in a Place-based Poverty Program
|
|
Expert Lecture Session 553 to be held in D'Alesandro Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Melanie Moore Kubo,
See Change Evaluation,
melanie@seechangeevaluation.com
|
| Presenter(s): |
| Arnold Love,
Independent Consultant,
ajlove1@attglobal.net
|
| Abstract:
This paper outlines how a community foundation applied a community development approach to evaluate a place-based poverty program in four neighborhoods. In this model, evaluation supported the democratic evaluation principles of inclusion, participation, dialogue, and action in several ways: by documenting the local issues and outcomes that were important to residents, by creating opportunities to deliberate together and practice direct democracy, by mobilizing partnerships and networks to generate solutions, and by identifying the current assets and additional resources that would create positive outcomes for challenged neighborhoods and their residents. The paper describes three key methods of the evaluation approach: a) deepening residents' understanding of the assets and strengths of their neighborhoods through participatory assets-mapping, b) evaluating the outcomes and achievements of neighborhood residents using photovoice techniques, and c) publicly validating and democratically deliberating on evaluation findings.
|
|
Session Title: A Method to Measure and Numerically Demonstrate the Effectiveness of a University's Planning and Evaluation Processes
|
|
Demonstration Session 554 to be held in Calhoun Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Presenter(s): |
| Kim Bender,
Colorado State University,
kkbender@provost.colostate.edu
|
| Abstract:
This method enables a university's central administrative unit to demonstrate numerically the effectiveness of each academic department's planning and evaluation processes. Using 12 indicators, such as research exploration range and depth, measuring frequency, improvement range and frequency, diagnostic capacity of planning, best practices generation, and program participation, a single planning and evaluation index score is computed. Departmental effectiveness thus becomes comparable without resorting to subjective anecdotal accounts. These measures are automatically embedded into the university's on-line program review self-studies, giving visibility to each department's proficiency in, and dedication to, regular self-evaluation for the purposes of improving student learning, faculty research, and faculty service or outreach.
|
|
Session Title: Making Data Accessible to Organizations, Communities, and the General Public: Designing an Interactive Graphing Website
|
|
Demonstration Session 555 to be held in McKeldon Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Integrating Technology Into Evaluation
|
| Presenter(s): |
| Shannon Williams,
University of Wyoming,
swilli42@uwyo.edu
|
| Eric Canen,
University of Wyoming,
ecanen@uwyo.edu
|
| Laura Feldman,
University of Wyoming,
lfeldman@uwyo.edu
|
| Abstract:
This session will demonstrate the advantages of web-based interactive graphing. The presenters will explain two publicly available interactive graphing websites created by the Wyoming Survey & Analysis Center (WYSAC). WYSAC developed these interactive graphing websites to give users greater flexibility in selecting, graphing, and presenting data. The websites allow users to quickly choose data and compare it across varying demographics, survey years, and/or data sources. The websites permit users to access data more directly without having to reference a written report. The presenters will discuss the uses of the websites, the advantages of using an interactive system, and the limitations that result from using an interactive website.
|
|
Session Title: Straight Talk: Threats to Validity Caused by Heteronormative Bias in Opinion Polls
|
|
Think Tank Session 556 to be held in Preston Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
|
| Presenter(s):
|
| Che Tabisola,
Human Rights Campaign,
che.tabisola@hrc.org
|
| Abstract:
Opinion polls play a powerful role in the campaign for gay, lesbian, bisexual, and transgender equality. But what goes almost unrecognized is a deep-seated bias against GLBT Americans within the language of these polls. The subtext of many of these survey questions, even those cited by equality advocates, is tainted by a difficult-to-identify sexual prejudice. This language can play to the misconception that the poll's subjects are abnormal or, worse, suggest that they are perverse.
This Think Tank opens with a brief introduction to heteronormativity theory and then examines select questions from polls by the Gallup Organization, the Human Rights Campaign, and NBC/USA Network. After reviewing these together, participants will form small groups to examine the surveys for further instances of sexual prejudice and then identify ways to address the threats to research validity caused by heteronormative bias.
|
|
Session Title: The Role of the Leadership Recruitment Task Force to Foster Organizational Learning Within the American Evaluation Association
|
|
Think Tank Session 557 to be held in Schaefer Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s):
|
| Stanley Capela,
HeartShare Human Services,
stan.capela@heartshare.org
|
| Discussant(s):
|
| Rachel Hickson,
Montgomery County Public Schools,
rhickson731@yahoo.com
|
| Nicole Bowman,
Bowman Performance Consulting LLC,
nbowman@nbowmanconsulting.com
|
| Henry Frierson Jr,
University of North Carolina, Chapel Hill,
ht_frierson@unc.edu
|
| Abstract:
A key ingredient in an organization's success is its leadership. As AEA grows to meet changing needs in the evaluation field, it becomes important to identify and develop future leaders. In the past, individuals have hesitated to express an interest in volunteering their time, whether by serving on committees or running for the Board, because of various pressures in their work settings and a lack of knowledge about how to gain visibility in a large organization.
The AEA Leadership Recruitment Task Force has been charged with helping to identify individuals who may be worthy candidates to run for the Board, as well as future leaders who would make excellent volunteers for committees. In addition, the group provides coaching and technical assistance to members who express an interest in running for the Board or in getting more involved by volunteering for AEA activities such as committees.
|
| Roundtable:
Evaluation and Learning: Accomplishing Both Through the Conduct of a Needs Assessment |
|
Roundtable Presentation 559 to be held in Federal Hill Suite on Friday, November 9, 10:20 AM to 11:05 AM
|
| Presenter(s):
|
| Katye Perry,
Oklahoma State University,
katye.perry@okstate.edu
|
| Mwarumba Mwavita,
Oklahoma State University,
mwavita@okstate.edu
|
| Chin-Huey Lee,
Oklahoma State University,
chin.lee@okstate.edu
|
| Tammi Mitchell,
Oklahoma State University,
tjenx@aol.com
|
| Donell Barnett,
Oklahoma State University,
donell.barnett@okstate.edu
|
| Abstract:
A team of evaluators agreed to conduct a needs assessment for a local counseling agency. The impetus behind this agreement was twofold: first, to fulfill the request from the agency; and second, to provide an opportunity for three doctoral students to participate in an actual evaluation project. What emerged was a tripartite cooperative learning experience with unexpected shifts in the roles of “teacher” and “learner.” In keeping with the conference theme, insights emerged that answered the following questions: What does it mean to learn from and about evaluation processes and outcomes? Who else is involved in the process of learning from evaluation in different contexts, and how? And what kinds of evaluation designs and approaches maximize which kinds of learning from and about evaluation? The answer to each question has implications for the teaching of evaluation.
|
|
Session Title: Assessing Appropriate Outcomes: Measurement Issues in Human Services Evaluation
|
|
Multipaper Session 560 to be held in Royale Board Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Human Services Evaluation TIG
|
| Chair(s): |
| Tracy Greever-Rice,
University of Missouri, Columbia,
greeverricet@umsystem.edu
|
| Discussant(s): |
| William Cabin,
Youth Consultation Service,
williamcabin@yahoo.com
|
|
Evaluating Individually-tailored Services: A Proposed Strategy
|
| Presenter(s):
|
| Roger Boothroyd,
University of South Florida,
boothroy@fmhi.usf.edu
|
| Steven Banks,
University of South Florida,
tbosteve@aol.com
|
| Abstract:
Evaluators often encounter and report on the problems associated with heterogeneity among clients needing services, the services provided in response to their needs, and the potential outcomes resulting from these services (Goldenberg, 1978; Gordon, Powell, & Rockwood, 1999). A common response of evaluators is to identify and use multiple measures that span the range of variability in clients, services, and outcomes. Multivariate analyses or latent class models are often performed on the full set of outcome measures. The primary problem with this strategy is that many recipients are assessed on outcomes that their service plans were never designed to influence. In this presentation we will describe several approaches that have been developed and used to deal with this issue. Additionally, we will describe a procedure we developed, the maximum individualized change score method, summarize a simulation supporting the value of its use, and highlight the contexts in which this method has strengths over other frequently used approaches.
|
|
Evaluating Programs to Reduce Child Abuse and Maltreatment: The Abilene Replication of the Family Connections Program
|
| Presenter(s):
|
| Darryl Jinkerson,
Abilene Christian University,
darryl.jinkerson@coba.acu.edu
|
| David Cory,
New Horizons Family Connections,
dcory@sbcglobal.net
|
| Abstract:
Family Connections is a community-based project created to reduce the incidence of child abuse and maltreatment by providing services that address risk factors and link families to multiple service agencies and community organizations. Services provided help families acknowledge their areas of weakness, embrace their strengths, and empower them to act on their own behalf to improve their situations.
The current intervention is a replication of an earlier study, but the evaluation approach is unique to this situation. The approach uses a variety of qualitative and quantitative data collection methods and includes a five-month baseline period. The evaluation design is based on a “dashboard” containing 12 outputs, 8 short-term goals (7 of which are mandated by state contractual requirements), and 6 long-term goals (all of which are mandated by state contractual requirements). The evaluation dashboard is supported by a logic model that displays the program inputs and processes.
|
| |
|
Session Title: Smashing the Mental Health Atom: A Conceptual Framework to Properly Evaluate System, Service, and Clinical Practice
|
|
Demonstration Session 561 to be held in Royale Conference Foyer on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
|
| Presenter(s): |
| Christopher Cameron,
Calgary Health Region,
christopher.cameron@calgaryhealthregion.ca
|
| Brian Marriott,
Calgary Health Region,
brian.marriott@calgaryhealthregion.ca
|
| Abstract:
Evaluating mental health services funded by government organizations presents numerous challenges. Many of these challenges are caused by the difficulties that many stakeholders within these services have in determining and articulating their evaluation needs. The primary consequence of this lack of clarity from an evaluation standpoint is that a significant portion of project design time is devoted to activities intended to clarify stakeholder evaluation needs. The presenters have devised a straightforward conceptual framework to help stakeholders achieve this clarity so that the evaluation questions devised, and the methods of investigation used, consistently generate information that can be used to make a positive impact on mental health service provision. The fundamental strength of this conceptual framework is that it allows stakeholders and evaluators to distinguish between considerations related to (a) system issues, (b) service delivery issues, and (c) clinical practice issues. This conceptual framework is presented and suggestions for application are made.
|
|
Session Title: College Success Programs: Evaluating Undergraduate and Graduate Interventions
|
|
Multipaper Session 562 to be held in Hanover Suite B on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the College Access Programs TIG
|
| Chair(s): |
| Kurt Burkum,
National Council for Community and Education Partnerships,
kurt_burkum@edpartnerships.org
|
|
Lessons Learned in our own Backyard: Evaluation in a University Setting
|
| Presenter(s):
|
| Cidhinnia M Torres Campos,
Crafton Hills College,
cidhinnia@yahoo.com
|
| Beatriz Ornelas,
California State University, Los Angeles,
ornelasbeatriz@yahoo.com
|
| Abstract:
Few reports have distilled the implications of the increasing efforts to evaluate college intervention programs. This paper outlines lessons drawn from the experience of implementers and evaluators of a targeted intervention for Latino freshmen. For two years, program staff, faculty, and researchers have collaborated to evaluate and learn from the results of an urban commuter campus program, which screened over a thousand students and aimed to identify high-risk students and increase their academic success. The presentation highlights how evaluation efforts contribute to creating learning communities. Eight lessons reflect several themes: the timing and function of evaluation, the value of feedback and its timing, the role of intervention research, the importance of open communication of findings, and the uses of evaluation results by program and university staff. This presentation offers practical advice on implementing and effectively evaluating college-level interventions. It will also examine the process of learning from program evaluation in a university context.
|
|
Combating the Decline: A Report on Attraction, Retention and Learning Evaluation Data From Higher Education Computing Science Classrooms Using Emerging Technologies
|
| Presenter(s):
|
| Jamie Cromack,
Microsoft Research, External Research and Programs,
jamiecr@microsoft.com
|
| Abstract:
By the year 2014, employment in computer-related industries is expected to grow between 28 and 68 percent, but an alarming decline in the number of incoming freshmen choosing to major in computing science (CS) and computer engineering (CE) bodes ill for American graduates. The challenges of attraction, retention, and learning in CS courses can be addressed by using certain cutting-edge technologies and approaches to draw students into CS and CS-related classrooms, keep them there, and improve their learning. Evaluation data from a growing body of research on these innovative CS and CS-related courses show strong potential, but more evidence is needed to support this assertion. This report analyzes over 30 research studies from CS and CS-related courses that used advanced technologies as pedagogical tools, describes the mixed-method evaluation approaches used, identifies their suitability for the setting, and makes recommendations for further research.
|
| |
|
Session Title: Building Capacity for Planning, Monitoring, Evaluating, and Learning among Conservation Leaders
|
|
Multipaper Session 563 to be held in Baltimore Theater on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Presidential Strand
and the Environmental Program Evaluation TIG
|
| Chair(s): |
| Vinaya Swaminathan,
Foundations of Success,
vinaya@fosonline.org
|
| Abstract:
Historically, the biodiversity conservation community has placed limited emphasis on program evaluation and thus has been unable to provide evidence of the effectiveness of its actions or to learn from its experiences. Recently, however, there has been growing interest in program evaluation and an explicit desire to use monitoring and evaluation to learn about, adapt, and improve conservation actions. The session presenters have been directly involved in helping to generate the capacity to do just that among conservation leaders worldwide. This session will highlight our experiences with two different audiences: on-the-ground conservation managers and university students (a group we term "tomorrow's leaders"). We see these as two key audiences for promoting a learning culture that encourages critical examination of successes and failures to uncover the reasons behind them. This session will focus on the process we have used and the achievements and challenges of these two groups.
|
|
Developing Monitoring, Evaluation, and Programmatic Learning Skills in Conservation Project Managers: How the Worldwide Fund for Nature (WWF) is Institutionalizing Adaptive Management
|
| Caroline Stem,
Foundations of Success,
caroline@fosonline.org
|
| Marcia Brown,
Foundations of Success,
marcia@fosonline.org
|
| Guillermo Placci,
Foundations of Success,
guillermo@fosonline.org
|
| Richard Margoluis,
Foundations of Success,
richard@fosonline.org
|
| Nick Salafsky,
Foundations of Success,
nick@fosonline.org
|
| Vinaya Swaminathan,
Foundations of Success,
vinaya@fosonline.org
|
|
In 2004, the Worldwide Fund for Nature/WWF developed a set of program management standards designed to encourage systematic planning, implementation, monitoring, and evaluation of its conservation efforts. The standards advocate a process that helps teams systematically design their projects and develop useful monitoring plans that provide the information they need to learn about and improve their conservation actions. Foundations of Success (FOS) - a nonprofit organization dedicated to improving the practice of conservation - has played an instrumental role in helping develop and roll out these standards. FOS has helped build WWF capacity by conducting workshops, facilitating online learning courses, and providing face-to-face and remote technical assistance and follow-up. This session will highlight FOS's experience working with WWF teams, the methodology used, and some of the main achievements and challenges FOS and WWF have encountered.
|
|
Adaptive Management Training at the University of Maryland: Teaching Planning, Monitoring, and Evaluation Skills to Tomorrow's Leaders in Conservation.
|
| Vinaya Swaminathan,
Foundations of Success,
vinaya@fosonline.org
|
| Fabiano Godoy,
Bushmeat Crisis Taskforce,
fgodoy@conservation.org
|
| Sara Zeigler,
University of Maryland,
szeigler@umd.edu
|
| Marcia Brown,
Foundations of Success,
marcia@fosonline.org
|
| Nick Salafsky,
Foundations of Success,
nick@fosonline.org
|
| Richard Margoluis,
Foundations of Success,
richard@fosonline.org
|
| Guillermo Placci,
Foundations of Success,
guillermo@fosonline.org
|
| Caroline Stem,
Foundations of Success,
caroline@fosonline.org
|
|
At the request of the Conservation Measures Partnership (CMP), students in the Sustainable Development and Conservation Biology (CONS) Master's program at the University of Maryland (UMD) developed a graduate-level course in adaptive management. The course is based on the CMP Open Standards for the Practice of Conservation and teaches the theory and skills necessary for systematic project planning and effective monitoring and evaluation. Foundations of Success (FOS) played a large role in developing the course as part of our strategy to reach tomorrow's leaders in conservation. UMD and FOS jointly offered the course to CONS students in Spring 2007. Enrolled students heard FOS presentations on strategic planning topics and experienced the practical side of project management by working in teams with practitioners and FOS facilitators to develop management plan outlines for actual conservation projects. This presentation will highlight the structure of the course and lessons learned from its first iteration.
|
|
Session Title: Innovative Techniques to Assess Learning in Child Welfare Workers' Training
|
|
Multipaper Session 564 to be held in International Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Human Services Evaluation TIG
|
| Chair(s): |
| Elizabeth Hayden,
Northeastern University,
hayden.e@neu.edu
|
| Discussant(s): |
| Henry Ilian,
New York City Administration for Children's Services,
henry.ilian@dfa.state.ny.us
|
|
Using Knowledge Assessments to Promote Learning and Assess Child Welfare Workers' Competencies
|
| Presenter(s):
|
| Jennifer Hicks,
University of Tennessee, Knoxville,
hicksj@sworps.utk.edu
|
| Chris Hadjiharalambous,
University of Tennessee, Knoxville,
sissie@utk.edu
|
| Abstract:
In 2004 the Tennessee Department of Children's Services (TDCS) adopted a new best-practice model of child welfare to ensure safety, permanence, and well-being for children in care. A competency-based training curriculum was developed based on these outcomes. New caseworkers must attend this training and complete “certification” before being assigned a caseload. “Certification” involves both knowledge and skills assessments. The proposed presentation highlights the written assessment developed for measuring workers' knowledge and critical thinking skills. Topics addressed include the process used to develop and validate assessment content, a demonstration of the item bank used to develop multiple exam versions, and work on the development of “cut-scores” for distinguishing between masters and non-masters. Special emphasis is placed on a discussion of how knowledge assessments can be used both during and at the end of training to provide ongoing feedback to the learner, hold trainers accountable for instruction, and promote a professional workforce.
|
|
Using Competency Assessments in Evaluating Pre-service Training for Child Welfare Workers
|
| Presenter(s):
|
| Gail Myers,
University of Tennessee, Knoxville,
myersg@sworps.utk.edu
|
| Charlotte Sorensen,
University of Tennessee, Knoxville,
sorensenc@sworps.utk.edu
|
| Chris Hadjiharalambous,
University of Tennessee, Knoxville,
sissie@utk.edu
|
| Abstract:
While knowledge-based measures are common in the evaluation of training courses, competency-based skills assessments have not been as widely used. The Tennessee Department of Children's Services elected to implement a “skill-based competency test” as part of its certification process for newly hired workers. The University of Tennessee, Social Work Office of Research and Public Service was charged with developing an instrument and process for assessing workers' skills or competencies. In this session, evaluators will present information on the context for the development of these assessments, the process for selecting the critical competencies to be measured, the design of the competency assessment instruments, and the preparation of assessors for conducting the assessments. In addition, evaluators will present findings from a review of the assessments to highlight implementation issues and discuss the implications of using competency assessments to judge the effectiveness of training and education programs.
|
| |
|
Session Title: Using NVIVO 7 in Conducting Evaluation Research
|
|
Demonstration Session 565 to be held in Chesapeake Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the Qualitative Methods TIG
|
| Presenter(s): |
| Shelly Mahon,
University of Wisconsin, Madison,
mdmahon@wisc.edu
|
| Abstract:
The primary purpose of this session is to provide information on using NVIVO 7 when conducting evaluation research. The presenter will give a brief review of its basic functions, followed by a variety of examples that illustrate how NVIVO can be used to conduct literature reviews, compile and analyze field notes and observations, and present results to stakeholders and peer reviewed journals. The examples involve information on adolescent peer relationships, youth participation in extracurricular activities, and an evaluation of smoke-free capacity building efforts. This session is intended to introduce evaluators to the different facets of NVIVO, as well as its usefulness in conducting evaluation research and disseminating results.
|
|
Session Title: Performance Measurement and Evaluation: A Distinction With a Difference
|
|
Panel Session 566 to be held in Versailles Room on Friday, November 9, 10:20 AM to 11:05 AM
|
|
Sponsored by the AEA Conference Committee
|
| Chair(s): |
| Thomas Chapel,
Centers for Disease Control and Prevention,
tchapel@cdc.gov
|
| Discussant(s):
|
| Michael Schooley,
Centers for Disease Control and Prevention,
mschooley@cdc.gov
|
| Abstract:
Increasing emphasis on accountability forces high-level decision-makers to think about program performance in a disciplined way. Often in organizations, planners, budgeters, evaluators, and performance monitors work in isolation from one another and use approaches and terms so differently that opportunities to meld their insights into a common approach to improving the organization are missed. And the pressure to develop and use performance measures means these efforts can trump, rather than complement, program evaluation efforts. This panel presents three perspectives on the challenges of developing, and especially interpreting, performance measures for public health programs, and on the appropriate, mutually supportive roles of program evaluation and performance measurement in public health organizations. The payoffs of an appropriate relationship between performance measurement and evaluation, and the peril of allowing one to supersede the other, will be presented.
|
|
Thinking in an Integrated Way About Performance Measurement and Evaluation
|
| Thomas Chapel,
Centers for Disease Control and Prevention,
tchapel@cdc.gov
|
|
The performance measurement process is too often regarded as something entirely new, or at least different from other modes of reflection (e.g., program evaluation and strategic planning).
Performance measurement should be considered part of the iterative cycle of program planning and evaluation. And evaluators should consider, as a major constituency for evaluation results, persons who are charged with meeting performance measurement mandates. Making this link is a win-win for both performance measurement and program evaluation. This presentation will use the CDC Framework for Program Evaluation in Public Health as a process for integrating program evaluation and performance measurement. The session will show how both processes draw on the early framework steps of stakeholder engagement and program description, and how each can be viewed as a unique and appropriate response to the challenge of setting an evaluation focus for a program.
|
|
|
Federal-level Performance Measurement: Challenges in Public Health
|
| Amy DeGroff,
Centers for Disease Control and Prevention,
asd1@cdc.gov
|
| Michael Schooley,
Centers for Disease Control and Prevention,
mschooley@cdc.gov
|
| Goldie MacDonald,
Centers for Disease Control and Prevention,
gmacdonald@cdc.gov
|
| Thomas Chapel,
Centers for Disease Control and Prevention,
tchapel@cdc.gov
|
|
Over the past fifteen years or more, performance measurement has gained significant attention as part of 'results-based' government. Policies such as the Government Performance and Results Act (GPRA) and the Bush Administration's Program Assessment Rating Tool (PART) have institutionalized the practice of performance measurement. At the Centers for Disease Control and Prevention (CDC), performance measurement systems have been developed by many public health programs in order to improve accountability, monitor program implementation, and support program improvement. However, there are some important challenges to developing and implementing performance measurement systems at the federal level that must be considered. These include the decentralized nature of public health programs, the complexity of public health problems, and measurement challenges. All three contribute to a more fundamental difficulty: attributing results to particular public health program efforts. Each of these challenges will be described and strategies to address them presented.
| |