|
Session Title: Learning From Research on Evaluation Practices and Theories
|
|
Panel Session 442 to be held in International Ballroom A on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Presidential Strand
and the Theories of Evaluation TIG
|
| Chair(s): |
| Jody Fitzpatrick,
University of Colorado, Denver,
jody.fitzpatrick@cudenver.edu
|
| Discussant(s):
|
| Melvin Mark,
Pennsylvania State University,
m5m@psu.edu
|
| Marvin Alkin,
University of California at Los Angeles,
alkin@gseis.ucla.edu
|
| Abstract:
For many years, evaluators have proposed various theories and models for practice. These models are intended to offer principles, rationales, and organization for the procedural choices evaluators make and to orient practitioners to the issues and problems with which they must deal. Until recently, there has been little research on the use of these models and how they relate to evaluators' practice. This session will bring together two strands of research on evaluation: broader-based studies and more focused case studies. The presenters will discuss current empirical studies of evaluation and contrast what each has learned about evaluation practice and its connection to theory. Pathways to learning about evaluation practice will be addressed, e.g., the influence of context and organization on practice, categorization systems, and advance organizers. The impact of evaluation issues, such as stakeholders' and evaluators' positions, on the choices evaluators make will also be discussed.
|
|
Conducting Research on Evaluation: Necessary, Challenging, and Insightful
|
| Christina Christie,
Claremont Graduate University,
tina.christie@cgu.edu
|
|
The development of good evaluation theory has consequences for the discipline, the profession, and the practitioner. With the demand for evaluation increasing steadily, the field needs to establish firm academic grounding so as to supply a continuous stream of well-trained practitioners and scholars of evaluation, as well as more advanced understandings of what it means to conduct evaluation. This can be accomplished by shifting from theories based on discourse and experience to theories developed from empirical study. Theory development in other fields can be conducted in labs with college students; this is not the case in evaluation. I will discuss the various ways in which we might enhance the current literature base of research on evaluation, as well as the challenges specific to conducting such research. As an example of this work, findings from a recent study on the relationship between evaluators' training and practice will be presented.
|
|
|
Examining Theories of Evaluation in Practice through Case Studies
|
| Jody Fitzpatrick,
University of Colorado, Denver,
jody.fitzpatrick@cudenver.edu
|
|
Just as evaluators make use of mixed methods to study programs, our understanding of evaluation theories in practice is greatly enhanced by using a variety of methods. My interviews with evaluators operating in different settings and in different roles provide insight into how exemplars, often ones who espouse a particular theory, apply that theory to an individual study. The case studies can be examined individually to learn more about how a theorist, when serving as a practitioner, applies or adapts his or her theory. They can also be examined across cases to identify areas in which these evaluators are rather similar in practice, in spite of quite different settings, and areas where they differ, sometimes rather dramatically. I will discuss the findings of the interviews, focusing in particular on advance organizers, stakeholder involvement, and methodology, to illustrate what we can learn about theory in practice.
| |
|
Session Title: PreK-12 Educational Evaluation TIG Business Meeting
|
|
Business Meeting Session 443 to be held in International Ballroom B on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| TIG Leader(s): |
|
Alison Williams,
Clark County School District,
alisonw@interact.ccsd.net
|
|
James Van Haneghan,
University of South Alabama,
jvanhane@usouthal.edu
|
|
Linda Channell,
Jackson State University,
drlinda@bellsouth.net
|
|
Anane Olatunji,
George Washington University,
dr_o@gwu.edu
|
|
Tom McKlin,
Georgia Institute of Technology,
tom.mcklin@gatech.edu
|
|
Session Title: Research, Technology, and Development Evaluation TIG Business Meeting
|
|
Business Meeting Session 446 to be held in International Ballroom E on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| TIG Leader(s): |
|
Gretchen Jordan,
Sandia National Laboratories,
gbjorda@sandia.gov
|
|
George Teather,
Independent Consultant,
gteather@sympatico.ca
|
|
Brian Zuckerman,
Science and Technology Policy Institute,
bzuckerm@ida.org
|
|
Session Title: The Corruption of Public Evaluation: And What Should We Do About It, Collectively or Individually?
|
|
Think Tank Session 447 to be held in Liberty Ballroom Section A on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the AEA Conference Committee
|
| Discussant(s):
|
| Michael Scriven,
Western Michigan University,
scriven@aol.com
|
| Ernie House,
University of Colorado,
ernie.house@colorado.edu
|
| Abstract:
The AEA collectively is working to upgrade the quality and image of evaluation, e.g., via standards, guidelines, and checklists. At the same time, cashing in to some extent on the image we are improving, various public and commercial interests are undermining the credibility of evaluation by corrupting good procedures (the FDA example, discussed last year by Ernie House), failing to bolster weak procedures (Consumers Union's scandalous misreporting on child seats), appealing to bogus evaluation organizations (Consumers Digest), or using misleading evaluation labels (Microsoft's bogus use of 'beta-testing' or the World Bank's claim of 'external' evaluation); other examples include the Department of Justice (claiming incompetence in firing US attorneys) and the Department of Education (RCT imperialism). There are many examples, some of which we will discuss in more detail. Should AEA be doing more to counteract this degradation of our profession, e.g., by publicizing minimum standards, as the AMA does in medical matters? Does the ABA have a better model? After short introductions by the two speakers, we will open the floor for discussion and consider recommendations.
|
|
Session Title: A Practitioner's Guide to Program Theory-driven Evaluation
|
|
Expert Lecture Session 448 to be held in Liberty Ballroom Section B on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Program Theory and Theory-driven Evaluation TIG
|
| Chair(s): |
| Katrina Bledsoe,
The College of New Jersey,
katrina.bledsoe@gmail.com
|
| Presenter(s): |
| Stewart I Donaldson,
Claremont Graduate University,
stewart.donaldson@cgu.edu
|
| Abstract:
The purpose of this presentation will be to provide a state-of-the-art treatment of the practice of program theory-driven evaluation science. This will be accomplished by highlighting some of the main findings from my new book Program Theory-Driven Evaluation Science: Strategies and Applications (2007). This work attempts to fill a serious void in the extant literature, namely a lack of detailed examples of program theory-driven evaluation science being implemented in "real world" settings. That is, instead of relying on abstract theory or hypothetical examples to discuss this evaluation approach, an in-depth description of the nuances and results from a series of "authentic program theory-driven evaluations" from recent evaluation practice will be presented. Challenges and lessons learned from the cases presented and the theory-driven evaluation literature more broadly will be discussed in some detail.
|
|
Session Title: Strategic Design, Measurement, and Accountability in Environmental Program Evaluations
|
|
Multipaper Session 450 to be held in Edgar Allen Poe Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Environmental Program Evaluation TIG
|
| Chair(s): |
| Katherine Dawes,
United States Environmental Protection Agency,
dawes.katherine@epa.gov
|
|
Environmental Protection Agency's (EPA) Toxics Release Inventory: A Government Accountability Office (GAO) Evaluation of its Uses for Environmental Information and Recent Reporting Changes
|
| Presenter(s):
|
| Terry Horner,
United States Government Accountability Office,
hornert@gao.gov
|
| Karen Febey,
United States Government Accountability Office,
febeyk@gao.gov
|
| Mark Braza,
United States Government Accountability Office,
brazam@gao.gov
|
| Abstract:
The Government Accountability Office (GAO) evaluated stakeholder uses of and policy changes to EPA's Toxics Release Inventory (TRI) program, an environmental community right-to-know program. Central to GAO's evaluation was the impact of changes to TRI that will now allow companies to release four times as many toxic chemicals before they must report those releases to the public. GAO's evaluation incorporated a mixed-methods approach—involving a nationwide survey, quantitative analyses of the TRI database, and interviews—to address central evaluation questions. Preliminary findings indicate that the TRI is used for diverse purposes at the local, state, and federal levels and that EPA's changes will significantly decrease the amount of chemical release information available to the public. This presentation will also detail how GAO's evaluations can influence executive agencies and Congressional legislation.
|
|
Process- and Model-based Approaches to the Strategic Design and Evaluation of Performance Measurement Systems
|
| Presenter(s):
|
| William Michaud,
SRA International Inc,
bill_michaud@sra.com
|
| Abstract:
Program performance measures can help an organization achieve its goals. When properly designed and applied, performance measures provide incentives that help align actions with organizational goals and provide actionable information to support budget and resource decisions. Most performance measures, however, have evolved from the ground up – from the perspective of what individual programs are doing rather than what the organization wants to achieve. Re-evaluating performance measures from a strategic perspective can help ensure that the measures provide the right incentives and useful information. This paper will explore two approaches to the strategic evaluation of performance measures. The first uses a process-oriented approach adopted by the OECD for safety performance indicators. The second applies a contextual model, the DPSIR model, to organize and strategically evaluate environmental performance measures. Both approaches emphasize the view that individual performance measures work best when viewed as elements of a comprehensive system of measures.
|
| |
|
Session Title: Health Evaluation TIG Business Meeting and Presentation: Implementing Evidence-based Programs: A Six-step Protocol for Assuring Replication With Fidelity
|
|
Business Meeting Session 452 to be held in Pratt Room, Section A on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Health Evaluation TIG
|
| TIG Leader(s): |
|
Christel A Woodward,
McMaster University,
woodward@mcmaster.ca
|
|
Ann Zukoski,
Oregon State University,
ann.zukoski@oregonstate.edu
|
|
Robert LaChausse,
California State University, San Bernardino,
rlachaus@csusb.edu
|
|
Eunice Rodriguez,
Stanford University,
er23@stanford.edu
|
| Chair(s): |
|
Ann Zukoski,
Oregon State University,
ann.zukoski@oregonstate.edu
|
| Presenter(s): |
| Kathryn L Braun,
University of Hawaii,
kbraun@hawaii.edu
|
| Michiyo Tomioka,
University of Hawaii,
mtomioka@hawaii.edu
|
| Shirley Kidani,
Executive Office on Aging,
shirley.kidani@doh.hawaii.gov
|
| Abstract:
In replicating evidence-based programs, the evaluator must establish and facilitate processes to assure that the program is replicated with fidelity. The purpose of this paper is to describe the fidelity-assurance processes established for Hawaii's Healthy Aging Program as it begins to offer Enhanced Fitness and the Chronic Disease Self Management Program to Hawai'i seniors. Processes include: 1) the deconstruction of each program using a tracking-changes tool; 2) a step-by-step plan for program replication, including specific strategies to protect fidelity of study design and delivery; 3) review of this plan with the parent program; 4) excellent training of the local staff who will deliver and coordinate the programs; 5) continuous monitoring by supervisory and evaluation staff using standardized checklists; and 6) regular review of checklists and data collection forms to identify areas for improvement. Processes and forms will be shared with participants, as they could be transferable to other program replication efforts.
|
|
Session Title: Human Services Evaluation TIG Business Meeting
|
|
Business Meeting Session 453 to be held in Pratt Room, Section B on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Human Services Evaluation TIG
|
| TIG Leader(s): |
|
Michel Lahti,
University of Southern Maine,
mlahti@usm.maine.edu
|
|
Ann Tvrdik,
Region III Behavioral Health Services,
atvrdik@region3.net
|
|
James Sass,
LA's BEST After School Enrichment Program,
jim.sass@lausd.net
|
|
Tracy Greever-Rice,
University of Missouri, Columbia,
greeverricet@umsystem.edu
|
| Roundtable:
Strategies to Evaluate Learning in Project and Team-based Environments |
|
Roundtable Presentation 454 to be held in Douglas Boardroom on Thursday, November 8, 5:15 PM to 6:00 PM
|
| Presenter(s):
|
| Meghan Kennedy,
Neumont University,
meghan.kennedy@neumont.edu
|
| Jake Walkenhorst,
Neumont University,
jake.walkenhorst@neumont.edu
|
| Abstract:
Project and team-based learning environments are effective learning methods, but they present assessment challenges for instructors. It is difficult to reliably assess individual achievement on collaborative projects, and students often dislike team projects because one person ends up “doing all the work” while others “slide by.” What must be done to ensure that individual learning occurs and can be accurately assessed and evaluated?
In project and team-based learning environments, three items must be considered and implemented in an assessment strategy: (1) adequate support, (2) individual accountability, and (3) defined competencies. These items are vital to tracking individual learning in addition to completing effective projects. We must look critically at the different levels of support, accountability, and competencies that should be developed and used. By focusing on individual learning within a group setting, we explore an issue directly relevant to measuring personal and academic growth.
|
|
Session Title: Qualitative Methods TIG Business Meeting
|
|
Business Meeting Session 455 to be held in Hopkins Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Qualitative Methods TIG
|
| TIG Leader(s): |
|
Jennifer Jewiss,
University of Vermont,
jennifer.jewiss@uvm.edu
|
|
Leslie Goodyear,
Education Development Center Inc,
lgoodyear@edc.org
|
|
Eric Barela,
Los Angeles Unified School District,
eric.barela@lausd.net
|
|
Janet Usinger,
University of Nevada, Reno,
usingerj@unr.edu
|
|
Session Title: Learning Through Evaluation: Brazilian and Other International Development Experiences
|
|
Multipaper Session 456 to be held in Peale Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Elizabeth Harris,
EMT Associates Inc,
eharris@emt.org
|
|
Evaluating in At-Risk Community Environments: Learning From a Social Program Evaluation in a Brazilian Slum
|
| Presenter(s):
|
| Thereza Penna Firme,
Cesgranrio Foundation,
therezapf@uol.com.br
|
| Ana Carolina Letichevsky,
Cesgranrio Foundation,
anacarolina@cesgranrio.org.br
|
| Abstract:
This paper describes learning from a two-year evaluation of a social program in a Brazilian slum troubled by poverty, unemployment, and drug-related threats and violence, where self-protective attitudes have led inhabitants into behaviors of fear and silence. Evaluation experience in this environment taught us, above all, how not to conduct evaluations; our strategies made the most sense as alternatives to practices to avoid. “Objective” questioning, excluding or withholding full information from evaluees, and judgmental interactions rather than empathetic conversations were intimidating and detrimental to obtaining valid information. Trust building as the basis for data collection and data utilization was of utmost importance. Inclusion and empowerment ensured key community member involvement, stakeholder safety and integrity, and, importantly, data quality. An emphasis on appreciative inquiry facilitated utilization of findings for community betterment. The significant lesson that emerged astonished program staff, sponsors, evaluators, and evaluees alike: the meaning of evaluation to the community.
|
|
Learning Through Evaluation: The Case of International Development Interventions
|
| Presenter(s):
|
| Osvaldo Feinstein,
Spanish Evaluation Agency,
ofeinstein@yahoo.com
|
| Abstract:
This presentation discusses the role of evaluation as a learning tool, drawing on examples from evaluations of development interventions funded by international organizations; it points out that evaluation facilitates learning by doing and shows how this type of learning can be accomplished. To this end, evaluation criteria are presented and the attribution issue is considered in some detail. The paper concludes with a discussion of a set of constraints on the use of evaluation as a learning tool, along with practical proposals to deal with each of these constraints.
|
| |
|
Session Title: Evaluation Skills Beyond Technical Capacities
|
|
Expert Lecture Session 457 to be held in Adams Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Presenter(s): |
| Claire Tourmen,
Ecole Nationale d'Enseignement Supérieur Agronomique de Dijon,
klertourmen@yahoo.fr
|
| Abstract:
What does it mean to evaluate? Is evaluation an occupation, or even a profession? How is it practiced and learned, and how can it be taught? Our aim is to bring new answers to these questions. We have chosen to look at real evaluation activity through a doctoral thesis in the education sciences, studying the way evaluation is practiced and learned by beginners, and the way it is practiced and has been learned by experts. The use of models and methods from the psychology of work and learning highlights the specificities of evaluation. We will show how this new approach widens our knowledge of evaluation practice, learning, and teaching, and we will conclude by supplementing and discussing some well-known works in evaluation.
|
| Roundtable:
Lessons Learned From the Evaluation of Partnerships Between One Non-governmental Organization Within the European Union and Two Caribbean Organizations |
|
Roundtable Presentation 458 to be held in Jefferson Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
| Presenter(s):
|
| Lennise Baptiste,
Kent State University,
lbaptist@kent.edu
|
| Abstract:
Historical links arising from colonization have facilitated partnership ventures between agencies in the European Union and Caribbean organizations for special-interest groups, business, and education projects. The presenter will discuss the lessons learned from the evaluations of two such partnerships. Lessons about traversing unfamiliar terrain and culture emerged during project preparation, even though language did not seem to be a barrier and the Terms of Reference were comprehensive. A key finding was that routine tasks can become monumental when an evaluator is not in familiar territory. Though the data-gathering process may be physically and emotionally taxing, it is important to maintain fidelity to the qualitative methods of triangulation and member checking to validate the conclusions and recommendations outlined in the final report. These lessons can guide evaluators who wish to undertake projects that span more than one country.
|
|
Session Title: Crime and Justice TIG Business Meeting and Presentations
|
|
Business Meeting with Panel Session 459 to be held in Washington Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Crime and Justice TIG
|
| TIG Leader(s):
|
|
Roger Przybylski,
RKC Group,
rogerkp@comcast.net
|
|
Update on the Federal Budget and Federal Funding Streams for Criminal and Juvenile Justice-Related Research, Evaluation and Programming
|
| Roger Przybylski,
RKC Group,
rogerkp@comcast.net
|
|
Federal funding remains a significant source of support for criminal and juvenile justice-related research and evaluation. This presentation will provide audience members with the latest information available on the status of the Federal budget for Fiscal Year 2008. Appropriations for major justice-related funding streams such as the Byrne/Justice Assistance Grant (JAG), the Juvenile Accountability Block Grant (JABG), the Residential Substance Abuse Treatment (RSAT) program, Community-Oriented Policing Services (COPS), and proposed new initiatives such as the Violent Crime Reduction Partnership will be highlighted.
|
|
|
Session Title: Tools and Frameworks for Evaluating Social Change Philanthropy: A Case Study of an Evaluation of Responses by Women's Foundations to Hurricane Katrina
|
|
Demonstration Session 460 to be held in D'Alesandro Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Presenter(s): |
| Hanh Cao Yu,
Social Policy Research Associates,
hanh_cao_yu@spra.com
|
| Heather Lewis-Charp,
Social Policy Research Associates,
heather@spra.com
|
| Abstract:
A growing number of foundations are interested in how to evaluate their investments in social change that impact systems and policies. Beyond shifts in individuals' attitudes, knowledge, and behaviors, foundations and their donors are interested in how the grants they award create shifts in the public framing of issues, institutional practices, critical-mass engagement, and policies. Using the case example of Social Policy Research Associates' recently completed evaluation of the Ms. Foundation for Women, Inc. and other women's foundations' support of the Katrina Women's Response Fund, we highlight the framework, methods, and indicators we used to capture the results of the first phase of funding for rebuilding efforts in the Gulf Coast region driven by the leadership of women of color. The online tool, developed by the Women's Funding Network, is called "Making the Case: An Assessment Tool for Measuring Social Change."
|
|
Session Title: A Demonstration of the Use of Concept Mapping as an Evaluation Tool for the National Science Foundation's Integrative Graduate Education and Research Traineeships Program
|
|
Demonstration Session 461 to be held in Calhoun Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the College Access Programs TIG
|
| Presenter(s): |
| Jenny Bergeron,
University of Florida,
jennybr@ufl.edu
|
| Abstract:
This demonstration will introduce concept maps as a way to evaluate the National Science Foundation's (NSF) Integrative Graduate Education and Research Traineeships Program (IGERT). Through its support of interdisciplinary education programs in science, technology, engineering, and mathematics, the IGERT program's primary goal is to educate U.S. PhD scientists and engineers with the interdisciplinary backgrounds, deep knowledge in chosen disciplines, and technical, professional, and personal skills to become leaders in their own careers and creative agents of change. To date, many evaluations of science programs have relied primarily on tests and surveys to assess student outcomes. These instruments are somewhat limited in what they can measure, making it difficult to gauge students' knowledge in their chosen disciplines and their ability to integrate knowledge from different fields in a systematic way. As an assessment tool, concept maps may be better suited to uncovering knowledge integration. Concept maps, constructed by students, are graphical organizers consisting of nodes and labeled lines that represent the organization of thoughts, theories, or concepts in students' memories. This demonstration will cover the different types of concept maps used, their scoring procedures, the measurement issues surrounding them, and, finally, the application of data collected from an IGERT evaluation in adaptive management.
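As a rough illustration of the kind of scoring concept maps allow (not drawn from the session materials), the sketch below represents student and expert concept maps as labeled graphs and computes a simple proposition-count score; the maps, labels, and scoring rule are hypothetical and shown only to make the node-and-labeled-line structure concrete.

```python
# Minimal illustrative sketch: a concept map as a labeled directed graph,
# scored by the fraction of expert propositions (node-link-node triples)
# that also appear in the student map. Hypothetical data and scoring rule.
import networkx as nx

def proposition_score(student: nx.DiGraph, expert: nx.DiGraph) -> float:
    """Share of expert propositions present in the student map."""
    expert_props = {(u, v, d.get("label")) for u, v, d in expert.edges(data=True)}
    student_props = {(u, v, d.get("label")) for u, v, d in student.edges(data=True)}
    return len(expert_props & student_props) / len(expert_props)

expert = nx.DiGraph()
expert.add_edge("force", "acceleration", label="causes")
expert.add_edge("mass", "acceleration", label="moderates")

student = nx.DiGraph()
student.add_edge("force", "acceleration", label="causes")

print(proposition_score(student, expert))  # 0.5 for this toy example
```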
|
|
Session Title: Internal Evaluation Capacity Building Through Critical Friends and Communities of Practice
|
|
Multipaper Session 463 to be held in Preston Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Chair(s): |
| Valerie Janesick,
University of South Florida,
vjanesic@tempest.coedu.usf.edu
|
| Discussant(s): |
| Ellen Taylor-Powell,
University of Wisconsin,
ellen.taylor-powell@ces.uwex.edu
|
|
Developing a Collaborative Spirit: Learning Communities at Work
|
| Presenter(s):
|
| Candace Lacey,
Nova Southeastern University,
lacey@nova.edu
|
| Abstract:
In any project evaluation, the evaluator has the opportunity to become a critical part of the project team. Being accepted as a critical friend is an important part of transforming the evaluation process into a learning experience. This presentation will explore the role of the evaluator in the development, implementation, and evaluation of a four-year, federally funded grant. The presentation will address how a positive working collaboration can create a relationship between the evaluator and project staff that results in the development of a true learning community. The evolution of this partnership and its collaborative strategies will be traced through a discussion of the writing of the proposal, the selection of project staff, the choice of evaluation instruments, the preparation of federal reports, and the use of formative and summative evaluation findings.
|
|
Learning and Improving? Or Just Gathering Information?
|
| Presenter(s):
|
| Laura Silverstein,
New Futures,
lauras@newfutures.us
|
| Abstract:
New Futures, a small nonprofit, used an internal Self-Evaluation Team to learn how to learn from its evaluation. The organization, which provides services within low-income apartment complexes, has conducted both outcomes-based assessments and, last year, a quasi-experimental research study of its programs. These evaluation efforts satisfied external stakeholders and earned the organization a reputation for having a strong evaluation and an effective program. But staff members were not sufficiently confident in the results to use them to improve programs, so they built the infrastructure to ensure that the organization learns from its evaluation. They improved the current evaluation tools and added new ones to assess impact and help staff be intentional about their programming. They examined the processes of conducting the evaluation and analyzing the results. With very limited resources, the organization was able to move from gathering information to using it.
|
| |
|
Session Title: Assessing and Improving Evaluation Staff Skills
|
|
Panel Session 464 to be held in Schaefer Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Evaluation Managers and Supervisors TIG
|
| Chair(s): |
| Ann Maxwell,
United States Department of Health and Human Services,
ann.maxwell@oig.hhs.gov
|
| Abstract:
The standards for a good evaluation are well established. However, the soft skills that affect how an evaluator does the job are less clear. This session will look at how to assess, evaluate and improve staff performance.
|
|
Improving the Performance of Internal Evaluators in Local Government
|
| Sue Hewitt,
Health District of Northern Larimer County,
shewitt@healthdistrict.org
|
|
Evaluators at the local level work closely with the staff they are evaluating. Their ability to work with others, move projects forward, and complete useful evaluations is influenced by the environment in which they work.
|
|
|
Assessing and Working With Evaluators in the Federal Government
|
| Ann Maxwell,
United States Department of Health and Human Services,
ann.maxwell@oig.hhs.gov
|
|
The federal government has standard performance evaluation criteria. How does that fit with the skills needed to be an effective evaluator? How do you work with evaluation staff to improve their ability to be effective?
| |
|
Session Title: Using a Participatory Impact Assessment Approach to Measure the Effectiveness of Famine Relief and Increase Community Resiliency in Sub-Saharan Africa
|
|
Panel Session 465 to be held in Calvert Ballroom Salon B on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Disaster and Emergency Management Evaluation TIG
|
| Chair(s): |
| Carlisle Levine,
Catholic Relief Services,
clevine@crs.org
|
| Discussant(s):
|
| Peter Walker,
Feinstein International Center,
peter.walker@tufts.edu
|
| Abstract:
With mounting emergencies around the world, donors are interested in the effective use of limited resources to address increasing needs. This includes increased interest in finding feasible and credible ways to measure that effectiveness. The Bill and Melinda Gates Foundation and Tufts University's Feinstein International Center are testing a Participatory Impact Assessment (PIA) method's ability to help those responding to food security crises judge the effectiveness of their interventions.
As participants in this effort, Catholic Relief Services and Lutheran World Relief will share their experiences piloting the PIA method in Mali and Niger, respectively. Through these experiences, each is learning how using the PIA method can contribute to greater impact. Working in partnership, the participants and the NGOs achieve mutually satisfying outcomes while engaging in a process that creates opportunities for decision making and builds participant capacity and ownership. This panel responds to broad interest in participatory monitoring and evaluation methods and provides a unique example of how such an approach can be used in an emergency response context.
|
|
Using Participatory and Developmental Evaluation Methods to Contribute to Decreased Food Insecurity and Increased Tribal Peace in Niger: The Experience of Lutheran World Relief
|
| Heather Dolphin,
Lutheran World Relief,
hdolphin@lwr.org
|
| Jindra Cekan,
Jindra Cekan LLC,
jindracekan@yahoo.com
|
| Abdelah Mobrouk,
Lutheran World Relief,
abdelah_lwrniger@liptinfor.net
|
|
Increasing impact and long-term sustainability is the objective of every program. The question is how? LWR has been working in Niger alongside our partner, Contribution a l'Education de Base (CEB), taking both a deeply participatory and a developmental evaluation approach to enhance both impact and sustainability. Community recipients strongly informed program design, and their community leaders, including traditional and administrative leaders, have become the drivers of decision making related to project concerns. The often-antagonistic Tuareg and Fulani herders now meet regularly with settled Tuareg and Hausa farmer leaders to make decisions regarding project implementation and policy, to discuss conflicts over land use, and to find common ground. This presentation will discuss in greater detail how the use of these participatory and developmental evaluation methods is contributing to decreased food insecurity and increased tribal peace in the region.
|
|
|
Using a Participatory Impact Assessment Approach to Improve Household Resiliency to Food Security Shocks: Catholic Relief Services/Mali and the Douentza Circle in Crisis
|
| Moussa Sangare,
Catholic Relief Services,
mbsangare@crsmali.org
|
| Abderahamane Bamba,
Catholic Relief Services,
abamba@crsmali.org
|
|
In international relief and development, implementing agencies and program participants might define positive impact differently. Bringing together all stakeholders to define and measure impact can lead to a common understanding of intervention effectiveness, greater intervention relevance, and increased probability that positive results will be sustained.
In response to the locust invasion and drought in the region of Douentza, Mali, Catholic Relief Services/Mali and the International Crop Research Institute for the Semi-Arid Tropics (ICRISAT) partnered to improve the food security of vulnerable communities. To measure the effectiveness of the intervention, the project is using a Participatory Impact Assessment (PIA) approach, involving all stakeholders in developing and using performance indicators to monitor progress. Using PIA has increased community members' responsibility and investment in the project, and increased implementing agency understanding of meaningful impacts as perceived by community members.
| |
|
Session Title: Measuring Sexuality and Gender: Accurately Capturing Dimensions and Categories of Sexuality
|
|
Panel Session 470 to be held in Royale Board Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
|
| Chair(s): |
| A Cassandra Golding,
University of Rhode Island,
c_h_ride@hotmail.com
|
| Abstract:
Evaluators across disciplines are faced with accurately and effectively measuring sexuality and gender on most evaluation projects, yet finding rigorous and reliable measurement tools is a challenge. Scale items rarely differentiate between sex and gender, and evaluators often confuse sexual orientation with identity and behavior. The resulting measurement error may skew the data collected and advance faulty conclusions about sexuality and gender for the populations being studied.
This panel focuses on increasing evaluators' knowledge and awareness of gender and sexuality issues as they relate to measurement tasks within evaluation work. Audience members will increase their skill set around measurement issues and share suggestions of tested sexuality and gender scales to use in their daily evaluation practice. By revisiting measurement basics and examining theoretical concepts of sexuality and gender, this panel will augment evaluators' training with regard to practical knowledge about the intersection of measurement and sexuality, gender, and LGBTQ issues.
|
|
Building Blocks of Measurement with Lesbian, Gay, Bisexual, Transgendered and Questioning Populations
|
| A Cassandra Golding,
University of Rhode Island,
c_h_ride@hotmail.com
|
|
The lack of awareness and resulting confusion around sexuality, gender, and LGBTQ issues within the current encumbering socio-political climate have fostered an unquestioned paradigm of assumed psychometric rigor, archaic demographic categories, and a general lack of appreciation for the complexity of sexuality issues.
This presentation will revisit the building blocks of measurement psychometrics and explore the consequences of this unquestioned paradigm for evaluation work. Attendees will gain hands-on knowledge about these issues and walk away with practical tools, rigorous sexuality measures, the knowledge of how to assess a measure's rigor, a better understanding of the complexity of sexuality and gender in evaluation work, and strategies for resolving unique issues around sexuality concepts and measurement. In addition, the presenter will introduce her own scale, the Healthy Emotional Reliance Scale (HER's; Golding, 2006) for female couples, and use it as an integral example of the process of scale development, refinement, and use.
|
|
|
Reducing Error: Measuring Sexuality and Gender Issues in Everyday Evaluation Practice
|
| Kari Greene,
Oregon Public Health Division,
kari.greene@state.or.us
|
|
Academics and theorists have explored gender and sexuality to find that both are flexible social constructs that vary across location, culture and time. From Alfred Kinsey to Fritz Klein, researchers have explored the differences between sexual orientation, affection/desire, sexual behavior, and gender identity. After a century of research on sexuality and gender, however, few researchers agree on terminology, dimensions and categorical classifications of sexuality. Building on recent studies, presenters will examine the complex constructs of sexuality and gender and how they relate to evaluation practice.
This presentation will examine how to reliably measure and assess the concepts of sexuality and gender critical to your evaluation project. The presenter will review scientifically rigorous scales, items, and measures, and when to use those measures appropriately to address your evaluation questions. Discussion will focus on how to balance accuracy in measurement with the demands of everyday evaluation practice.
| |
|
Session Title: Alcohol, Drug Abuse, and Mental Health TIG Business Meeting and Roundtable: Soldiers Returning From Combat and Higher Learning Evaluation
|
|
Business Meeting Session 471 to be held in Royale Conference Foyer on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
|
| TIG Leader(s): |
|
Robert Hanson,
Health Canada,
robert_hanson@hc-sc.gc.ca
|
|
Garrett E Moran,
Westat,
garrettMoran@westat.com
|
| Presenter(s): |
| Maria Clark,
United States Army Command and General Staff College,
maria.clark1@conus.army.mil
|
| Abstract:
As the Global War on Terrorism rages on, thousands of soldiers will return from combat and re-enter daily living environments throughout the United States. Many will seek to continue their education through military and civilian institutions. Their unique experiences may impact those learning environments. This session seeks to provide information on Combat Stress Reaction and additionally to explore the experiences of other higher learning evaluators.
|
|
Session Title: When Clients Collect Evaluation Data: Promises and Pitfalls
|
|
Think Tank Session 472 to be held in Hanover Suite B on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Presenter(s):
|
| Andrea Beesley,
Mid-continent Research for Education and Learning,
abeesley@mcrel.org
|
| Sheila Arens,
Mid-continent Research for Education and Learning,
sarens@mcrel.org
|
| Discussant(s):
|
| Mary Piontek,
University of Michigan,
mpiontek@umich.edu
|
| Abstract:
Clients have a variety of reasons for wanting to engage in data collection activities, and it can benefit the evaluation in a number of ways. However, how far can the standard to "Involve clients and other stakeholders directly in designing and conducting the evaluation" (Standard U1, Joint Committee on Standards for Educational Evaluation, 1994) reasonably be taken by responsible evaluators when clients collect data? Variations among clients mean that some are better positioned to engage in effective and useful data collection than others. When clients take on more data collection than they can reasonably handle, an evaluation can fail. This Think Tank will interactively explore the promises and pitfalls of client data collection.
|
|
Session Title: Increasing Evaluation Capacity: Learning From Social Network Analysis, A Review in Evaluation
|
|
Expert Lecture Session 473 to be held in Baltimore Theater on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Presenter(s): |
| Maryann Durland,
Durland Consulting,
mdurland@durlandconsulting.com
|
| Abstract:
This expert lecture reviews the history of Social Network Analysis (SNA) in evaluation practice up to the present, illustrates its overall utilization in the field, and provides a comprehensive review of what we have learned from SNA, particularly with regard to collaboration, groups, teaming, organizational capacity, and communities of learners. The first use of SNA in evaluation practice was reported at AEA in 1996. Since then there has been a slow but growing interest in the methodology. The application of SNA is reaching a tipping point, as results from SNA, used alone or linked with traditional methods such as quantitative surveys, have informed projects in ways that connect directly to measuring implementation strategies and linking those strategies to successful outcomes. This review will be helpful for anyone interested in the method or in understanding its applications and implications. A bibliography will be included. Dr. Durland has been working with SNA in evaluation since 1991.
|
|
Session Title: Regression Discontinuity Design: Lessons Learned From a Real World Application
|
|
Demonstration Session 474 to be held in International Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Presenter(s): |
| Elizabeth Autio,
Northwest Regional Educational Laboratory,
autioe@nwrel.org
|
| Abstract:
In the many instances when purely experimental designs are not feasible, evaluators are increasingly encouraged to adopt quasi-experimental methods, such as the regression discontinuity design. While the methodology of regression discontinuity is described in several texts, this presentation connects research to practice by describing its application in the "real world" environment of evaluation. We explore lessons learned when applying the design to student assessment data from a large-scale education evaluation. Issues addressed will include: time and cost investment; the challenges of appropriate model specification, pseudo-effects, and ceiling and floor effects; wrestling with the possible violation of assumptions; and the overarching question of client utility. The presentation will invite a larger audience to share in what we have learned as well as to offer their own insights to the discussion.
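For readers unfamiliar with the design, a minimal illustrative sketch of a sharp regression discontinuity estimate is shown below; it is not drawn from the evaluation discussed in this session, and the cutoff, variable names, and simulated data are purely hypothetical.

```python
# Minimal sketch: sharp regression discontinuity on simulated assessment data.
# Students scoring below a hypothetical cutoff receive the program; the treatment
# effect is estimated as the discontinuity in posttest scores at the cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, cutoff = 1000, 50.0
pretest = rng.uniform(0, 100, n)                  # assignment variable
treated = (pretest < cutoff).astype(float)        # assignment rule at the cutoff
posttest = 20 + 0.6 * pretest + 5.0 * treated + rng.normal(0, 5, n)

centered = pretest - cutoff                       # center the assignment variable
X = sm.add_constant(np.column_stack([treated, centered, treated * centered]))
model = sm.OLS(posttest, X).fit()                 # separate slopes on each side
print(model.params[1])                            # estimated effect at the cutoff
```

In practice, as the abstract notes, the harder work lies in checking functional form, bandwidth, and assumption violations rather than in fitting the model itself.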
|
|
Session Title: A Foot in Each World: An Evaluator in the Assessment World
|
|
Panel Session 475 to be held in Chesapeake Room on Thursday, November 8, 5:15 PM to 6:00 PM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| Jo-Ellen Asbury,
Villa Julie College,
dea-joel@mail.vjc.edu
|
| Discussant(s):
|
| Molly Engle,
Oregon State University,
molly.engle@oregonstate.edu
|
| Abstract:
This panel addresses the conference theme of evaluation and learning by focusing on the potential for evaluation research and assessment research to learn from one another. Though their end objectives are similar, a quick review of the key sources within these two bodies of literature reveals very little cross-pollination. Evaluators seem to situate their methodological roots in fairly classical social science research approaches. Assessment scholars seem to place their roots in the need to respond to calls for greater accountability from various accrediting agencies and calls for clearer documentation of effectiveness from the general public. This panel will examine how these two bodies of research, while unique in origin, may be moving toward similar goals and what they may learn from one another.
|
|
Program Evaluation and Higher Education Assessment: Different Origins, Same Objectives
|
| Jo-Ellen Asbury,
Villa Julie College,
dea-joel@mail.vjc.edu
|
|
This paper reviews the unique origins of these two bodies of literature and analyzes the commonalities in their current foci and objectives. Issues to be discussed include the level of control over design and procedures the researcher can exercise, who the stakeholders are, formative versus summative agendas, what constitutes 'evidence', and guidance from overarching theoretical perspectives. The paper will also consider whether these two bodies of research are truly on different paths or simply at different points along the same path. Throughout, the emphasis will be on what each body of literature can learn from the other.
|
|
|
Enhancement Through Integration: What we can Learn From Each Other
|
| Martha Ann Carey,
Azusa Pacific University,
mcarey@apu.edu
|
| Connie Brehm,
Azusa Pacific University,
cbrehn@apu.edu
|
| Javier Guerra,
Azusa Pacific University,
jguerra@apu.edu
|
|
Examining program quality against defined criteria is the purpose of most professional accreditation. In order to protect the public, assessment may focus more on ensuring that a professional's performance meets minimum standards. Evaluation, which often takes a broader view and is exemplified by the application of scientific principles to design, implementation, and utilization, may draw on standards from AERA, AEA, APA, and NCME to enhance the validity of the process. Evaluation approaches may also be better able to address future needs, since social and political contexts can be anticipated to affect a social program.
| |