|
Session Title: Complex Systems and "Wicked" Evaluations
|
|
Multipaper Session 848 to be held in Centennial Section A on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Systems in Evaluation TIG
|
| Chair(s): |
| Bob Williams,
Independent Consultant,
bobwill@actrix.co.nz
|
|
Evaluation Guidebook: Implications for a Quality Child Care System Evaluation and Development
|
| Presenter(s):
|
| Xuejin Lu,
Children's Services Council of Palm Beach County,
kim.lu@cscpbc.org
|
| Lance Till,
Children's Services Council of Palm Beach County,
lance.till@cscpbc.org
|
| Karen Brandi,
Children's Services Council of Palm Beach County,
karen.brandi@cscpbc.org
|
| Jeff Goodman,
Children's Services Council of Palm Beach County,
jeff.goodman@cscpbc.org
|
| Abstract:
This presentation will demonstrate how an evaluation guidebook was developed and used to guide the evaluation and management of a complex child care quality improvement system (QIS) in Palm Beach County. Guidebooks can be effective tools for steering evaluations, in addition to providing a central resource that programs can look to for development and alignment. The goals, nature, and evaluation of the QIS will be described, and the rationale for an evaluation guidebook to guide system development and evaluation will be highlighted. We will outline the development process and explain the contents of the guidebook, illustrating how it has been used throughout the evaluation process and to facilitate system change. We will also discuss the implications of the evaluation guidebook for the practice of system evaluation and development in the field of early care and education.
|
|
Evaluating System Response in Complex Problem Domains: Integrating Social Network and Systems Change Approaches
|
| Presenter(s):
|
| Branda Nowell,
North Carolina State University,
branda_nowell@ncsu.edu
|
| Abstract:
Increasingly, evaluators are being engaged in the formative and summative evaluation of systems change initiatives focused on improving the level of coordination among key organizations and agencies. The goals of these efforts focus on promoting synergistic outcomes and eliminating areas of destructive interference among those who share involvement in a common problem domain. Adopting a problem-domain level of analysis, this paper presents a framework for interorganizational systems assessment. The framework was developed from applied systems thinking and social network principles, with the ultimate goal of creating an evaluation technique for critically assessing coordination processes in the collective management of a given problem domain. A case example of its application in a project aimed at improving coordination in the management and remediation of a polluted watershed will be presented. Implications for theory and practice will be discussed.
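The paper presents its own framework; purely as a hypothetical sketch of the kind of network metric such an interorganizational assessment might draw on, the Python snippet below computes the density and degree centrality of a small, invented coordination network (all organization names and ties are made up):

```python
# Hypothetical sketch: simple network metrics for an interorganizational
# coordination assessment. Organizations and ties are invented and do not
# come from the paper's watershed case.
import networkx as nx

# Each edge is a reported coordination tie between two organizations
# active in the same problem domain (e.g., a polluted watershed).
ties = [
    ("WatershedCouncil", "CountyHealth"),
    ("WatershedCouncil", "StateEPA"),
    ("CountyHealth", "LocalNGO"),
    ("StateEPA", "LocalNGO"),
    ("University", "WatershedCouncil"),
]
G = nx.Graph(ties)

# Density: the share of possible coordination ties actually present.
print(f"density = {nx.density(G):.2f}")

# Degree centrality: which organizations sit at the hub of coordination.
for org, c in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{org:17s} {c:.2f}")
```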
|
|
Systematic Review Techniques as a Means of Taming “Wicked” Evaluations
|
| Presenter(s):
|
| Charles Naumer,
University of Washington,
naumer@u.washington.edu
|
| Abstract:
The context for this paper is the effort of a research team at the University of Washington to define outcome measures for the work of community technology centers throughout the state of Washington. This paper considers the challenges of defining the scope and measures of an evaluation project in which there is a high degree of ambiguity among stakeholders. This type of evaluation project is framed in terms of Rittel and Webber's classic description of "wicked" problems, a description often applied to complex policy issues.
Systematic reviews apply methodologically rigorous techniques to surveying the literature in order to identify and synthesize relevant research. The use of techniques associated with this process is explored as a means of addressing complexity, facilitating discussion among stakeholders, and providing a framework for developing data collection tools. Theoretical foundations for this approach are informed by research from the field of systems science.
|
|
|
Session Title: Core Quantitative Issues: Presenting Quantitative Findings to Policy Makers, Stakeholders and the Public
|
|
Multipaper Session 849 to be held in Centennial Section B on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Stephanie M Reich,
University of California Irvine,
smreich@uci.edu
|
| Discussant(s): |
| Dale Berger,
Claremont Graduate University,
dale.berger@cgu.edu
|
| Abstract:
The goal of evaluation research is to provide feedback to stakeholders about their program. However, translating complex quantitative analyses in a manner that is understandable and useful to laypeople is often challenging. This symposium will discuss the issue of presenting quantitative findings to a variety of non-statistical audiences. The session will start with innovative ways public health data have been presented to policy makers and state agencies to both translate findings and sway public opinion. This will be followed by a comparison of how the findings from several large, longitudinal studies of childcare have been interpreted and utilized based on the way in which they were presented. The session will conclude with suggestions, from a national health worker program, for presenting data that address 'what works for whom'. Evaluating programs is only beneficial when the outcomes can be used in ways that enable program improvement, expansion, modification, or even discontinuation.
|
|
Translating Evaluation Research into Policy and Practice: The Case of California's Family Planning Program
|
| Claire Brindis,
University of California San Francisco,
claire.brindis@ucsf.edu
|
|
Using data to shape policy requires a portfolio of evaluation dissemination strategies and tools and the ability to communicate effectively the results of stakeholders' investments. As an external evaluator, the University of California, San Francisco has played both a program monitoring and an evaluation role since the inception of the Family PACT Program, which provides family planning and reproductive health services at no cost to California's low-income residents of reproductive age. Key to translating the results of this program is the ability to present data in formats that respond to the needs and vital interests of the potential users of the information. For example, an eight-page brief presents comparative information on the decline in unintended pregnancies in the state by California State Senate (40 districts) and Assembly (80 districts), as well as by federal Congressional districts. Data are presented under the specific name of the policymaker and include the number of clients served, the total provider reimbursement in that district, the number of Family PACT providers in both the public and private sectors, and estimates of the program's impact, specifically the estimated number of pregnancies averted and the public costs averted through the prevention of unintended pregnancy. Data are presented in both numeric and graphic formats.
|
|
Presenting Data to Diverse Audiences: Many Evaluations With Varying Levels of Success
|
| Margaret Burchinal,
University of California Irvine,
mburchin@uci.edu
|
|
Throughout my long career as a statistician and evaluator, 'presenting data to lay people' has ranged from quite successful to quite unsuccessful. I have served as the statistician on a number of high-profile child-care projects that have attempted to influence policy, in part, through their presentation of data in the press. The successful projects, such as the Cost, Quality, and Child Outcomes Study and the Abecedarian Project, worked with media specialists to refine their message to a few relatively straightforward statements. These projects created clear messages that could be easily understood by a diverse range of audiences. They then worked with policy makers to translate findings into practice. Projects that I view as unsuccessful in disseminating findings, such as the NICHD Study of Early Child Care, allowed each investigator to craft his or her own interpretation of the findings, and then argued in the media over whose interpretation was correct. It is unclear whether anyone's interpretation of the findings influenced policy makers, although the mixed messages seemed to worry parents. This presentation will focus on lessons learned about how to present findings successfully to less statistically savvy audiences, and on practices that muddy meaningful interpretation and limit utilization of quantitative outcomes.
|
|
Changing Public Policy with Quantitative Data: Lessons from the Welcome Back Initiative
|
| Zoe Clayson,
Abundantia Consulting,
zoeclay@abundantia.net
|
| José Ramón Fernández Peña,
San Francisco State University,
jrfp@sfsu.edu
|
|
Welcome Back is an initiative to integrate internationally trained health workers into the health workforce. The program, which began seven years ago in California, is now being replicated throughout the country with funding from private philanthropy, federal training grants, and private employers. The quantitative data generated from this congressionally earmarked program have been particularly important with licensing boards, professional organizations, university and hospital residency programs, and nursing organizations. The Initiative Director has used the data extensively, and this early and continuous 'buy-in' has been critical to translating findings for broader audiences.
This presentation will focus on the variables that have been collected and the results of quantitative analyses of a variety of outcomes used to answer the evaluation question: for whom did the initiative work best, and why? Specifically, this talk will describe innovative ways in which these quantitative data are shared to promote comprehension, utilization, and continuing program support.
|
|
Session Title: Competencies, Credentials, Certification, and Professional Development: Experiences and Issues for Evaluation Policy
|
|
Multipaper Session 850 to be held in Centennial Section C on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Presidential Strand
|
| Chair(s): |
| Melvin Mark,
Pennsylvania State University,
m5m@psu.edu
|
| Discussant(s): |
| Thomas Schwandt,
University of Illinois Urbana-Champaign,
tschwand@uiuc.edu
|
| Abstract:
As indicated by the AEA President's statement of the conference theme, various issues fall under the concept of "evaluation policy." Among these is a set of related areas of research and practice that have to do with evaluator, rather than evaluation, quality. These include the specification of evaluator competencies, the process of offering credentials or certification, and the nature of evaluation training. In this session, these issues are examined and discussed, drawing on experiences in four different countries.
|
|
Next Steps in the Use of the Essential Competencies for Program Evaluators
|
| Jean King,
University of Minnesota,
kingx004@umn.edu
|
|
Development of the Essential Competencies for Program Evaluators (ECPE) began as an unfunded graduate student project at the University of Minnesota a decade ago. The resulting set of competencies, not yet sponsored by any professional organization, was published in draft form in 2002 and a more developed version in 2005. Since then it has been used in settings around the world. Three activities are appropriate next steps for ECPE development: (1) discussion and possible revision of certain competency statements, (2) formal validation of these competencies using the process outlined in Messick's unified theory, and (3) the development of rubrics for each of the competencies. Potential uses in evaluation policy include (1) establishing selection criteria for external evaluators in government agencies (for example, in New Zealand's Ministry of Education), (2) endorsement or accreditation of training courses and university-based programs, (3) endorsement or credentialing of evaluators, and (4) structuring of evaluation training.
|
|
Toward Professional Designations for Evaluators: The Canadian Experience
|
| J Bradley Cousins,
University of Ottawa,
bcousins@uottawa.ca
|
| Heather Buchanan,
Jua Management Consulting Services,
hbuchanan@jua.ca
|
| Keiko Kuji Shikatani,
Independent Consultant,
kujikeiko@aol.com
|
| Brigitte Maicher,
Net Results and Associates,
maicherb@nb.sympatico.ca
|
|
In May 2006 the Canadian Evaluation Society (CES) released an RFP for the development of an action plan for evaluator credentialing. The RFP process was in response to growing national interest in evaluation quality assurance and the professional development and renewal of qualified evaluation practitioners. The process marked the official launch of a year-long, multifaceted national consultation on whether CES should develop and install a system of professional designations and, if so, what such a system should look like. The consultation ultimately led to a formal decision by the CES National Council to commit to the development and implementation of a system of credentialing for evaluators, including, as a foundation, a crosswalk of evaluator competencies. This paper describes the development process for the system and plans for further development and implementation. Consideration will be given to challenges encountered along the way and lessons learned for evaluation policy.
|
|
New Evaluation Policy for Schools in Japan
|
| Masafumi Nagao,
International Christian University,
nagaom@icu.ac.jp
|
|
Starting this year (2008), all Japanese K-12 schools, including private ones, must carry out an annual self-evaluation and make its results public. In addition, schools are advised to have the evaluation results reviewed by stakeholders. At the end of each school year, each school must submit a report on its evaluation exercise to the local Board of Education, which should review the report and, if necessary and appropriate, take supportive action on it. All this results from a June 2007 act passed in the Japanese national parliament demanding greater accountability from schools and the practice of continuous improvement in providing quality education. This presentation will: discuss the rationale for this newly instituted policy; analyze the factors that will determine its fate, including the reaction of schools and the massive requirement for evaluation training for teachers; and relate this Japanese experience to the broader issue of evaluation policy.
|
|
Where 'Connectedness to Others and the Land' is an Essential Competency for Evaluators: The Challenges of Building a System of Evaluation Professional Development in a Bi-Cultural Context
|
| Kate McKegg,
Knowledge Institute Ltd,
kate.mckegg@xtra.co.nz
|
|
In the South Pacific, a diverse group of practitioner evaluators came together recently to form the Aotearoa New Zealand Evaluation Association (anzea). A motivating factor has been awareness by many evaluators of how different our evaluation practice is compared with much of what we read about from overseas. In particular, building connections is a key foundation for the practice of evaluation. Our histories matter here in New Zealand; where we are from and who our families and ancestors are matters. Establishing connection to others and to the land is a critical part of our cultural and evaluation practice. Our 'competence' is as much a function of our technical evaluation skills as it is our ability to connect. This paper will discuss some of the challenges that lie ahead for us in New Zealand as we work to weave this cultural competency into a system of professional evaluation development.
|
|
Session Title: Identifying Best Practices in Program Implementation and Evaluation: Innovative Examples From the Centers for Disease Control and Prevention (CDC)
|
|
Panel Session 851 to be held in Centennial Section D on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Government Evaluation TIG
and the Health Evaluation TIG
|
| Chair(s): |
| Thomas Chapel,
Centers for Disease Control and Prevention,
tchapel@cdc.gov
|
| Abstract:
Achieving the distal outcomes with which public health agencies have been charged will require the coordinated efforts of many sectors and players. The implication for evaluation is that interventions and their evaluations are complex, multi-layered, and difficult to carry out. Evaluation and implementation challenges may exceed the skills of our partners, who, fortunately, yearn for technical assistance and tools to aid at any or all stages of evaluation. This presentation discusses the strategies of large CDC programs to identify the best approaches for their partners and grantees. Each program will briefly describe its situation and how it landed on the specific tools or approaches it chose for providing guidance. The development and implementation of each tool or approach will be discussed, as will any information on how the guidance is perceived by the target audience of grantees and how it has changed evaluation practice at the partner/grantee level. Lessons from the CDC experience will also be drawn.
|
|
Perspectives on a Promising Practices Evaluation
|
| Susan Ladd,
Centers for Disease Control and Prevention,
sladd@cdc.gov
|
| Rosanne Farris,
Centers for Disease Control and Prevention,
rfarris@cdc.gov
|
| Jan Jernigan,
Centers for Disease Control and Prevention,
jjernigan1@cdc.gov
|
| Pam Williams-Piehota,
RTI International,
ppiehota@rti.org
|
| Belinda Minta,
Centers for Disease Control and Prevention,
bminta@cdc.gov
|
|
The Centers for Disease Control and Prevention's Division for Heart Disease and Stroke Prevention (DHDSP) identified a need to build practice-based evidence for policy and system-level interventions that could be replicated by state health departments to achieve public health goals. To meet this need, DHDSP launched an initiative to evaluate seven programs to identify effective interventions as well as promising practices. We briefly describe the evaluation design, the process used to evaluate the state programs, and the theoretical framework used to guide the evaluation. We discuss how challenges such as implementation and data collection delays were used to inform internal policy for future applications. We highlight benefits that sites attributed to their participation in the evaluation process, including changes in their perceptions of evaluation and evaluative thinking and increases in their internal capacity for conducting evaluation.
|
|
|
Rapid Evaluation of Promising Asthma Programs in Schools
|
| Marian Huhman,
Centers for Disease Control and Prevention,
mhuhman@cdc.gov
|
| Dana Keener,
Centers for Disease Control and Prevention,
dkeener@cdc.gov
|
|
As part of its mission to prevent the most serious health risk behaviors among children and adolescents, the Division of Adolescent and School Health (DASH) of the Centers for Disease Control and Prevention (CDC) funds school-based programs for asthma management. To help schools assess the impact of their program, DASH provides evaluation technical assistance using a rapid evaluation model. Rapid evaluations are designed to be completed within one year from initiation. Their purpose is to describe short-term impacts and outcomes, enhance the understanding of program activities, and provide recommendations for program improvement. This presentation will explain the components of the rapid evaluation model and describe its application for two school-based asthma management programs. How the programs used the information as well as strengths and weaknesses of the model will also be presented.
|
|
Using National Standards to Evaluate Cultural Competence in the African American Tuberculosis Intensification Project
|
| Linda Leary,
Centers for Disease Control and Prevention,
lsl1@cdc.gov
|
|
In 2005, CDC's Division of Tuberculosis Elimination funded three sites to participate in the African American Intensification Project. The purpose of this project was to intensify tuberculosis (TB) prevention and control efforts in African American communities and to increase cultural competence among care providers. The National Standards on Culturally and Linguistically Appropriate Services in Health Care (CLAS Standards) were used to assess progress toward increasing the cultural competence of TB program staff. Program self-evaluation data were examined, and interviews were conducted with program staff at all three sites regarding activities instituted to address cultural competence. Qualitative data were summarized and categorized in a matrix stratified by the 14 CLAS standards. This cross-examination provided valuable information on the effectiveness of the services initiated, and the matrix displayed how the project activities contributed to the elimination of health disparities.
|
|
Session Title: Theory-Based Evaluations and Educational Contexts
|
|
Multipaper Session 852 to be held in Centennial Section E on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Program Theory and Theory-driven Evaluation TIG
|
| Chair(s): |
| Uda Walker,
Gargani and Company Inc,
udaowalker@yahoo.com
|
|
Explicating Expected Consequences and Underlying Assumptions of a National Teacher Evaluation System
|
| Presenter(s):
|
| Sandy Taut,
Pontificia Catholic University of Chile,
staut@ucla.edu
|
| Maria Veronica Santelices,
Pontificia Catholic University of Chile,
mvsante@berkeley.edu
|
| Patricia Thibaut,
Pontificia Catholic University of Chile,
patithibaut@gmail.com
|
| Abstract:
The paper describes the process and results of explicating the expected consequences and underlying assumptions of the Chilean national teacher evaluation system (NTES), in the context of a larger study examining the intended and unintended consequences of this program. The NTES was introduced based on an agreement among three main stakeholder groups that traditionally hold opposing political views and to this day have diverging expectations regarding the program. We interviewed 14 professionals from these stakeholder groups and asked them to describe the program's theory at the individual (teacher) and system levels. Based on these interviews we reconstructed the "theories" of each interviewee and then consolidated these individual theories into one theory for each stakeholder group. We reflect on the challenges we faced during this process and propose a methodology for effectively addressing these challenges when trying to reconstruct a program's underlying assumptions in politically complex contexts.
|
|
Exploring the Intervention-Context Interface: A Case Study of a Nutrition Education Program Implementation
|
| Presenter(s):
|
| Sherri Bisset,
University of Montreal,
sherri.l.bisset@umontreal.ca
|
| Louise Potvin,
University of Montreal,
louise.potvin@umontreal.ca
|
| Mark Daniel,
University of Montreal,
mark.daniel@umontreal.ca
|
| Abstract:
This study develops a novel theoretical framework for the social processes which underlie program implementation. It is a case study of a nutrition intervention delivered by community nutritionists to elementary school children living in some of Montreal's most disadvantaged neighborhoods. Data collection and analysis were guided by the theory of translation (Callon, 1986; Latour, 1987). Data are derived from semi-structured interviews completed with six program interventionists. Findings identified nutritionists as preoccupied with three overarching goals, which varied between settings and took form interactively with the perceived interests of program participants (primarily students) and stakeholders (primarily teachers). Nutritionists were found to translate the program's techno-gram such that it provided a legitimate response to these perceived goals. Findings reveal program implementation as essentially a social process whereby interventionists translate program operations as a means of negotiating with program stakeholders.
|
|
|
Session Title: Examining Values in the Context of Evaluation Theory and Practice
|
|
Multipaper Session 853 to be held in Centennial Section F on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Theories of Evaluation TIG
|
| Chair(s): |
| Marvin Alkin,
University of California Los Angeles,
alkin@gseis.ucla.edu
|
|
Peacebuilding Programs and the Philosophical Frameworks of Evaluation: A Conceptual Discussion
|
| Presenter(s):
|
| Terrence Jantzi,
Eastern Mennonite University,
jantzit@emu.edu
|
| Abstract:
Over the past decade, the field of peacebuilding has emerged as a recognized discipline. However, there is still considerable debate within the field concerning how to evaluate peacebuilding initiatives. This paper presents a conceptual reflection intended to spark conversation concerning the location of peacebuilding within the philosophical and theoretical landscape of evaluation. The paper describes the spectrum of philosophical frameworks in evaluation, highlighting their embedded assumptions, and the spectrum of purposes of evaluation. These spectra are then used to reflect on the philosophical frameworks and evaluation purposes most closely aligned with the assumptions and practices found in contemporary peacebuilding practice. The paper ends with a discussion of the implications of these reflections for designing appropriate peacebuilding evaluations.
|
|
Valuing and Evaluation: Steps to a Framework in Support of Effective Evaluation Policy
|
| Presenter(s):
|
| Jennifer Grewe,
Utah State University,
jenngrewe@gmail.com
|
| Rod Hammer,
Utah State University,
rodhammer@cc.usu.edu
|
| Lindsey Thurgood,
Utah State University,
lindsey.thurgood@usu.edu
|
| George Julnes,
Utah State University,
george.julnes@usu.edu
|
| Abstract:
Much attention has been given lately to controversies over methods for supporting causal conclusions (Julnes & Rog, 2007). Indeed, these controversies are a major reason that AEA has established a standing committee on evaluation policy and has chosen evaluation policy as the theme for this year's conference. A related controversy, though one receiving less attention, involves how we make judgments of value about the policies and programs that we evaluate. While scholars like Scriven insist that value judgments are necessary for inquiry to be evaluation, there is little consensus within the evaluation community on either the importance of valuation or, when needed, how to do it. This paper takes steps toward a framework for valuing in support of effective evaluation policy.
|
|
The Goal Versus the Gold Standard
|
| Presenter(s):
|
| James Griffith,
Claremont Graduate University,
james.griffith@cgu.edu
|
| Abstract:
This paper argues for an epistemological stance in evaluation that connects to current movements in contemporary philosophy. Contemporary philosophical discussions of such ancient questions as 'When can we be certain?', 'When is knowledge secure?', and 'When do we have enough evidence?' have obvious and meaningful application in contemporary evaluation practice.
Gettier's (1963) refutation of analyses of knowledge as justified true belief thrust philosophers into decades of attempts to rethink justification or to discover some additional element that, added to justified true belief, would yield knowledge. Some contemporary philosophers have turned in a new direction, referred to variously as interest-relative, means-end, or practical-interest epistemology. While this view is certainly not universally accepted in philosophy, this turn toward a practical orientation to knowledge in what is arguably the purest of research disciplines is informative for evaluation, where theorists have taken pains to distinguish evaluation from pure research, citing evaluation's action orientation.
|
|
|
Session Title: The Role of Extension Middle Managers and Program Leaders in Building Agent Evaluation Capacity
|
|
Panel Session 854 to be held in Centennial Section G on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Chair(s): |
| Nancy Franz,
Virginia Polytechnic Institute and State University,
nfranz@vt.edu
|
| Discussant(s):
|
| Nancy Franz,
Virginia Polytechnic Institute and State University,
nfranz@vt.edu
|
| Abstract:
Building evaluation capacity in an organization requires support and attention from all levels. This panel focuses on how Virginia Cooperative Extension builds employee program, process, and performance evaluation capacity through middle managers known as District Directors and field-campus liaisons known as District Program Leaders. Virginia Cooperative Extension program development and evaluation specialists will share their experiences working with these organizational leaders to improve program evaluation. District Program Leaders and a District Director will also share their efforts and best practices for developing Extension agents' program development capacity. Specific joint efforts to be highlighted include evaluation training and technical assistance content and process, professional development for organizational leaders, unified communication, and integration with performance review. Group discussion will follow so participants can glean best practices for their own work with complex organizations seeking to build program evaluation capacity in employees.
|
|
Working with Cooperative Extension District Directors and District Program Leaders to Build Agent Program Evaluation Capacity
|
| Heather Boyd,
Virginia Polytechnic Institute and State University,
hboyd@vt.edu
|
|
A program evaluator's work can be greatly enhanced through a team approach to evaluation capacity building. This presentation looks at how program evaluation specialists work with Virginia Cooperative Extension District Directors and District Program Leaders to enhance agent program evaluation capacity through training and technical assistance, relationship building, and problem solving.
|
|
|
A Cooperative Extension Middle Manager’s Perspective of Building Program Evaluation Capacity in Employees
|
| Barbara Board,
Virginia Polytechnic Institute and State University,
board@vt.edu
|
|
Virginia Cooperative Extension District Directors serve as administrative middle managers with a major focus on personnel management, fiscal management, and policy compliance. These middle managers use information from program, process, and performance evaluations to guide this work. District Directors enhance the program evaluation capacity of agent employees by supporting training and technical assistance, including evaluation competency in employee professional development and performance management, and supporting District Program Leaders in developing agent program evaluation capacity.
|
|
Where the Rubber Meets the Road: Helping Front Line Employees Build Evaluation Capacity
|
| Christine Kastan,
Virginia Polytechnic Institute and State University,
cakastan@vt.edu
|
| Jewel Hairston,
Virginia Polytechnic Institute and State University,
jewelh@vt.edu
|
| Dan Goerlich,
Virginia Polytechnic Institute and State University,
dalego@vt.edu
|
|
Virginia Cooperative Extension District Program Leaders serve as liaisons between Extension field faculty and staff and state faculty and administrators in developing, implementing, and evaluating educational programming. These leaders provide direct training and technical assistance in program evaluation to employees on the front line. This role increases the speed at which Extension agents build evaluation capacity and conduct quality program evaluation.
|
|
Session Title: Impact Evaluation for Public Research and Development Programs in Japan
|
|
Multipaper Session 855 to be held in Centennial Section H on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Yasukuni Okubo,
Ministry of Economy Trade and Industry,
okubo-yasukuni@meti.go.jp
|
| Abstract:
Japan has been promoting science and technology development programs under the Basic Plans established by the Science and Technology Basic Law. Since then, evaluation has played a key role in producing impacts from that development.
The scope of public R&D programs covers a wide range, from high-risk basic research to technology development for industrialization, and the impacts produced are diverse. Today, impact evaluation draws upon diffusion-of-innovations theory, which describes how social change occurs. In this changing context, we discuss impact evaluation methods appropriate for measuring public R&D programs.
|
|
Effective Methods to Evaluate Impacts of METI Projects: New Approaches for Logic Model Development to Find Paths Between Research and Development Outputs and Objectives
|
| Kazuki Ogasahara,
Ministry of Economy Trade and Industry,
ogasahara-kazuki@meti.go.jp
|
| Yasukuni Okubo,
Ministry of Economy Trade and Industry,
okubo-yasukuni@meti.go.jp
|
| Chikahiro Miyokawa,
Ministry of Economy Trade and Industry,
ogasahara-kazuki@meti.go.jp
|
|
We take up the challenge of developing an effective method to clarify the path between the outputs and objectives of R&D projects implemented by the Ministry of Economy, Trade and Industry (METI).
A logic model is a tool for capturing this path and delineating missing elements.
The 'missing middle', which represents these missing elements, often emerges between the seed side and the objective side and is crucial for evaluating project impacts.
To fill the 'missing middle', we develop logic models of several METI projects from the objective side, drawing on the scenarios, technical overviews, and roadmaps in the 'Strategic Technology Roadmap' published by METI.
The results indicate paths, and outcome indicators along them, for evaluating project impacts.
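As a hypothetical sketch of how such a logic model might be represented so that a 'missing middle' can be flagged automatically, the snippet below encodes a toy model as a directed graph and reports outputs with no path to the objective; the node names are invented and are not drawn from METI's roadmaps:

```python
# Hypothetical sketch: a logic model as a directed graph, flagging
# outputs with no forward path to the objective (the "missing middle").
# Node names are invented for illustration.
import networkx as nx

model = nx.DiGraph()
model.add_edges_from([
    ("prototype_device", "pilot_production"),    # output -> intermediate outcome
    ("pilot_production", "industrialization"),   # intermediate -> objective
    ("materials_patent", "industrialization"),   # output -> objective
])
# An output the project recorded but never linked forward:
model.add_node("process_simulation_code")

objective = "industrialization"
outputs = ["prototype_device", "materials_patent", "process_simulation_code"]

for out in outputs:
    ok = nx.has_path(model, out, objective)
    print(f"{out}: {'reaches objective' if ok else 'missing middle'}")
```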
|
|
Case Study and Analysis for Economic and Social Impact on National Research and Development Project Based on the Results of Follow-Up Monitoring and Evaluation at NEDO
|
| Takahisa Yano,
New Energy and Industrial Technology Development Organization,
yanotkh@nedo.go.jp
|
| Mitsuru Takeshita,
New Energy and Industrial Technology Development Organization,
takeshitamtr@nedo.go.jp
|
| Hideaki Takamatsu,
New Energy and Industrial Technology Development Organization,
|
| Tsutomu Kitagawa,
New Energy and Industrial Technology Development Organization,
|
| Tsunekazu Asano,
New Energy and Industrial Technology Development Organization,
|
| Kazuaki Komoto,
New Energy and Industrial Technology Development Organization,
kohmotokza@nedo.go.jp
|
| Momoko Okada,
New Energy and Industrial Technology Development Organization,
okadammk@nedo.go.jp
|
| Hiroyuki Usada,
New Energy and Industrial Technology Development Organization,
|
|
NEDO is a funding agency that promotes the development of advanced industrial, environmental, new energy, and energy conservation technologies. NEDO needs to improve its performance through evaluation activities.
Follow-up monitoring and evaluation were started with the purpose of understanding the short-term outcomes of R&D and utilizing the results of monitoring to improve NEDO's R&D management.
In order to understand the current status of completed projects, their ex-post activities, including further R&D and commercialization, are monitored for a period of five years.
In this session, we discuss the economic and social impact of projects completed five years ago based on the results of the follow-up monitoring. Using case study analysis, we also present some notable results in detail and identify the relationship between ex-post evaluations and follow-up monitoring.
|
|
Measurement of Economic Impact on National Research and Development Program and Cost Benefit Analysis at NEDO
|
| Kazuaki Komoto,
New Energy and Industrial Technology Development Organization,
kohmotokza@nedo.go.jp
|
| Mitsuru Takeshita,
New Energy and Industrial Technology Development Organization,
takeshitamtr@nedo.go.jp
|
| Hideaki Takamatsu,
New Energy and Industrial Technology Development Organization,
|
| Tsutomu Kitagawa,
New Energy and Industrial Technology Development Organization,
|
| Tsunekazu Asano,
New Energy and Industrial Technology Development Organization,
|
| Takahisa Yano,
New Energy and Industrial Technology Development Organization,
yanotkh@nedo.go.jp
|
|
An outcome survey covering six technology areas has been conducted over the past three years for the purpose of understanding the mid- and long-term economic and social impacts of R&D programs.
In this research, we deal with national photovoltaic (PV) R&D programs. Solar energy is one of several promising clean energy sources that contribute to a stable energy supply and mitigation of global environmental issues. The Japanese government has been promoting PV R&D through initiatives such as the 'Sunshine Program' and 'New Sunshine Program' since 1974. Recently, Japan has achieved the highest PV production and installation levels in the world.
We studied the relationship between national PV R&D programs and the PV industry through interviews with project participants. Based on these results, we discuss how to grasp the outcome additionality of PV R&D programs using an econometric approach. We also measured the economic impact of the programs and conducted a cost-benefit analysis.
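Purely as a hypothetical sketch of the discounting arithmetic that underlies such a cost-benefit analysis, the snippet below compares an assumed stream of program benefits against R&D outlays; every figure is invented and none comes from the NEDO survey:

```python
# Hypothetical sketch of R&D cost-benefit arithmetic. All cash flows
# are invented illustrations, not NEDO survey results.

def npv(cash_flows, rate):
    """Net present value of year-indexed cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

rd_costs = [-120.0, -80.0, -40.0, 0.0, 0.0, 0.0]   # program outlays by year
benefits = [0.0, 0.0, 10.0, 40.0, 90.0, 150.0]     # benefits attributed to the program
rate = 0.04                                        # assumed social discount rate

b = npv(benefits, rate)
c = -npv(rd_costs, rate)
print(f"PV of benefits = {b:.1f}")
print(f"PV of costs    = {c:.1f}")
print(f"benefit-cost ratio = {b / c:.2f}, net benefit = {b - c:.1f}")
```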
|
|
Session Title: Alternative Approaches to Building Evaluation Capacity Through Formal Education
|
|
Multipaper Session 856 to be held in Mineral Hall Section A on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Chair(s): |
| Anna Madison,
University of Massachusetts Boston,
anna.madison@umb.edu
|
| Discussant(s): |
| Jody Fitzpatrick,
University of Colorado Denver,
jody.fitzpatrick@cudenver.edu
|
| Abstract:
One of the greatest challenges to improving evaluation practice is ensuring that evaluators are competent practitioners. The focus of this paper session is the capacity of university education programs to meet the challenges created by the expanding role of evaluators and the increasing diversity of evaluation settings. The three presenters examine three different approaches to formal evaluation education, each addressing a specific population of learners. The first paper addresses teaching evaluation in Minority Serving Institutions (MSIs) and the unique contributions of MSIs to expanding evaluation capacity. The second paper examines undergraduate evaluation education as the entry point into the field; the presenter will share the outcomes of an undergraduate evaluation pilot program. The third paper addresses formal education programs that target mid-career educators, with emphasis on evaluation to inform institutional change and to improve practice.
|
|
The Unique Contributions of Minority Serving Institutions to Evaluation Capacity Building
|
| Veronica Thomas,
Howard University,
vthomas@howard.edu
|
|
In recent years, attention has been given to identifying strategies for enhancing the capacity of faculty at Minority Serving Institutions (MSIs) to integrate evaluation courses and content within their research curricula. The objectives of this effort are to aid in the dissemination of effective teaching strategies to a broad community of evaluators and to attract more students of color to graduate training in evaluation. The following questions will frame this presentation: (1) why a special focus on the teaching of evaluation at MSIs, (2) which strategies are effective in enhancing the capacity of MSI faculty, and (3) which courses and practical experiences facilitate meaningful experiences for students of color and increase the value and relevance of the profession for these students. Implications of the teaching of evaluation at MSIs will be tied to the larger issues of broadening the scope of the profession and evaluation for social justice.
|
|
Linking Undergraduate and Graduate Evaluation Education
|
| Anna Madison,
University of Massachusetts Boston,
anna.madison@umb.edu
|
|
Within the field of evaluation, formal education is supplemental to a basic discipline and is usually provided at the graduate level. The project reported in this presentation is a unique effort to integrate evaluation into an undergraduate public and community services curriculum. The undergraduate program, funded by the National Science Foundation, sought to increase the capacity to meet the demand for a more diverse pool of evaluators. The project results suggest that undergraduate evaluation education can serve as an orientation to evaluation as well as increase the representation of persons of color in the profession. The paper highlights the recruitment strategies, curricular content, student characteristics, and student outcomes of project participants.
|
|
Strengthening Evaluation of Public Schools Through Evaluation Education Targeted to Mid-Career Educators
|
| Katye Perry,
Oklahoma State University,
katye.perry@okstate.edu
|
|
This paper addresses the critical role that mid-career teachers can play by designing and producing evaluations to inform educational change. To this end, it examines the factors that could bring about this contribution, first examining the need for mid-career teacher input as the voice for change, followed by a discussion of the developmental stages teachers pass through from first-year to mid-career to veteran status. In so doing, it looks at the concerns that typify each developmental stage while identifying the stage at which evaluative thinking and training could be most effectively introduced. The paper concludes by exploring potential strategies for developing evaluation skills that could ultimately inform educational policy.
|
|
Session Title: The National Institute of Justice's Research and Evaluation Program Development on American Indian and Alaska Native Crime and Justice Issues
|
|
Panel Session 858 to be held in Mineral Hall Section C on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Indigenous Peoples in Evaluation TIG
|
| Chair(s): |
| Angela Moore,
National Institute of Justice,
angela.moore.parmley@usdoj.gov
|
| Abstract:
The U.S. Department of Justice has sought to fulfill its unique trust responsibility to protect and act for the betterment of Indian tribes by increasing its involvement in addressing American Indian and Alaska Native crime and justice issues. It is important to improve the quality and relevance of this research by ensuring that research and evaluations are done collaboratively at the community level, whether through increased participation or in an advisory capacity.
This panel will discuss the National Institute of Justice's (NIJ) current tribal portfolio and outreach activities. NIJ's portfolio encompasses social science research and evaluation as well as technology assessments. Outreach activities include increased partnerships with other agencies as well as the use of focus groups and key informant interviews to inform an American Indian and Alaska Native crime and justice research and evaluation agenda. Finally, this panel will discuss guiding principles for researchers and evaluators interested in Indian Country.
|
|
An Overview of the National Institute of Justice's American Indian and Alaska Native Crime and Justice Portfolio
|
| Winnie Reed,
National Institute of Justice,
winnie.reed@usdoj.gov
|
|
This presentation will provide an overview of NIJ's American Indian and Alaska Native research and evaluation portfolio for the past ten years. Several projects will be highlighted including those in the areas of policing and comprehensive criminal justice system improvement. Attendees will be provided with information regarding the products of NIJ research in this area and how to obtain copies.
|
|
|
Priority Research and Evaluation Needs for American Indian and Alaska Native Communities
|
| Jaclyn Smith,
National Institute of Justice,
jaclyn.smith@usdoj.gov
|
|
To date, NIJ has convened several focus groups and held key informant interviews with tribal leaders, representatives, and stakeholders to discuss the extant research and identify the need for, and gaps in, research and evaluation. Findings from these activities will illustrate the agreements and discrepancies between the views of those most directly affected by such research and those of outside researchers and evaluators regarding the current state of research.
|
|
Moving Forward: Guiding Principles for Research and Evaluation with American Indian and Alaska Native Populations
|
| Christine Crossland,
National Institute of Justice,
christine.crossland@usdoj.gov
|
|
This presentation will build on the first two panelists' presentations by discussing in some detail the guiding principles researchers and evaluators should adhere to when conducting research and evaluation with American Indian and Alaska Native populations to ensure the highest level of quality and integrity throughout the research process. Although emphasis will be placed on social science research and evaluation, guiding principles for technology assessments will be included since there is some overlap in these areas.
|
|
Session Title: Evaluation in the Field: Developing Local Practitioners' Evaluative Thinking
|
|
Panel Session 859 to be held in Mineral Hall Section D on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Chair(s): |
| Jennifer Greene,
University of Illinois Urbana-Champaign,
jcgreene@uiuc.edu
|
| Discussant(s):
|
| Jennifer Greene,
University of Illinois Urbana-Champaign,
jcgreene@uiuc.edu
|
| Abstract:
In this session, a distinctive model of evaluation practice is shared - a model that blends an educative and capacity-building intent with self-reflective evaluation practice and progress. A diverse group of R&D units and university-based evaluators in Sweden has been contracting with local municipal authorities to conduct an 18-month 'workshop' for staff in 6-8 municipal projects. The group meets once a month for one full day at a municipal location. At these meetings, the instructor offers short lectures on key topics in evaluation, and the group critically discusses evaluation progress on one or more of the projects involved. Between meetings, the staff in the projects work on their own evaluations. This distinctive model of evaluation, named Evaluation Verkstad Practice (EVP), offers promise both to enhance the defensibility of evaluation results in the region and to support more reflective evaluative thinking among professional municipal staff.
|
|
Evaluation Verkstad Practice (EVP): Basic Ideas, Structure and Content
|
| Elisabeth Beijer,
FoU i Väst,
elisabeth.beijer@gr.to
|
| Bengt Eriksson,
Karlstad University,
bengt-g.eriksson@kau.se
|
| Per-Ake Karlsson,
Borås University,
per-ake.karlsson@hb.se
|
| Tom Leissner,
Goteborg University,
tom.leissner@socwork.gu.se
|
|
Evaluation Verkstad Practice (EVP) constitutes an activity that accomplishes the combined purpose of conducting evaluations and developing competence to conduct evaluations, with the support of R&D units or university-based evaluators. The EVP featured in this presentation brings together groups of participants from different welfare organizations and workplaces. The participants have an assignment from their own organizations to conduct an evaluation of a specific object or program. The EVPs support the participants' evaluation activities through themed mini-lectures and through a process of supervision, dialogue, and reflection around issues that arise while the evaluations are in process. Further, the EVP supports the development of participant competence in evaluation more broadly, primarily through peer interaction and critical reflection. The evaluations conducted at the workshops are primarily internal, but with external support. EVPs have a beneficial effect on the learning of evaluation methods by directly combining learning with the conduct of evaluation.
|
|
|
The Program Theory of Evaluation Verkstad Practice
|
| Kari Jess,
Malardalen University,
kari.jess@mdh.se
|
|
This paper explores the program theory of the Evaluation Verkstad Practice (EVP) as a form of 'learning by doing' practice or evaluation capacity building. The EVP facilitator should be experienced and skilled in evaluation, with knowledge about evaluation theory and multiple methodologies. The EVP participants should bring well-structured projects for evaluation, have managerial support, and be at about the same stage in their projects. It is an advantage if the projects to be evaluated are diverse in origin and scope, as this diversity enhances participant learning.
The purpose of the paper is to understand the premises and underlying rationale of the EVP. An EVP program theory can be formulated with a focus on the assumptions underlying (a) learning by doing, and (b) the role of the evaluator/instructor, which leads to (c) the development of learning organizations via 'the loop of learning'. (McLaughlin 1999, Stame 2004, van der Knaap 2004, Weiss 1997).
|
|
Recruited or Appointed to Evaluation Verkstad Practice? A Reflection About Participation in Self-Reflective Evaluation Groups
|
| Laila Niklasson,
Malardalen University,
laila.niklasson@mdh.se
|
|
The model for implementing the Evaluation Verkstad Practice (EVP) takes the form of self-reflective evaluation groups. It also has a formal policy to guide decisions and actions, among them the issue of membership in the groups. In this paper, the recruitment of members to EVPs is elaborated. Are members recruited, or are they appointed? Who makes these decisions? How do recruitment and appointment affect the practical evaluation work? From a review of the EVPs conducted to date, a preliminary conclusion is that both recruitment and appointment have been used, reflecting both formal and ad hoc perspectives and several levels of decision-making actors.
In this presentation, the effects of participant recruitment/appointment on the EVPs are discussed only from a theoretical perspective. The hypothesis presented is that the choice of recruitment/appointment can affect dissemination of the project results, depending on the participant's formal authority and responsibility for reporting and thus dissemination.
|
|
The Participant Perspective on Evaluation Verkstad Practice: Learning and Utilization
|
| Ove Karlsson Vestman,
Malardalen University,
ove.karlsson@mdh.se
|
|
With a view of evaluation as a practice of interpretation, argumentation, and decision making in interactional contexts, this presentation focuses on the significance of the Evaluation Verkstad Practice (EVP) for participant learning and utilization. The learning focus is on how participants have changed their conception of evaluation practice. Participant learning will be analyzed via three orders of learning: (1) assimilation or single-loop learning, as in increased knowledge; (2) accommodation or double-loop learning, involving critically assessing one's activity from new perspectives; and (3) learning that comprises the re-organization of earlier experiences and knowledge, generating a re-formulation of fundamental perspectives.
Participant utilization of their learning in the EVP will be analyzed via three venues of utilization: (1) instrumental utilization, or how participants apply their learning in evaluation practice; (2) conceptual utilization, or new ways of understanding their evaluation activity; and (3) utilization that involves changes in organizational perspectives on and valuing of evaluation.
|
|
Session Title: Lessons Learned From an Evaluation of a Comprehensive Community Initiative
|
|
Multipaper Session 860 to be held in Mineral Hall Section E on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Pennie Foster-Fishman,
Michigan State University,
fosterfi@msu.edu
|
| Abstract:
The complexity of comprehensive community initiatives (CCIs) makes evaluation exceptionally challenging. For example, despite their interest in changing population- and community-level outcomes, CCIs often use interventions that target only a few neighborhoods or subgroups of residents. This can produce significant mismatches between the scale and scope of the intervention and the expected outcomes, ultimately leading to evaluation results that suggest that CCI models are ineffective. Evaluating CCIs is also complicated by the fact that some interventions are not evenly dispersed throughout the target community, leading to widely varying levels of program dosage. In this panel presentation we will present the evaluation methods we adopted to address these challenges in our evaluation of one CCI. The lessons learned from our evaluation over the past six years will also be discussed.
|
|
Who Should I Collect Data From: Simple Questions, Complex Answers When Evaluating A Comprehensive Community Initiative
|
| Pennie Foster-Fishman,
Michigan State University,
fosterfi@msu.edu
|
|
With the expansive visions that drive most comprehensive community initiatives (CCIs) to pursue population- and community-level social change, it is tempting to focus CCI evaluations entirely on large-scale outcomes. But taking such an approach ignores the fact that CCIs are often composed of multiple program components that operate at different scales, have different goals and objectives, target different systems and populations, vary in duration, and are implemented to different degrees. This complexity challenges evaluators at even the most basic of levels: who to collect data from, what outcomes to target, and when and how to best collect this information. This presentation will describe how we tackled these basic evaluation questions in our evaluation of one CCI. Specifically, we will discuss the strategies we used to identify which residents and organizations to sample and how we strove to maximize the possibility of dosage variation in our samples.
|
|
The Pros and Cons of Using Resident Surveys to Measure Changes in Resident- and Neighborhood-Level CCI Outcomes
|
| Jason Forney,
Michigan State University,
forneyja@msu.edu
|
| Steven Pierce,
Michigan State University,
pierces1@msu.edu
|
| Charles Collins,
Michigan State University,
ccollins1981@gmail.com
|
| Soyeon Ahn,
Michigan State University,
ahnso@msu.edu
|
|
This presentation will discuss the pros and cons of using resident surveys to evaluate whether a community hosting a comprehensive community initiative (CCI) experienced improvements in the resident- and neighborhood-level outcomes targeted by the CCI. We will discuss the survey goals and objectives, how the survey was administered, the survey design and sampling challenges we encountered, the most important methodological features of the study we implemented, and the data management and analysis issues arising from those methodological choices. We will present key findings related to the value of the door-to-door outreach effort we used to encourage residents to respond, response rates and levels of longitudinal attrition, and findings related to the major evaluation questions addressed by the study. We will also highlight how we used dosage analysis to determine programmatic effects with our survey data.
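As a purely hypothetical illustration of what a dosage analysis on such survey data can look like, the sketch below regresses a resident outcome on program dosage while adjusting for a baseline covariate; the variable names and simulated data are invented, not the evaluation's actual measures:

```python
# Hypothetical sketch of a dosage analysis: regress a resident-level
# outcome on CCI program dosage, adjusting for a baseline covariate.
# All variables and data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "dosage": rng.integers(0, 4, n),     # e.g., number of CCI components a resident reached
    "baseline": rng.normal(0.0, 1.0, n), # e.g., a pre-initiative neighborhood rating
})
# Simulated outcome with a true per-unit dosage effect of 0.3.
df["outcome"] = 0.3 * df["dosage"] + 0.5 * df["baseline"] + rng.normal(0.0, 1.0, n)

fit = smf.ols("outcome ~ dosage + baseline", data=df).fit()
print(fit.params)      # the dosage coefficient estimates the per-unit effect
print(fit.conf_int())  # confidence intervals for each coefficient
```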
|
|
Integrating Secondary Data into the Evaluation of Comprehensive Community Initiatives
|
| Melissa Quon Huber,
Michigan State University,
hubermel@msu.edu
|
| Laurie Van Egeren,
Michigan State University,
vanegere@msu.edu
|
|
The complexity of comprehensive community initiatives (CCIs) calls for multiple data sources, many of which may already be available as secondary data. Secondary data can be easily accessible and are often used as the 'default' measure of an initiative's impact (e.g., census data). Other secondary data sources, particularly those developed within the local community such as information on neighborhood groups and leaders and community surveys, can provide more detailed snapshots of local impacts but also require creative consideration of the sources available and how they can effectively inform the evaluation. This paper describes lessons learned about the integration of secondary data into a CCI evaluation, including challenges in aligning available secondary data with CCI goals, monitoring shifting definitions of community-defined variables, identifying the full range of relevant secondary data sources available within the community, and developing indicators of the CCI's dosage through the use of these secondary data sources.
|
|
Using Online Surveys to Track Organizational and Service Delivery Network Changes
|
| Kristen Law,
Michigan State University,
lawkrist@msu.edu
|
|
Collaborative partnerships among service delivery organizations are viewed as an important objective in enhancing CCI success. This presentation will discuss the use of an online longitudinal survey to evaluate and track organizational and service delivery network changes in one CCI. A primary focus of the CCI was to enhance local organizations' capacity to partner with both other local organizations and local residents. I will discuss the survey goals and objectives, survey administration, and challenges encountered, as well as the use of measurements of readiness and capacity for change and dosage exposure to predict organizational partnerships and resident involvement within organizations. I will also present key findings related to the outcomes of organization-to-organization and organization-to-resident partnerships and collaboration.
|
|
Session Title: The Impact of School and Individual Context Variables on Student Outcomes
|
|
Multipaper Session 861 to be held in Mineral Hall Section F on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Janice Noga,
Pathfinder Evaluation and Consulting,
jan.noga@stanfordalumni.org
|
|
Contextual Variance in Educators' Perceptions of School Violence
|
| Presenter(s):
|
| Benjamin Cohen,
Center for Schools and Communities,
bcohen@csc.csiu.org
|
| Beth Edwards,
Center for Schools and Communities,
bedwards@csc.csiu.org
|
| Abstract:
Using data from the U.S. Department of Education's 2003-2004 Schools and Staffing Survey (SASS), we examine the relationship between teachers' and administrators' perceptions of several school climate indicators, including the frequency of physical conflicts, bullying, racial tensions, and disorder in classrooms. We calculate the degree of agreement between teacher and administrator ratings on these indicators and then examine individual and school characteristics associated with good and poor agreement. We discuss the implications of these analyses for evaluators and the school-based violence prevention programs they examine; in particular, this study suggests how evaluators should interpret data from different levels of school systems. Furthermore, in line with the calls of prevention scientists to examine how underlying teacher and principal characteristics impact the quality of prevention program implementation (Greenberg et al., 2005), we offer suggestions for how and when evaluators should incorporate measures of teacher-administrator agreement into formative and summative evaluation activities.
|
|
Examining School Leadership and Climate: Assisting School Districts in Identifying Key Factors Impacting Student Outcomes
|
| Presenter(s):
|
| Vicki Schmitt,
University of Alabama,
vschmitt@bamaed.ua.edu
|
| Rebecca Rodriguez,
Institute for School Improvement,
rodriguez09@missouristate.edu
|
| David Hough,
Institute for School Improvement,
davidhough@missouristate.edu
|
| Abstract:
School climate has long been a subject of interest to those in the field of educational evaluation. Research suggests that school climate accounts for a significant amount of variance in student achievement, making it a priority for school districts concerned with improving student achievement. Leadership, which often plays an integral part in the overall culture and environment of a school, has been characterized as a primary factor comprising overall school climate. Utilizing a triangulation approach, this paper explores data collected from faculty, students, and parents associated with ten suburban and rural school districts in a Midwestern state. Leadership is examined as a key factor mediating the relationship between climate and student outcomes.
|
|
Documenting Intervention Effects in High Poverty Schools: Approaches, Issues, and Concerns
|
| Presenter(s):
|
| Vicki Schmitt,
University of Alabama,
vschmitt@bamaed.ua.edu
|
| David Hough,
Missouri State University,
davidhough@missouristate.edu
|
| Victoria Henbest,
Missouri State University,
victoria2010@missouristate.edu
|
| Steve Seal,
Missouri State University,
sseal@missouristate.edu
|
| Abstract:
The effects of poverty on school-age children are well documented, with disadvantages reaching far beyond the elementary school years. Studies examining student resiliency report negative psychosocial and environmental effects on school success for many children living below the poverty level or in working-poor homes. Addressing poverty and its effects must be a priority for communities and their schools if the negative impacts on children's health and development are to be addressed comprehensively (Abert et al., 1997). Many high-poverty schools struggle to provide adequate interventions aimed at addressing the lingering effects of poverty. In addition, documenting such efforts in ways that help policy makers and social service providers better understand what constitutes “best practice” is also difficult. This paper provides insight on these issues from a community-focused intervention targeting two high-poverty elementary schools in an urban setting within a major metropolitan area in a Midwestern state.
|
|
Why Can’t Poor Kids Learn?
|
| Presenter(s):
|
| Tom McKlin,
Georgia Tech,
tom.mcklin@gatech.edu
|
| Abstract:
Numerous policy initiatives and countless dollars have been devoted to diminishing the achievement gap between low-socioeconomic status (SES) students and high-SES students. Oftentimes, these initiatives and expenditures have done little to reduce the gap, and sadly, that gap has sometimes increased. Still, most educators and evaluators would be appalled by anyone claiming that poor children cannot learn. If we believe that low-SES students can learn just as well as their high-SES counterparts, why then does the gap remain? This paper seeks to answer that question in three ways: by examining the correlation between socioeconomic and student achievement data in one state (Georgia) during the NCLB years; by providing a review of the literature on the relationship between SES and student achievement; and finally, by hypothesizing why the correlation between these two variables remains high.
|
| | | |
|
Session Title: Empowerment Evaluation Principles in Practice: The Annie E. Casey Foundation's Making Connections Initiative
|
|
Panel Session 862 to be held in Mineral Hall Section G on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Tom Kelly,
Annie E Casey Foundation,
tkelly@aecf.org
|
| Discussant(s):
|
| David Fetterman,
Stanford University,
david@stanford.edu
|
| Abstract:
Collaborative evaluation, participatory evaluation and empowerment evaluation are variations on a field of practice that engages stakeholders in the evaluation process. Ten principles guiding empowerment evaluation have been clearly laid out in the past decade: improvement, community ownership, inclusion, democratic participation, social justice, community knowledge, evidence-based strategies, capacity building, organizational learning and accountability. During that same period, the Annie E. Casey Foundation launched its Neighborhood Transformation/Family Development Initiative (Making Connections) in low-income neighborhoods in 22 cities throughout the U.S. This panel discusses the extent to which the Making Connections initiative in three of those cities has implemented the core principles of empowerment evaluation. The panelists - local evaluators in Denver, Indianapolis and San Antonio - will discuss what worked and what did not work in their attempts to put these principles into action. Fetterman, D., "A Window into the Heart and Soul of Empowerment Evaluation," Chapter 1, and Abraham Wandersman, Jessica Snell-Johns, Barry E. Lentz, David M. Fetterman, Dana C. Keener, Melanie Livet, Pamela S. Imm and Paul Flaspohler, "The Principles of Empowerment Evaluation," Chapter 2, in David M. Fetterman and Abraham Wandersman (eds.), Empowerment Evaluation Principles in Practice. New York: The Guilford Press, 2005.
|
|
Empowerment Evaluation Principles in Practice: Making Connections-San Antonio
|
| Robert Brischetto,
Making Connections San Antonio,
brischetto@wireweb.net
|
|
The ten principles of empowerment evaluation, as applied to the evaluation of the San Antonio Making Connections initiative, reveal the successes and challenges since its inception in 2001. Resident engagement in the evaluation process is one of many challenges encountered. One solution to the problem of resident engagement has been the establishment of the West Side Center for Resident Engagement in Community Evaluation Research (WS-CRECER) in one of the neighborhood service centers. The presentation discusses the techniques and methods used to engage residents in the evaluation process and lessons learned from their implementation. The presenter is Robert Brischetto, the Local Learning Partnership Coordinator for Making Connections San Antonio since 2005. He has been the local evaluator of the Making Connections initiative in San Antonio.
|
|
|
Empowerment Evaluation Principles in Practice: Making Connections - Indianapolis
|
| Lisa Osterman,
Making Connections Indianapolis,
laosterman@earthlink.net
|
| Elaine Cates,
Southeast Learning Partnership,
cates1e@yahoo.com
|
|
The Southeast Learning Partnership is a neighborhood-based committee of residents and other stakeholders dedicated to collecting and sharing data for community improvement initiatives. Local service providers and policy makers rely on the SELP to provide critical information for evaluating programs and policies in the community. The grassroots approach to data collection and dissemination improves the quality and utility of the data, while simultaneously building community capacity to use data to measure change and hold community leaders, service providers, and policy makers accountable. Presenters include Elaine Cates, Chairperson of the SELP and community resident, and Lisa Osterman, Data Access Facilitator for Making Connections-Indianapolis.
| |
|
Empowerment Evaluation Principles in Practice: Making Connections - Denver
|
| Sue Tripathi,
Making Connections Denver,
stripathi@mcdenver.org
|
|
Making Connections-Denver (MC-D), a place-based initiative, involves families in shaping the future of their neighborhoods. The initiative teaches residents to develop the relationships, skills, leadership and strategies necessary to build powerful communities, and to staff the Community Learning Network (CLN), which oversees research and evaluation of the initiative. To evaluate participation of residents, MC-D developed and administered the first wave of the Participant Family Data Collection (PFDC). The PFDC is a longitudinal, mixed-method study designed to capture the experiences of families participating deeply in MC-D. A goal of PFDC implementation is for resident leaders to take the helm of the research and evaluation work. The PFDC project measures the results and impact of strategies and also serves as a community-building tool that engages and empowers community researchers and participating families in ways that are congruent with MC-D Guiding Principles. Presenters include Sue Tripathi, Evaluation Manager, and members of the Community Learning Network team.
| |
|
Session Title: Using Qualitative Evaluation to Gain a Deeper Understanding of Unique Program Contexts
|
|
Multipaper Session 863 to be held in the Agate Room Section B on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Qualitative Methods TIG
|
| Chair(s): |
| Jennifer Jewiss,
University of Vermont,
jennifer.jewiss@uvm.edu
|
| Discussant(s): |
| Jennifer Jewiss,
University of Vermont,
jennifer.jewiss@uvm.edu
|
|
Valuing Idiosyncratic Program Outcomes and Individual Antecedent Contexts: Using In-Depth Phenomenological Interviewing in an Early Childhood Teacher Preparation Program Evaluation
|
| Presenter(s):
|
| Sally Galman,
University of Massachusetts Amherst,
sally@educ.umass.edu
|
| Abstract:
Recent policy shifts in early childhood education in Massachusetts resulted in a move toward a universal preschool model in the state. In anticipation of growing personnel needs statewide, many teacher education programs rapidly reconfigured their early childhood education teacher preparation (ECETP) program models. However, the highly individualized, complex field of variables involved and the idiosyncratic nature of program outcomes made it difficult to employ standard evaluation models. This study evaluated one such ECETP program using the goal-based Transaction Model of program evaluation (Madaus, Scriven & Stufflebeam, 1983), modified to include in-depth phenomenological interview methods. This technique focuses on illuminating individual stakeholders' contemporary and antecedent contexts and meaning-making processes (Seidman, 1998), and in the course of this study served to fracture the seeming uniformity of experience into concrete categories for analysis. Findings support recommendations for careful but effective use of the technique in a wide variety of evaluation work.
|
|
Unraveling Evaluation Puzzles Through Qualitative Methods
|
| Presenter(s):
|
| Janet Usinger,
University of Nevada Reno,
usingerj@unr.edu
|
| Bill Thornton,
University of Nevada Reno,
thorbill@unr.edu
|
| Abstract:
Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) is a federally funded project that provides academic and financial support to first-generation college-going students. Upon high school graduation and acceptance to a community college or university, students receive a scholarship, for up to six years, toward completion of a bachelor's degree. Evaluation data reveal that of the 1,086 students eligible to receive the GEAR UP scholarship, only 272 (25%) actually accessed it immediately following high school graduation. The question lingers: why? Using qualitative methods, a six-year longitudinal study was conducted with 60 GEAR UP students from diverse communities to explore how students construct their career aspirations and the role education plays. The focus of this presentation will be on the analytic process used to identify and explore the students' values and beliefs. This presentation will be relevant to evaluators who use qualitative methods in complex projects.
|
|
"I'm Wasting My Time" Thinking: Factors Associated with Student Retention and Attrition at a Small, Public, Comprehensive College in the Northeast
|
| Presenter(s):
|
| Kathleen Greenberg,
University at Old Westbury - State University of New York,
greenbergk@oldwestbury.edu
|
| Abstract:
Findings will be presented from a series of focus groups designed to explore the factors associated with student retention and attrition at a small, comprehensive, northeastern college. The groups were conducted as a preliminary step in the development of a college-wide survey that will be used to provide a statistically sound understanding of the characteristics of various segments of the student body – particularly those who attrite and those who do not – so that targeted retention strategies and a predictive model for identifying likely attritors may be developed. Findings suggested that there are conceptually consistent psychological differences between students who leave and students who consider leaving but stay, and that these differences may predispose students who leave to think they are wasting their time in college. If quantitatively confirmed, this hypothesis implies that efforts to improve retention should aim to reduce the frequency or likelihood of such "wasting my time" thinking.
|
| | |
|
Session Title: The Children's Trust Evaluation Policy and Practice: Lessons Learned Assessing Progress and Opportunity at Multiple Levels
|
|
Multipaper Session 864 to be held in the Agate Room Section C on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Lori Hanson,
Children's Trust,
lori@thechildrenstrust.org
|
| Discussant(s): |
| Catherine Raymond,
Raymond Consulting Inc,
raymondconsult@bellsouth.net
|
| Abstract:
The Children's Trust is a source of public revenue established in 2002, with a mission to improve the lives of children and families in Miami-Dade County by making strategic investments in their future. This session discusses the development, implementation and impact of The Trust's three-tiered approach to evaluation policies/practices since its creation. First, our $40+ million investment in Out-of-School programs illustrates a contract-level evaluation approach, including development of a uniform set of program components and measures for participant-level outcomes, as well as ongoing provider capacity-building. Next, we illustrate initiative-level evaluation, focusing on formative evaluation of HealthConnect, a $26 million three-pronged portfolio to strengthen child/family health through early childhood home visitation, school-based health, and community health workers focused on health care access. Finally, we discuss our efforts to conduct community-level research/evaluation around the well-being and needs of children in our community, as well as progress toward fulfilling our mission.
|
|
Out-of-School: Moving Towards Common Program Components and Outcome Measures
|
| Dalia Garcia,
Children's Trust,
dalia@thechildrenstrust.org
|
| Lori Hanson,
Children's Trust,
lori@thechildrenstrust.org
|
|
In 2004, The Children's Trust launched an Out-of-School initiative to expand the availability of after-school and summer programs in Miami-Dade County, including programs for children with disabilities. In 2005, The Trust created Project RISE (Research, Inspiration, Support and Evaluation) in partnership with a local university to support provider capacity-building, quality improvement, and the development of program and participant outcome standards. Core OOS program quality standards and participant outcome measures have been adopted in collaboration with providers and experts in the OOS field. Two required common measures were introduced last year for all contracted providers to ensure consistency and comparable results: the One-Minute Oral Reading Fluency measure evaluates improvements in reading skills, and the School-Age Care Environment Rating Scale evaluates overall program quality. Providers are offered specialized trainings on the selection, administration and uses of participant measurement tools. The Trust and Project RISE are evaluating the outcomes of the first year of using these tools.
|
|
HealthConnect: Initiative-level Evaluation as a Framework for Collaborative Partnerships
|
| Sharon DeJoy,
Children's Trust,
sharon@thechildrenstrust.org
|
| Lori Hanson,
Children's Trust,
lori@thechildrenstrust.org
|
|
In 2006, The Children's Trust launched the three-pronged HealthConnect initiative to create systemic improvements in child health and increase the number of children linked to medical homes. Recognizing that Miami-Dade has a higher proportion of uninsured children than national and state averages, HealthConnect programs were designed to increase access to health care for all children from early childhood (HealthConnect in the Early Years) through the school-age years (HealthConnect in Our Schools), with HealthConnect in Our Community filling gaps through outreach and health navigation at the neighborhood level throughout the community.
Integration and collaboration are critical to HealthConnect's success; therefore, in collaboration with experts and the literature in the field, we developed a unique evaluation framework to drive program implementation. The formative evaluation strategy identified opportunities to improve, leverage resources, and integrate strategies across providers, including 'inreach' referral networks. Summative evaluation for providers focuses on consistent measures to ensure achievement of common objectives.
|
|
Community-level Research and Evaluation to Inform Progress and Opportunities
|
| Lisa Pittman,
Children's Trust,
lisa@thechildrenstrust.org
|
| Lori Hanson,
Children's Trust,
lori@thechildrenstrust.org
|
|
The Children's Trust's approach to evaluation policy and practice begins and ends with community-level analysis. Compiling and analyzing statistical data to understand and compare the changing status of children and families within Miami-Dade County - across neighborhoods, demographics, and time - enables The Trust, and the community it serves, to identify both areas of success and challenges. This information leads to needs-identified investments in child and family services at the initiative-level, and to data-rich measures of change at the community-level.
By periodically fielding a comprehensive survey of parents and compiling indicators of child health and well-being, The Trust identifies need, establishes baselines, and measures community/neighborhood change to assess the value of its investments, as well as to inform policymakers. A community data integration collaboration is underway to follow children as they traverse various systems - health, child care, school, social services, justice, employment, etc. - to improve service delivery and measure impact.
|
|
Session Title: Evaluators as Mediators: Participatory Techniques to Understand Policy, Practice, and Belief Systems in School
|
|
Multipaper Session 865 to be held in the Granite Room Section A on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Linda Channell,
Jackson State University,
drlinda@bellsouth.net
|
|
Interaction: Policy, Evaluation, and Practice in North Carolina Educator Personnel Evaluations
|
| Presenter(s):
|
| Sha Balizet,
Mid-Continent Research for Education and Learning,
sbalizet@mcrel.org
|
| Jouanna Crossland Wells,
Mid-Continent Research for Education and Learning,
jcrossland-wells@mcrel.org
|
| Abstract:
North Carolina policymakers boldly envisioned a comprehensive personnel appraisal (or evaluation) system to advance educational leadership and professionalism among teachers, principals, and superintendents. Using North Carolina as a case, we look at the interplay of policy with practice and the mediating role of evaluators. In this example, policymakers sought to shape educational practice through the lever of a set of new personnel appraisal systems, all based on the same foundational standards. The new appraisal systems will follow aligned processes and procedures across career levels and tracks, and share features such as 360-degree feedback and growth-oriented rubrics. Guided by the Personnel Evaluation Standards, evaluators investigated the perceived quality of the new systems through stakeholders' attitudes towards the new evaluation processes. This case illustrates the interactions of evaluators with policymakers and practitioners, and may point towards future evaluation needs as more policymakers adopt the comprehensive approach developed in North Carolina.
|
|
An Evaluation of Meadowbrook K-8
|
| Presenter(s):
|
| Eleanor Spindler,
University of Colorado Boulder,
eleanor.spindler@colorado.edu
|
| Amy Subert,
University of Colorado Boulder,
amy.subert@colorado.edu
|
| Kenneth Howe,
University of Colorado Boulder,
ken.howe@colorado.edu
|
| Abstract:
Over the last decade, many districts have implemented K-8 grade configurations in the hopes of raising student achievement (Hough, 2005). While some researchers have found positive achievement effects (Abella, 2005; Alspaugh, 1998; Offenberg, 2001), other research suggests those effects are actually attributable to differing student and teacher populations (Byrnes & Ruby, 2007). Through parent and teacher surveys, teacher focus groups, and analysis of student achievement and enrollment patterns, this evaluation of an urban K-8 public school seeks to determine the effectiveness of the upper grades (6-8). Currently, a majority of parents with students in grades K-5 choose other schools for grades 6-8, thus breaking the continuity that is the hallmark of K-8 schools. We will compare this school with 13 other K-8 schools in the district, controlling for size, SES, and ethnicity; we hope to examine the qualities of successful K-8 schools and the effects of an open-enrollment system of parental choice.
|
|
Process and Evaluation Use and Organizational Consequences of Developing, Administering and Reporting on School Climate Surveys in the Albuquerque Public Schools
|
| Presenter(s):
|
| Michelle Osowski,
Albuquerque Public Schools,
osowski@aps.edu
|
| Abstract:
Select schools within the Albuquerque Public Schools (APS) district have elected to include student and staff climate data to gain a more thorough understanding of the strengths of their schools and the barriers to changing practice and belief systems.
The project was three-fold: the first part was the systematic inquiry and participatory action research that preceded the development, administration, and testing of two surveys that have been used in several elementary, middle, and high schools within APS; the second part was the examination of evaluation use; and the third part was the subsequent organizational consequences.
|
| | |
|
Session Title: Evaluating Translational Research: Challenges for Evaluation Policy and Practice
|
|
Panel Session 866 to be held in the Granite Room Section B on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| William Trochim,
Cornell University,
wmt1@cornell.edu
|
| Discussant(s):
|
| William Trochim,
Cornell University,
wmt1@cornell.edu
|
| Abstract:
Translational research is a newly emerging discipline that attempts to integrate scientific research and communities of practice, and to change the way society and science interact. This field poses important new and complex conceptual, methodological and systems challenges for evaluation. This panel describes current evaluation efforts of the newly established NIH Clinical and Translational Science Awards (CTSA) initiative, a consortium that is expected by 2012 to include 60 academic centers linked nationally to connect academic institutions and communities of practice in health and medicine. The presentations will address conceptual issues (how to define and operationalize translational research), methodological issues (the use of social network analysis to assess how diffusion of innovation changes over time in communities of practice) and systems issues (how business and management methods can help create an effective 'program of evaluation'). The implications of this work for evaluation policy and practice generally will be discussed.
|
|
Translational Research: Can Usable Categories be Created?
|
| Ann Dozier,
University of Rochester,
ann_dozier@urmc.rochester.edu
|
| Stephen Lurie,
University of Rochester,
stephen_lurie@urmc.rochester.edu
|
| Camille Martina,
University of Rochester,
camille_martina@urmc.rochester.edu
|
| Thomas Fogg,
University of Rochester,
thomas_fogg@urmc.rochester.edu
|
| Thomas Pearson,
University of Rochester,
thomas_pearson@urmc.rochester.edu
|
|
Translational research is increasingly discussed in the literature, often in the context of NIH's Clinical and Translational Science Award (CTSA) initiative. While the term refers in broad strokes to different types of translational research, there is no consensus as to the number of types or their definitions. To make these definitions meaningful for an evaluator, administrator or researcher, a classification schema is warranted, but developing a usable one to categorize actual research endeavors presents significant challenges. Through an ongoing research resource inventory at a CTSA-funded institution, all investigators biennially categorize each of their research projects across pre-defined fields (e.g., geographic scope, life stage, international disease classification). In 2007, fields describing translational research categories (not labeled as such) were added. Over 1,000 research projects were classified. Test-retest reliability assessment with a sample of 50 researchers representing the inventory's three categories of translational research demonstrated that some investigators had difficulty interpreting the categories.
|
|
|
Evaluation of Communities of Practice and Diffusion of Innovation in Translational Science
|
| Abigail Cohen,
University of Pennsylvania,
abigailc@mail.med.upenn.edu
|
|
The University of Pennsylvania's Clinical and Translational Science Award (CTSA) has forged a complex multi-institutional 'academic home' for clinical and translational research among Penn, the Children's Hospital of Philadelphia, the Wistar Institute and the University of the Sciences in Philadelphia to foster interdisciplinary science from the discovery of new molecules to the study of drug action in large populations. As part of a multifaceted approach to evaluating this program, social network analysis will be used to measure changes over time and to determine whether the centers, cores and programs show patterns of connectedness across the alliance, and whether these nodes and ties result in higher productivity and greater numbers of work products. Analyses use existing databases (e.g., publication citations) and survey data and are intended to evaluate how diffusion of innovation advances by examining the alliance of networks and their role in influencing the spread of new ideas and practices.
| |
|
Evaluation and Metaevaluation as Program, Project and Sub-project: Designing and Implementing the Evaluation of a Clinical and Translational Science Institute
|
| Don Yarbrough,
University of Iowa,
d-yarbrough@uiowa.edu
|
|
Large institutional innovations with multiple components often need complex evaluations to serve numerous purposes and users. Using the example of an NIH-funded Center for Clinical and Translational Science, this paper conceptualizes an overall 'program of evaluation' using the business program/project management literature. In this conceptualization, each specified evaluation purpose has its own linked evaluation subproject sharing some resources and activities with other evaluation subprojects. Individual evaluation subprojects focus on individual components, collaboration among components, overall governance, or evaluation capacity building and resource sharing. An important set of subprojects provides formative and summative metaevaluation of the evaluation subprojects to ensure that the individual subprojects are optimally efficient, effective, well-coordinated with each other, and responsive to overall evaluation purposes and needs. The paper provides illustrations of how to incorporate collaborative approaches, evaluation capacity building, program theory and logic, and evaluation standards and guidelines into this project-based design.
| |
|
Session Title: Evaluation in International Education Settings
|
|
Multipaper Session 867 to be held in the Granite Room Section C on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Audrey-Marie Moore,
Academy for Educational Development,
amoore@aed.org
|
|
Short-Term Exchange Programs: Long-Term Outcomes - Does It Really Work?
|
| Presenter(s):
|
| Liudmila Mikhailova,
US Civilian Research and Development Foundation,
mikhailova@msn.com
|
| Abstract:
The panel will focus on outcome assessment of short-term international programs for professional exchange. It will address challenges that donor and contracted organizations face while designing M&E plans and measuring mid- and long-term outcomes. The panel will analyze best practices in evaluation design and critique the shortcomings of evaluation findings.
A sampling of findings on short-term exchange programs' outcomes will be presented, including the International Visitor Leadership Program (IVLP), one of the oldest professional exchange programs sponsored by the U.S. federal government. Started in 1940 with Inter-American exchanges, IVLP today brings to the United States about 4,500 promising leaders in 50 areas of expertise from 185 countries around the world.
The discussion will center on the analysis of different criteria for measuring success in three major areas: international knowledge acquisition and its impact on alumni professional development; application of international knowledge in promoting innovation and change in alumni home countries; and increased international and cross-cultural understanding. Professional exchanges have been recognized all over the world as an enormous contribution to human capital development and global social change. In the United States, such programs are seen as one of the major vehicles to promote and advance U.S. foreign policy in the fields of public diplomacy and technical assistance. Ideas on how to measure the success of short-term exchange programs will be brainstormed during this presentation.
|
|
Good Governance of Public Universities: A Multi-Case Study
|
| Presenter(s):
|
| Rattana Buosonte,
Naresuan University,
rattanabb@hotmail.com
|
| Abstract:
The purpose of this research was to study and compare good governance at two public universities in Thailand, located in the Northern and Northeastern regions. A mixed-method, integrated design (Quan-Qual) was used. The participants were 226 stakeholders from those universities. A questionnaire and semi-structured interviews were used to collect data. The quantitative results found that the two universities were at a medium level of good governance overall and on five components, except for effectiveness, which was at a high level. The qualitative results, however, found that the two universities were at a low level on three components: transparency, equity, and participation. They were at a good or satisfactory level on the components of independence, effectiveness, and flexibility. When the results of the two universities were compared, there were differences on some components and some issues of good governance.
Keywords: 1) Good Governance 2) Public University 3) Equity 4) Transparency 5) Participation 6) Independence 7) Effectiveness 8) Flexibility
|
|
Practices and Challenges in Educational Program Evaluation in the Asia-Pacific Region: Results of a Delphi Study
|
| Presenter(s):
|
| Yi-Fang Lee,
National Chi Nan University,
lee.2084@yahoo.com
|
| James Altschuld,
The Ohio State University,
altschuld.1@osu.edu
|
| Hsin-Ling Hung,
University of Cincinnati,
hsonya@gmail.com
|
| Abstract:
Educational program evaluation (EPE) has become more important in recent years because of increasing governmental demands for accountability. At the same time, little is known about the development of and issues in educational evaluation in the Asia-Pacific region. To that end, we conducted a Delphi study to learn what is happening now and what the future might hold for EPE in this part of the world based on the perspectives of evaluation experts.
Thirty-seven panelists from eleven Asia-Pacific countries participated in three Delphi rounds. Thirty-four out of 78 statements reached consensus in accord with our criterion of 90% of responses falling into the inter-quartile range. Higher agreement was noted for the concept of EPE as compared to current and future statuses. Major characteristics of EPE in the area are discussed as well as potential trends and challenges 5 years from now.
|
|
A Development of Conceptual Change Model in Quality Assurance of Basic Education Institutions
|
| Presenter(s):
|
| Sukanyarat Khong-Ngam,
Chulalongkorn University,
ksukanyarat@hotmail.com
|
| Suwimon Wongwanich,
Chulalongkorn University,
wsuwimon@chula.ac.th
|
| Siridej Sujiva,
Chulalongkorn University,
ssiridej@chula.ac.th
|
| Abstract:
The purposes of this study were (1) to study the conception of quality assurance in basic education institutions and to develop a set of diagnostic instruments for detecting misconceptions about quality assurance in basic education institutions in Thailand; (2) to analyze the extent of misconceptions about quality assurance among teachers in case-study schools; (3) to develop a conceptual change model for quality assurance in basic education institutions and to employ the developed model with key stakeholders in case-study schools; and (4) to examine the effectiveness of the conceptual change model by comparing the amount of misconception about quality assurance before and after employing the developed model. This research used a research and development methodology in two phases. The first phase was an exploratory study aimed at developing the diagnostic method for detecting misconceptions and analyzing the extent of misconceptions about quality assurance. The second phase was an experimental study aimed at developing the conceptual change model and comparing the amount of misconception about quality assurance before and after employing the developed model. The research sample consisted of key stakeholders, such as teachers, administrators, and school internal quality assurance committee members, from six basic education schools in Thailand. Data were collected with two sets of research instruments: the first set, for diagnosing misconceptions, consisted of a self-checklist and a diagnostic test; the second set, for examining the effectiveness of the conceptual change model, consisted of a questionnaire, interviews, and anecdotal records. The qualitative data were analyzed using content analysis, and the quantitative data were analyzed using descriptive statistics, ANOVA, and ANCOVA. The results are expected to be beneficial to stakeholders.
|
| | | |
| In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Multiple Case Study of College First Year Seminars From an Evaluative Perspective Using Critical Action Research Matrix Application (CARMA) |
|
Roundtable Presentation 868 to be held in the Quartz Room Section A on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the College Access Programs TIG
|
| Presenter(s):
|
| Karen M Reid,
University of Nevada Las Vegas,
reidk2@unlv.nevada.edu
|
| Peggy Perkins,
Thomas University,
pperkins@thomasu.edu
|
| LeAnn Putney,
University of Nevada Las Vegas,
leann.putney@unlv.edu
|
| Abstract:
In 1972, John N. Gardner pioneered a concept called the first-year seminar to increase academic performance and freshman student retention. By 2002, 94% of America's four-year institutions offered a first-year seminar to at least some students. Research has generally found a positive and almost always statistically significant relationship between seminar participation and college achievement and/or persistence. Unfortunately, these studies frequently reflected a variety of methodological issues. The purpose of this research was to address those shortfalls by defining for the reader the multiple dimensions of first-year seminars and a prescription for future success. An evaluative perspective was applied using Critical Action Research Matrix Application (CARMA) as the basis for collecting and analyzing the data. Analysis concentrated on the commonalities and differences associated with first-year seminar programs at three different institutions. The central question for this evaluative research was: what should be the key components of a first-year seminar?
|
| Roundtable Rotation II:
Evaluating One Program Within a Systemic Reform Initiative: Discussion of Challenges and Potential Solutions for Isolating Program Impact |
|
Roundtable Presentation 868 to be held in the Quartz Room Section A on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the College Access Programs TIG
|
| Presenter(s):
|
| Doreen Finkelstein,
College Board,
dfinkelstein@collegeboard.org
|
| Abstract:
The College Board has multiple systemic reform initiatives that provide a suite of programs and services for high schools. This type of systemic reform approach raises a difficult problem in evaluation: how does one understand the impact of one particular program or service within the context of the entire initiative? While it is useful and desirable to look at the effects of the whole reform package as a single, large, multifaceted intervention, stakeholders also want to know whether certain components of the package are more effective than others, and whether some components are redundant or perhaps even ineffectual. The proposed roundtable will explore the issues and generate ideas for evaluating the impact of one particular program within the context of a multifaceted systemic reform initiative. Questions for discussion with attendees will cover both methodological and statistical approaches for addressing this problem, and ideas and suggestions will be actively solicited.
|
| In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
"Little Pig, Little Pig, Let me In!" The Relationship Between Individual Evaluation Policy and Degree of Cooperative Participation in the Evaluation of a Large, Comprehensive Project |
|
Roundtable Presentation 869 to be held in the Quartz Room Section B on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Evaluation Use TIG
|
| Presenter(s):
|
| Gale Mentzer,
University of Toledo,
gmentze@utnet.utoledo.edu
|
| Abstract:
This roundtable session begins with a presentation of a study that examined the relationship between faculty attitudes and beliefs about the role of evaluation in a multi-million-dollar federal education grant and faculty levels of cooperation in implementing the evaluation plan. The study used a mixed-methods design wherein faculty attitudes were collected through in-depth interviews and level of cooperation was measured using a rating scale based on the type (formative or summative) and frequency of evaluation implemented. Results indicated that certain stereotypes were detrimental to full implementation of the evaluation plan. Roundtable discussion will allow participants to share their own experiences and will conclude with an opportunity to explore strategies to re-educate clients as to the productive role evaluation can play in their projects.
|
| Roundtable Rotation II:
You Expect Us To Do What? Examining Issues Related to Utilizing Evaluation Findings and Recommendations |
|
Roundtable Presentation 869 to be held in the Quartz Room Section B on Saturday, Nov 8, 10:45 AM to 12:15 PM
|
|
Sponsored by the Evaluation Use TIG
|
| Presenter(s):
|
| Stefanie Anderson,
Association of Minority Health Professions Schools Inc,
sanderson@minorityhealth.org
|
| Diane Elder,
Independent Consultant,
dr_nutmeg@yahoo.com
|
| Abstract:
The purpose of the proposed roundtable is to examine the challenges associated with negotiating the relationship between the funding agency and the recipient after an evaluation is completed and recommendations have been made. The Association of Minority Health Professions Schools, Inc. (AMHPS), a 501(c)(3) organization in Atlanta, was asked by its Federal funding agency to select an independent contractor to conduct a performance-based evaluation of its cooperative agreement to clarify the activities of the program, its successes, and its constraints. All stakeholders, including staff from the Federal agency, participated where appropriate. Findings and recommendations were made for AMHPS, the Federal agency, and the grant recipients. Ultimately, all stakeholders are responsible for implementing their recommendations, and failure to implement impedes the progress of the other participants. Although AMHPS commissioned the evaluation and the Federal government requested and participated in it, neither is empowered to enforce the findings applicable to the other.
|