Evaluation 2008

Session Title: Evaluation of Statewide Special Education Initiatives: Current Practices and Future Policies?
Panel Session 825 to be held in Centennial Section A on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Special Needs Populations TIG
Chair(s):
Joanne Farley,  University of Kentucky,  joanne.farley@uky.edu
Discussant(s):
Brent Garrett,  Pacific Institute for Research and Evaluation,  bgarrett@pire.org
Abstract: Special education initiatives, like Response to Intervention (RTI), prescribe organized and deliberate changes at the individual as well as the institutional level. For evaluations of statewide system change initiatives to be influential and useful, they must provide reflective and corrective feedback. Such a feedback mechanism not only allows administrators to modify decisions but also helps evaluators explain observed student intervention outcomes. However, evaluating statewide initiatives poses many challenges. The purpose of this panel is to start a dialogue on the issues involved in evaluating such initiatives and to discuss effective evaluative practices. The panel includes a discussion of issues in evaluating RTI, especially monitoring the progress of implementation. The panel will also present the evaluation of Vermont's pilot project on RTI, including formative and summative evaluations, methodologies, and preliminary findings.
Evaluation of a Statewide Implementation of Response To Intervention
Chithra Perumal,  University of Kentucky,  vet077@yahoo.com
'Implementation is defined as a specified set of activities designed to put into practice an activity or program of known dimensions' (National Implementation Research Network). Implementation fidelity refers to the extent to which an agency implements a program or practice as intended. Education programs such as Response to Intervention (RTI) prescribe changes not only in the classroom but also at the administration and leadership levels. Understanding whether certain critical activities have occurred, and the degree to which they have been implemented, is crucial: it allows evaluators not only to provide feedback but also to explain observed student outcomes. Ideally, evaluations should be able to link implementation outcomes to intervention outcomes. This presentation discusses the challenges of evaluating RTI implementation statewide and the approach the Tennessee State Improvement Grant evaluators used to assess the impact of RTI.
Vermont's Pilot Project on Responsiveness to Intervention (RtI)
Patricia Mueller,  Evergreen Educational Consulting,  eec@gmavt.net
The Vermont Department of Education's work in four pilot schools began in 2006. In spring 2008, an evaluation research team convened to assess the degree to which the VT RtI model has improved educational outcomes for all students. This presentation will provide a preliminary review of evaluation findings. Evaluation questions focused on impact at the classroom level (e.g., change in practice, increase in student achievement), the systems level (e.g., change in referral rates to special education), and changes in roles and relationships (e.g., staff roles and responsibilities, influence on leadership). Summative questions included assessment of the factors that contributed to scaling up the RtI model and any unintended effects of model implementation. Methodology included interviews with teachers, administrators, paraeducators, parents, and school board members. Classroom observations were also conducted to assess fidelity of implementation and to triangulate the interview data with student outcome data.

Session Title: Evaluation of a Peer Mediated Health Education Program: A Demonstration of Regression Discontinuity Design
Demonstration Session 826 to be held in Centennial Section B on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Karen Larwin,  University of Akron,  drklarwin@yahoo.com
Abstract: The Peer Mediated Health Education Program was implemented in a struggling inner-city school district in northeast Ohio, in the hope that its impact would result in a measurable change in participating students' attitudes, knowledge, and behaviors regarding risky behaviors. Specifically, the program sought to correct students' misconceptions about how many students participate in risky behaviors, address students' misconceptions regarding the dangers associated with risky behaviors, and address options and approaches for resisting peer pressure to participate in risky behaviors. Students in the program received bi-weekly health education classes led by trained peer mediators. The evaluation included a two-group analysis, conducted to inform decisions about the effectiveness of this first-year program and about future funding. Regression discontinuity will be demonstrated with data from the evaluation of the program (n = 755).
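
For readers unfamiliar with the design, here is a minimal sketch of a sharp regression discontinuity analysis in Python with statsmodels. The assignment variable, cutoff, and data are synthetic stand-ins, since the abstract does not specify the actual assignment rule used in this evaluation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration: 'score' is a hypothetical assignment variable that
# determined program participation at a known cutoff; 'outcome' is a
# post-program attitude/knowledge scale. All values are invented.
rng = np.random.default_rng(0)
n, cutoff = 755, 50.0
score = rng.uniform(0, 100, n)
treated = (score >= cutoff).astype(int)
outcome = 20 + 0.1 * score + 5.0 * treated + rng.normal(0, 3, n)
df = pd.DataFrame({"score": score, "treated": treated, "outcome": outcome})

# Center the assignment variable at the cutoff and fit a linear model with
# separate slopes on each side; the coefficient on 'treated' estimates the
# discontinuity (the program effect) at the cutoff.
df["centered"] = df["score"] - cutoff
fit = smf.ols("outcome ~ treated + centered + treated:centered", data=df).fit()
print(fit.params["treated"], fit.conf_int().loc["treated"])
```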

Roundtable: From Identification of Major Challenges to Evaluation to Proactively Influencing Evaluation Policies: A Compilation of the Findings from a Series of Think Tanks
Roundtable Presentation 827 to be held in Centennial Section C on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Research, Technology, and Development Evaluation TIG and the Presidential Strand
Presenter(s):
Rosalie Ruegg,  TIA Consulting Inc,  ruegg@ec.rr.com
Connie Chang,  Ocean Tomo Federal Services,  cknc2006@gmail.com
Abstract: A series of five think tanks, initiated in 2003, focused on challenges to evaluation and sought input and solutions through interactive discussions with participants. Six categories of challenges were identified and addressed: (1) institutional/cultural barriers, (2) methodological problems, (3) data/measurement difficulties, (4) resource issues, (5) communication obstacles, and (6) conflicting stakeholder agendas. These challenges interfere with the effectiveness of evaluation in measuring and improving program performance, meeting public accountability requirements, and informing management and policy decisions. Four of the think tanks focused on barriers to performing evaluation; the fifth investigated barriers that impede the use of evaluation to inform program management and public policy. This roundtable will provide a concise overview of these challenges and solutions, and will ask participants to identify any missing challenges, explore additional solutions, and provide advice on how best to incorporate solutions into R&D management and operations to improve performance and inform policy.

Session Title: Planning for Evaluation’s Future: Undergraduate Students’ Interest in Program Evaluation
Expert Lecture Session 828 to be held in Centennial Section D on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Janet Lee,  University of California Los Angeles,  janet.lee@ucla.edu
Presenter(s):
John LaVelle,  Claremont Graduate University,  john.lavelle@cgu.edu
Abstract: Currently, few people seek a career in program evaluation, though the demand for evaluators exceeds the supply. Undergraduate students are a potential pool of future evaluators, but little is known about their interest in pursuing a career in program evaluation (PE). The purpose of this session is to expand on a previous study (LaVelle, 2007) and share more representative data on undergraduate students’ interest in PE. The study will present data from students at a number of sites across the U.S. Participants were asked to read five randomly ordered descriptions of program evaluation, rate the readability of each, and respond to semantic differential questions assessing their attitude toward each description. Participants then indicated their global attitude toward program evaluation, their familiarity with and interest in PE as a career, and their interest in participating in a PE internship. Their responses may aid our quest to grow the evaluation profession.

Session Title: Using Technology to Push the Boundaries of Theory-Driven Evaluation Science: Implications for Policy and Practice
Panel Session 829 to be held in Centennial Section E on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Stewart I Donaldson,  Claremont Graduate University,  stewart.donaldson@cgu.edu
Discussant(s):
John Gargani,  Gargani and Company Inc,  john@gcoinc.com
Amanda Bueno,  First 5 LA,  abueno@first5la.org
Abstract: Theory-driven evaluation science is now being used across the globe to develop and evaluate a wide range of programs, practices, large-scale initiatives, policies, and organizations. One of the great challenges faced while using this approach is addressing high levels of complexity. In this session, we will illustrate new breakthroughs in the development of conceptual frameworks to guide evaluation practice and policy. A range of practical examples from evaluation practice will be presented to illustrate the value of these new tools and processes. The implications for evaluation policy and practice will be explored.
Using Interactive Software to Develop Complex Conceptual Frameworks to Guide Evaluations
Christina Christie,  Claremont Graduate University,  tina.christie@cgu.edu
Stewart I Donaldson,  Claremont Graduate University,  stewart.donaldson@cgu.edu
Tarek Azzam,  Claremont Graduate University,  tarek.azzam@cgu.edu
This paper will focus on how to use new advances in software design and application to develop and use program theory to improve evaluations. Based on the principles of program theory-driven evaluation science (Donaldson, 2007), a step-by-step approach will be reviewed to illustrate the value of using conceptual frameworks to engage stakeholders in a process that leads to accurate, useful, and cost-effective program evaluations. New interactive software will be demonstrated to show how it can be used to engage stakeholders, facilitate needs assessments, develop program theory, formulate and prioritize evaluation questions, help answer key evaluation questions, and communicate evaluation findings in ways that increase use and influence. The implications for improving evaluation policy and practice will be discussed.
Examples of Complex, Interactive Conceptual Frameworks to Guide Evaluation Planning, Enhance Data Analysis, and Communicate Findings
Tarek Azzam,  Claremont Graduate University,  tarek.azzam@cgu.edu
Stewart I Donaldson,  Claremont Graduate University,  stewart.donaldson@cgu.edu
Christina Christie,  Claremont Graduate University,  tina.christie@cgu.edu
This paper will explore several examples from evaluation practice of technology-enhanced logic models, program theories, and more complex conceptual frameworks used to guide evaluation practice and policy. These technology-enhanced evaluations span education, health, and child development programs. Furthermore, an in-depth analysis of a technology-enhanced evaluation of a full portfolio of early childhood initiatives totaling over $800 million will be presented to illustrate how these new tools can be used to address complexity problems, guide strategic management processes, and improve evaluation policy and practice.

Session Title: Tobacco Prevention and Control Cost-Benefit and Impact Analysis: The Wyoming Experience
Multipaper Session 830 to be held in Centennial Section F on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Nanette Nelson,  University of Wyoming,  nnelso13@uwyo.edu
The Impact of Wyoming’s Tobacco Prevention and Control Program on Cigarette Consumption
Presenter(s):
Mark Leonardson,  University of Wyoming,  mleonar3@uwyo.edu
Nanette Nelson,  University of Wyoming,  nnelso13@uwyo.edu
Abstract: The University of Wyoming’s Survey & Analysis Center (WYSAC) investigated the relationship between expenditures of Wyoming’s Tobacco-Free Communities Program (TFWC) and cigarette sales using regression analysis. WYSAC measured cigarette sales over ten years for each of Wyoming's 23 counties, yielding 230 observations. The response variable is the logarithm of annual county per capita cigarette (stamp) sales; the explanatory variable of primary interest is the logarithm of annual TFWC expenditures by county. WYSAC used a regression model to account for the numerous other factors that affect cigarette sales and to obtain an unbiased estimate of the impact of the TFWC program. Results show that TFWC expenditures reduce cigarette sales (p < 0.05). This paper summarizes the results of the economic analysis and compares them to previous research. Specifically, the paper discusses the choice of response variables used in the regression analysis and the associated limitations.
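
A minimal sketch of the kind of log-log panel regression the abstract describes, in Python with statsmodels. The data, variable names, and fixed-effects covariate set here are illustrative assumptions, not WYSAC's actual model or data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the panel: 23 counties x 10 years = 230 rows of
# annual per capita cigarette (stamp) sales and TFWC expenditures.
rng = np.random.default_rng(1)
rows = []
for i in range(23):
    county_effect = rng.normal(0, 0.2)
    for year in range(1998, 2008):
        tfwc_exp = rng.uniform(1e3, 1e5)  # hypothetical program spending, $
        log_sales = 4.5 + county_effect - 0.03 * np.log(tfwc_exp) + rng.normal(0, 0.1)
        rows.append({"county": f"county_{i}", "year": year,
                     "log_tfwc": np.log(tfwc_exp), "log_sales": log_sales})
df = pd.DataFrame(rows)

# Log-log specification: the coefficient on log_tfwc is the elasticity of per
# capita sales with respect to program spending; county and year fixed effects
# stand in for the other factors the abstract says the model accounts for.
fit = smf.ols("log_sales ~ log_tfwc + C(county) + C(year)", data=df).fit()
print(f"elasticity = {fit.params['log_tfwc']:.3f}, p = {fit.pvalues['log_tfwc']:.4f}")
```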
Cost-Benefit Analysis of the Tobacco Prevention and Control Program in Wyoming
Presenter(s):
Nanette Nelson,  University of Wyoming,  nnelso13@uwyo.edu
Abstract: The health impacts of tobacco use are well known; however, optimal public policy should incorporate a cost-benefit analysis of tobacco prevention and control programs as one important component of a comprehensive evaluation. Combining both cost and benefit information can result in an estimate of the net societal benefits of a specific program under review. This talk will summarize the chronic disease impacts of tobacco use, the monetary costs (e.g., medical, lost productivity, etc.) of tobacco use, the impact of tobacco prevention and control programs on tobacco use, and the cost of tobacco prevention and control programs for the state of Wyoming.

Session Title: Evaluation-Guided Course Design and Delivery
Demonstration Session 831 to be held in Centennial Section G on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Celina Byers,  Bloomsburg University of Pennsylvania,  cbyers@bloomu.edu
Maria Cseh,  George Washington University,  cseh@gwu.edu
Shannan McNair,  Oakland University,  mcnairshannan@yahoo.com
Abstract: Three professors from different departments and universities combine forces to examine, apply, and reflect on a common model for evaluating their online, face-to-face, and blended teaching, using diverse evaluation strategies and evidence of student learning. Within the model, the needs of the institution and students, resources, learning objectives, and contexts are integrated. Strategies demonstrated include self-evaluation, critical comparison of teaching and learning methods and contexts, surveys, rubrics, forum postings, and self-assessment.

Session Title: Comparative Environmental Risk Assessment: A Practical and Applied Method
Demonstration Session 832 to be held in Centennial Section H on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Environmental Program Evaluation TIG
Presenter(s):
Su Wild River,  Australian National University,  su.wild-river@anu.edu.au
Abstract: The Comparative Environmental Risk Assessment Method (CERAM) is a tool for evaluating compliance with environmental protection laws. It can be used by business operators and environmental consultants to identify site-specific pollution prevention priorities and results. Environmental protection agencies can use CERAM to identify best-practice and non-complying operations and to assist their licensing and enforcement actions. CERAM evaluates substantive outcomes, not just administrative results. This demonstration will explain the process for undertaking CERAM assessments, including understanding assessment contexts and applications; identifying hazards using generic processes; identifying environmental receptors; applying inherent and residual risk ratings; calibration and cross-checking; interpreting and presenting results; and setting and achieving environmental risk targets. CERAM's semi-quantitative approach makes it highly cost-effective compared with most other methods for environmental risk assessment. CERAM is best applied to evaluate priorities at the site scale and has been used successfully in industrial and research contexts.
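
As a generic illustration of the inherent-versus-residual rating step in semi-quantitative risk assessment (the scales and example below are invented for illustration, not CERAM's actual rating scheme, which this listing does not specify):

```python
# Generic semi-quantitative risk scoring: risk is rated as likelihood x
# consequence on 1-5 ordinal scales, before (inherent) and after (residual)
# controls are taken into account.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_score(likelihood: str, consequence: str) -> int:
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

# Hypothetical hazard: a solvent spill reaching a nearby creek.
inherent = risk_score("likely", "major")    # no controls in place -> 16
residual = risk_score("unlikely", "major")  # bunding/spill kits lower likelihood -> 8
print(f"inherent={inherent}, residual={residual}, reduction={inherent - residual}")
```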

Session Title: Cell Phone Technology: Exploiting the Possibilities -- Not Lamenting the Past
Multipaper Session 833 to be held in Mineral Hall Section A on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Paul Lorton Jr,  University of San Francisco,  lorton@usfca.edu
Incorporating Cellular Telephones into a Random-digit-dialed Survey to Evaluate a Media Campaign
Presenter(s):
Lance Potter,  Westat,  lancepotter@westat.com
Rebekah Rhoades,  University of Oklahoma,  rebekah-rhoades@ouhsc.edu
Andrea Piesse,  Westat,  andreapiesse@westat.com
Laura Beebe,  University of Oklahoma,  laura-beebe@ouhsc.edu
Jennifer Berktold,  Westat,  jenniferberktold@westat.com
Abstract: The use of telephone surveys to evaluate media campaigns and other interventions faces new challenges due to the growing cell phone-only population, currently 15 percent of all Americans according to the NCHS. The prevalence of certain health behaviors such as smoking is thought to be underestimated by landline telephone surveys and relatively little is known about differences in attitudes toward tobacco by telephone status (cell phone only, landline only, or both). Tobacco Stops With Me is a multi-phase media campaign in Oklahoma highlighting how tobacco use affects individuals and influences relationships, while emphasizing that each Oklahoman has a role to play in reducing the burden of tobacco use. This presentation will describe the evaluation study, which involves a longitudinal component, several media tracking studies, and sampling of respondents through both landline and cell phone numbers. Differences in demographics and tobacco-related attitudes and behaviors by telephone status will be highlighted.
The Growing Cell Phone-Only Population in Telephone Survey Research: Evaluators Beware
Presenter(s):
Joyce Wolfe,  Fort Hays State University,  jwolfe@fhsu.edu
Brett Zollinger,  Fort Hays State University,  bzolling@fhsu.edu
Abstract: Telephone surveying is one of many common data collection methods evaluation researchers use to gather information from stakeholders. At one time, estimates of landline telephone coverage exceeded 90% of U.S. households, making telephone surveying an effective means of contacting members of the general public. However, the cell phone-only, non-landline population is increasing at a fast pace. Many believe the future viability of telephone survey research is in question, as increased cell phone-only usage poses potential threats to data quality and validity. This presentation will provide evaluators with important information that may affect future evaluation design and implementation, including: 1) the increasing prevalence of cell phone-only users in the general population; 2) the differences between landline and cell phone-only users and the potential impact of these differences on data interpretation and report writing; and 3) proposed strategies for addressing the issue and the practical implications of adopting them.

Session Title: Social Network Analysis 101: A Brief Demonstration
Demonstration Session 834 to be held in Mineral Hall Section B on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Stephanie M Reich,  University of California Irvine,  smreich@uci.edu
Abstract: This demonstration will provide an introduction to the field of social network analysis (SNA). SNA is a type of analysis concerned with the relationships between people, places, or things (called actors). Rather than looking at the specific attributes of actors, SNA examines the relationships among actors and the characteristics of the networks in which they are embedded. Unlike parametric analyses that assume independence, SNA is interested in the connections between actors (interdependence). This session will introduce AEA attendees to the theoretical underpinnings of SNA and some of the measures that can be calculated for actors, social relationships, and network structures. The session is geared toward novices interested in SNA.
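
A minimal Python sketch with the networkx library illustrates the kinds of actor-level and network-level measures such an introduction typically covers; the toy network and actor names are invented for illustration:

```python
import networkx as nx

# Toy advice network among five hypothetical actors; an edge means the two
# actors report discussing program decisions with each other.
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Cal"), ("Ben", "Cal"),
    ("Cal", "Dee"), ("Dee", "Eli"),
])

# Network-level measure: density is the share of possible ties present.
print("density:", nx.density(G))

# Actor-level measures: degree centrality counts direct ties; betweenness
# captures how often an actor sits on shortest paths between others
# (Cal and Dee bridge the two parts of this network).
print("degree:", nx.degree_centrality(G))
print("betweenness:", nx.betweenness_centrality(G))
```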

Session Title: Evaluating Policy Efforts Through Systems and Organization Theories
Multipaper Session 835 to be held in Mineral Hall Section C on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Cindy Roper,  Clemson University,  cgroper@clemson.edu
Evaluating Policy and Advocacy Outcomes: Using Systems Change to Evaluate Environmental Policies to Reduce Disparities
Presenter(s):
Mary Kreger,  University of California San Francisco,  mary.kreger@ucsf.edu
Claire Brindis,  University of California San Francisco,  claire.brindis@ucsf.edu
Dana Hughes,  University of California San Francisco,  dana.hughes@ucsf.edu
Simran Sabherwal,  University of California San Francisco,  simran.sabherwal@uscf.edu
Katherine Sargent,  University of California San Francisco,  katherine.sargent@uscf.edu
Christine MacFarlane,  University of California San Francisco,  cgmacfarlane@sbcglobal.net
Annalisa Robles,  The California Endowment,  arobles@calendow.org
Marion Standish,  The California Endowment,  mstandish@calendow.org
Abstract: Communities and foundations engaged in policy and advocacy change can employ systems change outcomes analysis to assess capacity; define individual, group, and institutional changes and strategic partnerships; and track policy, media, funding, and systems outcomes in order to refine their strategies and goals. This prevention policy analysis uses six years of data on policy outcomes in housing, schools, and outdoor air quality to demonstrate the methods employed in local, regional, and statewide advocacy and policy evaluation. Policy and systems change concepts are discussed as they relate to structural changes across multiple sectors of communities. Examples of communities leveraging their resources to create sustainable policies are included to provide maximum accessibility to relevant lessons. These include the types of collaborative strategies that were and were not successful and the media messages used to support policies related to environmental risk factors. These methods can maximize successes for evaluators involved in policy advocacy.
Accountability and No Child Left Behind: Implications for Evaluation and Public Policy
Presenter(s):
Cindy Roper,  Clemson University,  cgroper@clemson.edu
Abstract: This paper utilizes organization theory to examine the role of accountability and evaluation in public policy. It explores various criteria for effective accountability and uses a case study to examine the implementation of performance assessment in a national program. The paper then discusses the implications of how accountability affects program performance and offers suggestions for improvements and future research. The No Child Left Behind Act of 2001 (NCLB) was developed to raise achievement levels of students in American schools. It targets those students who are especially at risk: minorities, low-income students, those with limited English proficiency, and students with disabilities. The guiding theory behind this legislation is that of continuous educational improvement through accountability. As a case study, NCLB provides an opportunity to demonstrate organization theory as a viable and valuable tool for evaluation.

Session Title: Engaging the Client When Mandates Drive Client Participation
Panel Session 836 to be held in Mineral Hall Section D on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Mary Nistler,  Learning Point Associates,  mary.nistler@learningpt.org
Abstract: On occasion, a client is mandated not only to participate in an evaluation but also to implement the evaluator's recommendations. To some extent, this is an evaluator's best-case scenario. However, success requires a cooperative relationship with the client - both to accomplish evaluation tasks and to engender client ownership of findings. A recent state-mandated curriculum audit, for which we are project managers, was conducted in 12 districts that were in corrective action under NCLB. The audit illustrates an approach we used to successfully develop client relationships and encourage client buy-in: we provided opportunities for clients and other stakeholders to learn about audit procedures; meet and communicate with audit staff; exercise options; and contribute to findings. The audit itself consisted of several studies (of general education, special education, and education of English language learners) and extensive data collection: interviews, observations, surveys, and document reviews.
Building Client Relationships Within the Context of a Mandated Audit
Mary Nistler,  Learning Point Associates,  mary.nistler@learningpt.org
Building client relationships within a mandated audit requires communication and inclusion throughout the audit process. In the example we illustrate in this session, client relationships were first developed through preliminary conversations. Following this, we held kick-off meetings in which the details of the audit and expectations of the districts and schools were articulated. The kick-off meetings also provided opportunities for meeting participants to discuss and convey their assessments of district strengths and challenges, using an appreciative inquiry format. Throughout the audit, communication with key district stakeholders continued on a planned schedule, including a schedule for presenting emergent findings, and a schedule for planning the co-interpretation sessions. Client relationships were also managed through consistent messaging about audit guidelines and principles.
Building Ownership of Audit Findings
Cary Goodell,  Learning Point Associates,  cary.goodell@learningpt.org
Co-interpretation is a field-tested process proven to build trust and ensure client ownership of even the most distressing audit findings. Drawing on substantial relevant resources, including locally generated data, outside auditors and district stakeholders worked together to identify critical issues. Both critical and positive key findings emerged, allowing clients to draw on what they did well to eradicate barriers to improvement. This contrasts with a more typical audit, in which the client's viewpoint may be dismissed. We have not yet had a client refuse to participate in the co-interpretation of data, nor have we had a client refuse to accept key findings developed through this process.

Session Title: Putting Methods into Practice: Case Examples of Advocacy Evaluation
Multipaper Session 837 to be held in Mineral Hall Section E on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Beth Rosen,  Humane Society of the United States,  brosen@humanesociety.org
Measuring Impact: Quantifying and Qualifying Changes in Policy and Advocacy
Presenter(s):
Beth Rosen,  Humane Society of the United States,  brosen@humanesociety.org
Abstract: At first glance, measuring policy and advocacy seems straightforward because they have quantitative components. But differentiating between high- and low-impact policy changes, tracking the incremental yet pivotal steps leading toward such changes (which often take several years to complete), and measuring how one’s organization contributes to the enforcement of laws (just getting a law passed is often not enough) are complex tasks. This presentation will look at the path the Humane Society of the United States, a national NGO with a budget exceeding $100 million and more than 400 staff, took to understand how to quantify and qualify its policy and advocacy efforts. The policy measurements focus on laws passed at the state and federal levels, the enforcement of existing laws, and both formal and informal alliances with networks of policy enablers. The advocacy measurement discussion encompasses the engagement of grassroots activists and other key stakeholders.
Challenges in Evaluating Public Health Advocacy Support Systems: The Case of the National Policy and Legal Analysis Network to Prevent Childhood Obesity
Presenter(s):
Todd Rogers,  Public Health Institute,  txrogers@pacbell.net
Marice Ashe,  Public Health Institute,  mashe@phlpnet.org
Manel Kappagoda,  Public Health Institute,  mkappagoda@phlpnet.org
Cheryl Fields Tyler,  Fields Tyler Consulting,  cheryl@fieldstyler.com
Abstract: In 2007, the National Policy and Legal Analysis Network to Prevent Childhood Obesity (NPLAN) was initiated as part of the Robert Wood Johnson Foundation’s significant commitment to reverse the epidemic of childhood obesity in the United States by 2015. NPLAN supports policy innovation and implementation by equipping advocates and decision makers with expert legal technical assistance resources within a collaborative learning environment. The structural design of the network, and its goals and objectives, were informed by a comprehensive needs assessment of hundreds of stakeholders involved in obesity prevention research, advocacy, and action. This presentation will detail the evaluation plan being implemented to assess the processes and impact of NPLAN, and will review the opportunities and challenges inherent in evaluating complex, multi-level public health advocacy support systems. Special attention will be paid to the strategic and methodological decisions that must be made to enhance the rigor and relevance of the evaluation.

Session Title: Report Automation Using Visual Basic
Demonstration Session 838 to be held in Mineral Hall Section F on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Yisong Geng,  Macro International Inc,  yisong.geng@macrointernational.com
Megan Brooks,  Macro International Inc,  megan.a.m.brooks@macrointernational.com
Abstract: This session demonstrates how to use Microsoft Visual Basic and Microsoft databases (Access or SQL Server) to automate data reporting processes. Specifically, we will demonstrate how to generate a complex report of data errors and/or inconsistencies in Excel format and how to generate a comprehensive data report including graphics and tables in PowerPoint format using Microsoft Visual Basic 2005 and Microsoft SQL Server 2005. As an additional illustration of the power of automation, we will also demonstrate how to generate unique tracking numbers for surveys in Word format, using Microsoft Access as the database.
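
The session itself works in Visual Basic against SQL Server or Access; as a rough stand-in, a minimal Python analog of the query-then-report pattern the presenters describe (using the standard sqlite3 module and the openpyxl library) might look like this:

```python
import sqlite3
from openpyxl import Workbook

# Build a tiny in-memory stand-in for the survey database; table, columns,
# and rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (respondent_id TEXT, age INTEGER)")
conn.executemany("INSERT INTO responses VALUES (?, ?)",
                 [("R001", 34), ("R002", -3), ("R003", 140), ("R004", 57)])

# Pull records that fail a simple validation rule.
errors = conn.execute(
    "SELECT respondent_id, age FROM responses WHERE age < 0 OR age > 120"
).fetchall()
conn.close()

# Write the error report in Excel format, mirroring the automated
# data-error report described in the abstract.
wb = Workbook()
ws = wb.active
ws.title = "Data Errors"
ws.append(["Respondent ID", "Age", "Problem"])
for respondent_id, age in errors:
    ws.append([respondent_id, age, "Age outside plausible range 0-120"])
wb.save("data_error_report.xlsx")
```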

Session Title: International University Evaluations
Multipaper Session 840 to be held in the Agate Room Section B on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Martha Ann Carey,  Maverick Solutions LLC,  marthaann123@sbcglobal.net
Development of an Instructional Quality Assurance Model in Nursing Science: An Application of the Multi-Site Evaluation Approach
Presenter(s):
Haruthai Ajpru,  Chulalongkorn University,  ajpru19@gmail.com
Aimorn Jungsiripornpakorn,  Chulalongkorn University,  aimornj@hotmail.com
Suwimon Wongwanit,  Chulalongkorn University,  wsuwimon@chula.ac.th
Abstract: The purpose of this study was to develop an instructional quality assurance model in nursing science. The study was undertaken in two phases: documentary research and a multi-site evaluation. The samples consisted of 59 institutional accreditation reports and of nursing teachers and students from five schools of nursing in Thailand. The instruments were a coding record, a checklist, and a semi-structured interview. Meta-analysis was conducted using HLM 6.02 (a hierarchical linear modeling program), and cross-site analysis was conducted through content analysis. The proposed model was developed from evidence in related documents and findings from the multi-site evaluation. The results indicate that this approach yields the information needed to develop the instructional quality assurance model.
Developing Future Research Leaders: Evaluation of the Group of Eight (Go8) Universities Program
Presenter(s):
Zita Unger,  Evaluation Solutions Pty Ltd,  zunger@evaluationsolutions.com
Abstract: The Group of Eight (Go8) universities is a coalition of Australia’s leading research-intensive universities. In this highly competitive environment, the Go8 universities have collaborated to develop and implement the Future Research Leaders Program, which provides best-practice professional development in financial and resource management to current and emerging researchers identified as future research leaders. Evaluation of overall impact on researcher capabilities and institutional performance utilized a mixed-method approach involving (i) evaluation of nine training modules piloted at each contributing university, trialed at three universities, and implemented across all eight universities; (ii) establishment of key performance indicator measures collected pre- and post-training delivery for 1,000 researchers; and (iii) eight institutional case studies of researcher productivity and institutional performance. A project of this size, complexity, sensitivity, and importance poses many evaluation challenges. The paper discusses these challenges and how the evaluation was shaped to meet them.

Session Title: Evaluating Nationally and Locally: What Can Evaluators Do to Create Evaluation Policies in National Nonprofit Networks Composed of Autonomous Local Organizations?
Think Tank Session 841 to be held in the Agate Room Section C on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Stephen Axelrad,  Hillel Foundation,  saxelrad@hillel.org
Abstract: Many non-profit organizations are structured as decentralized networks: a national agency serves as a leader and/or service provider to several autonomous local agencies. Sometimes the local agencies are set up as local offices or divisions with official reporting lines to the national agency. Other times they are independent franchises that receive funding and support services from the national agency but do not officially report to it. The national agency typically focuses on meeting the larger, more abstract goals of the entire network, while the local agency typically focuses on serving specific populations in a region, municipality, or other local community. What frequently results is a national-versus-local tension that affects what professionals assume are 'best' practices. The purpose of this think tank is to discuss how to form evaluation policies that answer the different 'what works' questions that national and local professionals have.

Session Title: Designing Responsive Coalition Evaluations Without a Stakeholder Road Map
Demonstration Session 842 to be held in the Granite Room Section A on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Health Evaluation TIG
Presenter(s):
Kristin A Hobson,  Indiana University,  khobson@indiana.edu
Mindy Hightower King,  Indiana University,  minking@indiana.edu
Abstract: When funding agencies require grantees to evaluate their programs yet neither the funders nor the grantees provide direction for the evaluation, having the knowledge and skills to design meaningful and useful evaluation plans is critical for evaluators. This workshop will draw on existing literature and practical strategies to equip participants with the knowledge to ensure evaluation designs are responsive to stakeholders' needs. Evaluation plans and development processes from the Indiana Cancer Consortium and the Indiana Joint Asthma Coalition will be discussed to describe specific strategies used to define stakeholder needs, key questions, and data collection plans. Strengths of the practices described include the integration of practical strategies with the existing research base and the successful application of these methods across multiple coalition evaluations. One limitation is that the methods have not yet been tested on program structures outside of public health coalitions.

Session Title: Evaluating Large Transdisciplinary Initiatives: The Use of Bibliometric Techniques to Assess the Quality and Quantity of Research Productivity
Multipaper Session 843 to be held in the Granite Room Section B on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Richard Moser,  National Institutes of Health,  moserr@mail.nih.gov
Abstract: Over the past few decades there has been growing interest in the development and evaluation of large initiatives intended to promote transdisciplinary research collaborations. As investments in these team science initiatives have grown and debates about their scientific and societal value have ensued, calls for evaluation of the effectiveness of these programs have become more pronounced. A variety of methods and measures for evaluating these large initiatives have been explored. This session will show how bibliometric studies are being used to evaluate one such large research initiative, the Transdisciplinary Tobacco Use Research Centers (TTURCs). Two bibliometric methods being used to assess the scientific outcomes of the TTURC initiative will be presented: one uses an interrupted time series design with a comparison group; the other focuses on science mapping techniques to assess the evolution of TTURC publications within the tobacco research field as a whole.
A Bibliometric Study of the Productivity of the Transdisciplinary Tobacco Use Research Centers (TTURCs) with a Comparison Group
Annie Xuemei Feng,  National Institutes of Health,  fengx3@mail.nih.gov
David Berrigan,  National Institutes of Health,  berrigad@mail.nih.gov
James Corrigan,  National Institutes of Health,  corrigan@mail.nih.gov
Stephen Marcus,  National Institutes of Health,  marcusst@mail.nih.gov
Glen Morgan,  National Institutes of Health,  gmorgan@mail.nih.gov
Richard Moser,  National Institutes of Health,  moserr@mail.nih.gov
Mark Parascandola,  National Institutes of Health,  paramark@mail.nih.gov
Lawrence S Solomon,  National Institutes of Health,  solomonl@mail.nih.gov
Daniel Stokols,  University of California Irvine,  dstokols@uci.edu
Brandie Taylor,  National Institutes of Health,  taylorbr@mail.nih.gov
The Transdisciplinary Tobacco Use Research Centers (TTURCs), initiated in 1999, are among the large research initiatives funded by the National Cancer Institute. As one part of the evaluation of this initiative, this bibliometric study examines the pattern of productivity of TTURC researchers before and after they were funded. Utilizing an interrupted control-series design (Campbell, 1969), the study will examine how TTURC investigators' productivity was similar to or different from that of tobacco research investigators funded under the traditional R01 grant mechanism during the same period of time. Bibliometric indexes such as publication counts, citations, number of expected citations, journal impact factors, statistics on cited and citing journals, and a journal disciplinary index reflecting the multidisciplinarity of cited or citing journals will be utilized and presented.
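
A minimal sketch of an interrupted series with a comparison group, in Python with statsmodels; the investigator-year panel, group sizes, and effect sizes below are synthetic stand-ins for the bibliometric data described:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel: annual publication counts for TTURC-funded and comparison
# (R01) investigators, with TTURC funding beginning in 1999. All values are
# invented for illustration.
rng = np.random.default_rng(2)
rows = []
for group, jump in [("TTURC", 1.5), ("R01", 0.0)]:
    for _ in range(50):  # 50 hypothetical investigators per group
        base = rng.normal(3, 1)
        for year in range(1994, 2005):
            post = int(year >= 1999)
            pubs = max(0.0, base + 0.1 * (year - 1994) + jump * post + rng.normal(0, 1))
            rows.append({"group": group, "year": year, "pubs": pubs})
df = pd.DataFrame(rows)
df["tturc"] = (df["group"] == "TTURC").astype(int)
df["post"] = (df["year"] >= 1999).astype(int)
df["time"] = df["year"] - 1999

# Controlled interrupted series: tturc:post captures the post-funding level
# shift for TTURC investigators over and above the comparison group's change;
# tturc:post:time captures any additional change in slope.
fit = smf.ols("pubs ~ tturc * post * time", data=df).fit()
print(fit.params[["tturc:post", "tturc:post:time"]])
```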
Mapping Transdisciplinary Tobacco Use Research Centers (TTURC) Publications onto the Landscape of the Tobacco Research Field
Katy Borner,  Indiana University,  katy@indiana.edu
David Berrigan,  National Institutes of Health,  berrigad@mail.nih.gov
James Corrigan,  National Institutes of Health,  corrigan@mail.nih.gov
Annie Xuemei Feng,  National Institutes of Health,  fengx3@mail.nih.gov
Stephen Marcus,  National Institutes of Health,  marcusst@mail.nih.gov
Glen Morgan,  National Institutes of Health,  gmorgan@mail.nih.gov
Richard Moser,  National Institutes of Health,  moserr@mail.nih.gov
Mark Parascandola,  National Institutes of Health,  paramark@mail.nih.gov
Lawrence S Solomon,  National Institutes of Health,  solomonl@mail.nih.gov
Daniel Stokols,  University of California Irvine,  dstokols@uci.edu
Brandie Taylor,  National Institutes of Health,  taylorbr@mail.nih.gov
To examine the unique contribution of TTURC research to the overall landscape of the tobacco research field, science mapping techniques will be used to produce a global map of the field displaying the comprehensive structure and evolution of tobacco research. TTURC publications and the non-TTURC publications of comparable R01 tobacco researchers will be mapped onto the overall publication matrix for the tobacco research field as a whole over the course of the TTURC initiative. TTURC research productivity will thus be captured and compared, through its convergent and/or divergent development against that of comparable R01 researchers, in the broader context of the tobacco research field over the decade. Networks such as co-author and author-project networks, and bibliometric indexes such as journal citations and journal impact factors, will be mapped and interpreted.

Session Title: Measurement and Data Quality in South Africa
Multipaper Session 844 to be held in the Granite Room Section C on Saturday, Nov 8, 9:50 AM to 10:35 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Gwen M Willems,  University of Minnesota Extension,  wille002@umn.edu
Building Data Management Capacity Through Data Quality Assessments: Appreciative Inquiry Meets Auditing
Presenter(s):
Anzel Schonfeldt,  Khulisa Management Services (Pty) Ltd,  aschonfeldt@khulisa.com
Mary Pat Selvaggio,  Khulisa Management Services (Pty) Ltd,  mpselvaggio@khulisa.com
Abstract: To build the capacity of PEPFAR/South Africa partners in M&E and data management, Khulisa Management Services conducts data quality assessments/audits (DQAs) of partners' data management systems (DMS) and the data contained within them. Khulisa's three-phased DQA approach is rooted in Total Quality Management principles and ISO 9001 internal quality auditing methodologies, which examine internal organizational systems, processes, or procedures against pre-determined quality standards. Phase 1 is a self-assessment and macro evaluation of the partner's DMS. Phase 2 is an in-depth, trace-and-verify audit that validates data from source through to reporting. Phase 3 is a follow-up assessment for partners found to have major vulnerabilities in their DMS. Recently Khulisa redesigned its Phase 2 trace-and-verify process and tool to incorporate Appreciative Inquiry (AI) concepts into the assessment, allowing for a more capacity-building process and richer assessment results that can be used to inform technical assistance responses.
The Value of Measurements and Where They Apply: Monitoring Requirements for Evaluation of Population Policy
Presenter(s):
Liezl Coetzee,  Southern Hemisphere Consultants,  liezl@southernhemisphere.co.za
Abstract: One of the guiding principles in the South African Government’s “Policy Framework for the Government Wide Monitoring and Evaluation System” (2007) is that “Monitoring and the development and enforcement of statistical standards are important pre-conditions for evaluation." The policy thus places primary emphasis on monitoring until institutional capacity has been built, after which more emphasis will be placed on evaluation. This paper will explore the linkages between monitoring and evaluation by looking at the development of a monitoring and evaluation plan for South Africa’s Population Policy Unit (PPU), which is tasked with monitoring and evaluating the country’s Population Policy (1998). The plan will be used as an example to illustrate areas requiring monitoring and those requiring evaluation. It will show how monitoring can be integrated with different stages of evaluation, for example by linking collection of baseline data with a planning evaluation and by using monitoring data in formative and summative evaluations.
