

Session Title: Making Evaluation and Research Universally Accessible
Skill-Building Workshop 802 to be held in Centennial Section A on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Special Needs Populations TIG
Presenter(s):
Yvonne Kellar-Guenther,  University of Colorado Denver,  yvonne.kellar-guenther@uchsc.edu
Nancy Koester,  University of Colorado Denver,  nancy.koester@uchsc.edu
William Betts,  University of Colorado Denver,  william.betts@uchsc.edu
Abstract: When people hear of universal access/universal usability (UA/UU), many think of access for persons with disabilities. Product designers, however, now realize that UA/UU leads to a process of keeping all situations and all people in mind (Vanderheiden, 2000). As evaluation aims to become more inclusive of groups that can benefit from UA/UU (e.g., the elderly, the poor, children too young to read, substance abusers), it is becoming vital that evaluators think about universal access to the whole evaluation process. The goal of this session is to discuss ways to increase accessibility for all groups who participate in evaluations. These ideas are a mix of what has been gleaned from the existing literature on UA/UU in product design and our own experience conducting evaluations that include persons of all ages with all types of disabilities. During this session we will also work through parts of a study to make it accessible.

Session Title: Creating Excellent Data Graphs for Everyday Evaluation Products
Demonstration Session 803 to be held in Centennial Section B on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Frederic Malter,  University of Arizona,  fmalter@email.arizona.edu
Abstract: Visualizing raw data and empirical findings is a crucial part of most evaluation work. The demonstration will first introduce theoretical principles for graphical excellence by drawing on the groundbreaking work of Edward Tufte, a leading authority in data visualization. One important goal of the demonstration is to convey to participants that visualizing data should be part of every data analytic endeavor in evaluation practice. Exemplary data graphs will be shown to discuss the degree to which they succeeded or failed in realizing graphical excellence. Two software tools, MS Excel and Tableau, will be employed in live demonstration exercises that aim to teach participants how graphical excellence can be achieved in their everyday work with simple means. A number of empirical findings from evaluation practice (percentages, mean differences with confidence intervals, time series data, geographically rendered measures) serve as a starting point for a live creation of graphs with widely used tools.

Session Title: Envisioning Culturally Responsive Evaluation Policy: Perspectives From the United States and New Zealand
Panel Session 804 to be held in Centennial Section C on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Presidential Strand
Chair(s):
Stafford Hood,  University of Illinois at Urbana-Champaign,  slhood@illinois.edu
Discussant(s):
Finbar Sloane,  Arizona State University,  finbar.sloane@asu.edu
Bernice Anderson,  National Science Foundation,  banderso@nsf.gov
Abstract: Imagine that the newly elected US President has asked you for guidance on governmental evaluation policy for culturally responsive evaluation. The President is interested in practical guidelines that can inform and direct governmental evaluations that are culturally responsive, respectful and reparative. The President welcomes theoretical justification of the evaluation policy guidelines generated, but insists that the emphasis be on evaluation practice not theory. The President has elicited three distinct, albeit complementary, perspectives to inform this challenge. First, the perspectives of historically oppressed cultures in the US are critically important to any governmental evaluation policy on culturally responsive evaluation. Second, the perspectives of the Maori as indigenous people of New Zealand are to be highly valued, as they have proactively engaged in policies and practices that are intended to be genuinely bicultural. Third, the perspectives of the philanthropic community in the US can well complement the public perspective of the government.
Showing Up
Stafford Hood,  University of Illinois at Urbana-Champaign,  slhood@illinois.edu
Jennifer Greene,  University of Illinois Urbana-Champaign,  jcgreene@uiuc.edu
Julie Nielsen,  University of Minnesota,  niels048@umn.edu
This presentation of culturally responsive evaluation policy for governmental programs in the US will address three main clusters of guidelines. First, we will offer specific guidance on determining the primary components of culturally responsive evaluation practice: evaluation purpose and audience, key questions to be addressed, design and methods, reporting, and utilization. Second, we will present guidelines on the relational and communicative dimensions of our craft, on the evaluator's presence and role in a particular context, on how the evaluator 'shows up' in that context. This set of guidelines will foreground the dynamic engagement of culture, race, ethnicity, class, gender and so forth in the micro interstices of evaluation practice. Third, we will present guidelines on judging the quality of a program intended to benefit those historically underserved in our society. These guidelines will focus on how and how well the evaluand 'shows up' in the lives of the people being served.
'For' and 'With' Maori: Culturally Responsive Evaluation
Nan Wehipeihana,  Research Evaluation Consultancy Limited,  nanw@clear.net.nz
Fiona Cram,  Katoa Ltd,  fionac@katoa.net.nz
Maori-non-Maori disparities in Aotearoa New Zealand need to be understood within the context of colonization and its attacks on our identity. In the 1980s the call went out from Maori for an end to deficit-based thinking about Maori, followed by statements about 'by Maori, for Maori' research and evaluation and the building of Maori evaluation capacity. Guidelines were produced by funding bodies and government organizations about how non-Maori can work with Maori in culturally responsive ways. Moreover, contracting procedures look for evidence of meaningful inclusion of Maori at all stages of an evaluation. At first this was about consultation, then engagement, and now the push is on for a relationship ethic. Building and maintaining relationships with Maori communities is about evaluators: knowing themselves and acknowledging their status as visitors, having connections that keep them and communities safe, and being respectful of Maori protocols and customs.
Keeping it Real: Building an Agenda for Culturally Responsive Programming and Evaluation in the World of Philanthropy
Rodney Hopson,  Duquesne University,  hopson@duq.edu
Justin Laing,  Heinz Endowments,  jlaing@heinz.org
Modern-day foundations can play a critical role as engines for social change when a sincere desire for community improvement is coupled with strategic investments and tools that meaningfully measure the effects of those investments (Braverman, Constantine, & Slater, 2004). With increasing calls for accountability and transparency, how philanthropic investments impact those most vulnerable is coming under increased scrutiny, especially how philanthropic giving impacts ALANA (African, Latino/a, Asian, Native American) communities. So, what kind of evaluation of philanthropic giving can be genuinely culturally responsive to these underserved communities? What kind of evaluation policy should foundations adopt to guide culturally responsive evaluations of their strategic investments? This paper provides snapshots of a foundation's early and emerging attempt to build culturally responsive programming and collaborate with experts in the field of evaluation so as to measure the impact of early investments in this area.

Session Title: Evaluation Policy and Practice in Government Settings
Multipaper Session 805 to be held in Centennial Section D on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Rakesh Mohan,  Idaho Legislature,  rmohan@ope.idaho.gov
Discussant(s):
Rakesh Mohan,  Idaho Legislature,  rmohan@ope.idaho.gov
The Homogenization of Evaluation Policy
Presenter(s):
Katherine Ryan,  University of Illinois Urbana-Champaign,  k-ryan6@uiuc.edu
Abstract: The 2008 AEA Call for Papers acknowledges evaluation policy as an important influence on evaluation methods and practice. In this paper, I propose examining evaluation policy within the current milieu (physical or social setting in which something occurs). That is, to what extent are New Public Management (NPM) and other social changes leading to an implicit if not explicit homogenization of evaluation policy across a variety of domains such as health-care, education, environment, and international development? To address this question, I critically examine how the definitions of NPM concepts such as accountability and performance measurement have become entangled with the definition of evaluation. [NPM is a regulatory style that makes individuals and organizations accountable through auditable performance standards (Powers, 1997).] As part of the examination, I present case vignettes illustrating how evaluation in education (within domain) is being influenced and entangled at the federal and local levels in the United States, Europe, and Pacific Rim. These cases are then examined within a descriptive framework including evaluation focus (e.g., learning, accountability), key players (e.g., local), norms, key concepts, roles and responsibilities, and types of evaluations conducted. I close with a brief discussion about opportunities and challenges to influencing evaluation policy.
The American Evaluation Association Guiding Principles for Evaluation and the Government/Contractor Interface
Presenter(s):
Connie K Della-Piana,  National Science Foundation,  dellapiana@aol.com
Gabriel Della-Piana,  Independent Consultant,  dellapiana@aol.com
Abstract: Analysis of the AEA Guiding Principles for Evaluation [hereafter, Principles] on “selecting key evaluation questions” reveals heavy demands for relational skills, technical reasoning, and normative (practical) reasoning. Implications are drawn for: the interface between government-as-client (funding/purchasing evaluations) and evaluator-as-contractor (responding to federal requests for evaluation proposals); the Principles; and research on evaluation. Analysis of the principles directly relevant to generating and selecting evaluation questions reveals a formidable assignment involving a complex, dynamic, and time-intensive process. The account of the government contracting process is updated beyond earlier critiques, with emphasis on constraints and facilitators. The paper addresses the combined constraints and identifies ways to meet the joint demands of accountability as government prescribes it and as professional practice prescribes it, while anticipating changing government players and standards.
Using Integrative Evaluation Practices for Program Improvement
Presenter(s):
Celeste Sturdevant Reed,  Michigan State University,  csreed@msu.edu
Beth Prince,  Michigan State University,  princeem@msu.edu
Megan Platte,  Michigan State University,  plattmeg@msu.edu
Laurie A Van Egeren,  Michigan State University,  vanegere@msu.edu
Abstract: This presentation illustrates the ways in which statewide evaluation practices and tools are being incorporated by a state agency for overall program improvement. Using a federally-funded, state-administered out-of-school time program as the example, we present the overall evaluation system that has been designed and discuss the network of inter-related policies and actions promoted by the state and by the evaluators. These mutually beneficial transformations include changes in such aspects as program policy, the requirements for gaining state funds, evaluation requirements, implementation factors (such as staffing and hours of operation), and staff attitudes. Collaboration among all partners -- the state agency, the evaluators, and the grantee service providers -- has substantially reduced duplication in the collection of program and evaluation data. And, because we are proponents of continuous program improvement, we will discuss the challenges that remain for this partnership and its method of operating.
Policy Versus Practice: Does Anyone Win?
Presenter(s):
Candace Lacey,  Nova Southeastern University,  lacey@nova.edu
John Enger,  Nova Southeastern University,  jenger@nova.edu
Abstract: This session focuses on the issue of policy versus practice as it relates to the development of a comprehensive evaluation plan for a four-year, eight-million-dollar grant. Tough decisions related to policy versus practice raised serious questions of evaluation ethics and standards.

Session Title: When Community Passions and Personal Callings Meet Empiricism: Exploring the Interpersonal Side of Program Evaluation Policy Shifts
Demonstration Session 806 to be held in Centennial Section E on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Independent Consulting TIG
Presenter(s):
Michael Lyde,  Lyde and Associates,  drlyde@charter.net
Abstract: A community-based agency has a rich history of effecting positive change in the lives of its clients. One critical element missing from this history is a catalog of formal evaluation reports that provide a counterpoint to the many testimonials and other qualitative evidence of the agency's effectiveness. A new program evaluation team is contracted and takes numerous steps to remedy the evaluation limitations of this agency and they live happily ever after, right? Perhaps, but some of the underlying work (i.e., relationship building, empowering agency staff, etc.) is the focus of this demonstration session. Inherent in any paradigm shift is the clash of philosophies and resistance to change. This demonstration will provide a forum for the presentation, exchange, and refinement of strategies that professional evaluators can utilize to overcome these challenges.

Session Title: Techniques and Strategies to Increase Participation in Mental Health and Substance Abuse
Multipaper Session 807 to be held in Centennial Section F on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Garrett Moran,  Westat,  garrettmoran@westat.com
The Application of Population Estimation (PE) to Mental Health Research: An Evaluation of Methods
Presenter(s):
Tanmaykumar Patel,  University of South Florida,  drtany@gmail.com
Marion Becker,  University of South Florida,  becker@fmhi.usf.edu
Ezra Ochshorn,  University of South Florida,  ochshorn@fmhi.usf.edu
Abstract: Although administrative data sets are available for public use, personal identifying information is often restricted because of laws regarding the privacy and confidentiality of protected health information. This study examined the comparative usefulness and accuracy of two leading methods of population estimation (PE) used to produce valid measures of population overlap and treatment outcomes for mental health consumers when personal identifiers (IDs) such as social security numbers (SSNs) are unavailable. The authors merged anonymous extracts of Medicaid and involuntary civil commitment datasets to produce estimates of population overlap and treatment outcomes for Medicaid-enrolled individuals diagnosed with major depressive disorder who also experienced an involuntary psychiatric examination. Results obtained using the PE method developed by Dr. Steven Banks and the method proposed by Dr. Eugene Laska are compared with those obtained using SSNs. Implications for program planning and evaluation are discussed.
Successes and Challenges in Using a Web-Based Survey for Community Data Collection
Presenter(s):
Shelly Kowalczyk,  MayaTech Corporation,  skowalczyk@mayatech.com
Kristianna Pettibone,  MayaTech Corporation,  kpettibone@mayatech.com
Abstract: This session will show the successes and challenges associated with using a Web-based survey to collect data from communities participating in CSAP’s Strategic Prevention Framework State Incentive Grant Initiative. As part of the cross-site evaluation, we developed a Web-based survey to evaluate communities’ progress through the framework, including: needs assessment, capacity building, strategic planning, intervention implementation and evaluation. The survey includes closed-ended and open-ended responses for the collection of quantitative and qualitative data. Challenges encountered and resultant successes include maximizing user-friendliness by developing capabilities such as automated skip patterns; reducing respondent burden while effectively tracking process measures and providing real-time data; and providing resource intensive training and TA, allowing users to efficiently complete the survey. Using the Web-based survey resulted in high completion rates. Data from 326 communities implementing 599 interventions were collected. The completion rate at the due date was 83%, increasing to 98% after two weeks.
Mental Health Treatment Study: Preliminary Evidence on the Demand for Employment Assistance and Supports by SSDI Beneficiaries with Mental Disorders
Presenter(s):
David Salkever,  University of Maryland Baltimore County,  salkever@umbc.edu
Mustafa Karakus,  Westat,  mustafakarakus@westat.com
William Frey,  Westat,  williamfrey@westat.com
Dave Marcotte,  University of Maryland Baltimore County,  marcotte@umbc.edu
Abstract: This paper reports on our preliminary analysis of the factors that influence beneficiaries’ decisions to enroll in the MHTS. Binary logistic regressions of the enrollment outcome include demographic and diagnostic information on each beneficiary, the level of their monthly SSDI benefit, the length of time they have been an SSDI beneficiary, SSI recipient status, distance to the intervention service site, the county unemployment rate, and the percent African American and per capita income level in the 5-digit ZIP code. Results indicate that most of these predictors are highly significant. Findings for age and time-on-benefits are of particular interest in testing the common “culture of disability” thesis that long-time recipients have the least interest in working. Unemployment rate results show a positive impact of labor-market weakness on enrollment. Viewing the distance variable as a proxy for the time plus travel “price” of participation, we find a significant but relatively small negative impact of price on enrollment.
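To make the modeling approach concrete, here is a minimal sketch (not the authors' code) of a binary logistic regression of enrollment on the kinds of predictors listed above; the data file and all column names are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis file: one row per SSDI beneficiary invited to enroll
    df = pd.read_csv("mhts_beneficiaries.csv")

    model = smf.logit(
        "enrolled ~ age + female + C(diagnosis) + monthly_benefit + years_on_ssdi"
        " + ssi_recipient + miles_to_site + county_unemp_rate"
        " + zip_pct_african_american + zip_per_capita_income",
        data=df,
    ).fit()

    # Coefficient signs and significance; for example, a negative miles_to_site
    # coefficient would correspond to the distance "price" effect described above.
    print(model.summary())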
Strategies to Increase Parental Consent in Active Consent Situations: Lessons Learned from Two Studies
Presenter(s):
Kristen Ogilvie,  Pacific Institute for Research and Evaluation,  kogilvie@pire.org
Matt Courser,  Pacific Institute for Research & Evaluation,  mcourser@pire.org
Jennifer Norland,  Pacific Institute for Research and Evaluation,  jnorland@pire.org
Melodie Fair,  University of Alaska Anchorage,  anmdf@uaa.alaska.edu
Abstract: This paper explores the strategies used in two NIDA-funded studies in Alaska to obtain parental consent for a drug use and attitude survey administered to middle-school-aged children. The Alaska legislature passed an active consent law in 1999 that requires schools to obtain positive written consent prior to the administration of any survey that requests personal information from students in public schools. Under these active consent conditions, the parental consent rates in the two studies, which involved nearly identical surveys in similar communities, were vastly different. The first study had an overall baseline survey consent form return rate of 69% while the second study increased this rate to 92%. The second study’s strategy to obtain parental consent was informed by the first study’s shortcomings. This paper examines what lessons learned in the first study helped increase consent form return rates in the second study.

Session Title: Course-Evaluation Designs II: Faculty Perspectives on Practices and Continuing Development
Multipaper Session 808 to be held in Centennial Section G on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Rick Axelson,  University of Iowa,  rick-axelson@uiowa.edu
Discussant(s):
Jennifer Reeves,  Nova Southeastern University,  jennreev@nova.edu
The Evaluation of an Adjunct Faculty Development Program at a Midwestern Private Non-Profit University
Presenter(s):
Jeannie Trudel,  Indiana Wesleyan University,  jeannie.trudel@indwes.edu
Ray Haynes,  Indiana University,  rkhaynes@indiana.edu
Abstract: This presentation discusses a completed evaluation of the Adjunct Faculty Development Program for Business and Management programs in the College of Adult and Professional Studies at a Midwestern private non-profit university. According to Fitzpatrick, Sanders and Worthen (2004), evaluations are conducted to judge the worth, merit and value of programs and products. The evaluation utilized Stufflebeam’s (2000) context, input, process, and product (CIPP) evaluation model to assess the adjunct faculty development program. The CIPP evaluation is based upon the basic principles of an open system (input, process, and output) and is capable of guiding decision making and addressing accountability (Fitzpatrick et al., 2004). The evaluation’s methodology and findings are presented and reconciled using the CIPP evaluation model checklist.
Evaluating a Doctoral Research Community in Online Education: Faculty and Independent Learner Interaction and Satisfaction
Presenter(s):
James Lenio,  Walden University,  jim.lenio@waldenu.edu
Sally Francis,  Walden University,  sally.francis@waldenu.edu
Nicole Holland,  Walden University,  nicole.holland@waldenu.edu
Iris Yob,  Walden University,  iris.yob@waldenu.edu
David Baur,  Walden University,  david.baur@waldenu.edu
Abstract: As the pressure to demonstrate student learning and success in higher education increases, the difficulty in evaluating doctoral education remains. The Research Forum at Walden and its accompanying assessment instruments represent an effort to provide systemic evaluation of this student population. The Research Forum, designed to support individual online doctoral student research, facilitates student communication with faculty mentors, promotes dialogue with other students, and provides access to materials specific to student research interests. The forum also allows faculty to be active mentors while enabling them to easily track mentee progress and performance. This paper examines how faculty utilized the Research Forum and communicated with their mentees, how helpful faculty and students perceived the Research Forum to be, and how student satisfaction has changed over time. Results of two online surveys, the Research Forum course evaluation for students and a Research Forum satisfaction/usage survey for faculty, will be presented.
Building Evaluation Capacity in Faculty through a Systematic Plan for Teaching Improvement
Presenter(s):
Meghan Kennedy,  Neumont University,  meghan.kennedy@neumont.edu
Abstract: Course evaluations are designed to provide meaningful information so instructors can improve their teaching and curriculum, but typically, this evaluation feedback is isolated and lacks connection to past or future courses. These evaluations are rarely a part of a systematic evaluation plan where faculty assess, improve, and follow up on identified areas. Faculty are simply passive receivers of feedback instead of active evaluators of their own course and teaching. Faculty must be trained to effectively evaluate their own teaching and curriculum. How can faculty take the information they receive and evaluate its soundness? What do they do with the data? How can they dig deeper and ask more questions? When do they make the changes and how do they communicate them? Training faculty to be evaluators in their own courses can change course evaluations from a punitive to an empowering experience.
Beliefs of Teachers About the Use and Efficacy of End of Semester Student Evaluation Surveys in Japanese Tertiary Education
Presenter(s):
Peter Burden,  Okayama Shoka University,  burden-p@po.osu.ac.jp
Abstract: For over five years, student evaluation of teaching through end-of-semester surveys (SETs) has been mandatory, hatched by bureaucracy and delivered to schools as an imperative, often without clarification of aims or purposes. Little has been written questioning the introduction of such evaluation in Japan, and even less research has sought to understand teachers' perspectives. A qualitative, case-study approach uses in-depth interviews to examine the perspectives of 22 English language teachers in Japanese tertiary education about the purpose and use of this form of evaluation. Findings suggest that teachers see the ratings as useful for neither formative nor summative purposes and are not informed of their purpose, which leads to haphazard administration that undermines consequential validity and teachers' ability to improve; teachers also cite threats to job security and a lack of voice in decision making.

Session Title: Engaging Stakeholders in the Scientific Enterprise: Using Concept Mapping for Research Priority Setting and Participatory Evaluation
Multipaper Session 809 to be held in Centennial Section H on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Scott Rosas,  Concept Systems Inc,  srosas@conceptsystems.com
Discussant(s):
Scott Rosas,  Concept Systems Inc,  srosas@conceptsystems.com
Abstract: Recent transformation of the scientific research enterprise has led to a corresponding need for participatory methods that involve stakeholders in shaping directions for future research. This session examines multi-level stakeholder involvement and contributions across three projects focused on the planning and evaluation of scientific priorities. The first project engaged researchers and agency staff to co-construct a framework of success factors for evaluating an emerging infectious disease research program. The second project engaged internal and external stakeholders, including funding agency staff, researchers, and collaborators of a multi-site research network, to identify cancer research priorities. The third project engaged residents, organizational associates, managers and executives, family members, and experts to develop a community-articulated research agenda for future geriatric and aging research. This panel will summarize the common methodology, highlight its use across the three projects, and conclude with a discussion of the involvement of stakeholders at multiple levels.
Defining Success for the National Institute of Allergy and Infectious Diseases' Regional Centers of Excellence in Biodefense and Emerging Infectious Diseases Research Program: A Co-Authored Evaluation Framework and Plan
Mary Kane,  Concept Systems Inc,  mkane@conceptsystems.com
Kathleen M Quinlan,  Concept Systems Inc,  kquinlan@conceptsystems.com
The National Institute of Allergy and Infectious Diseases (NIAID) Regional Centers of Excellence in Biodefense and Emerging Infectious Diseases Research Program was first funded in 2003, as part of a large funding allocation for biodefense. Given the newness, the broad mandate and the innovative approaches of the program, an evaluation of its first five years was planned. The concept mapping methodology provided a rigorous, structured approach for scientists to articulate the conceptual model underlying their endeavor, a major challenge in this type of evaluation. Center researchers and agency staff co-constructed a framework of success factors that served as the foundation upon which a task force of leaders within NIAID and the RCEs collaboratively identified evaluation questions and measures. This thorough, participatory planning process set the stage for participant commitment to, involvement in and acceptance of the interim evaluation.
Identifying Research Priorities for the National Cancer Institute's Cancer Research Network: Developing a Collaboratively Authored Conceptual Framework
Kathleen M Quinlan,  Concept Systems Inc,  kquinlan@conceptsystems.com
Katy Hall,  Concept Systems Inc,  khall@conceptsystems.com
Leah Tuzzio,  Group Health Care Cooperative,  tuzzio.l@ghc.org
Wendy McLaughlin,  National Institutes of Health,  wendy.mclaughlin@nih.hhs.gov
Ed Wagner,  National Institutes of Health,  wagner.e@ghc.org
Martin Brown,  National Institutes of Health,  mbrown@mail.nih.gov
Robin Yabroff,  National Institutes of Health,  robin.yabroff@nih.hhs.gov
Entering its third 5-year funding cycle, the National Cancer Institute's (NCI) Cancer Research Network (CRN), consisting of 14 integrated health systems nationwide, is a cooperative research grant that encourages the generation of new research ideas and increased involvement by other cancer researchers. To support research agenda planning and decision-making, the CRN sought stakeholder input on scientific research priorities. Key leaders brainstormed 98 research topics and then used the concept mapping approach to organize the ideas conceptually. Both internal and external CRN stakeholders, those directly involved with the network and those with an interest in cancer research, were invited to rate the ideas to determine CRN research priorities. The framework includes elements related to the biological, behavioral, and economic aspects of cancer; informatics and diffusion research; and aspects of the healthcare system, setting a research agenda that will improve the quality and effectiveness of preventive, curative, and supportive interventions for cancer.
Setting the Research Agenda with Communities
Mary Kane,  Concept Systems Inc,  mkane@conceptsystems.com
This initiative yielded a collaboratively authored comprehensive framework to guide the selection of future research programs in the field of aging and wellness. The Institute for Optimal Aging (IOA) stakeholders were residents of three senior living communities in the Chicago area; professional and para-professional associates who provide caregiving and programming to the residents; executives and field employees of the operating corporation; and academics and researchers in geriatrics. Participants used a mix of web-based and on-site methods for collecting and organizing data; this input created the conceptual framework of priority research areas on aging. Through document review and key informant interviews, the conceptual research framework was enriched and rendered more relevant to future research needs in geriatrics. The benefits of engaging residents, associates, and academics in one endeavor included greater depth in the research framework and a strong sense of contributing to future geriatric research.
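All three papers in this session rely on the concept mapping methodology. As a rough illustration of its computational core (not the presenters' Concept Systems software), the sketch below aggregates hypothetical participant card sorts into a similarity matrix, places statements on a two-dimensional map with multidimensional scaling, and groups them with hierarchical clustering.

    import numpy as np
    from sklearn.manifold import MDS
    from scipy.cluster.hierarchy import linkage, fcluster

    n_statements = 6
    # Each participant's sort: statement index -> pile label (invented data)
    sorts = [
        {0: "a", 1: "a", 2: "b", 3: "b", 4: "c", 5: "c"},
        {0: "x", 1: "x", 2: "x", 3: "y", 4: "y", 5: "y"},
    ]

    # Similarity = number of participants who sorted two statements into the same pile
    similarity = np.zeros((n_statements, n_statements))
    for sort in sorts:
        for i in range(n_statements):
            for j in range(n_statements):
                similarity[i, j] += sort[i] == sort[j]

    distance = len(sorts) - similarity          # more co-sorting = closer on the map
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(distance)
    clusters = fcluster(linkage(coords, method="ward"), t=3, criterion="maxclust")
    print(coords.round(2), clusters)            # 2-D point map and cluster labels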

Session Title: Assessment and Improvement of Government Agency Collaboration for Disaster Preparedness and Recovery
Multipaper Session 810 to be held in Mineral Hall Section A on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Patricia Bolton,  Battelle,  bolton@battelle.org
Assessment of Collaboration for Incident Response Preparedness: A Collaborative Project Among the Animal And Plant Health Inspection Service, National Association of State Departments of Agriculture And Other Key State Organizations
Presenter(s):
Anna Rinick,  United States Department of Agriculture,  anna.l.rinick@aphis.usda.gov
Kenneth Waters,  United States Department of Agriculture,  kenneth.e.waters@aphis.usda.gov
Anne Dunigan,  United States Department of Agriculture,  anne.dunigan@aphis.usda.gov
Abstract: Strategic alliances are an important way for organizations to achieve mutual goals and gain benefits that could not be realized by individual efforts. Despite successful outcomes from the long-standing alliance between APHIS and the states related to incident preparedness and response, the strength of the alliance, and especially the collaboration that supports it, has recently been called into question. This assessment of collaboration for incident response preparedness was itself a collaborative effort involving the Animal and Plant Health Inspection Service (APHIS), the National Association of State Departments of Agriculture (NASDA), and other members of the APHIS/State strategic alliance. Data from surveys, interviews, and listening sessions yielded four findings:
• People have different ideas and language about collaboration, which affect the effectiveness of communication, activities, and the results of the collaboration.
• State and APHIS collaboration is imperative for incident response preparedness, but neither APHIS nor state expectations about the needed levels of collaboration are being fully met.
• Within APHIS and within state organizations, strong internal collaboration is needed to guide and support successful strategic alliances.
• States and APHIS collaborate effectively where there are strong communication and relationship-building skills; when these skills are lacking, productivity and progress suffer.
Converting Perception to Reality: A Case Study of Post Disaster Reconstruction Monitoring and Evaluation Model of ERRA
Presenter(s):
Khadija Khan,  Pakistan Earthquake Reconstruction and Rehabilitation Authority,  khadijakhan01@yahoo.com
Abstract: A massive earthquake of magnitude 7.6 struck Pakistan on October 8, 2005, causing huge loss of life and property across an area of 30,000 sq. km and displacing 3.5 million people in nine districts of northern Pakistan. In response, the Government of Pakistan established the Federal Relief Commission (FRC) and the Earthquake Reconstruction and Rehabilitation Authority (ERRA) to address the challenges of relief and reconstruction, respectively. ERRA launched its program in April 2006 with a financial outlay of about US$5 billion, mobilized through soft loans and grants from IFIs, the UN, foreign governments, and international development agencies, to reconstruct physical and social infrastructure in the affected areas across twelve major sectors: housing (rural, urban, and town planning), livelihoods, education, health, water and sanitation, government buildings, power, roads, communication, social protection, environment, and tourism. The task included reconstruction of some 600,000 housing units, town planning and reconstruction of 4 destroyed urban areas, more than 3,000 education institutions, 300 health facilities, 6,440 km of roads, and thousands of destroyed water and sanitation facilities. On the human side, it included looking after a large number of vulnerable people: widows, female-headed households, orphans, persons with disabilities, and homeless elderly men and women. ERRA, being a policy-making body, had to implement its program through its affiliates at the state, provincial, and district levels, and was answerable to the ERRA Board and ERRA Council for the delivery of its program and the achievement of results. A devolved financial management system was introduced to expedite decision making at every level with well-defined authority. Simultaneously, to ensure high performance, quality of work, transparency, and accountability, ERRA developed a Monitoring and Evaluation (M&E) System with five main components working from the head office to the field offices at the reconstruction sites. For the purpose of this study the system is taken as a 'model', comprising Project Monitoring, Internal Audit, External Audit, Impact Assessment, and Third-Party Validation.
A critical analysis of the system revealed that the perception of the model as an integrated system was far from the reality, for the following reasons:
• Project monitoring remained limited to the physical construction of buildings at sites and could not provide substantial monitoring of the diverse range of programs that were progressively launched. A parallel monitoring mechanism was implemented by the regional affiliated offices for their internal use, while progress reporting remained the responsibility of program officers and planning experts at the regional and district levels.
• Internal audit worked independently and reported to the CEO and the Chairman, who then instructed the Finance Wing and Legal Wing to take action on particular cases.
• External audit, conducted by the Auditor General's office, focused only on financial matters and reported to the CEO, Chairman, Board, and Council.
• Impact assessment was initially conducted in a localized manner and could not yield information from which to draw generalized conclusions.
• Third-party validation was discussed at length but, because of its cost, did not materialize for some time. Moreover, the M&E documentation required for third-party validation was not prepared.
According to the analysis, then, the model could not be called integrated; it was complementary at best. Two components, internal and external audit, had no practical input into monitoring, while the other two, impact assessment and third-party validation, were not yet implemented, leaving only project monitoring. In addition, because the program dealt with a large affected population and a huge investment of donor funding, it remained in the public eye, mounting pressure to demonstrate high performance, quality of work, and transparency.
On the basis of this analysis, the author redesigned the system as a set of parallel streams rather than a hierarchy of components. The starting point was the objective of any M&E system: timely, accurate, and meaningful information for decision making at various levels of management. The parallel streams were identified as follows:
Stream I: Planning Wing and regional affiliated offices, for information on strategies, work plans, and progress reports.
Stream II: Finance Wing, Internal Audit, and External Audit, for financial information and funds management.
Stream III: M&E Wing, MIS, and Knowledge Management Cells, for project monitoring, statistical information, and program review, respectively.
Stream IV: Media, for keeping up with public opinion and sharing information.
The paper provides a comprehensive analysis and graphical models of how these streams come together as a whole to meet information needs and ensure cross-checking of social and financial transactions, and draws conclusions at the end.

Session Title: Randomized Control Trials: Regression Discontinuity and a Poor Relative
Multipaper Session 811 to be held in Mineral Hall Section B on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Frederick Newman,  Florida International University,  newmanf@fiu.edu
Are the Group Randomized Trials funded by the Institute of Education Sciences Designed with Adequate Power?
Presenter(s):
Jessaca Spybrook,  Western Michigan University,  jessaca.spybrook@wmich.edu
Abstract: In federally sponsored education research, randomized trials, particularly those that randomize entire classrooms or schools, have been deemed the most effective method for establishing strong evidence of the effectiveness of an intervention. However, a group randomized trial does not by itself produce reliable evidence of a program's effectiveness. A key element in the capacity of a group randomized trial to yield high-quality evidence is adequate statistical power. This study examined the experimental designs and power of group randomized trials funded between 2002 and 2006 by the National Center for Education Research, a division of the Institute of Education Sciences. The findings revealed that blocked designs are the most common type of design. In addition, the precision of the studies increased over the five-year span, indicating that the quality of the designs is improving.
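For readers unfamiliar with why power is the binding constraint in group randomized trials, the following sketch computes an approximate minimum detectable effect size (MDES) for a balanced two-arm cluster randomized trial using a standard two-level, normal-approximation formula; the inputs (20 schools per arm, 60 students per school, an intraclass correlation of 0.15) are illustrative and are not values from the study.

    from scipy.stats import norm

    def mdes_cluster_rct(n_clusters_per_arm, cluster_size, icc,
                         alpha=0.05, power=0.80):
        """Approximate MDES (in SD units) for a balanced two-arm cluster RCT."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        # Variance of the treatment-effect estimate in effect-size units:
        # 2 * (icc + (1 - icc) / m) / J, with J clusters per arm of size m
        var = 2 * (icc + (1 - icc) / cluster_size) / n_clusters_per_arm
        return z * var ** 0.5

    # Example: 20 schools per arm, 60 students per school, ICC = 0.15
    print(round(mdes_cluster_rct(20, 60, 0.15), 2))   # about 0.36 SD

The example shows how strongly the ICC and the number of clusters, rather than the total number of students, drive detectable effect sizes in such designs.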
The Development, Application, and Use of a Retrospective Pre-Post Test Instrument in the Evaluation of an Educational Program for Academically Gifted High School Students
Presenter(s):
Debra Moore,  University of Pittsburgh,  ceac@pitt.edu
Cynthia Tananis,  University of Pittsburgh,  tananis@education.pitt.edu
Abstract: The Pennsylvania Governor’s School for International Studies (PGSIS) is a six-week summer program designed to give academically talented high school students a challenging introduction to the study of international affairs and global issues. One focus of the program's evaluation is to understand the effect of the program on students’ perception of their knowledge of these issues. Across the 23-year history of the program, a variety of measures were used (and subsequently discarded) to assess changes in knowledge and perceived competence. Four years ago the program instituted a retrospective pre-post design. Results from these years clearly indicate that students consistently overestimate their pre-test understanding of the core competencies emphasized in the program and that, as a result of the program, they are better able to assess both their knowledge gains and their initially inflated sense of knowledge. This paper presents an overview of the development, application, use, and analysis of a retrospective pre-post instrument to address response shift bias.
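As a small illustration of how a retrospective pre-post instrument exposes response shift bias, the sketch below compares a traditional pretest with the retrospective ("then") ratings collected at posttest; the ratings are invented for illustration and do not come from the PGSIS evaluation.

    import numpy as np
    from scipy.stats import ttest_rel

    pre_traditional = np.array([4.1, 3.8, 4.5, 3.9, 4.2])    # self-rating at program start
    pre_retrospective = np.array([2.9, 3.1, 3.6, 2.8, 3.0])  # "then" rating given at the end
    post = np.array([4.6, 4.4, 4.8, 4.3, 4.5])               # posttest self-rating

    # Response shift: initial ratings exceed the retrospective ratings of the same knowledge
    shift = pre_traditional - pre_retrospective
    print(shift.mean(), ttest_rel(pre_traditional, pre_retrospective))

    # Program effect judged against the retrospective baseline rather than the inflated pretest
    print((post - pre_retrospective).mean())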
Developing a Body of Evidence Using Randomized Trials: Flexible Phases
Presenter(s):
John Gargani,  Gargani and Company Inc,  john@gcoinc.com
Abstract: Many organizations that advocate randomized trials, for example the NIH and IES, promote a phased approach to research. While the exact number and nature of the phases may vary, they typically include non-experimental studies followed by small randomized trials followed in turn by large randomized trials. Unfortunately, a phased approach cannot be applied effectively in many evaluation settings because programs, unlike medical or pharmaceutical interventions, may need to change frequently in response to funders and market demands. I present an alternative approach to organizing repeated randomized trials that I call flexible phases. I provide an example of how it has been applied to a series of randomized trials of a teacher professional development program conducted over seven years. I outline the merits and weaknesses of the approach, and discuss how flexible phases can be used to develop a body of evidence for programs and policies.
Getting More Mileage Using a Hybrid Design: Demonstrating the Utility of the Regression-Discontinuity and Meta-Analysis (RD-MA) Amalgam to Evaluate Developmental Education Programs
Presenter(s):
Brian Moss,  Oakland Community College,  bgmoss@oaklandcc.edu
William Yeaton,  University of Michigan,  bill.yeaton@yahoo.com
Abstract: Researchers who encounter cut-score-based pretests often use the regression-discontinuity (RD) research design to evaluate program effectiveness. This presentation demonstrates the potential of combining the RD design with meta-analysis (MA) into an RD-MA amalgam to evaluate developmental education programs in higher education. Using the sort of data readily available at all colleges and universities, approximately 10,000 students and 400 course sections at a large, Midwestern college are analyzed. Aggregate-level measurements of students who receive developmental education are contrasted with those of non-developmental students in subsequent, college-level social science courses. Framing the results within the new RD-MA amalgam allows a clearer interpretation of the developmental program's impact while controlling for potential instructor-related grading bias.
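A minimal sketch of what an RD-MA style analysis could look like (an illustration of the general idea, not the authors' code): estimate a regression-discontinuity effect within each course section, then pool the section-level estimates with a fixed-effect, inverse-variance meta-analysis. File and column names are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("sections.csv")         # hypothetical file, one row per student
    df["centered"] = df["placement_score"] - df["cut_score"]
    df["below_cut"] = (df["centered"] < 0).astype(int)   # assigned to developmental ed

    effects, variances = [], []
    for _, sec in df.groupby("section_id"):
        if sec["below_cut"].nunique() < 2:
            continue                          # need students on both sides of the cut
        # Simple RD within a section: treatment indicator plus the running variable
        fit = smf.ols("college_course_grade ~ below_cut + centered", data=sec).fit()
        effects.append(fit.params["below_cut"])
        variances.append(fit.bse["below_cut"] ** 2)

    # Fixed-effect (inverse-variance) pooling of the section-level RD estimates
    w = 1 / np.array(variances)
    pooled = np.sum(w * np.array(effects)) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    print(pooled, pooled_se)

Because each section contributes its own RD estimate, instructor-specific grading tendencies are absorbed within sections before pooling, which is one way to read the grading-bias claim in the abstract.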

Session Title: Distance Education: Course and Program Level Evaluation
Multipaper Session 812 to be held in Mineral Hall Section C on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Diane Chapman,  North Carolina State University,  diane_chapman@ncsu.edu
Fine-tuning Evaluation Methodologies for Innovative Distance Education Programs
Presenter(s):
Debora Goetz Goldberg,  Virginia Commonwealth University,  goetzdc@vcu.edu
John James Cotter,  Virginia Commonwealth University,  jjcotter@vcu.edu
Abstract: Growth in the number and diversity of distance education programs has sparked a need for adaptable evaluation methodologies at the program level. This presentation reviews techniques to determine appropriate evaluation methodologies for educational programs delivered through distance technologies. Topics that will be discussed include: defining quality in distance education, identifying important program aspects for review, determining best approaches for data collection, and utilizing findings to improve program performance. An evaluation study of the Ph.D. Program in Health Related Sciences at Virginia Commonwealth University will serve as an example for the methodologies discussed. This pioneer program offers interdisciplinary concentrations in allied health fields through a blended learning environment that includes distance and on-site education. The evaluation combined information from course assessments with program level data. Findings from the evaluation led to changes in curriculum, enhanced use of distance education technologies, additional instructor training, and supplementary use of teaching assistants.
Evaluation of an Interactive Computer-based Instruction in Six Universities: Lessons Learned
Presenter(s):
Rama Radhakrishna,  Pennsylvania State University,  brr100@psu.edu
Marvin Hall,  Pennsylvania State University,  mhh2@psu.edu
Kemirembe Olive,  Pennsylvania State University,  ozk102@psu.edu
Abstract: Information technology has changed the way we teach our classes in higher education. In fact, it has helped instructors link students’ learning styles with instructional methodology. Compared to the traditional lecture method, interactive computer-based instruction allows students to determine the pace and amount of instruction that can be assimilated at a given time. Further, computer-based instruction is a useful method for classes that require integration of text and images and sharing of information from a variety of sources and institutions. This collaborative study, funded by the U.S. Department of Agriculture and involving six institutions, describes the process of developing, implementing, and evaluating interactive, computer-based teaching modules for forage crops. This multi-institutional collaborative effort has helped capture the research expertise and teaching skills of numerous spatially and temporally separated teachers into a single educational experience. This project serves as a prototypical model for course development in all academic disciplines.
The Use of a Participatory Multimethod Approach in Evaluating a Distance Education Program in Two Developing Countries
Presenter(s):
Charles Potter,  University of the Witwatersrand,  charles.potter@wits.ac.za
Sabrina Liccardo,  University of the Witwatersrand,  sabrina.liccardo@wits.ac.za
Abstract: This paper describes a participatory multimethod evaluation design currently being implemented in the evaluation of an interactive radio learning program in two developing countries. The program provides direct support to schools as well as in-service training of teachers based on open learning principles. It has been implemented since 1992 across all nine provinces of South Africa, and has attracted large-scale funding from the international community to support educational reconstruction over a nine year period in schools in Bangladesh. The methodologies for formative evaluation of the program in South Africa are at this stage well-established. Existing achievement tests standardized for use in South Africa are thus being adapted for use in Bangladesh schools. Classroom observation instruments based on PhotoVoice methodology as used in South African schools are also being implemented for developmental purposes in the in-service training and support of teachers in Bangladesh, as well as for school-based performance monitoring.
Building Evaluation Practice Into Online Teaching: An Action Research Approach to the Process Evaluation of New Courses
Presenter(s):
Juna Z Snow,  InnovatEd Consulting,  jsnow@innovatedconsulting.com
Abstract: This paper discusses action research methodology as the approach to the process evaluations of distance education programs at different universities. The evaluation foci were the outcomes from the design and implementation of two teacher-education courses, which introduced new instructors and curricula in completely online learning environments. Instructional delivery relied on solely asynchronous communication in one course, while the other used a hybrid approach by adding weekly synchronous meetings. Such innovations, in which curricula are implemented through technology, necessitate formative evaluation because designers cannot predict fully how participants will make use of and benefit from them. Evaluation design and methods are discussed to illustrate the action research approach. Moreover, the cases demonstrate in what ways the technological nature of the courses provided the data collection medium and tools. The Performance Management Portfolio, a new tool used in the courses, is revealed to have implications for student evaluation and instructor professional development.

Session Title: Working Together to Enhance the Quality of Science and Math Education Evaluations: GK-12 Project Evaluations
Think Tank Session 813 to be held in Mineral Hall Section D on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Patti Bourexis,  The Study Group Inc,  studygroup@aol.com
Discussant(s):
Rita Fierro,  Fierro Evaluation,  fierro.evaluation@gmail.com
Annelise Carleton-Hug,  Trillium Associates,  annelise@trilliumassociates.com
Mimi McClure,  National Science Foundation,  mmcclure@nsf.gov
Abstract: This think tank provides a forum for evaluators to discuss key issues in evaluating GK-12 projects, the National Science Foundation initiative which places graduate students from science, mathematics and engineering disciplines in K-12 classrooms to share their content knowledge. Think tank participants will have an opportunity to identify project-level evaluation questions we might pursue as a community, useful evaluation tools and resources that might be shared, and plans for sustaining a learning community of GK-12 evaluators. By working collectively within a learning community with shared interests, we anticipate our discussions will contribute to building the capacity not only of the individual project evaluations, but also for enhancing the capacity of NSF to use project-level evaluation findings to inform the GK-12 program nationally. The think tank will offer opportunities for small group in-depth discussions along with whole-group dialogue designed to stimulate further thinking and exchange.

Session Title: Internal Review Boards (IRB) Place in the Philanthropic and Nonprofit Sector: Are Foundations and the Vulnerable Populations They Serve at Risk?
Think Tank Session 814 to be held in Mineral Hall Section E on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Delia Carmen,  Annie E Casey Foundation,  dcarmen@aecf.org
Discussant(s):
Delia Carmen,  Annie E Casey Foundation,  dcarmen@aecf.org
Bill Bacon,  Packard Foundation,  bbacon@packard.org
Ben Kerman,  Casey Family Services,  bkerman@caseyfamilyservices.org
Abstract: As a matter of practice, foundations make major investments both in the evaluation of their own program initiatives and in support of research in their fields of interest. Such investments are made primarily to large university and research partners with existing IRB credentials that provide needed protections to the constituents or human subjects included in these efforts. However, as foundations become more data-driven and results-oriented, the demand for performance measures that can only be obtained through original data collection, increasingly carried out by non-credentialed grantees as principal investigators, presents a new set of challenges for foundations and the nonprofit sector in safeguarding the privacy, confidentiality, rights, and privileges of the individuals who participate in and share information for study. This think tank session will ask participants to explore and discuss the best ways to incorporate non-government-mandated IRB protocols into foundations' grant-giving protocols without creating onerous, unwieldy barriers to reflective learning and the strategic use of data. Key questions to be raised and discussed include:
1. What is deemed to be research in the grant-giving field? Who makes this assessment for program officers?
2. What are the legal implications for a foundation funding research or evaluation, and where is the line between a foundation's and a grantee's responsibility?
3. What can foundations ethically do with older, existing, potentially rich and informative research data that may not have been collected under an IRB protocol, or that was collected under an IRB-approved protocol that does not comply with current standards?
4. How do foundations build the capacity of their non-university research partner grantees to develop and sustain the necessary informed consent and confidentiality safeguards in their original data collection activities?

Session Title: On the Outside Looking In: Lessons From the Field for External Evaluators
Multipaper Session 815 to be held in Mineral Hall Section F on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Faith Connolly,  Naviance,  faith.connolly@naviance.com
Improving the Usefulness of External Evaluations in Schools: An Analysis of Multiple Perspectives in Reading First Ohio
Presenter(s):
James Salzman,  Cleveland State University,  j.salzman@csuohio.edu
Sharon Brown,  Cleveland State University,  s.a.brown54@csuohio.edu
Tania Jarosewich,  Censeo Group LLC,  tania@censeogroup.com
Abstract: In this paper, we consider the plight of district personnel and external evaluators as they attempt to negotiate the external evaluation process to improve the usefulness of the results. Preliminary analyses of district evaluations suggested variability in the quality of evaluations that districts received. This study examined the evaluation process, including relationships between evaluators and district personnel, quality of evaluation data, usefulness of evaluation reports, and value of information provided by external evaluators of thirteen districts that participated in the Reading First program in Ohio. The evaluators used a mixed-methods approach to collect data from three sources using two methods: interviews with district leaders, interviews with evaluators, and an analysis of external evaluation reports.
Evaluating Federal Smaller Learning Community Program Grants: Lessons Learned in Urban Districts Across Six States
Presenter(s):
Miriam Pacheco Plaza,  University of Miami,  m.pacheco1@miami.edu
Adam Hall,  University of North Carolina Greensboro,  ahall@serve.org
Ann Bessell,  University of Miami,  agbessell@miami.edu
Abstract: This presentation will focus on lessons learned by evaluators in six states while conducting their external evaluations for large urban districts awarded Smaller Learning Communities (SLC) Program Grants from the U.S. Department of Education. Frustrations and challenges encountered by the majority of participating evaluators, along with proposed solutions for dealing with those challenges, are included. Data were gathered through focus groups, individual interviews, and questionnaires.
Processes and Strategies for Conducting School-Based Federal Evaluations
Presenter(s):
Janet Lee,  University of California Los Angeles,  janet.lee@ucla.edu
Anne Vo,  University of California Los Angeles,  annevo@ucla.edu
Minerva Avila,  University of California Los Angeles,  avila@gseis.ucla.edu
Abstract: Evaluations conducted in schools and school districts offer a unique setting in which to study evaluation practice. When working within a public school setting, evaluators face obstacles such as school and district bureaucracy, the limitations of other educational mandates, and personnel issues. In this presentation, we explore the challenges of conducting evaluations in schools and school districts and provide suggestions on how to address them. Particular emphasis is given to the role of the evaluator. The strategies discussed will be in light of a current evaluation being conducted of a federal grant awarded for the implementation of Small Learning Communities in a large, urban school district.
Enhancing Data Collection and Use: Specific and Practical Lessons Learned Working With K–12 Schools
Presenter(s):
Susan Saka,  University of Hawaii,  ssaka@hawaii.edu
Abstract: Schools constantly receive requests to participate in research studies, including evaluations. With NCLB and other pressures, they are becoming less willing to do so. This affects the ability to gather accurate data that can be used to inform practice and, ultimately, policy. Lessons learned from over 25 years of experience working with K–12 schools will be discussed, including how to a) get buy-in from the people who have the power to grant permission to conduct the study/evaluation, b) time specific aspects of a study, including obtaining IRB approval, c) increase cooperation of teachers and students, d) budget for incentives and contingencies, e) anticipate things that may jeopardize data collection, f) reduce the burden placed on school-level personnel, g) assist the researcher/evaluator in managing the tracking of data collection and other aspects of the study/evaluation, and h) turn results into actions that inform practice and, ultimately, policy.

Session Title: Federal Policy and Grass-Roots Practice: Explaining and Implementing Federal Performance Measurement (Evaluation) Policy Requirements Through Self-Help Guides
Demonstration Session 816 to be held in Mineral Hall Section G on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Kenneth Terao,  JBS International,  kterao@jbsinternational.com
Anna Marie Schmidt,  JBS International,  aschmidt@jbsinternational.com
Nicole Vicinanza,  JBS International,  nvicinanza@jbsinternational.com
Edie Cook,  Independent Consultant,  elm20@juno.com
Susan Hyatt,  Business Nonprofit Connections,  shyatt@bnconnections.com
Abstract: For 13 years, JBS International's Project STAR has helped grantees of the Corporation for National and Community Service (CNCS) develop, implement, and report on performance measurement and evaluation. During this time Project STAR has developed a series of paper and web-based self-help evaluation documents for CNCS grantee programs (AmeriCorps, VISTA, Senior Corps, and Learn and Serve America). This demonstration will walk participants through the steps we take to make our evaluation and performance measurement TA materials timely, accurate, and user-friendly. We will focus on how context and work with grantees and CNCS have guided document development; the step-by-step approach to developing each document and document series; and how the documents are employed in training and remote technical assistance to grantees. STAR's materials meet the immediate performance measurement and evaluation policy needs of CNCS grantees, but the concepts used in developing them apply to many evaluation efforts and audiences.

Session Title: Multisite Evaluations: Challenges, Methods, and Approaches in Public Health
Panel Session 817 to be held in the Agate Room Section B on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Thomas Chapel,  Centers for Disease Control and Prevention,  tchapel@cdc.gov
Abstract: Consensus among key players regarding a program and its components is optimal for any evaluation, but it is particularly challenging in multisite evaluations. In federal and state programs that are implemented by networks of grantees and frontline practitioners, the evaluation process is formidable because evaluation skills and the availability of data sources vary from site to site. More importantly, in multisite evaluations, beyond agreement on the high-level purpose of the program, frontline activities can differ widely. Representatives from the three programs discussed on this panel have faced the challenges of monitoring performance at multiple sites and, in some cases, of using site data to illustrate overall program performance nationally. Program representatives will discuss their programs, the involvement of their grantees and partners in developing evaluation approaches, and the approaches taken. The process for developing and implementing the evaluation will be discussed, as will decisions on where to impose uniformity or grant autonomy in indicators and data collection. Transferable lessons from their experience will be identified.
Advantages and Challenges of a Multi-Site, Multiple-Method Evaluation: Centers for Disease Control and Prevention's (CDC's) Colorectal Cancer Screening Demonstration Program
Amy DeGroff,  Centers for Disease Control and Prevention,  adegroff@cdc.gov
Laura Seeff,  Centers for Disease Control and Prevention,  lseff@cdc.gov
Florence Tangka,  Centers for Disease Control and Prevention,  ftangka@cdc.gov
Blythe Ryerson,  Centers for Disease Control and Prevention,  aryerson@cdc.gov
Janet Royalty,  Centers for Disease Control and Prevention,  jroyalty@cdc.gov
Jennifer Boehm,  Centers for Disease Control and Prevention,  jboehm@cdc.gov
Rebecca Glover-Kudon,  University of Georgia,  rebglover@yahoo.com
Judith Priessle,  University of Georgia,  jude@uga.edu
In 2005, CDC funded the Colorectal Cancer Screening Demonstration Project for three years to assess the feasibility of providing community-based colorectal cancer screening for low-income populations. An evaluation is being conducted across the five sites, focusing on implementation costs, processes, and screening outcomes. Evaluation methods address both the program and patient levels and include a cost assessment, a longitudinal case study, and analysis of patient-level data. The multi-disciplinary evaluation team includes economists, epidemiologists, program evaluators, clinicians, data management specialists, and health educators. Evaluation results will have important policy implications. The evaluation methodology will be described, outlining the individual protocols for each of the three evaluation strategies and highlighting efforts to involve key stakeholders. In addition, the challenges and limitations of evaluating a multi-site program in which each site is implementing a unique program model, including the use of different screening modalities, will be discussed.
Evaluating a Multi-Site HIV Testing Campaign: Addressing Real-World Challenges of Local Data Collection
Jami Fraze,  Centers for Disease Control and Prevention,  jfraze@cdc.gov
Jennifer Uhrig,  RTI International,  uhrig@rti.org
Kevin Davis,  RTI International,  kcdavis@rti.org
Doug Rupert,  RTI International,  drupert@rti.org
Ayanna Robinson,  Porter Novelli,  ayanna.robinson@porternovelli.com
Jennie Johnston,  Centers for Disease Control and Prevention,  jjohnston1@cdc.gov
Laura McElroy,  Centers for Disease Control and Prevention,  lmcelory@cdc.gov
The multi-faceted Take Charge. Take the Test. campaign encouraged at-risk African American women in two cities to get tested for HIV from October 2006 to October 2007. Evaluators were challenged to design an evaluation that would accurately capture campaign exposure and testing behaviors. The evaluation had to: 1) collect campaign data from multiple partners without overburdening them amidst database changes, 2) interpret preliminary findings to quickly redirect activities, and 3) survey the target audience with adequate power despite limited sample availability. The team designed a comprehensive, useful evaluation informed by a logic model, key stakeholders, evaluation experts, and published literature to: 1) obtain consistent monthly data on HIV tests, hotline calls, web hits, events, and partner outreach by working closely with partners and providing incentives, 2) provide preliminary results monthly to campaign implementers, and 3) administer an internet efficacy survey with an adequate sample size to accurately measure key outcomes.
Addressing the Challenges of Multi-Site Evaluation for the Georgia Family Connection Partnership
Adam Darnell,  Georgia State University,  darnelladam@hotmail.com
James Emshoff,  Georgia State University,  jemshoff@gsu.edu
Steve Erickson,  EMSTAR Research,  ericksoneval@att.net
The Georgia Family Connection initiative is a statewide network of community collaboratives that aims to address health-, education- and economic-related outcomes for Georgia's children and families. Evaluation efforts for Georgia Family Connection include three components: local evaluation undertaken by each collaborative, and sub-county and county-level evaluations conducted by the state evaluation team. We discuss practical challenges pertaining to multisite evaluation for each of these three evaluation efforts. For local evaluation, discussion will focus on the wide variation in methodological quality of evaluations conducted by each of 159 collaboratives given equal funding for evaluation. Sub-county and county-level evaluation efforts address aggregate effects of multiple collaboratives. Here our discussion will address challenges of operationalizing variables, data collection, and data analysis resulting from the fact that each collaborative is mostly free to address the unique conditions in its community however it sees fit. We also provide a brief report of evaluation findings.

Session Title: Do Schools Know Best? A Foundation Explores the Question
Panel Session 818 to be held in the Agate Room Section C on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Albert Bennett,  Roosevelt University,  abennett@roosevelt.edu
Abstract: The Lloyd A Fry Foundation High School Initiative began in the fall of 2001. The Initiative had two primary goals: to increase student achievement, and to create sustainable improvements in the teaching and learning environment through distributed leadership and collaborative decision making. The Initiative was funded for five years, and each school was able to receive up to $1.25 million over the life of the project. In addition to this support, each of the six schools received $50,000 for planning and $25,000 for retreats. Several assumptions guided the Initiative. The first was that schools know their problems (and solutions) best, so give them the resources they need and get out of the way. The second was that schools will make better decisions if more individuals are involved in problem identification and decision-making: if principals become more inclusive, they will be able to share leadership responsibilities, thereby making what is quickly becoming an unmanageable job more manageable. Finally, it was assumed that the decisions of these leadership teams would be significantly different (i.e., better) than previous decisions made under the old authoritarian pattern.
Did the High Schools Do the Right Thing? A Presentation on the Findings and Implications of the Lloyd A Fry High School Initiative
James Lewis,  Chicago Community Trust,  jlewis@cct.org
Albert Bennett,  Roosevelt University,  abennett@roosevelt.edu
Rodney Harris,  University of Illinois Chicago,  rharri5@uic.edu
Timothy Wateridge,  Roosevelt University,  timothy.wateridege@mymail.roosevelt.edu
This paper will present the findings of the evaluation of the Fry Foundation High School Initiative. Specifically, the evaluation will answer five questions: 1. How well did leadership teams operate within schools? 2. Did schools identify their most pressing needs and design programs likely to lead to school improvement? 3. What types of programs appeared to improve teaching and/or classroom climate? 4. What conditions supported or impeded program success? and 5. What were some of the barriers and constraints to school improvement? Preliminary analyses of the data suggest the following: schools were required to engage in significant amounts of planning but generally did not have the capacity to do so; many of the programs developed by the schools had little buy-in from teachers and therefore failed quickly and were discontinued; and the foundation's commitment to five years of funding gave the schools the time needed to change programs that were not succeeding.
Do Schools Know Best? The Executive Director Speaks
Unmi Song,  The Lloyd A Fry Foundation,  usong@fryfoundation.org
Unmi Song is the executive director of the Lloyd A Fry Foundation and thus brings a unique perspective to the question posed. Ms. Song made extensive use of the formative evaluations provided to make significant changes in the program's design and delivery. Ms. Song will also speak to the impact on the foundation of having staff heavily involved in the ongoing program and in making changes at the local level.
Do Schools Know Best? The School Principal Speaks
Anthony Spivey,  Chicago Public Schools,  anthony.m.spivey@cps.k12.il.us
Anthony Spivey is one of the six high school principals who participated in the High School Initiative. Mr. Spivey was involved in the Initiative for the entire life of the project and consistently used evaluation data to revise his curriculum and program offerings. Mr. Spivey represents an important group of stakeholders that we rarely have the opportunity to hear from: local school people.
Do Schools Know Best? A Member of the Foundation Board Speaks
Howard McCue,  The Lloyd A Fry Foundation,  hmccue@fryfoundation.org
Howard (Scott) McCue is a long-time board member of the Lloyd A Fry Foundation. Mr. McCue was instrumental in shaping the design and philosophy of the Fry Foundation High School Initiative. His involvement, as a board member, in this effort is quite unusual. He was also the driving force behind identifying an evaluation team early in the life of the project.

Session Title: Evaluating Math Science Partnership Projects in New York State: Finding Evidence and Documenting Results
Multipaper Session 819 to be held in the Granite Room Section A on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Dianna Newman,  University at Albany - State University of New York,  dnewman@uamail.albany.edu
Abstract: The US Department of Education's Math Science Partnership (MSP) program seeks to bridge the gap between current and expected content, pedagogy, and student outcomes in math and science education. As federal and state priorities shift and funding increases, the evaluation component of this initiative has become increasingly important. This multi-paper presentation explores evaluation methodologies that have proven successful in documenting local and statewide MSP programs and highlights "end of project" findings. The MSP grants discussed in this session assume that successful professional development empowers teachers with knowledge and skills to create an effective classroom environment, thus facilitating the transfer of learning. Evaluators focus not only on professional development and student outcomes but also on the process by which teachers transfer the new skills and knowledge to the classroom. Additionally, insights are gleaned from assessing student work, making strong connections between teachers' practices and student performance.
Finding Evidence for Math Science Initiatives
Kathy Gullie,  University at Albany - State University of New York,  kp9854@albany.edu
Dianna Newman,  University at Albany - State University of New York,  dnewman@uamail.albany.edu
Evaluators of a federally funded Math Science Partnership grant will present quasi-experimental methods and findings related to documenting evidence that meets GPRA indicators for federal and state agencies, and will address how to develop a plan that facilitates finding evidence of success. The goal of MSP partnerships is to foster student improvement in math by improving teacher knowledge of math content and math pedagogy. Teachers receive 60 hours of professional development from grant- and district-related resources, based on the assumption that successful professional development empowers teachers with the knowledge and skills to create effective classroom environments. These papers investigate this transfer of learning while looking at the intermediate and integrated functions of teaching math. The analysis of student work and its relationship to grant-initiated professional development and student academic achievement will be discussed. Evaluators will present findings highlighting student academic achievement in individual student folders, on local report cards, and on district and state tests.
Teacher Professional Development and Student Math Achievement: Results from Two Large-Scale Grants
Anna Valtcheva,  University at Albany - State University of New York,  avaltcheva@gmail.com
Kristina Mycek,  University at Albany - State University of New York,  km1042@albany.edu
In this age of accountability, educators are urged to meet the requirements of the No Child Left Behind (NCLB) Act, which targets improvement in student achievement while closing the racial achievement gap. In attempting to attain these goals, school districts across the country have initiated numerous programs to provide teachers with additional training. The purpose of this paper is to present the results of a study investigating the relationship between teachers' level of involvement in professional development offerings and students' mathematics achievement. Data were collected as part of a multi-phase, mixed-method evaluation of a Math Science Partnership (MSP) Initiative and analyzed using hierarchical linear modeling (HLM). This program focuses on enhancing student outcomes in higher-level mathematics and science achievement in large urban settings as well as in at-risk rural and small-city schools. Results pertaining to students with special needs and Limited English Proficiency (LEP) will be discussed.
Addressing Gaps in Evaluation: Balancing Priorities
Amy Germuth,  Compass Consulting Group LLC,  agermuth@mindspring.com
Math Science Partnerships operate under a relatively simple logic model: teacher professional development that emphasizes content and pedagogical skills in tandem should result in changes in teachers' knowledge and practice, thus benefiting students as evidenced by increased achievement. Despite this simple model, few evaluations have adequately addressed these different components, especially transfer of learning, the most critical component. As the state-level evaluator for MSP programs in New York, Compass has worked with multiple partners, including the USED, to address such gaps in evaluations. Compass will share lessons learned about potential evaluation models and instruments that may promote better understanding of MSPs and their potential outcomes, and will speak to the need to balance federal, state, and USED priorities when conducting such evaluations.

Session Title: Various Approaches to Evaluating Provision of Health Care Services
Multipaper Session 820 to be held in the Granite Room Section B on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Robert LaChausse,  California State University at San Bernardino,  rlachaus@csusb.edu
Inpatient Rehabilitation Model for Evaluation: Should the Tail Wag the Dog?
Presenter(s):
Harriet Aronow,  Cedars-Sinai Medical Center,  harriet.aronow@cshs.org
Pamela Roberts,  Cedars-Sinai Medical Center,  pamela.roberts@cshs.org
Abstract: Hospital care quality and outcomes are on a fast track to public disclosure and to forming the basis of “pay for performance” policies. However, the evaluation model from which quality and outcome information are being fed into the policy arena is flawed. Continuous Quality Improvement (CQI), with its roots in management science, has been the dominant model for quality improvement in hospitals. While it has accommodated patient outcomes, it has maintained its adherence to methods that focus on control over processes. One small branch of hospital services, physical and medical rehabilitation (PMR), has had a parallel historical development, adhering to evaluation science methodologies that have roots in the biological and social sciences. Inpatient PMR programs use a multi-disciplinary team model of care and, aligned with the mission of rehabilitation, understand that outcomes are truly tested once the patient has returned to the community. The PMR model of evaluation has incorporated CQI, evidence-based practice, and medical/social science approaches to improve processes and outcomes. The purpose of this presentation is to compare and contrast the two models of evaluation, CQI and evaluation science, and to suggest a merged approach, based on the model developed in inpatient PMR, that has important applications in hospital care systems challenged by aging and increasingly complex patients.
Predictors of Utilization of Genetic Counseling Services for Hereditary Breast and Ovarian Cancer
Presenter(s):
Alanna Kulchak Rahm,  Kaiser Permanente,  alanna.k.rahm@kp.org
Jason Glanz,  Kaiser Permanente,  jason.m.glanz@kp.org
Abstract: Attendance at genetic counseling for Hereditary Breast and Ovarian Cancer (HBOC) has rarely been studied separately from testing utilization. At Kaiser Permanente Colorado, attendance at genetic counseling among members referred for HBOC has consistently been only 30%. A multivariable regression model was used to determine predictors of attendance in a sample of women referred for HBOC. Additional predictors were determined from a cross-sectional telephone survey. A total of 572 women were referred from April 2003 to April 2005; 298 (52%) responded to the survey. Analysis of all referrals showed that women with cancer were 40% less likely to attend, as were women with a >10% calculated risk of a BRCA1/2 mutation. Older age and referral by an oncologist also predicted attendance. Analysis of survey variables further showed that women who rated themselves as extremely concerned about their health were 12 times more likely to attend. Higher family income and college education also predicted attendance.
Evaluating the Effectiveness of Foster Care Policy at Increasing Preventive Care Visits
Presenter(s):
Angela Snyder,  Georgia State University,  angiesnyder@gsu.edu
Glenn Landers,  Georgia State University,  glanders@gsu.edu
Mei Zhou,  Georgia State University,  alhmzzx@langate.gsu.edu
Abstract: This evaluation uses 2005 Medicaid claims data to compare the utilization of routine preventive care among children in the foster care system, children receiving adoption assistance, children receiving Supplemental Security Income (SSI), and low-income Medicaid children. Logistic regression is used to estimate the likelihood of an annual EPSDT visit and of at least one dental visit by group. During 2005, 55% of the foster care, 30% of the adoption assistance, 32% of the SSI, and 31% of the low-income Medicaid children received a preventive check-up. Compared to children in the adoption assistance (odds ratio [OR], 1.53), SSI (OR, 1.51), and low-income (OR, 2.03) Medicaid groups, foster care children were more likely to receive an annual EPSDT screening when controlling for health status and demographic variables. However, foster care children were less likely than children receiving adoption assistance (OR, 0.58) and low-income Medicaid children (OR, 0.62) to have a dental visit during the same year.

Session Title: Evaluation Education in Diverse Countries
Multipaper Session 821 to be held in the Granite Room Section C on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Pauline E Ginsberg,  Utica College,  pginsbe@utica.edu
Riding the Tiger: Advice for Novice Evaluation Managers in the Field
Presenter(s):
Alice Willard,  Independent Consultant,  willardbaker@verizon.net
Abstract: There are many guides, courses, and events for people to learn about monitoring and evaluation (M&E) in terms of technical skills and business practices. What is less common is the ‘how to’ for a staff member in an organization who must manage the evaluators conducting an evaluation. This presentation captures the key elements of a ‘field-friendly’ module developed expressly to fill this gap, derived from a module produced by two private voluntary organizations (PVOs). While both organizations employ full-time M&E staff who provide basic M&E training to agency staff, many field staff have never managed consultants, nor have they participated in evaluations. The module explores four major ‘black holes’ for the novice or infrequent evaluation manager: 1) evaluation issues (rigor, bias, validity, communication, utilization, etc.), 2) management tasks (personnel, finance, logistics, etc.), 3) coping mechanisms for the unexpected (politics, weather, global economy, health, etc.), and 4) basic management skills (organization, etc.).
Use of Detective Stories in Teaching Evaluation in International Settings
Presenter(s):
Alexey Kuzmin,  Process Consulting Company,  alexey@processconsulting.ru
Abstract: Training content must be presented in a lively and interesting manner in any context. One of the particular challenges in teaching evaluation internationally is developing training methods that are relevant in diverse settings and effective for trainees from different educational and cultural backgrounds. Finding training methodologies that meet all of these requirements is especially difficult when introducing fundamental theoretical concepts and principles. An example of a potentially difficult concept is the evaluation data analysis chain: describing the findings (facts, evidence), interpreting the findings, drawing conclusions (judgments), and making recommendations. This paper presents an approach that introduces data analysis principles using Arthur Conan Doyle’s Sherlock Holmes detective stories. The approach has proven to be both relevant and effective in a dozen countries, from Russia to Thailand.
Implementation of an Institute of Evaluation in Ethiopia
Presenter(s):
Carla Decotelli,  Tulane University,  carladecotelli@gmail.com
Wuleta Lemma,  Tulane University,  lemmaw@gmail.com
Kifle Woldemichael,  Jimma University,  betty.kifle@yahoo.com
Elizabeth Moreira dos Santos,  FIOCRUZ,  bmoreira@ensp.fiocruz.br
Abstract: Building monitoring and evaluation (M&E) capacity that can influence programs is a challenge. Several constraints have been identified globally, including the lack of a unified approach, inadequate support from governmental institutions, confusion about methods, and a lack of long-term commitment. In Ethiopia, Jimma University (JU) and the Ethiopian FMOH, with the support of Tulane University and in partnership with ENSP/FIOCRUZ, are changing this scenario. To respond to this need, they implemented the Institute of Evaluation, which brings together practitioners and theorists to provide unique M&E modules and innovative adult training methods, and which offers a platform for exchanging experience in the field, creating a tradition and a center of excellence in evaluation at JU. Through the establishment of this institute, a pool of M&E experts with strong skills in HIV/AIDS program evaluation will be created to support the quality of academic research and training in the M&E field.

Roundtable: What Can You Tell in the Short Term?
Roundtable Presentation 822 to be held in the Quartz Room Section A on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Ben Silliman,  North Carolina State University,  ben_silliman@ncsu.edu
Abstract: This roundtable focuses on evaluation of short-term events with youth. Brief events attended by large numbers of youth provide optimal opportunities for 4-H and other youth programs to gather data on program quality and outcomes. Such events may even provide opportunities to monitor long-term experiences and outcomes. However, settings such as youth conferences, short courses, trips, and camps offer logistical challenges (e.g., scheduling, location, use of certain methods and technologies) and practical limitations (e.g., intervention duration, developmental and learning potential) that restrict the feasibility and accuracy of the evaluation process. These issues are discussed in the context of two evaluation projects done with different degrees of success. Discussion will focus on maximizing the value of short-term evaluations with youth.

Roundtable: Evaluating Health Messages: Using Laptops and Embedded Messages to Assess Anti-drug Messages
Roundtable Presentation 823 to be held in the Quartz Room Section B on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Health Evaluation TIG
Presenter(s):
Jason Siegel,  Claremont Graduate University,  jason.siegel@cgu.edu
Eusebio Alvaro,  Claremont Graduate University,  eusebio.alvaro@cgu.edu
William Crano,  Claremont Graduate University,  william.crano@cgu.edu
Abstract: This roundtable will discuss an evaluation of experimentally manipulated anti-drug messages. The focus will not be the results per se, but rather the advantages of using multiple laptops and embedded messages as a means of evaluating messages targeting young adolescents. The experimental anti-drug messages were embedded in an anti-bullying video to “bury the chestnut.” The use of laptops allowed short videos to keep participants entertained while filling out a long survey. Laptops also allowed participants to move at their own pace, and the synchronized voice-over assisted participants with reading difficulties. Additional advantages of using laptops to evaluate health messages will be discussed, along with some of the drawbacks and costs.
