Session Title: International Participatory Evaluation
Multipaper Session 636 to be held in Capitol Ballroom Section 1 on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Marian Heinrichs,  University of Minnesota,  heinr003@umn.edu
Participatory Evaluation Across Cultures and Ethnic Groups: Some Lessons from Working in Latin and Anglo-Saxon Cultures
Presenter(s):
José María Díaz Puente,  Technical University of Madrid,  jm.diazpuente@upm.es
Abstract: Thanks to globalization, evaluators are increasingly likely to work with people from multiple ethnic groups and different cultures. People from different cultures may differ in how they look at things, how they express themselves, and in their beliefs, values, norms, customs, behaviors, knowledge and language. These differences may cause difficulties in evaluation work and in the interpretation of data gathered through participatory techniques. The goal of this paper is to shed some light on these issues regarding the crossing of cultural boundaries with participatory evaluation. We do so by reflecting on evaluation experiences in the USA, Spain and Latin America, and by identifying some differences between Latin (e.g., Spanish, Hispanic) and Anglo-Saxon cultures. Some of these differences (regarding the need for sympathy, familiarity, individualism, power distances among people, making things explicit, the different conception and importance of time, etc.) have meaningful practical consequences for participatory evaluation work.
Increasing Citizens’ Participation in International Evaluation
Presenter(s):
Awgu Ezechukwu,  Western Michigan University,  eawgu@yahoo.com
Abstract: Citizens’ participation in program evaluation is addressed in this paper as the level of influence or power that citizens have over the planning and implementation of the programs intended to affect their lives. Normally, programs are designed in the interest and for the welfare of citizens. However, the level of citizens’ participation or non-participation in those programs, for example through employment as technicians, resident observers, and community organizers, affects their attitude toward the program. Determining the level of citizens’ participation is very important in assessing the merit, worth and significance of a program. This presentation will address these issues: (a) What are the eight levels of citizens’ participation and non-participation? (b) What are the three sub-levels of participation and non-participation? (c) Characteristics and illustrations of citizens’ participation and non-participation in evaluation will be provided, and (d) the impact of citizens’ participation and non-participation in evaluation will be presented.

Session Title: Our Lives in Evaluation: AEA Members' Descriptions of Their Evaluation Work
Demonstration Session 637 to be held in Capitol Ballroom Section 2 on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the AEA Conference Committee
Presenter(s):
Colleen Manning,  Goodman Research Group Inc,  manning@grginc.com
Leslie Goodyear,  Education Development Center Inc,  lgoodyear@edc.org
Margaret Tiedemann,  Goodman Research Group Inc,  tiedemann@grginc.com
Abstract: 'Imagine you are out to dinner with other AEA members, and each member is taking a minute or two to describe his or her evaluation-related work and/or study. It's your turn; what would you say?' This was the first question on the AEA Member Survey, a major component of the AEA Internal Scan, conducted in late 2007. This session will creatively present, summarize and discuss the responses of more than 2,500 of your fellow AEA members. The session is designed to raise awareness of what it means to be an evaluator and practice evaluation. You will hear about evaluation experiences and dispositions that are markedly different from your own, and identify with, and feel affirmed by, members from similar professional worlds. Laugh, learn, relate, and be surprised, but most of all, take a moment to get to know your fellow evaluators through their thoughtful evaluation narratives.

Session Title: The Evaluation of "Green Buildings"
Panel Session 638 to be held in Capitol Ballroom Section 3 on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Arlene Hopkins,  Los Angeles Unified School District,  arlene.hopkins@gmail.com
Discussant(s):
Steve Maack,  REAP Change Consulting,  smaack@earthlink.net
Abstract: We humans live much of our lives indoors, in buildings. The design, construction, operation and maintenance of our buildings consume, by some calculations, an enormous 40% of our civilization's material and energy resources. We need more efficient buildings if we are to achieve a sustainable quality of life for our civilization. Green building trends and technologies offer a way to reduce the material and energy consumption of our buildings and thus increase their efficiency. This panel will focus on emergent evaluation standards and practices that are transforming the design, construction, operation and maintenance of our buildings into green buildings. In turn, our civilization's impacts upon our environment will lessen and our organizations' budgets will improve.
Evaluating "Green Building" Performance: Building Commissioning
Arlene Hopkins,  Los Angeles Unified School District,  arlene.hopkins@gmail.com
Ms. Hopkins will begin with an overview of the current technically oriented practices and standards in building evaluation. She will show how we can use the work of leading evaluation theorists and practitioners to frame a more systematic, rigorous approach for enhanced reliability and validity in building evaluation. She will then focus on 'building commissioning' as practiced by a growing number of agencies such as the United States Green Building Council, the Environmental Protection Agency, the California State Architect and the United Nations to evaluate 'green building' performance. Ms. Arlene Hopkins is an architect and educator. As an architect, she has worked in technical evaluation of new construction projects for schools, some of which have been formally commissioned. Her consulting practice, Skye City, offers evaluation services for building projects.
Methods to Evaluate Energy Efficiency in Existing Buildings
John Griffin,  Green Energy Consultants LLC,  johng@greenenergyconsultants.biz
Rising oil prices and the current interest in energy conservation have given rise to a variety of evaluation methods for energy efficiency in existing commercial buildings. Evaluation metrics such as carbon footprint, ecological footprint, renewable energy credits, and Energy Star ratings are measurement and evaluation standards that attempt to quantify energy efficiency. This presentation discusses these building evaluation standards with an attempt to understand their practical application and theoretical foundation. Examples of building energy efficiency retro-commissioning projects will show wide variances in the practice of evaluation by different agencies and evaluators. This discussion will conclude with a look at how evaluation methods might evolve so that building owners and tenants can rely on enhanced reliability and validity in building evaluations, with a goal of more predictable and consistent energy efficiencies. Mr. John Griffin is an energy management professional with an engineering degree, a law degree and an MBA. He operates Green Energy Consultants, LLC.

Session Title: Towards an Evolutionary Approach to Evaluating Policies and Programs: Moving Beyond Policies and Programs as the Unit of Analysis
Expert Lecture Session 639 to be held in Capitol Ballroom Section 4 on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Melvin Mark,  Pennsylvania State University,  m5m@psu.edu
Presenter(s):
Sanjeev Sridharan,  University of Edinburgh,  sanjeev.sridharan@ed.ac.uk
Abstract: This talk is informed by conceptualizing policies (and programs) from a realist perspective. Such a perspective encourages a move beyond policies as the unit of analysis; the focus instead is on the underlying program/policy theory. The starting point for the talk is the recognition that most evaluations are unlikely to influence policy if certain conditions are not met; these conditions are identified and discussed within the context of ongoing evaluations of anticipatory care policy initiatives that have recently been implemented in Scotland. These interventions aim to reduce health inequalities. This presentation will explore four questions: What are the design implications of moving beyond policies as the unit of analysis? What are the pathways by which evaluations can help impact future policy? Should interventions and evaluations be discontinued if these conditions are not met? How should evaluations be commissioned in light of answers to the above questions? These questions are discussed within a theory of the influence of evaluations.

Session Title: Getting Published in New Directions for Evaluation
Skill-Building Workshop 640 to be held in Capitol Ballroom Section 5 on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the AEA Conference Committee
Presenter(s):
Sandra Mathison,  University of British Columbia,  sandra.mathison@ubc.ca
Abstract: This session will provide an overview of publishing in New Directions for Evaluation, one of the journals sponsored by AEA. The overview will include what should be included in a proposal, the proposal review process, and the schedule of publication. Most of the session will be interactive, providing an opportunity for feedback on particular topics or concerns that those considering submitting a proposal to NDE may have. The session will also be directed to young evaluation scholars and practitioners who may not feel they are ready to edit an NDE issue, but are interested in getting involved. The session will focus on strategies for getting involved, including connecting young scholars with mentors from the NDE editorial board to support and promote developing 'new' directions in evaluation.

Session Title: Credible Cultural Competence: Stakeholder Perceptions of Sociocultural Characteristics and Superficial Competency Strategies
Multipaper Session 641 to be held in Capitol Ballroom Section 6 on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Chris LS Coryn,  Western Michigan University,  chris.coryn@wmich.edu
Discussant(s):
Katherine Tibbetts,  Kamehameha Schools,  katibbet@ksbe.edu
Abstract: This session will generate discussion on issues of demonstrating credible cultural competence from the views of different stakeholders. In the first presentation, the authors examine common cultural competency 'strategies' in evaluation that rely heavily on 'tokenism.' The authors discuss the need to move toward a genuinely reflexive and contextual cultural competence, focusing on the difficulties of proving to clients that the evaluation team is sufficiently, and appropriately, culturally competent. The second presentation takes the perspective of the client, specifically the INGO Heifer International. Heifer has undergone extensive work to build evaluation capacity around its core ethics, in which cultural competency is embedded. The author will highlight some of the 'strategies' mentioned in the first presentation as they have appeared at Heifer and how Heifer has proceeded differently. Discussion will follow, including whether the Heifer model is viable and what else an evaluation team would need to credibly demonstrate its genuine competence.
Moving to Genuine: Credible Cultural Competence and Stakeholder Believability
Anne Cullen,  Western Michigan University,  anne.cullen@wmich.edu
Stephanie Evergreen,  Western Michigan University,  stephanie.evergreen@wmich.edu
While it is almost universally accepted that evaluators need to be culturally competent, there has been little discussion of how evaluators can demonstrate cultural competencies to clients. For many, cultural competency means evaluators have the same sociocultural background as program recipients. In fact, this is often seen as a way to ensure credibility. However, this is inherently problematic for the field of evaluation. Such assumptions would require that the evaluator of a program designed to help the homeless population be homeless, for example. This session will explore how we can move beyond equating competency with sociocultural similarity. We begin by defining cultural competency and raising questions about the effectiveness of current strategies. Throughout the presentation we will use stories of our personal struggles to demonstrate our cultural competence as evaluators to clients and stakeholders. We intend to engage the audience in exploring potential solutions designed to overcome these issues.
Building Multicultural and Cross-Cultural Aspects of Evaluation through Values-Based Holistic Community Development Model: Sixty Years of Heifer International's Experiences in International Development
Tererai Trent,  Heifer International,  tererai.trent@heifer.org
In an increasingly diversified world, having an evaluative culture that systematically embeds multicultural competence is pivotal to the success of any international non-governmental organization (INGO). The objective of this session is to share how Heifer International, an INGO working in more than 160 countries world-wide, has defined and achieved organization-wide multicultural competence. This session will also show how Heifer International integrated five essential elements that contribute to systematic multicultural and cross-cultural competence: (1) valuing diversity, (2) cultural self-assessment capacity, (3) consciousness of the "dynamics" inherent when cultures interact, (4) institutionalized cultural knowledge, and (5) development of adaptations to service delivery reflecting an understanding of diversity between and within cultures. Heifer International firmly believes that embracing these elements is essential to becoming effective and competent. Finally, we will demonstrate how these elements are manifested at every level of the organization and reflected in attitudes, structures, and organizational policies.

Session Title: Government Evaluation TIG Business Meeting
Business Meeting Session 642 to be held in Capitol Ballroom Section 7 on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Government Evaluation TIG
TIG Leader(s):
David J Bernstein,  Westat,  davidbernstein@westat.com
Chair(s):
David J Bernstein,  Westat,  davidbernstein@westat.com
Abstract: The Government Evaluation TIG will hold its annual Business Meeting during the AEA 2008 conference. Topics include succession planning for TIG leadership, ensuring the relevance of the TIG, "hot issues" affecting the conduct of government sponsored evaluation work, and soliciting ideas for TIG-sponsored programs and services.

Roundtable: The Mississippi Delta Children’s Partnership: The Power of Capacity Building to Educate Children
Roundtable Presentation 643 to be held in the Limestone Boardroom on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Presenter(s):
Bettye Fletcher,  Professional Associates Inc,  bwfletcher@paionline.org
LaTonya Lott,  Professional Associates Inc,  llott@paionline.org
Aisha Fletcher,  Professional Associates Inc,  afletcher@paionline.org
Gloria Billingsley,  Professional Associates Inc,  gbillingsley@paionline.org
Abstract: The Mississippi Delta Children’s Partnership was launched in response to the grave need to engage rural, low-wealth Delta communities in a collaborative process to improve the academic and social well-being of children and their families. The theory of change that undergirds this partnership is the ecological model of human development, with the child as the central focus. One hypothesis of the partnership is that, through structured out-of-school-time activities, the children (ages 5-12) at the five sites in the Delta will improve their academic performance. Reading and math outcomes will be measured through: 1) pre- and post-tests administered at the site level using the Leap Track Instructional System, and 2) comparison of scores on the state-administered Mississippi Curriculum Test. T-test analyses will be conducted to examine significant differences between pre- and post-test Leap Track mean scores and mean score differences between participants and non-participants of the partnership on the Mississippi Curriculum Test.
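For readers unfamiliar with these two comparisons, the sketch below illustrates them in Python with SciPy. It is only an illustrative sketch, not part of the evaluation plan: the data file and column names (participant, leaptrack_pre, leaptrack_post, mct_score) are hypothetical placeholders, and the actual analysis may differ.

import pandas as pd
from scipy import stats

# Hypothetical file with one row per child; column names are assumptions.
scores = pd.read_csv("leaptrack_scores.csv")

# 1) Paired t-test: pre- vs post-test Leap Track scores for program participants.
participants = scores[scores["participant"] == 1]
t_paired, p_paired = stats.ttest_rel(participants["leaptrack_pre"],
                                     participants["leaptrack_post"])

# 2) Independent-samples t-test: participants vs non-participants on the
#    state-administered Mississippi Curriculum Test.
mct_part = scores.loc[scores["participant"] == 1, "mct_score"]
mct_nonpart = scores.loc[scores["participant"] == 0, "mct_score"]
t_ind, p_ind = stats.ttest_ind(mct_part, mct_nonpart)

print(f"Leap Track pre/post: t={t_paired:.2f}, p={p_paired:.3f}")
print(f"MCT participants vs non-participants: t={t_ind:.2f}, p={p_ind:.3f}")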

Roundtable: Re-framing a Deficit-based Evaluation Context Into an Asset-based Evaluation Approach
Roundtable Presentation 644 to be held in the Sandstone Boardroom on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Emma Norland,  e-Norland Group,  enorland2@comcast.net
Karie Phillips,  Denver Zoo,  kphillips@denverzoo.org
Joe E Heimlich,  Institute for Learning Innovation,  heimlich@ilinet.org
Chasta Beals,  Denver Zoo,  cbeals@denverzoo.org
Abstract: This session offers strategies for re-framing a deficit-based evaluation context into an asset-based approach to evaluation. These strategies were extremely useful during a complex, labor-intensive, time-sensitive evaluation. The focus of the evaluation was W.I.N.-W.I.N., a very large, very successful 12-year-old conservation education program created by Denver Zoo and the Colorado Division of Wildlife and offered to Pre-K-5th grade students in urban, low-income schools in the Denver Metro area. Funding challenges and changing priorities spurred program managers to contract with professional evaluators to conduct a study to determine program viability, but they stipulated that program staff were to be used as the major resources for conducting the study. The asset-versus-deficit frame represented in the initial relationship (experts using non-experts as resources, i.e., ‘haves’ using ‘have-nots’) gradually blurred as all involved recognized the critical contributions that specialized expertise makes, creating an asset-based paradigm for the evaluation.

Roundtable: An Examination of the Link Between Professional Development, Teacher Learning, and Student Outcomes
Roundtable Presentation 645 to be held in the Marble Boardroom on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Rolanda Bell,  Arizona Department of Education,  rbell@ade.az.gov
Ayesha Boyce,  Arizona Department of Education,  aboyce@ade.az.gov
Abstract: This project offers an explanation of how professional development impacts student reading achievement. Although many researchers have examined factors of effective professional development and how teachers benefit from such training, few researchers have made an empirical connection between imparting knowledge, teacher learning, and student outcomes. The model we propose contends that the relationship between professional development and student achievement is impacted by a teacher's ability to use the content knowledge obtained during professional development to make decisions about the use of instructional strategies. The focus of this project is two-fold: 1) Does professional development impact student achievement? and 2) Is the effect of professional development on student achievement facilitated by the teacher’s ability to use the knowledge gained to engage students in activities associated with academic achievement in reading?

Session Title: Accounting for Evaluation Readiness and Program Evaluability in Building a Membership Organization's Capacity to Conduct Outcome Assessment
Demonstration Session 646 to be held in Centennial Section A on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Barry B Cohen,  Rainbow Research Inc,  bcohen@rainbowresearch.org
Mia Robillos,  Rainbow Research Inc,  mrobillos@rainbowresearch.org
Abstract: Oftentimes funders require grantees to conduct an evaluation on the assumption that they are all equally ready to do so. Little attention is paid to evaluation capacity and experience, or to organizational issues that may preclude a timely and well-thought-out evaluation. Even with technical assistance, some grantees may not be ready for evaluation when they face more pressing issues such as loss of funding, staff turnover, capital campaigns, or strategic planning. Also in this mix may be experienced grantees that do not feel the need for evaluation training or coaching. This situation can significantly reduce buy-in to the evaluation. Rainbow will discuss the 'triage' process it implemented (an evaluation readiness assessment, an evaluability assessment, and technical assistance) in assisting a group of 43 grantees with outcome assessment, a process that has been instrumental in delivering evaluation capacity-building services on a 'just-in-time' basis. The evaluation readiness and program evaluability instruments will also be demonstrated.

Session Title: Alternative Methods for Analyzing Longitudinal Data: Moving Beyond Hierarchical Linear Models (HLM) and Repeated Measures Analysis of Variance (ANOVA)
Expert Lecture Session 647 to be held in Centennial Section B on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Patrick McKnight,  George Mason University,  pmcknigh@gmu.edu
Abstract: There is more than one way to analyze longitudinal data but the literature seems to indicate otherwise. Evaluators frequently use longitudinal designs but most tend to favor just one analytic procedure. The two currently favored procedures are hierarchical linear models (HLM) and some variant of repeated measures analysis of variance (RM ANOVA). While these procedures are useful, they do not always address the primary evaluation question. Specifically, many evaluators wish to know whether the participants change over time in ways that can be understood and applied to further program development. Evaluators who are aware of alternative longitudinal analytic procedures such as growth curve modeling can better address that primary question. The purpose of this talk is to introduce this alternative procedure and offer suggestions about the conditions where each procedure may be optimally suited to answer that primary question.
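As a minimal illustration of the kind of alternative the lecture points to, the sketch below fits a simple linear growth curve model as a mixed-effects model in Python with statsmodels. It is an assumed example rather than material from the talk: the long-format data file and its column names (id, time, outcome) are hypothetical placeholders.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per measurement wave.
long_df = pd.read_csv("longitudinal_outcomes.csv")

# A random intercept and a random slope for time give each participant an
# individual growth trajectory around the average (fixed-effect) trajectory.
model = smf.mixedlm("outcome ~ time", data=long_df,
                    groups="id", re_formula="~time")
result = model.fit()
print(result.summary())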

Session Title: Theoretical and Practical Considerations in National Evaluation Policies: The Case of the Netherlands
Expert Lecture Session 648 to be held in Centennial Section C on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Presidential Strand
Chair(s):
Leslie J Cooksy,  University of Delaware,  ljcooksy@udel.edu
Presenter(s):
Frans Leeuw,  University of Maastricht,  flleeuw@cuci.nl
Abstract: This presentation describes the development and evolution of a national evaluation policy, discusses lessons learned, and identifies unresolved issues. The presentation considers questions of evaluation policy that cut across national and other borders, including: what agencies champion national policies (and why); what is the role of an evaluation professional association in evaluation policy; and what are the pros and cons of university involvement? Frans Leeuw, the presenter, draws on his extensive experience with evaluation policy in the Netherlands. In 1991, he conducted a government-wide study of the Dutch Central Government Policy on Evaluation. He has also been the Director of Performance and Audit in the Dutch version of the U.S. Government Accountability Office, the Director of the agency responsible for the evaluation of justice policies and programs, and the President of the Dutch Evaluation Society. Leeuw also has experience in international settings, enabling him to identify lessons with broad applicability.

Session Title: The Impact of Experimental Designs and Alternative "Evidence" Sources
Multipaper Session 649 to be held in Centennial Section F on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Mehmet Ozturk,  Arizona State University,  ozturk@asu.edu
The Impact of Randomized Experiments on the Education Programs they Evaluate
Presenter(s):
Anne Chamberlain,  Johns Hopkins University,  amchambe@yahoo.com
Abstract: Recent efforts to strengthen the quality of products and processes in the education arena have resulted in legislation requiring randomized methodology where federal funds are involved in either the use or evaluation of education initiatives. While the desire to improve education and education evaluation is both intuitive and laudable, it is not without potential drawbacks. In addition to the benefits of randomized experiments, drawbacks involving cost and ethics are well documented in the literature. However, there is another potential issue that has gone virtually unrecognized: the impact of conducting a randomized experiment on the implementation of the education programs (the evaluands) themselves. The purpose of this presentation is to share early findings from a study of how the implementation of education programs is affected by participation in randomized evaluation. Three questions will be addressed: Do education programs change during randomized evaluation? How? Why should this matter to policymakers?
Using Program Evaluation to Document “Evidence” for Evidence-Based Strategies
Presenter(s):
Wendi Siebold,  Evaluation, Management and Training Associates Inc,  wendi@emt.org
Fred Springer,  Evaluation, Management and Training Associates Inc,  fred@emt.org
Abstract: National registries of evidence-based prevention have traditionally relied on experimental-design studies as the “gold standard” for establishing evidence of effectiveness. This approach has contributed to a research-to-practice gap for two major reasons. First, programs have been accepted as the primary unit of analysis. Second, the experimental design is based on the logic of creating “most similar systems” to isolate the effects of an experimental variable (e.g., a program). This paper draws on cross-cultural research traditions to demonstrate how “most different system” designs such as multi-site evaluations (MSEs), meta-analysis, and more recent studies of practice-based evidence and systematic review are more appropriate for developing standards of evidence-based practice that are robust across realistic implementation diversity. Findings from the National Cross-site Evaluation of High Risk Youth Programs, and select studies of violence prevention, will demonstrate the application of such evaluation methodologies for establishing evidence-based strategies in violence prevention.

Session Title: Tools to Measure Youth Camping Outcomes
Demonstration Session 650 to be held in Centennial Section G on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Allison Nichols,  West Virginia University,  ahnichols@mail.wvu.edu
Niki Nestor McNeely,  The Ohio State Extension,  mcneely.1@osu.edu
Jill Martz,  University of Montana Extension,  jmartz@montana.edu
Sarah Baughman,  Virginia Polytechnic Institute and State University,  baughman@vt.edu
Abstract: For the past four years, with funding and support from the Army Youth Camping Project and the Cooperative State Research, Education and Extension Service (CSREES), a group of Cooperative Extension professionals formed the National 4-H Camping Research Consortium (NCRC) and developed evaluation tools that are designed to measure the context of the camping experience as it relates to the essential elements of youth development and life-skill development. This demonstration session will give participants an opportunity to examine three logic models and two questionnaires in relation to their usefulness for collecting outcome data at camps. Attendees will be given an opportunity to give feedback and suggestions for further projects to NCRC members.

Session Title: Using the Cyber-Infrastructure to Build Evaluation Capacity
Multipaper Session 652 to be held in Mineral Hall Section A on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Marco Andrade,  Maine Center for Public Health,  mandrade@mcph.org
The Netway: Utilizing Cyberinfrastructure to Strengthen Evaluations
Presenter(s):
Jennifer Brown,  Cornell University,  jsb75@cornell.edu
Sarah Hertzog,  Cornell University,  smh77@cornell.edu
Claire Hebbard,  Cornell University,  cer17@cornell.edu
William Trochim,  Cornell University,  wmt1@cornell.edu
Abstract: The Netway is a web-based cyberinfrastructure that was developed for, and is being pilot-tested in, current research. It constitutes a common platform that can be utilized in the planning and management of evaluations across the entire spectrum of programs and is a central tool in evaluation partnerships. The Netway enables practitioners to enter information about an educational program (inputs, assumptions, contextual issues, activities, outcomes) and its evaluation (questions, participants, measures, design, analysis, reporting) to create a logic model, pathway model, and an evaluation plan. The term “Netway” is derived from the phrase “networked pathway” and refers to the system of programs or projects that is common in educational initiatives. The Netway cyberinfrastructure is a creative incorporation of technology that fundamentally changes the nature of evaluation practice for both the evaluator and the practitioner. It has the potential to be a transformative mechanism for evaluation in the 21st century.
Healthy Maine Partnerships: Using the Web-Based Knowledge-Based Information Technology© Prevention System to Monitor and Evaluate Coalition Activities Within the Context of the Minimum Common Program Objectives
Presenter(s):
Marco Andrade,  Maine Center for Public Health,  mandrade@mcph.org
Geoffrey Miller,  Maine Department of Health and Human Services,  geoff.miller@maine.gov
Pamela Bruno MacDonald,  Maine Center for Public Health,  pbrunomac@earthlink.net
Abstract: The Healthy Maine Partnerships were established in 2001 using tobacco settlement funds to create a coordinated state-local prevention and health promotion services delivery system. In the spring of 2007, a new partnership was created through the development of a braided-funding RFP to create a new Public Health Infrastructure. The result of this RFP was the creation of eight new public health districts and the funding of 28 coalitions covering the entire state. The local coalitions use a comprehensive approach in implementing activities within their communities and are responsible for reporting their progress through a customized web-based monitoring tool called KIT© Prevention. This tool serves the dual purpose of informing the state partners of local comprehensive efforts across seven health promotion categories and of evaluating local efforts. The tool was customized in order to evaluate local activities within the context of the Minimum Common Program Objectives of eight state programs.

Session Title: Needs and Methodologies in Various Contexts: Community, School, and International
Multipaper Session 653 to be held in Mineral Hall Section B on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Needs Assessment TIG
Chair(s):
Ann Del Vecchio,  Alpha Assessment Associates,  delvecchio.nm@comcast.net
Discussant(s):
Ann Del Vecchio,  Alpha Assessment Associates,  delvecchio.nm@comcast.net
Assessing Evaluator Training Needs in Asia-Pacific Countries: Insights from a Delphi Study
Presenter(s):
Hsin-Ling Hung,  University of Cincinnati,  hsonya@gmail.com
James Altschuld,  The Ohio State University,  altschuld.1@osu.edu
Yi-Fang Lee,  National Chi Nan University,  lee.2084@yahoo.com
Abstract: Identifying training needs for evaluators is critical for the growth of the evaluation profession, as more qualified and competent individuals will be required to meet the increased demand for services. The focus of this paper is on such needs as identified through a research project on the status and challenges of educational program evaluation in the Asia-Pacific region. Needs assessment (NA) was utilized to examine discrepancies between the current status of training and what should be done. A web-based survey was used to collect data. The findings and methodological issues will be described. Implications for academic and informal training programs will also be covered.
Assessment of Technical Assistance Needs of Federally Funded Community-Based HIV Grantees Targeting African-Americans
Presenter(s):
Wilhelmena Lee Ougo,  MayaTech Corporation,  wlee-ougo@mayatech.com
Victor Ramirez,  MayaTech Corporation,  vramirez@mayatech.com
Kelly O'Bryant,  MayaTech Corporation,  kobryant@mayatech.com
Mesfin S Mulatu,  MayaTech Corporation,  mmulatu@mayatech.com
Abstract: Several community-based programs have received Center for Substance Abuse Treatment (CSAT) Minority AIDS Initiative (MAI) funding to address the disproportionate burden of HIV/AIDS in racial/ethnic minority populations. Technical assistance (TA) services are provided to MAI projects to help ensure their success. This study assesses the TA requests of CSAT MAI grantees that target African Americans. Data from 30 grantees targeting African Americans, and the 54 TA requests they submitted, were abstracted and analyzed. The most frequently targeted TA population subgroup was women with children. The most frequently requested TA categories were motivational interviewing, clinical training, recruitment, and sustainability. It appears that HIV capacity-building grantees differ in their TA priorities and in the characteristics of their primary target subgroup populations. Understanding these priorities and TA request trends will help inform proactive preparation for effective service delivery by programs that address the HIV epidemic in minority communities.

Session Title: Power to the People: Engaging Communities in Advocacy Evaluation
Multipaper Session 654 to be held in Mineral Hall Section C on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Zoe Clayson,  Abundantia Consulting,  zoeclay@abundantia.net
Evaluating a Community Development Initial Public Offering (IPO): Case Study of Market Creek Partners LLC, San Diego, California
Presenter(s):
Victor Rubin,  Policy Link,  vrubin@policylink.org
Zoe Clayson,  Abundantia Consulting,  zoeclay@abundantia.net
Abstract: In early 2006, the Department of Corporations for the State of California issued a permit for the sale of securities for Market Creek Partners, LLC, the company that owns Market Creek Plaza. The resulting IPO represented six years of teamwork between the Jacobs Family Foundation and key stakeholders. Based on a double bottom line strategy, the goals of this project were to 1) secure 450 community “stakeholders” investing a total of $500,000; 2) create the opportunity for residents to build individual and community assets while rebuilding their neighborhoods; and 3) transfer control of Market Creek Plaza to people who have a stake in the well-being of their communities. While the baseline evaluation employed four strategies, this paper focuses on the methodology and results of the pathway analysis, which identified and analyzed three intersecting paths: 1) the regulations and financing path; 2) resident engagement; and 3) the role of the Jacobs Family Foundation.
The Indianapolis Local Learning Partnership (LLP): The Story of How Data (Really/Truly) Becomes Power
Presenter(s):
Scott Hebert,  Independent Consultant,  shebert@sustainedimpact.com
Cynthia Cunningham,  Cunningham Consulting Inc,  cunninghamconsulting@earthlink.net
Abstract: Making Connections-Indianapolis is a 10-year initiative to improve outcomes for vulnerable children in disadvantaged neighborhoods, through strengthening families’ connections to economic opportunity, positive social networks, and effective services and supports. A key component of Making Connections-Indianapolis is its Local Learning Partnership (LLP) -- a consortium of people and organizations created to promote the use of data to inform and propel practice and system improvements that will lead to family strengthening and neighborhood transformation. The paper will examine the experience of the Indianapolis LLP in establishing a learning community that focuses on valid, relevant data to identify better practices, effective policies and other change strategies, and to evaluate the results that are achieved. In particular, the paper will highlight the ways in which the LLP uses data in its advocacy activities, with particular attention to the development of neighborhood-based learning partnerships that empower residents to use data to improve neighborhood conditions.

Session Title: Using the Getting to Outcomes (GTO) Model for Systems Change Within Child and Family Mental Health Services
Demonstration Session 655 to be held in Mineral Hall Section D on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Jason Katz,  University of South Carolina,  katzj@mailbox.sc.edu
Abraham Wandersman,  University of South Carolina,  wanderah@gwm.sc.edu
Pamela Imm,  LRADAC,  pimm@lradac.org
Victoria H Chien,  University of South Carolina,  victoria.chien@gmail.com
Abstract: Getting to Outcomes (GTO) has assisted practitioners in planning, implementing, and evaluating programs within many contexts, including substance abuse prevention, teen pregnancy prevention, and promoting developmental assets. As part of the OASIS state infrastructure grant in South Carolina (SC-SIG), GTO is being utilized to assist four demonstration sites in planning, implementing, and evaluating systems change around family involvement in systems of care in child and family mental health services. The ten accountability questions of GTO, which will be customized to fit this particular content area, include needs/resource assessment, identifying goals and objectives, best practices, fit of programs into host organization, planning, implementation and process evaluation, outcome evaluation, continuous quality improvement, and sustainability. The ten steps will provide a strategic process for program stakeholders in reviewing and modifying policies and procedures relevant to family-driven care.

Session Title: Practical Methodology for Evaluating Advocacy Efforts
Multipaper Session 656 to be held in Mineral Hall Section E on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Annette Gardner,  University of California San Francisco,  annette.gardner@ucsf.edu
Using Case Studies to Evaluate Policy and Advocacy
Presenter(s):
Annette Gardner,  University of California San Francisco,  annette.gardner@ucsf.edu
Claire Brindis,  University of California San Francisco,  claire.brindis@ucsf.edu
Lori Nascimento,  The California Endowment,  lnascimento@calendow.org
Sara Geierstanger,  University of California San Francisco,  sara.geierstanger@ucsf.edu
Abstract: In 2006 and 2007, the Institute for Health Policy Studies at the University of California, San Francisco, as part of its ongoing evaluation of The California Endowment Clinic Consortia Policy and Advocacy Program, expanded its data collection activities and developed two types of case studies: grantee best-practices case studies and policy advocacy case studies. UCSF worked with 17 grantees to develop their own unique “story” of an exemplary practice, be it policy advocacy, partnerships or quality improvement. These narratives were analyzed for crosscutting themes and progress in achieving longer-term outcomes. Second, we developed in-depth case studies of three different policies to explain how grantee advocacy contributes to policy change. The findings indicate that the Program has afforded grantees an opportunity to experiment and develop novel and sustainable solutions as well as to focus on the unique needs of their communities. Key factors included staff expertise, the ability to participate early and often during the policy process, business acumen, the ability to build coalitions and mobilize stakeholders, and the ability to leverage partnerships with member clinics.
Second Tier Advocacy and Policy Change Evaluation
Presenter(s):
John Risley,  Greater Kalamazoo United Way,  jrisley@gkuw.org
Abstract: This paper proposes an evaluation checklist for organizations pursuing advocacy and policy change on a secondary level. The checklist builds on the existing advocacy and policy change evaluation literature and incorporates learning from political science about what works in influencing elected officials. The paper focuses on how organizations can evaluate their own advocacy and policy change efforts. Drawing on the experience of the Greater Kalamazoo United Way, the paper presents the perspective of an organization pursuing policy advocacy at a supportive, rather than lead-organization, level.

Session Title: Using NVivo to Improve Rigor in Evaluation
Demonstration Session 657 to be held in Mineral Hall Section F on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Dan Kaczynski,  University of West Florida,  dkaczyns@uwf.edu
Michelle Salmona,  University of Technology Sydney,  m.salmona@pobox.com
Abstract: This workshop will demonstrate how NVivo software can be used by evaluators for qualitative data analysis. Specific strategies will be provided to improve the rigor and content of formative and summative evaluation reporting. Discussion will show the software as a technological tool that promotes transparency of qualitative methodology and evaluation practice. Through a coding and analysis activity, the workshop will examine how meaning is constructed from qualitative data through the development and use of the code structure. Code structure design will be discussed in relation to two key NVivo features: data management and data analysis. The application and benefits of this process in team evaluations will also be demonstrated. Session participants will be encouraged to share and discuss techniques to improve the quality of evaluations and strategies to better prepare for future evaluations.

Session Title: Stakeholder Involvement in the Development of a Multi-Purpose Evaluation System for a Public Mental Health System
Multipaper Session 658 to be held in Mineral Hall Section G on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Sandra Sundeen,  University of Maryland Baltimore,  ssundeen@psych.umaryland.edu
Abstract: Evaluators who work with public mental health authorities can provide policy makers with valuable information about mental health system effectiveness at multiple levels. In Maryland, the State mental health agency established a partnership with the University of Maryland, Baltimore, to develop an outcomes measurement system that would provide state officials with information about the outcomes of state-funded services in order to make informed decisions about service effectiveness and, ultimately, priorities for reimbursement. Outcomes Measurement System results can also be used to inform local quality monitoring activities, service provider program development, and clinical decision-making. This session will describe the participatory process that was undertaken to develop the Outcomes Measurement System, including the involvement of multiple stakeholders. The challenges encountered in analyzing the data and presenting it in a way that addresses the primary objective of usefulness for policy decision-making will also be discussed.
Using Evaluation to Support State Mental Health Policy: Development and Implementation of a System to Measure Outcomes of Services
Diana Seybolt,  University of Maryland Baltimore County,  dseybolt@psych.umaryland.edu
Sandra Sundeen,  University of Maryland Baltimore,  ssundeen@psych.umaryland.edu
Through a collaborative partnership with the Maryland mental health authority, the Systems Evaluation Center of the University of Maryland, Baltimore developed, pilot-tested and participated in the implementation of an Outcomes Measurement System. This paper will focus on the policy implications of this project, including data requirements imposed by the Federal and State governments, along with the expectation to address the frequently competing concerns of other stakeholders. The participatory process that was implemented throughout all phases of the project will be discussed. Examples will be given of the policy challenges that were encountered and the ways in which they were addressed. Current issues related to maintenance, evaluation and potential revision of the system will be presented.
Evaluating Individual Changes Over Time in the Substance Abuse and Mental Health Services Administration's National Outcome Measures: Methodological Challenges and Complexities
Timothy Santoni,  University of Maryland Baltimore County,  tsantoni@comcast.net
Qiang Qian,  University of Maryland Baltimore,  qqian@psych.umaryland.edu
Even before the challenges of the Maryland Outcomes Measurement System shifted from data collection to data analysis and interpretation, several recording issues in the implementation of the collection system were identified. Data validity checks were missing for many fields, and the collection system was changing certain entries. Special attention was needed to understand the meaning of blanks and zeros. Data inconsistencies were identified that had to be rectified. A method for selecting t(0) and t(n) forms was required. What difference constitutes 'real change', and the significance of positive, negative, and neutral maintenance, had to be determined. The effect of the calendar on certain items had to be considered. All of these considerations then had to be presented in formats and with explanations appropriate to audiences at several levels of sophistication. Sample analyses adopted to address these issues will be presented.
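A minimal sketch of the kinds of checks described above is given below, using pandas in Python. It is an assumed illustration rather than the presenters' actual procedures: the file name, column names (consumer_id, interview_date, score), and the change threshold are hypothetical placeholders.

import pandas as pd

# Nullable integer dtype keeps blank entries as <NA> instead of silently
# treating them as zeros.
forms = pd.read_csv("oms_forms.csv", dtype={"score": "Int64"},
                    parse_dates=["interview_date"])

# Distinguish true zeros from missing entries before any analysis.
print("blank entries:", forms["score"].isna().sum())
print("recorded zeros:", (forms["score"] == 0).sum())

# Select each consumer's first recorded t(0) and most recent t(n) score,
# then compute the change between them.
forms = forms.sort_values(["consumer_id", "interview_date"])
t0 = forms.groupby("consumer_id")["score"].first()
tn = forms.groupby("consumer_id")["score"].last()
change = tn - t0

# Flag 'real change' only when it exceeds a pre-specified threshold (assumed here).
REAL_CHANGE_THRESHOLD = 5
print((change.abs() >= REAL_CHANGE_THRESHOLD).value_counts())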

Roundtable: Differentiating Terms of Art and Building Infrastructure for Systems Assessment
Roundtable Presentation 659 to be held in the Slate Room on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Deborah Duran,  National Institutes of Health,  durand@od.nih.gov
Abstract: For years, performance measurement has been part of management, with an initial focus on single or similarly focused projects. As accountability demands expand, the scope and unit of analysis now include whole organizations and large, complex, diverse, and multi-site programs. In many areas, available evaluation language and methodologies have been inappropriately fitted onto these different demands. However, systems assessments require distinct terms of art and methodologies to describe mechanisms of effect and holistic impact, and these are not yet available. Therefore, cognitive, IT, linguistic, social, physical and financial infrastructure must be developed that enables systems assessment independence and supports collaborations between areas, in order to appropriately measure performance. Synthesis of approaches and building collaborative infrastructure will be addressed. Discussions will focus on system-level critical intervention points and metrics, including a draft systems assessment tool kit.

Session Title: Evaluating Organizational Capacity of Disaster and Emergency Management Organizations
Panel Session 660 to be held in the Agate Room Section B on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Jared Raynor,  TCC Group,  jraynor@tccgrp.com
Abstract: A central aspect of doing emergency and disaster work is the internal capacity of organizations to do the work effectively, including skills, communications, management capacity, adapting to a changing environment, and leadership. This session will outline a framework for evaluating the organizational capacity of humanitarian organizations, developed specifically for their unique operating environment. Based on a framework of four core capacities (leadership, adaptive, management and technical), presenters will discuss the importance of evaluating organizational capacity, tools for doing the evaluation, and how to relate organizational capacity evaluation to programmatic evaluation work (e.g. collaborations, staff skills, strategic niche, etc.).
Assessing the Environmental Context of Humanitarian Organizations
Shelly Kessler,  TCC Group,  skessler@tccgrp.com
We begin by articulating a framework for organizational capacity that is general to the nonprofit field as a whole and presented within the specific context of humanitarian organizations. This presentation will outline specifics of the operating environment for humanitarian work and discuss the concept of 'continuous discontinuity', pertaining to how humanitarian organizations are in a constant state of response. The presentation will outline a cohesive framework for examining organizational capacity that is appropriate to such an environment: the core capacity model. The capacities in the model, identified by TCC Group over its 30 years of experience working with nongovernmental agencies, are leadership, adaptability to changing environments, effective management, and technical expertise. Ms. Kessler has extensive field experience with humanitarian work, including 10 years with CARE, and has consulted with numerous non-governmental organizations, helping them look at organizational capacity.
Evaluating Organizational Capacity of Humanitarian Organizations
Jared Raynor,  TCC Group,  jraynor@tccgrp.com
In order to have a well-functioning organization, it is important to have organizational stability. Building on the environmental context presented in the first presentation, this presentation will explore specifics of evaluating organizational capacity. Utilizing the core capacities model, the presentation will describe organizational structures and core capacities that may be employed by HAO leaders, managers, field staff, and by HAO funders, to improve organizational effectiveness. In addition to presenting specifics about important organizational characteristics drawn from the literature and evaluations, the presentation will articulate tools and a methodology for examining organizational capacity in humanitarian organizations. Mr. Raynor has worked in several areas of humanitarian work and organizational capacity building, including field work in Azerbaijan and Central America and work at the United Nations, and brings significant experience in evaluating organizational capacity.

Session Title: Are States' K-12 Assessment Policies Inclusive? Using Established Principles and Characteristics to Find Out
Panel Session 661 to be held in the Agate Room Section C on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Martha Thurlow,  National Center on Educational Outcomes,  thurl001@umn.edu
Abstract: For over a decade, our nation's schools have refocused their efforts toward high standards for the learning of all children, supported by assessment and accountability systems that will ensure that the public knows about the progress of all students toward those standards. Inclusion of all students in accountability systems is now reflected in public education laws, such as the No Child Left Behind Act of 2001 and the Individuals with Disabilities Education Improvement Act of 2004. How are these changes reflected in policies and practices that affect students with disabilities? This panel will address the need to evaluate special education policies at the state and district level with an emphasis on inclusion. The panelists will describe a set of principles and characteristics for inclusive assessment systems that can be used for evaluation purposes, and then show how these principles were used to evaluate states' assessment accommodations monitoring procedures.
The Components of an Inclusive Assessment System
Martha Thurlow,  National Center on Educational Outcomes,  thurl001@umn.edu
There has been remarkable progress during the past 15 years in moving toward more inclusive assessment systems. Positive consequences of including students with disabilities emerged: performance increased; expectations for students rose; access to the curriculum increased; teachers became more skilled at teaching students with disabilities. However, unintended negative consequences were identified as well. Enhancing the positive consequences and reducing the negative consequences can be accomplished by carefully examining and evaluating the assumptions on which assessment and accountability systems are based. In this presentation, the panelist will provide the background context in which the principles and characteristics for inclusive assessment systems were created. They comprise seven core principles, each with underlying characteristics, that together represent the essential components of an inclusive assessment system. These seven principles will be presented, along with their underlying rationale.
Using Principles and Characteristics for Inclusive Assessment Systems to Evaluate States' Accommodations Monitoring Policies and Procedures
Laurene Christensen,  National Center on Educational Outcomes,  chri1010@umn.edu
Beginning in 2004, the United States Department of Education began a peer review process for states' standards and accountability systems. In this review, it was observed that many states did not adequately address accommodations policies for students with disabilities. Furthermore, states were concerned that they did not have appropriate procedures in place to monitor accommodations including whether a student receives the accommodations on his/her Individualized Education Plan and if the student actually uses the accommodation he or she receives. At the request of states, we began a review of state accommodations monitoring policies and procedures, using the principles and characteristics for inclusive assessment systems as the basis for our evaluation. This panelist will report on this project, including the project design and its findings.

Session Title: Evaluating Integrated Intervention Models: Response to Intervention (RtI) and Positive Behavioral Interventions and Supports (PBIS)
Demonstration Session 662 to be held in the Granite Room Section A on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Patricia Mueller,  Evergreen Educational Consulting,  eec@gmavt.net
Brent Garrett,  Pacific Institute for Research and Evaluation,  bgarrett@pire.org
Abstract: State Departments of Education and public schools have been exploring the implementation of evidence-based practices to improve student achievement results on local and state assessments and to ensure that students experience safer and healthier learning environments. For the most part, these models, Response to Intervention (RtI) and Positive Behavioral Interventions and Supports (PBIS), have operated distinct from one another. Now, the two innovations are being integrated to create systems change across the K-12 setting. This demonstration will first briefly describe the RtI and PBIS models and the current assessment instruments employed for the respective models. Next, the rationale for development of an integrated tool will be presented. The session will conclude with the presenters describing an instrument they have constructed to measure the success of the integrated model. Discussion will include issues relating to validity, fidelity of implementation, and instrument utilization by multiple parties.

Session Title: Evaluating Development Programs: A Health Impact Assessment Approach
Demonstration Session 663 to be held in the Granite Room Section B on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
James Edwin,  University of Alaska,  afjae1@uaa.alaska.edu
Abstract: Health impact assessment (HIA) may be conducted at various stages in the development of an identified project, program or policy: prospective HIA is carried out in the developmental stage, when findings and recommendations can influence decision making; concurrent HIA is carried out while the identified proposal is being implemented; and retrospective HIA is carried out after the proposal has been implemented. Given a proposed policy or project, HIA is conducted in these steps: screening, scoping, appraisal, reporting, and evaluation. Prospective HIA is comparable to a situational analysis and needs assessment; concurrent HIA is comparable to formative evaluation; and retrospective HIA is comparable to summative evaluation. The strength of HIA is the ability to assess scientific evidence to influence policy-making decisions while assessing positive and negative health impacts and possible inequalities in the population. One main weakness is delineating the causal pathway of the determinants of health.

Session Title: Effectiveness of Evaluation From the Perspective of Outcomes on Research Units and Research-supportive and Administrative Departments in National Institute of Advanced Industrial Science and Technology (AIST)
Multipaper Session 664 to be held in the Granite Room Section C on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Osamu Nakamura,  National Institute of Advanced Industrial Science and Technology,  osamu.nakamura@aist.go.jp
Discussant(s):
Katsuhisa Kudo,  National Institute of Advanced Industrial Science and Technology,  k.kudo@aist.go.jp
Abstract: With the introduction of an evaluation system from the perspective of outcomes, research units in AIST have developed clear scenarios for research and development toward outcomes by preparing roadmaps with clear goals, milestones, and benchmarks. The leaders of research units communicate closely with members to discuss their strategies for inducing innovation. The evaluation system has encouraged research units to manage their research activities strategically. At the same time, the evaluation of research-supportive and administrative departments has led to a variety of improvements in their services and has promoted changes in members' awareness. These departments have taken on the challenge of long-term approaches to collaborating with other departments in supporting the activities of research units for innovation. In this session, some good examples from research units and research-supportive and administrative departments under the AIST evaluation system will be introduced.
Effects of Evaluation from the Perspective of Outcomes on Research Units in National Institute of Advanced Industrial Science and Technology (AIST)
Osamu Nakamura,  National Institute of Advanced Industrial Science and Technology,  osamu.nakamura@aist.go.jp
Shinichi Ito,  National Institute of Advanced Industrial Science and Technology,  s24.ito@aist.go.jp
Kunio Matsuzaki,  National Institute of Advanced Industrial Science and Technology,  k.matsuzaki@aist.go.jp
Hironori Adachi,  National Institute of Advanced Industrial Science and Technology,  h.adachi@aist.go.jp
Tetsuo Kado,  National Institute of Advanced Industrial Science and Technology,  t-kado@aist.go.jp
Syuichi Oka,  National Institute of Advanced Industrial Science and Technology,  s.oka@aist.go.jp
In the evaluation committees held biannually in the second research term (FY2005-2009), research units must present 1) roadmaps for research projects that show their research strategy, 2) outputs such as publications or patents that demonstrate their capability, and 3) management of the research unit oriented toward the expected future outcomes. Through these evaluations, research units have developed clear scenarios for research and development toward outcomes by preparing roadmaps with clear goals, milestones, and benchmarks. The leaders of research units communicate closely with members to discuss their strategy, for example by holding internal workshops. The evaluation system from the perspective of outcomes has encouraged research units to manage their research activities strategically. In this session, concrete examples of the roadmaps they have prepared and of good management will be introduced, along with achievements that are bringing about outcomes.
Effects of Evaluation on Research-supportive and Administrative Departments in National Institute of Advanced Industrial Science and Technology (AIST)
Tomoko Mano,  National Institute of Advanced Industrial Science and Technology,  mano-tomoko@aist.go.jp
Osamu Nakamura,  National Institute of Advanced Industrial Science and Technology,  osamu.nakamura@aist.go.jp
Hisao Ichijo,  National Institute of Advanced Industrial Science and Technology,  ichijo-h@aist.go.jp
Katsuhisa Kudo,  National Institute of Advanced Industrial Science and Technology,  k.kudo@aist.go.jp
AIST has been developing an evaluation system for research-supportive and administrative departments in order to improve services, increase efficiency, and energize work done in collaboration with research units to create industrial innovation. Each department works to support the activities of research units so that outcomes are established collaboratively. The evaluation is carried out through management by objectives: targets set by each department at the beginning of the fiscal year are self-assessed and then reviewed at the end of the fiscal year by an evaluation committee composed of external and internal members. Through these evaluations, the departments have shown a variety of improvements in their services and promoted changes in members' awareness. They have taken on the challenge of long-term approaches to collaborating with other departments in supporting the activities of research units for innovation.

Session Title: Evaluating and Empowering Agricultural Workers: The Poder Popular Experience in California
Panel Session 665 to be held in the Quartz Room Section A on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Ellen Braff Guajardo,  The California Endowment,  ebguajardo@calendow.org
Abstract: This session offers an overview of a participatory, empowerment evaluation of the Promotores Comunitarios de Salud Strategy, a component of The California Endowment's Poder Popular initiative, which seeks to provide agricultural workers in California with tools to advocate for improved community health through policy and systems change. The qualitative evaluation design utilized several innovative methods, including a combined photography/interview process and SWOT analyses. These methods were designed to include stakeholders as more active participants in the evaluation process, while allowing them to reflect more deeply on issues affecting their communities, and opportunities and challenges in addressing them.
The Evaluation of Promotores Comunitarios De Salud Strategy: A Grassroots Education and Mobilization Strategy for Improving Community Health
Nuria Ciofalo,  The California Endowment,  nciofalo@calendow.org
The Promotores Comunitarios de Salud Strategy, promoted by The California Endowment, is a health literacy and leadership development approach that uses popular education to help community residents understand and respond to physical and social environmental threats to their health. It increases community awareness and understanding of the social and physical environmental factors that affect health and well-being through local grassroots leadership development, reflection, learning, and dialogue between Promotores Comunitarios and local agencies and institutions. The program and its evaluation provide technical assistance to communities as they develop, implement, and evaluate local action plans. The evaluation is designed to produce 'learnings' for multiple audiences, to support unfolding efforts in the pilot sites, and to refine the strategic approach for potential implementation in other regions. It assesses the social networks and synergies in place and how they contribute to learnings in the areas of community health, place-based initiatives, and culturally compatible evaluations that promote policy and practice without borders.
Challenges and Successes in a Participatory Evaluation Process with Farmworkers: The Experience of Poder Popular
Gloria Sayavedra,  California Institute for Rural Studies,  gsayavedra@cirsinc.org
Ron Strochlic,  California Institute for Rural Studies,  rstrochlic@cirsinc.org
Capacity-building and empowerment strategies are not easy to assess, and the experience of Promotores Comunitarios is no exception. This presentation highlights the participatory evaluation of the Promotores Comunitarios de Salud Strategy, which collected evidence about changes happening in residents' lives through the words of the different actors involved in the project: promotores, comités, agencies, and collaborating partners. Methods included a combined photography/interviewing component, in which promotores took photos of issues affecting them and interviewed residents about those issues, as well as SWOT analyses with staff and promotores at each site. This approach afforded important insights regarding key issues affecting the project, while providing participants with an important tool for improving project implementation. Expected outcomes identified by the evaluation include resident mobilization to seek improved services and attendance at public meetings. Nonetheless, the evaluation also captured other, more subtle personal and familial changes, which were necessary steps in the promotores' process of empowerment.

Roundtable: Starting Out Right: Laying the Foundation for Knowledge Management Program Implementation and Subsequent Evaluation With a Well-Designed Needs Assessment
Roundtable Presentation 666 to be held in the Quartz Room Section B on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Government Evaluation TIG
Presenter(s):
Thomas Ward,  Paradox Research, Development, and Evaluation,  tewardii@aol.com
Abstract: A common lament among evaluators is that they are all too often called in quite late in a Program’s life cycle. Expectations vary at this point, but it is not uncommon to find that the evaluator is expected to identify “fixes” for programs that were poorly designed to begin with or are expected to provide “proof” that continued or additional funding is wise stewardship of resources. Being able to participate in a front end analysis (in this case a “knowledge needs analysis”) provides a rare opportunity to design a program – including its evaluation – truly from the ground up. How do we make the most of the opportunity to do this right, and to set up the organization for clear, consistent, and meaningful evaluation of its program? The foundation of this roundtable discussion is built on best practices developed during an in-process project for a large government organization.

Session Title: Adapting the State Plan Index Tool: The Purpose, Process, Product and Application
Demonstration Session 667 to be held in Room 102 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Michele Mercier,  Centers for Disease Control and Prevention,  zaf5@cdc.gov
Abstract: The Centers for Disease Control and Prevention (CDC) funds states to address asthma from a public health perspective. A key requirement of this funding is the collaborative development of a statewide asthma plan. States developed and published plans that vary widely in breadth, depth, and quality. A State Plan Index (SPI) tool to systematically rate the quality of state obesity prevention plans was recently developed and made available. The tool includes 60 specific elements divided among nine components, and the public health professionals who developed it encouraged adaptation by other public health programs. A workgroup composed of CDC and State Asthma Program staff was convened to adapt the tool for use by state asthma programs, either for self-assessment of existing written plans or as a checklist when developing new or revised state plans. Lessons learned from the adaptation process undertaken by the CDC asthma program will be presented.

Session Title: Current Approaches and Issues in Child Welfare Evaluations
Multipaper Session 668 to be held in Room 104 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Human Services Evaluation TIG and the Social Work TIG
Chair(s):
Lois Thiessen Love,  Uhlich Children's Advantage Network,  lovel@ucanchicago.org
Discussant(s):
Jules Marquart,  Centerstone Community Mental Health Centers,  jules.marquart@centerstone.org
State Foster Care Program Evaluation: Practical and Political Considerations
Presenter(s):
Margaret Richardson,  Western Michigan University,  margaret.m.richardson@wmich.edu
James Henry,  Western Michigan University,  james.henry@wmich.edu
Abstract: Independent evaluation within any state human services department program serving vulnerable populations brings with it inherent challenges. These include issues of confidentiality, gaining permission for research, the availability of data, the validity of secondary data, and political and policy considerations. This paper explores the evaluation process within the Michigan foster care system, using the author’s experience of evaluating the process of identifying the needs of children as they first enter foster care. A consumer-oriented evaluation approach using mixed methods, involving review of foster care files to gather referral and assessment data and interviewing foster care staff and supervisors, among others, served as the framework for the evaluation. The data obtained was synthesized, and then contrasted with best practice according to existing literature. Impediments to conducting a consumer-oriented evaluation within a state foster care program are addressed, and challenges to the collection, analysis, and synthesis of results are reviewed with consideration of the political context.
Evaluating the Impact of Non-traditional Interventions: Three Approaches to Capturing Case-level Information
Presenter(s):
Julie Murphy,  Human Services Research Institute,  murphy@hsri.org
Kimberly Firth,  Human Services Research Institute,  kfirth@hsri.org
Abstract: Public child welfare agencies are increasingly embracing non-traditional interventions for the treatment and care of children suffering abuse or neglect. Capturing information regarding the provision of services in this context is difficult both because of the nature of alternative interventions, and because traditional data systems designed for external reporting or internal tracking typically do not systematically capture this information. This paper draws on experiences gained through the evaluation of kinship care in Ohio’s child welfare demonstration waiver. In approaching this evaluative task, researchers have attempted three different methodological approaches: qualitative interviewing with caseworkers, case record reviews conducted by a third party, and a web-based survey completed by caseworkers; each met with varying degrees of success and multiple major challenges. Notably, evidence suggests that web-based surveys are a promising avenue for capturing difficult to obtain case-level services information.

Session Title: Addressing Complexity in Educational Program Evaluation: Focusing on Intent
Demonstration Session 669 to be held in Room 106 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Systems in Evaluation TIG and the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Tamara M Walser,  University of North Carolina Wilmington,  walsert@uncw.edu
Abstract: This demonstration will include a description of how focusing on intent supports the evaluation of complex educational programs, including how this approach was used in the evaluation of a teacher professional development program. The characterization of complex systems as holistic, non-linear, and changing has implications for the way educational programs should be evaluated. Evaluators often think of a program in terms of part-to-whole relationships, reducing a program to its components; however, they typically do not purposefully think about whole-to-part relationships. It is necessary to address both when considering complexity. Focusing on the intent of a program is holistic; the activities and outcomes are part of the program, but they also operationalize intent. Further, to understand intent, an evaluator must study patterns and underlying structures that influence intent. Thus, a focus on intent better addresses complexity by emphasizing part-to-whole and whole-to-part relationships, as well as patterns and structures.

Session Title: International Evaluation Standards and Policies
Multipaper Session 670 to be held in Room 108 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Randi K Nelson,  University of Minnesota,  nelso326@umn.edu
Discussant(s):
Michael Bamberger,  Independent Consultant,  jmichaelbamberger@gmail.com
Evaluation Standards for International Aid: An Assessment and Policy Proposals
Presenter(s):
Thomaz Chianca,  COMEA Communication and Evaluation Ltd,  thomaz.chianca@gmail.com
Abstract: This paper is part of my PhD dissertation, defended in November 2007 at Western Michigan University under Dr. Michael Scriven, Mr. Jim Rugh, and Dr. Paul Clements. It provides a thorough assessment of the existing evaluation standards proposed by donor agencies (e.g., OECD/DAC), the UN system (UNEG), and international nongovernmental organizations (e.g., InterAction). It also provides historical and contextual background for the development of such standards, contrasts them with other comprehensive sets of evaluation standards (e.g., the Key Evaluation Checklist), and applies specific criteria to assess their quality and relevance. Based on this extensive analysis, the study proposes a set of main evaluation standards that aid evaluators should consider. The standards are classified according to their nature and purpose: (i) assessing evaluands (e.g., programs), (ii) assessing evaluations' processes and products, (iii) assessing evaluators' capacity and behavior, and (iv) assessing evaluation commissioners' support for the evaluation.
Practical Aspects to Consider in Implementing Evaluation Policy: Application of the Global Environment Facility Monitoring and Evaluation Policy and Procedures to Assess the State of Biodiversity in the Western Cape
Presenter(s):
Liezl Coetzee,  Southern Hemisphere Consultants,  liezl@southernhemisphere.co.za
Abstract: The Global Environment Facility (GEF)’s Monitoring and Evaluation (M&E) Policies and Procedures (2002) aim to provide a mechanism for systematically learning from experience as a GEF-wide operation, and a system to gather and disseminate this information and to track and monitor GEF strategies, operations, and projects. The M&E Policy stipulates that GEF Partner Agencies design M&E plans for projects, and monitor implementation against performance indicators using the logical framework approach. The GEF Policy provides guidelines on methodologies and indicator development, and strongly encourages active inclusion and involvement of all stakeholders in monitoring and evaluation of activities. This paper will explore some of the challenges faced and lessons learned by one such Partner Agency, Cape Action for People and Environment (C.A.P.E.), in designing such an M&E plan to monitor the impact of a range of Biodiversity initiatives implemented by a wide range of C.A.P.E.’s local partner organizations in South Africa’s Cape Floral Kingdom (CFK).

Session Title: Improving Evaluation Design and Methods: Examples for Research Programs
Multipaper Session 672 to be held in Room 112 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Stephanie Shipp,  Science and Technology Policy Institute,  sshipp@ida.org
The Role of Case Studies in Evaluation of Research Technology and Development Programs
Presenter(s):
George Teather,  Performance Management Network Inc,  george.teather@pmn.net
Steve Montague,  Performance Management Network Inc,  steve.montague@pmn.net
Abstract: Evaluation of RT&D programs requires multiple lines of evidence in order to collect the information needed to reach credible conclusions and develop useful recommendations. Case studies are one of the primary sources of evidence collected in many evaluation studies: they can probe deeply into selected projects and trace the pathway from activities and outputs to early, intermediate, and longer-term outcomes. This paper will examine the strengths and limitations of case studies and present examples of case studies carried out as part of recent evaluations of applied research projects.
Building Evaluation into Program Design: A Generic Evaluation Logic Model for Biomedical Research Programs at the National Cancer Institute
Presenter(s):
P Craig Boardman,  Science and Technology Policy Institute,  pboardma@ida.org
James Corrigan,  National Institutes of Health,  corrigan@mail.nih.gov
Lawrence S Solomon,  National Institutes of Health,  solomonl@mail.nih.gov
Christina Viola Srivastava,  Science and Technology Policy Institute,  cviola@ida.org
Kevin Wright,  National Institutes of Health,  wrightk@mail.nih.gov
Brian Zuckerman,  Science and Technology Policy Institute,  bzuckerm@ida.org
Abstract: Building evaluation into R&D program design is an ideal that requires program managers to identify clear program goals and corresponding outcome criteria for future assessments in advance. This approach can be facilitated by the development of generic logic models, evaluative questions, and corresponding metrics. While some U.S. government agencies have developed such generic logic models (e.g., the USDA's Extension Service and the DOE Office of Energy Efficiency and Renewable Energy), the National Cancer Institute (and by extension the National Institutes of Health) funds larger portfolios with a wide range of aims. We propose a framework for research evaluation that is both applicable to a broad range of program types across NCI and scalable to accommodate programs of different sizes and scopes. Another distinguishing feature of the framework is that it is designed to elicit a thorough characterization of the program's "environment" in addition to its motivations and goals.

Session Title: Building Practices: A Framework for Determining the Fit of Network Analysis in Evaluation
Expert Lecture Session 673 to be held in Room 103 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Shaunti Knauth,  Durland Consulting,  shaunti_knauth@comcast.net
Presenter(s):
Maryann Durland,  Durland Consulting,  mdurland@durlandconsulting.com
Abstract: This expert lecture presents a framework for determining whether Social Network Analysis (NA) fits into an evaluation design, illustrated with examples of evaluation designs. The framework covers how to determine the critical elements necessary to apply NA within an evaluation, such as a theory of the relationships studied; the appropriate levels for data collection and analysis; how the choices of analysis measures are linked to theory; and steps and processes for interpreting results. The lecture is applicable to individuals who would like to understand when NA is best used, how the analysis and results align with attribute-based methods, and what is required for conducting the data collection, analysis, and interpretation. Using the framework, individuals will be able to explore their own programs or projects, determine whether NA could add value, and identify the appropriate steps for using the methodology.

Session Title: Redefining The Way We Give
Panel Session 674 to be held in Room 105 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
David Hunter,  Hunter Consulting LLC,  david@dekhconsulting.com
Abstract: David Hunter is a former foundation executive and current consultant who helps nonprofits implement evidence-based programming and better align their efforts with producing measurable outcomes. Steve Butz leveraged his human services expertise and commitment to outcomes measurement to found Social Solutions, a software company whose Efforts to Outcomes software promotes the benefits of relating efforts to outcomes to improve the effectiveness of social service delivery. Together they will discuss the desire for accountability and transparency among charitable givers and funders and the impact that desire has on the way organizations function. Hunter and Butz will detail the evolving expectations placed on organizations by those who give, the metrics necessary to evaluate nonprofit effectiveness, and the ways organizations must transform their efforts through active measurement and performance management to reach a more effective state and successfully fulfill the new demands made by those supporting them.
Tying Efforts to Outcomes and the Changing Impact of Nonprofit Organizations
Steve Butz,  Social Solutions,  steve@socialsolutions.com
We know what matters most is not the number of classes taught or students who attended, but the real changes in the lives of those affected. That is why Steve Butz leveraged his own human services expertise and commitment to outcomes measurement to found Social Solutions, a software company that promotes the benefits of relating efforts to outcomes to improve the effectiveness of social service delivery. Butz will speak to the importance of tools, including technology, in helping organizations manage their performance through evaluation. He will also discuss the necessity of relating organizational efforts to outcomes, rather than merely outputs, in order to ensure effective evaluation and the success of programs, funding, and the organization as a whole. In closing, he will detail the need for organizational evaluation that properly links efforts to outcomes, so organizations can meet the evolving expectations set forth by both funders and charitable donors.
Social Investing and the Need for Accountability
David Hunter,  Hunter Consulting LLC,  david@dekhconsulting.com
While the commercial investor relies on well-developed and thoroughly tested metrics to guide and assess the potential of an investment and then understand its return, this has not been the case when it comes to nonprofits. Many donors, foundations, and corporations have a desire to make a difference, but often fail to engage in the proper research, analysis and oversight to ensure their social investments achieve desired results. Hear from David Hunter, a nonprofit performance management consultant and managing partner of Hunter Consulting LLC, who will introduce the concept of social investing, an idea and a tool involving a series of metrics that, when applied, produce a more reliable and effective methodology for allocating funds to organizations. Learn how social investing will change the old approach of giving and usher in a new wave of responsible investing, benefiting both those who give and high-performing organizations who produce measured results.

Session Title: Children's Programs in Multidisciplinary Settings
Multipaper Session 675 to be held in Room 107 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Renee Lavinghouze,  Centers for Disease Control and Prevention,  rlavinghouze@cdc.gov
Conducting Evaluations and Encouraging Utilization in Multidisciplinary Organizations: Experiences from the Multi-Site Evaluation of Child Advocacy Centers
Presenter(s):
Lisa M Jones,  University of New Hampshire,  lisa.jones@unh.edu
Wendy A Walsh,  University of New Hampshire,  wendy.walsh@unh.edu
Theodore P Cross,  University of Illinois Urbana-Champaign,  tpcross@uiuc.edu
Abstract: Multidisciplinary organizations are increasing and their structure creates challenges for conducting and institutionalizing outcome evaluation research. Child advocacy centers (CACs) are the fastest growing multidisciplinary model for investigations of child sexual abuse, bringing police, child protection, medical and legal professionals together in one agency to enhance the response to victims and their families. Yet, limited evaluation data about whom CACs serve and their impact on investigations has hampered the development of investigation policy and programs. This paper reports on the experiences collecting these data through a 5-year Multi-site Evaluation of CACs. Investigation procedure and outcome data were collected at four sites across the country using a replicated quasi-experimental design. This paper presents results from the research highlighting the challenges in working with multidisciplinary organizations to evaluate outcomes and utilize findings. Suggestions are provided for ways that evaluators can overcome challenges to evaluation in multidisciplinary settings.
Evaluating a Multi-Level, Multi-Strategy Intervention to Decrease Childhood Obesity
Presenter(s):
Jenica Huddleston,  University of California Berkeley,  jenhud@berkeley.edu
Abstract: Childhood obesity has increased dramatically over the last few decades, including among children ages 0-5. Traditional interventions often focus on individual behaviors without concern for the options available for making healthier choices. Innovative ways to address childhood obesity include intervening at multiple levels concurrently, such as the individual, community, and policy levels. The intent is to use multiple avenues to change social norms around nutrition and activity for young children, their families, and communities. Strategies include intervening at multiple levels related to breastfeeding, accessible healthy foods, safe places for activity, and improving food and activity in childcare settings. Evaluating a multi-level, multi-strategy intervention to reduce childhood obesity is complex; it is particularly difficult because the long-term goal will likely not be realized for many years. With this in mind, it is crucial to develop an evaluation plan that can track data on both short- and long-term outcomes and provide the information needed for sustainability.

Session Title: Organizational Implementation of an Evaluation Policy: A Local Health Department Perspective
Demonstration Session 676 to be held in Room 109 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Rocaille Roberts,  Harris County Public Health and Environmental Services,  rroberts@hcphes.org
Abstract: Department-wide evaluation became a strategic priority of Harris County Public Health & Environmental Services (HCPHES) upon release of its Strategic Plan in 2005. Prior to the Strategic Plan's establishment, there were no mechanisms to systematically evaluate outcome objectives to determine effectiveness of programs across the organization. To address this gap, the HCPHES Evaluation Framework was adopted in 2006, which currently functions as HCPHES' evaluation policy. This presentation will describe the HCPHES Evaluation Framework, which is based on a conceptual model of public health system performance proposed by Handler and colleagues, and published in the American Journal of Public Health in 2001. It will also address 1) how evaluation roles among key staff across the health department support the Framework's structure and viability; 2) the 14-month outcome measure development process for health and environmental programs; 3) a pilot test of proposed outcome measures; 4) evaluation capacity-building efforts; and 5) lessons learned.

Session Title: A Series of Evaluations of the Core-Plus Mathematics Curriculum: Methods, Results, and Issues
Demonstration Session 677 to be held in Room 111 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Steven Ziebarth,  Western Michigan University,  steven.ziebarth@wmich.edu
Abstract: Since the early 1990s many new mathematics curricula have been developed, tested, and evaluated in response to the National Council of Teachers of Mathematics' (NCTM) Curriculum and Evaluation Standards for School Mathematics (1989). A 2004 NRC Report examined approximately 700 evaluations of curricular effectiveness noting a wide variety of methods used to evaluate their effectiveness. This session examines three longitudinal evaluations of Core-Plus Mathematics, a four-year high school mathematics curriculum designed for all students that balances procedural and conceptual learning in real world contexts. The three studies are: 1) the formative evaluation of first edition materials including their development (1992-1998), 2) a focused longitudinal study of the published materials (1997-2002), and 3) the evaluation of a second edition of the curriculum (2002-2008) now concluding. We examine not only the various evaluation methods used and interesting results, but also how the earlier studies informed the work of the most recent.

Session Title: Readability Testing: Simple Skill Often Forgotten
Skill-Building Workshop 678 to be held in Room 113 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Integrating Technology Into Evaluation
Presenter(s):
Julien Kouame,  Western Michigan University,  julienkb@hotmail.com
Abstract: The objective of this workshop is to provide participants with resources and knowledge to analyze and interpret survey readability for the purpose of improving test reliability and validity. This 45-minute workshop will be conducted through hands-on practice and exploration of current software for readability testing. The session consists of a brief introduction to readability formulas and software, followed by small group activities with discussions. No prior knowledge of readability tests or statistics is required. Specific readability formulas and their use will be examined and discussed. Participants will use formulas to calculate the readability level of a survey, interpret the results, and revise items to suit the intended audience. Participants will be expected to experiment with readability testing using software that will be provided or other current software versions. An electronic network will be set up for participants to provide a discussion forum on using readability tests to improve evaluation instruments.
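The abstract does not name the specific formulas or software the workshop will use. As one illustration only, the minimal sketch below computes the widely used Flesch-Kincaid grade level (0.39 x words per sentence + 11.8 x syllables per word - 15.59) for a hypothetical survey item; it relies on a simplified vowel-group heuristic for counting syllables, whereas dedicated readability software typically uses dictionary-based counts.

import re

def count_syllables(word):
    # Rough heuristic: count vowel groups; dedicated readability software
    # generally uses dictionary-based syllable counts instead.
    vowel_groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(vowel_groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1  # drop a typical silent final "e"
    return max(count, 1)

def flesch_kincaid_grade(text):
    # Flesch-Kincaid Grade Level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# Example: estimate the grade level of a hypothetical survey item.
item = "How satisfied are you with the services you received from this program?"
print(round(flesch_kincaid_grade(item), 1))

A score around 6 to 8 would suggest the item is readable for most adult respondents; higher scores signal items that may need to be simplified for the intended audience.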
