
Session Title: Collaboration in Government-Sponsored Evaluations
Multipaper Session 695 to be held in CROCKETT D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG and the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Maria Whitsett, Moak, Casey and Associates, mwhitsett@moakcasey.com
Some Evidence on Challenges in Government/Stakeholder Partnerships: A Case Study of the Voluntary Sector Initiative (VSI) and Networks
Presenter(s):
Caroline DeWitt, Human Resources and Skills Development, caroline.dewitt@hrsdc-rhdcc.gc.ca
Abstract: The Canadian federal government has sought the participation of stakeholders in the development of policy, program design, and implementation. Salamon (2005) describes the model as public administration "leaping beyond the borders of the public agency, embracing a wide assortment of third parties" that are intimately involved in the implementation and management of the public's business. This delivery structure creates a principal–agent dichotomy in which third parties exercise discretion over the expenditure of public funds and hold a monopoly of knowledge about activities, costs, and results. This makes third parties essential partners in evaluations. Government moves from centrally driven, hierarchical evaluations to a more consultative approach that requires cooperation and collaboration. The VSI evaluation is an example of an evaluation that involved collaborating with stakeholders and networks throughout the process.
Assessing More Than Mortar and Bricks: Combining Research, Theory, Politics, and Chaos in a HOPE VI Community Revitalization Program Evaluation
Presenter(s):
Andrew Scott Ziner, Atlantic Social Research Corporation, asrc@rcn.com
Ross Koppel, University of Pennsylvania, rkoppel@sas.upenn.edu
Abstract: A longitudinal evaluation of a federally supported HOPE VI program is a prescribed and proscribed activity – until it confronts reality. Neither the research nor the underlying theoretical bases should be ad hoc, reactive, or sloppy. How do, and how should, evaluators deal with these dilemmas? Using a longitudinal case study and research analysis, this paper explores the all-too-common evaluator's conundrum: How do we make wise choices when confronted with the mess of most program realities? What are the implications of each choice for the outcomes, recommendations, and continuing evaluation? This analysis offers lessons for specific evaluation strategies and for the larger questions underlying our logic, our choices, and the role evaluation theory and methods must play.
An Assessment of the Impact of Proactive Community Partnerships on Census Quality
Presenter(s):
Edward Kissam, JBS International Inc, ekissam@jbsinternational.com
Jesus Martínez-Saldaña, Independent Consultant, jesus@jesusmartinez.org
Anna Garcia, Independent Consultant, annamg01@yahoo.com
JoAnn Intili, JBS International Inc, jintili@jbsinternational.com
Abstract: The 2010 Census included new operational procedures, targeted outreach efforts, and partnerships with local community organizations designed to decrease the differential undercount of hard-to-count (HTC) population subgroups. JBS International assessed the effectiveness of these efforts to improve census data quality in HTC tracts in rural California with concentrations of farmworkers and Latino immigrants, using a survey methodology adapted from the Bureau's own post-enumeration survey approaches to coverage measurement. We report patterns of undercount and examine whether innovations such as the mailing of Spanish-language questionnaires, better siting of Questionnaire Assistance Centers, more "Be Counted" centers where a person who did not receive a mailed census form can request one, collaboration with community organizations to improve the Master Address File, and an expanded Spanish-language media campaign mitigated historic undercounts of migrant and seasonal farmworkers and of specific sub-populations within this group: Mexican immigrants of indigenous origin, complex households, and persons living in substandard "low-visibility" housing.
The Effects of Qualitative Feedback on Mid-Managers’ Improvement on Performance Behaviors in the Veterans Health Administration
Presenter(s):
Thomas Brassell, United States Department of Veterans Affairs, thomas.brassell@va.gov
Boris Yanovsky, United States Department of Veterans Affairs, boris.yanovsky@va.gov
Katerine Osatuke, United States Department of Veterans Affairs, katerine.osatuke@va.gov
Sue R Dyrenforth, United States Department of Veterans Affairs, sue.dyrenforth@va.gov
Abstract: Many organizations take employee development seriously, and the Department of Veterans Affairs (VA) is no exception. One expression of its commitment to employee development is the VA's 360-degree feedback appraisal system, designed for the professional evaluation and development of mid-level managers. The 360-degree feedback has become a popular, widely used resource across the department: participation was 1,300 in 2009 and is expected to nearly double to around 2,500 in 2010. Along with quantitative ratings of behaviors expressing job-related competencies, participants receive qualitative feedback on their strengths and areas for improvement from multiple raters (boss, peer, staff, self). The evaluation has the explicit intention of facilitating development through feedback delivery. This study examined what type of feedback results in optimal performance improvement. Results showed that simple feedback produced significantly greater performance improvement than complex feedback; moreover, complex feedback did not result in significant performance improvement, whereas simple feedback did.
