Evaluation 2011

Session Title: Tiered Evaluation: Local Evaluators Operating Within the Context of a Cross-site Evaluation
Panel Session 554 to be held in California B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Patricia Campie, National Center for Juvenile Justice, campie@ncjj.org
Abstract: This session will explore the strategic and collaborative approach to tiered evaluation taken by the National Child Welfare Resource Centers (NRC) Evaluators Workgroup. Each local evaluator is responsible for evaluating one of the eleven National Child Welfare Resource Centers in the training and technical assistance network coordinated by the Children's Bureau, and for participating in the national cross-site evaluation. This workgroup of local evaluators has addressed many strategic and methodological questions, including: How should the workgroup be organized? How can members cooperate to minimize the burden on shared clients? How can we capture intermediate outcomes when evaluating technical assistance? How do we adapt to changing initiatives? Which data collection methods should be shared across NRC evaluation plans, and which can be unique to each? How can our work best contribute to the cross-site evaluation? The session will discuss these questions and explore the role of local evaluators working in conjunction with a cross-site evaluation team.
Creating a Successful Structure for Collaboration Among Local Evaluators
Anne George, National Center for Juvenile Justice, george@ncjj.org
The NRC Evaluators Workgroup is a collaborative group of evaluators for the Training and Technical Assistance Network coordinated by the Children's Bureau. This presentation describes the context and structure of the NRC Evaluators Workgroup and how it operates within the cross-site evaluation while balancing the needs of each individual NRC evaluation. The workgroup has facilitated discussions on strategies for data collection among shared clients, differences in evaluation design among local evaluators, and strategies for capturing short-term and intermediate outcomes of technical assistance. Increased collaboration, facilitated by the Children's Bureau, has created a community of evaluators that is better equipped to evaluate the many levels of the Training and Technical Assistance Network and to provide information important to providers and customers. The presentation offers practical examples of how the local evaluators have collaborated successfully through technology and strategic face-to-face meetings, along with lessons learned about multi-level evaluation.
Evaluation of Innovative and Dynamic Practice and Processes Through Collaboration
Sarah-Jane Dodd, Hunter College, City University of New York, sdodd@hunter.cuny.edu
This presentation discusses the challenges of evaluating practice as it evolves through innovation, and offers suggestions for achieving successful outcomes within the context of cross-site evaluation collaboration. It draws on the experiences of a group of local evaluators from the National Resource Centers (NRCs) coordinated by the Children's Bureau, who also work collaboratively to support the cross-site evaluation. Three significant innovations have been introduced into the NRCs' practice model: Implementation Science, Adaptive Leadership, and Business Process Mapping. While these practice innovations add a layer of complexity to the evaluation process, the practical and intellectual support created by the cross-site evaluation team provides an invaluable opportunity for learning and collaboration. The presentation suggests a "best practice" for introducing innovations deliberately and inclusively, allowing administrators, practitioners, and evaluators to develop and deepen their understanding together, so that meaningful evaluation can continue to document outcomes and support practice improvement even as innovations occur.
Lessons Learned Through Collaboration to Develop Common and Unique Measures
Brad Richardson, University of Iowa, brad-richardson@uiowa.edu
The strategy of combining local evaluation with national (cross-project) evaluation is becoming more common. Working within the context of collaboration among projects, and with a national cross-site evaluation, poses challenges for identifying and selecting common and unique measures. Strategies that have been employed effectively will be discussed, using website tracking and satisfaction measurement as an example of how local and national evaluation teams can collaborate successfully to measure results. Though such evaluations seem simple to conduct, multiple interests, uses, technologies, and perspectives introduce a variety of issues and an added level of complexity. The presentation addresses improving standards of quality in evaluation theory, methods, and practice through lessons learned from the experience of the child welfare National Resource Centers, described to assist those who may undertake similar work. The presenter, Dr. Brad Richardson, has directed numerous projects involving local and national evaluation components.
