Session Title: Evaluation Policy in Operation: A Case Study of the New York City Center for Economic Opportunity

Panel Session 397, to be held in Centennial Section B on Thursday, Nov 6, 4:30 PM to 6:00 PM

Sponsored by the Quantitative Methods: Theory and Design TIG

Chair: Jennifer Hamilton, Westat, jenniferhamilton@westat.com
Abstract:
In December 2006, New York City Mayor Michael Bloomberg established the Center for Economic Opportunity (CEO) to implement, monitor, and evaluate the City's new anti-poverty initiative. The initiative's overall goal is to reduce poverty in the City through a range of programs targeted to the working poor, disconnected youth, and young children. A hallmark of the initiative is an explicit policy on evaluation as a tool for accountability and decision-making. CEO operates results-driven programs and, in turn, is working to fund evaluations of the highest feasible rigor to provide information on the programs' implementation and outcomes. This session will include presentations from the director and other staff of CEO's internal evaluation unit and from the principal investigators of the coordinated set of external evaluations, offering insights into the genesis, development, and operation of CEO's evaluation policy.

Evaluation Policy in the New York City Center for Economic Opportunity

Hector Salazar-Salame, New York City Center for Economic Opportunity, hsalame@cityhall.nyc.gov
Carson Hicks, New York City Center for Economic Opportunity, chicks@cityhall.nyc.gov
Kristen Morse, New York City Center for Economic Opportunity, kmorse@cityhall.nyc.gov
David Berman, New York City Center for Economic Opportunity, bermand@hra.nyc.gov

The first presenter, a CEO internal evaluator, will provide an overview of the New York City Center for Economic Opportunity (CEO) and the evaluation policy developed to support CEO's decision-making. The presentation will begin by describing the impetus for CEO, how it is organized, and the rationale for the programs it funds, along with a descriptive overview of those programs as they are designed to serve the key target groups of the working poor, disconnected youth, and young children. The presenter will then outline CEO's policy on evaluation and describe the thinking behind a structure that involves internal evaluation, the collection and use of performance data, performance-based contracts, and a coordinated set of external evaluations employing a range of approaches.

Setting the Stage for Evaluation: The Program Review Approach in the NYC Center for Economic Opportunity Evaluation

Stan Schneider, Metis Associates, sschneider@metisassoc.com

This presentation will describe the 'program review' approach used as a first step in the evaluation process with each of the programs funded within the CEO initiative. The approach, patterned in part after evaluability assessment (Wholey 1994), begins with the collection and review of documents, discussions with CEO staff, and joint meetings of CEO and agency staff to develop an understanding of each program's design and operation, as well as the research questions that CEO and the agency have about the program and its performance. The development of a logic model guides this initial stage and the subsequent data collection and analysis activities. The presentation will discuss the evaluation team's experience in conducting the 40+ reviews, highlight lessons learned both methodologically and substantively, and describe how those lessons have been shared with CEO, the agencies, and the programs.

Strengthening the Sensitivity of Evaluation Designs to Detect Outcomes: The Use of a Technical Review Group in the NYC Center for Economic Opportunity Evaluation

Debra Rog, Westat, debrarog@westat.com

Each program in the NYC CEO initiative will have a program review conducted by the external evaluation team. An action plan, developed as part of the program review report, outlines possible evaluation designs and methods for addressing the evaluation questions raised by CEO and the agency, as well as those that emerge from the review itself. To support the development of the most rigorous designs feasible that are also coordinated with one another, the evaluation incorporates a technical review process. This process, based on the concept of design sensitivity posited by Lipsey (1990), reviews the features of each outcome study that can affect the study's effect size (e.g., the strength and fidelity of the target program, the quality of the measurement). The presenter will describe how the technical review group operates as part of the cross-program evaluation team to ensure that each action plan addresses these features.

Using Results to Inform Action: The Experiences of the New York City Center for Economic Opportunity

Kristen Morse, New York City Center for Economic Opportunity, kmorse@cityhall.nyc.gov
Carson Hicks, New York City Center for Economic Opportunity, chicks@cityhall.nyc.gov
David Berman, New York City Center for Economic Opportunity, bermand@hra.nyc.gov

The final presenter, from CEO, will describe how results from the evaluation activities are being used to guide program decision-making. Any changes to the evaluation policy made in response to the activities to date will be described, and future plans for CEO based on evaluation findings will be discussed.