Evaluation 2009

Session Title: New Applications, Large Challenges, and Strategic Approaches in Managing Data
Multipaper Session 882 to be held in Panzacola Section H2 on Saturday, Nov 14, 3:30 PM to 5:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Lihshing Wang, University of Cincinnati, leigh.wang@uc.edu
Incorporating Multilevel Techniques Into Quality Control Charts
Presenter(s):
Christopher McKinney, University of Northern Colorado, christopher.mckinney@unco.edu
Pablo Olmos, Mental Health Center of Denver, antonio.olmos@mhcd.org
Linda Laganga, Mental Health Center of Denver, linda.laganga@mhcd.org
Kathryn DeRoche, Mental Health Center of Denver, kathryn.deroche@mhcd.org
Abstract: Statistical quality control charts have grown in popularity over the past two decades within behavioral and healthcare service systems. Although common statistical process control charts can improve process quality and reduce variability, they rely on statistical methods that assume every individual measured follows the same process. In practice this assumption is erroneous: means and rates of change vary across individuals. Multilevel techniques estimate means and rates of change conditional on specified environmental factors, and partition the variability of individuals, and even of higher-level groups, from measurement and random error. This presentation discusses the use of statistical quality control charts in the ongoing evaluation of behavioral and healthcare services, and demonstrates that multilevel statistical techniques can improve the function of quality control charts in these settings.
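For illustration only, here is a minimal sketch of the idea in Python, assuming repeated measures on clients; the column names, the simulated data, and the use of statsmodels are assumptions for the sketch, not the presenters' method. A random-intercept, random-slope model absorbs between-client differences, and control limits are then set on the within-client residuals rather than on raw scores.

    # Hypothetical sketch: multilevel-adjusted control limits.
    # All names ("score", "week", "client_id") are illustrative.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    # Simulate 30 clients, 10 weekly measurements each; clients
    # differ in baseline level and in rate of change.
    clients = np.repeat(np.arange(30), 10)
    week = np.tile(np.arange(10), 30)
    df = pd.DataFrame({
        "client_id": clients,
        "week": week,
        "score": rng.normal(50, 5, 30)[clients]
                 + rng.normal(0.5, 0.2, 30)[clients] * week
                 + rng.normal(0, 2, 300),
    })

    # Random intercepts and slopes: each client keeps an individual
    # mean and rate of change, rather than assuming one shared process.
    fit = smf.mixedlm("score ~ week", df, groups=df["client_id"],
                      re_formula="~week").fit()

    # Chart the level-1 residuals: the "common cause" variation left
    # after between-client differences are partitioned out.
    resid = fit.resid
    ucl, lcl = 3 * resid.std(), -3 * resid.std()
    print(df[(resid > ucl) | (resid < lcl)])  # out-of-control points

Flagging points against residual-based limits avoids signaling merely because one client's baseline or trajectory differs from another's.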
Data Envelopment Analysis: An Evaluator's Tool for Identifying Best Practice Among Organizations
Presenter(s):
John Hansen, Indiana University, joahanse@indiana.edu
Abstract: Comparing ranks of organizations such as schools or businesses is a common approach to evaluating relative group performance. To level the playing field across organizations, rankings may be based on a production function that relates an organization's inputs to its outputs - for example, ranking schools on test scores while accounting for poverty status. These rankings give a descriptive picture of relative performance, but they offer little in the way of prescriptive comparisons of best practice. Rankings identify top performers on defined criteria, but often there is substantial variability across organizations' inputs that restricts the utility of attempting to emulate the top performer. This paper demonstrates how Data Envelopment Analysis identifies best-practice peers along a continuum of rankings. The technique provides a prescriptive framework by identifying top performers at variable input levels for modeling best practice. Organization-level diagnostics for targeting best practices will be presented and interpreted.
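As a sketch of how the technique works (not the author's implementation), the input-oriented CCR formulation of Data Envelopment Analysis solves one small linear program per organization: how far can this organization's inputs be scaled down while a convex combination of peers still matches its outputs? The school data below are invented for illustration.

    # Hypothetical DEA sketch (input-oriented CCR model) solved as a
    # linear program per school; all numbers are made up.
    import numpy as np
    from scipy.optimize import linprog

    # Rows of X are inputs (e.g., spending, staff); rows of Y are
    # outputs (e.g., test scores); columns are schools.
    X = np.array([[20., 30., 40., 25.],
                  [300., 200., 100., 150.]])
    Y = np.array([[60., 70., 80., 55.]])
    n = X.shape[1]

    for o in range(n):
        # Variables: [theta, lambda_1..lambda_n]; minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs: sum_j lambda_j * x_ij <= theta * x_io
        A_in = np.hstack([-X[:, [o]], X])
        # Outputs: sum_j lambda_j * y_rj >= y_ro
        A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        peers = np.flatnonzero(res.x[1:] > 1e-6)
        print(f"school {o}: efficiency {res.x[0]:.2f}, peers {peers}")

An efficiency of 1.0 places a school on the best-practice frontier; for an inefficient school, the nonzero lambdas name the frontier peers with comparable input mixes that it could realistically emulate.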
Complex Database Design for Large-Scale Multi-Level Multi-Year and Multi-Cohort Evaluation in the e-Age
Presenter(s):
Lihshing Wang, University of Cincinnati, leigh.wang@uc.edu
Abstract: Evaluation research that involves large-scale, multi-level, multi-year, and multi-cohort data presents special challenges to researchers. Most quantitative programs and publications focus on the research design, data collection, and data analysis phases, but largely leave the database design phase out of the research cycle. This study examines the database design issues encountered in a recent state-wide endeavor to explore the causal relationships among three clusters of variables: one exogenous cluster (teacher education), one direct endogenous cluster (teacher quality), and one indirect endogenous cluster (student learning). The two endogenous clusters were repeated over seven years and collected from six cohorts. The following topics are addressed: (a) alignment of the conceptual framework, the operational model, and the database design; (b) match-linking of multiple relational databases and specification of unique IDs; and (c) security, confidentiality, and collaboration on a shared Internet platform. Implications for conducting complex evaluation research across multiple sites in the e-age are discussed.
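A minimal sketch of such a match-linked design, with table and column names invented for illustration (the study's actual schema is not described here): one table per cluster, with the endogenous clusters keyed by a unique ID plus year and cohort so records can be linked across the seven years and six cohorts.

    # Hypothetical relational layout for the three variable clusters;
    # schema names are assumptions, not taken from the study.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- Exogenous cluster: one row per teacher (teacher education).
    CREATE TABLE teacher (
        teacher_id TEXT PRIMARY KEY,  -- unique ID, stable across years
        preparation_program TEXT
    );
    -- Direct endogenous cluster: teacher quality, repeated by year.
    CREATE TABLE teacher_quality (
        teacher_id TEXT REFERENCES teacher(teacher_id),
        year INTEGER,
        quality_score REAL,
        PRIMARY KEY (teacher_id, year)
    );
    -- Indirect endogenous cluster: student learning, by cohort/year.
    CREATE TABLE student_learning (
        student_id TEXT,
        cohort INTEGER,
        year INTEGER,
        teacher_id TEXT REFERENCES teacher(teacher_id),
        achievement REAL,
        PRIMARY KEY (student_id, year)
    );
    """)

    # Match-linking across clusters on the shared unique teacher ID.
    rows = conn.execute("""
        SELECT t.preparation_program, q.year, AVG(s.achievement)
        FROM teacher t
        JOIN teacher_quality q USING (teacher_id)
        JOIN student_learning s
          ON s.teacher_id = t.teacher_id AND s.year = q.year
        GROUP BY t.preparation_program, q.year
    """).fetchall()
    print(rows)

The join illustrates the match-linking step the abstract names: a single well-specified teacher_id ties teacher education to yearly teacher-quality records and, through them, to student learning by cohort and year.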
