Evaluation 2011



Session Title: Structural Equation Modeling as a Valuable Tool in Evaluation
Multipaper Session 308 to be held in Pacific C on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Frederick L Newman,  Florida International University, newmanf@fiu.edu
Structural Equation Modeling with Cross-lagged Paths to Evaluate Alcoholics Anonymous' Effect on Drinking
Presenter(s):
Stephen Magura, Western Michigan University, stephen.magura@wmich.edu
Charles M Cleland, New York University School of Nursing, chuck.cleland@nyu.edu
Abstract: Evaluation studies consistently report correlations between Alcoholics Anonymous (AA) participation and less drinking or abstinence. Randomizing alcoholics to AA or non-AA conditions, however, is impractical. Unfortunately, non-randomized studies are susceptible to artifacts due to endogeneity bias, where variables assumed to be exogenous ('independent variables') may actually be endogenous ('dependent variables'). A common artifact is reverse causation, where reduced drinking leads to increased AA participation, the opposite of what is typically assumed. The paper will present a secondary analysis of a national alcoholism treatment data set, Project MATCH, which consists of multi-wave data on AA participation and severity of drinking over a 15-month period (at 3-month intervals). An autoregressive cross-lagged model was formulated; it indicated that AA participation predominantly reduced subsequent drinking, not the reverse. The presentation will be accessible to evaluators without advanced statistical training. Supported by R21 AA017906.
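The cross-lagged logic described above can be sketched as a pair of lagged regressions: each wave-t variable is regressed on both wave-(t-1) variables, so the "AA at t-1 predicts drinking at t" path and the reverse path are estimated side by side. The following is a minimal, self-contained illustration on synthetic two-wave data; all variable names, coefficient values, and the hand-rolled OLS solver are invented for the example, and the actual study fits a full autoregressive cross-lagged SEM with dedicated software.

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (Gaussian elimination)."""
    n, k = len(X), len(X[0])
    # A = X'X, b = X'y
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

random.seed(1)
n = 500
# Synthetic two-wave data: by construction, AA participation at t-1
# lowers drinking severity at t (cross-lag = -0.4).
aa_prev = [random.gauss(0, 1) for _ in range(n)]
drink_prev = [random.gauss(0, 1) for _ in range(n)]
drink_now = [0.5 * dp - 0.4 * ap + random.gauss(0, 0.5)
             for dp, ap in zip(drink_prev, aa_prev)]
aa_now = [0.6 * ap - 0.1 * dp + random.gauss(0, 0.5)
          for dp, ap in zip(drink_prev, aa_prev)]

# Design matrix: intercept, lagged drinking, lagged AA participation
X = [[1.0, dp, ap] for dp, ap in zip(drink_prev, aa_prev)]
b_drink = ols(X, drink_now)  # [intercept, autoregressive, cross-lag AA -> drinking]
b_aa = ols(X, aa_now)        # [intercept, cross-lag drinking -> AA, autoregressive]
print(b_drink, b_aa)
```

With both cross-lagged coefficients estimated jointly, the relative size and sign of the AA-to-drinking path versus the drinking-to-AA path is what adjudicates the reverse-causation question the abstract raises.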
A Tool for Model Selection: Assessing the Relative Goodness of Fit of an Evaluation Model Using the Akaike Information Criterion (AIC)
Presenter(s):
Shelly Engelman, The Findings Group LLC, shelly@thefindingsgroup.com
Tom McKlin, The Findings Group LLC, tom@thefindingsgroup.com
Abstract: As evaluators increasingly use statistical models to assess how well program activities predict outcomes, the need to identify best-fitting models grows in importance. A 'good' statistical model not only strengthens an evaluation by identifying the variables with the strongest impact on outcomes, but can also add theoretical value to other programs and evaluation contexts. R-squared is a commonly used statistic for evaluating model fit in multiple regression analysis. Adding variables to a regression model almost always increases R-squared; however, a model chosen only for an incrementally higher R-squared often lacks parsimony and is difficult to replicate. Using a biology-based learning intervention as an example, we highlight the Akaike Information Criterion (AIC) as a tool for comparing models and selecting the most robust, parsimonious one. The advantage of AIC is that it penalizes model complexity, discouraging over-fitting, and allows evaluators to compare multiple candidate models before selecting the best one.
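The penalty AIC applies can be shown with the standard formula AIC = 2k - 2 ln L, where k is the number of estimated parameters and L the maximized likelihood; the model with the lower AIC is preferred. A minimal sketch, with invented log-likelihoods and parameter counts (not taken from the paper):

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * log_likelihood

# Hypothetical fits: the larger model improves the log-likelihood
# only slightly, so its extra parameters are not worth the penalty.
simple = aic(log_likelihood=-120.0, k=3)    # intercept + 1 predictor + error variance
complex_ = aic(log_likelihood=-119.5, k=6)  # three additional predictors

print(simple)    # 246.0
print(complex_)  # 251.0
```

Here the simpler model wins (246.0 < 251.0) even though the larger model fits the data slightly better, which is exactly the parsimony trade-off the abstract describes; raw R-squared would always favor the larger model.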

