

Session Title: Enhancing the Quality of Evaluations by Rational Planning
Panel Session 586 to be held in Lone Star E on Friday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Frederic Malter, University of Arizona, fmalter@email.arizona.edu
Abstract: Evaluation reports are often disappointing, and a frequent reason is insufficient thought given to the effort in the first place. A rule: the implementation of an evaluation never improves once the process has begun. The quality of evaluations can be improved by initial attention to the theory and models underlying the evaluation, without which an evaluation is likely to lack focus and veer off target. Evaluation designs are often poorly specified and less rigorous than they could and should be, sometimes because options are not fully considered. Measurement issues are critical and require attention, but they are frequently resolved in arbitrary ways. Finally, plans for analysis of data should be developed in concert with those for design, methods, and measurement, but in many cases data analyses become a sort of Procrustean bed for whatever resulted from previous efforts, no matter how flawed. These problems are discussed in relation to specific examples.
Building Evaluations on Theories and Models
Michele Walsh, University of Arizona, mwalsh@email.arizona.edu
Lee Sechrest, University of Arizona, sechrest@email.arizona.edu
Evaluation efforts should be guided by a theory and model of the evaluation process itself (as opposed to the theory of the intervention), and that theory and its associated model need to be explicit. The theory/model can best be thought of in terms of Meehl's scheme for theory appraisal, which requires specification not only of the hypothesis of interest but also of the auxiliary hypotheses, related to the theory and to the particular research implementation, that must be true if the main hypothesis is to be tested adequately. These auxiliary hypotheses are rarely made explicit in research, including program evaluation. The nature of the specifications required will be illustrated by reference to one large-scale evaluation and one local evaluation, with the implications for the failures of these evaluations made evident.
Design and Methods in the Planning Stage of Evaluation
Katherine McKnight, Pearson Corporation, kathy.mcknight@gmail.com
Although it might seem unlikely that an evaluation could be undertaken without a reasonably specific account of its design and the methods to be employed, a brief review will show that those requirements are often not met. One reason for that failure is that just what is required to define a design and its accompanying methods does not seem to be uniformly understood. Moreover, designs and methods are sometimes proposed that a review of other work, or perhaps even just good thinking, would show to be evidently unrealistic. The requirements for an adequate design and for the methods appropriate to implement it can be outlined, and the processes appropriate for meeting those requirements can similarly be defined. An illustration of the requirements and their realization will be presented in the form of a reconstruction of a completed evaluation project.
What Happens When Measurement Planning Fails
Patricia Herman, University of Arizona, pherman@email.arizona.edu
Mei-kuang Chen, University of Arizona, kuang@email.arizona.edu
Measurement for evaluation needs to be a strategic, planned activity. It cannot be allowed simply to develop and follow its own course, or the course set by whoever has an idea at a particular moment. It is not true, despite what sometimes seems to be the assumption, that if enough data are collected, surely something can be made of them. Using a large national data set as an example, problems in identifying variables, data reduction, specification of analytic models, and missing data will be illustrated. Recommendations for effective planning for measurement will be made, including how the project used in the illustration might have been improved.
Data Analysis: Forethought, Not Afterthought
Patrick McKnight, George Mason University, pmcknigh@gmu.edu
Planning for data analysis should be part of the plan for implementing any evaluation, not left to the end under the supposition that somehow it will all be worked out. The general nature of the analysis to be employed is often implicit in the design of the evaluation and the specification of its methods, but more than a general idea is needed. A good planning tactic is to try to identify the specific data elements that will derive from evaluation activities and then to determine where each of those elements will fit into an analysis that will answer the questions of central interest. A “map” showing each element, where it will come from and when, to which evaluation questions it will be related, and where in the analysis it will be used can be helpful. An illustration of such a map will be presented.
