|
Session Title: Comparative Effectiveness Research in Program Evaluation
|
|
Panel Session 206 to be held in Lone Star E on Thursday, Nov 11, 9:15 AM to 10:45 AM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| James Michael Menke, University of Arizona, menke@email.arizona.edu
|
| Abstract:
Interest in comparative effectiveness research (CER) is increasing rapidly. Although the main focus of interest is in medicine, pressure toward, perhaps even demand for, CER in other areas will almost certainly follow, probably led by health services research and quickly followed by education and then other policy areas. CER poses particular problems in all research, and program evaluation will be no exception. It will eventually not be sufficient simply to conclude that some intervention has positive effects; it will be required that those effects be shown to be as good as or better than those of alternative interventions or even alternative policy strategies. CER poses special challenges with respect to conceptual and design issues and to appropriate statistical analysis and interpretation of findings. Some of these challenges are described, along with proposed solutions, and illustrations of their applications are presented.
|
|
Epistemological and Methodological Issues in Comparative Effectiveness Research
|
| Lee Sechrest, University of Arizona, sechrest@email.arizona.edu
|
|
Comparing the effectiveness of two different methods of intervention is often not as simple as it might seem. A first question, related to the common distinction between efficacy and effectiveness research, is whether one intends to compare the interventions at some maximum, ideal level or as they might be expected to occur under ordinary conditions. A second issue sometimes arises when the interventions to be compared may be differentially effective depending on characteristics of the population(s) in which they are tested. A critical problem, both theoretically and practically, is how we may know whether the interventions are administered at equivalent strengths (doses). If interventions “take hold” in different time frames or their effects become apparent in different ways, direct comparisons may be jeopardized. These and other issues need careful consideration and explication, for if they are overlooked, comparisons may not be legitimate and, when made, may be misleading.
|
|
|
Comparisons and Second Order Comparisons of Comparisons
|
| James Michael Menke, University of Arizona, menke@email.arizona.edu
|
|
Direct statistical comparisons of different interventions are not always easy, particularly if the effects of the interventions are not thought to be directly comparable, i.e., the interventions have somewhat different effects. Differential attrition rates may also jeopardize direct comparisons. These and other statistical problems need to be considered and dealt with. Statistical problems are even more acute when direct comparisons are either not possible or do not suffice, and it becomes necessary to compare two interventions indirectly, by inference from their effects relative to their own comparison groups, i.e., by comparing the difference between each intervention and its own separate comparison group. Methods for facilitating such indirect comparisons can be described, even though they have not yet often been used.
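As a minimal sketch of the kind of indirect comparison described above (not drawn from the presentation itself), the snippet below computes a Bucher-style adjusted indirect comparison: each intervention's effect relative to its own control group is estimated separately, and the two effects are then contrasted. All function names and numbers are hypothetical and for illustration only.

```python
# Illustrative sketch only: a Bucher-style adjusted indirect comparison of
# interventions A and B, each evaluated only against its own comparison group.
# All effect sizes and standard errors below are hypothetical.
import math

def indirect_comparison(effect_a, se_a, effect_b, se_b):
    """Indirectly compare A and B via their effects against their own controls.

    effect_a, effect_b: estimated effects (e.g., mean differences) of A vs. its
        control and B vs. its control; se_a, se_b: the corresponding standard errors.
    Returns the indirect A-vs-B effect, its standard error, and a z statistic.
    """
    diff = effect_a - effect_b              # indirect estimate of A vs. B
    se_diff = math.sqrt(se_a**2 + se_b**2)  # variances add for independent trials
    z = diff / se_diff
    return diff, se_diff, z

# Hypothetical example: A improved outcomes by 0.40 (SE 0.12) over its control;
# B improved outcomes by 0.25 (SE 0.10) over its own, separate control.
diff, se_diff, z = indirect_comparison(0.40, 0.12, 0.25, 0.10)
print(f"Indirect A-B difference = {diff:.2f} (SE {se_diff:.2f}), z = {z:.2f}")
```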
| |
|
Comparative Effectiveness in Educational Settings
|
| Katherine McKnight, Pearson Corporation, kathy.mcknight@gmail.com
|
|
Educational interventions are common, but direct head-to-head comparisons of them are not so common. Most interventions are evaluated by comparison to common or standard practices. Nonetheless, there are instructive instances of comparative effectiveness research on interventions in education. Some of these have been implicit, comparing special educational arrangements with ongoing “regular” education, the Coleman Report being a prime example. Other interventions have been compared more directly, sometimes by dint of coincidence, with two or more interventions happening to occur in the same time frame or the same social/situational frame. And still other, more recent efforts have made direct and deliberate comparisons. The different conceptual, methodological, and statistical issues involved in these efforts are illustrated, and the lessons are instructive.
| |
|
Comparing Tobacco Control Interventions
|
| Frederic Malter, University of Arizona, fmalter@email.arizona.edu
|
|
Interventions aimed at reducing tobacco use, particularly cigarette smoking, have been numerous (an understatement). Relatively infrequent, however, have been attempts to make direct comparisons between different methods of intervention. Hence, comparative effectiveness research in this important social area must rely heavily on inferences involving populations, the equivalence of interventions within classes, the legitimacy of comparisons between classes, and statistical analyses resting on sometimes dubious assumptions. Nonetheless, the importance and the difficulty of the problem require making the best that one can of the data that do exist and of the comparisons that can be made, even if simulated. Data on the variability of results of putatively similar intervention programs are helpful in developing expectations (norms) against which new interventions may be judged. Statistical models may also be useful in identifying interventions that are unusually effective (or ineffective). These are not easy solutions, but “Nature is never embarrassed by difficulties in analysis.”
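As a simplified illustration of the normative use of variability data mentioned above (not taken from the presentation; all numbers are hypothetical), one might pool the results of putatively similar programs into a distribution and ask how unusual a new intervention's result is relative to that norm.

```python
# Illustrative sketch only: judging a new tobacco-control result against a "norm"
# built from the variability of results of putatively similar programs.
# All effect values below are hypothetical.
import statistics

# Hypothetical quit-rate improvements (percentage points) from prior, similar programs.
prior_effects = [2.1, 3.4, 1.8, 2.9, 4.0, 2.5, 3.1, 2.2]

new_effect = 5.2  # hypothetical result from a new intervention

mean = statistics.mean(prior_effects)
sd = statistics.stdev(prior_effects)   # sample standard deviation of the norm
z = (new_effect - mean) / sd           # how unusual is the new result?

print(f"Norm: mean = {mean:.2f}, SD = {sd:.2f}")
print(f"New intervention: effect = {new_effect:.2f}, z = {z:.2f}")
# A large positive z would flag an unusually effective program; a large negative z,
# an unusually ineffective one. This is a deliberately simplified stand-in for the
# kinds of statistical models the abstract alludes to.
```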
| |