| Session Title: The Challenges of Conducting Experimental and Quasi-Experimental Small-Budget Evaluations of Teacher Professional Development Programs: The Case of Teaching American History Grants |
| Multipaper Session 545 to be held in Room 111 in the Convention Center on Friday, Nov 7, 9:15 AM to 10:45 AM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Sally Bond, The Program Evaluation Group LLC, usbond@mindspring.com |
| Abstract: Experimental and quasi-experimental designs are becoming customary in RFPs from federal agencies, including the U.S. Department of Education and the National Science Foundation. This session will explore the application of those designs in small-budget evaluations. Presenters will draw on their years of experience using such designs in Teaching American History (TAH) grant project evaluations to illustrate key features of design and implementation. Particular emphasis will be placed on negotiating program and evaluation design with program staff, matching assessment to content (including the design and validation of locally developed instrumentation), and the challenges of implementing experimental and quasi-experimental evaluations (including factors that necessitate changes in evaluation design and strategies for recruiting control or comparison groups). Presenters will share their own experiences and engage attendees in drawing out lessons learned in this and other evaluation contexts. |
| Negotiating Project Design to Permit Experimental and Quasi-Experimental Evaluation Designs |
| Sally Bond, The Program Evaluation Group LLC, usbond@mindspring.com |
| The need to negotiate program design elements that can accommodate a robust experimental or quasi-experimental design is often overlooked when proposals are developed or programs are designed. Where experimental or quasi-experimental evaluation designs are desirable or required, it is the evaluator's responsibility to collaborate with program staff to (1) educate them about the demands of the desired evaluation design and (2) shape a program that is consistent with both the aims of the project and the needs of the evaluation design. The presenter will illustrate this iterative process of program and evaluation design in a TAH project, including selection of the linchpin assessment of teacher learning that best fit the resulting design and the content to be conveyed. |
| The Critical Role of Assessment in Experimental and Quasi-Experimental Designs |
| Michael Herrick, Herrick Research LLC, herrickresearch@aol.com |
| Teaching American History grants are required to be evaluated with experimental or quasi-experimental designs, and the expected outcomes of these projects are often expressed as gains in student achievement. This presentation will explore two TAH evaluations, each using a different type of assessment to measure student achievement. One project used nationally validated NAEP items, while the other used locally developed and locally validated test items. In the project using NAEP items, the pre-to-post test gains of the experimental group were not statistically different from those of the control group. In the project using locally developed and validated items, the gains of the experimental group over the control group were statistically significant at the .01 level. The primary conclusion was that the NAEP items were not strongly aligned with the content of the project, whereas the locally developed assessments were aligned and, therefore, sensitive to instruction. |
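The kind of analysis described above, comparing pre-to-post gains of an experimental group against a control group, can be sketched as follows. This is a minimal illustration only: all scores are invented for the example (they are not data from either TAH project), and the Welch t statistic shown is one common way to compare independent gain scores, not necessarily the procedure the evaluators used.

```python
# Hypothetical gain-score comparison between an experimental and a
# control group. All numbers below are invented for illustration.
from statistics import mean, stdev

def gains(pre, post):
    """Per-participant gain scores: posttest minus pretest."""
    return [b - a for a, b in zip(pre, post)]

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    nx, ny = len(x), len(y)
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / ((vx / nx + vy / ny) ** 0.5)

# Invented pre/post scores for each group (8 participants each).
exp_pre  = [40, 45, 50, 38, 47, 52, 44, 41]
exp_post = [58, 60, 66, 55, 63, 70, 61, 57]
ctl_pre  = [42, 44, 49, 39, 46, 51, 43, 40]
ctl_post = [46, 47, 53, 42, 50, 55, 47, 44]

t = welch_t(gains(exp_pre, exp_post), gains(ctl_pre, ctl_post))
print(round(t, 2))
```

A t statistic this large would far exceed the two-tailed critical value at the .01 level; with real evaluation data the gap between groups is typically much smaller, which is why alignment between the assessment and the program content (as the abstract argues) matters so much for detecting an effect at all.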
| The Challenges of Implementing Experimental and Quasi-Experimental Designs in Teaching American History Grant Evaluations |
| Kristine Chadwick, Edvantia Inc, kristine.chadwick@edvantia.org |
| Kim Cowley, Edvantia Inc, kim.cowley@edvantia.org |
| Georgia Hughes-Webb, Edvantia Inc, georgia.hughes-webb@edvantia.org |
| Through six projects, TAH program evaluators at Edvantia have taken on the challenge of implementing mandated experimental or quasi-experimental designs on shoestring budgets. Working with slightly different designs and methods, these evaluators have learned lessons about how to adhere to the rigorous evaluation mandate in the program solicitation, soothe client fears, partner with clients to implement these designs, collect data for both formative and summative purposes, stay within budget, and successfully manage at least a few threats to validity. Three main challenges will be discussed. First, evaluations were often downgraded from true experimental designs to quasi-experimental designs, either in the initial design phase or later during project implementation. Second, negotiating and recruiting teacher comparison groups required a different solution in every project. Third, collecting comparison group data has required evaluator flexibility and extensive use of persuasion, highlighting both the technical and people skills evaluators must have to be effective. |