
Session Title: Strategies to Improve Quality of Mixed Methods Evaluations
Multipaper Session 551 to be held in Avalon A on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Mixed Methods Evaluation TIG
Chair(s):
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Qualitative Comparative Analysis as an Evaluation Tool
Presenter(s):
Emmeline Chuang, University of South Florida, echuang@mail.sdsu.edu
Jennifer Craft Morgan, University of North Carolina, Chapel Hill, craft@email.unc.edu
Abstract: Evaluating new programs, partnerships, and/or collaboratives can be challenging, particularly when the number of cases is too small to be analyzed quantitatively but too large for comparative case study analysis. This paper introduces qualitative comparative analysis (QCA) as a technique for systematically assessing cross-case commonalities and differences in moderately sized samples. The utility of QCA is illustrated using data from the national, mixed-methods evaluation of seventeen frontline health workforce development programs implemented by diverse partnerships of health care employers, educational institutions, and other organizations, including workforce intermediaries. Two examples are provided: In the first, QCA is applied to mixed methods data to identify the effect of employers' use of high performance work practices on worker-level outcomes. In the second, QCA is applied to qualitative data to determine how partnership composition influenced programmatic outcomes.
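As a rough illustration of the truth-table logic underlying crisp-set QCA (a generic sketch, not the authors' analysis; the condition names, outcome codings, and consistency threshold below are hypothetical), a minimal example in Python:

```python
# Minimal crisp-set QCA truth-table sketch (illustrative only; condition and
# outcome names are hypothetical, not drawn from the evaluation described above).
from collections import defaultdict

# Each case: binary membership in causal conditions plus the outcome of interest.
cases = [
    {"employer_led": 1, "has_intermediary": 1, "college_partner": 1, "outcome": 1},
    {"employer_led": 1, "has_intermediary": 0, "college_partner": 1, "outcome": 1},
    {"employer_led": 0, "has_intermediary": 1, "college_partner": 1, "outcome": 0},
    {"employer_led": 0, "has_intermediary": 0, "college_partner": 0, "outcome": 0},
    {"employer_led": 1, "has_intermediary": 1, "college_partner": 0, "outcome": 1},
]
conditions = ["employer_led", "has_intermediary", "college_partner"]

# Group cases into truth-table rows: one row per observed configuration.
rows = defaultdict(list)
for case in cases:
    config = tuple(case[c] for c in conditions)
    rows[config].append(case["outcome"])

# Consistency = share of cases in a configuration that display the outcome.
# Configurations above the threshold are candidate sufficient conditions,
# which a full QCA would then simplify via Boolean minimization.
THRESHOLD = 0.8
for config, outcomes in sorted(rows.items()):
    consistency = sum(outcomes) / len(outcomes)
    label = "consistent" if consistency >= THRESHOLD else "inconsistent"
    print(dict(zip(conditions, config)), f"n={len(outcomes)}",
          f"consistency={consistency:.2f}", label)
```

The point of the sketch is only that moderately sized samples (such as the seventeen programs above) can be compared systematically by configuration rather than case by case.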
Content Validity Using Mixed Methods Approach: Its Application and Development Through the Use of a Table of Specifications Methodology
Presenter(s):
Isadore Newman, Florida International University, newmani@fiu.edu
Janine Lim, Berrien RESA, janine@janinelim.com
Fernanda Pineda, Florida International University, mapineda@fiu.edu
Abstract: There is a paucity of literature on content validity (logical validity) and therefore a need for better procedures for estimating its trustworthiness. This article presents four unique examples of the interpretation and application of tables of specifications (ToS) for estimating content validity. A good content validity estimate also requires that the ToS have estimates of reliability; the procedures presented (Lawshe's (1975) Content Validity Ratio and Content Validity Index, together with expert-agreement estimation procedures) enhance both. Developing the logic of the ToS requires presenting evidence for the transparency and trustworthiness of the validity estimates by maintaining an audit trail and by using triangulation, expert debriefing, and peer review. The paper argues that content validity requires a mixed methods approach, since the data are developed through both qualitative and quantitative methods that inform each other. The process is iterative and provides feedback on the effectiveness of the ToS through consensus.
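For reference, the Lawshe (1975) statistics named in the abstract are conventionally defined as follows (a standard formulation, not taken from this paper), where n_e is the number of panelists rating an item "essential," N is the panel size, and k is the number of items retained:

```latex
% Content Validity Ratio (per item) and Content Validity Index (scale level).
% CVR ranges from -1 to +1; CVI is the mean CVR over the k retained items.
\[
  \mathrm{CVR} = \frac{n_e - N/2}{N/2},
  \qquad
  \mathrm{CVI} = \frac{1}{k}\sum_{i=1}^{k} \mathrm{CVR}_i
\]
```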
Balancing Rigor and Relevance in Educational Program Evaluation
Presenter(s):
Rebecca Zulli, University of North Carolina, Chapel Hill, rzulli@unc.edu
Adrienne Smith, University of North Carolina, Chapel Hill, adrsmith@email.unc.edu
Gary Henry, University of North Carolina, Chapel Hill, gthenry@unc.edu
David Kershaw, Slippery Rock University, dckersh@email.unc.edu
Abstract: Over the past decade there has been great debate among evaluation professionals regarding experimental designs and their appropriateness for educational settings. We designed our evaluation of a three-year pilot program implemented in five rural North Carolina school districts to address our own gold standard: Maximizing both Rigor and Relevance. The program was designed to improve recruitment and retention through performance incentives, to improve skills and practice via professional development, and to offer quality afterschool programming. This evaluation necessitated a design that examined program theory, scrutinized implementation, provided timely formative information, and, ultimately, provided summative information regarding program impact on student outcomes. This paper presents how the completed evaluation met the requirements of both rigor and relevance by incorporating 1) logic modeling; 2) qualitative methods, including interviews, focus groups, and observations; 3) descriptive analyses of survey and participation data; and 4) rigorous analytic strategies, including propensity-score matching and value-added models.
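A minimal sketch of the propensity-score-matching step named in the abstract (illustrative only: the simulated data and variable names are hypothetical, and the actual evaluation paired matching with value-added models):

```python
# Illustrative propensity-score matching sketch (hypothetical variables;
# not the evaluation's actual model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
covariates = rng.normal(size=(n, 3))      # e.g., prior scores, poverty, school size
treated = rng.integers(0, 2, size=n)      # 1 = pilot-program district, 0 = comparison
outcome = covariates[:, 0] + 0.3 * treated + rng.normal(size=n)

# Step 1: estimate each unit's propensity to receive the program.
propensity = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

# Step 2: match each treated unit to its nearest comparison unit on the score.
treat_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(propensity[control_idx].reshape(-1, 1))
_, matches = nn.kneighbors(propensity[treat_idx].reshape(-1, 1))
matched_controls = control_idx[matches.ravel()]

# Step 3: estimate the program effect as the mean difference across matched pairs.
att = (outcome[treat_idx] - outcome[matched_controls]).mean()
print(f"Estimated effect on the matched sample (ATT): {att:.3f}")
```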
Evaluation of the New York City Health Bucks Farmers' Market Incentive Program: Demonstrating the Value of Stakeholder Input for Evaluation Design, Implementation, and Dissemination
Presenter(s):
Yvonne Abel, Abt Associates Inc, yvonne_abel@abtassoc.com
Jessica Levin, Abt Associates Inc, jessica_levin@abtassoc.com
Leah Staub-DeLong, Abt Associates Inc, leah_staub-delong@abtassoc.com
Sabrina Baronberg, NYC Department of Health and Mental Hygiene, sbaronbe@health.nyc.gov
Lauren Olsho, Abt Associates Inc, lauren_olsho@abtassoc.com
Deborah Walker, Abt Associates Inc, debbie_walker@abtassoc.com
Jan Jernigan, Centers for Disease Control and Prevention, ddq8@cdc.gov
Gayle Payne, Centers for Disease Control and Prevention, hfn5@cdc.gov
Cheryl Austin, Abt Associates Inc, cheryl_austin@abtassoc.com
Cristina Booker, Abt Associates Inc, cristina_booker@abtassoc.com
Jacey Greece, Abt Associates Inc, jacey_greece@abtassoc.com
Erin Lee, Abt Associates Inc, erin_lee@abtassoc.com
Abstract: The effectiveness of an evaluator can be greatly influenced by the values or principles guiding an evaluation. This presentation highlights the value placed on stakeholder input as an essential component of conducting a CDC-funded evaluation of the New York City Health Bucks program, an initiative designed to increase access to and purchase of fresh fruits and vegetables in three underserved NYC neighborhoods. During both the formative and implementation phases of the evaluation, input from key stakeholders was used to refine and enhance data collection methods. In the dissemination phase, lessons learned throughout the evaluation were combined with findings from key informant interviews conducted specifically to inform the design of an online evaluation toolkit targeting broad stakeholder needs. This session demonstrates the value of collecting stakeholder input when implementing a mixed methods evaluation and showcases the format and content of the toolkit for providing meaningful information back to stakeholders.
Using Relational Databases for Earlier Data Integration in Mixed-methods Approaches
Presenter(s):
Natalie Cook, Cornell University, nec32@cornell.edu
Claire Hebbard, Cornell University, cer17@cornell.edu
William Trochim, Cornell University, wmt1@cornell.edu
Abstract: This paper discusses the challenges of data management and analysis in a mixed-methods research project. The focus of the paper is on the use of a single MS Access database to support both integrated data management and efficient integrated analysis. Modern evaluation teams face a growing set of challenges, and the available technology is often only marginally sufficient to manage the complexity it creates, especially when quantitative and qualitative data are integrated in the analysis. Communication is always critical and, as in this case, becomes even more challenging when team members are geographically dispersed.
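To make the single-database idea concrete, here is a minimal sketch using SQLite in place of the MS Access database described in the abstract (table and field names are hypothetical): both the quantitative and qualitative strands share a case key, so integration becomes a query rather than a late manual merge of separate files.

```python
# Sketch of the relational-database idea using SQLite (the project itself used
# MS Access); table and field names are hypothetical, for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cases (case_id INTEGER PRIMARY KEY, site TEXT);
-- Quantitative records (e.g., survey scores) keyed to the same case IDs...
CREATE TABLE survey (case_id INTEGER REFERENCES cases(case_id), satisfaction REAL);
-- ...as the qualitative records (e.g., coded interview excerpts).
CREATE TABLE interview_codes (case_id INTEGER REFERENCES cases(case_id),
                              code TEXT, excerpt TEXT);
""")
conn.executemany("INSERT INTO cases VALUES (?, ?)", [(1, "Site A"), (2, "Site B")])
conn.executemany("INSERT INTO survey VALUES (?, ?)", [(1, 4.5), (2, 2.0)])
conn.executemany("INSERT INTO interview_codes VALUES (?, ?, ?)",
                 [(1, "trust", "Staff described strong mutual trust."),
                  (2, "turnover", "Frequent turnover disrupted the program.")])

# Integrated analysis: one join pulls both strands for each case.
for row in conn.execute("""
    SELECT c.site, s.satisfaction, i.code, i.excerpt
    FROM cases c
    JOIN survey s USING (case_id)
    JOIN interview_codes i USING (case_id)
"""):
    print(row)
```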
