|
Social Science Standards and Ethics: Development, Comparative Analysis, and Issues for Evaluation
|
| Presenter(s):
|
| Linda Mabry, Washington State University, Vancouver, mabryl@vancouver.wsu.edu
|
| Abstract:
This paper presents a comparative analysis of the Guiding Principles for Evaluators (AEA, 1995), The Program Evaluation Standards (Joint Committee, 1994), and the codes of ethics and standards established by the American Psychological Association (APA, 2007), the American Sociological Association (ASA, 1997), and other social science organizations. The theoretical contexts for the cross-codes analysis are Kohlberg's (1984) theory of human moral development and Rawls's (1971) theory of justice; the historical context is the origin and development over the past half-century of ethical codes by governments and international agencies. Three issues specific to evaluation are discussed: (1) potential conflicts between evaluation's codes of conduct and U.S. legal requirements regarding ethics in social science, (2) the cultural sensitivity and appropriateness of codes of conduct in international and transnational evaluations, and (3) the possibility and advisability of enforcing evaluation codes of conduct.
|
|
Insight into evaluation practice: Results of a content analysis of designs and methods used in evaluation studies published in North American evaluation focused journals
|
| Presenter(s):
|
| Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
|
| Dreolin Fleischer, Claremont Graduate University, dreolin@gmail.com
|
| Abstract:
To describe recent evaluation practice, specifically method and design choices, we performed a content analysis of 117 evaluation studies published in eight North American evaluation-focused journals over a three-year period (2004-2006). We chose this time span because it follows the scientifically-based research (SBR) movement, which prioritizes the use of randomized controlled trials (RCTs) to study programs and policies. The purpose of this study was to determine the designs and data collection methods reportedly used in evaluation practice in light of federal guidelines enacted prior to 2004. Results show that, in contrast to the movement, non-experimental designs dominate the field, that mixed-methods approaches narrowly edged out qualitative methods as the most commonly used, and that the majority of studies reporting statistical significance indicated mixed significance.
|
|
Standards for Evidence-based Practices and Policies: Do Campbell Collaboration, Cochrane Collaboration, and What Works Clearinghouse Research Reviews Produce the Same Conclusions?
|
| Presenter(s):
|
| Chris Coryn, Western Michigan University, chris.coryn@wmich.edu
|
| Michele Tarsilla, Western Michigan University, michele.tarsilla@wmich.edu
|
| Abstract:
Interventions intended to ameliorate, eliminate, reduce, or prevent some persistent, problematic feature of the human condition have existed for millennia. In a climate of increasingly scarce resources and greater demands for accountability, policy makers and practice-based disciplines and professions are, now more than ever, seeking high-quality, non-arbitrary, and defensible evidence for formulating, endorsing, and enforcing “best” policies and practices. In recent decades, randomized experiments, randomized controlled trials, and clinical trials have become the near-universal standard for supporting inferences and claims regarding the efficacy, effectiveness, and, to a lesser extent, generalizability of such interventions. In this presentation the authors present a study of the degree to which the standards applied by major repositories for evidence-based practices and policies produce the same conclusions about the same studies.
|
|
Can Systematic Measurement of an Evaluation’s Goodness of Fit and Its Influence Determine Quality?
|
| Presenter(s):
|
| Janet Clinton, University of Auckland, j.clinton@auckland.ac.nz
|
| Abstract:
Given the expanding and increasingly influential role of evaluation, it is critical that the quality of the evaluation process be more fully scrutinized. While we monitor the quality of evaluation processes, it is rare to consider outcomes that are attributable to the evaluation itself. To understand quality, we must take into account the impact of evaluation on a program’s effectiveness and efficiency.
This paper uses a heuristic to illustrate how program components and evaluation processes can be combined to produce an explanation of effectiveness. The goodness of fit of an evaluation process and an evaluation’s influence can be analysed with appropriate weightings to produce information that supports sound quality judgements. A number of evaluation cases are used to demonstrate a method of monitoring and measuring evaluation influence.
It is argued that a judgement of evaluation quality lies in the systematic measurement of an evaluation’s goodness of fit and its influence.
|