PROFESSIONAL DEVELOPMENT WORKSHOPS

     

    Professional Development Workshops are hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones at Evaluation 2007.

    Professional development workshops precede and follow the conference. These workshops differ from sessions offered during the conference itself in at least three ways: (1) each is longer (3, 6, or 12 hours in length) and thus provides a more in-depth exploration of a skill or area of knowledge; (2) presenters are paid for their time and are expected to have significant experience both presenting and in the subject area; and (3) attendees pay separately for these workshops and are given the opportunity to evaluate the experience. Sessions are filled on a first-come, first-served basis and most are likely to fill before the conference begins.

    Registration: Registration for professional development workshops is handled as part of the conference registration forms; however, you may register for workshops even if you are not attending the conference itself. Simply use the regular conference registration forms and uncheck the conference registration box.

    Fees: Workshop registration fees are in addition to the fees for conference registration:

     

                           Two Day      One Day      Half Day
                           Workshop     Workshop     Workshop

    AEA Members              $300         $150          $75
    Full-time Students       $160          $80          $40
    Nonmembers               $400         $200         $100

     

    Full Sessions: Sessions that are closed because they have reached their maximum attendance will be clearly marked below the session name. No further registrations will be accepted for full sessions and we do not maintain waiting lists. Once sessions are closed, they will not be re-opened.


    Browse by Time Slot:

    Two Day Workshops, Monday and Tuesday, November 5 and 6, 9 AM to 4 PM

    Qualitative Methods; Quantitative Methods; Evaluation 101; Logic Models; Participatory Evaluation; Organizational Collaboration; Survey Design;
    Evaluation Methodology

    One Day Workshops, Tuesday, November 6, 9 AM to 4 PM

    Longitudinal Analysis; RealWorld Evaluation; Developing Questionnaires;
    Growing Your Eval Business; Best Practices in Quant;
    Using Systems Tools; Cultivating Self; Communicating and Reporting; Needs Assessment;
    Qualitative Software

    One Day Workshops, Wednesday, November 7, 8 AM to 3 PM

    Effect Size Measures; Evaluating Advocacy/Policy; Utilization-focused Evaluation; Evaluating Program Implementation; Logic Modeling Success; Concept Mapping; Performance Measurement; Evaluation Dissertation; Theory-Driven Evaluation; Rasch Measurement; Multiple Regression; Public Health Eval; Experimental Design; Visual Presentations; State of the Art; Multilevel Models;
    Collaborative Evaluation

    Half Day Workshops, Wednesday, November 7, 8 AM to 11 AM

    Advanced Performance Measurement; Conducting Online Surveys;
    Racism in Evaluation; Using Systems Thinking; Using Stories

    Half Day Workshops, Wednesday, November 7, 12 PM to 3 PM

    Level Best; Empowerment Evaluation; Focus Group Challenges; Handling Data; Propensity Scores

    Half Day Workshops, Sunday, November 11, 9 AM to 12 PM

    Getting to Outcomes; Conflict-Resolution; Focus Group Moderator;
    Adv Program Theory; Building Evaluation Capacity


    Two Day Workshops, Monday and Tuesday, November 5 and 6, 9 AM to 4 PM


    1. Qualitative Methods

    Qualitative data can humanize evaluations by portraying people and stories behind the numbers. Qualitative inquiry involves using in-depth interviews, focus groups, observational methods, and case studies to provide rich descriptions of processes, people, and programs. When combined with participatory and collaborative approaches, qualitative methods are especially appropriate for capacity-building-oriented evaluations.

    Through lecture, discussion, and small-group practice, this workshop will help you to choose among qualitative methods and implement those methods in ways that are credible, useful, and rigorous. It will culminate with a discussion of new directions in qualitative evaluation.

    You will learn:

    • Types of evaluation questions for which qualitative inquiry is appropriate,

    • Purposeful sampling strategies,

    • Interviewing, case study, and observation methods,

    • Analytical approaches that support useful evaluation.

    Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation and qualitative methods, he published the third edition of Qualitative Research and Evaluation Methods (SAGE) in 2001.

    Session 1: Qualitative Methods
    Scheduled: Monday and Tuesday, November 5 and 6, 9 AM to 4 PM
    Level: Beginner, no prerequisites


    2. Quantitative Methods

    Quantitative data offers opportunities for numerical descriptions of populations and samples. The challenge is in knowing which analyses are best for a given situation. Designed for the practitioner needing a refresher course and/or guidance in applying quantitative methods to evaluation contexts, the workshop covers the basics of parametric and nonparametric statistics, as well as how to report your findings.

    Hands-on exercises and computer demonstrations interspersed with mini-lectures will introduce methods and concepts. The instructor will review examples of research and evaluation questions and the statistical methods appropriate to developing a quantitative data-based response.

    You will learn:

    • The conceptual basis for a variety of statistical procedures,

    • How more sophisticated procedures are based on the statistical basics,

    • Which analyses are most applicable for a given data set or evaluation question,

    • How to interpret and report findings from these analyses.

    Katherine McKnight applies quantitative analysis as Director of Program Evaluation for Pearson Achievement Solutions. Additionally, she teaches Research Methods, Statistics, and Measurement in Public and International Affairs at George Mason University in Fairfax, VA.

    Session 2: Quantitative Methods
    Scheduled:
    Monday and Tuesday, November 5 and 6, 9 AM to 4 PM
    Level: Beginner, no prerequisites


    3. Evaluation 101: Intro to Evaluation Practice

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Begin at the beginning and learn the basics of evaluation from an expert trainer. The session will focus on the logic of evaluation to answer the key question: "What resources are transformed into what program evaluation strategies to produce what outputs for which evaluation audiences, to serve what purposes?" Enhance your skills in planning, conducting, monitoring, and modifying the evaluation so that it generates the information needed to improve program results and communicate program performance to key stakeholder groups.

    A case-driven instructional process, using discussion, exercises, and lecture, will introduce the steps in conducting useful evaluations: Getting started, Describing the program, Identifying evaluation questions, Collecting data, Analyzing and reporting, and Using results.

    You will learn:

    • The basic steps to an evaluation and important drivers of program assessment,

    • Evaluation terminology,

    • Contextual influences on evaluation and ways to respond,

    • Logic modeling as a tool to describe a program and develop evaluation questions and foci,

    • Methods for analyzing and using evaluation information.

    John McLaughlin has been part of the evaluation community for over 30 years working in the public, private, and non-profit sectors. He has presented this workshop in multiple venues and will tailor this two-day format for Evaluation 2007.

    Session 3: Evaluation 101
    Scheduled:
    Monday and Tuesday, November 5 and 6, 9 AM to 4 PM
    Level: Beginner, no prerequisites


    4. Logic Models for Program Evaluation and Planning

    Many programs fail to start with a clear description of the program and its intended outcomes, undermining both program planning and evaluation efforts. The logic model, as a map of what a program is and intends to do, is a useful tool for clarifying objectives, improving the relationship between activities and those objectives, and developing and integrating evaluation plans and strategic plans.

    First, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining and validating models. Then, we'll examine how to improve logic models using some fundamental principles of "program theory", and, finally, demonstrate how to use logic models effectively to help frame questions in evaluation, performance measurement, and strategic planning. Both days use modules with presentations, small group case studies, and debriefs to reinforce group work.

    You will learn:

    • To construct logic models,

    • To use program theory principles to improve a logic model,

    • To develop an evaluation focus based on a logic model,

    • To use logic models to answer strategic planning questions and select and develop performance measures.

    Thomas Chapel is the central resource person for planning and program evaluation at the Centers for Disease Control and Prevention and a sought-after trainer. Tom has taught this workshop for the past four years to much acclaim.

    Session 4: Logic Models
    Scheduled:
    Monday and Tuesday, November 5 and 6, 9 AM to 4 PM
    Level: Beginner, no prerequisites


    5. Participatory Evaluation

    Participatory evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. This workshop will provide you with theoretical grounding (social interdependence theory, conflict theory, and evaluation use theory) and practical frameworks for analyzing and extending your own practice.

    Through presentations, discussion, reflection, and case study, you will experience strategies to enhance participatory evaluation and foster interaction. You are encouraged to bring examples of challenges faced in your own practice for discussion; this workshop is consistently lauded for its ready applicability to real-world evaluation contexts.

    You will learn:

    • Strategies to foster effective interaction, including belief sheets; values voting; three-step interview; cooperative rank order; graffiti; jigsaw; and data dialogue,

    • Responses to challenges in participatory evaluation practices,

    • Four frameworks for reflective evaluation practice.

    Jean King has over 30 years of experience as an award-winning teacher at the University of Minnesota. As an evaluation practitioner, she has received AEA’s Myrdal award for outstanding evaluation practice. Laurie Stevahn is a professor at Seattle University with extensive facilitation experience as well as applied experience in participatory evaluation.

    Session 5: Participatory Evaluation
    Prerequisites:
    Basic evaluation skills

    Scheduled:
    Monday and Tuesday, November 5 and 6, 9 AM to 4 PM
    Level: Intermediate


    6. Evaluating Inter- and Intra-Organizational Collaboration

    “Collaboration” is a ubiquitous, yet misunderstood, under-empiricized and un-operationalized construct. Program and organizational stakeholders looking to do and be collaborative struggle to identify, practice and evaluate it with efficacy.    

    This workshop aims to increase participants’ capacity to quantitatively and qualitatively examine the development of inter- and intra-organizational partnerships. Assessment strategies and specific tools for data collection, analysis and reporting will be presented. You will practice using assessment techniques that are currently being employed in the evaluation of PreK-16 educational reform initiatives and other grant-sponsored endeavors including the Safe School/Healthy Student initiative. The processes and tools are applicable across all areas of practice from health and human services to business to governmental networks and agencies.

    You will learn:

    • The principles of collaboration so as to understand and be able to evaluate the construct,

    • Specific strategies, tools and protocols used in qualitative and quantitative assessment,

    • How to assess formatively the development of interpersonal and intra-organizational collaboration in grant-funded programs,

    • How stakeholders use the evaluation process and findings to improve organizational collaboration.

    Rebecca Gajda has facilitated workshops and courses for adult learners for more than 10 years and is the Director of Educational Research and Evaluation for a large-scale school improvement initiative. Her most recent publication on the topic of organizational collaboration can be found in the March 2007 issue of AJE.

    Session 6: Organizational Collaboration
    Prerequisites:
    Basic understanding of organizational change theory/systems theory and familiarity with mixed methodological designs.
    Scheduled:
    Monday and Tuesday, November 5 and 6, 9 AM to 4 PM
    Level: Intermediate


    7. Survey Design and Administration

    A standout from the 2006 program, this workshop has been updated and expanded to a two-day offering for 2007. Designed for beginners with little or no background in survey development, it will introduce you to the fundamentals of survey design and administration and leave you with tools for developing and improving your own surveys as part of your evaluation practice.

    This interactive workshop will combine direct instruction with hands-on opportunities for participants to apply what is learned to their own evaluation projects. We will explore different types of surveys, how to identify the domains a survey should cover, how to choose the right survey type, how to administer the survey, and how to increase response rates and data quality. You will receive handouts with sample surveys, item-writing tips, checklists, and resource lists for further information.

    You will learn:

    • The various types and formats of surveys,

    • Procedures for high quality survey design,

    • How to write high quality questions,

    • Strategies for increasing reliability and validity.

    Courtney Malloy and Harold Urman are consultants at Vital Research, a research and evaluation firm that specializes in survey design. They both have extensive experience facilitating workshops and training sessions on research and evaluation for diverse audiences.

    Session 7: Survey Design
    Scheduled:
    Monday and Tuesday, November 5 and 6, 9 AM to 4 PM
    Level: Beginner, no prerequisites


    8. Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation

    This session has been cancelled.

    Session 8: Evaluation Methodology


    One Day Workshops, Tuesday, November 6, 9 AM to 4 PM


    9. Introduction to Longitudinal Analysis

    Many evaluation studies make use of longitudinal data. However, while much can be learned from repeated measures, the analysis of change is also associated with a number of special problems, e.g. the unreliability of change scores. This workshop reviews how traditional methods in the analysis of change, such as the paired t-test and repeated measures ANOVA or MANOVA, address these problems. From there, we will move to the core of the workshop, an introduction to latent growth curve modeling (LGM) and how to specify, estimate, and interpret growth curve models.

    The workshop will be delivered as a mixture of PowerPoint presentation, group discussion, and exercises with a special focus on model specification. Processes for setting up and estimating models will be demonstrated using different software packages, and a number of practical examples will help to illustrate the material. You will receive all slides as handouts as well as recommendations for further reading and study.

    You will learn:

    • How to detect reliable sources of variance in individual differences of intraindividual change,

    • Special problems associated with the analysis of longitudinal data,

    • Important assumptions of traditional methods for the analysis of change,

    • The advantages and limitations of conventional techniques for the analysis of change,

    • How to specify, estimate and interpret latent growth curve models (LGM),

    • Recent developments in latent growth curve modeling.
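
    For orientation, a linear latent growth curve model is often written in the following conventional notation (a minimal sketch for reference only; the workshop's own notation and software examples may differ):

        y_{ti} = \eta_{0i} + \lambda_t \eta_{1i} + \varepsilon_{ti}, \qquad
        \eta_{0i} = \alpha_0 + \zeta_{0i}, \quad \eta_{1i} = \alpha_1 + \zeta_{1i}

    Here y_{ti} is person i's score at occasion t, \eta_{0i} and \eta_{1i} are the latent intercept and slope factors, \lambda_t are fixed time scores (e.g., 0, 1, 2, ...), and the \zeta terms capture individual differences around the mean intercept \alpha_0 and mean growth rate \alpha_1.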

    Manuel C Voelkle is a research associate at the University of Mannheim where he teaches courses on multivariate data analysis and research design and methods. Werner W. Wittmann is professor of psychology at the University of Mannheim, where he heads a research and teaching unit specializing in research methods, assessment and evaluation research.

    Session 9: Longitudinal Analysis
    Prerequisites: Basic understanding of structural equation models and regression analytic techniques. Experience with analyzing longitudinal data is an advantage but not necessary.

    Scheduled: Tuesday, November 6, 9:00 AM to 4:00 PM
    Level: Intermediate


    10. RealWorld Evaluation: Conducting Evaluations with Budget,
    Time, Data and Political Constraints

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    What do you do when asked to perform an evaluation on a program that is well underway? When questions about baseline data and control groups are met with blank stares? When time and resources are few, yet clients expect “rigorous impact evaluation”? When there are political pressures to address?

    The RealWorld Evaluation approach will be introduced and its practical utility assessed through presentations and discussion, and through examples drawn from the experiences of presenters and participants. This well-developed seven-step approach seeks to ensure the best quality evaluation under real-life constraints.

    You will learn:

    • The seven steps of the RealWorld Evaluation approach,

    • Context-responsive evaluation design alternatives,

    • Ways to reconstruct baseline data,

    • How to identify and overcome threats to the validity or adequacy of evaluation methods.

    Jim Rugh and Michael Bamberger recently co-authored, with Linda Mabry, the book RealWorld Evaluation: Working Under Budget, Time, Data and Political Constraints (SAGE, 2006). The two presenters bring over eighty years of combined professional evaluation experience, mostly in developing countries around the world.

    Session 10: RealWorld Evaluation
    Prerequisites: Academic or practical knowledge of the basics of evaluation.
    Scheduled:
    Tuesday, November 6, 9:00 AM to 4:00 PM
    Level: Intermediate


    11. Developing Reliable and Valid Questionnaires

    Increasingly, individuals and organizations are being asked to collect, manage, and use information for decision-making, particularly to improve the quality of services and products. Rather than basing decisions on hunches and intuition, decision-making is viewed as being a “data-driven” process, one which is systematic and produces trustworthy information.

    Employing lecture, hands-on exercises, and discussion, this workshop will focus on developing reliable and valid questionnaires. A variety of both supply and selection item formats will be presented, including short answer, fill in the blank, paired-comparison ranking, rating scales, checklists, etc. Types of reliability to be discussed include measures of stability over time and instrument consistency. Validity discussion will focus on face, content, criterion-related, and construct validity. Overall, we will emphasize the practical “how to” aspects of developing good questionnaires and observational instruments.

    You will learn:

    • Ways that instruments are used for decision-making, research and evaluation,

    • How research methodology may influence the choice of instrument,

    • Approaches to constructing instruments and the pros and cons of each approach,

    • Ways to demonstrate the validity and reliability of the results produced by an instrument.
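
    As one concrete illustration of the "instrument consistency" idea above, internal consistency is often summarized with Cronbach's alpha (shown here only as an example; the workshop discusses several types of reliability):

        \alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right)

    where k is the number of items, \sigma^2_{Y_i} is the variance of item i, and \sigma^2_X is the variance of the total score; values closer to 1 indicate more consistent responding across items.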

    David Colton and Robert W. Covert teach instrument construction at the Curry School of Education, University of Virginia. The presenters are coauthors of the text Designing and Constructing Instruments for Social Research and Evaluation to be published this summer by Jossey-Bass.

    Session 11: Developing Questionnaires
    Scheduled:
    Tuesday, November 6, 9:00 AM to 4:00 PM
    Level: Beginner


    12. Growing Your Evaluation Business from Surviving to Thriving

    Interested in growing your evaluation business? This workshop brings together business, service, and marketing concepts from recent publications such as Jim Collins' Good to Great, Robert Schwartz and John Mayne's Quality Matters, and the recent New Directions for Evaluation issue on Independent Evaluation Consulting, and applies them to small evaluation consulting firms.

    The workshop begins with a self-assessment where attendees rate their evaluation businesses in terms of being a sustainable asset. You will then look into the future and dream about what you would like your business to become in terms of sales, profitability and sustainability. The remainder of the workshop focuses on ways to get from where the business is today to where you would like your business to be in the future. The workshop will be highly interactive and use numerous real-life situations for analysis and recommendations for ways to proactively and deliberately grow.

    You will learn:

    • To move your evaluation business forward in terms of sales, profitability and sustainability,

    • To build a sustainable plan for marketing your evaluation services,

    • Methods for assuring the highest quality of evaluation services to clients,

    • Ways to structure your company and services so that your business becomes a saleable asset.

    Melanie Hwalek is the founder and owner of SPEC Associates, a program evaluation and research company that has thrived over the past 27 years. She is co-author of the 2006 New Directions for Evaluation article "Building Your Evaluation Business into a Valuable Asset.”  Victoria Essenmacher is a partner and business manager of SPEC Associates and has provided extensive consulting to non-profit organizations on issues of high-quality performance measurement systems.

    Session 12: Growing your Eval Business
    Prerequisites:
    Experience conducting evaluations as a small business owner or self-employed contractor.
    Scheduled:
    Tuesday, November 6, 9:00 AM to 4:00 PM
    Level: Intermediate


    13. Best Practices in Quantitative Methods: Attending to the Little Things Makes a Big Difference

    Learn the latest advances in data management, statistical testing, and outcome measurement! Best practices are common in every field. Program evaluators and, in particular, quantitatively-oriented evaluators, ought to have the same benefit of keeping abreast of these "best" practices as their professional counterparts in other fields.

    Through short lectures, didactic inquiry, and demonstrations, the session will explore data handling (coding and transforming variables, computing new variables, and working with missing data), statistical testing (including statistical power and effect size estimation), and quantitatively capturing outcomes in program and policy implementation.

    You will learn:

    • Best practices for handling and managing data including coding and transformation of variables,

    • Best practices for statistical testing including estimating statistical power and effect size,

    • Best practices for capturing outcomes including designing useful measures for relevant outcomes.
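
    As a small, hedged illustration of the statistical power topic above, the sketch below estimates the per-group sample size for a two-group comparison in Python using the statsmodels package; the software choice and the numbers are illustrative assumptions, not the workshop's own materials.

        # Illustrative a priori power analysis for a two-group comparison.
        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        # Sample size per group needed to detect a medium effect (d = 0.5)
        # with alpha = .05 and power = .80, two-sided test.
        n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                           power=0.8, alternative='two-sided')
        print(round(n_per_group))  # roughly 64 per group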

    Patrick McKnight is a professor of psychology at George Mason University where he teaches statistics and methods courses, and is the co-chair of AEA's Quantitative Methods Topical Interest Group (TIG). An experienced facilitator, his engaging style renders the complex accessible and well worth the time and investment.

    Session 13: Best Practices in Quant
    Prerequisites:
    Working knowledge of a statistical package and a sound understanding of univariate, bivariate (correlations and t-tests), and multivariate (GLM, ANOVA, multiple regression) statistical procedures.
    Scheduled:
    Tuesday, November 6, 9:00 AM to 4:00 PM
    Level: Intermediate


    14. Using Systems Tools in Evaluation Situations

    The field of systems inquiry is as diverse and complicated as the field of evaluation. When the two are placed side-by-side, the complexities seem to multiply. The purpose of this session is to bring the use of systems concepts down to earth for real-world evaluation scholars and practitioners.

    Over the course of the day, we will address a variety of difficult questions: When and why should an evaluator think about systemic aspects of a situation? What does it mean to treat a situation in a systemic way? How can I take the step from thinking about situations systemically to evaluating them systemically? Through short lecturettes, reflection, experience, and group discussion, you will explore these questions and come to a personal realization about what a systems approach would mean to your own evaluation practice.

    You will learn:

    • Basic systems principles that underpin system tools,

    • Which systems tools are appropriate for particular evaluation tasks,

    • Three useful evaluation tools from three distinct systems traditions.

    Bob Williams is an independent consultant who has been at the forefront of incorporating systems based ideas into evaluation practice. His own experience of using systems theory in practice dates back over 30 years. Glenda Eoyang is founding Executive Director of the Human Systems Dynamics Institute. Among other publications, she is the author or co-author of Coping with Chaos: Seven Simple Tools, and Facilitating Organization Change: Lessons from Complexity Science.

    Session 14: Systems Tools
    Prerequisites: Knowledge of multiple evaluation methods and experience conducting evaluations, basics of qualitative analysis
    Scheduled:
    Tuesday, November 6, 9:00 AM to 4:00 PM
    Level: Intermediate


    15. Lenses, Filters, Frames: Cultivating Self as Responsive Instrument

    Evaluative judgments are inextricably bound up with culture and context and call for diversity-grounded, multilateral self-awareness. Excellence and ethical practice in evaluation are intertwined with orientations toward, responsiveness to, and capacities for engaging diversity. Breathing life into this expectation calls for critical ongoing personal homework for evaluators regarding their lenses, filters and frames vis-a-vis judgment-making.

    Together, we will cultivate a deliberative forum for exploring these issues using micro-level assessment processes that will help attendees to explore mindfully the uses of self as knower, inquirer and engager of others within as well as across salient diversity divides. We often look but still do not see, listen but do not hear, touch but do not feel. Evaluators have a professional and ethical responsibility to address the ways our lenses, filters and frames may obscure or distort more than they illuminate.
     

    You will learn:

    • To cultivate the self as responsive instrument and understand yourself in dynamically diverse contexts,

    • To expand and enrich your diversity-relevant knowledge and skills repertoire,

    • To engage in ongoing assessment of your own lenses, filters, and frames,

    • To engage in empathic perspective taking,

    • To develop intercultural/multicultural competencies as process and stance and not simply as a status or fixed state of being.

    Hazel Symonette brings over 30 years of work in diversity-related arenas and currently serves as a senior policy/planning analyst at the University of Wisconsin-Madison. She designed, and has offered annually, the Institute on Program Assessment for over 10 years. Her passion lies in expanding the cadre of practitioners who embrace end-to-end evaluative thinking/praxis within their program design and development efforts.

    Session 15: Cultivating Self
    Scheduled:
    Tuesday, November 6, 9:00 AM to 4:00 PM
    Level: Beginner, no prerequisites


    16. Evaluation Strategies for Communicating and Reporting

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Communicating evaluation processes and results is one of the most critical aspects of evaluation practice. Yet, evaluators continually experience frustration with hours spent on writing reports that are seldom read or shared. While final reports will continue to be an expectation of many evaluation contracts, there are other ways in which evaluators can communicate and report on the progress and findings from an evaluation.

    Using hands-on demonstrations and real-world examples, we will explore how a variety of strategies for communicating and reporting can increase learning from the evaluation’s findings, stakeholders’ understanding of evaluation processes, the evaluation’s credibility, and action on the evaluation’s recommendations.

    You will learn:

    • Reasons for communicating and reporting throughout an evaluation’s life cycle,

    • How stakeholders’ information needs influence your choice of communicating approaches,

    • More than 15 strategies for communicating and reporting evaluation processes and findings.

    Rosalie T Torres is president of Torres Consulting Group, a research, evaluation, and management consulting firm specializing in the feedback-based development of programs and organizations since 1992. She has authored or co-authored numerous books and articles, including Evaluation Strategies for Communicating and Reporting (Torres, Preskill, & Piontek, 2005) and Evaluative Inquiry for Learning in Organizations (Preskill & Torres, 1999).

    Session 16: Communicating and Reporting
    Scheduled:
    Tuesday, November 6, 9:00 AM to 4:00 PM
    Level: Beginner, no prerequisites


    17. Introduction to Needs Assessment and Designing Needs Assessment Surveys

    Assessing needs is a task often assigned to evaluators with the assumption that they have been trained in or have experience with the activity. However, surveys of evaluation training have indicated that only one formal course on the topic was being taught in university-based evaluation programs.

    This workshop uses multiple hands-on activities interspersed with mini-presentations and discussions to provide an overview of needs assessment and a strong emphasis on designing needs assessment surveys. The focus will be on basic terms and concepts, models of needs assessment, steps necessary to conduct a needs assessment, and an overview of methods with particular focus on the design and nature of needs assessment surveys.

    You will learn:

    • Definitions of need and needs assessment,

    • Models of needs assessment with emphasis on a comprehensive 3-phase model,

    • How to plan a needs assessment through the use of a Needs Assessment Committee,

    • How to design and analyze a needs assessment survey,

    • Qualitative techniques to improve needs assessment.

    James Altschuld is a well-known author and trainer in the area of needs assessment and was a pioneer in offering academic training in needs assessment to evaluators. His recent publications include co-authorship of the text From Needs Assessment to Action: Transforming Needs into Solution Strategies (SAGE, 2000).

    Session 17: Needs Assessment
    Scheduled:
    Tuesday, November 6, 9:00 AM to 4:00 PM
    Level: Beginner, no prerequisites


    18. Qualitative Software: Considerations of Context and Analysis

    This workshop is based on the premise that the use of qualitative software does not threaten the methodological integrity of qualitative researchers’ work, but rather such software serves as a tool to encourage researchers to maintain their role as primary agents of their analysis. Coding and qualitative software are presented as heuristic devices that assist the search for meaning in qualitative data. 

    The agenda is designed to use practical experience with real data to direct conversation around important principles that shape qualitative analysis. “Context” is explored from several angles as a way to emphasize the importance of movement from the particular to the holistic. Pre-code work can outline the context of data collection episodes. Code evolution should occur with conscious attention to the context of an entire research project. Memo writing is presented as a resource for bringing the context of real-life meaning to what we see in the data.

    You will learn:

    • How and when to integrate qualitative software into the analysis process,

    • The value of context in analytic decision-making,

    • Processes that support the evolution of coding qualitative data,

    • Strategies for moving through coding to latter phases of ascertaining meaning from qualitative data.

    Ray Maietta is President and founder of ResearchTalk Inc, a qualitative inquiry consulting firm. He is an active qualitative researcher, research consultant, and teacher of qualitative analysis. Over 10 years of consultation with qualitative researchers provide the backdrop of this workshop, which uses materials from a manuscript in preparation by the facilitator, Sort and Sift, Think and Shift, to be completed in 2008.  

    Session 18: Qualitative Software
    Prerequisites: Basic understanding of qualitative data analysis.
    Scheduled:
    Tuesday, November 6, 9:00 AM to 4:00 PM
    Level: Intermediate


    One Day Workshops, Wednesday, November 7, 8 AM to 3 PM


    19. Using Effect Size and Association Measures

    Answer the call to report effect size and association measures as part of your evaluation results. Improve your capacity to understand and apply a range of measures including: standardized measures of effect sizes from Cohen, Glass, and Hedges; Eta-squared; Omega-squared; the Intraclass correlation coefficient; and Cramer’s V.

    Through mini-lecture, hands-on exercises, and demonstration, you will improve your understanding of the theoretical foundation and computational procedures for each measure as well as ways to identify and correct for bias.

    You will learn:

    • How to select, compute, and interpret the appropriate measure of effect size or association,

    • Considerations in the use of confidence intervals,

    • SAS and SPSS macros to compute common effect size and association measures,

    • Basic relationships among the measures.
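
    The workshop's hands-on materials are SAS and SPSS macros; purely to illustrate the kind of computation involved, here is a minimal Python/NumPy sketch of Cohen's d with a pooled standard deviation and the Hedges small-sample correction (data values are invented).

        import numpy as np

        def cohens_d(group1, group2):
            """Cohen's d with pooled SD, plus Hedges' small-sample correction."""
            g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
            n1, n2 = len(g1), len(g2)
            # Pooled standard deviation from the two unbiased sample variances
            s_pooled = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
                               / (n1 + n2 - 2))
            d = (g1.mean() - g2.mean()) / s_pooled
            g = d * (1 - 3 / (4 * (n1 + n2) - 9))  # bias-corrected (Hedges' g)
            return d, g

        d, g = cohens_d([5, 6, 7, 8], [3, 4, 5, 6])
        print(d, g)  # the 2-point mean difference is about 1.55 pooled SDs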

    Jack Barnette hails from The University of Alabama at Birmingham. He has been conducting research and writing on this topic for the past ten years. Jack has won awards for outstanding teaching and is a regular facilitator both at AEA's annual conference and the CDC/AEA Summer Evaluation Institute.

    Session 19: Effect Size Measures
    Prerequisites: Univariate statistics through ANOVA and understanding of and use of confidence levels.
    Scheduled: Wednesday, November 7, 8:00 am to 3:00 pm
    Level: Advanced


    20. Evaluating Advocacy and Policy Change Efforts

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Evaluations of advocacy, community organizing, and other policy change efforts present unique challenges for evaluators, particularly those looking to use evaluation for ongoing learning. On the ground, change can take years to happen, and when it does, it may occur in fits and starts. Outside forces can affect efforts in unforeseen ways, causing advocates' strategies to shift, goals to be abandoned, and new goals to be taken up. And current policy losses can belie gains that spell future success. Evaluators need strategies for addressing these challenges in evaluation design and implementation.

    Through lecture, discussion, demonstration, and hands-on activities, this workshop will walk participants through a variety of strategies for evaluating advocacy and policy change efforts.  We will draw from specific case studies that address real-world challenges and discuss ways to overcome them. 

    You will learn:

    • Frameworks and guidelines for conducting advocacy evaluations,

    • Ways to create nimble and flexible evaluations that allow for real-time improvement,

    • Practical tools that can assist with evaluation efforts,

    • Techniques to identify outcomes that can be used as milestones for success.

    Justin Louie, a consultant with Blueprint Research & Design, Inc., works with nonprofits and foundations to help them evaluate their advocacy efforts, and has conducted leading research on this topic for The California Endowment. Ehren Reed, a Senior Associate with Innovation Network, Inc., leads a number of evaluations of policy change initiatives and conducts field-building research for national foundations.

    Session 20: Evaluating Advocacy/Policy
    Scheduled: Wednesday, November 7, 8:00 am to 3:00 pm
    Prerequisites: Basic Evaluation Skills
    Level: Intermediate


    21. Utilization-focused Evaluation

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Evaluations should be useful, practical, accurate and ethical. Utilization-focused Evaluation is a process that meets these expectations and promotes use of evaluation from beginning to end. With a focus on carefully targeting and implementing evaluations for increased utility, this approach encourages situational responsiveness, adaptability and creativity.

    With an overall goal of teaching you the process of Utilization-focused Evaluation, the session will combine lectures with concrete examples and interactive case analyses, including cases provided by the participants.

    You will learn:

    • Basic premises and principles of Utilization-focused Evaluation (U-FE),

    • Practical steps and strategies for implementing U-FE,

    • Strengths and weaknesses of U-FE, and situations for which it is appropriate.

    Michael Quinn Patton is an independent consultant and professor at the Union Institute and an internationally known expert on Utilization-focused Evaluation. This workshop is based on the newly completed fourth edition of his best-selling evaluation text, Utilization-Focused Evaluation: The New Century Text (SAGE).

    Session 21: Utilization-focused
    Scheduled: Wednesday, November 7, 8:00 am to 3:00 pm
    Level: Beginner, no prerequisites


    22. Evaluating Program Implementation: Concepts, Methods, and Applications

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Monitoring the manner and degree to which a program, service, or treatment is implemented is key to valid process and outcome evaluation. Yet, while many program stakeholders and evaluators have a general awareness of the diagnostic value associated with measuring and documenting the delivery and receipt of a planned intervention, the collection and examination of implementation data is often an overlooked or contentious aspect of the evaluation process.

    Through lecture, discussion, demonstration, and hands-on activities, this workshop will explore the benefits offered and the challenges posed by the collection and usage of implementation data. Qualitative and quantitative measurement and analytic strategies will be presented, and the merits of strict adherence to and strategic adaptation of program protocol will be considered.

    You will learn:

    • How the collection and usage of implementation data can strengthen an evaluation,

    • How program theory can be used to identify key intervention components,

    • Selected approaches to measuring implementation,

    • The types of analyses that implementation data facilitate,

    • How to interpret and report findings from these analyses.

    Keith Zvoch is an assistant professor at the University of Oregon with over ten years experience designing and conducting evaluations of educational and social service interventions. Lawrence Letourneau is a federal programs administrator at the University of Nevada, Las Vegas (UNLV) involved in all aspects of service delivery, management, and evaluation of UNLV’s suite of 16 college access programs.

    Session 22: Evaluating Program Implementation
    Scheduled: Wednesday, November 7, 8:00 AM to 3:00 PM
    Prerequisites: A basic understanding of research design and statistics.
    Level: Intermediate


    23. Logic Modeling for Program Success

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    This workshop provides a practical framework for developing logic models that has been used throughout the United States, Canada, and Africa. You will practice the skills necessary to develop a logic map for a problem of interest, prioritize the underlying conditions appearing in the logic map for strategy development, and identify potential measures to assess the underlying conditions. Overall, use of the logic modeling process will help to ensure programs have the best chance of producing intended outcomes.

    Through mini-lectures, discussion, and small group exercises, we will explore the logic modeling process, including how to avoid activity traps, identify antecedent conditions, and set up a program for success.

    You will learn:

    • A three step logic modeling process,

    • How to use the logic modeling process to complete the logic model table often required by funding agencies,

    • Ways to ensure a program has the best chance of producing its intended effect.

    Ralph Renger will lead a team of experienced facilitators who have offered training in logic modeling to learners at all levels. The facilitation team developed the three step approach to logic models and have worked with local, state, national, and international agencies to develop new programs and restructure existing programs using the three step logic modeling process.

    Session 23: Logic Modeling Success
    Prerequisites: Basic understanding of logic models and familiarity with completing logic models for projects.

    Scheduled: Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Intermediate


    24. Concept Mapping for Evaluation: A Mixed Methods, Participatory Approach

    “Concept mapping” is a tool for assisting and enhancing many types of thinking. This methodology can help a group describe and organize its assessment of a topic. Ideas are represented visually in a series of easy-to-read graphics that capture specific ideas generated by a group; relationships between ideas; how ideas cluster together; and how those ideas are valued.

    This workshop explores this methodology using lecture, group discussion, and project examples. There will be a particular focus on the planning stages of a project, as the decisions at this stage are applicable to any participatory project. A secondary focus will be on the unique analyses that create a shared conceptual framework for complex, systems-based issues and represent it in easy-to-read visuals.

    You will learn:

    • Key principles, decisions and steps in the engagement of stakeholders in systems-based evaluation,

    • How to describe and to recognize appropriate applications of the concept mapping methodology,

    • The steps in the concept mapping methodology and how those can be adapted to various situations,

    • How the concept mapping analysis converts qualitative input into quantitative data that is useful in evaluation projects,

    • How to apply the methodology to your own projects.
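
    Purely as a hypothetical sketch of one analytic step commonly associated with concept mapping, the Python code below turns participants' card sorts into a statement-by-statement co-occurrence matrix and clusters the statements; the workshop's own software and full analysis sequence (e.g., multidimensional scaling before clustering) may differ.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # Each sort maps a statement index to a pile id (5 statements, 2 sorters).
        sorts = [
            {0: 0, 1: 0, 2: 1, 3: 1, 4: 1},   # sorter A
            {0: 0, 1: 0, 2: 0, 3: 1, 4: 1},   # sorter B
        ]
        n = 5
        co = np.zeros((n, n))
        for sort in sorts:
            for i in range(n):
                for j in range(n):
                    if sort[i] == sort[j]:
                        co[i, j] += 1          # sorted together -> more similar

        distance = len(sorts) - co             # convert similarity to distance
        condensed = distance[np.triu_indices(n, k=1)]  # upper triangle as vector
        labels = fcluster(linkage(condensed, method='average'),
                          t=2, criterion='maxclust')
        print(labels)                          # cluster label for each statement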

    Mary Kane and Kathleen Quinlan are, respectively, President and Senior Consultant at Concept Systems, Inc, a consulting company that uses the concept mapping methodology as a primary tool in its planning and evaluation consulting projects. William Trochim is a Professor and Director of Evaluation for Extension and Outreach at Cornell University and the author of many peer-reviewed publications and countless conference presentations on the methodology.

    Session 24: Concept Mapping
    Scheduled: Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Beginner, no prerequisites


    25. Performance Measurement in the Public and Nonprofit Sectors

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Managing for Results! Performance-Based Budgeting! Balanced Scorecards! Dashboards! Program managers and executives in the public and nonprofit sectors are being pushed to embrace these approaches to assessing how well their programs and agencies are doing. The unifying thread linking all of these efforts is performance measurement. So what is needed to measure performance in an effective and useful manner?

    This workshop will provide you with instruction, materials and exercises to increase your understanding of what constitutes performance measurement and how to measure program performance in the public and nonprofit sectors.

    You will learn:

    • How to identify pressures and opportunities for measuring performance,

    • The political challenges to measuring performance and how to respond to them,

    • Ways to assess the reliability and validity of performance measures,

    • How to identify performance measures for social services,

    • Approaches to getting performance measures used.

    Kathryn Newcomer is the Director of the PhD in Public Policy and Administration program at The George Washington University, where she teaches public and nonprofit program evaluation, research design, and applied statistics. She conducts research and training for federal and local government agencies on performance measurement and program evaluation, and has published five books and numerous articles about performance measurement in the government and nonprofit sectors.

    Session 25: Performance Measurement
    Scheduled: Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Beginner, no prerequisites


    26. How to Prepare an Evaluation Dissertation Proposal

    Developing an acceptable dissertation proposal often seems more difficult than conducting the actual research. Further, proposing an evaluation as a dissertation study can raise faculty concerns of acceptability and feasibility. This workshop will lead you through a step-by-step process for preparing a strong, effective dissertation proposal with special emphasis on the evaluation dissertation.

    The workshop will cover such topics as the nature, structure, and multiple functions of the dissertation proposal; how to construct a compelling argument; how to develop an effective problem statement and methods section; and how to provide the necessary assurances to get the proposal approved. Practical procedures and review criteria will be provided for each step. The workshop will emphasize application of the knowledge and skills taught to the participants’ personal dissertation situation through the use of an annotated case example, multiple self-assessment worksheets, and several opportunities for questions of personal application.

    You will learn:

    • The pros and cons of using an evaluation study as dissertation research,

    • How to construct a compelling argument in a dissertation proposal,

    • The basic process and review criteria for constructing an effective problem statement and methods section,

    • How to provide the assurances necessary to guarantee approval of the proposal.

    Nick L Smith is the co-author of How to Prepare a Dissertation Proposal from Syracuse University Press and a past president of AEA. He has taught research and evaluation courses for over 20 years at Syracuse University and is an experienced workshop presenter through NOVA University's doctoral program in evaluation.

    Session 26: Evaluation Dissertation
    Scheduled: Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Beginner, no prerequisites


    27. Theory-Driven Evaluation for Assessing and
    Improving Program Planning, Implementation, and Effectiveness

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Learn the theory-driven approach for assessing and improving program planning, implementation and effectiveness. You will explore the conceptual framework of program theory and its structure, which facilitates precise communication between evaluators and stakeholders regarding evaluation needs and approaches to addressing those needs. From there, the workshop moves to how program theory and theory-driven evaluation are useful in the assessment and improvement of a program at each stage throughout its life-cycle.

    Mini-lectures, group exercises and case studies will illustrate the use of program theory and theory-driven evaluation for program planning, initial implementation, mature implementation and outcomes. In the outcome stages, you will explore the differences among outcome monitoring, efficacy evaluation and effectiveness evaluation.  

    You will learn:

    • How to apply the conceptual framework of program theory and theory-driven evaluations,

    • How to conduct theory-driven process and outcome evaluations,

    • How to conduct integrative process/outcome evaluations,

    • How to apply program theory to improve program planning processes.

    Huey Chen, professor at the University of Alabama at Birmingham, is the author of Theory-Driven Evaluations (SAGE), the classic text for understanding program theory and theory-driven evaluation, and most recently of Practical Program Evaluation (2005). He is an internationally known workshop facilitator on the subject.

    Session 27: Theory-Driven Evaluation
    Prerequisites: Basic background in evaluation.
    Scheduled: Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Intermediate


    28. Introduction to IRT/Rasch Measurement in Evaluation

    Program evaluation has great need for the development of valid measures, e.g. of the quantity and quality of services and of the outcomes of those services. Many evaluators are frustrated when existing instruments are not well tailored to the task and do not produce the needed sensitive, accurate, valid findings. 

    Through an extensive presentation, followed by discussion and hands-on work with data sets and computer-generated output, this workshop will explore Rasch Measurement as a means to effectively measure program services. It provides an overview of “modern” measurement as practiced using item response theory with a focus on Rasch measurement. Rasch analysis provides the social sciences with the kind of measurement that characterizes measurement in the natural sciences.

    You will learn:

    • Differences between Classical Test Theory and Rasch Measurement,

    • Why, when, and how to apply Rasch measurement,

    • Why and how Rasch seeks to create linear, interval measures,

    • Interpretation of Rasch/Winsteps output.
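
    For reference, the dichotomous Rasch model expresses the probability of an affirmative (or correct) response as a function of person ability and item difficulty on a shared logit scale (a standard formulation, included here only for orientation):

        P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}

    where \theta_n is person n's location (ability or trait level) and b_i is item i's difficulty; software such as Winsteps estimates these parameters and reports fit statistics for persons and items.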

    Kendon Conrad and Barth Riley are from the University of Illinois at Chicago. They bring extensive experience in both teaching about, and applying, Rasch measurement to evaluation. Their workshops have won high praise from participants for their down-to-earth, clear, applied presentation with discussion.

    Session 28: Rasch Measurement
    Prerequisites: Basic background in evaluation.
    Scheduled: Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Intermediate


    29. Applications of Multiple Regression in Evaluation: Mediation, Moderation, and More

    Multiple regression is a powerful tool that has wide applications in evaluation and applied research. Regression analyses are used to describe relationships, test theories, make predictions with data from experimental or observational studies, and model linear or nonlinear relationships. Issues we’ll explore include selecting specific regression models that are appropriate to your data and research questions, preparing data for analysis, running the analyses, interpreting the results, and presenting findings to a nontechnical audience.

    The facilitator will demonstrate applications from start to finish with SPSS and Excel, and then you will tackle multiple real-world case examples in small groups. Detailed handouts include explanations and examples that can be used at home to guide similar applications.

    You will learn:

    • Concepts important for understanding regression,

    • Procedures for conducting computer analysis, including SPSS code,

    • How to conduct mediation and moderation analyses,

    • How to interpret SPSS regression output,

    • How to present findings in useful ways.
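
    As a hedged sketch of the mediation piece (X -> M -> Y), the Python code below estimates the a, b, and c' paths with ordinary least squares via statsmodels; the variable names and simulated data are hypothetical, and the workshop's own SPSS and Excel demonstrations may proceed differently.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 200
        x = rng.normal(size=n)                       # predictor
        m = 0.5 * x + rng.normal(size=n)             # mediator
        y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome

        # Path a: mediator regressed on the predictor
        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
        # Paths b and c': outcome regressed on mediator and predictor together
        fit_y = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()
        b, c_prime = fit_y.params[1], fit_y.params[2]

        print("indirect effect (a*b):", a * b, "direct effect (c'):", c_prime)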

    Dale Berger is Professor of Psychology at Claremont Graduate University where he teaches a range of statistics and methods courses for graduate students in psychology and evaluation. He was President of the Western Psychological Association and recipient of the WPA Outstanding Teaching Award.

    Session 29: Multiple Regression
    Prerequisites: Basic understanding of Statistics
    Scheduled:
    Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Intermediate


    30. Public Health Evaluation: Getting to the Right Questions

    In 1999, the Centers for Disease Control and Prevention published the Evaluation Framework to provide public health professionals with a common evaluation frame of reference. Public health practitioners have successfully used the framework in a variety of settings and contexts. Beyond the framework, however, there are nuances and complexities to planning and implementing evaluations in public health settings.

    Employing discussions, real examples, and activities, this workshop will focus on topics for evaluators to consider and strategies for approaching public health evaluations to get to the right questions to be addressed in a variety of evaluation contexts. This session will go beyond the CDC Evaluation Framework to examine confounders and complexities of public health evaluation.

    You will learn:

    • Unique aspects of evaluability assessment in public health settings,

    • Elements to consider to get to the right questions for the evaluation, including politics, accountability, ongoing evaluations, and rotating personnel,

    • Strategies to work with stakeholders to identify what types of evidence will have credibility,

    • Strategies to develop indicators for chosen evaluation questions.

    Mary V Davis is Director of Evaluation Services at the North Carolina Institute for Public Health and Adjunct Faculty in the University of North Carolina School of Public Health where she teaches several advanced evaluation courses. Diane Dunet is a Senior Program Evaluator in the Division of Nutrition and Physical Activity where she conducts and supervises public health evaluations.

    Session 30: Public Health Eval
    Prerequisites: Basic understanding of Evaluation
    Scheduled:
    Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Intermediate


    31. Managing Experimental Designs in Evaluation

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Evaluators and administrators are increasingly expected to conduct studies using what are called scientifically-based methods. This workshop will provide you with the knowledge and ability to design and implement both random assignment experiments and alternative rigorous designs that can satisfy demands for scientifically-based methods.

    With an emphasis on hands-on exercises and individual consultation within the group setting, this workshop will provide you with concrete skills in improving your current or anticipated work with experimental design studies. 

    You will learn:

    • How to conduct evaluability assessments of experimental and quasi-experimental designs,

    • How to write or evaluate proposals to satisfy demands for scientifically-based research methods,

    • How to modify experimental designs to respond to specific contexts,

    • How to conduct quantitative analyses to strengthen the validity of conclusions and reveal hidden program impacts.

    George Julnes, Associate Professor of Psychology at Utah State University, has been contributing to evaluation theory for over 15 years and has been working with federal agencies, including the Social Security Administration, on the design and implementation of randomized field trials. Fred Newman is a Professor at Florida International University with over thirty years of experience in performing front line program evaluation studies.

    Session 31: Experimental Design
    Prerequisites: Understanding of threats to validity and the research designs used to minimize them, practical experience with eval helpful.
    Scheduled:
    Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Intermediate


    32. Visual Presentations of Quantitative Data 

    Presenting data through graphics, rather than numbers, can be a powerful tool for understanding data and disseminating findings. Unfortunately, graphics are also commonly used in ways that confuse audiences, complicate research, and obscure findings.

    This workshop will enable participants to capitalize on the benefits of visual representation by providing them with tools for displaying data graphically for presentations, evaluation reports, publications and continued dialogue with program funders, personnel and recipients. This workshop will teach you about cognitive processing and heighten your awareness of the common errors made when visually displaying multivariate relationships, making you a more critical consumer of quantitative information.
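
    As a hedged illustration of these ideas rather than anything taken from the workshop, the short Python/matplotlib sketch below applies two common low-cognitive-load tactics: removing non-essential chart elements and labeling series directly instead of relying on a legend. The data are invented.

```python
# Minimal sketch (not workshop material): a low-clutter line chart.
# Invented data: two program sites tracked over four quarters.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
site_a = [52, 58, 63, 70]
site_b = [48, 50, 49, 55]
x = range(len(quarters))

fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(x, site_a, color="steelblue")
ax.plot(x, site_b, color="darkorange")

# Direct labels at the line ends replace a legend (less back-and-forth for the reader).
ax.text(x[-1] + 0.1, site_a[-1], "Site A", va="center", color="steelblue")
ax.text(x[-1] + 0.1, site_b[-1], "Site B", va="center", color="darkorange")

# Remove chart junk: top/right spines add visual load without adding information.
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.set_xticks(list(x))
ax.set_xticklabels(quarters)
ax.set_ylabel("Participants served")
ax.set_title("Quarterly participation by site")
fig.tight_layout()
plt.show()
```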

    You will learn:

    • How people process visual information,

    • Ways to capitalize on cognitive processing and minimize cognitive load,

    • Quick and easy methods for presenting data,

    • Innovative methods for graphing data,

    • Ways to decide if quantitative information should be presented graphically/visually versus in words/text.

    Stephanie Reich is an assistant professor in the Department of Education at the University of California, Irvine, where her research focuses on cognitive development and how people process information. David Streiner is a Professor of Psychiatry at the University of Toronto and has authored four widely used books in statistics, epidemiology and scale development.

    Session 32: Visual Presentations
    Scheduled:
    Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Beginner, no prerequisites


    33. The "State of the Art" in Evaluation

    Together we will explore four 'hot topics' in the field today: (1) The Geography of the Discipline -- what are its components, and what differentiates it, as 'a brand,' from other disciplines or from a mere collection of bits and pieces; (2) Current Models and Theories -- their variety, strengths and weaknesses, best uses, and possible future directions; (3) Methodologies and Uses -- what have we borrowed and extended or improved, which if any are new and distinctive to evaluation, and what is still needed; and (4) The Rest of the Story -- the political, psychological, economic, and educational dimensions, and more. Finally, what should we treat as the priorities for future development?

    This will be a participatory workshop aimed at developing everyone's individual perspective on the present situation in the discipline of evaluation. The facilitator will 'open the bidding' on each topic with a summary and then chair a discussion including all, to take the topic further. We'll engage in discussion about the current state of the art in evaluation, the directions the discipline is taking, and what we can do to set and realize priorities for future development.

    You will learn:

    • What evaluators bring to the table that our research counterparts do not,

    • How one's identity as an evaluator provides a unique stance and framing for client and stakeholder relationships,

    • The state of the art in evaluation - what innovations lie out there on the cutting edge,

    • Where the discipline is headed and how we might contribute to a valued and valuable future for evaluation.

    Michael Scriven is among the most well-known professionals in the field today with 25 years of work on the philosophy of science. He has over 90 publications in the field of evaluation. Michael is excited to offer this brand new workshop at Evaluation 2007.

    Session 33: State of the Art
    Scheduled:
    Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Beginner, no prerequisites


    34. Multilevel Models in Program Evaluation

    Multilevel models (also called hierarchical linear models) open the door to understanding the inter-relationships among nested structures (students in classrooms in schools in districts for instance), or the ways evaluands change across time (perhaps longitudinal examinations of health interventions). This workshop will demystify multilevel models and present them at an accessible level, stressing their practical applications in evaluation.

    Through lectures supplemented with practical examples and discussion of crucial concepts, the workshop will address four key questions: When are multilevel models necessary? How can they be implemented using standard software? How does one interpret multilevel results? What are recent developments in this arena?
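
    By way of a hedged illustration only (the workshop does not necessarily use this software), the sketch below fits a simple two-level random-intercept model in Python with statsmodels, using invented data on students nested within schools.

```python
# Minimal sketch (not workshop material): a two-level random-intercept model.
# Invented data: student test scores nested within schools; 'hours' is a
# student-level predictor. Requires numpy, pandas, and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_schools, n_students = 20, 30
rows = []
for s in range(n_schools):
    school_effect = rng.normal(0, 4)      # school-level deviation from the grand mean
    for _ in range(n_students):
        hours = rng.uniform(0, 10)
        score = 60 + school_effect + 2.0 * hours + rng.normal(0, 8)
        rows.append({"school": s, "hours": hours, "score": score})
df = pd.DataFrame(rows)

# Random intercept for each school; fixed effect of study hours.
model = smf.mixedlm("score ~ hours", data=df, groups=df["school"]).fit()
print(model.summary())
# The 'Group Var' row estimates between-school variance; the 'hours'
# coefficient is the fixed (within-school) effect of study time.
```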

    You will learn:

    • The basics of multilevel modeling,

    • When to use multilevel models in your evaluation practice,

    • How to implement models using widely available software,

    • Practical applications of multilevel models in education, health, and international development,

    • The importance of considering multilevel structures in understanding program theory. 

    Sanjeev Sridharan is head of evaluation programs and a senior research fellow at the University of Edinburgh as well as a trainer for SPSS and an Associate Editor for the American Journal of Evaluation. He has taught and presented on statistical topics to a wide variety of audiences including university students, program practitioners, policy makers, and faculty.

    Session 34: Multilevel Models
    Prerequisites: Basic understanding of Statistics
    Scheduled:
    Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Intermediate


    35. Evaluation Practice: A Collaborative Approach

    Collaborative evaluation is an approach that actively engages program stakeholders in the evaluation process. When stakeholders collaborate with evaluators, stakeholder and evaluator understanding increases and the utility of the evaluation is often enhanced. Strategies to promote this type of evaluation include evaluation conferences, member checking, joint instrument development, analysis and reporting.  

    Employing discussion, hands-on activities, and role-playing, this workshop focuses on these strategies and techniques for conducting successful collaborative evaluations, including ways to avoid common collaborative evaluation pitfalls.

    You will learn:

    • A collaborative approach to evaluation,

    • Levels of collaborative evaluation and when and how to employ them,

    • Techniques used in collaborative evaluation,

    • Collaborative evaluation design and data-collection strategies.

    Rita O'Sullivan of the University of North Carolina and John O'Sullivan of North Carolina A&T State University have offered this well-received session for the past six years at AEA. The presenters have used collaborative evaluation techniques in a variety of program settings, including education, extension, family support, health, and non-profit organizations.

    Session 35: Collaborative Evaluation
    Prerequisites: Basic Eval Skills
    Scheduled:
    Wednesday, November 7, 8:00 AM to 3:00 PM
    Level: Intermediate


    Half Day Workshops, Wednesday, November 7, 8 AM to 11 AM


    36. Conducting Online Surveys

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    The uses for surveys in evaluation are endless, and online surveys are a relatively new way to conduct survey research. Online surveys provide promising opportunities for addressing many of the prohibitive issues of conducting paper surveys or phone surveys, including reaching target audiences within resource constraints, yet they are not without their limitations and challenges. This presentation will better equip evaluators to create, design, and distribute effective online surveys.

    Through mini-lectures, discussion, and demonstration, we will explore how to select an appropriate online survey host, format survey questions for online administration, create an online survey, and download data for analysis and reporting. Please note that this short course focuses on providing examples and an overview rather than teaching how to use a specific application. As such, there is no computer use by individual participants.

    You will learn:

    • Differences between online and paper surveys, and the pros and cons of using an online survey,

    • General design principles and ways to make online surveys visually appealing,

    • Ways to increase response rate, including approaches for follow-up emails and options for incentives,

    • Options and advantages of commercial software programs and web-based survey hosts.

    Lois Ritter and Valerie Sue are faculty members at California State University and co-authors of Conducting Online Surveys (SAGE, 2007), a comprehensive guide to the creation, implementation, and analysis of email and web-based surveys. They are developing a New Directions for Evaluation volume focusing on online surveys due out this fall.

    Session 36: Conducting Online Surveys
    Scheduled:
    Wednesday, November 7, 8:00 AM to 11:00 AM
    Level: Beginner, no prerequisites


    37. Advanced Performance Measurement

    Performance measurement has been a popular analytic and management tool for the past several years. As a result, many evaluators have considerable experience in this technique, and they recognize both the enormous potential and the many challenges involved. This advanced workshop is for experienced practitioners who want to wrestle with those difficult issues and develop options for tackling each one.

    In order to respect and utilize the rich experience of the participants, the workshop will operate more as a seminar than a training session. In advance of the workshop, you will be asked to email the facilitator 3-4 difficult issues you wish to have addressed, and these issues will collectively generate the agenda. The facilitator will certainly lead the seminar and contribute suggestions from his experience, but participants are also expected to share their own ideas and expertise.

    You will learn:

    • The important distinction between performance measurement and performance management,

    • Specific tips to help at various steps of the performance measurement process,

    • Ideas for analyzing performance measurement data,

    • Ways to encourage the all-important use of the performance measurement data,

    • A range of content generated from our collaboratively-developed agenda.

    Michael (Mike) Hendricks has helped a variety of public and non-profit organizations design and implement performance measurement systems in his 23 years as a consultant. He has written articles on performance measurement, co-authored a manual on analyzing outcome data, and is an experienced trainer and facilitator.

    Session 37: Advanced Performance Measurement
    Prerequisites:
    Solid understanding of the principles and procedures of performance measurement, several years of real-world experience implementing a performance measurement system.
    Scheduled:
    Wednesday, November 7, 8:00 AM to 11:00 AM
    Level: Advanced


    38. Identifying, Measuring and Interpreting Racism in Evaluation Efforts

    Historically, racism has been a contributing factor to the racial disparities that persist across contemporary society. This workshop will help you to identify, frame, and measure racism's presence. The workshop includes strategies for removing racism from various evaluation processes, as well as ways to identify types of racism that may be influencing the contexts in which racial-disparities and other societal programs operate.

    Through mini-lectures, discussion, small group exercises, and handouts, learners will practice identifying racial biases that may be embedded in certain research literature, recognizing the influence of racism in the contexts of racial-disparities programs, and eliminating inadvertent racism that may become embedded in cross-cultural research.

    You will learn:

    • A variety of cross-disciplinary and international definitions of racism,

    • Strategies for removing/averting racism's presence in evaluation processes,

    • Common places where racism may hide and influence the context of programs and problems,

    • How to collect five broad types of data concerning racism as a variable,

    • Strategies for collecting data on eight of the several dozen types of racism described in contemporary cross-disciplinary English-language research literature.

    Pauline Brooks is an evaluator and researcher by formal training and practice. She has had years of university-level teaching and evaluation experience in both public and private education, particularly in the fields of education, psychology, social work and public health. For over 20 years, she has worked in culturally diverse settings focusing on issues pertaining to underserved populations, class, race, gender, and culture.

    Session 38: Racism in Evaluation
    Prerequisites: Previous thinking, work, or study in the area of discrimination's influence on programs and processes and an openness to further dialogue and exploration of racism.

    Scheduled:
    Wednesday, November 7, 8:00 AM to 11:00 AM
    Level: Intermediate


    39. Evaluating Large Scale Initiatives Using Systems Thinking

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Are you lost in the systems jargon, multiple systems concepts, and their application to evaluation situations? Are you uncertain how to evaluate large scale initiatives? Join us to explore ways to select and apply evaluation methods based on the dynamics of the complex systems intertwined within large scale initiatives. Learn to apply system dynamics modeling and tools related to complex adaptive systems concepts.

    Through lecture, small group exercises involving case studies, and a Q&A session, this workshop will provide a framework for understanding the nature of the systems involved in large scale multi-site/multi-project initiatives and the kinds of evaluative questions that arise out of an understanding of system dynamics.
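
    As one hedged, invented example of what system dynamics modeling can look like in code (not part of the workshop materials), the Python sketch below steps a toy stock-and-flow model of program awareness through time; the structure and parameter values are made up for illustration.

```python
# Minimal sketch (not workshop material): a toy stock-and-flow system dynamics model.
# Invented structure: people move from 'unaware' to 'aware' of a program through
# word of mouth, and some aware people drop back out over time.
aware, unaware = 50.0, 950.0      # initial stocks
contact_rate = 0.00030            # chance per step that a contact spreads awareness
dropout_rate = 0.02               # fraction of aware people who disengage each step
dt, steps = 1.0, 52               # weekly steps for one year

history = []
for week in range(steps):
    new_aware = contact_rate * aware * unaware   # inflow driven by interaction of the stocks
    dropouts = dropout_rate * aware              # outflow proportional to the 'aware' stock
    aware += (new_aware - dropouts) * dt
    unaware -= new_aware * dt
    history.append((week, round(aware), round(unaware)))

for week, a, u in history[::13]:                 # print quarterly snapshots
    print(f"week {week:2d}: aware={a:4d} unaware={u:4d}")
```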

    You will learn:

    • How to consider three different system dynamics—organized, self-organizing, and unorganized—within large scale initiatives,

    • How to align different evaluation designs with different types of system dynamics,

    • How to design and apply nonlinear systems dynamics modeling to organized system dynamics,

    • How to design and apply evaluation methods to self-organizing system dynamics,

    • How this systems orientation relates to commonly used evaluation methods (e.g., outcomes, process, participatory, and empowerment evaluation methods).

    Beverly Parsons is the executive director of InSites in Colorado and has over 20 years of experience in evaluating education and social service initiatives. She focuses on the evaluation and planning of systemic change. Teresa Behrens is the director of evaluation at the W.K. Kellogg Foundation. She recently co-edited a special issue of the American Journal of Community Psychology on systems change.

    Session 39: Using Systems Thinking
    Prerequisites:
    Knowledge of or experience in conducting or planning cluster and/or multi-site evaluations.
    Knowledge of multiple theories of evaluation

    Scheduled:
    Wednesday, November 7, 8:00 AM to 11:00 AM
    Level: Intermediate


    40. Using Stories in Evaluation

    Stories are an effective means of communicating the ways in which individuals are influenced by educational, health, and human service agencies and programs. Unfortunately, the story has been undervalued and largely ignored as a research and reporting procedure. Stories are sometimes regarded with suspicion because of the haphazard manner in which they are captured or the cavalier promise of what the story depicts.

    Through short lecture, discussion, demonstration, and hands-on activities, this workshop explores effective strategies for discovering, collecting, analyzing and reporting stories that illustrate program processes, benefits, strengths or weaknesses.

    You will learn:

    • How stories can reflect disciplined inquiry,

    • How to capture, save, and analyze stories in evaluation contexts,

    • How stories for evaluation purposes are often different from other types of stories.

    Richard Krueger is a senior fellow at the University of Minnesota and has been actively listening for evaluation stories for over a decade. He has offered well-received professional development workshops at AEA and for non-profit and government audiences for over 15 years. Richard is a past president of AEA.

    Session 40: Using Stories
    Scheduled:
    Wednesday, November 7, 8:00 AM to 11:00 AM
    Level: Beginner, no prerequisites


    Half Day Workshops, Wednesday, November 7, 12 PM to 3 PM


    41. Level Best: How to Help Small and Grassroots Organizations Tackle Evaluation

    Small and grassroots organizations usually have very different needs and funding sources than larger organizations. This workshop is based on the presenters' new book, Level Best: How Small and Grassroots Organizations Can Tackle Evaluation and Talk Results. It introduces the concept of "rolling evaluation" and emphasizes that evaluation at its best is about learning rather than judging, about improving rather than proving, and that overall, evaluation does not need to be costly, overwrought, or burdensome.

    Through lecture, discussion and handouts, we will address the myths and misperceptions surrounding evaluation of nonprofit programs, and through the sharing of specific tools and strategies, will teach you how to support nonprofits in their evaluation efforts.

     You will learn:

    • An evaluation process that is scaled to nonprofit realities and capacity,

    • How to guide a grassroots agency through evaluation,

    • How to respond to funder concerns,

    • How to integrate evaluation into the ongoing work of even the smallest agency.

    Marianne Philbin is a consultant with extensive experience working with foundations and nonprofit organizations on issues related to evaluation and planning, capacity building and organizational development. Marcia Festen is the Executive Director of ArtsWork Fund and co-author with Ms. Philbin of Level Best: How Grassroots Organizations Can Tackle Evaluation and Talk Results (Wiley, October 2006).

    Session 41: Level Best
    Scheduled: Wednesday, November 7, 12:00 PM to 3:00 PM
    Level: Beginner, no prerequisites


    42. Empowerment Evaluation

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Empowerment Evaluation builds program capacity and fosters program improvement. It teaches people to help themselves by learning how to evaluate their own programs. The basic steps of empowerment evaluation include: 1) establishing a mission or unifying purpose for a group or program; 2) taking stock - creating a baseline to measure future growth and improvement; and 3) planning for the future - establishing goals and strategies to achieve goals, as well as credible evidence to monitor change. The role of the evaluator is that of coach or facilitator in an empowerment evaluation, since the group is in charge of the evaluation itself.

    Employing lecture, activities, demonstration and case examples ranging from townships in South Africa to a $15 million Hewlett-Packard Digital Village project, the workshop will introduce you to the steps of empowerment evaluation and tools to facilitate the approach.

    You will learn:

    • How to plan and conduct an empowerment evaluation,

    • Ways to employ new technologies as part of empowerment evaluation, including use of digital photography, QuickTime video, online surveys, and web-based telephone/videoconferencing,

    • The dynamics of process use, theories of action, and theories of use.

    David Fetterman hails from Stanford University and is the editor of (and a contributor to) the recently published Empowerment Evaluation Principles in Practice (Guilford). He chairs the AEA Collaborative, Participatory and Empowerment Evaluation Topical Interest Group and is a highly experienced and sought-after facilitator.

    Session 42: Empowerment Evaluation
    Scheduled: Wednesday, November 7, 12:00 PM to 3:00 PM
    Level: Beginner, no prerequisites


    43. Strategies to Respond to the Top 10 Problems, Challenges and Headaches with Focus Group Interviewing

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Focus groups don't always work as expected. Find out what leading focus group practitioners say are their top 10 problems, challenges and headaches AND how they solve those concerns. Also, find out the concerns among AEA members. AEA workshop participants will be invited to submit their top concerns as well. So, make your list, attend the workshop and discover helpful solution strategies.

    Through presentations, question-and-answer, and exploration of responses to attendee challenges, this workshop will help you move your focus group facilitation skills to the next level.

    You will learn:

    • Challenges with focus groups as seen by professional focus group moderators,

    • Challenges with focus groups as seen by AEA members,

    • Solution strategies to the top 10 problems, challenges, and headaches.

    Richard Krueger is a senior fellow at the University of Minnesota. In 30+ years of practice he has conducted thousands of focus group interviews and he still gets excited about listening to people. He is the author of 6 books on focus group interviewing and is a past president of AEA.

    Session 43: Focus Group Challenges
    Prerequisites: Experience in conducting focus groups
    Scheduled:
    Wednesday, November 7, 12:00 PM to 3:00 PM
    Level: Intermediate


    44. Handling Data: From Logic Model to Final Report

    This workshop is full. There is no waiting list available for this workshop. Please choose another.

    Collect, analyze and present data from complex evaluation studies in ways that are feasible for the evaluator and meaningful to the client. Explore lessons learned over more than twenty years of evaluation consulting on how to ask the right questions, collect the right data, and analyze and present findings in simple yet comprehensive ways.

    Actual data samples will be presented along with examples of analysis techniques. You will have an opportunity to work in small groups with sample data to explore various analysis techniques. Throughout the workshop, the presenter will respond to individual questions and facilitate group discussion on data handling topics. At the end of the workshop, you will take away fresh ideas to tackle your data handling challenges.
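
    Purely as a hedged illustration, and not the presenter's own tools, the short Python/pandas sketch below shows one way a data collection matrix linking evaluation questions to data sources might be tabulated so that triangulation gaps stand out; the questions and sources are hypothetical.

```python
# Minimal sketch (not workshop material): a toy data collection matrix in pandas.
# Hypothetical evaluation questions mapped to the data sources that address them,
# so questions with only one source (weak triangulation) are easy to spot.
import pandas as pd

matrix = pd.DataFrame(
    [
        {"question": "Was the program delivered as planned?", "source": "staff interviews"},
        {"question": "Was the program delivered as planned?", "source": "activity logs"},
        {"question": "Did participant knowledge improve?", "source": "pre/post survey"},
        {"question": "Did participant knowledge improve?", "source": "focus groups"},
        {"question": "Were partners satisfied?", "source": "partner survey"},
    ]
)

# Count sources per question; fewer than two suggests weak triangulation.
coverage = matrix.groupby("question")["source"].agg(["count", list])
print(coverage)
```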

    You will learn:

    • To develop and link a program theory, a holistic logic model, a data collection matrix, and evaluation tools,

    • To ask the right questions and get the answers you need,

    • To develop a data summary that triangulates the information collected from different sources,

    • To extract and map themes, prepare an evidence table, and report findings in a comprehensive but user-friendly way.

    Gail Barrington started Barrington Research Group more than 20 years ago and has been conducting complex evaluations ever since. A top rated facilitator, she has taught workshops throughout the US and Canada for many years.

    Session 44: Handling Data
    Prerequisites: Experience collecting data in evaluation projects - No in-depth statistical knowledge required
    Scheduled:
    Wednesday, November 7, 12:00 PM to 3:00 PM
    Level: Intermediate


    45. Practical Applications of Propensity Scores

    Quasi-experiments are excellent alternatives to true experiments when random assignment is not feasible. Unfortunately, causal conclusions cannot easily be made from results that are potentially biased. Some advances in statistics that attempt to reduce selection bias in quasi-experiments use propensity scores, the predicted probability that units will be in a particular treatment group.

    Using real data sets as examples, demonstrations of computation of propensity scores in SAS and SPSS, and hands-on analysis of output, you will learn when and how to use propensity scores to adjust for selection bias.
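
    The demonstrations in the workshop use SAS and SPSS; purely as a hedged illustration of the underlying idea, the Python sketch below estimates propensity scores with a logistic regression and applies inverse-probability weighting to invented data. All variable names, values, and the 'true' treatment effect are hypothetical.

```python
# Minimal sketch (not workshop material): propensity scores via logistic regression.
# Invented quasi-experimental data: 'treated' is more likely for older participants,
# so a raw comparison of outcomes would be biased by age.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(20, 60, n)
p_treat = 1 / (1 + np.exp(-(-4 + 0.1 * age)))                  # selection depends on age
treated = rng.binomial(1, p_treat)
outcome = 5 + 0.2 * age + 2.0 * treated + rng.normal(0, 3, n)  # true effect = 2.0
df = pd.DataFrame({"age": age, "treated": treated, "outcome": outcome})

# 1. Propensity score: predicted probability of treatment given covariates.
ps_model = smf.logit("treated ~ age", data=df).fit(disp=False)
df["pscore"] = ps_model.predict(df)

# 2. One common adjustment: inverse-probability-of-treatment weights.
df["weight"] = np.where(df["treated"] == 1, 1 / df["pscore"], 1 / (1 - df["pscore"]))

naive = df.loc[df.treated == 1, "outcome"].mean() - df.loc[df.treated == 0, "outcome"].mean()
weighted = (
    np.average(df.loc[df.treated == 1, "outcome"], weights=df.loc[df.treated == 1, "weight"])
    - np.average(df.loc[df.treated == 0, "outcome"], weights=df.loc[df.treated == 0, "weight"])
)
print(f"naive difference:    {naive:.2f}")     # biased upward by the age imbalance
print(f"weighted difference: {weighted:.2f}")  # should be closer to the true effect of 2.0
```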

    You will learn:

    • A basic methodology for computing propensity scores,

    • The conditions under which propensity scores should, and should not, be used,

    • How propensity scores can be used to make statistical adjustments using matching, stratifying, weighting and covariate adjustment,

    • Known limitations and problems when using propensity score adjustments,

    • How to improve propensity score computations.

    M H Clark received her PhD in Experimental Psychology from the University of Memphis with a specialization in research design and statistics. She currently is an assistant professor at Southern Illinois University where she teaches courses focusing on advanced research methodology, statistics, and program evaluation.

    Session 45: Propensity Scores
    Prerequisites: Basic Statistics through regression
    Scheduled:
    Wednesday, November 7, 12:00 PM to 3:00 PM
    Level: Intermediate


    Half Day Workshops, Sunday, November 11, 9 AM to 12 PM


    46. Getting to Outcomes in Public Health

    Getting To Outcomes: Methods and Tools for Planning, Evaluation, and Accountability (GTO) was developed as an approach to help practitioners plan, implement, and evaluate their programs to achieve results. GTO is based on answering 10 accountability questions. By answering the questions well, program developers increase their probability of achieving outcomes and demonstrate their accountability to stakeholders.
    Addressing the 10 questions involves a comprehensive approach to results-based accountability that includes evaluation and much more.

    Through lecture, demonstration, and hands-on activities, this workshop will explore the basics of the GTO approach and provide resources for further investigation and action. Research funded by a grant from CDC has shown that use of the GTO model can improve individual capacity and program performance to facilitate the planning, implementation, and evaluation of prevention programs. GTO has been customized for several areas of public health including: substance abuse prevention, underage drinking prevention, positive youth development, teen pregnancy prevention, and emergency preparedness.

    You will learn:

    • A comprehensive approach to results-based accountability

    • Ten questions to ask to improve program planning, accountability, and results

    • How to select evidence-based models and best practices

    • Strategies for continuous program improvement

    Abraham Wandersman is a Professor of Psychology at the University of South Carolina-Columbia. He is a co-author of Prevention Plus III and a co-editor of Empowerment Evaluation: Principles in Practice. Catherine Lesesne is a Behavioral Scientist in the Division of Reproductive Health at the Centers for Disease Control and Prevention. She is the lead author of the newly developed GTO manual, Promoting Science-based Approaches to Teen Pregnancy Prevention Using Getting to Outcomes.

    Session 46: Getting to Outcomes
    Scheduled: Sunday, November 11, 9:00 AM to 12:00 PM
    Level: Beginner, no prerequisites


    47. Conflict Resolution Skills for Evaluators

    Unacknowledged and unresolved conflict can challenge even the most skilled evaluators. Conflict between evaluators and clients, and among stakeholders, creates barriers to successful completion of the evaluation project. This workshop will delve into ways to improve listening, problem solving, communication and facilitation skills and introduce a streamlined process of conflict resolution that may be used with clients and stakeholders.

    Through a hands-on, experiential approach using real-life examples from program evaluation, you will gain skill in applying conflict resolution theory to the conflict situations that arise in evaluation practice.

    You will learn:

    • The nature of conflict in program evaluation and possible positive outcomes,

    • How to incorporate the five styles of conflict-resolution as part of reflective practice,

    • Approaches to resolving conflict among stakeholders with diverse backgrounds and experiences,

    • Techniques for responding to anger and high emotion in conflict situations,

    • To problem solve effectively, including win-win guidelines, clarifying, summarizing, and reframing.

    Jeanne Zimmer has served as Executive Director of the Dispute Resolution Center since 2001 and is completing a doctorate in evaluation studies with a minor in conflict management at the University of Minnesota. For over a decade, she has been a very well-received professional trainer in conflict resolution and communications skills.

    Session 47: Conflict Resolution
    Scheduled:
    Sunday, November 11, 9:00 AM to 12:00 PM
    Level: Beginner, no prerequisites


    48. Advanced Focus Group Moderator Training

    The literature is rich in textbooks and case studies on many aspects of focus groups, including design, implementation and analyses. Missing, however, are guidelines and discussions on how to moderate a focus group.

    In this experiential learning environment, you will find out how to maximize time, build rapport, create energy, and apply communication tools in a focus group to maintain the flow of discussion among the participants and elicit responses from more than one person. Using practical exercises and examples, including role play and constructive peer-critique as a focus group leader or respondent, you will explore effective focus group moderation, including ways to increase and limit responses among individuals and the group as a whole.

    You will learn:

    • Fifteen practical strategies to create and maintain focus group discussion,

    • Approaches to moderating a focus group while being sensitive to cross-cultural issues,

    • How to stimulate discussion in community forums, committee meetings, and social settings.

    Nancy-Ellen Kiernan has facilitated over 150 workshops on evaluation methodology and moderated focus groups in 50+ studies with groups ranging from Amish dairy farmers in barns to at-risk teens in youth centers, to university faculty in classrooms.

    Session 48: Moderator Training
    Prerequisites: Having moderated 2 focus groups and written focus group questions and probes
    Scheduled:
    Sunday, November 11, 9:00 AM to 12:00 PM
    Level: Intermediate


    49. Advanced Applications of Program Theory

    While simple logic models are an adequate way to gain clarity and initial understanding about a program, sound program theory can enhance understanding of the underlying logic of the program by providing a disciplined way to state and test assumptions about how program activities are expected to lead to program outcomes. 

    Lecture, exercises, discussion, and peer-critique will help you to develop and use program theory as a basis for decisions about measurement and evaluation methods, to disentangle the success or failure of a program from the validity of its conceptual model, and to facilitate the participation and engagement of diverse stakeholder groups. 

    You will learn:

    • To employ program theory to understand the logic of a program,

    • How program theory can improve evaluation accuracy and use,

    • To use program theory as part of participatory evaluation practice.  

    Stewart Donaldson is Dean of the School of Behavioral and Organizational Sciences at Claremont Graduate University. He has published widely on the topic of applying program theory, developed one of the largest university-based evaluation training programs, and has conducted theory-driven evaluations for more than 100 organizations during the past decade.

    Session 49: Adv Program Theory
    Prerequisites: Experience or Training in Logic Models
    Scheduled:
    Sunday, November 11, 9:00 AM to 12:00 PM
    Level: Intermediate


    50. Building Evaluation Capacity Within Community Organizations

    Are you working with community groups (coalitions, nonprofits, social service agencies, local health departments, volunteers, school boards) that are trying to evaluate the outcomes of their work to meet a funding requirement, an organizational expectation, or to enhance their own program performance? 

    Join us in this highly interactive workshop where you will practice and reflect on a variety of activities and adult learning techniques associated with three components of evaluation planning: focus, data collection, and communicating. Try these activities out, assess their appropriateness for your own situation, and expand your toolbox. We will draw from a compendium of practical tools and strategies that we have developed over the past years and have found useful in our own work. We encourage you to bring your own 'best practices' to share as we work towards building the evaluation capacity of communities.

    You will learn:

    • Activities to use in building essential evaluation competence within community-based organizations;

    • Techniques that facilitate learning including use of peripherals, energizers, role play, reflection, games;

    • What to consider in choosing among options to better suit needs, requests and realities.

    Ellen Taylor-Powell is widely recognized for her work in evaluation capacity building. Her nearly 20 years in Extension have continuously focused on evaluation training and capacity building, with attention to individual, team, and organizational learning. She will lead a team of four facilitators with extensive experience both in teaching adults and in working with community groups and agencies.

    Session 50: Building Evaluation Capacity
    Prerequisites: Involvement in evaluation capacity building at the community level
    Scheduled:
    Sunday, November 11, 9:00 AM to 12:00 PM
    Level: Intermediate

    American Evaluation Association │ 16 Sconticut Neck Rd #290 │ Fairhaven MA 02719 │ 1-888-232-2275 │ 1-508-748-3326 │ info@eval.org