
CRJS 4466 PROGRAM & POLICY EVALUATION

LECTURE #1 & 2

1. Course outlines?

2. Textbook?

3. Questions?

4. Types of research:

1. Descriptive (e.g. characteristics, needs)
2. Evaluative (are needs being met?)
3. Explanatory (why or why not are needs being met?)

• evaluation is practical and applied, designed to answer specific questions about which approach works best to address a need

• evaluation is also a ‘moral’ enterprise – the intent is to make things ‘better’ for individuals, groups, societies

• socially responsible evaluation – why?

5. Program: an organized collection of activities designed to achieve one or several related objectives

an organized intervention to address a need

program activities are those practices that are expected to produce results which meet stated objectives

‘good’ programs are characterized by:
1. adequate and qualified staffing
2. adequate and stable funding
3. recognized identity
4. valid theoretical foundation
5. service philosophy
6. empirical evaluation
7. evidence-based research foundation

6. Evaluation: “an arena of activity directed at collecting, analyzing and interpreting information on the need for, implementation of, and effectiveness and efficiency of efforts to better the lot of humankind”

“the systematic application of social research procedures for assessing the conceptualization, design, implementation and utility of social intervention programs”

“systematic investigation to determine the success of a specific program”

“applied research used as part of the managerial process”

Figure 1-1: Performance Management Cycle

• note that program evaluation research is systematic and empirical – it is a logical, scientific approach to the collection, analysis and evaluation of empirical data, making use of a range of methodologies, from experiments, through questionnaires, to qualitative content analysis

• program evaluation is most often ‘comparative’ research (a minimal sketch follows below)

• program evaluation differs from ‘pure’ research in that the ‘research task’ is most often given to the evaluation researcher
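A minimal sketch of this comparative logic, assuming hypothetical outcome scores for a program group and a comparison group (the data, group labels, and the choice of a two-sample t-test are illustrative assumptions, not course material):

# Minimal sketch of the comparative logic: compare an outcome measure
# between hypothetical program participants and a comparison group.
from scipy import stats

# invented post-program outcome scores
program_group = [72, 68, 75, 80, 66, 74, 71, 77]
comparison_group = [65, 60, 70, 63, 68, 62, 66, 64]

# two-sample t-test: is the difference in means larger than chance alone would suggest?
t_stat, p_value = stats.ttest_ind(program_group, comparison_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")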

Figures 1-2 and 1-3: Intended and Observed Outcomes

Linking Programs and Intended Objectives

The Two Program Effectiveness Questions Involved in Most Evaluations

7. Reasons for evaluating programs:

• mandated evaluation

• competition for scarce funds

• evaluation of new interventions

• accountability

• performance measurement

8. Evaluation can be directed at answering a wide variety of questions:

a. what is the nature and scope of the problem - where is it located, who does it affect, why is it a problem (e.g. illiteracy)

b. what are feasible interventions (e.g. mandatory drug screening)

c. what are appropriate target populations for interventions (e.g. male batterer treatment programs)

d. is the intervention reaching its target population (e.g. needle exchange programs)

e. is it effective (e.g. defensive tactics training)

f. how much does it cost (e.g. on-the-job training)

g. what are costs relative to effectiveness and benefits (e.g. mammogram screening program)
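Question (g) is often answered with an incremental cost-effectiveness ratio. A rough sketch, using entirely invented cost and effect figures (the variable names and numbers are assumptions for illustration only):

# incremental cost-effectiveness ratio (ICER): extra cost of the program
# divided by the extra outcome it produces relative to an alternative
program_cost, program_effect = 250_000, 120         # e.g. cases detected (invented)
alternative_cost, alternative_effect = 150_000, 80  # comparison option (invented)

icer = (program_cost - alternative_cost) / (program_effect - alternative_effect)
print(f"Incremental cost per additional unit of effect: ${icer:,.0f}")
# -> $2,500 per additional case detected, under these invented figures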

Note: the need to overcome the ‘subjective’ perspective – the problem with clinical ‘case-based’ versus empirical, evidence-based evaluations

Philosophical Assumptions of Program Evaluation

• realism
• determinism
• positivism
• rationalism
• empiricism
• operationalism
• parsimony
• pragmatism
• scientific skepticism
• rejection of nihilism
• rejection of anecdotism

9. Evaluation is multidisciplinary, with multidisciplinary applications:

- marketing
- engineering
- social programs
- education & training
- government services
- technology
- medicine
- accounting and auditing
- military
- policy making

* in short - almost any behaviour or practice can be the subject of evaluation

10. Evaluation as applied science

• empirical
• ethical
• internal and external validity of research design
• reliability and validity of measures (see the sketch below)
• relevant
• responsible
• accountable

e.g. the evaluation of the Wapekeka First Nation Suicide Prevention Program (SPP)
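One of the criteria above, the reliability of measures, is commonly checked with an internal-consistency statistic such as Cronbach’s alpha. A minimal sketch with invented item scores (not drawn from the SPP evaluation or the course text):

# Cronbach's alpha for a hypothetical 5-item scale:
# alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
import numpy as np

# rows = respondents, columns = scale items (all scores invented)
items = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 4],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")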

11. History of Evaluation Research

• pre-WWI - the strong association between rudimentary evaluation research and public health, education

• by the 1930s, a growing field, spurred on by positivist social science - the Western Electric experiments, 1924 - 1932; Stouffer’s ‘The American Soldier’

• following WWII, rapid growth of evaluation research, large-scale projects - e.g. urban renewal, job-retraining

• critical period in the 1960s following the Soviet launch of Sputnik - the use of social science methods to assess, intervene in, and improve the ‘American way of life’

e.g. American school system; project ‘Head Start’; the war on poverty; NASA; the Vietnam war; public health; automobile safety, etc.

• the impact of computers, computer technology on the conduct of large-scale social science and evaluation research (e.g. SPSS; CATI)

• the growth in government programs and services, and the need to assess, evaluate these for accountability purposes

• the rise of policy and public administration specialists, and the ‘new army’ of professionals trained in research and evaluation methods

• the establishment of the General Accounting Office (GAO), and in 1980 the Program Evaluation and Methodology Division

• in the 1990s - the move to better ‘accountability’ in both the private and public sectors - ‘what are we spending, are we spending it well, could we be spending it better’?

• ‘are we doing the right thing, the right way’?

• the rise of a whole host of ‘new’ social problems - AIDS, homelessness, terrorism, domestic and sexual assault, illiteracy, DNA issues

• the impact of litigation

• “evaluation is more than an application of methods: it is also a managerial and political activity”

Figure 1-4: An Open Systems Model of Programs and Key Evaluation Issues

12. Evaluation in practice:

• occurs in what is called a ‘policy space’:
- resources, priorities and relative influence of sponsors of research can change
- interest and influence of stakeholders can change
- priorities and responsibilities of organizations and agencies responsible for programs can change
- unanticipated problems with delivering the intervention
- partial findings from the evaluation can indicate that the program is not working, causing the program to change mid-stream
- unanticipated problems in implementing the evaluation design - effects of history, insufficient sample size, drop-outs, lack of money to continue, etc.

13. Evaluation paradigms:

• the ‘scientific view’ - Donald Campbell, 1969

• the ‘pragmatic view’ - Lee Cronbach, 1982

• the tension between the use of ‘quantitative’ versus ‘qualitative’ methods

• maintaining scientific rigour – validity, reliability, and the three criteria of causality (temporal order, covariation, and non-spuriousness)

• multimethod, multimeasure, multiphase approaches; e.g., the NBPS patrol and supervisor workload studies; the APT research

14. Overview of Evaluations

“the goal is to design and implement an evaluation that is as reproducible as possible; that given the design, could be perfectly replicated by another evaluator, or by the same evaluator again”

• types (formative [process] versus summative [outcome]):

1. program conceptualization and design

2. monitoring and accountability of program implementation

3. assessment of program utility (impact, efficiency)

• program stages:

1. evaluation of innovative programs
2. evaluations for fine tuning
3. evaluations of established programs

• a program is an organized collection of activities designed to reach certain objectives

• characteristics of good programs:
- staffing
- budgets & stable funding
- service philosophy
- empirically validated interventions/programs (evidence-based practice)

15. How evaluations are used

• go/no go decisions (e.g. Police Foundations)
• developing a rationale for action, policy making (e.g. the boot camps literature review and preliminary data analysis to make decisions about what to do)
• legitimation and accountability (e.g. SPP)
• policy and administrative studies (e.g. EM versus intensive supervision versus house arrest; the conditional sentence evaluation)

16. So - why do we do evaluations? What is the difference between evaluation research and other research?

Diagnostic Procedures

17. The role of evaluators in diagnosis

• note the consequences of poor diagnosis - the EM program; the implementation of community policing programs; boot camps; the ‘tractors for India’ campaign; the ‘get tough on crime’ initiatives; teenage pregnancy, etc.

• social problems as ‘social constructions’ - the importance of understanding how social problems come to be defined, and by whom (the moral entrepreneur)

18. Specifying the problem: where is it, and how big is it?

• use of existing data sources
• use of social indicators
• conducting social research (key person surveys, agency records, surveys and censuses, key informants)
• qualitative needs assessment
• forecasting needs
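The ‘forecasting needs’ step can be as simple as extrapolating a trend in a social indicator. A rough sketch, assuming hypothetical annual caseload counts (real forecasts would rest on validated data sources and richer models):

# fit a linear trend to invented annual caseload counts and project it forward
import numpy as np

years = np.array([2018, 2019, 2020, 2021, 2022])
caseload = np.array([410, 445, 470, 505, 530])   # invented counts

slope, intercept = np.polyfit(years, caseload, deg=1)
forecast_2025 = slope * 2025 + intercept
print(f"Projected 2025 caseload: {forecast_2025:.0f}")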

Table 1-1: Summary of Key Questions and Steps in Conducting Evaluation Assessments and Evaluation Studies


19. Ethical Issues in Evaluation

• IRB’s (Institutional Review Boards) and REB’s (Research Ethics Boards) – the influence of the Tuskegee study, and other controversial types of research

• the protection of both human and non-human subjects:
- volunteers (and rights)
- sufficient information about the project
- no harm from participation
- protection of sensitive information (anonymity and confidentiality)
- rights to information/results
- use of research results
- profit from results

19. Ethical Issues in Evaluation (cont’d)

• research with special populations - children, mentally ill, offenders, etc.

• Professional Codes of Ethics

• bias in researching/reporting – Max Weber and ‘value free’ research

• researcher independence and objectivity

• maintaining researcher integrity