Knowledge Check
Evaluation Essentials
Quick Recall
Think It Through
Challenge Round
100

Why do we select key informants?

They are selected because they have expertise or specialized knowledge.


100

Provide an example of history bias

An event, not related to the intervention, occurs and influences the outcomes.

Exposure to a competing intervention

Exposure to similar information provided through media 

A political/social/community event

A natural disaster

Selection differences between participants in the intervention and comparison groups lead to differences in exposure to, or impact of, historical events

100

Define "Counterfactual"


The control group.

What would have happened to intervention group participants in the absence of the intervention.

100

Give an example of convenience sampling

Approaching students in Tidewater to complete a survey, interviewing public health professionals while on a break from a workshop, asking clients seeking services at your law firm to answer questions about their experience within the criminal justice system, etc.

100

This design is best when looking for changes over time

Interrupted time series

200

List one advantage of using an existing standardized survey

Reliability 

Ability to compare data.

200

Provide one example of measurement bias

Instrumentation: reliability of the instrument changes; bias associated with data collection methods (type of methods used, social desirability, recall bias)

Testing: when the same instrument is used multiple times, respondents' scores improve because they are able to recall information from the pretest

Researcher effects: quality of data collection changes due to interviewer fatigue/experience; interviewer age/style influence responses; different interviewers/observers for the control and experimental groups

200

Consists of two groups and three waves of measurement.


A switching replication design

200

Define the term "study population"

The group of people that meets the eligibility criteria and to whom the results apply 

200

Define "Net effect"

The net effect is the difference between an observed outcome and the outcome observed in the control group, or the outcome that would have occurred for those same study participants had they not been exposed to the program.

300

A small group of participants convened for a research purpose

Focus Groups

300

Strategies to prevent selection bias in a two-group evaluation design?

Random assignment; do not allow self-assignment

Large sample

Ensure a complete sampling frame

Match study and control participants as closely as possible
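The random-assignment strategy above can be sketched in a few lines. This is an illustrative sketch only (the participant pool and the even split are assumptions for the example): chance, not self-selection, decides who lands in each group.

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the pool and split it evenly into intervention
    and control groups, so no one can self-assign."""
    rng = random.Random(seed)      # seed only for reproducible demos
    pool = list(participants)
    rng.shuffle(pool)              # chance decides group membership
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (intervention, control)

# Example: 20 hypothetical participants, split 10 and 10
intervention, control = randomly_assign(range(20), seed=42)
print(len(intervention), len(control))  # 10 10
```

With a large enough pool, this procedure tends to balance both measured and unmeasured characteristics across the two groups at baseline.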

300

Example of a Matching Variable other than "demographics"

Predisposition

Motivation

Pre-existing learning skills

300

They are selected because they offer unique insights or provide contrasting perspectives.

Deviant case

300

A selection method where members of a population are selected using fixed intervals

Systematic Sampling Method
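The fixed-interval idea behind systematic sampling can be shown in a short sketch. This is a minimal illustration (the population list and interval calculation are assumptions for the example, not a prescribed procedure): compute an interval k, pick a random start within the first interval, then take every k-th member.

```python
import random

def systematic_sample(population, sample_size):
    """Select every k-th member after a random start,
    where k = population size // sample size."""
    k = len(population) // sample_size  # fixed sampling interval
    start = random.randrange(k)         # random start within the first interval
    return population[start::k][:sample_size]

# Example: draw 5 members from a hypothetical population of 50 (interval = 10)
members = list(range(1, 51))
sample = systematic_sample(members, 5)
print(sample)  # e.g. [3, 13, 23, 33, 43] -- every 10th member from a random start
```

Note that a complete, well-ordered sampling frame matters here: if the list order has a hidden pattern that matches the interval, the sample can be biased.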

400

What threat arises when experimental and control groups are not equivalent at baseline?

Selection bias

400

Provide an example of maturation bias

A change occurs in participants as a result of maturation or the passage of time (teens aging and becoming better educated)

400

The counterfactual in the Reflexive Control Design?

When one group is compared against itself, the pretest serves as counterfactual

400

A type of data collection method used to obtain feedback on program materials or interventions under development.

Qualitative Methods (focus groups or individual in-depth interviews)

400

A list from which a sample is drawn?

Sampling Frame

500

The purpose of grounded theory?

Create theories from qualitative research data.

500

Benefits of using an iterative process in qualitative research?

The researchers engage in reflection and discussion to further refine and develop the emerging theory.

Allows researchers to refine interview questions or re-evaluate coding strategies

500

Give an example of political games played by evaluators

Evaluators may manipulate data to support a particular political agenda or to make a program appear more effective than it actually is.

Evaluators may delay or withhold evaluation results.

Accepting bribes or engaging in conflicts of interest.

500

Purpose of Program Evaluation Principles

Intended to provide evaluators with a framework for ethical and practical evaluation practices

500

In four words, define "Beneficence"

Limit harm; maximize benefits