Qualitative Research
Experimental Designs
Evidences of Validity
Evidences of Reliability
Key Terms
100

What is the goal of qualitative research?

To understand and explore social phenomena through the lens of the participants

100

Which type of experimental design can imply cause and effect?

True Experimental

100

Define Measurement Validity

the extent to which the instrument measures what it intends to measure

100

What range of values is considered "good" evidence of reliability?

.80-.89

100

Define Independent AND Dependent Variable

Independent - what you are manipulating

Dependent - what you are measuring

200

Name and define 3 types of interview structures.

- unstructured (open) interviews - no set questions or rules.
- semi-structured interviews - an outline/guide of questions, with the ability to stray.
- structured interviews - no straying from a set list of questions.

200

What are the three letters used in Campbell & Stanley notation, and what does each represent?

R - randomization
T - treatment/intervention
O - observation/measurement

200

Describe the difference between Exploratory Factor Analysis and Confirmatory Factor Analysis.

Exploratory - searches for themes

Confirmatory - tests whether hypothesized themes/factors actually exist

200

This reliability test examines a measure for consistency over multiple days.

Test-retest Reliability

200

In this procedure, the researcher, the participant, or both are unaware of group assignment.

Blinding

300

Name and describe 2 of the following 3 types of coding.

Open - identifying key words/phrases

Axial - groups open codes into categories

Selective - groups axial codes into themes

300

Which pre-experimental design uses two or more intact groups and only one gets an experimental intervention?

Static Group Comparison Design

300

This evidence of validity compares similar (but not the same) tests.

Convergent Evidence of Validity

300

This type of reliability attempts to determine whether all items on a questionnaire are measuring the intended construct.

Alpha Reliability (Cronbach's alpha)
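
Internal-consistency ("alpha") reliability is usually quantified with Cronbach's alpha, computed from the item variances and the variance of each respondent's total score. A minimal sketch (the item matrix below is made-up example data, not from this deck):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha. items: list of lists, one inner list of
    scores per questionnaire item (respondents in the same order)."""
    k = len(items)
    item_var_sum = sum(statistics.pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical scores: 3 items answered by 5 respondents
items = [
    [4, 5, 3, 5, 2],
    [4, 4, 3, 5, 3],
    [5, 5, 4, 5, 2],
]
print(round(cronbach_alpha(items), 2))  # → 0.93
```

By the rule of thumb above, a value in the .80-.89 range would count as "good" evidence of reliability.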

300

A "fake" treatment/intervention

Placebo

400

What is a reflexivity statement and why is it important?

A statement to address researchers' biases before analysis in order to separate biases from analysis.

400

Identify and describe the two main flaws with *almost* all experimental designs.

- no randomization
- no control/pre-test/baseline

400

Describe Discriminant Evidence as it relates to validity.

The instrument distinguishes two separate groups known to be different (divergent) or shows little relationship to measures of unrelated constructs (correlational)

400

What statistic is used to quantify most evidences of reliability?

(Pearson) Correlation Coefficient
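
Most of these reliability coefficients (test-retest, for example) come down to a Pearson correlation between two sets of paired scores. A minimal sketch, using made-up day-1/day-2 scores (the data are hypothetical, not from this deck):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between paired score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

day1 = [10, 12, 11, 15, 9]   # hypothetical first administration
day2 = [11, 12, 10, 16, 9]   # hypothetical second administration
print(round(pearson_r(day1, day2), 2))  # → 0.96
```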

400

This threat to validity occurs when other factors external to the subjects arise by virtue of time (future or past).

History

500

Define Trustworthiness and provide two ways that a researcher can contribute to trustworthiness.

Trustworthiness - the ability to say the results of a study would be the same if another researcher had conducted the study

Any two of the following:
- Triangulation
- Member Checking
- Peer Debriefing
- Audit Trail
- Reflexivity Statement

500

Provide the Campbell & Stanley notation for a Post-test Only Control Group Design with Randomization.

R T O

R    O

500

Identify and define the 3 concepts under Content Evidence of Validity.

Domain Clarity - clearly verifying the definition of constructs

Content Relevance - determining all items are related to the topic of interest

Content Representativeness - determining all relevant items are included and all non-relevant items are excluded

500

Describe the difference between Inter-Rater Reliability and Test Administrator Reliability.

Inter-Rater - analyzing consistency between numerous researchers over multiple trials

Test Administrator - analyzing consistency of the same researcher over multiple trials

500

Explain the difference between random assignment and random selection.

Random Assignment - randomly placing participants in groups

Random Selection - randomly recruiting participants