Internal Validity & Confounds
Experimental Designs
Threats to Validity
Factorial Designs
External Validity & Replication
100

This occurs when participants choose their own condition, threatening causal inference.

selection effects (Self-selection introduces preexisting group differences)

100

Another name for an independent-groups design.

between-subjects design (Different participants in each condition)

100

A threat unique to within-groups designs involving improvement from repetition.

practice effects (Repeated exposure improves performance)

100

Another term for an independent variable in factorial designs.

factor

100

The first step in evaluating a study’s quality.

replicability (you must establish consistency before generalizing)

200

This design specifically helps reduce preexisting differences between groups without full random assignment.

matched-groups design (Matching equates groups on key variables)

200

Researchers choose this design partly because it requires fewer participants.

within-groups design (Same participants experience all conditions)

200

Spontaneous improvement over time without treatment.

maturation (Natural change, not the IV, explains results)

200

The number of main effects equals this.

number of independent variables (One main effect per IV)

200

The most important question when generalizing to a population.

how participants were sampled (sampling determines generalizability)

300

This type of confound hides a real relationship rather than creating a false one.

reverse confound (it suppresses or masks an effect)

300

This technique prevents order effects by varying condition order.

counterbalancing (It distributes order effects across conditions)
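The logic of full counterbalancing can be sketched in a few lines of Python: every possible condition order is used, so each condition appears equally often in each serial position (the condition names here are assumed for illustration).

```python
from itertools import permutations

conditions = ["A", "B", "C"]  # assumed IV levels for illustration

# Full counterbalancing: enumerate every possible order of conditions.
orders = list(permutations(conditions))

# With 3 conditions there are 3! = 6 orders; assigning participants
# evenly across them distributes order effects across conditions.
assert len(orders) == 6
```

With more conditions, the number of orders grows factorially, which is why partial counterbalancing schemes (such as Latin squares) are often used instead.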

300

To detect attrition bias, researchers compare these two groups.

people who drop out vs. people who complete the entire study (differences reveal systematic attrition)

300

Parallel lines on a graph indicate this.

no interaction (effects stay consistent across levels)

300

Another term for ecological validity.

mundane realism (real-world similarity)

400

Even with a control group, this bias can still threaten validity because it comes from the researcher, not the design.

Observer bias (Researcher expectations influence measurement regardless of design)

400

This design measures preference while presenting all IV levels at once.

concurrent-measures (participants compare options simultaneously)

400

This solution reduces instrumentation problems in observational coding.

clear coding manuals (standardization ensures consistency)

400

This phrase describes how interactions are calculated conceptually.

difference in differences (interaction = effect changes across conditions)
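The "difference in differences" idea can be made concrete with a small numeric sketch for a hypothetical 2×2 design (the cell means below are assumed for illustration, not from the original):

```python
# Assumed cell means for a 2x2 (treatment x age) design.
means = {("drug", "young"): 10.0, ("drug", "old"): 14.0,
         ("placebo", "young"): 9.0, ("placebo", "old"): 9.5}

# Simple effect of the drug at each level of age:
effect_young = means[("drug", "young")] - means[("placebo", "young")]  # 1.0
effect_old = means[("drug", "old")] - means[("placebo", "old")]        # 4.5

# The interaction is the difference in those differences:
interaction = effect_old - effect_young  # 3.5, nonzero -> non-parallel lines
```

If the two simple effects were equal, the interaction would be zero and the lines on the graph would be parallel.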

400

This type of replication increases ecological validity by changing settings.

conceptual replication (tests theory across contexts)

500

In an experiment, the treatment group always meets in the morning while the control group always meets at night; the researcher concludes the treatment caused the observed differences. This threat is present.

design confound (time of day varies systematically with the independent variable, so you cannot tell whether the effect comes from the treatment or from the time of day)

500

The key difference between pretest/posttest and within-groups designs lies in this.

number of IV levels participants experience (Within-groups = multiple IV levels; pre/post = same condition over time)

500

This threat occurs when an external event affects most participants during the study.

history threat (must systematically impact the group)

500

A 2×4 independent-groups design with 25 per cell requires this many participants.

200 participants (2×4 = 8 cells → 8×25 = 200)
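The cell-count arithmetic generalizes to any fully crossed independent-groups design; a minimal sketch (the helper function name is hypothetical):

```python
def required_n(levels_per_iv, n_per_cell):
    """Participants needed for a fully crossed independent-groups design:
    multiply the number of levels of each IV to get the cell count,
    then multiply by participants per cell."""
    cells = 1
    for levels in levels_per_iv:
        cells *= levels
    return cells * n_per_cell

# 2x4 design, 25 participants per cell: 8 cells x 25 = 200.
assert required_n([2, 4], 25) == 200
```

Note this applies only to independent-groups (between-subjects) designs; a fully within-groups 2×4 design would need only 25 participants, since each experiences all cells.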

500

This replication keeps core variables but adds new ones and changes sample/population.

replication-plus-extension (expands original findings)
