Validity & Reliability
Experimental & Survey Design
Data & Statistics
Measurement & Concepts
Research Methods in Practice
100

What is the difference between reliability and validity in measurement?

Reliability is the consistency of a measure; validity is whether it measures what it is intended to measure.

100

What is the main distinction between random sampling and random assignment?

Sampling selects who is in the study; assignment decides which group participants go into.

100

What is the purpose of descriptive statistics in research?

To summarize or describe data (e.g., mean, median, mode).
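The three common measures of central tendency can be computed directly with Python's standard library; the scores below are made-up example data.

```python
import statistics

scores = [3, 5, 5, 6, 7, 8, 8, 8, 10]  # hypothetical exam scores

mean = statistics.mean(scores)      # arithmetic average
median = statistics.median(scores)  # middle value when sorted
mode = statistics.mode(scores)      # most frequent value

print(mean, median, mode)
```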

100

Which scale of measurement involves ranking items but not assuming equal intervals between ranks?

Ordinal scale

100

Which research method involves observing people in their natural environments without interference?

Naturalistic observation (field observation)

200

Which type of validity assesses whether a study’s results can be generalized to other populations and settings?

External validity

200

What is a pretest-posttest control group design? Why is it better than a one-group pretest-posttest design?

The design includes two groups, both pretested and posttested, but only one receives the treatment. Comparing against the control group rules out threats such as testing, maturation, and history that a one-group design cannot.

200

What is the difference between confidence interval and confidence level?

Interval = the range of estimated values; level = how confident we are that the interval captures the true value (e.g., 95%).
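A minimal sketch of a 95% confidence interval for a mean, using the normal approximation (z = 1.96) and a hypothetical sample:

```python
import math
import statistics

data = [12, 15, 14, 10, 13, 14, 16, 12, 11, 13]  # hypothetical sample
n = len(data)
mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean

z = 1.96  # critical value for a 95% confidence level
lower, upper = mean - z * se, mean + z * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```

Raising the confidence level (e.g., z = 2.58 for 99%) widens the interval: more confidence requires a bigger range.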

200

What is the operational definition of a variable, and why is it important?

It explains how the variable is measured. Needed for clarity and replication.

200

What is a key limitation of using focus groups in applied communication research?

Groupthink or dominant voices can bias results; findings also lack generalizability.

300

Name one threat to internal validity and give a brief example.

Example: Maturation – participants naturally change over time, such as becoming more tired or experienced.

300

In survey design, what is a filter question and how is it used effectively?

It screens respondents (e.g., “Do you own a smartphone?” → only owners see follow-up questions).

300

Define skewness and give an example of a positively skewed distribution.

Skewness = asymmetry of a distribution; positively skewed = most values cluster at the low end with a long tail to the right, as with income data.
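A quick sketch with made-up income data showing the signature of positive skew: the few large values pull the mean above the median.

```python
import statistics

# Hypothetical incomes (in $1,000s): most values are low,
# with a few large values forming a long right tail.
incomes = [25, 28, 30, 32, 35, 38, 40, 45, 120, 400]

mean = statistics.mean(incomes)
median = statistics.median(incomes)

# In a positively skewed distribution the mean is pulled
# toward the tail, so mean > median.
print(mean, median)
```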

300

Give an example of a double-barreled survey question and explain why it's problematic.

“Do you like your instructor and the class?” → two ideas in one; unclear which one is being rated.

300

Describe the basic logic of grounded theory research.

Theory emerges inductively from patterns found in qualitative data.

400

How do you improve construct validity when designing a measurement instrument?

Use established scales, pilot testing, and clear operational definitions.

400

Define and give an example of a factorial experiment.

A study that tests two or more IVs simultaneously. Ex: effect of ad color (red/blue) and gender (M/F) on recall.
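The cell structure of that 2 x 2 example can be sketched by crossing the two IVs; the factor labels here are hypothetical.

```python
from itertools import product

ad_color = ["red", "blue"]       # first independent variable
gender = ["male", "female"]      # second independent variable

# Crossing the two IVs yields every combination (cell) of the design.
conditions = list(product(ad_color, gender))
print(conditions)  # 4 cells: each color paired with each gender
```

A 2 x 2 design always produces four cells; adding a level to either factor multiplies the number of cells accordingly.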

400

A study finds a significant p-value (p < .05). What does this mean in terms of hypothesis testing?

If the null hypothesis were true, there would be less than a 5% chance of obtaining results at least this extreme, so we reject the null hypothesis.
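A sketch of the decision rule for a simple two-sided z-test, using only the standard library (the z value of 2.5 is an assumed example statistic):

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value for a z statistic under a standard normal null."""
    return math.erfc(abs(z) / math.sqrt(2))

# A z of 2.5 would be fairly extreme if the null were true,
# so its p-value falls below the conventional .05 cutoff.
p = two_sided_p_from_z(2.5)
print(p < 0.05)  # reject the null at alpha = .05
```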

400

What’s the difference between manifest and latent content in content analysis?

Manifest = surface-level (e.g., word count); Latent = underlying meaning (e.g., tone, values).

400

How does triangulation enhance the credibility of a research study?

It uses multiple methods or sources to cross-check findings, increasing trustworthiness.

500

A researcher finds high reliability across items in a scale but poor validity. What might be happening, and why is it problematic?

The scale consistently measures something, but not the intended construct. It's like a clock that runs exactly five minutes fast: perfectly consistent, yet always wrong.

500

What is a major threat to internal validity in a field experiment, and how might it be addressed?

Lack of control over the environment, so extraneous events (history) can influence outcomes. Random assignment and matched groups help address this.

500

How do Type I and Type II errors differ, and what are the implications for interpreting research findings?

Type I: false positive (rejecting a true null); Type II: false negative (failing to reject a false null). Lowering the risk of one raises the risk of the other, which shapes how findings are interpreted.
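A small simulation, under assumed conditions (true null, known variance, z-test), showing why alpha = .05 is called the Type I error rate: when the null is actually true, about 5% of studies still reject it.

```python
import math
import random

random.seed(42)

# Simulate many studies where the null is TRUE (population mean = 0,
# sigma = 1). Every rejection at alpha = .05 is a Type I error, so the
# long-run rejection rate should sit near 5%.
def one_study(n=30):
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(sample) / n) * math.sqrt(n)   # z statistic for the sample mean
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return p < 0.05                        # True = false positive

trials = 2000
type_i_rate = sum(one_study() for _ in range(trials)) / trials
print(round(type_i_rate, 3))  # close to .05 by construction
```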

500

Explain the process of concept explication with an example, including conceptual and operational definitions.

Conceptual: define abstract idea (e.g., stress = mental strain). Operational: how it’s measured (e.g., survey score or cortisol level).

500

In a mixed methods study, what distinguishes a sequential design from a concurrent design? Give an example.

Sequential = phases run one after the other (e.g., survey → follow-up interviews); concurrent = both collected at the same time and merged in analysis.